  The eCos Kernel

    Kernel Overview

      Kernel
      Overview of the eCos Kernel

      Description
The kernel is one of the key packages in all of eCos. It provides the
core functionality needed for developing multi-threaded applications:

  - The ability to create new threads in the system, either during
    startup or when the system is already running.

  - Control over the various threads in the system, for example
    manipulating their priorities.

  - A choice of schedulers, determining which thread should currently
    be running.

  - A range of synchronization primitives, allowing threads to
    interact and share data safely.

  - Integration with the system's support for interrupts and
    exceptions.
In some other operating systems the kernel provides additional
functionality. For example the kernel may also provide memory
allocation functionality, and device drivers may be part of the kernel
as well. This is not the case for eCos. Memory allocation is handled
by a separate package. Similarly, each device driver will typically be
a separate package. Various packages are combined and configured using
the eCos configuration technology to meet the requirements of the
application.

The eCos kernel package is optional. It is possible to write
single-threaded applications which do not use any kernel
functionality, for example RedBoot. Typically such applications are
based around a central polling loop, continually checking all devices
and taking appropriate action when I/O occurs. A small amount of
calculation is possible every iteration, at the cost of an increased
delay between an I/O event occurring and the polling loop detecting
the event. When the requirements are straightforward it may well be
easier to develop the application using a polling loop, avoiding the
complexities of multiple threads and synchronization between threads.
As requirements get more complicated a multi-threaded solution becomes
more appropriate, requiring the use of the kernel. In fact some of the
more advanced packages in eCos, for example the TCP/IP stack, use
multi-threading internally. Therefore if the application uses any of
those packages then the kernel becomes a required package, not an
optional one.
The kernel functionality can be used in one of two ways. The kernel
provides its own C API, with functions like cyg_thread_create and
cyg_mutex_lock. These can be called directly from application code or
from other packages. Alternatively there are a number of packages
which provide compatibility with existing APIs, for example POSIX
threads or µITRON. These allow application code to call standard
functions such as pthread_create, and those functions are implemented
using the basic functionality provided by the eCos kernel. Using
compatibility packages in an eCos application can make it much easier
to reuse code developed in other environments, and to share code.
Although the different compatibility packages have similar
requirements on the underlying kernel, for example the ability to
create a new thread, there are differences in the exact semantics. For
example, strict µITRON compliance requires that kernel timeslicing is
disabled. This is achieved largely through the configuration
technology. The kernel provides a number of configuration options that
control the exact semantics that are provided, and the various
compatibility packages require particular settings for those options.
This has two important consequences. First, it is not usually possible
to have two different compatibility packages in one eCos configuration
because they will have conflicting requirements on the underlying
kernel. Second, the semantics of the kernel's own API are only loosely
defined because of the many configuration options. For example
cyg_mutex_lock will always attempt to lock a mutex, but various
configuration options determine the behaviour when the mutex is
already locked and there is a possibility of priority inversion.
The optional nature of the kernel package presents some complications
for other code, especially device drivers. Wherever possible a device
driver should work whether or not the kernel is present. However there
are some parts of the system, especially those related to interrupt
handling, which should be implemented differently in multi-threaded
environments containing the eCos kernel and in single-threaded
environments without the kernel. To cope with both scenarios the
common HAL package provides a driver API, with functions such as
cyg_drv_interrupt_attach. When the kernel package is present these
driver API functions map directly on to the equivalent kernel
functions such as cyg_interrupt_attach, using macros to avoid any
overheads. When the kernel is absent the common HAL package implements
the driver API directly, but this implementation is simpler than the
one in the kernel because it can assume a single-threaded environment.
      Schedulers

When a system involves multiple threads, a scheduler is needed to
determine which thread should currently be running. The eCos kernel
can be configured with one of two schedulers, the bitmap scheduler and
the multi-level queue (MLQ) scheduler. The bitmap scheduler is
somewhat more efficient, but has a number of limitations. Most systems
will instead use the MLQ scheduler. Other schedulers may be added in
the future, either as extensions to the kernel package or in separate
packages.
Both the bitmap and the MLQ scheduler use a simple numerical priority
to determine which thread should be running. The number of priority
levels is configurable via the option CYGNUM_KERNEL_SCHED_PRIORITIES,
but a typical system will have up to 32 priority levels. Therefore
thread priorities will be in the range 0 to 31, with 0 being the
highest priority and 31 the lowest. Usually only the system's idle
thread will run at the lowest priority. Thread priorities are
absolute, so the kernel will only run a lower-priority thread if all
higher-priority threads are currently blocked.

The bitmap scheduler only allows one thread per priority level, so if
the system is configured with 32 priority levels then it is limited to
only 32 threads — still enough for many applications. A simple bitmap
can be used to keep track of which threads are currently runnable.
Bitmaps can also be used to keep track of threads waiting on a mutex
or other synchronization primitive. Identifying the highest-priority
runnable or waiting thread involves a simple operation on the bitmap,
and an array index operation can then be used to get hold of the
thread data structure itself. This makes the bitmap scheduler fast and
totally deterministic.

The MLQ scheduler allows multiple threads to run at the same priority.
This means that there is no limit on the number of threads in the
system, other than the amount of memory available. However operations
such as finding the highest priority runnable thread are a little bit
more expensive than for the bitmap scheduler.

Optionally the MLQ scheduler supports timeslicing, where the scheduler
automatically switches from one runnable thread to another when some
number of clock ticks have occurred. Timeslicing only comes into play
when there are two runnable threads at the same priority and no higher
priority runnable threads. If timeslicing is disabled then a thread
will not be preempted by another thread of the same priority, and will
continue running until either it explicitly yields the processor or
until it blocks by, for example, waiting on a synchronization
primitive. The configuration options CYGSEM_KERNEL_SCHED_TIMESLICE and
CYGNUM_KERNEL_SCHED_TIMESLICE_TICKS control timeslicing. The bitmap
scheduler does not provide timeslicing support. It only allows one
thread per priority level, so it is not possible to preempt the
current thread in favour of another one with the same priority.

Another important configuration option that affects the MLQ scheduler
is CYGIMP_KERNEL_SCHED_SORTED_QUEUES. This determines what happens
when a thread blocks, for example by waiting on a semaphore which has
no pending events. The default behaviour of the system is
last-in-first-out queuing. For example if several threads are waiting
on a semaphore and an event is posted, the thread that gets woken up
is the last one that called cyg_semaphore_wait. This allows for a
simple and fast implementation of both the queue and dequeue
operations. However if there are several queued threads with different
priorities, it may not be the highest priority one that gets woken up.
In practice this is rarely a problem: usually there will be at most
one thread waiting on a queue, or when there are several threads they
will be of the same priority. However if the application does require
strict priority queueing then the option
CYGIMP_KERNEL_SCHED_SORTED_QUEUES should be enabled. There are
disadvantages: more work is needed whenever a thread is queued, and
the scheduler needs to be locked for this operation so the system's
dispatch latency is worse. If the bitmap scheduler is used then
priority queueing is automatic and does not involve any penalties.

Some kernel functionality is currently only supported with the MLQ
scheduler, not the bitmap scheduler. This includes support for SMP
systems, and protection against priority inversion using either mutex
priority ceilings or priority inheritance.
      Synchronization Primitives

The eCos kernel provides a number of different synchronization
primitives: mutexes, condition variables, counting semaphores, mail
boxes and event flags.
Mutexes serve a very different purpose from the other primitives. A
mutex allows multiple threads to share a resource safely: a thread
locks a mutex, manipulates the shared resource, and then unlocks the
mutex again. The other primitives are used to communicate information
between threads, or alternatively from a DSR associated with an
interrupt handler to a thread.
When a thread that has locked a mutex needs to wait for some condition
to become true, it should use a condition variable. A condition
variable is essentially just a place for a thread to wait, and which
another thread, or DSR, can use to wake it up. When a thread waits on
a condition variable it releases the mutex before waiting, and when it
wakes up it reacquires it before proceeding. These operations are
atomic so that synchronization race conditions cannot be introduced.
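As a rough illustration of this pattern, the sketch below shows a
counter protected by a mutex, with a condition variable used to wait
until the counter becomes non-zero. The names buffer_lock, buffer_cond
and buffer_count are invented for this example; the kernel calls are
part of the standard kernel C API.

#include <cyg/kernel/kapi.h>

static cyg_mutex_t buffer_lock;
static cyg_cond_t  buffer_cond;
static int         buffer_count = 0;  /* shared state, protected by buffer_lock */

void
buffers_init(void)
{
    cyg_mutex_init(&buffer_lock);
    cyg_cond_init(&buffer_cond, &buffer_lock); /* condition variable tied to the mutex */
}

/* Consumer thread: wait until at least one item is available. */
void
buffer_get(void)
{
    cyg_mutex_lock(&buffer_lock);
    while (buffer_count == 0) {
        /* Atomically releases the mutex, waits, and reacquires it on wake-up. */
        cyg_cond_wait(&buffer_cond);
    }
    buffer_count--;
    cyg_mutex_unlock(&buffer_lock);
}

/* Producer thread: make an item available and wake one waiter. */
void
buffer_put(void)
{
    cyg_mutex_lock(&buffer_lock);
    buffer_count++;
    cyg_cond_signal(&buffer_cond);
    cyg_mutex_unlock(&buffer_lock);
}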
A counting semaphore is used to indicate that a particular event has
occurred. A consumer thread can wait for this event to occur, and a
producer thread or a DSR can post the event. There is a count
associated with the semaphore so if the event occurs multiple times in
quick succession this information is not lost, and the appropriate
number of semaphore wait operations will succeed.
Mail boxes are also used to indicate that a particular event has
occurred, and allow one item of data to be exchanged per event.
Typically this item of data would be a pointer to some data structure.
Because of the need to store this extra data, mail boxes have a finite
capacity. If a producer thread generates mail box events faster than
they can be consumed then, to avoid overflow, it will be blocked until
space is again available in the mail box. This means that mail boxes
usually cannot be used by a DSR to wake up a thread. Instead mail
boxes are typically only used between threads.
Event flags can be used to wait on some number of different events,
and to signal that one or several of these events have occurred. This
is achieved by associating bits in a bit mask with the different
events. Unlike a counting semaphore no attempt is made to keep track
of the number of events that have occurred, only the fact that an
event has occurred at least once. Unlike a mail box it is not possible
to send additional data with the event, but this does mean that there
is no possibility of an overflow and hence event flags can be used
between a DSR and a thread as well as between threads.
The eCos common HAL package provides its own device driver API which
contains some of the above synchronization primitives. These allow the
DSR for an interrupt handler to signal events to higher-level code. If
the configuration includes the eCos kernel package then the driver API
routines map directly on to the equivalent kernel routines, allowing
interrupt handlers to interact with threads. If the kernel package is
not included and the application consists of just a single thread
running in polled mode then the driver API is implemented entirely
within the common HAL, and with no need to worry about multiple
threads the implementation can obviously be rather simpler.
      Threads and Interrupt Handling

During normal operation the processor will be running one of the
threads in the system. This may be an application thread, a system
thread running inside, say, the TCP/IP stack, or the idle thread. From
time to time a hardware interrupt will occur, causing control to be
transferred briefly to an interrupt handler. When the interrupt has
been completed the system's scheduler will decide whether to return
control to the interrupted thread or to some other runnable thread.
Threads and interrupt handlers must be able to interact. If a thread
is waiting for some I/O operation to complete, the interrupt handler
associated with that I/O must be able to inform the thread that the
operation has completed. This can be achieved in a number of ways. One
very simple approach is for the interrupt handler to set a volatile
variable. A thread can then poll continuously until this flag is set,
possibly sleeping for a clock tick in between. Polling continuously
means that the cpu time is not available for other activities, which
may be acceptable for some but not all applications. Polling once
every clock tick imposes much less overhead, but means that the thread
may not detect that the I/O event has occurred until an entire clock
tick has elapsed. In typical systems this could be as long as 10
milliseconds. Such a delay might be acceptable for some applications,
but not all.
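A minimal sketch of the polling approach might look like this,
assuming the interrupt handler sets the flag io_done (a name invented
for this example) when the operation completes:

#include <cyg/kernel/kapi.h>

static volatile int io_done = 0;   /* set by the interrupt handler */

void
wait_for_io_by_polling(void)
{
    while (!io_done) {
        cyg_thread_delay(1);       /* sleep for one clock tick between polls */
    }
    io_done = 0;
}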
A better solution would be to use one of the synchronization
primitives. The interrupt handler could signal a condition variable,
post to a semaphore, or use one of the other primitives. The thread
would perform a wait operation on the same primitive. It would not
consume any cpu cycles until the I/O event had occurred, and when the
event does occur the thread can start running again immediately
(subject to any higher priority threads that might also be runnable).
Synchronization primitives constitute shared data, so care must be
taken to avoid problems with concurrent access. If the thread that was
interrupted was just performing some calculations then the interrupt
handler could manipulate the synchronization primitive quite safely.
However if the interrupted thread happened to be inside some kernel
call then there is a real possibility that some kernel data structure
will be corrupted.
One way of avoiding such problems would be for the kernel functions to
disable interrupts when executing any critical region. On most
architectures this would be simple to implement and very fast, but it
would mean that interrupts would be disabled often and for quite a
long time. For some applications that might not matter, but many
embedded applications require that the interrupt handler run as soon
as possible after the hardware interrupt has occurred. If the kernel
relied on disabling interrupts then it would not be able to support
such applications.
Instead the kernel uses a two-level approach to interrupt handling.
Associated with every interrupt vector is an Interrupt Service Routine
or ISR, which will run as quickly as possible so that it can service
the hardware. However an ISR can make only a small number of kernel
calls, mostly related to the interrupt subsystem, and it cannot make
any call that would cause a thread to wake up. If an ISR detects that
an I/O operation has completed and hence that a thread should be woken
up, it can cause the associated Deferred Service Routine or DSR to
run. A DSR is allowed to make more kernel calls, for example it can
signal a condition variable or post to a semaphore.
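A hedged sketch of this split is shown below: the ISR only touches the
hardware and the interrupt subsystem, and defers waking the thread to
its DSR, which posts a semaphore. The vector and priority values, and
all of the rx_* names, are placeholders invented for this example;
real values are platform- and device-specific.

#include <cyg/kernel/kapi.h>

#define RX_VECTOR    1             /* placeholder: use the real device vector  */
#define RX_PRIORITY  1             /* placeholder: use a suitable ISR priority */

static cyg_sem_t     data_ready;   /* posted by the DSR, waited on by a thread */
static cyg_interrupt rx_intr;
static cyg_handle_t  rx_intr_handle;

/* ISR: runs as soon as possible after the hardware interrupt. */
static cyg_uint32
rx_isr(cyg_vector_t vector, cyg_addrword_t data)
{
    cyg_interrupt_mask(vector);            /* no further interrupts until the DSR has run */
    cyg_interrupt_acknowledge(vector);
    return CYG_ISR_HANDLED | CYG_ISR_CALL_DSR;
}

/* DSR: may make more kernel calls, for example posting a semaphore. */
static void
rx_dsr(cyg_vector_t vector, cyg_ucount32 count, cyg_addrword_t data)
{
    cyg_semaphore_post(&data_ready);       /* wake up the waiting thread */
    cyg_interrupt_unmask(vector);
}

void
rx_init(void)
{
    cyg_semaphore_init(&data_ready, 0);
    cyg_interrupt_create(RX_VECTOR, RX_PRIORITY, 0,
                         &rx_isr, &rx_dsr,
                         &rx_intr_handle, &rx_intr);
    cyg_interrupt_attach(rx_intr_handle);
    cyg_interrupt_unmask(RX_VECTOR);
}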
Disabling interrupts prevents ISRs from running, but very few parts of
the system disable interrupts and then only for short periods of time.
The main reason for a thread to disable interrupts is to manipulate
some state that is shared with an ISR. For example if a thread needs
to add another buffer to a linked list of free buffers and the ISR may
remove a buffer from this list at any time, the thread would need to
disable interrupts for the few instructions needed to manipulate the
list. If the hardware raises an interrupt at this time, it remains
pending until interrupts are reenabled.
Analogous to interrupts being disabled or enabled, the kernel has a
scheduler lock. The various kernel functions such as cyg_mutex_lock
and cyg_semaphore_post will claim the scheduler lock, manipulate the
kernel data structures, and then release the scheduler lock. If an
interrupt results in a DSR being requested and the scheduler is
currently locked, the DSR remains pending. When the scheduler lock is
released any pending DSRs will run. These may post events to
synchronization primitives, causing other higher priority threads to
be woken up.
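Application threads can also claim the scheduler lock directly when
they need to manipulate data shared with a DSR. The sketch below,
using an invented packets_pending counter, shows the idea:

#include <cyg/kernel/kapi.h>

static int packets_pending;        /* shared between a thread and a DSR */

void
consume_pending_packet(void)
{
    cyg_scheduler_lock();          /* no DSR can run while the lock is held */
    packets_pending--;
    cyg_scheduler_unlock();        /* any DSRs that became pending now run  */
}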
For an example, consider the following scenario. The system has a high
priority thread A, responsible for processing some data coming from an
external device. This device will raise an interrupt when data is
available. There are two other threads B and C which spend their time
performing calculations and occasionally writing results to a display
of some sort. This display is a shared resource so a mutex is used to
control access.
At a particular moment in time thread A is likely to be blocked,
waiting on a semaphore or another synchronization primitive until data
is available. Thread B might be running performing some calculations,
and thread C is runnable waiting for its next timeslice. Interrupts
are enabled, and the scheduler is unlocked because none of the threads
are in the middle of a kernel operation. At this point the device
raises an interrupt. The hardware transfers control to a low-level
interrupt handler provided by eCos which works out exactly which
interrupt occurred, and then the corresponding ISR is run. This ISR
manipulates the hardware as appropriate, determines that there is now
data available, and wants to wake up thread A by posting to the
semaphore. However ISRs are not allowed to call cyg_semaphore_post
directly, so instead the ISR requests that its associated DSR be run
and returns. There are no more interrupts to be processed, so the
kernel next checks for DSRs. One DSR is pending and the scheduler is
currently unlocked, so the DSR can run immediately and post the
semaphore. This will have the effect of making thread A runnable
again, so the scheduler's data structures are adjusted accordingly.
When the DSR returns thread B is no longer the highest priority
runnable thread so it will be suspended, and instead thread A gains
control over the cpu.
In the above example no kernel data structures were being manipulated
at the exact moment that the interrupt happened. However that cannot
be assumed. Suppose that thread B had finished its current set of
calculations and wanted to write the results to the display. It would
claim the appropriate mutex and manipulate the display. Now suppose
that thread B was timesliced in favour of thread C, and that thread C
also finished its calculations and wanted to write the results to the
display. It would call cyg_mutex_lock. This kernel call locks the
scheduler, examines the current state of the mutex, discovers that the
mutex is already owned by another thread, suspends the current thread,
and switches control to another runnable thread. Another interrupt
happens in the middle of this cyg_mutex_lock call, causing the ISR to
run immediately. The ISR decides that thread A should be woken up so
it requests that its DSR be run and returns back to the kernel. At
this point there is a pending DSR, but the scheduler is still locked
by the call to cyg_mutex_lock so the DSR cannot run immediately.
Instead the call to cyg_mutex_lock is allowed to continue, which at
some point involves unlocking the scheduler. The pending DSR can now
run, safely post the semaphore, and thus wake up thread A.
If the ISR had called cyg_semaphore_post directly rather than leaving
it to a DSR, it is likely that there would have been some sort of
corruption of a kernel data structure. For example the kernel might
have completely lost track of one of the threads, and that thread
would never have run again. The two-level approach to interrupt
handling, ISRs and DSRs, prevents such problems with no need to
disable interrupts.
      Calling Contexts

eCos defines a number of contexts. Only certain calls are allowed from
inside each context, for example most operations on threads or
synchronization primitives are not allowed from ISR context. The
different contexts are initialization, thread, ISR and DSR.
When eCos starts up it goes through a number of phases, including
setting up the hardware and invoking C++ static constructors. During
this time interrupts are disabled and the scheduler is locked. When a
configuration includes the kernel package the final operation is a
call to cyg_scheduler_start. At this point interrupts are enabled, the
scheduler is unlocked, and control is transferred to the highest
priority runnable thread. If the configuration also includes the C
library package then usually the C library startup package will have
created a thread which will call the application's main entry point.
Some application code can also run before the scheduler is started,
and this code runs in initialization context. If the application is
written partly or completely in C++ then the constructors for any
static objects will be run. Alternatively application code can define
a function cyg_user_start which gets called after any C++ static
constructors. This allows applications to be written entirely in C.

void
cyg_user_start(void)
{
    /* Perform application-specific initialization here */
}
It is not necessary for applications to provide a cyg_user_start
function since the system will provide a default implementation which
does nothing.

Typical operations that are performed from inside static constructors
or cyg_user_start include creating threads, synchronization
primitives, setting up alarms, and registering application-specific
interrupt handlers. In fact for many applications all such creation
operations happen at this time, using statically allocated data,
avoiding any need for dynamic memory allocation or other overheads.
Code running in initialization context runs with interrupts disabled
and the scheduler locked. It is not permitted to reenable interrupts
or unlock the scheduler because the system is not guaranteed to be in
a totally consistent state at this point. A consequence is that
initialization code cannot use synchronization primitives such as
cyg_semaphore_wait to wait for an external event. It is permitted to
lock and unlock a mutex: there are no other threads running so it is
guaranteed that the mutex is not yet locked, and therefore the lock
operation will never block; this is useful when making library calls
that may use a mutex internally.
At the end of the startup sequence the system will call
cyg_scheduler_start and the various threads will start running. In
thread context nearly all of the kernel functions are available. There
may be some restrictions on interrupt-related operations, depending on
the target hardware. For example the hardware may require that
interrupts be acknowledged in the ISR or DSR before control returns to
thread context, in which case cyg_interrupt_acknowledge should not be
called by a thread.
At any time the processor may receive an external interrupt, causing
control to be transferred from the current thread. Typically a VSR
provided by eCos will run and determine exactly which interrupt
occurred. Then the VSR will switch to the appropriate ISR, which can
be provided by a HAL package, a device driver, or by the application.
During this time the system is running in ISR context, and most of the
kernel function calls are disallowed. This includes the various
synchronization primitives, so for example an ISR is not allowed to
post to a semaphore to indicate that an event has happened. Usually
the only operations that should be performed from inside an ISR are
ones related to the interrupt subsystem itself, for example masking an
interrupt or acknowledging that an interrupt has been processed. On
SMP systems it is also possible to use spinlocks from ISR context.
When an ISR returns it can request that the corresponding DSR be run
as soon as it is safe to do so, and that will run in DSR context. This
context is also used for running alarm functions, and threads can
switch temporarily to DSR context by locking the scheduler. Only
certain kernel functions can be called from DSR context, although more
than in ISR context. In particular it is possible to use any
synchronization primitives which cannot block. These include
cyg_semaphore_post, cyg_cond_signal, cyg_cond_broadcast,
cyg_flag_setbits, and cyg_mbox_tryput. It is not possible to use any
primitives that may block such as cyg_semaphore_wait, cyg_mutex_lock,
or cyg_mbox_put. Calling such functions from inside a DSR may cause
the system to hang.
The specific documentation for the various kernel functions gives more
details about valid contexts.
      Error Handling and Assertions

In many APIs each function is expected to perform some validation of
its parameters and possibly of the current state of the system. This
is supposed to ensure that each function is used correctly, and that
application code is not attempting to perform a semaphore operation on
a mutex or anything like that. If an error is detected then a suitable
error code is returned, for example the POSIX function
pthread_mutex_lock can return various error codes including EINVAL and
EDEADLK. There are a number of problems with this approach, especially
in the context of deeply embedded systems:
  - Performing these checks inside the mutex lock and all the other
    functions requires extra cpu cycles and adds significantly to the
    code size. Even if the application is written correctly and only
    makes system function calls with sensible arguments and under the
    right conditions, these overheads still exist.

  - Returning an error code is only useful if the calling code detects
    these error codes and takes appropriate action. In practice the
    calling code will often ignore any errors because the programmer
    “knows” that the function is being used correctly. If the
    programmer is mistaken then an error condition may be detected and
    reported, but the application continues running anyway and is
    likely to fail some time later in mysterious ways.

  - If the calling code does always check for error codes, that adds
    yet more cpu cycles and code size overhead.

  - Usually there will be no way to recover from certain errors, so if
    the application code detected an error such as EINVAL then all it
    could do is abort the application somehow.
The approach taken within the eCos kernel is different. Functions such
as cyg_mutex_lock will not return an error code. Instead they contain
various assertions, which can be enabled or disabled. During the
development process assertions are normally left enabled, and the
various kernel functions will perform parameter checks and other
system consistency checks. If a problem is detected then an assertion
failure will be reported and the application will be terminated. In a
typical debug session a suitable breakpoint will have been installed
and the developer can now examine the state of the system and work out
exactly what is going on. Towards the end of the development cycle
assertions will be disabled by manipulating configuration options
within the eCos infrastructure package, and all assertions will be
eliminated at compile-time. The assumption is that by this time the
application code has been mostly debugged: the initial version of the
code might have tried to perform a semaphore operation on a mutex, but
any problems like that will have been fixed some time ago. This
approach has a number of advantages:
  - In the final application there will be no overheads for checking
    parameters and other conditions. All that code will have been
    eliminated at compile-time.

  - Because the final application will not suffer any overheads, it is
    reasonable for the system to do more work during the development
    process. In particular the various assertions can test for more
    error conditions and more complicated errors. When an error is
    detected it is possible to give a text message describing the
    error rather than just return an error code.

  - There is no need for application programmers to handle error codes
    returned by various kernel function calls. This simplifies the
    application code.

  - If an error is detected then an assertion failure will be reported
    immediately and the application will be halted. There is no
    possibility of an error condition being ignored because
    application code did not check for an error code.
Although none of the kernel functions return an error code, many of
them do return a status condition. For example the function
cyg_semaphore_timed_wait waits until either an event has been posted
to a semaphore, or until a certain number of clock ticks have
occurred. Usually the calling code will need to know whether the wait
operation succeeded or whether a timeout occurred.
cyg_semaphore_timed_wait returns a boolean: a return value of zero or
false indicates a timeout, a non-zero return value indicates that the
wait succeeded.
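For example, a thread might wait for up to 100 clock ticks as in the
sketch below. The semaphore name and the timeout value are arbitrary
choices for this illustration, and the timeout argument is treated as
an absolute tick count, hence the use of cyg_current_time.

#include <cyg/kernel/kapi.h>
#include <cyg/infra/diag.h>

static cyg_sem_t data_ready;

void
wait_with_timeout(void)
{
    if (cyg_semaphore_timed_wait(&data_ready, cyg_current_time() + 100)) {
        diag_printf("data ready\n");      /* an event was posted in time */
    } else {
        diag_printf("timed out\n");       /* the timeout expired first   */
    }
}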
In conventional APIs one common error condition is lack of memory. For
example the POSIX function pthread_create usually has to allocate some
memory dynamically for the thread stack and other per-thread data. If
the target hardware does not have enough memory to meet all demands,
or more commonly if the application contains a memory leak, then there
may not be enough memory available and the function call would fail.
The eCos kernel avoids such problems by never performing any dynamic
memory allocation. Instead it is the responsibility of the application
code to provide all the memory required for kernel data structures and
other needs. In the case of cyg_thread_create this means a cyg_thread
data structure to hold the thread details, and a char array for the
thread stack.
In many applications this approach results in all data structures
being allocated statically rather than dynamically. This has several
advantages. If the application is in fact too large for the target
hardware's memory then there will be an error at link-time rather than
at run-time, making the problem much easier to diagnose. Static
allocation does not involve any of the usual overheads associated with
dynamic allocation, for example there is no need to keep track of the
various free blocks in the system, and it may be possible to eliminate
malloc from the system completely. Problems such as fragmentation and
memory leaks cannot occur if all data is allocated statically.
However, some applications are sufficiently complicated that dynamic
memory allocation is required, and the various kernel functions do not
distinguish between statically and dynamically allocated memory. It
still remains the responsibility of the calling code to ensure that
sufficient memory is available, and passing null pointers to the
kernel will result in assertions or system failure.
    SMP Support

      SMP
      Support Symmetric Multiprocessing Systems

      Description

eCos contains support for limited Symmetric Multi-Processing (SMP).
This is only available on selected architectures and platforms. The
implementation has a number of restrictions on the kind of hardware
supported. These are described in .
The following sections describe the changes that have been made to the
eCos kernel to support SMP operation.
      System Startup

The system startup sequence needs to be somewhat different on an SMP
system, although this is largely transparent to application code. The
main startup takes place on only one CPU, called the primary CPU. All
other CPUs, the secondary CPUs, are either placed in suspended state
at reset, or are captured by the HAL and put into a spin as they start
up. The primary CPU is responsible for copying the DATA segment and
zeroing the BSS (if required), calling HAL variant and platform
initialization routines and invoking constructors. It then calls
cyg_start to enter the application. The application may then create
extra threads and other objects.
It is only when the application calls cyg_scheduler_start that the
secondary CPUs are initialized. This routine scans the list of
available secondary CPUs and invokes HAL_SMP_CPU_START to start each
CPU. Finally it calls an internal function Cyg_Scheduler::start_cpu to
enter the scheduler for the primary CPU.

Each secondary CPU starts in the HAL, where it completes any per-CPU
initialization before calling into the kernel at
cyg_kernel_cpu_startup. Here it claims the scheduler lock and calls
Cyg_Scheduler::start_cpu.

Cyg_Scheduler::start_cpu is common to both the primary and secondary
CPUs. The first thing this code does is to install an interrupt object
for this CPU's inter-CPU interrupt. From this point on the code is the
same as for the single CPU case: an initial thread is chosen and
entered.

From this point on the CPUs are all equal; eCos makes no further
distinction between the primary and secondary CPUs. However, the
hardware may still distinguish between them as far as interrupt
delivery is concerned.
      Scheduling

To function correctly an operating system kernel must protect its
vital data structures, such as the run queues, from concurrent access.
In a single CPU system the only concurrent activities to worry about
are asynchronous interrupts. The kernel can easily guard its data
structures against these by disabling interrupts. However, in a
multi-CPU system, this is inadequate since it does not block access by
other CPUs.
The eCos kernel protects its vital data structures using the scheduler
lock. In single CPU systems this is a simple counter that is
atomically incremented to acquire the lock and decremented to release
it. If the lock is decremented to zero then the scheduler may be
invoked to choose a different thread to run. Because interrupts may
continue to be serviced while the scheduler lock is claimed, ISRs are
not allowed to access kernel data structures, or call kernel routines
that can. Instead all such operations are deferred to an associated
DSR routine that is run during the lock release operation, when the
data structures are in a consistent state.

By choosing a kernel locking mechanism that does not rely on interrupt
manipulation to protect data structures, it is easier to convert eCos
to SMP than would otherwise be the case. The principal change needed
to make eCos SMP-safe is to convert the scheduler lock into a nestable
spin lock. This is done by adding a spinlock and a CPU id to the
original counter.
The algorithm for acquiring the scheduler lock is very simple. If the
scheduler lock's CPU id matches the current CPU then it can just
increment the counter and continue. If it does not match, the CPU must
spin on the spinlock, after which it may increment the counter and
store its own identity in the CPU id.

To release the lock, the counter is decremented. If it goes to zero
the CPU id value must be set to NONE and the spinlock cleared.

To protect these sequences against interrupts, they must be performed
with interrupts disabled. However, since these are very short code
sequences, they will not have an adverse effect on the interrupt
latency.
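An illustrative sketch of this algorithm is shown below. It is not the
real implementation, which lives inside the kernel's Cyg_Scheduler
class and is built on HAL primitives rather than on C11 atomics, and
it assumes that interrupts are disabled around each sequence as
described above.

#include <stdatomic.h>

#define NO_CPU  (-1)

static atomic_flag  sched_spinlock = ATOMIC_FLAG_INIT; /* underlying spinlock      */
static int          sched_lock_count = 0;              /* nesting counter          */
static _Atomic int  sched_lock_cpu   = NO_CPU;         /* CPU currently holding it */

void
sched_lock(int this_cpu)
{
    if (sched_lock_cpu != this_cpu) {
        while (atomic_flag_test_and_set(&sched_spinlock))
            ;                                  /* spin until the lock becomes free */
        sched_lock_cpu = this_cpu;
    }
    sched_lock_count++;                        /* nested claims just increment     */
}

void
sched_unlock(void)
{
    if (--sched_lock_count == 0) {
        sched_lock_cpu = NO_CPU;
        atomic_flag_clear(&sched_spinlock);    /* pending DSRs would run at this point */
    }
}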
Beyond converting the scheduler lock, further preparing the kernel for
SMP is a relatively minor matter. The main changes are to convert
various scalar housekeeping variables into arrays indexed by CPU id.
These include the current thread pointer, the need_reschedule flag and
the timeslice counter.
At present only the Multi-Level Queue (MLQ) scheduler is capable of
supporting SMP configurations. The main change made to this scheduler
is to cope with having several threads in execution at the same time.
Running threads are marked with the CPU that they are executing on.
When scheduling a thread, the scheduler skips past any running threads
until it finds a thread that is pending. While not a constant-time
algorithm, as in the single CPU case, this is still deterministic,
since the worst case time is bounded by the number of CPUs in the
system.

A second change to the scheduler is in the code used to decide when
the scheduler should be called to choose a new thread. The scheduler
attempts to keep the n CPUs running the n highest priority threads.
Since an event or interrupt on one CPU may require a reschedule on
another CPU, there must be a mechanism for deciding this. The
algorithm currently implemented is very simple. Given a thread that
has just been awakened (or had its priority changed), the scheduler
scans the CPUs, starting with the one it is currently running on, for
a current thread that is of lower priority than the new one. If one is
found then a reschedule interrupt is sent to that CPU and the scan
continues, but now using the current thread of the rescheduled CPU as
the candidate thread. In this way the new thread gets to run as
quickly as possible, hopefully on the current CPU, and the remaining
CPUs will pick up the remaining highest priority threads as a
consequence of processing the reschedule interrupt.

The final change to the scheduler is in the handling of timeslicing.
Only one CPU receives timer interrupts, although all CPUs must handle
timeslicing. To make this work, the CPU that receives the timer
interrupt decrements the timeslice counter for all CPUs, not just its
own. If the counter for a CPU reaches zero, then it sends a timeslice
interrupt to that CPU. On receiving the interrupt the destination CPU
enters the scheduler and looks for another thread at the same priority
to run. This is somewhat more efficient than distributing clock ticks
to all CPUs, since the interrupt is only needed when a timeslice
occurs.

All existing synchronization mechanisms work as before in an SMP
system. Additional synchronization mechanisms have been added to
provide explicit synchronization for SMP, in the form of spinlocks.
      SMP Interrupt Handling

The main area where the SMP nature of a system requires special
attention is in device drivers and especially interrupt handling. It
is quite possible for the ISR, DSR and thread components of a device
driver to execute on different CPUs. For this reason it is much more
important that SMP-capable device drivers use the interrupt-related
functions correctly. Typically a device driver would use the driver
API rather than call the kernel directly, but it is unlikely that
anybody would attempt to use a multiprocessor system without the
kernel package.
Two new functions have been added to the Kernel API to do interrupt
routing: cyg_interrupt_set_cpu and cyg_interrupt_get_cpu. Although not
currently supported, special values for the cpu argument may be used
in future to indicate that the interrupt is being routed dynamically
or is CPU-local. Once a vector has been routed to a new CPU, all other
interrupt masking and configuration operations are relative to that
CPU, where relevant.
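As a small sketch, assuming a device whose interrupt should be handled
on CPU 1 (the vector number below is a placeholder for the real,
platform-specific value):

#include <cyg/kernel/kapi.h>

#define DEVICE_VECTOR  1        /* placeholder vector number */

void
route_device_interrupt(void)
{
    cyg_interrupt_set_cpu(DEVICE_VECTOR, 1);
    /* cyg_interrupt_get_cpu(DEVICE_VECTOR) would now return 1. */
}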
There are more details of how interrupts should be handled in SMP
systems in .
    Thread creation

      cyg_thread_create
      Create a new thread

#include <cyg/kernel/kapi.h>

void cyg_thread_create(
    cyg_addrword_t      sched_info,
    cyg_thread_entry_t *entry,
    cyg_addrword_t      entry_data,
    char               *name,
    void               *stack_base,
    cyg_ucount32        stack_size,
    cyg_handle_t       *handle,
    cyg_thread         *thread
);
    Description

The cyg_thread_create function allows application code and eCos
packages to create new threads. In many applications this only happens
during system initialization and all required data is allocated
statically. However additional threads can be created at any time, if
necessary. A newly created thread is always in suspended state and
will not start running until it has been resumed via a call to
cyg_thread_resume. Also, if threads are created during system
initialization then they will not start running until the eCos
scheduler has been started.
The name argument is used primarily for debugging purposes, making it
easier to keep track of which cyg_thread structure is associated with
which application-level thread. The kernel configuration option
CYGVAR_KERNEL_THREADS_NAME controls whether or not this name is
actually used.
On creation each thread is assigned a unique handle, and this will be
stored in the location pointed at by the handle argument. Subsequent
operations on this thread including the required cyg_thread_resume
should use this handle to identify the thread.
The kernel requires a small amount of space for each thread, in the
form of a cyg_thread data structure, to hold information such as the
current state of that thread. To avoid any need for dynamic memory
allocation within the kernel this space has to be provided by
higher-level code, typically in the form of a static variable. The
thread argument provides this space.
    Thread Entry Point

The entry point for a thread takes the form:

void
thread_entry_function(cyg_addrword_t data)
{
    …
}
The second argument to cyg_thread_create is a pointer to such a
function. The third argument entry_data is used to pass additional
data to the function. Typically this takes the form of a pointer to
some static data, or a small integer, or 0 if the thread does not
require any additional data.
If the thread entry function ever returns then this is equivalent to
the thread calling cyg_thread_exit. Even though the thread will no
longer run again, it remains registered with the scheduler. If the
application needs to re-use the cyg_thread data structure then a call
to cyg_thread_delete is required first.
    Thread Priorities

The sched_info argument provides additional information to the
scheduler. The exact details depend on the scheduler being used. For
the bitmap and mlqueue schedulers it is a small integer, typically in
the range 0 to 31, with 0 being the highest priority. The lowest
priority is normally used only by the system's idle thread. The exact
number of priorities is controlled by the kernel configuration option
CYGNUM_KERNEL_SCHED_PRIORITIES.
It is the responsibility of the application developer to be aware of
the various threads in the system, including those created by eCos
packages, and to ensure that all threads run at suitable priorities.
For threads created by other packages the documentation provided by
those packages should indicate any requirements.
The functions cyg_thread_set_priority, cyg_thread_get_priority, and
cyg_thread_get_current_priority can be used to manipulate a thread's
priority.
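For instance, a thread might temporarily raise its own priority around
a time-critical section. The priority values used here are arbitrary
choices for this sketch:

#include <cyg/kernel/kapi.h>

void
boost_self(void)
{
    cyg_handle_t   self = cyg_thread_self();
    cyg_priority_t old  = cyg_thread_get_priority(self);

    cyg_thread_set_priority(self, 5);   /* run at a higher (numerically lower) priority */
    /* ... time-critical work ... */
    cyg_thread_set_priority(self, old);
}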
    Stacks and Stack Sizes

Each thread needs its own stack for local variables and to keep track
of function calls and returns. Again it is expected that this stack is
provided by the calling code, usually in the form of static data, so
that the kernel does not need any dynamic memory allocation
facilities. cyg_thread_create takes two arguments related to the
stack, a pointer to the base of the stack and the total size of this
stack. On many processors stacks actually descend from the top down,
so the kernel will add the stack size to the base address to determine
the starting location.
The exact stack size requirements for any given thread depend on a
number of factors. The most important is of course the code that will
be executed in the context of this thread: if this involves
significant nesting of function calls, recursion, or large local
arrays, then the stack size needs to be set to a suitably high value.
There are some architectural issues, for example the number of cpu
registers and the calling conventions will have some effect on stack
usage. Also, depending on the configuration, it is possible that some
other code such as interrupt handlers will occasionally run on the
current thread's stack. This depends in part on configuration options
such as CYGIMP_HAL_COMMON_INTERRUPTS_USE_INTERRUPT_STACK and
CYGSEM_HAL_COMMON_INTERRUPTS_ALLOW_NESTING.
Determining an application's actual stack size requirements is the
responsibility of the application developer, since the kernel cannot
know in advance what code a given thread will run. However, the system
does provide some hints about reasonable stack sizes in the form of
two constants: CYGNUM_HAL_STACK_SIZE_MINIMUM and
CYGNUM_HAL_STACK_SIZE_TYPICAL. These are defined by the appropriate
HAL package. The MINIMUM value is appropriate for a thread that just
runs a single function and makes very simple system calls. Trying to
create a thread with a smaller stack than this is illegal. The TYPICAL
value is appropriate for applications where application calls are
nested no more than half a dozen or so levels, and there are no large
arrays on the stack.
If the stack sizes are not estimated correctly and a stack overflow
occurs, the probable result is some form of memory corruption. This
can be very hard to track down. The kernel does contain some code to
help detect stack overflows, controlled by the configuration option
CYGFUN_KERNEL_THREADS_STACK_CHECKING: a small amount of space is
reserved at the stack limit and filled with a special signature; every
time a thread context switch occurs this signature is checked, and if
invalid that is a good indication (but not absolute proof) that a
stack overflow has occurred. This form of stack checking is enabled by
default when the system is built with debugging enabled. A related
configuration option is CYGFUN_KERNEL_THREADS_STACK_MEASUREMENT:
enabling this option means that a thread can call the function
cyg_thread_measure_stack_usage to find out the maximum stack usage to
date. Note that this is not necessarily the true maximum because, for
example, it is possible that in the current run no interrupt occurred
at the worst possible moment.
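A small sketch of using the measurement facility, assuming the
configuration option is enabled and that worker_handle was filled in
by an earlier cyg_thread_create call:

#include <cyg/kernel/kapi.h>
#include <cyg/infra/diag.h>

extern cyg_handle_t worker_handle;

void
report_stack_usage(void)
{
    cyg_uint32 used = cyg_thread_measure_stack_usage(worker_handle);
    diag_printf("worker stack used so far: %u bytes\n", (unsigned) used);
}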
    Valid contexts

cyg_thread_create may be called during initialization and from within
thread context. It may not be called from inside a DSR.
    Example

A simple example of thread creation is shown below. This involves
creating five threads, one producer and four consumers or workers. The
threads are created in the system's cyg_user_start: depending on the
configuration it might be more appropriate to do this elsewhere, for
example inside main.
#include <cyg/hal/hal_arch.h>
#include <cyg/kernel/kapi.h>

// These numbers depend entirely on your application
#define NUMBER_OF_WORKERS    4
#define PRODUCER_PRIORITY   10
#define WORKER_PRIORITY     11
#define PRODUCER_STACKSIZE  CYGNUM_HAL_STACK_SIZE_TYPICAL
#define WORKER_STACKSIZE    (CYGNUM_HAL_STACK_SIZE_MINIMUM + 1024)

static unsigned char producer_stack[PRODUCER_STACKSIZE];
static unsigned char worker_stacks[NUMBER_OF_WORKERS][WORKER_STACKSIZE];
static cyg_handle_t producer_handle, worker_handles[NUMBER_OF_WORKERS];
static cyg_thread   producer_thread, worker_threads[NUMBER_OF_WORKERS];

static void
producer(cyg_addrword_t data)
{
    …
}

static void
worker(cyg_addrword_t data)
{
    …
}

void
cyg_user_start(void)
{
    int i;

    cyg_thread_create(PRODUCER_PRIORITY, &producer, 0, "producer",
                      producer_stack, PRODUCER_STACKSIZE,
                      &producer_handle, &producer_thread);
    cyg_thread_resume(producer_handle);
    for (i = 0; i < NUMBER_OF_WORKERS; i++) {
        cyg_thread_create(WORKER_PRIORITY, &worker, i, "worker",
                          worker_stacks[i], WORKER_STACKSIZE,
                          &(worker_handles[i]), &(worker_threads[i]));
        cyg_thread_resume(worker_handles[i]);
    }
}
    Thread Entry Points and C++

For code written in C++ the thread entry function must be either a
static member function of a class or an ordinary function outside any
class. It cannot be a normal member function of a class because such
member functions take an implicit additional argument this, and the
kernel has no way of knowing what value to use for this argument. One
way around this problem is to make use of a special static member
function, for example:
class fred {
  public:
    void thread_function();
    static void static_thread_aux(cyg_addrword_t);
};

void
fred::static_thread_aux(cyg_addrword_t objptr)
{
    fred* object = reinterpret_cast<fred*>(objptr);
    object->thread_function();
}

static fred instance;

extern "C" void
cyg_start( void )
{
    …
    cyg_thread_create( …,
                       &fred::static_thread_aux,
                       reinterpret_cast<cyg_addrword_t>(&instance),
                       …);
    …
}
1251
Effectively this uses the 
1252
class="function">entry_data argument to
1253
cyg_thread_create to hold the
1254
this pointer. Unfortunately this approach does
1255
require the use of some C++ casts, so some of the type safety that can
1256
be achieved when programming in C++ is lost.
1257
      
1258
    
1259
 
1260
  
1261
 
1262
1263
1264
 
1265
  
1266
 
1267
    
1268
    Thread information

      cyg_thread_self
      cyg_thread_idle_thread
      cyg_thread_get_stack_base
      cyg_thread_get_stack_size
      cyg_thread_measure_stack_usage
      cyg_thread_get_next
      cyg_thread_get_info
      cyg_thread_get_id
      cyg_thread_find
      Get basic thread information

#include <cyg/kernel/kapi.h>

cyg_handle_t cyg_thread_self(void);
cyg_handle_t cyg_thread_idle_thread(void);
cyg_addrword_t cyg_thread_get_stack_base(cyg_handle_t thread);
cyg_uint32 cyg_thread_get_stack_size(cyg_handle_t thread);
cyg_uint32 cyg_thread_measure_stack_usage(cyg_handle_t thread);
cyg_bool cyg_thread_get_next(cyg_handle_t *thread, cyg_uint16 *id);
cyg_bool cyg_thread_get_info(cyg_handle_t thread, cyg_uint16 id, cyg_thread_info *info);
cyg_uint16 cyg_thread_get_id(cyg_handle_t thread);
cyg_handle_t cyg_thread_find(cyg_uint16 id);
    Description
1332
      
1333
These functions can be used to obtain some basic information about
1334
various threads in the system. Typically they serve little or no
1335
purpose in real applications, but they can be useful during debugging.
1336
      
1337
      
1338
cyg_thread_self returns a handle corresponding
1339
to the current thread. It will be the same as the value filled in by
1340
cyg_thread_create when the current thread was
1341
created. This handle can then be passed to other functions such as
1342
cyg_thread_get_priority.
1343
      
1344
      
1345
cyg_thread_idle_thread returns the handle
1346
corresponding to the idle thread. This thread is created automatically
1347
by the kernel, so application code has no other way of getting hold of
1348
this information.
1349
      
1350
      
1351
cyg_thread_get_stack_base and
1352
cyg_thread_get_stack_size return information
1353
about a specific thread's stack. The values returned will match the
1354
values passed to cyg_thread_create when this
1355
thread was created.
1356
      
1357
      
1358
cyg_thread_measure_stack_usage is only available
1359
if the configuration option
1360
CYGFUN_KERNEL_THREADS_STACK_MEASUREMENT is enabled.
1361
The return value is the maximum number of bytes of stack space used so
1362
far by the specified thread. Note that this should not be considered a
1363
true upper bound, for example it is possible that in the current test
1364
run the specified thread has not yet been interrupted at the deepest
1365
point in the function call graph. Nevertheless the value returned
1366
can give some useful indication of the thread's stack requirements.
1367
      
1368
      
1369
cyg_thread_get_next is used to enumerate all the
1370
current threads in the system. It should be called initially with the
1371
locations pointed to by thread and
1372
id set to zero. On return these will be set to
1373
the handle and ID of the first thread. On subsequent calls, these
1374
parameters should be left set to the values returned by the previous
1375
call.  The handle and ID of the next thread in the system will be
1376
installed each time, until a false return value
1377
indicates the end of the list.
1378
      
1379
      
1380
cyg_thread_get_info fills in the
1381
cyg_thread_info structure with information about the
1382
thread described by the thread and
1383
id arguments. The information returned includes
1384
the thread's handle and id, its state and name, priorities and stack
1385
parameters. If the thread does not exist the function returns
1386
false.
1387
    
1388
    
1389
The cyg_thread_info structure is defined as follows by
1390
<cyg/kernel/kapi.h>, but may
1391
be extended in future with additional members, and so its size should
1392
not be relied upon:
1393
1394
typedef struct
1395
{
1396
    cyg_handle_t        handle;
1397
    cyg_uint16          id;
1398
    cyg_uint32          state;
1399
    char                *name;
1400
    cyg_priority_t      set_pri;
1401
    cyg_priority_t      cur_pri;
1402
    cyg_addrword_t      stack_base;
1403
    cyg_uint32          stack_size;
1404
    cyg_uint32          stack_used;
1405
} cyg_thread_info;
1406
1407
    
1408
    
1409
cyg_thread_get_id returns the unique thread ID for
1410
the thread identified by thread.
1411
    
1412
    
1413
cyg_thread_find returns a handle for the thread
1414
whose ID is id. If no such thread exists, a
1415
zero handle is returned.
1416
    
1417
    
1418
 
1419
    Valid contexts
1420
      
1421
cyg_thread_self may only be called from thread
1422
context. cyg_thread_idle_thread may be called
1423
from thread or DSR context, but only after the system has been
1424
initialized. cyg_thread_get_stack_base,
1425
cyg_thread_get_stack_size and
1426
cyg_thread_measure_stack_usage may be called
1427
any time after the specified thread has been created, but measuring
1428
stack usage involves looping over at least part of the thread's stack
1429
so this should normally only be done from thread context.
1430
cyg_thread_get_id may be called from any context
1431
as long as the caller can guarantee that the supplied thread handle
1432
remains valid.
1433
      
1434
    
1435
 
1436
    Examples
1437
      
1438
A simple example of the use of the
1439
cyg_thread_get_next and
1440
cyg_thread_get_info functions follows:
1441
      
1442
      
1443
 
1444
#include <cyg/kernel/kapi.h>
1445
#include <stdio.h>
1446
 
1447
void show_threads(void)
1448
{
1449
    cyg_handle_t thread = 0;
1450
    cyg_uint16 id = 0;
1451
 
1452
    while( cyg_thread_get_next( &thread, &id ) )
1453
    {
1454
        cyg_thread_info info;
1455
 
1456
        if( !cyg_thread_get_info( thread, id, &info ) )
1457
            break;
1458
 
1459
        printf("ID: %04x name: %10s pri: %d\n",
1460
                info.id, info.name?info.name:"----", info.set_pri );
1461
    }
1462
}
1463
 
1464
      
1465
    
1466
 
1467
  
1468
 
1469
1470
1471
 
1472
  
1473
 
1474
    
1475
    Thread control
1476
    
1477
 
1478
    
1479
      cyg_thread_yield
1480
      cyg_thread_delay
1481
      cyg_thread_suspend
1482
      cyg_thread_resume
1483
      cyg_thread_release
1484
      Control whether or not a thread is running
1485
    
1486
 
1487
    
1488
      
1489
        
1490
#include <cyg/kernel/kapi.h>
1491
        
1492
        
1493
          void cyg_thread_yield
1494
          
1495
        
1496
        
1497
          void cyg_thread_delay
1498
          cyg_tick_count_t delay
1499
        
1500
        
1501
           void cyg_thread_suspend
1502
           cyg_handle_t thread
1503
        
1504
        
1505
           void cyg_thread_resume
1506
           cyg_handle_t thread
1507
        
1508
        
1509
           void cyg_thread_release
1510
           cyg_handle_t thread
1511
        
1512
      
1513
    
1514
 
1515
    Description
1516
      
1517
These functions provide some control over whether or not a particular
1518
thread can run. Apart from the required use of
1519
cyg_thread_resume to start a newly-created
1520
thread, application code should normally use proper synchronization
1521
primitives such as condition variables or mail boxes.
1522
      
1523
    
1524
 
1525
    Yield
1526
      
1527
cyg_thread_yield allows a thread to relinquish
1528
control of the processor to some other runnable thread which has the
1529
same priority. This can have no effect on any higher-priority thread
1530
since, if such a thread were runnable, the current thread would have
1531
been preempted in its favour. Similarly it can have no effect on any
1532
lower-priority thread because the current thread will always be run in
1533
preference to those. As a consequence this function is only useful
1534
in configurations with a scheduler that allows multiple threads to run
1535
at the same priority, for example the mlqueue scheduler. If instead
1536
the bitmap scheduler were being used then
1537
cyg_thread_yield() would serve no purpose.
1538
      
1539
      
1540
Even if a suitable scheduler such as the mlqueue scheduler has been
1541
configured, cyg_thread_yield will still rarely
1542
prove useful: instead timeslicing will be used to ensure that all
1543
threads of a given priority get a fair slice of the available
1544
processor time. However it is possible to disable timeslicing via the
1545
configuration option CYGSEM_KERNEL_SCHED_TIMESLICE,
1546
in which case cyg_thread_yield can be used to
1547
implement a form of cooperative multitasking.
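As an illustrative sketch (not taken from the eCos sources), a set of
equal-priority worker threads could cooperate by yielding after each
bounded unit of work once timeslicing has been disabled:

#include <cyg/kernel/kapi.h>

/* Cooperative multitasking sketch: with CYGSEM_KERNEL_SCHED_TIMESLICE
   disabled, each equal-priority worker does one unit of work and then
   explicitly hands the processor to its peers. */
static void
worker(cyg_addrword_t data)
{
    for (;;) {
        /* ... perform one bounded unit of work ... */
        cyg_thread_yield();
    }
}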
1548
      
1549
    
1550
 
1551
    Delay
1552
      
1553
cyg_thread_delay allows a thread to suspend until
1554
the specified number of clock ticks have occurred. For example, if a
1555
value of 1 is used and the system clock runs at a frequency of 100Hz
1556
then the thread will sleep for up to 10 milliseconds. This
1557
functionality depends on the presence of a real-time system clock, as
1558
controlled by the configuration option
1559
CYGVAR_KERNEL_COUNTERS_CLOCK.
1560
      
1561
      
1562
If the application requires delays measured in milliseconds or similar
1563
units rather than in clock ticks, some calculations are needed to
1564
convert between these units as described in the
1565
linkend="kernel-clocks">. Usually these calculations can be done by
1566
the application developer, or at compile-time. Performing such
1567
calculations prior to every call to
1568
cyg_thread_delay adds unnecessary overhead to the
1569
system.
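For example, assuming the common case of a 100Hz real-time clock
(10 milliseconds per tick), a fixed delay can be converted once at
compile time. The tick period used here is an assumption and should be
checked against the platform documentation:

#include <cyg/kernel/kapi.h>

#define MS_PER_TICK   10          /* assumes a 100Hz system clock */
#define POLL_DELAY_MS 500

static void
poll_loop(cyg_addrword_t data)
{
    for (;;) {
        /* ... check the device ... */
        cyg_thread_delay(POLL_DELAY_MS / MS_PER_TICK);
    }
}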
1570
      
1571
    
1572
 
1573
    Suspend and Resume
1574
      
1575
Associated with each thread is a suspend counter. When a thread is
1576
first created this counter is initialized to 1.
1577
cyg_thread_suspend can be used to increment the
1578
suspend counter, and cyg_thread_resume decrements
1579
it. The scheduler will never run a thread with a non-zero suspend
1580
counter. Therefore a newly created thread will not run until it has
1581
been resumed.
1582
      
1583
      
1584
An occasional problem with the use of suspend and resume functionality
1585
is that a thread gets suspended more times than it is resumed and
1586
hence never becomes runnable again. This can lead to very confusing
1587
behaviour. To help with debugging such problems the kernel provides a
1588
configuration option
1589
CYGNUM_KERNEL_MAX_SUSPEND_COUNT_ASSERT which
1590
imposes an upper bound on the number of suspend calls without matching
1591
resumes, with a reasonable default value. This functionality depends
1592
on infrastructure assertions being enabled.
1593
      
1594
    
1595
 
1596
    Releasing a Blocked Thread
1597
      
1598
When a thread is blocked on a synchronization primitive such as a
1599
semaphore or a mutex, or when it is waiting for an alarm to trigger,
1600
it can be forcibly woken up using
1601
cyg_thread_release. Typically this will cause the
1602
affected synchronization primitive to return false, indicating that
1603
the operation was not completed successfully. This function has to be
1604
used with great care, and in particular it should only be used on
1605
threads that have been designed appropriately and check all return
1606
codes. If instead it were to be used on, say, an arbitrary thread that
1607
is attempting to claim a mutex then that thread might not bother to
1608
check the result of the mutex lock operation - usually there would be
1609
no reason to do so. Therefore the thread will now continue running in
1610
the false belief that it has successfully claimed a mutex lock, and
1611
the resulting behaviour is undefined. If the system has been built
1612
with assertions enabled then it is possible that an assertion will
1613
trigger when the thread tries to release the mutex it does not
1614
actually own.
1615
      
1616
      
1617
The main use of cyg_thread_release is in the
1618
POSIX compatibility layer, where it is used in the implementation of
1619
per-thread signals and cancellation handlers.
1620
      
1621
    
1622
 
1623
    Valid contexts
1624
      
1625
cyg_thread_yield can only be called from thread
1626
context: a DSR must always run to completion and cannot yield the
1627
processor to some thread. cyg_thread_suspend,
1628
cyg_thread_resume, and
1629
cyg_thread_release may be called from thread or
1630
DSR context.
1631
      
1632
    
1633
 
1634
  
1635
 
1636
1637
1638
 
1639
  
1640
 
1641
    
1642
    Thread termination
1643
    
1644
 
1645
    
1646
      cyg_thread_exit
1647
      cyg_thread_kill
1648
      cyg_thread_delete
1649
      Allow threads to terminate
1650
    
1651
 
1652
    
1653
      
1654
        
1655
#include <cyg/kernel/kapi.h>
1656
        
1657
        
1658
          void cyg_thread_exit
1659
          
1660
        
1661
        
1662
          void cyg_thread_kill
1663
          cyg_handle_t thread
1664
        
1665
        
1666
          cyg_bool_t cyg_thread_delete
1667
          cyg_handle_t thread
1668
        
1669
      
1670
    
1671
 
1672
    Description
1673
      
1674
In many embedded systems the various threads are allocated statically,
1675
created during initialization, and never need to terminate. This
1676
avoids any need for dynamic memory allocation or other resource
1677
management facilities. However if a given application does have a
1678
requirement that some threads be created dynamically, must terminate,
1679
and their resources such as the stack be reclaimed, then the kernel
1680
provides the functions cyg_thread_exit,
1681
cyg_thread_kill, and
1682
cyg_thread_delete.
1683
      
1684
      
1685
cyg_thread_exit allows a thread to terminate
1686
itself, thus ensuring that it will not be run again by the scheduler.
1687
However the cyg_thread data structure passed
1688
to cyg_thread_create remains in use, and the
1689
handle returned by cyg_thread_create remains
1690
valid. This allows other threads to perform certain operations on the
1691
terminated thread, for example to determine its stack usage via
1692
cyg_thread_measure_stack_usage. When the handle
1693
and cyg_thread structure are no longer
1694
required, cyg_thread_delete should be called to
1695
release these resources. If the stack was dynamically allocated then
1696
this should not be freed until after the call to
1697
cyg_thread_delete.
1698
      
1699
      
1700
Alternatively, one thread may use cyg_thread_kill
1701
on another. This has much the same effect as the affected thread
1702
calling cyg_thread_exit. However killing a thread
1703
is generally rather dangerous because no attempt is made to unlock any
1704
synchronization primitives currently owned by that thread or release
1705
any other resources that thread may have claimed. Therefore use of
1706
this function should be avoided, and
1707
cyg_thread_exit is preferred.
1708
cyg_thread_kill cannot be used by a thread to
1709
kill itself.
1710
      
1711
      
1712
cyg_thread_delete should be used on a thread
1713
after it has exited and is no longer required. After this call the
1714
thread handle is no longer valid, and both the
1715
cyg_thread structure and the thread stack can
1716
be re-used or freed. If cyg_thread_delete is
1717
invoked on a thread that is still running then there is an implicit
1718
call to cyg_thread_kill. This function returns
1719
true if the delete was successful, and
1720
false if the delete did not happen. The delete
1721
may not happen if, for example, the thread being destroyed is a lower
1722
priority thread than the running thread, and will thus not wake up
1723
in order to exit until it is rescheduled.
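The following sketch, using hypothetical names, shows the typical
division of labour: the worker terminates itself with
cyg_thread_exit, and some other thread later calls
cyg_thread_delete so that the cyg_thread structure and stack can be
re-used:

#include <cyg/kernel/kapi.h>

static void
worker(cyg_addrword_t data)
{
    /* ... do the work ... */
    cyg_thread_exit();            /* never runs again, but resources remain */
}

void
reap_worker(cyg_handle_t worker_handle)
{
    /* returns true once the thread has actually gone away and its
       cyg_thread structure and stack may be re-used or freed */
    if (cyg_thread_delete(worker_handle)) {
        /* ... release the stack if it was dynamically allocated ... */
    }
}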
1724
      
1725
    
1726
 
1727
    Valid contexts
1728
      
1729
cyg_thread_exit,
1730
cyg_thread_kill and
1731
cyg_thread_delete can only be called from thread
1732
context.
1733
      
1734
    
1735
 
1736
  
1737
 
1738
1739
1740
 
1741
  
1742
 
1743
    
1744
    Thread priorities
1745
    
1746
 
1747
    
1748
      cyg_thread_get_priority
1749
      cyg_thread_get_current_priority
1750
      cyg_thread_set_priority
1751
      Examine and manipulate thread priorities
1752
    
1753
    
1754
      
1755
        
1756
#include <cyg/kernel/kapi.h>
1757
        
1758
        
1759
          cyg_priority_t cyg_thread_get_priority
1760
          cyg_handle_t thread
1761
        
1762
        
1763
          cyg_priority_t cyg_thread_get_current_priority
1764
          cyg_handle_t thread
1765
        
1766
        
1767
          void cyg_thread_set_priority
1768
          cyg_handle_t thread
1769
          cyg_priority_t priority
1770
        
1771
      
1772
    
1773
 
1774
    Description
1775
      
1776
Typical schedulers use the concept of a thread priority to determine
1777
which thread should run next. Exactly what this priority consists of
1778
will depend on the scheduler, but a typical implementation would be a
1779
small integer in the range 0 to 31, with 0 being the highest priority.
1780
Usually only the idle thread will run at the lowest priority. The
1781
exact number of priority levels available depends on the
1782
configuration, typically the option
1783
CYGNUM_KERNEL_SCHED_PRIORITIES.
1784
      
1785
      
1786
cyg_thread_get_priority can be used to determine
1787
the priority of a thread, or more correctly the value last used in a
1788
cyg_thread_set_priority call or when the thread
1789
was first created. In some circumstances it is possible that the
1790
thread is actually running at a higher priority. For example, if it
1791
owns a mutex and priority ceilings or inheritance is being used to
1792
prevent priority inversion problems, then the thread's priority may
1793
have been boosted temporarily.
1794
cyg_thread_get_current_priority returns the real
1795
current priority.
1796
      
1797
      
1798
In many applications appropriate thread priorities can be determined
1799
and allocated statically. However, if it is necessary for a thread's
1800
priority to change at run-time then the
1801
cyg_thread_set_priority function provides this
1802
functionality.
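A brief sketch of such run-time manipulation, raising the current
thread's priority around a time-critical operation and then restoring
it (the value 4 is arbitrary):

#include <cyg/kernel/kapi.h>

void
do_urgent_work(void)
{
    cyg_handle_t   self = cyg_thread_self();
    cyg_priority_t old  = cyg_thread_get_priority(self);

    cyg_thread_set_priority(self, 4);     /* boost for the critical work */
    /* ... time-critical work ... */
    cyg_thread_set_priority(self, old);   /* restore the original priority */
}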
1803
      
1804
    
1805
 
1806
    Valid contexts
1807
      
1808
cyg_thread_get_priority and
1809
cyg_thread_get_current_priority can be called
1810
from thread or DSR context, although the latter is rarely useful.
1811
cyg_thread_set_priority should also only be
1812
called from thread context.
1813
      
1814
    
1815
  
1816
 
1817
1818
1819
 
1820
  
1821
 
1822
    
1823
    Per-thread data
1824
    
1825
 
1826
    
1827
      cyg_thread_new_data_index
1828
      cyg_thread_free_data_index
1829
      cyg_thread_get_data
1830
      cyg_thread_get_data_ptr
1831
      cyg_thread_set_data
1832
      Manipulate per-thread data
1833
    
1834
    
1835
      
1836
        
1837
#include <cyg/kernel/kapi.h>
1838
        
1839
        
1840
          cyg_ucount32 cyg_thread_new_data_index
1841
          
1842
        
1843
        
1844
          void cyg_thread_free_data_index
1845
          cyg_ucount32 index
1846
        
1847
        
1848
          cyg_addrword_t cyg_thread_get_data
1849
          cyg_ucount32 index
1850
        
1851
        
1852
          cyg_addrword_t* cyg_thread_get_data_ptr
1853
          cyg_ucount32 index
1854
        
1855
        
1856
          void cyg_thread_set_data
1857
          cyg_ucount32 index
1858
          cyg_addrword_t data
1859
        
1860
      
1861
    
1862
 
1863
    Description
1864
      
1865
In some applications and libraries it is useful to have some data that
1866
is specific to each thread. For example, many of the functions in the
1867
POSIX compatibility package return -1 to indicate an error and store
1868
additional information in what appears to be a global variable
1869
errno. However, if multiple threads make concurrent
1870
calls into the POSIX library and if errno were
1871
really a global variable then a thread would have no way of knowing
1872
whether the current errno value really corresponded
1873
to the last POSIX call it made, or whether some other thread had run
1874
in the meantime and made a different POSIX call which updated the
1875
variable. To avoid such confusion errno is instead
1876
implemented as a per-thread variable, and each thread has its own
1877
instance.
1878
      
1879
      
1880
The support for per-thread data can be disabled via the configuration
1881
option CYGVAR_KERNEL_THREADS_DATA. If enabled, each
1882
cyg_thread data structure holds a small array
1883
of words. The size of this array is determined by the configuration
1884
option CYGNUM_KERNEL_THREADS_DATA_MAX. When a
1885
thread is created the array is filled with zeroes.
1886
      
1887
      
1888
If an application needs to use per-thread data then it needs an index
1889
into this array which has not yet been allocated to other code. This
1890
index can be obtained by calling
1891
cyg_thread_new_data_index, and then used in
1892
subsequent calls to cyg_thread_get_data.
1893
Typically indices are allocated during system initialization and
1894
stored in static variables. If for some reason a slot in the array is
1895
no longer required and can be re-used then it can be released by calling
1896
cyg_thread_free_data_index.
1897
      
1898
      
1899
The current per-thread data in a given slot can be obtained using
1900
cyg_thread_get_data. This implicitly operates on
1901
the current thread, and its single argument should be an index as
1902
returned by cyg_thread_new_data_index. The
1903
per-thread data can be updated using
1904
cyg_thread_set_data. If a particular item of
1905
per-thread data is needed repeatedly then
1906
cyg_thread_get_data_ptr can be used to obtain the
1907
address of the data, and indirecting through this pointer allows the
1908
data to be examined and updated efficiently.
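A minimal sketch, with hypothetical names, of allocating a slot during
initialization and then using it from thread context:

#include <cyg/kernel/kapi.h>

static cyg_ucount32 error_slot;   /* allocated once, shared by all threads */

void
errors_init(void)
{
    error_slot = cyg_thread_new_data_index();
}

void
set_last_error(cyg_addrword_t code)
{
    cyg_thread_set_data(error_slot, code);     /* affects the current thread only */
}

cyg_addrword_t
get_last_error(void)
{
    return cyg_thread_get_data(error_slot);
}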
1909
      
1910
      
1911
Some packages, for example the error and POSIX packages, have
1912
pre-allocated slots in the array of per-thread data. These slots
1913
should not normally be used by application code, and instead slots
1914
should be allocated during initialization by a call to
1915
cyg_thread_new_data_index. If it is known that,
1916
for example, the configuration will never include the POSIX
1917
compatibility package then application code may instead decide to
1918
re-use the slot allocated to that package,
1919
CYGNUM_KERNEL_THREADS_DATA_POSIX, but obviously
1920
this does involve a risk of strange and subtle bugs if the
1921
application's requirements ever change.
1922
      
1923
    
1924
 
1925
    Valid contexts
1926
      
1927
Typically cyg_thread_new_data_index is only
1928
called during initialization, but may also be called at any time in
1929
thread context. cyg_thread_free_data_index, if
1930
used at all, can also be called during initialization or from thread
1931
context. cyg_thread_get_data,
1932
cyg_thread_get_data_ptr, and
1933
cyg_thread_set_data may only be called from
1934
thread context because they implicitly operate on the current thread.
1935
      
1936
    
1937
 
1938
  
1939
 
1940
1941
1942
 
1943
  
1944
 
1945
    
1946
    Thread destructors
1947
    
1948
 
1949
    
1950
      cyg_thread_add_destructor
1951
      cyg_thread_rem_destructor
1952
      Call functions on thread termination
1953
    
1954
    
1955
      
1956
        
1957
#include <cyg/kernel/kapi.h>
1958
typedef void (*cyg_thread_destructor_fn)(cyg_addrword_t);
1959
        
1960
        
1961
          cyg_bool_t cyg_thread_add_destructor
1962
          cyg_thread_destructor_fn fn
1963
          cyg_addrword_t data
1964
        
1965
        
1966
          cyg_bool_t cyg_thread_rem_destructor
1967
          cyg_thread_destructor_fn fn
1968
          cyg_addrword_t data
1969
        
1970
      
1971
    
1972
 
1973
    Description
1974
      
1975
These functions are provided for cases when an application requires a
1976
function to be automatically called when a thread exits. This is often
1977
useful when, for example, freeing up resources allocated by the thread.
1978
      
1979
      
1980
This support must be enabled with the configuration option
1981
CYGPKG_KERNEL_THREADS_DESTRUCTORS. When enabled,
1982
you may register a function of type
1983
cyg_thread_destructor_fn to be called on thread
1984
termination using cyg_thread_add_destructor. You
1985
may also provide it with a piece of arbitrary information in the
1986
data argument which will be passed to the
1987
destructor function fn when the thread
1988
terminates. If you no longer wish to call a function previously
1989
registered with cyg_thread_add_destructor, you
1990
may call cyg_thread_rem_destructor with the same
1991
parameters used to register the destructor function. Both these
1992
functions return true on success and
1993
false on failure.
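As a sketch of typical usage (the buffer and its allocation are purely
illustrative), a thread might register a destructor to release memory
it has allocated for itself:

#include <stdlib.h>
#include <cyg/kernel/kapi.h>

static void
release_buffer(cyg_addrword_t data)
{
    free((void*) data);           /* runs automatically when the thread exits */
}

static void
worker(cyg_addrword_t arg)
{
    void* buffer = malloc(1024);
    if (!cyg_thread_add_destructor(release_buffer, (cyg_addrword_t) buffer)) {
        /* destructor table is full, so clean up by hand instead */
    }
    /* ... */
}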
1994
      
1995
      
1996
By default, thread destructors are per-thread, which means that registering
1997
a destructor function only registers that function for the current thread.
1998
In other words, each thread has its own list of destructors.
1999
Alternatively you may disable the configuration option
2000
CYGSEM_KERNEL_THREADS_DESTRUCTORS_PER_THREAD in which
2001
case any registered destructors will be run when any
2002
threads exit. In other words, the thread destructor list is global and all
2003
threads have the same destructors.
2004
      
2005
      
2006
There is a limit to the number of destructors which may be registered,
2007
which can be controlled with the
2008
CYGNUM_KERNEL_THREADS_DESTRUCTORS configuration
2009
option. Increasing this value will very slightly increase the amount
2010
of memory in use, and when
2011
CYGSEM_KERNEL_THREADS_DESTRUCTORS_PER_THREAD is
2012
enabled, the amount of memory used per thread will increase. When the
2013
limit has been reached, cyg_thread_add_destructor
2014
will return false.
2015
      
2016
    
2017
 
2018
    Valid contexts
2019
      
2020
When CYGSEM_KERNEL_THREADS_DESTRUCTORS_PER_THREAD
2021
is enabled, these functions must only be called from a thread context
2022
as they implicitly operate on the current thread. When
2023
CYGSEM_KERNEL_THREADS_DESTRUCTORS_PER_THREAD is
2024
disabled, these functions may be called from thread or DSR context,
2025
or at initialization time.
2026
      
2027
    
2028
 
2029
  
2030
 
2031
2032
2033
 
2034
  
2035
 
2036
    
2037
    Exception handling
2038
    
2039
 
2040
    
2041
      cyg_exception_set_handler
2042
      cyg_exception_clear_handler
2043
      cyg_exception_call_handler
2044
      Handle processor exceptions
2045
    
2046
    
2047
      
2048
        
2049
#include <cyg/kernel/kapi.h>
2050
        
2051
        
2052
          void cyg_exception_set_handler
2053
          cyg_code_t exception_number
2054
          cyg_exception_handler_t* new_handler
2055
          cyg_addrword_t new_data
2056
          cyg_exception_handler_t** old_handler
2057
          cyg_addrword_t* old_data
2058
        
2059
        
2060
          void cyg_exception_clear_handler
2061
          cyg_code_t exception_number
2062
        
2063
        
2064
          void cyg_exception_call_handler
2065
          cyg_handle_t thread
2066
          cyg_code_t exception_number
2067
          cyg_addrword_t exception_info
2068
        
2069
      
2070
    
2071
 
2072
    Description
2073
      
2074
Sometimes code attempts operations that are not legal on the current
2075
hardware, for example dividing by zero, or accessing data through a
2076
pointer that is not properly aligned. When this happens the hardware
2077
will raise an exception. This is very similar to an interrupt, but
2078
happens synchronously with code execution rather than asynchronously
2079
and hence can be tied to the thread that is currently running.
2080
      
2081
      
2082
The exceptions that can be raised depend very much on the hardware,
2083
especially the processor. The corresponding documentation should be
2084
consulted for more details. Alternatively the architectural HAL header
2085
file hal_intr.h, or one of the
2086
variant or platform header files it includes, will contain appropriate
2087
definitions. The details of how to handle exceptions, including
2088
whether or not it is possible to recover from them, also depend on the
2089
hardware.
2090
      
2091
      
2092
Exception handling is optional, and can be disabled through the
2093
configuration option CYGPKG_KERNEL_EXCEPTIONS. If
2094
an application has been exhaustively tested and is trusted never to
2095
raise a hardware exception then this option can be disabled and code
2096
and data sizes will be reduced somewhat. If exceptions are left
2097
enabled then the system will provide default handlers for the various
2098
exceptions, but these do nothing. Even the specific type of exception
2099
is ignored, so there is no point in attempting to decode this and
2100
distinguish between, say, a divide-by-zero and an unaligned access.
2101
If the application installs its own handlers and wants details of the
2102
specific exception being raised then the configuration option
2103
CYGSEM_KERNEL_EXCEPTIONS_DECODE has to be enabled.
2104
      
2105
      
2106
An alternative handler can be installed using
2107
cyg_exception_set_handler. This requires a code
2108
for the exception, a function pointer for the new exception handler,
2109
and a parameter to be passed to this handler. Details of the
2110
previously installed exception handler will be returned via the
2111
remaining two arguments, allowing that handler to be reinstated, or
2112
null pointers can be used if this information is of no interest. An
2113
exception handling function should take the following form:
2114
      
2115
      
2116
void
2117
my_exception_handler(cyg_addrword_t data, cyg_code_t exception, cyg_addrword_t info)
2118
{
2119
2120
}
2121
      
2122
      
2123
The data argument corresponds to the new_data
2124
parameter supplied to cyg_exception_set_handler.
2125
The exception code is provided as well, in case a single handler is
2126
expected to support multiple exceptions. The info
2127
argument will depend on the hardware and on the specific exception.
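Installing a handler might look like the following sketch. The
exception code used here is purely illustrative: the codes that
actually exist are defined by the architectural HAL.

#include <cyg/kernel/kapi.h>
#include <cyg/hal/hal_intr.h>     /* exception codes, per the HAL documentation */

static cyg_exception_handler_t* old_handler;
static cyg_addrword_t           old_data;

static void
my_exception_handler(cyg_addrword_t data, cyg_code_t exception, cyg_addrword_t info)
{
    /* ... attempt recovery or record diagnostics ... */
}

void
install_exception_handler(void)
{
    /* CYGNUM_HAL_EXCEPTION_DATA_ACCESS is illustrative; consult the HAL */
    cyg_exception_set_handler(CYGNUM_HAL_EXCEPTION_DATA_ACCESS,
                              &my_exception_handler, 0,
                              &old_handler, &old_data);
}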
2128
      
2129
      
2130
cyg_exception_clear_handler can be used to
2131
restore the default handler, if desired. It is also possible for
2132
software to raise an exception and cause the current handler to be
2133
invoked, but generally this is useful only for testing.
2134
      
2135
      
2136
By default the system maintains a single set of global exception
2137
handlers. However, since exceptions occur synchronously it is
2138
sometimes useful to handle them on a per-thread basis, and have a
2139
different set of handlers for each thread. This behaviour can be
2140
obtained by disabling the configuration option
2141
CYGSEM_KERNEL_EXCEPTIONS_GLOBAL. If per-thread
2142
exception handlers are being used then
2143
cyg_exception_set_handler and
2144
cyg_exception_clear_handler apply to the current
2145
thread. Otherwise they apply to the global set of handlers.
2146
      
2147
 
2148
      
2149
In the current implementation
2150
cyg_exception_call_handler can only be used on
2151
the current thread. There is no support for delivering an exception to
2152
another thread.
2153
      
2154
      
2155
Exceptions at the eCos kernel level refer specifically to
2156
hardware-related events such as unaligned accesses to memory or
2157
division by zero. There is no relation with other concepts that are
2158
also known as exceptions, for example the throw and
2159
catch facilities associated with C++.
2160
      
2161
 
2162
    
2163
 
2164
    Valid contexts
2165
      
2166
If the system is configured with a single set of global exception
2167
handlers then
2168
cyg_exception_set_handler and
2169
cyg_exception_clear_handler may be called during
2170
initialization or from thread context. If instead per-thread exception
2171
handlers are being used then it is not possible to install new
2172
handlers during initialization because the functions operate
2173
implicitly on the current thread, so they can only be called from
2174
thread context. cyg_exception_call_handler should
2175
only be called from thread context.
2176
      
2177
    
2178
 
2179
  
2180
 
2181
2182
2183
 
2184
  
2185
 
2186
    
2187
    Counters
2188
    
2189
 
2190
    
2191
      cyg_counter_create
2192
      cyg_counter_delete
2193
      cyg_counter_current_value
2194
      cyg_counter_set_value
2195
      cyg_counter_tick
2196
      Count event occurrences
2197
    
2198
 
2199
    
2200
      
2201
        
2202
#include <cyg/kernel/kapi.h>
2203
        
2204
        
2205
          void cyg_counter_create
2206
          cyg_handle_t* handle
2207
          cyg_counter* counter
2208
        
2209
        
2210
          void cyg_counter_delete
2211
          cyg_handle_t counter
2212
        
2213
        
2214
          cyg_tick_count_t cyg_counter_current_value
2215
          cyg_handle_t counter
2216
        
2217
        
2218
          void cyg_counter_set_value
2219
          cyg_handle_t counter
2220
          cyg_tick_count_t new_value
2221
        
2222
        
2223
          void cyg_counter_tick
2224
          cyg_handle_t counter
2225
        
2226
      
2227
    
2228
 
2229
    Description
2230
      
2231
Kernel counters can be used to keep track of how many times a
2232
particular event has occurred. Usually this event is an external
2233
signal of some sort. The most common use of counters is in the
2234
implementation of clocks, but they can be useful with other event
2235
sources as well. Application code can attach 
2236
linkend="kernel-alarms">alarms to counters, causing a function
2237
to be called when some number of events have occurred.
2238
      
2239
      
2240
A new counter is initialized by a call to
2241
cyg_counter_create. The first argument is used to
2242
return a handle to the new counter which can be used for subsequent
2243
operations. The second argument allows the application to provide the
2244
memory needed for the object, thus eliminating any need for dynamic
2245
memory allocation within the kernel. If a counter is no longer
2246
required and does not have any alarms attached then
2247
cyg_counter_delete can be used to release the
2248
resources, allowing the cyg_counter data
2249
structure to be re-used.
2250
      
2251
      
2252
Initializing a counter does not automatically attach it to any source
2253
of events. Instead some other code needs to call
2254
cyg_counter_tick whenever a suitable event
2255
occurs, which will cause the counter to be incremented and may cause
2256
alarms to trigger. The current value associated with the counter can
2257
be retrieved using cyg_counter_current_value and
2258
modified with cyg_counter_set_value. Typically
2259
the latter function is only used during initialization, for example to
2260
set a clock to wallclock time, but it can be used to reset a counter
2261
if necessary. However cyg_counter_set_value will
2262
never trigger any alarms. A newly initialized counter has a starting
2263
value of 0.
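A minimal sketch, with hypothetical names, of a counter that is ticked
whenever some external event is seen, typically from that event's DSR:

#include <cyg/kernel/kapi.h>

static cyg_counter  event_counter_obj;    /* memory for the kernel object */
static cyg_handle_t event_counter;

void
events_init(void)
{
    cyg_counter_create(&event_counter, &event_counter_obj);
}

/* called from the DSR associated with the event source */
void
event_dsr_action(void)
{
    cyg_counter_tick(event_counter);      /* may cause attached alarms to run */
}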
2264
      
2265
      
2266
The kernel provides two different implementations of counters. The
2267
default is CYGIMP_KERNEL_COUNTERS_SINGLE_LIST which
2268
stores all alarms attached to the counter on a single list. This is
2269
simple and usually efficient. However when a tick occurs the kernel
2270
code has to traverse this list, typically at DSR level, so if there
2271
are a significant number of alarms attached to a single counter this
2272
will affect the system's dispatch latency. The alternative
2273
implementation, CYGIMP_KERNEL_COUNTERS_MULTI_LIST,
2274
stores each alarm in one of an array of lists such that at most one of
2275
the lists needs to be searched per clock tick. This involves extra
2276
code and data, but can improve real-time responsiveness in some
2277
circumstances. Another configuration option that is relevant here
2278
is CYGIMP_KERNEL_COUNTERS_SORT_LIST, which is
2279
disabled by default. This provides a trade-off between doing work
2280
whenever a new alarm is added to a counter and doing work whenever a
2281
tick occurs. It is application-dependent which of these is more
2282
appropriate.
2283
      
2284
    
2285
 
2286
    Valid contexts
2287
      
2288
cyg_counter_create is typically called during
2289
system initialization but may also be called in thread context.
2290
Similarly cyg_counter_delete may be called during
2291
initialization or in thread context.
2292
cyg_counter_current_value,
2293
cyg_counter_set_value and
2294
cyg_counter_tick may be called during
2295
initialization or from thread or DSR context. In fact,
2296
cyg_counter_tick is usually called from inside a
2297
DSR in response to an external event of some sort.
2298
      
2299
    
2300
 
2301
  
2302
 
2303
2304
2305
 
2306
  
2307
 
2308
    
2309
    Clocks
2310
    
2311
 
2312
    
2313
      cyg_clock_create
2314
      cyg_clock_delete
2315
      cyg_clock_to_counter
2316
      cyg_clock_set_resolution
2317
      cyg_clock_get_resolution
2318
      cyg_real_time_clock
2319
      cyg_current_time
2320
      Provide system clocks
2321
    
2322
 
2323
    
2324
      
2325
        
2326
#include <cyg/kernel/kapi.h>
2327
        
2328
        
2329
          void cyg_clock_create
2330
          cyg_resolution_t resolution
2331
          cyg_handle_t* handle
2332
          cyg_clock* clock
2333
        
2334
        
2335
          void cyg_clock_delete
2336
          cyg_handle_t clock
2337
        
2338
        
2339
          void cyg_clock_to_counter
2340
          cyg_handle_t clock
2341
          cyg_handle_t* counter
2342
        
2343
        
2344
          void cyg_clock_set_resolution
2345
          cyg_handle_t clock
2346
          cyg_resolution_t resolution
2347
        
2348
        
2349
          cyg_resolution_t cyg_clock_get_resolution
2350
          cyg_handle_t clock
2351
        
2352
        
2353
          cyg_handle_t cyg_real_time_clock
2354
          
2355
        
2356
        
2357
          cyg_tick_count_t cyg_current_time
2358
          
2359
        
2360
      
2361
    
2362
 
2363
    Description
2364
      
2365
In the eCos kernel clock objects are a special form of 
2366
linkend="kernel-counters">counter objects. They are attached to
2367
a specific type of hardware, clocks that generate ticks at very
2368
specific time intervals, whereas counters can be used with any event
2369
source.
2370
      
2371
      
2372
In a default configuration the kernel provides a single clock
2373
instance, the real-time clock. This gets used for timeslicing and for
2374
operations that involve a timeout, for example
2375
cyg_semaphore_timed_wait. If this functionality
2376
is not required it can be removed from the system using the
2377
configuration option CYGVAR_KERNEL_COUNTERS_CLOCK.
2378
Otherwise the real-time clock can be accessed by a call to
2379
cyg_real_time_clock, allowing applications to
2380
attach alarms, and the current counter value can be obtained using
2381
cyg_current_time.
2382
      
2383
      
2384
Applications can create and destroy additional clocks if desired,
2385
using cyg_clock_create and
2386
cyg_clock_delete. The first argument to
2387
cyg_clock_create specifies the
2388
resolution this clock
2389
will run at. The second argument is used to return a handle for this
2390
clock object, and the third argument provides the kernel with the
2391
memory needed to hold this object. This clock will not actually tick
2392
by itself. Instead it is the responsibility of application code to
2393
initialize a suitable hardware timer to generate interrupts at the
2394
appropriate frequency, install an interrupt handler for this, and
2395
call cyg_counter_tick from inside the DSR.
2396
Associated with each clock is a kernel counter, a handle for which can
2397
be obtained using cyg_clock_to_counter.
2398
      
2399
    
2400
 
2401
    Clock Resolutions and Ticks
2402
      
2403
At the kernel level all clock-related operations including delays,
2404
timeouts and alarms work in units of clock ticks, rather than in units
2405
of seconds or milliseconds. If the calling code, whether the
2406
application or some other package, needs to operate using units such
2407
as milliseconds then it has to convert from these units to clock
2408
ticks.
2409
      
2410
      
2411
The main reason for this is that it accurately reflects the
2412
hardware: calling something like nanosleep with a
2413
delay of ten nanoseconds will not work as intended on any real
2414
hardware because timer interrupts simply will not happen that
2415
frequently; instead calling cyg_thread_delay with
2416
the equivalent delay of 0 ticks gives a much clearer indication that
2417
the application is attempting something inappropriate for the target
2418
hardware. Similarly, passing a delay of five ticks to
2419
cyg_thread_delay makes it fairly obvious that
2420
the current thread will be suspended for somewhere between four and
2421
five clock periods, as opposed to passing 50000000 to
2422
nanosleep which suggests a granularity that is
2423
not actually provided.
2424
      
2425
      
2426
A secondary reason is that conversion between clock ticks and units
2427
such as milliseconds can be somewhat expensive, and whenever possible
2428
should be done at compile-time or by the application developer rather
2429
than at run-time. This saves code size and cpu cycles.
2430
      
2431
      
2432
The information needed to perform these conversions is the clock
2433
resolution. This is a structure with two fields, a dividend and a
2434
divisor, and specifies the number of nanoseconds between clock ticks.
2435
For example a clock that runs at 100Hz will have 10 milliseconds
2436
between clock ticks, or 10000000 nanoseconds. The ratio between the
2437
resolution's dividend and divisor will therefore be 10000000 to 1, and
2438
typical values for these might be 1000000000 and 100. If the clock
2439
runs at a different frequency, say 60Hz, the numbers could be
2440
1000000000 and 60 respectively. Given a delay in nanoseconds, this can
2441
be converted to clock ticks by multiplying by the divisor and
2442
then dividing by the dividend. For example a delay of 50 milliseconds
2443
corresponds to 50000000 nanoseconds, and with a clock frequency of
2444
100Hz this can be converted to
2445
((50000000 * 100) / 1000000000) = 5
2446
clock ticks. Given the large numbers involved this arithmetic normally
2447
has to be done using 64-bit precision and the
2448
long long data type, but allows code to run on
2449
hardware with unusual clock frequencies.
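A run-time conversion based on the real-time clock's resolution could
look like the following sketch; the helper name is illustrative:

#include <cyg/kernel/kapi.h>

/* ticks = nanoseconds * divisor / dividend, done in 64-bit arithmetic */
cyg_tick_count_t
ms_to_ticks(unsigned long ms)
{
    cyg_resolution_t   res = cyg_clock_get_resolution(cyg_real_time_clock());
    unsigned long long ns  = (unsigned long long) ms * 1000000ULL;

    return (cyg_tick_count_t) ((ns * res.divisor) / res.dividend);
}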
2450
      
2451
      
2452
The default frequency for the real-time clock on any platform is
2453
usually about 100Hz, but platform-specific documentation should be
2454
consulted for this information. Usually it is possible to override
2455
this default by configuration options, but again this depends on the
2456
capabilities of the underlying hardware. The resolution for any clock
2457
can be obtained using cyg_clock_get_resolution.
2458
For clocks created by application code, there is also a function
2459
cyg_clock_set_resolution. This does not affect
2460
the underlying hardware timer in any way; it merely updates the
2461
information that will be returned in subsequent calls to
2462
cyg_clock_get_resolution: changing the actual
2463
underlying clock frequency will require appropriate manipulation of
2464
the timer hardware.
2465
      
2466
    
2467
 
2468
    Valid contexts
2469
      
2470
cyg_clock_create is usually only called during
2471
system initialization (if at all), but may also be called from thread
2472
context. The same applies to cyg_clock_delete.
2473
The remaining functions may be called during initialization, from
2474
thread context, or from DSR context, although it should be noted that
2475
there is no locking between
2476
cyg_clock_get_resolution and
2477
cyg_clock_set_resolution so theoretically it is
2478
possible that the former returns an inconsistent data structure.
2479
      
2480
    
2481
 
2482
  
2483
 
2484
2485
2486
 
2487
  
2488
 
2489
    
2490
    Alarms
2491
    
2492
 
2493
    
2494
      cyg_alarm_create
2495
      cyg_alarm_delete
2496
      cyg_alarm_initialize
2497
      cyg_alarm_enable
2498
      cyg_alarm_disable
2499
      Run an alarm function when a number of events have occurred
2500
    
2501
 
2502
    
2503
      
2504
        
2505
#include <cyg/kernel/kapi.h>
2506
        
2507
        
2508
          void cyg_alarm_create
2509
          cyg_handle_t counter
2510
          cyg_alarm_t* alarmfn
2511
          cyg_addrword_t data
2512
          cyg_handle_t* handle
2513
          cyg_alarm* alarm
2514
        
2515
        
2516
          void cyg_alarm_delete
2517
          cyg_handle_t alarm
2518
        
2519
        
2520
          void cyg_alarm_initialize
2521
          cyg_handle_t alarm
2522
          cyg_tick_count_t trigger
2523
          cyg_tick_count_t interval
2524
        
2525
        
2526
          void cyg_alarm_enable
2527
          cyg_handle_t alarm
2528
        
2529
        
2530
          void cyg_alarm_disable
2531
          cyg_handle_t alarm
2532
        
2533
      
2534
    
2535
 
2536
    Description
2537
      
2538
Kernel alarms are used together with counters and allow for action to
2539
be taken when a certain number of events have occurred. If the counter
2540
is associated with a clock then the alarm action happens when the
2541
appropriate number of clock ticks have occurred, in other words after
2542
a certain period of time.
2543
      
2544
      
2545
Setting up an alarm involves a two-step process. First the alarm must
2546
be created with a call to cyg_alarm_create. This
2547
takes five arguments. The first identifies the counter to which the
2548
alarm should be attached. If the alarm should be attached to the
2549
system's real-time clock then cyg_real_time_clock
2550
and cyg_clock_to_counter can be used to get hold
2551
of the appropriate handle. The next two arguments specify the action
2552
to be taken when the alarm is triggered, in the form of a function
2553
pointer and some data. This function should take the form:
2554
      
2555
      
2556
void
2557
alarm_handler(cyg_handle_t alarm, cyg_addrword_t data)
2558
{
2559
2560
}
2561
      
2562
      
2563
The data argument passed to the alarm function corresponds to the
2564
third argument passed to cyg_alarm_create.
2565
The fourth argument to cyg_alarm_create is used
2566
to return a handle to the newly-created alarm object, and the final
2567
argument provides the memory needed for the alarm object and thus
2568
avoids any need for dynamic memory allocation within the kernel.
2569
      
2570
      
2571
Once an alarm has been created a further call to
2572
cyg_alarm_initialize is needed to activate it.
2573
The first argument specifies the alarm. The second argument indicates
2574
the absolute value of the attached counter which will result in the
2575
alarm being triggered. If the third argument is 0 then the alarm
2576
will only trigger once. A non-zero value specifies that the alarm
2577
should trigger repeatedly, with an interval of the specified number of
2578
events.
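Putting these calls together, a periodic alarm attached to the
real-time clock might be set up as in the following sketch (the
100-tick period is arbitrary):

#include <cyg/kernel/kapi.h>

static cyg_alarm    wakeup_alarm_obj;     /* memory for the kernel object */
static cyg_handle_t wakeup_alarm;

static void
wakeup(cyg_handle_t alarm, cyg_addrword_t data)
{
    /* runs in DSR context after every clock interrupt that triggers it */
}

void
wakeup_init(void)
{
    cyg_handle_t counter;

    /* attach the alarm to the counter behind the real-time clock */
    cyg_clock_to_counter(cyg_real_time_clock(), &counter);
    cyg_alarm_create(counter, &wakeup, 0, &wakeup_alarm, &wakeup_alarm_obj);

    /* first trigger 100 ticks from now, then repeat every 100 ticks */
    cyg_alarm_initialize(wakeup_alarm, cyg_current_time() + 100, 100);
}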
2579
      
2580
      
2581
Alarms can be temporarily disabled and reenabled using
2582
cyg_alarm_disable and
2583
cyg_alarm_enable. Alternatively another call to
2584
cyg_alarm_initialize can be used to modify the
2585
behaviour of an existing alarm. If an alarm is no longer required then
2586
the associated resources can be released using
2587
cyg_alarm_delete.
2588
      
2589
      
2590
The alarm function is invoked when a counter tick occurs, in other
2591
words when there is a call to cyg_counter_tick,
2592
and will happen in the same context. If the alarm is associated with
2593
the system's real-time clock then this will be DSR context, following
2594
a clock interrupt. If the alarm is associated with some other
2595
application-specific counter then the details will depend on how that
2596
counter is updated.
2597
      
2598
      
2599
If two or more alarms are registered for precisely the same counter tick,
2600
the order of execution of the alarm functions is unspecified.
2601
      
2602
    
2603
 
2604
    Valid contexts
2605
      
2606
cyg_alarm_create
2607
is typically called during
2608
system initialization but may also be called in thread context. The
2609
same applies to cyg_alarm_delete.
2610
cyg_alarm_initialize,
2611
cyg_alarm_disable and
2612
cyg_alarm_enable may be called during
2613
initialization or from thread or DSR context, but
2614
cyg_alarm_enable and
2615
cyg_alarm_initialize may be expensive operations
2616
and should only be called when necessary.
2617
      
2618
    
2619
 
2620
  
2621
 
2622
2623
2624
 
2625
  
2626
 
2627
    
2628
    Mutexes
2629
    
2630
 
2631
    
2632
      cyg_mutex_init
2633
      cyg_mutex_destroy
2634
      cyg_mutex_lock
2635
      cyg_mutex_trylock
2636
      cyg_mutex_unlock
2637
      cyg_mutex_release
2638
      cyg_mutex_set_ceiling
2639
      cyg_mutex_set_protocol
2640
      Synchronization primitive
2641
    
2642
 
2643
    
2644
      
2645
        
2646
#include <cyg/kernel/kapi.h>
2647
        
2648
        
2649
          void cyg_mutex_init
2650
          cyg_mutex_t* mutex
2651
        
2652
        
2653
          void cyg_mutex_destroy
2654
          cyg_mutex_t* mutex
2655
        
2656
        
2657
          cyg_bool_t cyg_mutex_lock
2658
          cyg_mutex_t* mutex
2659
        
2660
        
2661
          cyg_bool_t cyg_mutex_trylock
2662
          cyg_mutex_t* mutex
2663
        
2664
        
2665
          void cyg_mutex_unlock
2666
          cyg_mutex_t* mutex
2667
        
2668
        
2669
          void cyg_mutex_release
2670
          cyg_mutex_t* mutex
2671
        
2672
        
2673
          void cyg_mutex_set_ceiling
2674
          cyg_mutex_t* mutex
2675
          cyg_priority_t priority
2676
        
2677
        
2678
          void cyg_mutex_set_protocol
2679
          cyg_mutex_t* mutex
2680
enum cyg_mutex_protocol protocol
2681
        
2682
      
2683
    
2684
 
2685
    Description
2686
      
2687
The purpose of mutexes is to let threads share resources safely. If
2688
two or more threads attempt to manipulate a data structure with no
2689
locking between them then the system may run for quite some time
2690
without apparent problems, but sooner or later the data structure will
2691
become inconsistent and the application will start behaving strangely
2692
and is quite likely to crash. The same can apply even when
2693
manipulating a single variable or some other resource. For example,
2694
consider:
2695
      
2696
2697
static volatile int counter = 0;
2698
 
2699
void
2700
process_event(void)
2701
{
2702
2703
 
2704
    counter++;
2705
}
2706
2707
      
2708
Assume that after a certain period of time counter
2709
has a value of 42, and two threads A and B running at the same
2710
priority call process_event. Typically thread A
2711
will read the value of counter into a register,
2712
increment this register to 43, and write this updated value back to
2713
memory. Thread B will do the same, so usually
2714
counter will end up with a value of 44. However if
2715
thread A is timesliced after reading the old value 42 but before
2716
writing back 43, thread B will still read back the old value and will
2717
also write back 43. The net result is that the counter only gets
2718
incremented once, not twice, which depending on the application may
2719
prove disastrous.
2720
      
2721
      
2722
Sections of code like the above which involve manipulating shared data
2723
are generally known as critical regions. Code should claim a lock
2724
before entering a critical region and release the lock when leaving.
2725
Mutexes provide an appropriate synchronization primitive for this.
2726
      
2727
      
2728
static volatile int counter = 0;
2729
static cyg_mutex_t  lock;
2730
 
2731
void
2732
process_event(void)
2733
{
2734
2735
 
2736
    cyg_mutex_lock(&lock);
2737
    counter++;
2738
    cyg_mutex_unlock(&lock);
2739
}
2740
      
2741
      
2742
A mutex must be initialized before it can be used, by calling
2743
cyg_mutex_init. This takes a pointer to a
2744
cyg_mutex_t data structure which is typically
2745
statically allocated, and may be part of a larger data structure. If a
2746
mutex is no longer required and there are no threads waiting on it
2747
then cyg_mutex_destroy can be used.
2748
      
2749
      
2750
The main functions for using a mutex are
2751
cyg_mutex_lock and
2752
cyg_mutex_unlock. In normal operation
2753
cyg_mutex_lock will return success after claiming
2754
the mutex lock, blocking if another thread currently owns the mutex.
2755
However the lock operation may fail if other code calls
2756
cyg_mutex_release or
2757
cyg_thread_release, so if these functions may get
2758
used then it is important to check the return value. The current owner
2759
of a mutex should call cyg_mutex_unlock when a
2760
lock is no longer required. This operation must be performed by the
2761
owner, not by another thread.
2762
      
2763
      
2764
cyg_mutex_trylock is a variant of
2765
cyg_mutex_lock that will always return
2766
immediately, returning success or failure as appropriate. This
2767
function is rarely useful. Typical code locks a mutex just before
2768
entering a critical region, so if the lock cannot be claimed then
2769
there may be nothing else for the current thread to do. Use of this
2770
function may also cause a form of priority inversion if the owner
2771
runs at a lower priority, because the priority inheritance code
2772
will not be triggered. Instead the current thread continues running,
2773
preventing the owner from getting any cpu time, completing the
2774
critical region, and releasing the mutex.
2775
      
2776
      
2777
cyg_mutex_release can be used to wake up all
2778
threads that are currently blocked inside a call to
2779
cyg_mutex_lock for a specific mutex. These lock
2780
calls will return failure. The current mutex owner is not affected.
2781
      
2782
    
2783
 
2784
    Priority Inversion
2785
      
2786
The use of mutexes gives rise to a problem known as priority
2787
inversion. In a typical scenario this requires three threads A, B, and
2788
C, running at high, medium and low priority respectively. Thread A and
2789
thread B are temporarily blocked waiting for some event, so thread C
2790
gets a chance to run, needs to enter a critical region, and locks
2791
a mutex. At this point threads A and B are woken up - the exact order
2792
does not matter. Thread A needs to claim the same mutex but has to
2793
wait until C has left the critical region and can release the mutex.
2794
Meanwhile thread B works on something completely different and can
2795
continue running without problems. Because thread C is running at a lower
2796
priority than B it will not get a chance to run until B blocks for
2797
some reason, and hence thread A cannot run either. The overall effect
2798
is that a high-priority thread A cannot proceed because of a lower
2799
priority thread B, and priority inversion has occurred.
2800
      
2801
      
2802
In simple applications it may be possible to arrange the code such
2803
that priority inversion cannot occur, for example by ensuring that a
2804
given mutex is never shared by threads running at different priority
2805
levels. However this may not always be possible even at the
2806
application level. In addition mutexes may be used internally by
2807
underlying code, for example the memory allocation package, so careful
2808
analysis of the whole system would be needed to be sure that priority
2809
inversion cannot occur. Instead it is common practice to use one of
2810
two techniques: priority ceilings and priority inheritance.
2811
      
2812
      
2813
Priority ceilings involve associating a priority with each mutex.
Usually this will match the priority of the highest-priority thread
that will ever lock the mutex. When a thread running at a lower
priority makes a successful call to cyg_mutex_lock or
cyg_mutex_trylock its priority will be boosted to
that of the mutex. In the previous example the
priority associated with the mutex would be that of thread A, so for
as long as it owns the mutex thread C will run in preference to thread
2821
B. When C releases the mutex its priority drops to the normal value
2822
again, allowing A to run and claim the mutex. Setting the
2823
priority for a mutex involves a call to
2824
cyg_mutex_set_ceiling, which is typically called
2825
during initialization. It is possible to change the ceiling
2826
dynamically but this will only affect subsequent lock operations, not
2827
the current owner of the mutex.
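
As an illustrative sketch, assuming a mutex pool_lock shared by threads
whose highest priority is 5 (in eCos smaller numbers denote higher
priorities), the ceiling would normally be set when the mutex is
initialized:

cyg_mutex_t pool_lock;

void pool_init(void)
{
    cyg_mutex_init(&pool_lock);
    cyg_mutex_set_ceiling(&pool_lock, 5);    // boost any owner to priority 5
}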
2828
      
2829
      
2830
Priority ceilings are very suitable for simple applications, where for
2831
every thread in the system it is possible to work out which mutexes
2832
will be accessed. For more complicated applications this may prove
2833
difficult, especially if thread priorities change at run-time. An
2834
additional problem occurs for any mutexes outside the application, for
2835
example used internally within eCos packages. A typical eCos package
2836
will be unaware of the details of the various threads in the system,
2837
so it will have no way of setting suitable ceilings for its internal
2838
mutexes. If those mutexes are not exported to application code then
2839
using priority ceilings may not be viable. The kernel does provide a
2840
configuration option
2841
CYGSEM_KERNEL_SYNCH_MUTEX_PRIORITY_INVERSION_PROTOCOL_DEFAULT_PRIORITY
2842
that can be used to set the default priority ceiling for all mutexes,
2843
which may prove sufficient.
2844
      
2845
      
2846
The alternative approach is to use priority inheritance: if a thread
2847
calls cyg_mutex_lock for a mutex that is
currently owned by a lower-priority thread, then the owner will have
2849
its priority raised to that of the current thread. Often this is more
2850
efficient than priority ceilings because priority boosting only
2851
happens when necessary, not for every lock operation, and the required
2852
priority is determined at run-time rather than by static analysis.
2853
However there are complications when multiple threads running at
2854
different priorities try to lock a single mutex, or when the current
2855
owner of a mutex then tries to lock additional mutexes, and this makes
2856
the implementation significantly more complicated than priority
2857
ceilings.
2858
      
2859
      
2860
There are a number of configuration options associated with priority
2861
inversion. First, if after careful analysis it is known that priority
2862
inversion cannot arise then the component
2863
CYGSEM_KERNEL_SYNCH_MUTEX_PRIORITY_INVERSION_PROTOCOL
2864
can be disabled. More commonly this component will be enabled, and one
2865
of either
2866
CYGSEM_KERNEL_SYNCH_MUTEX_PRIORITY_INVERSION_PROTOCOL_INHERIT
2867
or
2868
CYGSEM_KERNEL_SYNCH_MUTEX_PRIORITY_INVERSION_PROTOCOL_CEILING
2869
will be selected, so that one of the two protocols is available for
2870
all mutexes. It is possible to select multiple protocols, so that some
2871
mutexes can have priority ceilings while others use priority
2872
inheritance or no priority inversion protection at all. Obviously this
2873
flexibility will add to the code size and to the cost of mutex
2874
operations. The default for all mutexes will be controlled by
2875
CYGSEM_KERNEL_SYNCH_MUTEX_PRIORITY_INVERSION_PROTOCOL_DEFAULT,
2876
and can be changed at run-time using
2877
cyg_mutex_set_protocol.
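
A sketch of selecting the protocol at run-time is shown below. The
enumerators CYG_MUTEX_INHERIT, CYG_MUTEX_CEILING and CYG_MUTEX_NONE are
assumed here, and are only available when the corresponding protocols
are enabled in the configuration:

cyg_mutex_t io_lock;

void io_init(void)
{
    cyg_mutex_init(&io_lock);
    cyg_mutex_set_protocol(&io_lock, CYG_MUTEX_INHERIT);
}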
2878
      
2879
      
2880
Priority inversion problems can also occur with other synchronization
2881
primitives such as semaphores. For example there could be a situation
2882
where a high-priority thread A is waiting on a semaphore, a
2883
low-priority thread C needs to do just a little bit more work before
2884
posting the semaphore, but a medium priority thread B is running and
2885
preventing C from making progress. However a semaphore does not have
2886
the concept of an owner, so there is no way for the system to know
2887
that it is thread C which would next post to the semaphore. Hence
2888
there is no way for the system to boost the priority of C
2889
automatically and prevent the priority inversion. Instead situations
2890
like this have to be detected by application developers and
2891
appropriate precautions have to be taken, for example making sure that
2892
all the threads run at suitable priorities at all times.
2893
      
2894
      
2895
The current implementation of priority inheritance within the eCos
2896
kernel does not handle certain exceptional circumstances completely
2897
correctly. Problems will only arise if a thread owns one mutex,
2898
then attempts to claim another mutex, and there are other threads
2899
attempting to lock these same mutexes. Although the system will
2900
continue running, the current owners of the various mutexes involved
2901
may not run at the priority they should. This situation never arises
2902
in typical code because a mutex will only be locked for a small
2903
critical region, and there is no need to manipulate other shared resources
2904
inside this region. A more complicated implementation of priority
2905
inheritance is possible but would add significant overhead and certain
2906
operations would no longer be deterministic.
2907
      
2908
      
2909
Support for priority ceilings and priority inheritance is not
2910
implemented for all schedulers. In particular neither priority
2911
ceilings nor priority inheritance are currently available for the
2912
bitmap scheduler.
2913
      
2914
    
2915
 
2916
    Alternatives
2917
      
2918
In nearly all circumstances, if two or more threads need to share some
2919
data then protecting this data with a mutex is the correct thing to
2920
do. Mutexes are the only primitive that combine a locking mechanism
2921
and protection against priority inversion problems. However this
2922
functionality is achieved at a cost, and in exceptional circumstances
2923
such as an application's most critical inner loop it may be desirable
2924
to use some other means of locking.
2925
      
2926
      
2927
When a critical region is very small it is possible to lock the
2928
scheduler, thus ensuring that no other thread can run until the
2929
scheduler is unlocked again. This is achieved with calls to
cyg_scheduler_lock
2931
and cyg_scheduler_unlock. If the critical region
2932
is sufficiently small then this can actually improve both performance
2933
and dispatch latency because cyg_mutex_lock also
2934
locks the scheduler for a brief period of time. This approach will not
2935
work on SMP systems because another thread may already be running on a
2936
different processor and accessing the critical region.
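
A minimal sketch of this technique on a single-processor system, using
an arbitrary counter shared between threads, might look like:

static volatile int shared_counter;

void counter_increment(void)
{
    cyg_scheduler_lock();            // no other thread can run now
    shared_counter++;                // keep this region very short
    cyg_scheduler_unlock();          // may allow a preemption
}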
2937
      
2938
      
2939
Another way of avoiding the use of mutexes is to make sure that all
2940
threads that access a particular critical region run at the same
2941
priority and configure the system with timeslicing disabled
2942
(CYGSEM_KERNEL_SCHED_TIMESLICE). Without
2943
timeslicing a thread can only be preempted by a higher-priority one,
2944
or if it performs some operation that can block. This approach
2945
requires that none of the operations in the critical region can block,
2946
so for example it is not legal to call
2947
cyg_semaphore_wait. It is also vulnerable to
2948
any changes in the configuration or to the various thread priorities:
2949
any such changes may now have unexpected side effects. It will not
2950
work on SMP systems.
2951
      
2952
    
2953
 
2954
    Recursive Mutexes
2955
      
2956
The implementation of mutexes within the eCos kernel does not support
2957
recursive locks. If a thread has locked a mutex and then attempts to
2958
lock the mutex again, typically as a result of some recursive call in
2959
a complicated call graph, then either an assertion failure will be
2960
reported or the thread will deadlock. This behaviour is deliberate.
2961
When a thread has just locked a mutex associated with some data
2962
structure, it can assume that that data structure is in a consistent
2963
state. Before unlocking the mutex again it must ensure that the data
2964
structure is again in a consistent state. Recursive mutexes allow a
2965
thread to make arbitrary changes to a data structure, then in a
2966
recursive call lock the mutex again while the data structure is still
2967
inconsistent. The net result is that code can no longer make any
2968
assumptions about data structure consistency, which defeats the
2969
purpose of using mutexes.
2970
      
2971
    
2972
 
2973
    Valid contexts
2974
      
2975
cyg_mutex_init,
2976
cyg_mutex_set_ceiling and
2977
cyg_mutex_set_protocol are normally called during
2978
initialization but may also be called from thread context. The
2979
remaining functions should only be called from thread context. Mutexes
2980
serve as a mutual exclusion mechanism between threads, and cannot be
2981
used to synchronize between threads and the interrupt handling
2982
subsystem. If a critical region is shared between a thread and a DSR
2983
then it must be protected using
cyg_scheduler_lock
2985
and cyg_scheduler_unlock. If a critical region is
2986
shared between a thread and an ISR, it must be protected by disabling
2987
or masking interrupts. Obviously these operations must be used with
2988
care because they can affect dispatch and interrupt latencies.
2989
      
2990
    
2991
 
2992
  
2993
 
2994
2995
2996
 
2997
  
2998
 
2999
    
3000
    Condition Variables
3001
    
3002
 
3003
    
3004
      cyg_cond_init
3005
      cyg_cond_destroy
3006
      cyg_cond_wait
3007
      cyg_cond_timed_wait
3008
      cyg_cond_signal
3009
      cyg_cond_broadcast
3010
      Synchronization primitive
3011
    
3012
 
3013
    
3014
      
3015
        
3016
#include <cyg/kernel/kapi.h>
3017
        
3018
        
3019
          void cyg_cond_init
3020
          cyg_cond_t* cond
3021
          cyg_mutex_t* mutex
3022
        
3023
        
3024
          void cyg_cond_destroy
3025
          cyg_cond_t* cond
3026
        
3027
        
3028
          cyg_bool_t cyg_cond_wait
3029
          cyg_cond_t* cond
3030
        
3031
        
3032
          cyg_bool_t cyg_cond_timed_wait
3033
          cyg_cond_t* cond
3034
          cyg_tick_count_t abstime
3035
        
3036
        
3037
          void cyg_cond_signal
3038
          cyg_cond_t* cond
3039
        
3040
        
3041
          void cyg_cond_broadcast
3042
          cyg_cond_t* cond
3043
        
3044
      
3045
    
3046
 
3047
    Description
3048
 
3049
      
3050
Condition variables are used in conjunction with mutexes to implement
3051
long-term waits for some condition to become true. For example
3052
consider a set of functions that control access to a pool of
3053
resources:
3054
      
3055
 
3056
      
3057
 
3058
cyg_mutex_t res_lock;
3059
res_t res_pool[RES_MAX];
3060
int res_count = RES_MAX;
3061
 
3062
void res_init(void)
3063
{
3064
    cyg_mutex_init(&res_lock);
3065
    <fill pool with resources>
3066
}
3067
 
3068
res_t res_allocate(void)
3069
{
3070
    res_t res;
3071
 
3072
    cyg_mutex_lock(&res_lock);               // lock the mutex
3073
 
3074
    if( res_count == 0 )                     // check for free resource
3075
        res = RES_NONE;                      // return RES_NONE if none
3076
    else
3077
    {
3078
        res_count--;                         // allocate a resource
3079
        res = res_pool[res_count];
3080
    }
3081
 
3082
    cyg_mutex_unlock(&res_lock);             // unlock the mutex
3083
 
3084
    return res;
3085
}
3086
 
3087
void res_free(res_t res)
3088
{
3089
    cyg_mutex_lock(&res_lock);               // lock the mutex
3090
 
3091
    res_pool[res_count] = res;               // free the resource
3092
    res_count++;
3093
 
3094
    cyg_mutex_unlock(&res_lock);             // unlock the mutex
3095
}
3096
      
3097
 
3098
      
3099
These routines use the variable res_count to keep
3100
track of the resources available. If there are none then
3101
res_allocate returns RES_NONE,
3102
which the caller must check for and take appropriate error handling
3103
actions.
3104
      
3105
 
3106
      
3107
Now suppose that we do not want to return
3108
RES_NONE when there are no resources, but want to
3109
wait for one to become available. This is where a condition variable
3110
can be used:
3111
      
3112
 
3113
      
3114
 
3115
cyg_mutex_t res_lock;
3116
cyg_cond_t res_wait;
3117
res_t res_pool[RES_MAX];
3118
int res_count = RES_MAX;
3119
 
3120
void res_init(void)
3121
{
3122
    cyg_mutex_init(&res_lock);
3123
    cyg_cond_init(&res_wait, &res_lock);
3124
    <fill pool with resources>
3125
}
3126
 
3127
res_t res_allocate(void)
3128
{
3129
    res_t res;
3130
 
3131
    cyg_mutex_lock(&res_lock);               // lock the mutex
3132
 
3133
    while( res_count == 0 )                  // wait for a resource
3134
        cyg_cond_wait(&res_wait);
3135
 
3136
    res_count--;                             // allocate a resource
3137
    res = res_pool[res_count];
3138
 
3139
    cyg_mutex_unlock(&res_lock);             // unlock the mutex
3140
 
3141
    return res;
3142
}
3143
 
3144
void res_free(res_t res)
3145
{
3146
    cyg_mutex_lock(&res_lock);               // lock the mutex
3147
 
3148
    res_pool[res_count] = res;               // free the resource
3149
    res_count++;
3150
 
3151
    cyg_cond_signal(&res_wait);              // wake up any waiting allocators
3152
 
3153
    cyg_mutex_unlock(&res_lock);             // unlock the mutex
3154
}
3155
      
3156
 
3157
      
3158
In this version of the code, when res_allocate
3159
detects that there are no resources it calls
3160
cyg_cond_wait. This does two things: it unlocks
3161
the mutex, and puts the calling thread to sleep on the condition
3162
variable. When res_free is eventually called, it
3163
puts a resource back into the pool and calls
3164
cyg_cond_signal to wake up any thread waiting on
3165
the condition variable. When the waiting thread eventually gets to run again,
3166
it will re-lock the mutex before returning from
3167
cyg_cond_wait.
3168
      
3169
 
3170
      
3171
There are two important things to note about the way in which this
3172
code works. The first is that the mutex unlock and wait in
3173
cyg_cond_wait are atomic: no other thread can run
3174
between the unlock and the wait. If this were not the case then a call
3175
to res_free by that thread would release the
3176
resource but the call to cyg_cond_signal would be
3177
lost, and the first thread would end up waiting when there were
3178
resources available.
3179
      
3180
 
3181
      
3182
The second feature is that the call to
3183
cyg_cond_wait is in a while
3184
loop and not a simple if statement. This is because
3185
of the need to re-lock the mutex in cyg_cond_wait
3186
when the signalled thread reawakens. If there are other threads
3187
already queued to claim the lock then this thread must wait. Depending
3188
on the scheduler and the queue order, many other threads may have
3189
entered the critical section before this one gets to run. So the
3190
condition that it was waiting for may have been rendered false. Using
3191
a loop around all condition variable wait operations is the only way
3192
to guarantee that the condition being waited for is still true after
3193
waiting.
3194
      
3195
 
3196
      
3197
Before a condition variable can be used it must be initialized with a
3198
call to cyg_cond_init. This requires two
3199
arguments, memory for the data structure and a pointer to an existing
3200
mutex. This mutex will not be initialized by
cyg_cond_init; instead a separate call to
3202
cyg_mutex_init is required. If a condition
3203
variable is no longer required and there are no threads waiting on it
3204
then cyg_cond_destroy can be used.
3205
      
3206
      
3207
When a thread needs to wait for a condition to be satisfied it can
3208
call cyg_cond_wait. The thread must have already
3209
locked the mutex that was specified in the
3210
cyg_cond_init call. This mutex will be unlocked
3211
and the current thread will be suspended in an atomic operation. When
3212
some other thread performs a signal or broadcast operation the current
3213
thread will be woken up and automatically reclaim ownership of the mutex
3214
again, allowing it to examine global state and determine whether or
3215
not the condition is now satisfied.
3216
      
3217
      
3218
The kernel supplies a variant of this function,
3219
cyg_cond_timed_wait, which can be used to wait on
3220
the condition variable or until some number of clock ticks have
3221
occurred. The number of ticks is specified as an absolute, not
3222
relative tick count, and so in order to wait for a relative number of
3223
ticks, the return value of the cyg_current_time()
3224
function should be added to determine the absolute number of ticks.
3225
The mutex will always be reclaimed before
3226
cyg_cond_timed_wait returns, regardless of
3227
whether it was a result of a signal operation or a timeout.
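
For example, the resource allocation code above could give up after
roughly one second, assuming for illustration a system clock running at
100 ticks per second:

res_t res_allocate_timed(void)
{
    res_t res = RES_NONE;
    cyg_tick_count_t abstime = cyg_current_time() + 100;  // 100 ticks from now

    cyg_mutex_lock(&res_lock);

    while( res_count == 0 )
        if( !cyg_cond_timed_wait(&res_wait, abstime) )
            break;                           // timed out

    if( res_count > 0 )
    {
        res_count--;
        res = res_pool[res_count];
    }

    cyg_mutex_unlock(&res_lock);

    return res;
}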
3228
      
3229
      
3230
There is no cyg_cond_trywait function because
3231
this would not serve any purpose. If a thread has locked the mutex and
3232
determined that the condition is satisfied, it can just release the
3233
mutex and return. There is no need to perform any operation on the
3234
condition variable.
3235
      
3236
      
3237
When a thread changes shared state that may affect some other thread
3238
blocked on a condition variable, it should call either
3239
cyg_cond_signal or
3240
cyg_cond_broadcast. These calls do not require
3241
ownership of the mutex, but usually the mutex will have been claimed
3242
before updating the shared state. A signal operation only wakes up the
3243
first thread that is waiting on the condition variable, while a
3244
broadcast wakes up all the threads. If there are no threads waiting on
3245
the condition variable at the time, then the signal or broadcast will
3246
have no effect: past signals are not counted up or remembered in any
3247
way. Typically a signal should be used when all threads will check the
3248
same condition and at most one thread can continue running. A
3249
broadcast should be used if threads check slightly different
3250
conditions, or if the change to the global state might allow multiple
3251
threads to proceed.
3252
      
3253
    
3254
 
3255
    Valid contexts
3256
      
3257
cyg_cond_init is typically called during system
3258
initialization but may also be called in thread context. The same
3259
applies to cyg_cond_destroy.
cyg_cond_wait and
cyg_cond_timed_wait may only be called from thread
3262
context since they may block. cyg_cond_signal and
3263
cyg_cond_broadcast may be called from thread or
3264
DSR context.
3265
      
3266
    
3267
 
3268
  
3269
 
3270
3271
3272
 
3273
  
3274
 
3275
    
3276
    Semaphores
3277
    
3278
 
3279
    
3280
      cyg_semaphore_init
3281
      cyg_semaphore_destroy
3282
      cyg_semaphore_wait
3283
      cyg_semaphore_timed_wait
3284
      cyg_semaphore_post
3285
      cyg_semaphore_peek
3286
      Synchronization primitive
3287
    
3288
 
3289
    
3290
      
3291
        
3292
#include <cyg/kernel/kapi.h>
3293
        
3294
        
3295
          void cyg_semaphore_init
3296
          cyg_sem_t* sem
3297
          cyg_count32 val
3298
        
3299
        
3300
          void cyg_semaphore_destroy
3301
          cyg_sem_t* sem
3302
        
3303
        
3304
          cyg_bool_t cyg_semaphore_wait
3305
          cyg_sem_t* sem
3306
        
3307
        
3308
          cyg_bool_t cyg_semaphore_timed_wait
3309
          cyg_sem_t* sem
3310
          cyg_tick_count_t abstime
3311
        
3312
        
3313
          cyg_bool_t cyg_semaphore_trywait
3314
          cyg_sem_t* sem
3315
        
3316
        
3317
          void cyg_semaphore_post
3318
          cyg_sem_t* sem
3319
        
3320
        
3321
          void cyg_semaphore_peek
3322
          cyg_sem_t* sem
3323
          cyg_count32* val
3324
        
3325
      
3326
    
3327
 
3328
    Description
3329
      
3330
Counting semaphores are a synchronization
3332
primitive that allow threads to wait until an event has
3333
occurred. The event may be generated by a producer thread, or by a DSR
3334
in response to a hardware interrupt. Associated with each semaphore is
3335
an integer counter that keeps track of the number of events that have
3336
not yet been processed. If this counter is zero, an attempt by a
3337
consumer thread to wait on the semaphore will block until some other
3338
thread or a DSR posts a new event to the semaphore. If the counter is
3339
greater than zero then an attempt to wait on the semaphore will
3340
consume one event, in other words decrement the counter, and return
3341
immediately. Posting to a semaphore will wake up the first thread that
3342
is currently waiting, which will then resume inside the semaphore wait
3343
operation and decrement the counter again.
3344
      
3345
      
3346
Another use of semaphores is for certain forms of resource management.
3347
The counter would correspond to how many of a certain type of resource
3348
are currently available, with threads waiting on the semaphore to
3349
claim a resource and posting to release the resource again. In
3350
practice condition
3351
variables are usually much better suited for operations like
3352
this.
3353
      
3354
      
3355
cyg_semaphore_init is used to initialize a
3356
semaphore. It takes two arguments, a pointer to a
3357
cyg_sem_t structure and an initial value for
3358
the counter. Note that semaphore operations, unlike some other parts
3359
of the kernel API, use pointers to data structures rather than
3360
handles. This makes it easier to embed semaphores in a larger data
3361
structure. The initial counter value can be any number, zero, positive
3362
or negative, but typically a value of zero is used to indicate that no
3363
events have occurred yet.
3364
      
3365
      
3366
cyg_semaphore_wait is used by a consumer thread
3367
to wait for an event. If the current counter is greater than 0, in
3368
other words if the event has already occurred in the past, then the
3369
counter will be decremented and the call will return immediately.
3370
Otherwise the current thread will be blocked until there is a
3371
cyg_semaphore_post call.
3372
      
3373
      
3374
cyg_semaphore_post is called when an event has
occurred. This increments the counter and wakes up the first thread
3376
waiting on the semaphore (if any). Usually that thread will then
3377
continue running inside cyg_semaphore_wait and
3378
decrement the counter again. However other scenarios are possible.
3379
For example the thread calling cyg_semaphore_post
3380
may be running at high priority, some other thread running at medium
3381
priority may be about to call cyg_semaphore_wait
3382
when it next gets a chance to run, and a low priority thread may be
3383
waiting on the semaphore. What will happen is that the current high
3384
priority thread continues running until it is descheduled for some
3385
reason, then the medium priority thread runs and its call to
3386
cyg_semaphore_wait succeeds immediately, and
3387
later on the low priority thread runs again, discovers a counter value
3388
of 0, and blocks until another event is posted. If there are multiple
3389
threads blocked on a semaphore then the configuration option
3390
CYGIMP_KERNEL_SCHED_SORTED_QUEUES determines which
3391
one will be woken up by a post operation.
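
A minimal sketch of the usual pattern, with events generated from a DSR
and consumed by a worker thread (the DSR prototype shown is the standard
eCos one, and the names are illustrative):

cyg_sem_t data_ready;

void worker_init(void)
{
    cyg_semaphore_init(&data_ready, 0);      // no events pending yet
}

void my_dsr(cyg_vector_t vector, cyg_ucount32 count, cyg_addrword_t data)
{
    cyg_semaphore_post(&data_ready);         // announce one new event
}

void worker_thread(cyg_addrword_t data)
{
    for(;;)
    {
        cyg_semaphore_wait(&data_ready);     // consume one event
        <process one unit of data>
    }
}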
3392
      
3393
      
3394
cyg_semaphore_wait returns a boolean. Normally it
3395
will block until it has successfully decremented the counter, retrying
3396
as necessary, and return success. However the wait operation may be
3397
aborted by a call to
cyg_thread_release,
3399
and cyg_semaphore_wait will then return false.
3400
      
3401
      
3402
cyg_semaphore_timed_wait is a variant of
3403
cyg_semaphore_wait. It can be used to wait until
3404
either an event has occurred or a number of clock ticks have happened.
3405
The number of ticks is specified as an absolute, not relative tick
3406
count, and so in order to wait for a relative number of ticks, the
3407
return value of the cyg_current_time() function
3408
should be added to determine the absolute number of ticks. The
3409
function returns success if the semaphore wait operation succeeded, or
3410
false if the operation timed out or was aborted by
3411
cyg_thread_release.
3412
If support for the real-time
3413
clock has been removed from the current configuration then this
3414
function will not be available.
3415
cyg_semaphore_trywait is another variant which
3416
will always return immediately rather than block, again returning
3417
success or failure. If cyg_semaphore_timed_wait
3418
is given a timeout in the past, it operates like
3419
cyg_semaphore_trywait.
3420
      
3421
      
3422
cyg_semaphore_peek can be used to get hold of the
3423
current counter value. This function is rarely useful except for
3424
debugging purposes since the counter value may change at any time if
3425
some other thread or a DSR performs a semaphore operation.
3426
      
3427
    
3428
 
3429
    Valid contexts
3430
      
3431
cyg_semaphore_init is normally called during
3432
initialization but may also be called from thread context.
3433
cyg_semaphore_wait and
3434
cyg_semaphore_timed_wait may only be called from
3435
thread context because these operations may block.
3436
cyg_semaphore_trywait,
3437
cyg_semaphore_post and
3438
cyg_semaphore_peek may be called from thread or
3439
DSR context.
3440
      
3441
    
3442
 
3443
  
3444
 
3445
3446
3447
 
3448
  
3449
 
3450
    
3451
    Mail boxes
3452
    
3453
 
3454
    
3455
      cyg_mbox_create
3456
      cyg_mbox_delete
3457
      cyg_mbox_get
3458
      cyg_mbox_timed_get
3459
      cyg_mbox_tryget
3460
      cyg_mbox_peek_item
3461
      cyg_mbox_put
3462
      cyg_mbox_timed_put
3463
      cyg_mbox_tryput
3464
      cyg_mbox_peek
3465
      cyg_mbox_waiting_to_get
3466
      cyg_mbox_waiting_to_put
3467
      Synchronization primitive
3468
    
3469
 
3470
    
3471
      
3472
        
3473
#include <cyg/kernel/kapi.h>
3474
        
3475
        
3476
          void cyg_mbox_create
3477
          cyg_handle_t* handle
3478
          cyg_mbox* mbox
3479
        
3480
        
3481
          void cyg_mbox_delete
3482
          cyg_handle_t mbox
3483
        
3484
        
3485
          void* cyg_mbox_get
3486
          cyg_handle_t mbox
3487
        
3488
        
3489
          void* cyg_mbox_timed_get
3490
          cyg_handle_t mbox
3491
          cyg_tick_count_t abstime
3492
        
3493
        
3494
          void* cyg_mbox_tryget
3495
          cyg_handle_t mbox
3496
        
3497
        
3498
          cyg_count32 cyg_mbox_peek
3499
          cyg_handle_t mbox
3500
        
3501
        
3502
          void* cyg_mbox_peek_item
3503
          cyg_handle_t mbox
3504
        
3505
        
3506
          cyg_bool_t cyg_mbox_put
3507
          cyg_handle_t mbox
3508
          void* item
3509
        
3510
        
3511
          cyg_bool_t cyg_mbox_timed_put
3512
          cyg_handle_t mbox
3513
          void* item
3514
          cyg_tick_count_t abstime
3515
        
3516
        
3517
          cyg_bool_t cyg_mbox_tryput
3518
          cyg_handle_t mbox
3519
          void* item
3520
        
3521
        
3522
          cyg_bool_t cyg_mbox_waiting_to_get
3523
          cyg_handle_t mbox
3524
        
3525
        
3526
          cyg_bool_t cyg_mbox_waiting_to_put
3527
          cyg_handle_t mbox
3528
        
3529
      
3530
    
3531
 
3532
    Description
3533
      
3534
Mail boxes are a synchronization primitive. Like semaphores they
3535
can be used by a consumer thread to wait until a certain event has
3536
occurred, but the producer also has the ability to transmit some data
3537
along with each event. This data, the message, is normally a pointer
3538
to some data structure. It is stored in the mail box itself, so the
3539
producer thread that generates the event and provides the data usually
3540
does not have to block until some consumer thread is ready to receive
3541
the event. However a mail box will only have a finite capacity,
3542
typically ten slots. Even if the system is balanced and events are
3543
typically consumed at least as fast as they are generated, a burst of
3544
events can cause the mail box to fill up and the generating thread
3545
will block until space is available again. This behaviour is very
3546
different from semaphores, where it is only necessary to maintain a
3547
counter and hence an overflow is unlikely.
3548
      
3549
      
3550
Before a mail box can be used it must be created with a call to
3551
cyg_mbox_create. Each mail box has a unique
3552
handle which will be returned via the first argument and which should
3553
be used for subsequent operations.
3554
cyg_mbox_create also requires an area of memory
3555
for the kernel structure, which is provided by the
3556
cyg_mbox second argument. If a mail box is
3557
no longer required then cyg_mbox_delete can be
3558
used. This will simply discard any messages that remain posted.
3559
      
3560
      
3561
The main function for waiting on a mail box is
3562
cyg_mbox_get. If there is a pending message
3563
because of a call to cyg_mbox_put then
3564
cyg_mbox_get will return immediately with the
3565
message that was put into the mail box. Otherwise this function
3566
will block until there is a put operation. Exceptionally the thread
3567
can instead be unblocked by a call to
3568
cyg_thread_release, in which case
3569
cyg_mbox_get will return a null pointer. It is
3570
assumed that there will never be a call to
3571
cyg_mbox_put with a null pointer, because it
3572
would not be possible to distinguish between that and a release
3573
operation. Messages are always retrieved in the order in which they
3574
were put into the mail box, and there is no support for messages
3575
with different priorities.
3576
      
3577
      
3578
There are two variants of cyg_mbox_get. The
3579
first, cyg_mbox_timed_get will wait until either
3580
a message is available or until a number of clock ticks have occurred.
3581
The number of ticks is specified as an absolute, not relative tick
3582
count, and so in order to wait for a relative number of ticks, the
3583
return value of the cyg_current_time() function
3584
should be added to determine the absolute number of ticks.  If no
3585
message is posted within the timeout then a null pointer will be
3586
returned. cyg_mbox_tryget is a non-blocking
3587
operation which will either return a message if one is available or a
3588
null pointer.
3589
      
3590
      
3591
New messages are placed in the mail box by calling
3592
cyg_mbox_put or one of its variants. The main put
3593
function takes two arguments, a handle to the mail box and a
3594
pointer for the message itself. If there is a spare slot in the
3595
mail box then the new message can be placed there immediately, and
3596
if there is a waiting thread it will be woken up so that it can
3597
receive the message. If the mail box is currently full then
3598
cyg_mbox_put will block until there has been a
3599
get operation and a slot is available. The
3600
cyg_mbox_timed_put variant imposes a time limit
3601
on the put operation, returning false if the operation cannot be
3602
completed within the specified number of clock ticks; as for
cyg_mbox_timed_get this is an absolute tick
count. The cyg_mbox_tryput variant is
3605
non-blocking, returning false if there are no free slots available and
3606
the message cannot be posted without blocking.
3607
      
3608
      
3609
There are a further four functions available for examining the current
3610
state of a mailbox. The results of these functions must be used with
3611
care because usually the state can change at any time as a result of
3612
activity within other threads, but they may prove occasionally useful
3613
during debugging or in special situations.
3614
cyg_mbox_peek returns a count of the number of
3615
messages currently stored in the mail box.
3616
cyg_mbox_peek_item retrieves the first message,
3617
but it remains in the mail box until a get operation is performed.
3618
cyg_mbox_waiting_to_get and
3619
cyg_mbox_waiting_to_put indicate whether or not
3620
there are currently threads blocked in a get or a put operation on a
3621
given mail box.
3622
      
3623
      
3624
The number of slots in each mail box is controlled by a
3625
configuration option
3626
CYGNUM_KERNEL_SYNCH_MBOX_QUEUE_SIZE, with a default
3627
value of 10. All mail boxes are the same size.
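
A short sketch of typical usage, where a producer passes pointers to
application-defined request structures to a consumer thread (the request
type and names are illustrative):

struct request { int code; };                    // application-defined

cyg_handle_t req_mbox;
cyg_mbox     req_mbox_obj;

void req_init(void)
{
    cyg_mbox_create(&req_mbox, &req_mbox_obj);
}

void req_send(struct request* req)
{
    cyg_mbox_put(req_mbox, (void*) req);         // may block if the box is full
}

void req_consumer_thread(cyg_addrword_t data)
{
    for(;;)
    {
        struct request* req = (struct request*) cyg_mbox_get(req_mbox);
        if( req == NULL )
            continue;                            // aborted by cyg_thread_release
        <process the request>
    }
}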
3628
      
3629
    
3630
 
3631
    Valid contexts
3632
      
3633
cyg_mbox_create is typically called during
3634
system initialization but may also be called in thread context.
3635
The remaining functions are normally called only during thread
3636
context. Of special note is cyg_mbox_put which
3637
can be a blocking operation when the mail box is full, and which
3638
therefore must never be called from DSR context. It is permitted to
3639
call cyg_mbox_tryput,
3640
cyg_mbox_tryget, and the information functions
3641
from DSR context but this is rarely useful.
3642
      
3643
    
3644
 
3645
  
3646
 
3647
3648
3649
 
3650
  
3651
 
3652
    
3653
    Event Flags
3654
    
3655
 
3656
    
3657
      cyg_flag_init
3658
      cyg_flag_destroy
3659
      cyg_flag_setbits
3660
      cyg_flag_maskbits
3661
      cyg_flag_wait
3662
      cyg_flag_timed_wait
3663
      cyg_flag_poll
3664
      cyg_flag_peek
3665
      cyg_flag_waiting
3666
      Synchronization primitive
3667
    
3668
 
3669
    
3670
      
3671
        
3672
#include <cyg/kernel/kapi.h>
3673
        
3674
        
3675
          void cyg_flag_init
3676
          cyg_flag_t* flag
3677
        
3678
        
3679
          void cyg_flag_destroy
3680
          cyg_flag_t* flag
3681
        
3682
        
3683
          void cyg_flag_setbits
3684
          cyg_flag_t* flag
3685
          cyg_flag_value_t value
3686
        
3687
        
3688
          void cyg_flag_maskbits
3689
          cyg_flag_t* flag
3690
          cyg_flag_value_t value
3691
        
3692
        
3693
          cyg_flag_value_t cyg_flag_wait
3694
          cyg_flag_t* flag
3695
          cyg_flag_value_t pattern
3696
          cyg_flag_mode_t mode
3697
        
3698
        
3699
          cyg_flag_value_t cyg_flag_timed_wait
3700
          cyg_flag_t* flag
3701
          cyg_flag_value_t pattern
3702
          cyg_flag_mode_t mode
3703
          cyg_tick_count_t abstime
3704
        
3705
        
3706
          cyg_flag_value_t cyg_flag_poll
3707
          cyg_flag_t* flag
3708
          cyg_flag_value_t pattern
3709
          cyg_flag_mode_t mode
3710
        
3711
        
3712
          cyg_flag_value_t cyg_flag_peek
3713
          cyg_flag_t* flag
3714
        
3715
        
3716
          cyg_bool_t cyg_flag_waiting
3717
          cyg_flag_t* flag
3718
        
3719
      
3720
    
3721
 
3722
    Description
3723
      
3724
Event flags allow a consumer thread to wait for one of several
3725
different types of event to occur. Alternatively it is possible to
3726
wait for some combination of events. The implementation is relatively
3727
straightforward. Each event flag contains a 32-bit integer.
3728
Application code associates these bits with specific events, so for
3729
example bit 0 could indicate that an I/O operation has completed and
3730
data is available, while bit 1 could indicate that the user has
3731
pressed a start button. A producer thread or a DSR can cause one or
3732
more of the bits to be set, and a consumer thread currently waiting
3733
for these bits will be woken up.
3734
      
3735
      
3736
Unlike semaphores no attempt is made to keep track of event counts. It
3737
does not matter whether a given event occurs once or multiple times
3738
before being consumed, the corresponding bit in the event flag will
3739
change only once. However semaphores cannot easily be used to handle
3740
multiple event sources. Event flags can often be used as an
3741
alternative to condition variables, although they cannot be used for
3742
completely arbitrary conditions and they only support the equivalent
3743
of condition variable broadcasts, not signals.
3744
      
3745
      
3746
Before an event flag can be used it must be initialized by a call to
3747
cyg_flag_init. This takes a pointer to a
3748
cyg_flag_t data structure, which can be part of a
3749
larger structure. All 32 bits in the event flag will be set to 0,
3750
indicating that no events have yet occurred. If an event flag is no
3751
longer required it can be cleaned up with a call to
3752
cyg_flag_destroy, allowing the memory for the
3753
cyg_flag_t structure to be re-used.
3754
      
3755
      
3756
A consumer thread can wait for one or more events by calling
3757
cyg_flag_wait. This takes three arguments. The
3758
first identifies a particular event flag. The second is some
3759
combination of bits, indicating which events are of interest. The
3760
final argument should be one of the following:
3761
      
3762
      
3763
        
3764
          CYG_FLAG_WAITMODE_AND
3765
          
3766
The call to cyg_flag_wait will block until all
3767
the specified event bits are set. The event flag is not cleared when
3768
the wait succeeds, in other words all the bits remain set.
3769
          
3770
        
3771
        
3772
          CYG_FLAG_WAITMODE_OR
3773
          
3774
The call will block until at least one of the specified event bits is
3775
set. The event flag is not cleared on return.
3776
          
3777
        
3778
        
3779
          CYG_FLAG_WAITMODE_AND | CYG_FLAG_WAITMODE_CLR
3780
          
3781
The call will block until all the specified event bits are set, and
3782
the entire event flag is cleared when the call succeeds. Note that
3783
if this mode of operation is used then a single event flag cannot be
3784
used to store disjoint sets of events, even though enough bits might
3785
be available. Instead each disjoint set of events requires its own
3786
event flag.
3787
          
3788
        
3789
        
3790
          CYG_FLAG_WAITMODE_OR | CYG_FLAG_WAITMODE_CLR
3791
          
3792
The call will block until at least one of the specified event bits is
3793
set, and the entire flag is cleared when the call succeeds.
3794
          
3795
        
3796
      
3797
      
3798
A call to cyg_flag_wait normally blocks until the
3799
required condition is satisfied. It will return the value of the event
3800
flag at the point that the operation succeeded, which may be a
3801
superset of the requested events. If
3802
cyg_thread_release is used to unblock a thread
3803
that is currently in a wait operation, the
3804
cyg_flag_wait call will instead return 0.
3805
      
3806
      
3807
cyg_flag_timed_wait is a variant of
3808
cyg_flag_wait which adds a timeout: the wait
3809
operation must succeed within the specified number of ticks, or it
3810
will fail with a return value of 0. The number of ticks is specified
3811
as an absolute, not relative tick count, and so in order to wait for a
3812
relative number of ticks, the return value of the
3813
cyg_current_time() function should be added to
3814
determine the absolute number of ticks.
3815
cyg_flag_poll is a non-blocking variant: if the
3816
wait operation can succeed immediately it acts like
3817
cyg_flag_wait, otherwise it returns immediately
3818
with a value of 0.
3819
      
3820
      
3821
cyg_flag_setbits is called by a producer thread
3822
or from inside a DSR when an event occurs. The specified bits are or'd
3823
into the current event flag value. This may cause one or more waiting
3824
threads to be woken up, if their conditions are now satisfied. How many
threads are awoken depends on the use of CYG_FLAG_WAITMODE_CLR.
The queue of threads waiting on the flag is walked to find
3827
threads which now have their wake condition fulfilled. If the awoken thread
3828
has passed CYG_FLAG_WAITMODE_CLR the walking of the queue
3829
is terminated, otherwise the walk continues. Thus if no threads have passed
3830
CYG_FLAG_WAITMODE_CLR all threads with fulfilled
3831
conditions will be awoken. If CYG_FLAG_WAITMODE_CLR is
3832
passed by threads with fulfilled conditions, the number of awoken threads
3833
will depend on the order the threads are in the queue.
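
As an illustration, suppose the application decides that bit 0 reports
completed I/O and bit 1 a button press (the bit assignments and names
are arbitrary). A consumer can wait for either event and clear the flag
when it wakes up:

#define EVENT_IO_DONE   0x01
#define EVENT_BUTTON    0x02

cyg_flag_t events;

void events_init(void)
{
    cyg_flag_init(&events);
}

void report_io_done(void)                        // producer thread or DSR
{
    cyg_flag_setbits(&events, EVENT_IO_DONE);
}

void dispatcher_thread(cyg_addrword_t data)      // consumer thread
{
    for(;;)
    {
        cyg_flag_value_t which =
            cyg_flag_wait(&events, EVENT_IO_DONE | EVENT_BUTTON,
                          CYG_FLAG_WAITMODE_OR | CYG_FLAG_WAITMODE_CLR);
        if( which & EVENT_IO_DONE )
            <handle the completed I/O>
        if( which & EVENT_BUTTON )
            <handle the button press>
    }
}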
3834
      
3835
      
3836
cyg_flag_maskbits can be used to clear one or
3837
more bits in the event flag. This can be called from a producer when a
3838
particular condition is no longer satisfied, for example when the user
3839
is no longer pressing a particular button. It can also be used by a
3840
consumer thread if CYG_FLAG_WAITMODE_CLR was not
3841
used as part of the wait operation, to indicate that some but not all
3842
of the active events have been consumed. If there are multiple
3843
consumer threads performing wait operations without using
3844
CYG_FLAG_WAITMODE_CLR then typically some
3845
additional synchronization such as a mutex is needed to prevent
3846
multiple threads consuming the same event.
3847
      
3848
      
3849
Two additional functions are provided to query the current state of an
3850
event flag. cyg_flag_peek returns the current
3851
value of the event flag, and cyg_flag_waiting can
3852
be used to find out whether or not there are any threads currently
3853
blocked on the event flag. Both of these functions must be used with
3854
care because other threads may be operating on the event flag.
3855
      
3856
    
3857
 
3858
    Valid contexts
3859
      
3860
cyg_flag_init is typically called during system
3861
initialization but may also be called in thread context. The same
3862
applies to cyg_flag_destroy.
3863
cyg_flag_wait and
3864
cyg_flag_timed_wait may only be called from
3865
thread context. The remaining functions may be called from thread or
3866
DSR context.
3867
      
3868
    
3869
 
3870
  
3871
 
3872
3873
3874
 
3875
  
3876
 
3877
    
3878
    Spinlocks
3879
    
3880
 
3881
    
3882
      cyg_spinlock_init
3883
      cyg_spinlock_destroy
3884
      cyg_spinlock_spin
3885
      cyg_spinlock_clear
3886
      cyg_spinlock_test
3887
      cyg_spinlock_spin_intsave
3888
      cyg_spinlock_clear_intsave
3889
      Low-level Synchronization Primitive
3890
    
3891
 
3892
    
3893
      
3894
        
3895
#include <cyg/kernel/kapi.h>
3896
        
3897
        
3898
          void cyg_spinlock_init
3899
          cyg_spinlock_t* lock
3900
          cyg_bool_t locked
3901
        
3902
        
3903
          void cyg_spinlock_destroy
3904
          cyg_spinlock_t* lock
3905
        
3906
        
3907
          void cyg_spinlock_spin
3908
          cyg_spinlock_t* lock
3909
        
3910
        
3911
          void cyg_spinlock_clear
3912
          cyg_spinlock_t* lock
3913
        
3914
        
3915
          cyg_bool_t cyg_spinlock_try
3916
          cyg_spinlock_t* lock
3917
        
3918
        
3919
          cyg_bool_t cyg_spinlock_test
3920
          cyg_spinlock_t* lock
3921
        
3922
        
3923
          void cyg_spinlock_spin_intsave
3924
          cyg_spinlock_t* lock
3925
          cyg_addrword_t* istate
3926
        
3927
        
3928
          void cyg_spinlock_clear_intsave
3929
          cyg_spinlock_t* lock
3930
          cyg_addrword_t istate
3931
        
3932
      
3933
    
3934
 
3935
    Description
3936
      
3937
Spinlocks provide an additional synchronization primitive for
3938
applications running on SMP systems. They operate at a lower level
3939
than the other primitives such as mutexes, and for most purposes the
3940
higher-level primitives should be preferred. However there are some
3941
circumstances where a spinlock is appropriate, especially when
3942
interrupt handlers and threads need to share access to hardware, and
3943
on SMP systems the kernel implementation itself depends on spinlocks.
3944
      
3945
      
3946
Essentially a spinlock is just a simple flag. When code tries to claim
3947
a spinlock it checks whether or not the flag is already set. If not
3948
then the flag is set and the operation succeeds immediately. The exact
3949
implementation of this is hardware-specific, for example it may use a
3950
test-and-set instruction to guarantee the desired behaviour even if
3951
several processors try to access the spinlock at the exact same time.
3952
If it is not possible to claim a spinlock then the current thread spins
3953
in a tight loop, repeatedly checking the flag until it is clear. This
3954
behaviour is very different from other synchronization primitives such
3955
as mutexes, where contention would cause a thread to be suspended. The
3956
assumption is that a spinlock will only be held for a very short time.
3957
If claiming a spinlock could cause the current thread to be suspended
3958
then spinlocks could not be used inside interrupt handlers, which is
3959
not acceptable.
3960
      
3961
      
3962
This does impose a constraint on any code which uses spinlocks.
3963
Specifically it is important that spinlocks are held only for a short
3964
period of time, typically just some dozens of instructions. Otherwise
3965
another processor could be blocked on the spinlock for a long time,
3966
unable to do any useful work. It is also important that a thread which
3967
owns a spinlock does not get preempted because that might cause
3968
another processor to spin for a whole timeslice period, or longer. One
3969
way of achieving this is to disable interrupts on the current
3970
processor, and the function
3971
cyg_spinlock_spin_intsave is provided to
3972
facilitate this.
3973
      
3974
      
3975
Spinlocks should not be used on single-processor systems. Consider a
3976
high priority thread which attempts to claim a spinlock already held
3977
by a lower priority thread: it will just loop forever and the lower
3978
priority thread will never get another chance to run and release the
3979
spinlock. Even if the two threads were running at the same priority,
3980
the one attempting to claim the spinlock would spin until it was
3981
timesliced and a lot of cpu time would be wasted. If an interrupt
3982
handler tried to claim a spinlock owned by a thread, the interrupt
3983
handler would loop forever. Therefore spinlocks are only appropriate
3984
for SMP systems where the current owner of a spinlock can continue
3985
running on a different processor.
3986
      
3987
      
3988
Before a spinlock can be used it must be initialized by a call to
3989
cyg_spinlock_init. This takes two arguments, a
3990
pointer to a cyg_spinlock_t data structure, and
3991
a flag to specify whether the spinlock starts off locked or unlocked.
3992
If a spinlock is no longer required then it can be destroyed by a call
3993
to cyg_spinlock_destroy.
3994
      
3995
      
3996
There are two routines for claiming a spinlock:
3997
cyg_spinlock_spin and
3998
cyg_spinlock_spin_intsave. The former can be used
3999
when it is known that the current code will not be preempted, for example
4000
because it is running in an interrupt handler or because interrupts
4001
are disabled. The latter will disable interrupts in addition to
4002
claiming the spinlock, so is safe to use in all circumstances. The
4003
previous interrupt state is returned via the second argument, and
4004
should be used in a subsequent call to
4005
cyg_spinlock_clear_intsave.
4006
      
4007
      
4008
Similarly there are two routines for releasing a spinlock:
4009
cyg_spinlock_clear and
4010
cyg_spinlock_clear_intsave. Typically
4011
the former will be used if the spinlock was claimed by a call to
4012
cyg_spinlock_spin, and the latter when
4013
cyg_spinlock_spin_intsave was used.
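
A sketch of the intsave variants protecting a small counter shared
between a thread and an interrupt handler on an SMP system (the data
involved is illustrative):

cyg_spinlock_t stats_lock;
static unsigned long packet_count;

void stats_init(void)
{
    cyg_spinlock_init(&stats_lock, 0);           // start off unlocked
}

void stats_bump(void)                            // thread context
{
    cyg_addrword_t istate;
    cyg_spinlock_spin_intsave(&stats_lock, &istate);
    packet_count++;                              // keep this region very short
    cyg_spinlock_clear_intsave(&stats_lock, istate);
}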
4014
      
4015
      
4016
There are two additional routines.
4017
cyg_spinlock_try is a non-blocking version of
4018
cyg_spinlock_spin: if possible the lock will be
4019
claimed and the function will return true; otherwise the function
4020
will return immediately with failure.
4021
cyg_spinlock_test can be used to find out whether
4022
or not the spinlock is currently locked. This function must be used
4023
with care because, especially on a multiprocessor system, the state of
4024
the spinlock can change at any time.
4025
      
4026
      
4027
Spinlocks should only be held for a short period of time, and
4028
attempting to claim a spinlock will never cause a thread to be
4029
suspended. This means that there is no need to worry about priority
4030
inversion problems, and concepts such as priority ceilings and
4031
inheritance do not apply.
4032
      
4033
    
4034
 
4035
    Valid contexts
4036
      
4037
All of the spinlock functions can be called from any context,
4038
including ISR and DSR context. Typically
4039
cyg_spinlock_init is only called during system
4040
initialization.
4041
      
4042
    
4043
 
4044
  
4045
 
4046
4047
4048
 
4049
  
4050
 
4051
    
4052
    Scheduler Control
4053
    
4054
 
4055
    
4056
      cyg_scheduler_start
4057
      cyg_scheduler_lock
4058
      cyg_scheduler_unlock
4059
      cyg_scheduler_safe_lock
4060
      cyg_scheduler_read_lock
4061
      Control the state of the scheduler
4062
    
4063
 
4064
    
4065
      
4066
        
4067
#include <cyg/kernel/kapi.h>
4068
        
4069
        
4070
          void cyg_scheduler_start
4071
          
4072
        
4073
        
4074
          void cyg_scheduler_lock
4075
          
4076
        
4077
        
4078
          void cyg_scheduler_unlock
4079
          
4080
        
4081
        
4082
          cyg_ucount32 cyg_scheduler_read_lock
4083
          
4084
        
4085
      
4086
    
4087
 
4088
    Description
4089
      
4090
cyg_scheduler_start should only be called once,
4091
to mark the end of system initialization. In typical configurations it
4092
is called automatically by the system startup, but some applications
4093
may bypass the standard startup in which case
4094
cyg_scheduler_start will have to be called
4095
explicitly. The call will enable system interrupts, allowing I/O
4096
operations to commence. Then the scheduler will be invoked and control
4097
will be transferred to the highest priority runnable thread. The call
4098
will never return.
4099
      
4100
      
4101
The various data structures inside the eCos kernel must be protected
4102
against concurrent updates. Consider a call to
4103
cyg_semaphore_post which causes a thread to be
4104
woken up: the semaphore data structure must be updated to remove the
4105
thread from its queue; the scheduler data structure must also be
4106
updated to mark the thread as runnable; it is possible that the newly
4107
runnable thread has a higher priority than the current one, in which
4108
case preemption is required. If in the middle of the semaphore post
4109
call an interrupt occurred and the interrupt handler tried to
4110
manipulate the same data structures, for example by making another
4111
thread runnable, then it is likely that the structures will be left in
4112
an inconsistent state and the system will fail.
4113
      
4114
      
4115
To prevent such problems the kernel contains a special lock known as
4116
the scheduler lock. A typical kernel function such as
4117
cyg_semaphore_post will claim the scheduler lock,
4118
do all its manipulation of kernel data structures, and then release
4119
the scheduler lock. The current thread cannot be preempted while it
4120
holds the scheduler lock. If an interrupt occurs and a DSR is supposed
4121
to run to signal that some event has occurred, that DSR is postponed
4122
until the scheduler unlock operation. This prevents concurrent updates
4123
of kernel data structures.
4124
      
4125
      
4126
The kernel exports three routines for manipulating the scheduler lock.
4127
cyg_scheduler_lock can be called to claim the
4128
lock. On return it is guaranteed that the current thread will not be
4129
preempted, and that no other code is manipulating any kernel data
4130
structures. cyg_scheduler_unlock can be used to
4131
release the lock, which may cause the current thread to be preempted.
4132
cyg_scheduler_read_lock can be used to query the
4133
current state of the scheduler lock. This function should never be
4134
needed because well-written code should always know whether or not the
4135
scheduler is currently locked, but may prove useful during debugging.
4136
      
4137
      
4138
The implementation of the scheduler lock involves a simple counter.
4139
Code can call cyg_scheduler_lock multiple times,
4140
causing the counter to be incremented each time, as long as
4141
cyg_scheduler_unlock is called the same number of
4142
times. This behaviour is different from mutexes where an attempt by a
4143
thread to lock a mutex multiple times will result in deadlock or an
4144
assertion failure.
4145
      
4146
      
4147
Typical application code should not use the scheduler lock. Instead
4148
other synchronization primitives such as mutexes and semaphores should
4149
be used. While the scheduler is locked the current thread cannot be
4150
preempted, so any higher priority threads will not be able to run.
4151
Also no DSRs can run, so device drivers may not be able to service
4152
I/O requests. However there is one situation where locking the
4153
scheduler is appropriate: if some data structure needs to be shared
4154
between an application thread and a DSR associated with some interrupt
4155
source, the thread can use the scheduler lock to prevent concurrent
4156
invocations of the DSR and then safely manipulate the structure. It is
4157
desirable that the scheduler lock is held for only a short period of
4158
time, typically some tens of instructions. In exceptional cases there
4159
may also be some performance-critical code where it is more
4160
appropriate to use the scheduler lock rather than a mutex, because the
4161
former is more efficient.
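
For example, a thread could safely reset a small structure that is also
updated by a DSR as follows (the structure is illustrative):

static struct { int head; int tail; } ring;

void ring_reset(void)
{
    cyg_scheduler_lock();        // DSRs are postponed while the lock is held
    ring.head = 0;
    ring.tail = 0;
    cyg_scheduler_unlock();      // any pending DSRs can now run
}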
4162
      
4163
    
4164
 
4165
    Valid contexts
4166
      
4167
cyg_scheduler_start can only be called during
4168
system initialization, since it marks the end of that phase. The
4169
remaining functions may be called from thread or DSR context. Locking
4170
the scheduler from inside the DSR has no practical effect because the
4171
lock is claimed automatically by the interrupt subsystem before
4172
running DSRs, but allows functions to be shared between normal thread
4173
code and DSRs.
4174
      
4175
    
4176
 
4177
  
4178
 
4179
4180
4181
 
4182
  
4183
 
4184
    
4185
    Interrupt Handling
4186
    
4187
 
4188
    
4189
      cyg_interrupt_create
4190
      cyg_interrupt_delete
4191
      cyg_interrupt_attach
4192
      cyg_interrupt_detach
4193
      cyg_interrupt_configure
4194
      cyg_interrupt_acknowledge
4195
      cyg_interrupt_enable
4196
      cyg_interrupt_disable
4197
      cyg_interrupt_mask
4198
      cyg_interrupt_mask_intunsafe
4199
      cyg_interrupt_unmask
4200
      cyg_interrupt_unmask_intunsafe
4201
      cyg_interrupt_set_cpu
4202
      cyg_interrupt_get_cpu
4203
      cyg_interrupt_get_vsr
4204
      cyg_interrupt_set_vsr
4205
      Manage interrupt handlers
4206
    
4207
 
4208
    
4209
      
4210
        
4211
#include <cyg/kernel/kapi.h>

void cyg_interrupt_create(cyg_vector_t vector, cyg_priority_t priority,
                          cyg_addrword_t data, cyg_ISR_t* isr, cyg_DSR_t* dsr,
                          cyg_handle_t* handle, cyg_interrupt* intr);
void cyg_interrupt_delete(cyg_handle_t interrupt);
void cyg_interrupt_attach(cyg_handle_t interrupt);
void cyg_interrupt_detach(cyg_handle_t interrupt);
void cyg_interrupt_configure(cyg_vector_t vector, cyg_bool_t level, cyg_bool_t up);
void cyg_interrupt_acknowledge(cyg_vector_t vector);
void cyg_interrupt_disable(void);
void cyg_interrupt_enable(void);
void cyg_interrupt_mask(cyg_vector_t vector);
void cyg_interrupt_mask_intunsafe(cyg_vector_t vector);
void cyg_interrupt_unmask(cyg_vector_t vector);
void cyg_interrupt_unmask_intunsafe(cyg_vector_t vector);
void cyg_interrupt_set_cpu(cyg_vector_t vector, cyg_cpu_t cpu);
cyg_cpu_t cyg_interrupt_get_cpu(cyg_vector_t vector);
void cyg_interrupt_get_vsr(cyg_vector_t vector, cyg_VSR_t** vsr);
void cyg_interrupt_set_vsr(cyg_vector_t vector, cyg_VSR_t* vsr);
4287
        
4288
      
4289
    
4290
 
4291
    Description
4292
      
4293
The kernel provides an interface for installing interrupt handlers and
4294
controlling when interrupts occur. This functionality is used
4295
primarily by eCos device drivers and by any application code that
4296
interacts directly with hardware. However in most cases it is better
4297
to avoid using this kernel functionality directly, and instead the
4298
device driver API provided by the common HAL package should be used.
4299
Use of the kernel package is optional, and some applications such as
4300
RedBoot work with no need for multiple threads or synchronization
4301
primitives. Any code which calls the kernel directly rather than the
4302
device driver API will not function in such a configuration. When the
4303
kernel package is present the device driver API is implemented as
4304
#define's to the equivalent kernel calls, otherwise
4305
it is implemented inside the common HAL package. The latter
4306
implementation can be simpler than the kernel one because there is no
4307
need to consider thread preemption and similar issues.
4308
      
4309
      
4310
The exact details of interrupt handling vary widely between
4311
architectures. The functionality provided by the kernel abstracts away
4312
from many of the details of the underlying hardware, thus simplifying
4313
application development. However this is not always successful. For
4314
example, if some hardware does not provide any support at all for
4315
masking specific interrupts then calling
4316
cyg_interrupt_mask may not behave as intended:
4317
instead of masking just the one interrupt source it might disable all
4318
interrupts, because that is as close to the desired behaviour as is
4319
possible given the hardware restrictions. Another possibility is that
4320
masking a given interrupt source also affects all lower-priority
4321
interrupts, but still allows higher-priority ones. The documentation
4322
for the appropriate HAL packages should be consulted for more
4323
information about exactly how interrupts are handled on any given
4324
hardware. The HAL header files will also contain useful information.
4325
      
4326
    
4327
 
4328
    Interrupt Handlers
4329
      
4330
Interrupt handlers are created by a call to
4331
cyg_interrupt_create. This takes the following
4332
arguments:
4333
      
4334
      
4335
        
4336
          cyg_vector_t vector
4337
          
4338
The interrupt vector, a small integer, identifies the specific
4339
interrupt source. The appropriate hardware documentation or HAL header
4340
files should be consulted for details of which vector corresponds to
4341
which device.
4342
          
4343
        
4344
        
4345
          cyg_priority_t priority
4346
          
4347
Some hardware may support interrupt priorities, where a low priority
4348
interrupt handler can in turn be interrupted by a higher priority one.
4349
Again hardware-specific documentation should be consulted for details
4350
about what the valid interrupt priority levels are.
4351
          
4352
        
4353
        
4354
          cyg_addrword_t data
4355
          
4356
When an interrupt occurs eCos will first call the associated
4357
interrupt service routine or ISR, then optionally a deferred service
4358
routine or DSR. The data argument to
4359
cyg_interrupt_create will be passed to both these
4360
functions. Typically it will be a pointer to some data structure.
4361
          
4362
        
4363
        
4364
          cyg_ISR_t isr
4365
          
4366
When an interrupt occurs the hardware will transfer control to the
4367
appropriate vector service routine or VSR, which is usually provided
4368
by eCos. This performs any appropriate processing, for example to work
4369
out exactly which interrupt occurred, and then as quickly as possible
4370
transfers control to the installed ISR. An ISR is a C function which
4371
takes the following form:
4372
          
4373
          
4374
cyg_uint32
isr_function(cyg_vector_t vector, cyg_addrword_t data)
{
    cyg_bool_t dsr_required = 0;

    /* Service the device hardware here and decide whether the DSR
       needs to run. */

    return dsr_required ?
        (CYG_ISR_CALL_DSR | CYG_ISR_HANDLED) :
        CYG_ISR_HANDLED;
}
4385
          
4386
          
4387
The first argument identifies the particular interrupt source,
4388
especially useful if there are multiple instances of a given device and a
4389
single ISR can be used for several different interrupt vectors. The
4390
second argument is the data field passed to
4391
cyg_interrupt_create, usually a pointer to some
4392
data structure. The exact conditions under which an ISR runs will
4393
depend partly on the hardware and partly on configuration options.
4394
Interrupts may currently be disabled globally, especially if the
4395
hardware does not support interrupt priorities. Alternatively
4396
interrupts may be enabled such that higher priority interrupts are
4397
allowed through. The ISR may be running on a separate interrupt stack,
4398
or on the stack of whichever thread was running at the time the
4399
interrupt happened.
4400
          
4401
          
4402
A typical ISR will do as little work as possible, just enough to meet
4403
the needs of the hardware and then acknowledge the interrupt by
4404
calling cyg_interrupt_acknowledge. This ensures
4405
that interrupts will be quickly reenabled, so higher priority devices
4406
can be serviced. For some applications there may be one device which
4407
is especially important and whose ISR can take much longer than
4408
normal. However eCos device drivers usually will not assume that they
4409
are especially important, so their ISRs will be as short as possible.
4410
          
4411
          
4412
The return value of an ISR is normally a bit mask containing zero, one
4413
or both of the following bits: CYG_ISR_CALL_DSR or
4414
CYG_ISR_HANDLED. The former indicates that further
4415
processing is required at DSR level, and the interrupt handler's DSR
4416
will be run as soon as possible. The latter indicates that the
4417
interrupt was handled by this ISR so there is no need to call other
4418
interrupt handlers which might be chained on this interrupt vector. If
4419
this ISR did not handle the interrupt it should not set the
4420
CYG_ISR_HANDLED bit so that other chained interrupt handlers may
4421
handle the interrupt.
4422
          
4423
          
4424
An ISR is allowed to make very few kernel calls. It can manipulate the
4425
interrupt mask, and on SMP systems it can use spinlocks. However an
4426
ISR must not make higher-level kernel calls such as posting to a
4427
semaphore, instead any such calls must be made from the DSR. This
4428
avoids having to disable interrupts throughout the kernel and thus
4429
improves interrupt latency.
4430
          
4431
        
4432
        
4433
          cyg_DSR_t dsr
4434
          
4435
If an interrupt has occurred and the ISR has returned a value
4436
with CYG_ISR_CALL_DSR bit being set, the system
4437
will call the DSR associated with this interrupt
4438
handler. If the scheduler is not currently locked then the DSR will
4439
run immediately. However if the interrupted thread was in the middle
4440
of a kernel call and had locked the scheduler, then the DSR will be
4441
deferred until the scheduler is again unlocked. This allows the
4442
DSR to make certain kernel calls safely, for example posting to a
4443
semaphore or signalling a condition variable. A DSR is a C function
4444
which takes the following form:
4445
          
4446
          
4447
void
dsr_function(cyg_vector_t vector,
             cyg_ucount32 count,
             cyg_addrword_t data)
{
    /* Processing deferred from the ISR, for example posting to a
       semaphore, goes here. */
}
4453
          
4454
          
4455
The first argument identifies the specific interrupt that has caused
4456
the DSR to run. The second argument indicates the number of these
4457
interrupts that have occurred and for which the ISR requested a DSR.
4458
Usually this will be 1, unless the system is
4459
suffering from a very heavy load. The third argument is the
4460
data field passed to
4461
cyg_interrupt_create.
4462
          
4463
        
4464
        
4465
          cyg_handle_t* handle
4466
          
4467
The kernel will return a handle to the newly created interrupt handler
4468
via this argument. Subsequent operations on the interrupt handler such
4469
as attaching it to the interrupt source will use this handle.
4470
          
4471
        
4472
        
4473
          cyg_interrupt* intr
4474
          
4475
This provides the kernel with an area of memory for holding this
4476
interrupt handler and associated data.
4477
          
4478
        
4479
      
4480
      
4481
The call to cyg_interrupt_create simply fills in
4482
a kernel data structure. A typical next step is to call
4483
cyg_interrupt_attach using the handle returned by
4484
the create operation. This makes it possible to have several different
4485
interrupt handlers for a given vector, attaching whichever one is
4486
currently appropriate. Replacing an interrupt handler requires a call
4487
to cyg_interrupt_detach, followed by another call
4488
to cyg_interrupt_attach for the replacement
4489
handler. cyg_interrupt_delete can be used if an
4490
interrupt handler is no longer required.
4491
      
4492
      
4493
Some hardware may allow for further control over specific interrupts,
4494
for example whether an interrupt is level or edge triggered. Any such
4495
hardware functionality can be accessed using
4496
cyg_interrupt_configure: the
4497
level argument selects between level versus
4498
edge triggered; the up argument selects between
4499
high and low level, or between rising and falling edges.
4500
      
4501
      
4502
Usually interrupt handlers are created, attached and configured during
4503
system initialization, while global interrupts are still disabled. On
4504
most hardware it will also be necessary to call
4505
cyg_interrupt_unmask, since the sensible default
4506
for interrupt masking is to ignore any interrupts for which no handler
4507
is installed.
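Putting these steps together, a typical initialization sequence might look like the following sketch. The vector number, priority and names are placeholders invented for the example; real values come from the HAL headers for the target.

#include <cyg/kernel/kapi.h>

#define MY_DEV_VECTOR   7      /* placeholder: consult the HAL headers */
#define MY_DEV_PRIORITY 4      /* placeholder interrupt priority       */

static cyg_interrupt my_dev_intr;      /* memory for the kernel object  */
static cyg_handle_t  my_dev_handle;

static cyg_uint32
my_dev_isr(cyg_vector_t vector, cyg_addrword_t data)
{
    /* Do the minimum the hardware requires, then acknowledge. */
    cyg_interrupt_acknowledge(vector);
    return CYG_ISR_HANDLED | CYG_ISR_CALL_DSR;
}

static void
my_dev_dsr(cyg_vector_t vector, cyg_ucount32 count, cyg_addrword_t data)
{
    /* Longer processing, for example posting a semaphore, goes here. */
}

void
my_dev_init(void)
{
    cyg_interrupt_create(MY_DEV_VECTOR, MY_DEV_PRIORITY,
                         (cyg_addrword_t) 0,           /* data for ISR/DSR */
                         &my_dev_isr, &my_dev_dsr,
                         &my_dev_handle, &my_dev_intr);
    cyg_interrupt_attach(my_dev_handle);
    cyg_interrupt_unmask(MY_DEV_VECTOR);   /* allow this interrupt through */
}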
4508
      
4509
    
4510
 
4511
    Controlling Interrupts
4512
      
4513
eCos provides two ways of controlling whether or not interrupts
4514
happen. It is possible to disable and reenable all interrupts
4515
globally, using cyg_interrupt_disable and
4516
cyg_interrupt_enable. Typically this works by
4517
manipulating state inside the cpu itself, for example setting a flag
4518
in a status register or executing special instructions. Alternatively
4519
it may be possible to mask a specific interrupt source by writing to
4520
one or to several interrupt mask registers. Hardware-specific
4521
documentation should be consulted for the exact details of how
4522
interrupt masking works, because a full implementation is not possible
4523
on all hardware.
4524
      
4525
      
4526
The primary use for these functions is to allow data to be shared
4527
between ISRs and other code such as DSRs or threads. If both a thread
4528
and an ISR need to manipulate either a data structure or the hardware
4529
itself, there is a possible conflict if an interrupt happens just when
4530
the thread is doing such manipulation. Problems can be avoided by the
4531
thread either disabling or masking interrupts during the critical
4532
region. If this critical region requires only a few instructions then
4533
usually it is more efficient to disable interrupts. For larger
4534
critical regions it may be more appropriate to use interrupt masking,
4535
allowing other interrupts to occur. There are other uses for interrupt
4536
masking. For example if a device is not currently being used by the
4537
application then it may be desirable to mask all interrupts generated
4538
by that device.
4539
      
4540
      
4541
There are two functions for masking a specific interrupt source,
4542
cyg_interrupt_mask and
4543
cyg_interrupt_mask_intunsafe. On typical hardware
4544
masking an interrupt is not an atomic operation, so if two threads
4545
were to perform interrupt masking operations at the same time there
4546
could be problems. cyg_interrupt_mask disables
4547
all interrupts while it manipulates the interrupt mask. In situations
4548
where interrupts are already known to be disabled,
4549
cyg_interrupt_mask_intunsafe can be used
4550
instead. There are matching functions
4551
cyg_interrupt_unmask and
4552
cyg_interrupt_unmask_intunsafe.
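As a rough illustration, the fragment below uses cyg_interrupt_disable for a very short critical region and cyg_interrupt_mask for a longer one; the vector number, the shared variable and the function names are placeholders invented for the example.

#include <cyg/kernel/kapi.h>

#define MY_DEV_VECTOR 7          /* placeholder vector number */

/* Example only: a status word which an ISR is assumed to update. */
static volatile cyg_uint32 dev_status;

void
update_shared_state(cyg_uint32 new_bits)
{
    /* Short critical region: disable all interrupts for a few instructions. */
    cyg_interrupt_disable();
    dev_status |= new_bits;
    cyg_interrupt_enable();
}

void
reconfigure_device(void)
{
    /* Longer critical region: mask only this device's interrupt so that
       other interrupt sources continue to be serviced. */
    cyg_interrupt_mask(MY_DEV_VECTOR);
    /* ... manipulate the device registers and related data ... */
    cyg_interrupt_unmask(MY_DEV_VECTOR);
}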
4553
      
4554
    
4555
 
4556
    SMP Support
4557
      
4558
On SMP systems the kernel provides an additional two functions related
4559
to interrupt handling. cyg_interrupt_set_cpu
4560
specifies that a particular hardware interrupt should always be
4561
handled on one specific processor in the system. In other words when
4562
the interrupt triggers it is only that processor which detects it, and
4563
it is only on that processor that the VSR and ISR will run. If a DSR
4564
is requested then it will also run on the same CPU. The
4565
function cyg_interrupt_get_cpu can be used to
4566
find out which interrupts are handled on which processor.
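A brief sketch, assuming a system with at least two CPUs and using an arbitrary vector number:

#include <cyg/kernel/kapi.h>

#define NET_VECTOR 9             /* placeholder vector number */

void
bind_network_interrupt(void)
{
    cyg_cpu_t cpu;

    /* Ask for this interrupt to be handled only on CPU 1. */
    cyg_interrupt_set_cpu(NET_VECTOR, 1);

    /* Query the current binding. */
    cpu = cyg_interrupt_get_cpu(NET_VECTOR);
    (void) cpu;
}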
4567
      
4568
    
4569
 
4570
    VSR Support
4571
      
4572
When an interrupt occurs the hardware will transfer control to a piece
4573
of code known as the VSR, or Vector Service Routine. By default this
4574
code is provided by eCos. Usually it is written in assembler, but on
4575
some architectures it may be possible to implement VSRs in C by
4576
specifying an interrupt attribute. Compiler documentation should be
4577
consulted for more information on this. The default eCos VSR will work
4578
out which ISR function should process the interrupt, and set up a C
4579
environment suitable for this ISR.
4580
      
4581
      
4582
For some applications it may be desirable to replace the default eCos
4583
VSR and handle some interrupts directly. This minimizes interrupt
4584
latency, but it requires application developers to program at a lower
4585
level. Usually the best way to write a custom VSR is to copy the
4586
existing one supplied by eCos and then make appropriate modifications.
4587
The function cyg_interrupt_get_vsr can be used to
4588
get hold of the current VSR for a given interrupt vector, allowing it
4589
to be restored if the custom VSR is no longer required.
4590
cyg_interrupt_set_vsr can be used to install a
4591
replacement VSR. Usually the vsr argument will
4592
correspond to an exported label in an assembler source file.
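The following sketch outlines saving the current VSR and installing a replacement; the vector number and the my_custom_vsr label, assumed to be exported from an assembler source file, are placeholders.

#include <cyg/kernel/kapi.h>

#define FAST_VECTOR 3                 /* placeholder vector number           */

extern cyg_VSR_t my_custom_vsr;       /* assumed to be defined in assembler  */

static cyg_VSR_t *old_vsr;

void
install_custom_vsr(void)
{
    /* Remember the default VSR so that it can be restored later. */
    cyg_interrupt_get_vsr(FAST_VECTOR, &old_vsr);
    cyg_interrupt_set_vsr(FAST_VECTOR, &my_custom_vsr);
}

void
restore_default_vsr(void)
{
    cyg_interrupt_set_vsr(FAST_VECTOR, old_vsr);
}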
4593
      
4594
    
4595
 
4596
    Valid contexts
4597
      
4598
In a typical configuration interrupt handlers are created and attached
4599
during system initialization, and never detached or deleted. However
4600
it is possible to perform these operations at thread level, if
4601
desired. Similarly cyg_interrupt_configure,
4602
cyg_interrupt_set_vsr, and
4603
cyg_interrupt_set_cpu are usually called only
4604
during system initialization, but on typical hardware may be called at
4605
any time. cyg_interrupt_get_vsr and
4606
cyg_interrupt_get_cpu may be called at any time.
4607
      
4608
      
4609
The functions for enabling, disabling, masking and unmasking
4610
interrupts can be called in any context, when appropriate. It is the
4611
responsibility of application developers to determine when the use of
4612
these functions is appropriate.
4613
      
4614
    
4615
 
4616
  
4617
 
4618
4619
4620
 
4621
  
4622
 
4623
    
4624
    Kernel Real-time Characterization
4625
    
4626
    
4627
      tm_basic
4628
      Measure the performance of the eCos kernel
4629
    
4630
 
4631
    
4632
      Description
4633
        
4634
When building a real-time system, care must be taken to ensure that
4635
the system will be able to perform properly within the constraints of
4636
that system. One of these constraints may be how fast certain
4637
operations can be performed. Another might be how deterministic the
4638
overall behavior of the system is. Lastly the memory footprint (size)
4639
and unit cost may be important.
4640
        
4641
        
4642
One of the major problems encountered while evaluating a system will
4643
be how to compare it with possible alternatives. Most manufacturers of
4644
real-time systems publish performance numbers, ostensibly so that
4645
users can compare the different offerings. However, what these numbers
4646
mean and how they were gathered is often not clear. The values are
4647
typically measured on a particular piece of hardware, so in order to
4648
truly compare, one must obtain measurements for exactly the same set
4649
of hardware that were gathered in a similar fashion.
4650
        
4651
        
4652
Two major items need to be present in any given set of measurements.
4653
First, the raw values for the various operations; these are typically
4654
quite easy to measure and will be available for most systems. Second,
4655
the determinacy of the numbers; in other words how much the value
4656
might change depending on other factors within the system. This value
4657
is affected by a number of factors: how long interrupts might be
4658
masked, whether or not the function can be interrupted, even very
4659
hardware-specific effects such as cache locality and pipeline usage.
4660
It is very difficult to measure the determinacy of any given
4661
operation, but that determinacy is fundamentally important to proper
4662
overall characterization of a system.
4663
        
4664
        
4665
In the discussion and numbers that follow, three key measurements are
4666
provided. The first measurement is an estimate of the interrupt
4667
latency: this is the length of time from when a hardware interrupt
4668
occurs until its Interrupt Service Routine (ISR) is called. The second
4669
measurement is an estimate of overall interrupt overhead: this is the
4670
length of time average interrupt processing takes, as measured by the
4671
real-time clock interrupt (other interrupt sources will certainly take
4672
a different amount of time, but this data cannot be easily gathered).
4673
The third measurement consists of the timings for the various kernel
4674
primitives.
4675
          
4676
        
4677
        
4678
          Methodology
4679
          
4680
Key operations in the kernel were measured by using a simple test
4681
program which exercises the various kernel primitive operations. A
4682
hardware timer, normally the one used to drive the real-time clock,
4683
was used for these measurements. In most cases this timer can be read
4684
with quite high resolution, typically in the range of a few
4685
microseconds. For each measurement, the operation was repeated a
4686
number of times. Time stamps were obtained directly before and after
4687
the operation was performed. The data gathered for the entire set of
4688
operations was then analyzed, generating average (mean), maximum and
4689
minimum values. The sample variance (a measure of how close most
4690
samples are to the mean) was also calculated. The cost of obtaining
4691
the real-time clock timer values was also measured, and was subtracted
4692
from all other times.
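The general shape of such a measurement loop is sketched below. For the sake of a self-contained example it reads the coarse kernel tick counter via cyg_current_time; the actual test program reads the underlying hardware timer for much finer resolution and subtracts the cost of reading the clock, as described above.

#include <cyg/kernel/kapi.h>

#define NSAMPLES 32

void
measure_mutex_lock(void)
{
    static cyg_mutex_t mx;
    cyg_tick_count_t t0, t1, d;
    cyg_tick_count_t total = 0, min = (cyg_tick_count_t) -1, max = 0;
    int i;

    cyg_mutex_init(&mx);
    for (i = 0; i < NSAMPLES; i++) {
        t0 = cyg_current_time();
        cyg_mutex_lock(&mx);
        t1 = cyg_current_time();
        cyg_mutex_unlock(&mx);

        d = t1 - t0;                 /* clock-read cost subtracted separately */
        if (d < min) min = d;
        if (d > max) max = d;
        total += d;
    }
    /* mean = total / NSAMPLES; report mean, min, max (and sample variance). */
}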
4693
          
4694
          
4695
Most kernel functions can be measured separately. In each case, a
4696
reasonable number of iterations are performed. Where the test case
4697
involves a kernel object, for example creating a task, each iteration
4698
is performed on a different object. There is also a set of tests which
4699
measures the interactions between multiple tasks and certain kernel
4700
primitives. Most functions are tested in such a way as to determine
4701
the variations introduced by varying numbers of objects in the system.
4702
For example, the mailbox tests measure the cost of a 'peek' operation
4703
when the mailbox is empty, has a single item, and has multiple items
4704
present. In this way, any effects of the state of the object or how
4705
many items it contains can be determined.
4706
          
4707
          
4708
There are a few things to consider about these measurements. Firstly,
4709
they are quite micro in scale and only measure the operation in
4710
question. These measurements do not adequately describe how the
4711
timings would be perturbed in a real system with multiple interrupting
4712
sources. Secondly, the possible aberration incurred by the real-time
4713
clock (system heartbeat tick) is explicitly avoided. Virtually all
4714
kernel functions have been designed to be interruptible. Thus the
4715
times presented are typical, but best case, since any particular
4716
function may be interrupted by the clock tick processing. This number
4717
is explicitly calculated so that the value may be included in any
4718
deadline calculations required by the end user. Lastly, the reported
4719
measurements were obtained from a system built with all options at
4720
their default values. Kernel instrumentation and asserts are also
4721
disabled for these measurements. Any number of configuration options
4722
can change the measured results, sometimes quite dramatically. For
4723
example, mutexes are using priority inheritance in these measurements.
4724
The numbers will change if the system is built with priority
4725
inheritance on mutex variables turned off.
4726
          
4727
          
4728
The final value that is measured is an estimate of interrupt latency.
4729
This particular value is not explicitly calculated in the test program
4730
used, but rather by instrumenting the kernel itself. The raw number of
4731
timer ticks that elapse between the time the timer generates an
4732
interrupt and the start of the timer ISR is kept in the kernel. These
4733
values are printed by the test program after all other operations have
4734
been tested. Thus this should be a reasonable estimate of the
4735
interrupt latency over time.
4736
          
4737
        
4738
 
4739
        
4740
          Using these Measurements
4741
          
4742
These measurements can be used in a number of ways. The most typical
4743
use will be to compare different real-time kernel offerings on similar
4744
hardware; another will be to estimate the cost of implementing a task
4745
using eCos (applications can be examined to see what effect the kernel
4746
operations will have on the total execution time). Another use would
4747
be to observe how the tuning of the kernel affects overall operation.
4748
          
4749
        
4750
 
4751
        
4752
          Influences on Performance
4753
            
4754
A number of factors can affect real-time performance in a system. One
4755
of the most common factors, yet most difficult to characterize, is the
4756
effect of device drivers and interrupts on system timings. Different
4757
device drivers will have differing requirements as to how long
4758
interrupts are suppressed, for example. The eCos system has been
4759
designed with this in mind, by separating the management of interrupts
4760
(ISR handlers) and the processing required by the interrupt
4761
(DSR, or Deferred Service Routine, handlers). However, since
4762
there is so much variability here, and indeed most device drivers will
4763
come from the end users themselves, these effects cannot be reliably
4764
measured. Attempts have been made to measure the overhead of the
4765
single interrupt that eCos relies on, the real-time clock timer. This
4766
should give you a reasonable idea of the cost of executing interrupt
4767
handling for devices.
4768
          
4769
        
4770
 
4771
       
4772
         Measured Items
4773
         
4774
This section describes the various tests and the numbers presented.
4775
All tests use the C kernel API (available by way of
4776
cyg/kernel/kapi.h). There is a single main thread
4777
in the system that performs the various tests. Additional threads may
4778
be created as part of the testing, but these are short lived and are
4779
destroyed between tests unless otherwise noted. The terminology
4780
“lower priority” means a priority that is less important,
4781
not necessarily lower in numerical value. A higher priority thread
4782
will run in preference to a lower priority thread even though the
4783
priority value of the higher priority thread may be numerically less
4784
than that of the lower priority thread.
4785
          
4786
 
4787
          
4788
            Thread Primitives
4789
            
4790
              
4791
                Create thread
4792
                
4793
This test measures the cyg_thread_create() call.
4794
Each call creates a totally new thread. The set of threads created by
4795
this test will be reused in the subsequent thread primitive tests.
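A short sketch showing this call in use appears at the end of this list.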
4796
                
4797
              
4798
              
4799
                Yield thread
4800
                
4801
This test measures the cyg_thread_yield() call.
4802
For this test, there are no other runnable threads, thus the test
4803
should just measure the overhead of trying to give up the CPU.
4804
                
4805
              
4806
              
4807
                Suspend [suspended] thread
4808
                
4809
This test measures the cyg_thread_suspend() call.
4810
A thread may be suspended multiple times; each thread is already
4811
suspended from its initial creation, and is suspended again.
4812
                
4813
              
4814
              
4815
                Resume thread
4816
                
4817
This test measures the cyg_thread_resume() call.
4818
All of the threads have a suspend count of 2, thus this call does not
4819
make them runnable. This test just measures the overhead of resuming a
4820
thread.
4821
                
4822
              
4823
              
4824
                Set priority
4825
                
4826
This test measures the cyg_thread_set_priority()
4827
call. Each thread, currently suspended, has its priority set to a new
4828
value.
4829
                
4830
              
4831
              
4832
                Get priority
4833
                
4834
This test measures the cyg_thread_get_priority()
4835
call.
4836
                
4837
              
4838
              
4839
                Kill [suspended] thread
4840
                
4841
This test measures the cyg_thread_kill() call.
4842
Each thread in the set is killed. All threads are known to be
4843
suspended before being killed.
4844
                
4845
              
4846
              
4847
                Yield [no other] thread
4848
                
4849
This test measures the cyg_thread_yield() call
4850
again. This is to demonstrate that the
4851
cyg_thread_yield() call has a fixed overhead,
4852
regardless of whether there are other threads in the system.
4853
                
4854
              
4855
              
4856
                Resume [suspended low priority] thread
4857
                
4858
This test measures the cyg_thread_resume() call
4859
again. In this case, the thread being resumed is lower priority than
4860
the main thread, thus it will simply become ready to run but not be
4861
granted the CPU. This test measures the cost of making a thread ready
4862
to run.
4863
                
4864
              
4865
              
4866
                Resume [runnable low priority] thread
4867
                
4868
This test measures the cyg_thread_resume() call
4869
again. In this case, the thread being resumed is lower priority than
4870
the main thread and has already been made runnable, so in fact the
4871
resume call has no effect.
4872
                
4873
              
4874
              
4875
                Suspend [runnable] thread
4876
                
4877
This test measures the cyg_thread_suspend() call
4878
again. In this case, each thread has already been made runnable (by
4879
previous tests).
4880
                
4881
              
4882
              
4883
                Yield [only low priority] thread
4884
                
4885
This test measures the cyg_thread_yield() call.
4886
In this case, there are many other runnable threads, but they are all
4887
lower priority than the main thread, thus no thread switches will take
4888
place.
4889
                
4890
              
4891
              
4892
                Suspend [runnable->not runnable] thread
4893
                
4894
This test measures the cyg_thread_suspend() call
4895
again. The thread being suspended will become non-runnable by this
4896
action.
4897
                
4898
              
4899
              
4900
                Kill [runnable] thread
4901
                
4902
This test measures the cyg_thread_kill() call
4903
again. In this case, the thread being killed is currently runnable,
4904
but lower priority than the main thread.
4905
                
4906
              
4907
              
4908
                Resume [high priority] thread
4909
                
4910
This test measures the cyg_thread_resume() call.
4911
The thread being resumed is higher priority than the main thread, thus
4912
a thread switch will take place on each call. In fact there will be
4913
two thread switches; one to the new higher priority thread and a
4914
second back to the test thread. The resumed thread exits immediately.
4915
                
4916
              
4917
              
4918
                Thread switch
4919
                
4920
This test attempts to measure the cost of switching from one thread to
4921
another. Two equal priority threads are started and they will each
4922
yield to the other for a number of iterations. A time stamp is
4923
gathered in one thread before the
4924
cyg_thread_yield() call and after the call in the
4925
other thread.
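For reference, the call sequence exercised by the thread tests above looks roughly like the following sketch; the priority, stack size and thread name are arbitrary choices for the example.

#include <cyg/kernel/kapi.h>

#define STACK_SIZE 4096

static unsigned char stack[STACK_SIZE];
static cyg_thread    thread_data;
static cyg_handle_t  thread_handle;

static void
worker(cyg_addrword_t data)
{
    for (;;) {
        /* do some work, then let threads of equal priority run */
        cyg_thread_yield();
    }
}

void
spawn_worker(void)
{
    cyg_thread_create(10,                 /* priority, assumed valid here */
                      worker, 0, "worker",
                      stack, STACK_SIZE,
                      &thread_handle, &thread_data);
    cyg_thread_resume(thread_handle);     /* threads start suspended      */
}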
4926
                
4927
              
4928
            
4929
          
4930
 
4931
          
4932
            Scheduler Primitives
4933
            
4934
              
4935
                Scheduler lock
4936
                
4937
This test measures the cyg_scheduler_lock() call.
4938
                
4939
              
4940
              
4941
                Scheduler unlock [0 threads]
4942
                
4943
This test measures the cyg_scheduler_unlock()
4944
call. There are no other threads in the system and the unlock happens
4945
immediately after a lock so there will be no pending DSRs to
4946
run.
4947
                
4948
              
4949
              
4950
                Scheduler unlock [1 suspended thread]
4951
                
4952
This test measures the cyg_scheduler_unlock()
4953
call. There is one other thread in the system which is currently
4954
suspended.
4955
                
4956
              
4957
              
4958
                Scheduler unlock [many suspended threads]
4959
                
4960
This test measures the cyg_scheduler_unlock()
4961
call. There are many other threads in the system which are currently
4962
suspended. The purpose of this test is to determine the cost of having
4963
additional threads in the system when the scheduler is activated by
4964
way of cyg_scheduler_unlock().
4965
                
4966
              
4967
              
4968
                Scheduler unlock [many low priority threads]
4969
                
4970
This test measures the cyg_scheduler_unlock()
4971
call. There are many other threads in the system which are runnable
4972
but are lower priority than the main thread. The purpose of this test
4973
is to determine the cost of having additional threads in the system
4974
when the scheduler is activated by way of
4975
cyg_scheduler_unlock().
4976
                
4977
              
4978
            
4979
          
4980
 
4981
          
4982
            Mutex Primitives
4983
            
4984
              
4985
                Init mutex
4986
                
4987
This test measures the cyg_mutex_init() call. A
4988
number of separate mutex variables are created. The purpose of this
4989
test is to measure the cost of creating a new mutex and introducing it
4990
to the system.
4991
                
4992
              
4993
              
4994
                Lock [unlocked] mutex
4995
                
4996
This test measures the cyg_mutex_lock() call. The
4997
purpose of this test is to measure the cost of locking a mutex which
4998
is currently unlocked. There are no other threads executing in the
4999
system while this test runs.
5000
                
5001
              
5002
              
5003
                Unlock [locked] mutex
5004
                
5005
This test measures the cyg_mutex_unlock() call.
5006
The purpose of this test is to measure the cost of unlocking a mutex
5007
which is currently locked. There are no other threads executing in the
5008
system while this test runs.
5009
                
5010
              
5011
              
5012
                Trylock [unlocked] mutex
5013
                
5014
This test measures the cyg_mutex_trylock() call.
5015
The purpose of this test is to measure the cost of locking a mutex
5016
which is currently unlocked. There are no other threads executing in
5017
the system while this test runs.
5018
                
5019
              
5020
              
5021
                Trylock [locked] mutex
5022
                
5023
This test measures the cyg_mutex_trylock() call.
5024
The purpose of this test is to measure the cost of locking a mutex
5025
which is currently locked. There are no other threads executing in the
5026
system while this test runs.
5027
                
5028
              
5029
              
5030
                Destroy mutex
5031
                
5032
This test measures the cyg_mutex_destroy() call.
5033
The purpose of this test is to measure the cost of deleting a mutex
5034
from the system. There are no other threads executing in the system
5035
while this test runs.
5036
                
5037
              
5038
              
5039
                Unlock/Lock mutex
5040
                
5041
This test attempts to measure the cost of unlocking a mutex for which
5042
there is another higher priority thread waiting. When the mutex is
5043
unlocked, the higher priority waiting thread will immediately take the
5044
lock. The time from when the unlock is issued until after the lock
5045
succeeds in the second thread is measured, thus giving the round-trip
5046
or circuit time for this type of synchronizer.
5047
                
5048
              
5049
            
5050
          
5051
 
5052
          
5053
            Mailbox Primitives
5054
            
5055
              
5056
                Create mbox
5057
                
5058
This test measures the cyg_mbox_create() call. A
5059
number of separate mailboxes are created. The purpose of this test is
5060
to measure the cost of creating a new mailbox and introducing it to
5061
the system.
5062
                
5063
              
5064
              
5065
                Peek [empty] mbox
5066
                
5067
This test measures the cyg_mbox_peek() call. An
5068
attempt is made to peek the value in each mailbox, which is currently
5069
empty. The purpose of this test is to measure the cost of checking a
5070
mailbox for a value without blocking.
5071
                
5072
              
5073
              
5074
                Put [first] mbox
5075
                
5076
This test measures the cyg_mbox_put() call. One
5077
item is added to a currently empty mailbox. The purpose of this test
5078
is to measure the cost of adding an item to a mailbox. There are no
5079
other threads currently waiting for mailbox items to arrive.
5080
                
5081
              
5082
              
5083
                Peek [1 msg] mbox
5084
                
5085
This test measures the cyg_mbox_peek() call. An
5086
attempt is made to peek the value in each mailbox, which contains a
5087
single item. The purpose of this test is to measure the cost of
5088
checking a mailbox which has data to deliver.
5089
                
5090
              
5091
              
5092
                Put [second] mbox
5093
                
5094
This test measures the cyg_mbox_put() call. A
5095
second item is added to a mailbox. The purpose of this test is to
5096
measure the cost of adding an additional item to a mailbox. There are
5097
no other threads currently waiting for mailbox items to arrive.
5098
                
5099
              
5100
              
5101
                Peek [2 msgs] mbox
5102
                
5103
This test measures the cyg_mbox_peek() call. An
5104
attempt is made to peek the value in each mailbox, which contains two
5105
items. The purpose of this test is to measure the cost of checking a
5106
mailbox which has data to deliver.
5107
                
5108
              
5109
              
5110
                Get [first] mbox
5111
                
5112
This test measures the cyg_mbox_get() call. The
5113
first item is removed from a mailbox that currently contains two
5114
items. The purpose of this test is to measure the cost of obtaining an
5115
item from a mailbox without blocking.
5116
              
5117
              
5118
              
5119
                Get [second] mbox
5120
                
5121
This test measures the cyg_mbox_get() call. The
5122
last item is removed from a mailbox that currently contains one item.
5123
The purpose of this test is to measure the cost of obtaining an item
5124
from a mailbox without blocking.
5125
                
5126
              
5127
              
5128
                Tryput [first] mbox
5129
                
5130
This test measures the cyg_mbox_tryput() call. A
5131
single item is added to a currently empty mailbox. The purpose of this
5132
test is to measure the cost of adding an item to a mailbox.
5133
                
5134
              
5135
              
5136
                Peek item [non-empty] mbox
5137
                
5138
This test measures the cyg_mbox_peek_item() call.
5139
A single item is fetched from a mailbox that contains a single item.
5140
The purpose of this test is to measure the cost of obtaining an item
5141
without disturbing the mailbox.
5142
                
5143
              
5144
              
5145
                Tryget [non-empty] mbox
5146
                
5147
This test measures the cyg_mbox_tryget() call. A
5148
single item is removed from a mailbox that contains exactly one item.
5149
The purpose of this test is to measure the cost of obtaining one item
5150
from a non-empty mailbox.
5151
                
5152
              
5153
              
5154
                Peek item [empty] mbox
5155
                
5156
This test measures the cyg_mbox_peek_item() call.
5157
An attempt is made to fetch an item from a mailbox that is empty. The
5158
purpose of this test is to measure the cost of trying to obtain an
5159
item when the mailbox is empty.
5160
                
5161
              
5162
              
5163
                Tryget [empty] mbox
5164
                
5165
This test measures the cyg_mbox_tryget() call. An
5166
attempt is made to fetch an item from a mailbox that is empty. The
5167
purpose of this test is to measure the cost of trying to obtain an
5168
item when the mailbox is empty.
5169
                
5170
              
5171
              
5172
                Waiting to get mbox
5173
                
5174
This test measures the cyg_mbox_waiting_to_get()
5175
call. The purpose of this test is to measure the cost of determining
5176
how many threads are waiting to obtain a message from this mailbox.
5177
                
5178
              
5179
              
5180
                Waiting to put mbox
5181
                
5182
This test measures the cyg_mbox_waiting_to_put()
5183
call. The purpose of this test is to measure the cost of determining
5184
how many threads are waiting to put a message into this mailbox.
5185
                
5186
              
5187
              
5188
                Delete mbox
5189
                
5190
This test measures the cyg_mbox_delete() call.
5191
The purpose of this test is to measure the cost of destroying a
5192
mailbox and removing it from the system.
5193
                
5194
              
5195
              
5196
                Put/Get mbox
5197
                
5198
In this round-trip test, one thread is sending data to a mailbox that
5199
is being consumed by another thread. The time from when the data is
5200
put into the mailbox until it has been delivered to the waiting thread
5201
is measured. Note that this time will contain a thread switch.
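For reference, the mailbox calls exercised above fit together roughly as in this sketch; the message value and the function name are invented for the example.

#include <cyg/kernel/kapi.h>

static cyg_mbox     mbox_data;
static cyg_handle_t mbox;

void
mailbox_example(void)
{
    int   message = 42;
    void *received;

    cyg_mbox_create(&mbox, &mbox_data);

    /* Non-blocking check, then a put and a blocking get. */
    if (cyg_mbox_peek(mbox) == 0) {
        cyg_mbox_put(mbox, &message);    /* may block if the mailbox is full  */
        received = cyg_mbox_get(mbox);   /* blocks until an item is available */
        (void) received;
    }

    cyg_mbox_delete(mbox);
}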
5202
                
5203
              
5204
            
5205
          
5206
 
5207
          
5208
            Semaphore Primitives
5209
            
5210
              
5211
                Init semaphore
5212
                
5213
This test measures the cyg_semaphore_init() call.
5214
A number of separate semaphore objects are created and introduced to
5215
the system. The purpose of this test is to measure the cost of
5216
creating a new semaphore.
5217
                
5218
              
5219
              
5220
                Post [0] semaphore
5221
                
5222
This test measures the cyg_semaphore_post() call.
5223
Each semaphore currently has a value of 0 and there are no other
5224
threads in the system. The purpose of this test is to measure the
5225
overhead cost of posting to a semaphore. This cost will differ if
5226
there is a thread waiting for the semaphore.
5227
                
5228
              
5229
              
5230
                Wait [1] semaphore
5231
                
5232
This test measures the cyg_semaphore_wait() call.
5233
The semaphore has a current value of 1 so the call is non-blocking.
5234
The purpose of the test is to measure the overhead of
5235
“taking” a semaphore.
5236
                
5237
              
5238
              
5239
                Trywait [0] semaphore
5240
                
5241
This test measures the cyg_semaphore_trywait()
5242
call. The semaphore has a value of 0 when the call is made. The
5243
purpose of this test is to measure the cost of seeing if a semaphore
5244
can be “taken” without blocking. In this case, the answer
5245
would be no.
5246
                
5247
              
5248
              
5249
                Trywait [1] semaphore
5250
                
5251
This test measures the cyg_semaphore_trywait()
5252
call. The semaphore has a value of 1 when the call is made. The
5253
purpose of this test is to measure the cost of seeing if a semaphore
5254
can be “taken” without blocking. In this case, the answer
5255
would be yes.
5256
                
5257
              
5258
              
5259
                Peek semaphore
5260
                
5261
This test measures the cyg_semaphore_peek() call.
5262
The purpose of this test is to measure the cost of obtaining the
5263
current semaphore count value.
5264
                
5265
              
5266
              
5267
                Destroy semaphore
5268
                
5269
This test measures the cyg_semaphore_destroy()
5270
call. The purpose of this test is to measure the cost of deleting a
5271
semaphore from the system.
5272
                
5273
              
5274
              
5275
                Post/Wait semaphore
5276
                
5277
In this round-trip test, two threads are passing control back and
5278
forth by using a semaphore. The time from when one thread calls
5279
cyg_semaphore_post() until the other thread
5280
completes its cyg_semaphore_wait() is measured.
5281
Note that each iteration of this test will involve a thread switch.
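The round-trip pattern used by the Post/Wait test can be pictured roughly as follows; the thread priorities, names and stack sizes are arbitrary, and the timestamping itself is omitted.

#include <cyg/kernel/kapi.h>

#define STACK_SIZE 4096

static cyg_sem_t     sem_ping, sem_pong;
static unsigned char stack_a[STACK_SIZE], stack_b[STACK_SIZE];
static cyg_thread    thread_a_data, thread_b_data;
static cyg_handle_t  thread_a, thread_b;

static void
ping(cyg_addrword_t data)
{
    for (;;) {
        cyg_semaphore_post(&sem_ping);   /* a time stamp is taken here ...  */
        cyg_semaphore_wait(&sem_pong);
    }
}

static void
pong(cyg_addrword_t data)
{
    for (;;) {
        cyg_semaphore_wait(&sem_ping);   /* ... and compared with one here  */
        cyg_semaphore_post(&sem_pong);
    }
}

void
start_round_trip(void)
{
    cyg_semaphore_init(&sem_ping, 0);
    cyg_semaphore_init(&sem_pong, 0);
    cyg_thread_create(10, ping, 0, "ping", stack_a, STACK_SIZE,
                      &thread_a, &thread_a_data);
    cyg_thread_create(10, pong, 0, "pong", stack_b, STACK_SIZE,
                      &thread_b, &thread_b_data);
    cyg_thread_resume(thread_a);
    cyg_thread_resume(thread_b);
}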
5282
                
5283
              
5284
            
5285
          
5286
 
5287
          
5288
            Counters
5289
            
5290
              
5291
                Create counter
5292
                
5293
This test measures the cyg_counter_create() call.
5294
A number of separate counters are created. The purpose of this test is
5295
to measure the cost of creating a new counter and introducing it to
5296
the system.
5297
                
5298
              
5299
              
5300
                Get counter value
5301
                
5302
This test measures the
5303
cyg_counter_current_value() call. The current
5304
value of each counter is obtained.
5305
                
5306
              
5307
              
5308
                Set counter value
5309
                
5310
This test measures the cyg_counter_set_value()
5311
call. Each counter is set to a new value.
5312
                
5313
              
5314
              
5315
                Tick counter
5316
                
5317
This test measures the cyg_counter_tick() call.
5318
Each counter is “ticked” once.
5319
                
5320
              
5321
              
5322
                Delete counter
5323
                
5324
This test measures the cyg_counter_delete() call.
5325
Each counter is deleted from the system. The purpose of this test is
5326
to measure the cost of deleting a counter object.
5327
                
5328
              
5329
            
5330
          
5331
 
5332
          
5333
            Alarms
5334
            
5335
              
5336
                Create alarm
5337
                
5338
This test measures the cyg_alarm_create() call. A
5339
number of separate alarms are created, all attached to the same
5340
counter object. The purpose of this test is to measure the cost of
5341
creating a new alarm and introducing it to the system.
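A sketch showing counters and alarms in use appears at the end of this list.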
5342
                
5343
              
5344
              
5345
                Initialize alarm
5346
                
5347
This test measures the cyg_alarm_initialize()
5348
call. Each alarm is initialized to a small value.
5349
                
5350
              
5351
              
5352
                Disable alarm
5353
                
5354
This test measures the cyg_alarm_disable() call.
5355
Each alarm is explicitly disabled.
5356
                
5357
              
5358
              
5359
                Enable alarm
5360
                
5361
This test measures the cyg_alarm_enable() call.
5362
Each alarm is explicitly enabled.
5363
                
5364
              
5365
              
5366
                Delete alarm
5367
                
5368
This test measures the cyg_alarm_delete() call.
5369
Each alarm is destroyed. The purpose of this test is to measure the
5370
cost of deleting an alarm and removing it from the system.
5371
                
5372
              
5373
              
5374
                Tick counter [1 alarm]
5375
                
5376
This test measures the cyg_counter_tick() call. A
5377
counter is created that has a single alarm attached to it. The purpose
5378
of this test is to measure the cost of “ticking” a counter
5379
when it has a single attached alarm. In this test, the alarm is not
5380
activated (fired).
5381
                
5382
              
5383
              
5384
                Tick counter [many alarms]
5385
                
5386
This test measures the cyg_counter_tick() call. A
5387
counter is created that has multiple alarms attached to it. The
5388
purpose of this test is to measure the cost of “ticking” a
5389
counter when it has many attached alarms. In this test, the alarms are
5390
not activated (fired).
5391
                
5392
              
5393
              
5394
                Tick & fire counter [1 alarm]
5395
                
5396
This test measures the cyg_counter_tick() call. A
5397
counter is created that has a single alarm attached to it. The purpose
5398
of this test is to measure the cost of “ticking” a counter
5399
when it has a single attached alarm. In this test, the alarm is
5400
activated (fired). Thus the measured time will include the overhead of
5401
calling the alarm callback function.
5402
                
5403
              
5404
              
5405
                Tick & fire counter [many alarms]
5406
                
5407
This test measures the cyg_counter_tick() call. A
5408
counter is created that has multiple alarms attached to it. The
5409
purpose of this test is to measure the cost of “ticking” a
5410
counter when it has many attached alarms. In this test, the alarms are
5411
activated (fired). Thus the measured time will include the overhead of
5412
calling the alarm callback function.
5413
                
5414
              
5415
              
5416
                Alarm latency [0 threads]
5417
                
5418
This test attempts to measure the latency in calling an alarm callback
5419
function. The time from the clock interrupt until the alarm function
5420
is called is measured. In this test, there are no threads that can be
5421
run, other than the system idle thread, when the clock interrupt
5422
occurs (all threads are suspended).
5423
                
5424
              
5425
              
5426
                Alarm latency [2 threads]
5427
                
5428
This test attempts to measure the latency in calling an alarm callback
5429
function. The time from the clock interrupt until the alarm function
5430
is called is measured. In this test, there are exactly two threads
5431
which are running when the clock interrupt occurs. They are simply
5432
passing control back and forth by way of the
5433
cyg_thread_yield() call. The purpose of this test
5434
is to measure the variations in the latency when there are executing
5435
threads.
5436
                
5437
              
5438
              
5439
                Alarm latency [many threads]
5440
                
5441
This test attempts to measure the latency in calling an alarm callback
5442
function. The time from the clock interrupt until the alarm function
5443
is called is measured. In this test, there are a number of threads
5444
which are running when the clock interrupt occurs. They are simply
5445
passing control back and forth by way of the
5446
cyg_thread_yield() call. The purpose of this test
5447
is to measure the variations in the latency when there are many
5448
executing threads.
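For reference, the counter and alarm calls exercised above fit together roughly as in the following sketch; the trigger and interval values are arbitrary.

#include <cyg/kernel/kapi.h>

static cyg_counter  counter_data;
static cyg_handle_t counter;
static cyg_alarm    alarm_data;
static cyg_handle_t alarm;

static void
alarm_fn(cyg_handle_t alarm_handle, cyg_addrword_t data)
{
    /* Called when the alarm fires; if the counter is driven by the system
       clock this runs in DSR context, so the DSR restrictions apply. */
}

void
setup_counter_and_alarm(void)
{
    cyg_counter_create(&counter, &counter_data);

    /* Attach an alarm to the counter: fire when the counter reaches 10,
       then every 10 ticks thereafter. */
    cyg_alarm_create(counter, alarm_fn, 0, &alarm, &alarm_data);
    cyg_alarm_initialize(alarm, 10, 10);

    /* Advancing the counter is what eventually fires the alarm. */
    cyg_counter_tick(counter);
}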
5449
                
5450
              
5451
            
5452
          
5453
 
5454
    
5455
 
5456
  
5457
 
5458
5459
 
5460
