@c
@c  COPYRIGHT (c) 1988-2002.
@c  On-Line Applications Research Corporation (OAR).
@c  All rights reserved.
@c
@c  mp.t,v 1.13 2002/01/17 21:47:47 joel Exp
@c

@chapter Multiprocessing Manager

@cindex multiprocessing

@section Introduction

In multiprocessor real-time systems, new requirements, such as sharing data
and global resources between processors, are introduced.  This requires an
efficient and reliable communications vehicle which allows all processors to
communicate with each other as necessary.  In addition, the ramifications of
multiple processors affect each and every characteristic of a real-time
system, almost always making them more complicated.

RTEMS addresses these issues by providing simple and flexible real-time
multiprocessing capabilities.  The executive easily lends itself to both
tightly-coupled and loosely-coupled configurations of the target system
hardware.  In addition, RTEMS supports systems composed of both homogeneous
and heterogeneous mixtures of processors and target boards.

A major design goal of the RTEMS executive was to transcend the physical
boundaries of the target hardware configuration.  This goal is achieved by
presenting the application software with a logical view of the target system
where the boundaries between processor nodes are transparent.  As a result,
the application developer may designate objects such as tasks, queues,
events, signals, semaphores, and memory blocks as global objects.  These
global objects may then be accessed by any task regardless of the physical
location of the object and the accessing task.  RTEMS automatically
determines that the object being accessed resides on another processor and
performs the actions required to access the desired object.  Simply stated,
RTEMS allows the entire system, both hardware and software, to be viewed
logically as a single system.

@section Background

@cindex multiprocessing topologies

RTEMS makes no assumptions regarding the connection media or topology of a
multiprocessor system.  The tasks which compose a particular application can
be spread among as many processors as needed to satisfy the application's
timing requirements.  The application tasks can interact using a subset of
the RTEMS directives as if they were on the same processor.  These
directives allow application tasks to exchange data, communicate, and
synchronize regardless of which processor they reside upon.

The RTEMS multiprocessor execution model is multiple instruction streams
with multiple data streams (MIMD).  This execution model has each of the
processors executing code independent of the other processors.  Because of
this parallelism, the application designer can more easily guarantee
deterministic behavior.

By supporting heterogeneous environments, RTEMS allows the systems designer
to select the most efficient processor for each subsystem of the
application.  Configuring RTEMS for a heterogeneous environment is no more
difficult than for a homogeneous one.  In keeping with the RTEMS philosophy
of providing transparent physical node boundaries, the minimal heterogeneous
processing required is isolated in the MPCI layer.

@subsection Nodes

@cindex nodes, definition

A processor in an RTEMS system is referred to as a node.  Each node is
assigned a unique non-zero node number by the application designer.  RTEMS
assumes that node numbers are assigned consecutively from one to the
@code{maximum_nodes} configuration parameter.  The node number, node, and
the maximum number of nodes, maximum_nodes, in a system are found in the
Multiprocessor Configuration Table.  The maximum_nodes field and the number
of global objects, maximum_global_objects, are required to be the same on
all nodes in a system.

The node number is used by RTEMS to identify each node when performing
remote operations.  Thus, the Multiprocessor Communications Interface Layer
(MPCI) must be able to route messages based on the node number.

@subsection Global Objects

@cindex global objects, definition

All RTEMS objects which are created with the GLOBAL attribute will be known
on all other nodes.  Global objects can be referenced from any node in the
system, although certain directive specific restrictions (e.g. one cannot
delete a remote object) may apply.  A task does not have to be global to
perform operations involving remote objects.  The maximum number of global
objects in the system is user configurable and can be found in the
maximum_global_objects field in the Multiprocessor Configuration Table.  The
distribution of tasks to processors is performed during the application
design phase.  Dynamic task relocation is not supported by RTEMS.

@subsection Global Object Table

@cindex global objects table

RTEMS maintains two tables containing object information on every node in a
multiprocessor system: a local object table and a global object table.  The
local object table on each node is unique and contains information for all
objects created on this node whether those objects are local or global.  The
global object table contains information regarding all global objects in the
system and, consequently, is the same on every node.

Since each node must maintain an identical copy of the global object table,
the maximum number of entries in each copy of the table must be the same.
The maximum number of entries in each copy is determined by the
maximum_global_objects parameter in the Multiprocessor Configuration Table.
This parameter, as well as the maximum_nodes parameter, is required to be
the same on all nodes.  To maintain consistency among the table copies,
every node in the system must be informed of the creation or deletion of a
global object.

@subsection Remote Operations

@cindex MPCI and remote operations

When an application performs an operation on a remote global object, RTEMS
must generate a Remote Request (RQ) message and send it to the appropriate
node.  After completing the requested operation, the remote node will build
a Remote Response (RR) message and send it to the originating node.
Messages generated as a side-effect of a directive (such as deleting a
global task) are known as Remote Processes (RP) and do not require the
receiving node to respond.

Other than taking slightly longer to execute directives on remote objects,
the application is unaware of the location of the objects it acts upon.  The
exact amount of overhead required for a remote operation is dependent on the
media connecting the nodes and, to a lesser degree, on the efficiency of the
user-provided MPCI routines.

The following shows the typical transaction sequence during a remote
operation:

@enumerate

@item The application issues a directive accessing a remote global object.

@item RTEMS determines the node on which the object resides.

@item RTEMS calls the user-provided MPCI routine GET_PACKET to obtain a
packet in which to build an RQ message.

@item After building a message packet, RTEMS calls the user-provided MPCI
routine SEND_PACKET to transmit the packet to the node on which the object
resides (referred to as the destination node).

@item The calling task is blocked until the RR message arrives, and control
of the processor is transferred to another task.

@item The MPCI layer on the destination node senses the arrival of a packet
(commonly in an ISR), and calls the
@code{@value{DIRPREFIX}multiprocessing_announce} directive.  This directive
readies the Multiprocessing Server.

@item The Multiprocessing Server calls the user-provided MPCI routine
RECEIVE_PACKET, performs the requested operation, builds an RR message, and
returns it to the originating node.

@item The MPCI layer on the originating node senses the arrival of a packet
(typically via an interrupt), and calls the RTEMS
@code{@value{DIRPREFIX}multiprocessing_announce} directive.  This directive
readies the Multiprocessing Server.

@item The Multiprocessing Server calls the user-provided MPCI routine
RECEIVE_PACKET, readies the original requesting task, and blocks until
another packet arrives.  Control is transferred to the original task which
then completes processing of the directive.

@end enumerate
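
The flow above can be condensed into an illustrative, single-address-space
sketch.  All names below are invented for this illustration and are not the
RTEMS API; note how the reply reuses the request's packet buffer, as RTEMS
itself does.

```c
#include <string.h>

typedef struct { int src; int dst; char body[32]; } packet;

static packet pool[4];     /* toy packet pool managed by the "MPCI" */
static int    in_use[4];

/* GET_PACKET analogue: hand out the first free buffer. */
static packet *get_packet(void) {
  for (int i = 0; i < 4; ++i)
    if (!in_use[i]) { in_use[i] = 1; return &pool[i]; }
  return 0;
}

/* Destination node: service the RQ, reuse the same buffer for the RR. */
static void service_request(packet *p) {
  int from = p->src;
  p->src = p->dst;
  p->dst = from;
  strcpy(p->body, "RR:done");
}

/* Originating node: build an RQ, "send" it, then read the RR back. */
static const char *remote_operation(void) {
  packet *p = get_packet();
  p->src = 1;                      /* originating node   */
  p->dst = 2;                      /* destination node   */
  strcpy(p->body, "RQ:obtain");
  service_request(p);              /* stands in for SEND_PACKET plus the
                                      remote Multiprocessing Server      */
  return p->body;
}
```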

If an uncorrectable error occurs in the user-provided MPCI layer, the fatal
error handler should be invoked.  RTEMS assumes the reliable transmission
and reception of messages by the MPCI and makes no attempt to detect or
correct errors.

@subsection Proxies

@cindex proxy, definition

A proxy is an RTEMS data structure which resides on a remote node and is
used to represent a task which must block as part of a remote operation.
This action can occur as part of the
@code{@value{DIRPREFIX}semaphore_obtain} and
@code{@value{DIRPREFIX}message_queue_receive} directives.  If the object
were local, the task's control block would be available for modification to
indicate it was blocking on a message queue or semaphore.  However, the
task's control block resides only on the same node as the task.  As a
result, the remote node must allocate a proxy to represent the task until it
can be readied.

The maximum number of proxies is defined in the Multiprocessor Configuration
Table.  Each node in a multiprocessor system may require a different number
of proxies to be configured.  The distribution of proxy control blocks is
application dependent and is different from the distribution of tasks.

@subsection Multiprocessor Configuration Table

The Multiprocessor Configuration Table contains information needed by RTEMS
when used in a multiprocessor system.  This table is discussed in detail in
the section Multiprocessor Configuration Table of the Configuring a System
chapter.

@section Multiprocessor Communications Interface Layer

The Multiprocessor Communications Interface Layer (MPCI) is a set of
user-provided procedures which enable the nodes in a multiprocessor system
to communicate with one another.  These routines are invoked by RTEMS at
various times in the preparation and processing of remote requests.
Interrupts are enabled when an MPCI procedure is invoked.  It is assumed
that if the execution mode and/or interrupt level are altered by the MPCI
layer, they will be restored prior to returning to RTEMS.

@cindex MPCI, definition

The MPCI layer is responsible for managing a pool of buffers called packets
and for sending these packets between system nodes.  Packet buffers contain
the messages sent between the nodes.  Typically, the MPCI layer will
encapsulate the packet within an envelope which contains the information
needed by the MPCI layer.  The number of packets available is dependent on
the MPCI layer implementation.

@cindex MPCI entry points

The entry points to the routines in the user's MPCI layer should be placed
in the Multiprocessor Communications Interface Table.  The user must provide
entry points for each of the following table entries in a multiprocessor
system:

@itemize @bullet
@item initialization    initialize the MPCI
@item get_packet        obtain a packet buffer
@item return_packet     return a packet buffer
@item send_packet       send a packet to another node
@item receive_packet    called to get an arrived packet
@end itemize
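
As a sketch, the table can be pictured as a record of function pointers, one
per entry above.  The type and field names below are invented for this
illustration (the actual RTEMS table type and its fields are defined by the
executive's headers), and the stub routines stand in for a real transport:

```c
#include <stddef.h>

/* Hypothetical stand-ins for the MPCI types -- illustration only. */
typedef struct mpci_packet mpci_packet;
typedef void (*mpci_init_fn)(void);
typedef void (*mpci_get_fn)(mpci_packet **);
typedef void (*mpci_return_fn)(mpci_packet *);
typedef void (*mpci_send_fn)(unsigned node, mpci_packet *);
typedef void (*mpci_recv_fn)(mpci_packet **);

typedef struct {
  mpci_init_fn   initialization;   /* initialize the MPCI            */
  mpci_get_fn    get_packet;       /* obtain a packet buffer         */
  mpci_return_fn return_packet;    /* return a packet buffer         */
  mpci_send_fn   send_packet;      /* send a packet to another node  */
  mpci_recv_fn   receive_packet;   /* fetch an arrived packet        */
} mpci_table;

/* Stubs standing in for a real shared-memory or network transport. */
static void my_init(void)                          { /* set up pool  */ }
static void my_get(mpci_packet **p)                { *p = NULL;        }
static void my_return(mpci_packet *p)              { (void)p;          }
static void my_send(unsigned node, mpci_packet *p) { (void)node; (void)p; }
static void my_recv(mpci_packet **p)               { *p = NULL; /* none */ }

static const mpci_table user_mpci = {
  my_init, my_get, my_return, my_send, my_recv
};
```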

A packet is sent by RTEMS in each of the following situations:

@itemize @bullet
@item an RQ is generated on an originating node;
@item an RR is generated on a destination node;
@item a global object is created;
@item a global object is deleted;
@item a local task blocked on a remote object is deleted;
@item during system initialization to check for system consistency.
@end itemize

If the target hardware supports it, the arrival of a packet at a node may
generate an interrupt.  Otherwise, the real-time clock ISR can check for the
arrival of a packet.  In any case, the
@code{@value{DIRPREFIX}multiprocessing_announce} directive must be called to
announce the arrival of a packet.  After exiting the ISR, control will be
passed to the Multiprocessing Server to process the packet.  The
Multiprocessing Server will call the get_packet entry to obtain a packet
buffer and the receive_packet entry to copy the message into the buffer
obtained.

@subsection INITIALIZATION

The INITIALIZATION component of the user-provided MPCI layer is called as
part of the @code{@value{DIRPREFIX}initialize_executive} directive to
initialize the MPCI layer and associated hardware.  It is invoked
immediately after all of the device drivers have been initialized.  This
component should adhere to the following prototype:

@ifset is-C
@findex rtems_mpci_entry
@example
@group
rtems_mpci_entry user_mpci_initialization(
  rtems_configuration_table *configuration
);
@end group
@end example
@end ifset

@ifset is-Ada
@example
procedure User_MPCI_Initialization (
   Configuration : in     RTEMS.Configuration_Table_Pointer
);
@end example
@end ifset

where configuration is the address of the user's Configuration Table.
Operations on global objects cannot be performed until this component is
invoked.  The INITIALIZATION component is invoked only once in the life of
any system.  If the MPCI layer cannot be successfully initialized, the fatal
error manager should be invoked by this routine.

One of the primary functions of the MPCI layer is to provide the executive
with packet buffers.  The INITIALIZATION routine must create and initialize
a pool of packet buffers.  There must be enough packet buffers so RTEMS can
obtain one whenever needed.

@subsection GET_PACKET

The GET_PACKET component of the user-provided MPCI layer is called when
RTEMS must obtain a packet buffer to send or broadcast a message.  This
component should adhere to the following prototype:

@ifset is-C
@example
@group
rtems_mpci_entry user_mpci_get_packet(
  rtems_packet_prefix **packet
);
@end group
@end example
@end ifset

@ifset is-Ada
@example
procedure User_MPCI_Get_Packet (
   Packet : access RTEMS.Packet_Prefix_Pointer
);
@end example
@end ifset

where packet is the address of a pointer to a packet.  This routine always
succeeds and, upon return, packet will contain the address of a packet.  If,
for any reason, a packet cannot be successfully obtained, then the fatal
error manager should be invoked.

RTEMS has been optimized to avoid the need for obtaining a packet each time
a message is sent or broadcast.  For example, RTEMS sends response messages
(RR) back to the originator in the same packet in which the request message
(RQ) arrived.

@subsection RETURN_PACKET

The RETURN_PACKET component of the user-provided MPCI layer is called when
RTEMS needs to release a packet to the free packet buffer pool.  This
component should adhere to the following prototype:

@ifset is-C
@example
@group
rtems_mpci_entry user_mpci_return_packet(
  rtems_packet_prefix *packet
);
@end group
@end example
@end ifset

@ifset is-Ada
@example
procedure User_MPCI_Return_Packet (
   Packet : in     RTEMS.Packet_Prefix_Pointer
);
@end example
@end ifset

where packet is the address of a packet.  If the packet cannot be
successfully returned, the fatal error manager should be invoked.

@subsection RECEIVE_PACKET

The RECEIVE_PACKET component of the user-provided MPCI layer is called when
RTEMS needs to obtain a packet which has previously arrived.  This component
should adhere to the following prototype:

@ifset is-C
@example
@group
rtems_mpci_entry user_mpci_receive_packet(
  rtems_packet_prefix **packet
);
@end group
@end example
@end ifset

@ifset is-Ada
@example
procedure User_MPCI_Receive_Packet (
   Packet : access RTEMS.Packet_Prefix_Pointer
);
@end example
@end ifset

where packet is a pointer to the address of a packet in which to place the
message from another node.  If a message is available, then packet will
contain the address of the message from another node.  If no messages are
available, then packet should contain NULL.

@subsection SEND_PACKET

The SEND_PACKET component of the user-provided MPCI layer is called when
RTEMS needs to send a packet containing a message to another node.  This
component should adhere to the following prototype:

@ifset is-C
@example
@group
rtems_mpci_entry user_mpci_send_packet(
  rtems_unsigned32       node,
  rtems_packet_prefix  **packet
);
@end group
@end example
@end ifset

@ifset is-Ada
@example
procedure User_MPCI_Send_Packet (
   Node   : in     RTEMS.Unsigned32;
   Packet : access RTEMS.Packet_Prefix_Pointer
);
@end example
@end ifset

where node is the node number of the destination and packet is the address
of a packet containing a message.  If the packet cannot be successfully
sent, the fatal error manager should be invoked.

If node is set to zero, the packet is to be broadcast to all other nodes in
the system.  Although some MPCI layers will be built upon hardware which
supports a broadcast mechanism, others may be required to generate a copy of
the packet for each node in the system.
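
A sketch of that fallback, with invented names: when node is zero, the layer
unicasts one copy to every node except itself.

```c
#define MAX_NODES 4          /* illustrative maximum_nodes value */

static int sends_to[MAX_NODES + 1];   /* records unicasts per node */

static void hardware_unicast(unsigned node) { sends_to[node]++; }

/* Illustrative SEND_PACKET analogue: node == 0 means broadcast, so a
   layer without hardware broadcast support loops over the other nodes. */
static void send_packet(unsigned local_node, unsigned node) {
  if (node != 0) {                       /* normal directed send      */
    hardware_unicast(node);
    return;
  }
  for (unsigned n = 1; n <= MAX_NODES; ++n)
    if (n != local_node)
      hardware_unicast(n);               /* one copy per other node   */
}
```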

@c XXX packet_prefix structure needs to be defined in this document
Many MPCI layers use the @code{packet_length} field of the
@code{@value{DIRPREFIX}packet_prefix} portion of the packet to avoid sending
unnecessary data.  This is especially useful if the media connecting the
nodes is relatively slow.

The to_convert field of the MP_packet_prefix portion of the packet indicates
how much of the packet (in @code{@value{DIRPREFIX}unsigned32}'s) may require
conversion in a heterogeneous system.
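
For example, a receiving MPCI layer on a processor with the opposite byte
order might swap exactly the first to_convert 32-bit words and pass the rest
of the packet through untouched.  A sketch, with invented helper names:

```c
#include <stdint.h>
#include <stddef.h>

/* Reverse the byte order of one 32-bit word. */
static uint32_t swap32(uint32_t v) {
  return (v >> 24) | ((v >> 8) & 0x0000ff00u) |
         ((v << 8) & 0x00ff0000u) | (v << 24);
}

/* Convert only the first to_convert words of a received packet; the
   remaining application data is not interpreted and is left alone. */
static void convert_prefix(uint32_t *words, size_t to_convert) {
  for (size_t i = 0; i < to_convert; ++i)
    words[i] = swap32(words[i]);
}
```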

@subsection Supporting Heterogeneous Environments

@cindex heterogeneous multiprocessing

Developing an MPCI layer for a heterogeneous system requires a thorough
understanding of the differences between the processors which comprise the
system.  One difficult problem is the varying data representation schemes
used by different processor types.  The most pervasive data representation
problem is the order of the bytes which compose a data entity.  Processors
which place the least significant byte at the smallest address are
classified as little endian processors.  Little endian byte-ordering is
shown below:

@example
@group
+---------------+----------------+---------------+----------------+
|               |                |               |                |
|    Byte 3     |     Byte 2     |    Byte 1     |    Byte 0      |
|               |                |               |                |
+---------------+----------------+---------------+----------------+
@end group
@end example

Conversely, processors which place the most significant byte at the smallest
address are classified as big endian processors.  Big endian byte-ordering
is shown below:

@example
@group
+---------------+----------------+---------------+----------------+
|               |                |               |                |
|    Byte 0     |     Byte 1     |    Byte 2     |    Byte 3      |
|               |                |               |                |
+---------------+----------------+---------------+----------------+
@end group
@end example

Unfortunately, sharing a data structure between big endian and little endian
processors requires translation into a common endian format.  An application
designer typically chooses the common endian format to minimize conversion
overhead.
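
A portable way for a layer to discover at run time which of the two
orderings the host uses is to inspect the byte at the smallest address of a
known 32-bit value.  This helper is illustrative and not part of RTEMS:

```c
#include <stdint.h>
#include <string.h>

/* Return nonzero when the least significant byte of a 32-bit value is
   stored at the smallest address, i.e. the host is little endian. */
static int is_little_endian(void) {
  uint32_t probe = 1;
  unsigned char first;
  memcpy(&first, &probe, 1);   /* read the byte at the lowest address */
  return first == 1;
}
```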

Another issue in the design of shared data structures is the alignment of
data structure elements.  Alignment is both processor and compiler
implementation dependent.  For example, some processors allow data elements
to begin on any address boundary, while others impose restrictions.  Common
restrictions are that data elements must begin on either an even address or
on a long word boundary.  Violation of these restrictions may cause an
exception or impose a performance penalty.
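
The effect is easy to observe: in the structure below, most compilers insert
padding between the one-byte and four-byte members so that the latter starts
on a boundary the processor accepts.  The exact padding is processor and
compiler dependent, as noted above; the names here are invented for
illustration.

```c
#include <stddef.h>
#include <stdint.h>

/* A one-byte field followed by a four-byte field.  The compiler may pad
   after `tag` so that `value` satisfies the alignment of uint32_t. */
struct shared_record {
  uint8_t  tag;
  uint32_t value;
};
```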

Other issues which commonly impact the design of shared data structures
include the representation of floating point numbers, bit fields, decimal
data, and character strings.  In addition, the representation method for
negative integers could be one's or two's complement.  These factors combine
to increase the complexity of designing and manipulating data structures
shared between processors.

RTEMS addressed these issues in the design of the packets used to
communicate between nodes.  The RTEMS packet format is designed to allow the
MPCI layer to perform all necessary conversion without burdening the
developer with the details of the RTEMS packet format.  As a result, the
MPCI layer must be aware of the following:

@itemize @bullet
@item All packets must begin on a four byte boundary.

@item Packets are composed of both RTEMS and application data.  All RTEMS
data is treated as thirty-two (32) bit unsigned quantities and is in the
first @code{@value{RPREFIX}MINIMUM_UNSIGNED32S_TO_CONVERT} thirty-two (32)
bit quantities of the packet.

@item The RTEMS data component of the packet must be in native endian
format.  Endian conversion may be performed by either the sending or
receiving MPCI layer.

@item RTEMS makes no assumptions regarding the application data component of
the packet.
@end itemize
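
These rules can be pictured with a hypothetical packet layout.  The sizes
and names below are invented for illustration (the real prefix layout is
defined by RTEMS): a run of 32-bit words at the front holds the RTEMS data,
and opaque application data follows.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical packet: four 32-bit words stand in for the first
   MINIMUM_UNSIGNED32S_TO_CONVERT quantities of the packet. */
typedef struct {
  uint32_t rtems_header[4];      /* native endian; MPCI may convert  */
  uint8_t  application_data[48]; /* not interpreted by RTEMS         */
} example_packet;
```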

@section Operations

@subsection Announcing a Packet

The @code{@value{DIRPREFIX}multiprocessing_announce} directive is called by
the MPCI layer to inform RTEMS that a packet has arrived from another node.
This directive can be called from an interrupt service routine or from
within a polling routine.
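
A toy model of the polling case, with the directive stubbed out here as a
plain function that sets a flag (in a real system it readies the
Multiprocessing Server):

```c
static int packet_pending;   /* set by the transport when a packet lands */
static int server_readied;   /* stands in for the Multiprocessing Server */

/* Stub for illustration only -- the real directive belongs to RTEMS. */
static void rtems_multiprocessing_announce(void) { server_readied = 1; }

/* A clock-tick handler polls for an arrived packet and announces it. */
static void clock_tick_handler(void) {
  if (packet_pending) {
    packet_pending = 0;
    rtems_multiprocessing_announce();
  }
}
```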

@section Directives

This section details the additional directives required to support RTEMS in
a multiprocessor configuration.  A subsection is dedicated to each of this
manager's directives and describes the calling sequence, related constants,
usage, and status codes.

@c
@c
@c
@page
@subsection MULTIPROCESSING_ANNOUNCE - Announce the arrival of a packet

@cindex announce arrival of packet

@subheading CALLING SEQUENCE:

@ifset is-C
@findex rtems_multiprocessing_announce
@example
void rtems_multiprocessing_announce( void );
@end example
@end ifset

@ifset is-Ada
@example
procedure Multiprocessing_Announce;
@end example
@end ifset

@subheading DIRECTIVE STATUS CODES:

NONE

@subheading DESCRIPTION:

This directive informs RTEMS that a multiprocessing communications packet
has arrived from another node.  This directive is called by the
user-provided MPCI, and is only used in multiprocessor configurations.

@subheading NOTES:

This directive is typically called from an ISR.

This directive will almost certainly cause the calling task to be preempted.

This directive does not generate activity on remote nodes.
