File System Support Infrastructure
==================================

Nick Garnett
v0.2


This document describes the filesystem infrastructure provided in
eCos. This is implemented by the FILEIO package and provides
POSIX-compliant file and IO operations together with the BSD socket
API. These APIs are described in the relevant standards and original
documentation and will not be described here. This document is,
instead, concerned with the interfaces presented to client
filesystems and network protocol stacks.

The FILEIO infrastructure consists mainly of a set of tables containing
pointers to the primary interface functions of a file system. This
approach avoids problems of namespace pollution (several filesystems
can have a function called read(), so long as they are static). The
system is also structured to eliminate the need for dynamic memory
allocation.

New filesystems can be written directly to the interfaces described
here. Existing filesystems can be ported very easily by the
introduction of a thin veneer porting layer that translates FILEIO
calls into native filesystem calls.

The term filesystem should be read fairly loosely in this
document. Objects accessed through these interfaces could equally be
network protocol sockets, device drivers, FIFOs, message queues or any
other object that can present a file-like interface.


File System Table
-----------------

The filesystem table is an array of entries that describe each
filesystem implementation that is part of the system image. Each
resident filesystem should export an entry to this table using the
FSTAB_ENTRY() macro.

The table entries are described by the following structure:

struct cyg_fstab_entry
{
    const char          *name;          // filesystem name
    CYG_ADDRWORD        data;           // private data value
    cyg_uint32          syncmode;       // synchronization mode

    int     (*mount)    ( cyg_fstab_entry *fste, cyg_mtab_entry *mte );
    int     (*umount)   ( cyg_mtab_entry *mte );
    int     (*open)     ( cyg_mtab_entry *mte, cyg_dir dir, const char *name,
                          int mode,  cyg_file *fte );
    int     (*unlink)   ( cyg_mtab_entry *mte, cyg_dir dir, const char *name );
    int     (*mkdir)    ( cyg_mtab_entry *mte, cyg_dir dir, const char *name );
    int     (*rmdir)    ( cyg_mtab_entry *mte, cyg_dir dir, const char *name );
    int     (*rename)   ( cyg_mtab_entry *mte, cyg_dir dir1, const char *name1,
                          cyg_dir dir2, const char *name2 );
    int     (*link)     ( cyg_mtab_entry *mte, cyg_dir dir1, const char *name1,
                          cyg_dir dir2, const char *name2, int type );
    int     (*opendir)  ( cyg_mtab_entry *mte, cyg_dir dir, const char *name,
                          cyg_file *fte );
    int     (*chdir)    ( cyg_mtab_entry *mte, cyg_dir dir, const char *name,
                          cyg_dir *dir_out );
    int     (*stat)     ( cyg_mtab_entry *mte, cyg_dir dir, const char *name,
                          struct stat *buf );
    int     (*getinfo)  ( cyg_mtab_entry *mte, cyg_dir dir, const char *name,
                          int key, char *buf, int len );
    int     (*setinfo)  ( cyg_mtab_entry *mte, cyg_dir dir, const char *name,
                          int key, char *buf, int len );
};

The _name_ field points to a string that identifies this filesystem
implementation. Typical values might be "romfs", "msdos", "ext2" etc.

The _data_ field contains any private data that the filesystem needs,
perhaps the root of its data structures.

The _syncmode_ field contains a description of the locking protocol to
be used when accessing this filesystem. It is described in more
detail in the "Synchronization" section.

The remaining fields are pointers to functions that implement
filesystem operations that apply to files and directories as whole
objects. The operation implemented by each function should be obvious
from the names, with a few exceptions.

The _opendir_ function opens a directory for reading. See the section
on Directories later for details.

The _getinfo_ and _setinfo_ functions provide support for various
minor control and information functions such as pathconf() and
access().

With the exception of the _mount_ and _umount_ functions, all of these
functions take three standard arguments: a pointer to a mount table
entry (see later), a directory pointer (also see later) and a file name
relative to the directory. These should be used by the filesystem to
locate the object of interest.
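
The dispatch through such a table can be sketched in self-contained C.
The types and the "nullfs" filesystem below are invented stand-ins for
illustration only, not the real <fileio.h> definitions; the point is
that each filesystem's functions stay static, and the infrastructure
reaches them only through the table entry:

```c
#include <string.h>

/* Stand-in modelled loosely on cyg_fstab_entry; illustrative only. */
typedef struct demo_fstab_entry demo_fstab_entry;
struct demo_fstab_entry {
    const char *name;
    int (*open)(demo_fstab_entry *fste, const char *path, int mode);
};

/* A hypothetical "nullfs". Because open() here is static, another
 * filesystem could define its own static open() without a clash. */
static int nullfs_open(demo_fstab_entry *fste, const char *path, int mode)
{
    (void)fste; (void)mode;
    return path[0] == '\0' ? -1 : 0;   /* succeed on any non-empty name */
}

static demo_fstab_entry demo_fstab[] = {
    { "nullfs", nullfs_open },
};

/* Dispatch an open through the table, as the infrastructure would. */
int demo_open(const char *fsname, const char *path, int mode)
{
    for (unsigned i = 0; i < sizeof demo_fstab / sizeof demo_fstab[0]; i++)
        if (strcmp(demo_fstab[i].name, fsname) == 0)
            return demo_fstab[i].open(&demo_fstab[i], path, mode);
    return -1;                          /* no such filesystem */
}
```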

Mount Table
-----------

The mount table records the filesystems that are actually active.
These can be seen as being analogous to mount points in Unix systems.

There are two sources of mount table entries. Filesystems (or other
components) may export static entries to the table using the
MTAB_ENTRY() macro. Alternatively, new entries may be installed at run
time using the mount() function. Both types of entry may be unmounted
with the umount() function.

A mount table entry has the following structure:

struct cyg_mtab_entry
{
    const char          *name;          // name of mount point
    const char          *fsname;        // name of implementing filesystem
    const char          *devname;       // name of hardware device
    CYG_ADDRWORD        data;           // private data value
    cyg_bool            valid;          // Valid entry?
    cyg_fstab_entry     *fs;            // pointer to fstab entry
    cyg_dir             root;           // root directory pointer
};

The _name_ field identifies the mount point. This is used to translate
rooted filenames (filenames that begin with "/") into the correct
filesystem. When a file name that begins with "/" is submitted, it is
matched against the _name_ fields of all valid mount table
entries. The entry that yields the longest match terminating before a
"/", or end of string, wins and the appropriate function from the
filesystem table entry is then passed the remainder of the file name
together with a pointer to the table entry and the value of the _root_
field as the directory pointer.

For example, consider a mount table that contains the following
entries:

        { "/",    "msdos", "/dev/hd0", ... }
        { "/fd",  "msdos", "/dev/fd0", ... }
        { "/rom", "romfs", "", ... }
        { "/tmp", "ramfs", "", ... }
        { "/dev", "devfs", "", ... }

An attempt to open "/tmp/foo" would be directed to the RAM filesystem
while an open of "/bar/bundy" would be directed to the hard disc MSDOS
filesystem. Opening "/dev/tty0" would be directed to the device
management filesystem for lookup in the device table.
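
The longest-match rule can be sketched as a few lines of self-contained
C. The table and function names below are invented for illustration
(the real lookup lives inside the FILEIO package):

```c
#include <stddef.h>
#include <string.h>

/* The mount point names from the example table above. */
static const char *demo_mtab[] = { "/", "/fd", "/rom", "/tmp", "/dev" };

/* Return the mount point whose name is the longest prefix of `path`
 * terminating before a '/' or at the end of the string. */
const char *demo_resolve(const char *path)
{
    const char *best = NULL;
    size_t bestlen = 0;
    for (unsigned i = 0; i < sizeof demo_mtab / sizeof demo_mtab[0]; i++) {
        size_t n = strlen(demo_mtab[i]);
        if (strncmp(path, demo_mtab[i], n) != 0)
            continue;
        /* the match must end before a '/' or at end of string;
         * "/" itself (n == 1) always terminates correctly */
        if (path[n] != '\0' && path[n] != '/' && n > 1)
            continue;
        if (n >= bestlen) { best = demo_mtab[i]; bestlen = n; }
    }
    return best;
}
```

With this table, demo_resolve("/tmp/foo") picks "/tmp", while
demo_resolve("/bar/bundy") falls through to "/", matching the behaviour
described in the paragraph above.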

Unrooted file names (those that do not begin with a '/') are passed
straight to the current directory. The current directory is
represented by a pair consisting of a mount table entry and a
directory pointer.

The _fsname_ field points to a string that should match the _name_
field of the implementing filesystem. During initialization the mount
table is scanned and the _fsname_ entries looked up in the
filesystem table. For each match, the filesystem's _mount_ function
is called and if successful the mount table entry is marked as valid
and the _fs_ pointer installed.

The _devname_ field contains the name of the device that this
filesystem is to use. This may match an entry in the device table (see
later) or may be a string that is specific to the filesystem if it has
its own internal device drivers.

The _data_ field is a private data value. This may be installed either
statically when the table entry is defined, or may be installed during
the _mount_ operation.

The _valid_ field indicates whether this mount point has actually been
mounted successfully. Entries with a false _valid_ field are ignored
when searching for a name match.

The _fs_ field is installed after a successful mount operation to
point to the implementing filesystem.

The _root_ field contains a directory pointer value that the
filesystem can interpret as the root of its directory tree. This is
passed as the _dir_ argument of filesystem functions that operate on
rooted filenames. This field must be initialized by the filesystem's
_mount_ function.


File Table
----------

Once a file has been opened it is represented by an open file
object. These are allocated from an array of available file
objects. User code accesses these open file objects via a second array
of pointers which is indexed by small integer offsets. This gives the
usual Unix file descriptor functionality, complete with the various
duplication mechanisms.
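
The descriptor/object split is visible through the standard POSIX
duplication calls. The following is a plain POSIX illustration (not
eCos-specific code): two descriptors made with dup() share one open
file object, so they share one offset, and the object survives until
the last reference is closed:

```c
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Returns 0 if the duplicated descriptors behaved as described. */
int fd_dup_demo(void)
{
    char path[] = "/tmp/fileio_demo_XXXXXX";
    int fd = mkstemp(path);
    if (fd < 0) return -1;
    if (write(fd, "hello", 5) != 5) { close(fd); unlink(path); return -1; }

    int fd2 = dup(fd);              /* second reference, same open file object */
    lseek(fd, 0, SEEK_SET);         /* move the shared offset via fd...        */

    char buf[6] = { 0 };
    long got = (long)read(fd2, buf, 5);  /* ...and observe it through fd2      */

    close(fd);                      /* use count drops; object still open      */
    long pos = (long)lseek(fd2, 0, SEEK_CUR);
    close(fd2);                     /* last reference: object released         */
    unlink(path);

    return (got == 5 && strcmp(buf, "hello") == 0 && pos == 5) ? 0 : -1;
}
```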

A file table entry has the following structure:

struct CYG_FILE_TAG
{
    cyg_uint32                  f_flag;         /* file state                   */
    cyg_uint16                  f_ucount;       /* use count                    */
    cyg_uint16                  f_type;         /* descriptor type              */
    cyg_uint32                  f_syncmode;     /* synchronization protocol     */
    struct CYG_FILEOPS_TAG      *f_ops;         /* file operations              */
    off_t                       f_offset;       /* current offset               */
    CYG_ADDRWORD                f_data;         /* file or socket               */
    CYG_ADDRWORD                f_xops;         /* extra type specific ops      */
    cyg_mtab_entry              *f_mte;         /* mount table entry            */
};

The _f_flag_ field contains some FILEIO control bits and some of the
bits from the open call (defined by CYG_FILE_MODE_MASK).

The _f_ucount_ field contains a use count that controls when a file
will be closed. Each duplicate in the file descriptor array counts for
one reference here, and it is also incremented around each I/O
operation.

The _f_type_ field indicates the type of the underlying file
object. Some of the possible values here are CYG_FILE_TYPE_FILE,
CYG_FILE_TYPE_SOCKET or CYG_FILE_TYPE_DEVICE.

The _f_syncmode_ field is copied from the _syncmode_ field of the
implementing filesystem. Its use is described in the "Synchronization"
section later.

The _f_offset_ field records the current file position. It is the
responsibility of the file operation functions to keep this field up
to date.

The _f_data_ field contains private data placed here by the underlying
filesystem. Normally this will be a pointer to, or handle on, the
filesystem object that implements this file.

The _f_xops_ field contains a pointer to any extra type-specific
operation functions. For example, the socket I/O system installs a
pointer to a table of functions that implement the standard socket
operations.

The _f_mte_ field contains a pointer to the parent mount table entry
for this file. It is used mainly to implement the synchronization
protocol. This may contain a pointer to some other data structure in
file objects not derived from a filesystem.

The _f_ops_ field contains a pointer to a table of file I/O
operations. This has the following structure:

struct CYG_FILEOPS_TAG
{
        int     (*fo_read)      (struct CYG_FILE_TAG *fp, struct CYG_UIO_TAG *uio);
        int     (*fo_write)     (struct CYG_FILE_TAG *fp, struct CYG_UIO_TAG *uio);
        int     (*fo_lseek)     (struct CYG_FILE_TAG *fp, off_t *pos, int whence );
        int     (*fo_ioctl)     (struct CYG_FILE_TAG *fp, CYG_ADDRWORD com,
                                 CYG_ADDRWORD data);
        int     (*fo_select)    (struct CYG_FILE_TAG *fp, int which, CYG_ADDRWORD info);
        int     (*fo_fsync)     (struct CYG_FILE_TAG *fp, int mode );
        int     (*fo_close)     (struct CYG_FILE_TAG *fp);
        int     (*fo_fstat)     (struct CYG_FILE_TAG *fp, struct stat *buf );
        int     (*fo_getinfo)   (struct CYG_FILE_TAG *fp, int key, char *buf, int len );
        int     (*fo_setinfo)   (struct CYG_FILE_TAG *fp, int key, char *buf, int len );
};

It should be obvious from the names of most of these functions what
their responsibilities are. The _fo_getinfo_ and _fo_setinfo_
functions, like their counterparts in the filesystem structure,
implement minor control and info functions such as fpathconf().

The second argument to _fo_read_ and _fo_write_ is a pointer to a UIO
structure:

struct CYG_UIO_TAG
{
    struct CYG_IOVEC_TAG *uio_iov;      /* pointer to array of iovecs */
    int                  uio_iovcnt;    /* number of iovecs in array */
    off_t                uio_offset;    /* offset into file this uio corresponds to */
    ssize_t              uio_resid;     /* residual i/o count */
    enum cyg_uio_seg     uio_segflg;    /* see above */
    enum cyg_uio_rw      uio_rw;        /* see above */
};

struct CYG_IOVEC_TAG
{
    void           *iov_base;           /* Base address. */
    ssize_t        iov_len;             /* Length. */
};

This structure encapsulates the parameters of any data transfer
operation. It provides support for scatter/gather operations and
records the progress of any data transfer. It is also compatible with
the I/O operations of any BSD-derived network stacks and filesystems.
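
A gather-style transfer through such a structure can be sketched with
minimal stand-in types. Everything below (demo_iovec, demo_uio,
demo_gather_write) is illustrative, not the FILEIO implementation; it
shows how a write-style operation walks the iovec array while keeping
uio_offset and uio_resid up to date:

```c
#include <stddef.h>
#include <string.h>

/* Minimal stand-ins for the UIO structures above. */
struct demo_iovec { void *iov_base; size_t iov_len; };
struct demo_uio {
    struct demo_iovec *uio_iov;
    int               uio_iovcnt;
    long              uio_offset;
    long              uio_resid;    /* bytes still to transfer */
};

/* A gather "write": copy each iovec into `dst` in turn, advancing
 * uio_offset and reducing uio_resid as a real fo_write would. */
long demo_gather_write(char *dst, struct demo_uio *uio)
{
    long done = 0;
    for (int i = 0; i < uio->uio_iovcnt && uio->uio_resid > 0; i++) {
        size_t n = uio->uio_iov[i].iov_len;
        memcpy(dst + uio->uio_offset, uio->uio_iov[i].iov_base, n);
        uio->uio_offset += (long)n;
        uio->uio_resid  -= (long)n;
        done += (long)n;
    }
    return done;
}
```

A zero uio_resid after the call tells the caller the whole transfer
completed; a positive residual reports a short transfer.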


When a file is opened (or a file object created by some other means,
such as socket() or accept()) it is the responsibility of the
filesystem open operation to initialize all the fields of the object
except the _f_ucount_, _f_syncmode_ and _f_mte_ fields. Since the
_f_flag_ field will already contain bits belonging to the FILEIO
infrastructure, any changes to it must be made with the appropriate
logical operations.


Directories
-----------

Filesystem operations all take a directory pointer as one of their
arguments.  A directory pointer is an opaque handle managed by the
filesystem. It should encapsulate a reference to a specific directory
within the filesystem. For example, it may be a pointer to the data
structure that represents that directory, or a pointer to a pathname
for the directory.

The _chdir_ filesystem function has two modes of use. When passed a
pointer in the _dir_out_ argument, it should locate the named
directory and place a directory pointer there. If the _dir_out_
argument is NULL then the _dir_ argument is a previously generated
directory pointer that can now be disposed of. When the infrastructure
is implementing the chdir() function it makes two calls to filesystem
_chdir_ functions. The first is to get a directory pointer for the new
current directory. If this succeeds, the second is to dispose of the
old current directory pointer.

The _opendir_ function is used to open a directory for reading. This
results in an open file object that can be read to return a sequence
of _struct dirent_ objects. The only operations allowed on
this file are _read_, _lseek_ and _close_. Each read operation on this
file should return a single _struct dirent_ object. When the end of
the directory is reached, zero should be returned. The only seek
operation allowed is a rewind to the start of the directory, by
supplying an offset of zero and a _whence_ specifier of _SEEK_SET_.

Most of these considerations are invisible to clients of a filesystem
since they will access directories via the POSIX
opendir()/readdir()/closedir() functions.

Support for the _getcwd()_ function is provided by three mechanisms.
The first is to use the _FS_INFO_GETCWD_ getinfo key on the filesystem
to use any internal support that it has for this. If that fails it
falls back on one of the two other mechanisms. If
_CYGPKG_IO_FILEIO_TRACK_CWD_ is set then the current directory is
tracked textually in chdir() and the result of that is reported in
getcwd(). Otherwise an attempt is made to traverse the directory tree
to its root using ".." entries.

This last option is complicated and expensive, and relies on the
filesystem supporting "." and ".." entries. This is not always the
case, particularly if the filesystem has been ported from a
non-UNIX-compatible source. Tracking the pathname textually will
usually work, but might not produce optimum results when symbolic
links are being used.


Synchronization
---------------

The FILEIO infrastructure provides a synchronization mechanism for
controlling concurrent access to filesystems. This allows existing
filesystems to be ported to eCos, even if they do not have their own
synchronization mechanisms. It also allows new filesystems to be
implemented easily without having to consider the synchronization
issues.

The infrastructure maintains a mutex for each entry in each of
the main tables: filesystem table, mount table and file table. For
each class of operation each of these mutexes may be locked before the
corresponding filesystem operation is invoked.

The synchronization protocol implemented by a filesystem is described
by the _syncmode_ field of the filesystem table entry. This is a
combination of the following flags:

CYG_SYNCMODE_FILE_FILESYSTEM Lock the filesystem table entry mutex
                             during all filesystem level operations.

CYG_SYNCMODE_FILE_MOUNTPOINT Lock the mount table entry mutex
                             during all filesystem level operations.

CYG_SYNCMODE_IO_FILE         Lock the file table entry mutex during all
                             I/O operations.

CYG_SYNCMODE_IO_FILESYSTEM   Lock the filesystem table entry mutex
                             during all I/O operations.

CYG_SYNCMODE_IO_MOUNTPOINT   Lock the mount table entry mutex during
                             all I/O operations.

CYG_SYNCMODE_SOCK_FILE       Lock the file table entry mutex during
                             all socket operations.

CYG_SYNCMODE_SOCK_NETSTACK   Lock the network stack table entry mutex
                             during all socket operations.

CYG_SYNCMODE_NONE            Perform no locking at all during any
                             operations.


The value of the _syncmode_ in the filesystem table entry will be
copied by the infrastructure to the open file object after a
successful open() operation.
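
Since _syncmode_ is a bitwise combination, a filesystem declares its
protocol by OR-ing flags together and the infrastructure tests bits
before each class of operation. The flag values below are invented for
illustration (the real constants come from the FILEIO package headers
and need not have these bit positions):

```c
#include <stdint.h>

/* Illustrative flag values only; not the real eCos constants. */
#define DEMO_SYNCMODE_FILE_FILESYSTEM (1u << 0)
#define DEMO_SYNCMODE_FILE_MOUNTPOINT (1u << 1)
#define DEMO_SYNCMODE_IO_FILE         (1u << 2)
#define DEMO_SYNCMODE_NONE            0u

/* A filesystem that serializes whole-filesystem operations on the
 * fstab mutex and per-file I/O on the file mutex might declare: */
static const uint32_t demo_syncmode =
    DEMO_SYNCMODE_FILE_FILESYSTEM | DEMO_SYNCMODE_IO_FILE;

/* The infrastructure then tests bits before each operation class. */
int demo_need_file_lock(uint32_t syncmode)
{
    return (syncmode & DEMO_SYNCMODE_IO_FILE) != 0;
}
```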


Initialization and Mounting
---------------------------

As mentioned previously, mount table entries can be sourced from two
places. Static entries may be defined by using the MTAB_ENTRY()
macro. Such entries will be automatically mounted on system startup.
For each entry in the mount table that has a non-null _name_ field the
filesystem table is searched for a match with the _fsname_ field. If a
match is found the filesystem's _mount_ entry is called and, if
successful, the mount table entry is marked valid and the _fs_ field
initialized. The _mount_ function is responsible for initializing the
_root_ field.

The size of the mount table is defined by the configuration value
CYGNUM_FILEIO_MTAB_MAX. Any entries that have not been statically
defined are available for use by dynamic mounts.

A filesystem may be mounted dynamically by calling mount(). This
function has the following prototype:

int mount( const char *devname,
           const char *dir,
           const char *fsname );

The _devname_ argument identifies a device that will be used by this
filesystem and will be assigned to the _devname_ field of the mount
table entry.

The _dir_ argument is the mount point name; it will be assigned to the
_name_ field of the mount table entry.

The _fsname_ argument is the name of the implementing filesystem; it
will be assigned to the _fsname_ entry of the mount table entry.

The process of mounting a filesystem dynamically is as follows. First
a search is made of the mount table for an entry with a NULL _name_
field to be used for the new mount point. The filesystem table is then
searched for an entry whose name matches _fsname_. If this is
successful then the mount table entry is initialized and the
filesystem's _mount_ operation called. If this is successful, the
mount table entry is marked valid and the _fs_ field initialized.
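
The two searches can be sketched in self-contained C. The structures
and function below are stand-ins for illustration only, and the sketch
skips the call into the filesystem's _mount_ operation (a real mount
would make that call and only then mark the entry valid):

```c
#include <string.h>

/* Stand-in tables; illustrative only. */
struct demo_fstab { const char *name; };
struct demo_mtab  { const char *name, *fsname, *devname; int valid; };

#define NFS 2
#define NMT 4
static struct demo_fstab fstab[NFS] = { { "ramfs" }, { "romfs" } };
static struct demo_mtab  mtab[NMT];   /* all slots free: name == NULL */

int demo_mount(const char *devname, const char *dir, const char *fsname)
{
    /* Step 1: find a free mount table slot (NULL name field). */
    struct demo_mtab *m = NULL;
    for (int i = 0; i < NMT; i++)
        if (mtab[i].name == NULL) { m = &mtab[i]; break; }
    if (m == NULL)
        return -1;                     /* mount table full */

    /* Step 2: find the implementing filesystem by name. */
    for (int i = 0; i < NFS; i++)
        if (strcmp(fstab[i].name, fsname) == 0) {
            m->name = dir; m->fsname = fsname; m->devname = devname;
            m->valid = 1;  /* a real mount calls fs->mount() first */
            return 0;
        }
    return -1;                         /* no such filesystem */
}
```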

Unmounting a filesystem is done by the umount() function. This can
unmount filesystems whether they were mounted statically or
dynamically.

The umount() function has the following prototype:

int umount( const char *name );

The mount table is searched for a match between the _name_ argument
and the entry _name_ field. When a match is found the filesystem's
_umount_ operation is called and, if successful, the mount table entry
is invalidated by setting its _valid_ field false and the _name_ field
to NULL.

Sockets
-------

If a network stack is present, then the FILEIO infrastructure also
provides access to the standard BSD socket calls.

The netstack table contains entries which describe the network
protocol stacks that are in the system image. Each resident stack
should export an entry to this table using the NSTAB_ENTRY() macro.

Each table entry has the following structure:

struct cyg_nstab_entry
{
    cyg_bool            valid;          // true if stack initialized
    cyg_uint32          syncmode;       // synchronization protocol
    char                *name;          // stack name
    char                *devname;       // hardware device name
    CYG_ADDRWORD        data;           // private data value

    int     (*init)( cyg_nstab_entry *nste );
    int     (*socket)( cyg_nstab_entry *nste, int domain, int type,
                       int protocol, cyg_file *file );
};

This table is analogous to a combination of the filesystem and mount
tables.

The _valid_ field is set true if the stack's _init_ function returned
successfully, and the _syncmode_ field contains the CYG_SYNCMODE_SOCK_*
bits described above.

The _name_ field contains the name of the protocol stack.

The _devname_ field names the device that the stack is using. This may
reference a device under "/dev", or may be a name that is only
meaningful to the stack itself.

The _init_ function is called during system initialization to start
the protocol stack running. If it returns non-zero the _valid_ field
is set false and the stack will be ignored subsequently.

The _socket_ function is called to attempt to create a socket in the
stack. When the socket() API function is called the netstack table is
scanned, and for each valid entry the _socket_ function is called. If
this returns non-zero then the scan continues to the next valid stack,
or terminates with an error if the end of the table is reached.
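
The scan loop can be sketched in self-contained C; the table and
functions below are invented stand-ins, not the FILEIO internals:

```c
/* Stand-in netstack table entry; illustrative only. */
struct demo_nstab { int valid; int (*socket)(int domain, int type); };

/* A hypothetical stack that only accepts one address family. */
static int demo_tcpip_socket(int domain, int type)
{
    (void)type;
    return (domain == 2 /* an AF_INET-like value */) ? 0 : -1;
}

static struct demo_nstab nstab[] = {
    { 0, 0 },                  /* a stack whose init() failed: skipped */
    { 1, demo_tcpip_socket },
};

int demo_socket(int domain, int type)
{
    for (unsigned i = 0; i < sizeof nstab / sizeof nstab[0]; i++) {
        if (!nstab[i].valid)
            continue;                  /* ignore invalid stacks */
        if (nstab[i].socket(domain, type) == 0)
            return 0;                  /* this stack accepted the socket */
    }
    return -1;                         /* end of table reached: error */
}
```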

The result of a successful socket call is an initialized file object
with the _f_xops_ field pointing to the following structure:

struct cyg_sock_ops
{
    int (*bind)      ( cyg_file *fp, const sockaddr *sa, socklen_t len );
    int (*connect)   ( cyg_file *fp, const sockaddr *sa, socklen_t len );
    int (*accept)    ( cyg_file *fp, cyg_file *new_fp,
                       struct sockaddr *name, socklen_t *anamelen );
    int (*listen)    ( cyg_file *fp, int len );
    int (*getname)   ( cyg_file *fp, sockaddr *sa, socklen_t *len, int peer );
    int (*shutdown)  ( cyg_file *fp, int flags );
    int (*getsockopt)( cyg_file *fp, int level, int optname,
                       void *optval, socklen_t *optlen );
    int (*setsockopt)( cyg_file *fp, int level, int optname,
                       const void *optval, socklen_t optlen );
    int (*sendmsg)   ( cyg_file *fp, const struct msghdr *m,
                       int flags, ssize_t *retsize );
    int (*recvmsg)   ( cyg_file *fp, struct msghdr *m,
                       socklen_t *namelen, ssize_t *retsize );
};

It should be obvious from the names of these functions which API calls
they provide support for. The _getname_ function provides support for
both getsockname() and getpeername(), while the _sendmsg_ and _recvmsg_
functions provide support for send(), sendto(), sendmsg(), recv(),
recvfrom() and recvmsg() as appropriate.


Select
------

The infrastructure provides support for implementing a select
mechanism. This is modeled on the mechanism in the BSD kernel, but has
been modified to make it implementation independent.

The main part of the mechanism is the select() API call. This
processes its arguments and calls the _fo_select_ function on all file
objects referenced by the file descriptor sets passed to it. If the
same descriptor appears in more than one descriptor set, the
_fo_select_ function will be called separately for each appearance.

The _which_ argument of the _fo_select_ function will either be
CYG_FREAD to test for read conditions, CYG_FWRITE to test for write
conditions or zero to test for exceptions. For each of these options
the function should test whether the condition is satisfied and if so
return true. If it is not satisfied then it should call
cyg_selrecord() with the _info_ argument that was passed to the
function and a pointer to a cyg_selinfo structure.

The cyg_selinfo structure is used to record information about current
select operations. Any object that needs to support select must
contain an instance of this structure. Separate cyg_selinfo
structures should be kept for each of the options that the object can
select on - read, write or exception.

If none of the file objects report that the select condition is
satisfied, then the select() API function puts the calling thread to
sleep waiting either for a condition to become satisfied, or for the
optional timeout to expire.

A selectable object must have some asynchronous activity that may
cause a select condition to become true - either via interrupts or the
activities of other threads. Whenever a selectable condition is
satisfied, the object should call cyg_selwakeup() with a pointer to
the appropriate cyg_selinfo structure. If the thread is still waiting,
this will cause it to wake up and repeat its poll of the file
descriptors. This time around, the object that caused the wakeup
should indicate that the select condition is satisfied, and the
_select()_ API call will return.
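
From the client side all of this is hidden behind the ordinary select()
call. The example below is plain POSIX (not eCos-specific): it selects
on the read end of a pipe, which becomes readable once a byte has been
written, so select() reports one ready descriptor instead of timing out:

```c
#include <sys/select.h>
#include <unistd.h>

/* Return select()'s result for the read end of a freshly written
 * pipe: 1 means ready, 0 means timeout, -1 means error. */
int pipe_readable_within(int timeout_ms)
{
    int fds[2];
    if (pipe(fds) != 0)
        return -1;
    (void)write(fds[1], "x", 1);        /* make the read end ready */

    fd_set rset;
    FD_ZERO(&rset);
    FD_SET(fds[0], &rset);
    struct timeval tv = { timeout_ms / 1000, (timeout_ms % 1000) * 1000 };

    int n = select(fds[0] + 1, &rset, NULL, NULL, &tv);
    close(fds[0]);
    close(fds[1]);
    return n;
}
```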

Note that _select()_ does not exhibit real-time behaviour: the
iterative poll of the descriptors and the wakeup mechanism militate
against this. If real-time response to device or socket I/O is
required then separate threads should be devoted to each device of
interest.


Devices
-------

Devices are accessed by means of a pseudo-filesystem, "devfs", that is
mounted on "/dev". Open operations are translated into calls to
cyg_io_lookup() and if successful result in a file object whose
_f_ops_ functions translate filesystem API functions into calls into
the device API.

// EOF fileio.txt