               Dynamic DMA mapping using the generic device
               ============================================

        James E.J. Bottomley

This document describes the DMA API.  For a more gentle introduction
phrased in terms of the pci_ equivalents (and actual examples) see
DMA-mapping.txt.

This API is split into two pieces.  Part I describes the API and the
corresponding pci_ API.  Part II describes the extensions to the API
for supporting non-consistent memory machines.  Unless you know that
your driver absolutely has to support non-consistent platforms (this
is usually only legacy platforms) you should only use the API
described in part I.

Part I - pci_ and dma_ Equivalent API
-------------------------------------

To get the pci_ API, you must #include <linux/pci.h>
To get the dma_ API, you must #include <linux/dma-mapping.h>


Part Ia - Using large dma-coherent buffers
------------------------------------------

void *
dma_alloc_coherent(struct device *dev, size_t size,
                             dma_addr_t *dma_handle, gfp_t flag)
void *
pci_alloc_consistent(struct pci_dev *dev, size_t size,
                             dma_addr_t *dma_handle)

Consistent memory is memory for which a write by either the device or
the processor can immediately be read by the processor or device
without having to worry about caching effects.  (You may, however, need
to make sure to flush the processor's write buffers before telling
devices to read that memory.)

This routine allocates a region of <size> bytes of consistent memory.
It also returns a <dma_handle> which may be cast to an unsigned
integer the same width as the bus and used as the physical address
base of the region.

Returns: a pointer to the allocated region (in the processor's virtual
address space) or NULL if the allocation failed.

Note: consistent memory can be expensive on some platforms, and the
minimum allocation length may be as big as a page, so you should
consolidate your requests for consistent memory as much as possible.
The simplest way to do that is to use the dma_pool calls (see below).

The flag parameter (dma_alloc_coherent only) allows the caller to
specify the GFP_ flags (see kmalloc) for the allocation (the
implementation may choose to ignore flags that affect the location of
the returned memory, like GFP_DMA).  For pci_alloc_consistent, you
must assume GFP_ATOMIC behaviour.

void
dma_free_coherent(struct device *dev, size_t size, void *cpu_addr,
                           dma_addr_t dma_handle)
void
pci_free_consistent(struct pci_dev *dev, size_t size, void *cpu_addr,
                           dma_addr_t dma_handle)

Free the region of consistent memory you previously allocated.  dev,
size and dma_handle must all be the same as those passed into the
consistent allocate.  cpu_addr must be the virtual address returned by
the consistent allocate.

Note that unlike their sibling allocation calls, these routines
may only be called with IRQs enabled.
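
For illustration, here is a minimal sketch of allocating and freeing
a coherent descriptor ring; the ring size and the my_ring structure
are hypothetical, not part of the API:

        #define RING_BYTES      4096    /* hypothetical ring size */

        struct my_ring {
                void *vaddr;            /* CPU virtual address */
                dma_addr_t handle;      /* address for the device */
        };

        static int my_ring_alloc(struct device *dev, struct my_ring *ring)
        {
                ring->vaddr = dma_alloc_coherent(dev, RING_BYTES,
                                                 &ring->handle, GFP_KERNEL);
                if (!ring->vaddr)
                        return -ENOMEM;
                /* program ring->handle into the device's base register */
                return 0;
        }

        static void my_ring_free(struct device *dev, struct my_ring *ring)
        {
                dma_free_coherent(dev, RING_BYTES, ring->vaddr, ring->handle);
        }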


Part Ib - Using small dma-coherent buffers
------------------------------------------

To get this part of the dma_ API, you must #include <linux/dmapool.h>

Many drivers need lots of small dma-coherent memory regions for DMA
descriptors or I/O buffers.  Rather than allocating in units of a page
or more using dma_alloc_coherent(), you can use DMA pools.  These work
much like a struct kmem_cache, except that they use the dma-coherent allocator,
not __get_free_pages().  Also, they understand common hardware constraints
for alignment, like queue heads needing to be aligned on N-byte boundaries.


        struct dma_pool *
        dma_pool_create(const char *name, struct device *dev,
                        size_t size, size_t align, size_t alloc);

        struct pci_pool *
        pci_pool_create(const char *name, struct pci_dev *dev,
                        size_t size, size_t align, size_t alloc);

The pool create() routines initialize a pool of dma-coherent buffers
for use with a given device.  They must be called in a context which
can sleep.

The "name" is for diagnostics (like a struct kmem_cache name); dev and size
are like what you'd pass to dma_alloc_coherent().  The device's hardware
alignment requirement for this type of data is "align" (which is expressed
in bytes, and must be a power of two).  If your device has no boundary
crossing restrictions, pass 0 for alloc; passing 4096 says memory allocated
from this pool must not cross 4KByte boundaries.


        void *dma_pool_alloc(struct dma_pool *pool, gfp_t gfp_flags,
                        dma_addr_t *dma_handle);

        void *pci_pool_alloc(struct pci_pool *pool, gfp_t gfp_flags,
                        dma_addr_t *dma_handle);

This allocates memory from the pool; the returned memory will meet the size
and alignment requirements specified at creation time.  Pass GFP_ATOMIC to
prevent blocking, or if it's permitted (not in_interrupt, not holding SMP locks),
pass GFP_KERNEL to allow blocking.  Like dma_alloc_coherent(), this returns
two values:  an address usable by the cpu, and the dma address usable by the
pool's device.


        void dma_pool_free(struct dma_pool *pool, void *vaddr,
                        dma_addr_t addr);

        void pci_pool_free(struct pci_pool *pool, void *vaddr,
                        dma_addr_t addr);

This puts memory back into the pool.  The pool is what was passed to
the pool allocation routine; the cpu (vaddr) and dma addresses are what
were returned when that routine allocated the memory being freed.


        void dma_pool_destroy(struct dma_pool *pool);

        void pci_pool_destroy(struct pci_pool *pool);

The pool destroy() routines free the resources of the pool.  They must be
called in a context which can sleep.  Make sure you've freed all allocated
memory back to the pool before you destroy it.
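
A minimal sketch of the pool lifecycle, assuming a struct device *dev
is in scope; the pool name, descriptor size and alignment here are
hypothetical:

        struct dma_pool *pool;
        dma_addr_t desc_handle;
        void *desc;

        /* 64-byte descriptors, 64-byte aligned, no boundary restriction */
        pool = dma_pool_create("mydev_desc", dev, 64, 64, 0);
        if (!pool)
                return -ENOMEM;

        desc = dma_pool_alloc(pool, GFP_KERNEL, &desc_handle);
        if (!desc) {
                dma_pool_destroy(pool);
                return -ENOMEM;
        }

        /* ... hand desc_handle to the device, use desc from the CPU ... */

        dma_pool_free(pool, desc, desc_handle);
        dma_pool_destroy(pool);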


Part Ic - DMA addressing limitations
------------------------------------

int
dma_supported(struct device *dev, u64 mask)
int
pci_dma_supported(struct pci_dev *hwdev, u64 mask)

Checks to see if the device can support DMA to the memory described by
mask.

Returns: 1 if it can and 0 if it can't.

Notes: This routine merely tests to see if the mask is possible.  It
won't change the current mask settings.  It is more intended as an
internal API for use by the platform than an external API for use by
driver writers.

int
dma_set_mask(struct device *dev, u64 mask)
int
pci_set_dma_mask(struct pci_dev *dev, u64 mask)

Checks to see if the mask is possible and updates the device
parameters if it is.

Returns: 0 if successful and a negative error if not.
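
A common probe-time idiom, sketched here, is to try a large mask and
fall back to a smaller one (DMA_BIT_MASK() is assumed to be available
from linux/dma-mapping.h on your kernel version):

        if (!dma_set_mask(dev, DMA_BIT_MASK(64))) {
                /* device and platform can address 64 bits */
        } else if (!dma_set_mask(dev, DMA_BIT_MASK(32))) {
                /* fall back to 32-bit addressing */
        } else {
                dev_warn(dev, "no suitable DMA mask available\n");
                return -EIO;
        }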

u64
dma_get_required_mask(struct device *dev)

After setting the mask with dma_set_mask(), this API returns the
mask (within the one already set) that the platform requires to
operate efficiently.  Usually this means the returned mask is the
minimum required to cover all of memory.  Examining the required
mask gives drivers with variable descriptor sizes the opportunity
to use smaller descriptors as necessary.

Requesting the required mask does not alter the current mask.  If you
wish to take advantage of it, you should issue another dma_set_mask()
call to lower the mask again.
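
A sketch of how a driver with two descriptor formats might use this;
the descriptor-selection flag is hypothetical, and as above
DMA_BIT_MASK() is assumed available:

        if (dma_get_required_mask(dev) <= DMA_BIT_MASK(32)) {
                /* all of memory is reachable with 32-bit descriptors */
                use_small_descriptors = 1;
                dma_set_mask(dev, DMA_BIT_MASK(32));
        }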


Part Id - Streaming DMA mappings
--------------------------------

dma_addr_t
dma_map_single(struct device *dev, void *cpu_addr, size_t size,
                      enum dma_data_direction direction)
dma_addr_t
pci_map_single(struct pci_dev *hwdev, void *cpu_addr, size_t size,
                      int direction)

Maps a piece of processor virtual memory so it can be accessed by the
device and returns the physical handle of the memory.

The direction for both APIs may be converted freely by casting.
However the dma_ API uses a strongly typed enumerator for its
direction:

DMA_NONE                = PCI_DMA_NONE          no direction (used for
                                                debugging)
DMA_TO_DEVICE           = PCI_DMA_TODEVICE      data is going from the
                                                memory to the device
DMA_FROM_DEVICE         = PCI_DMA_FROMDEVICE    data is coming from
                                                the device to the
                                                memory
DMA_BIDIRECTIONAL       = PCI_DMA_BIDIRECTIONAL direction isn't known

Notes:  Not all memory regions in a machine can be mapped by this
API.  Further, regions that appear to be physically contiguous in
kernel virtual space may not be contiguous as physical memory.  Since
this API does not provide any scatter/gather capability, it will fail
if the user tries to map a non-physically contiguous piece of memory.
For this reason, it is recommended that memory mapped by this API be
obtained only from sources which guarantee it to be physically contiguous
(like kmalloc).

Further, the physical address of the memory must be within the
dma_mask of the device (the dma_mask represents a bit mask of the
addressable region for the device.  I.e., if the physical address of
the memory anded with the dma_mask is still equal to the physical
address, then the device can perform DMA to the memory).  In order to
ensure that the memory allocated by kmalloc is within the dma_mask,
the driver may specify various platform-dependent flags to restrict
the physical memory range of the allocation (e.g. on x86, GFP_DMA
guarantees to be within the first 16MB of available physical memory,
as required by ISA devices).

Note also that the above constraints on physical contiguity and
dma_mask may not apply if the platform has an IOMMU (a device which
supplies a physical to virtual mapping between the I/O memory bus and
the device).  However, to be portable, device driver writers may *not*
assume that such an IOMMU exists.

Warnings:  Memory coherency operates at a granularity called the cache
line width.  In order for memory mapped by this API to operate
correctly, the mapped region must begin exactly on a cache line
boundary and end exactly on one (to prevent two separately mapped
regions from sharing a single cache line).  Since the cache line size
may not be known at compile time, the API will not enforce this
requirement.  Therefore, it is recommended that driver writers who
don't take special care to determine the cache line size at run time
only map virtual regions that begin and end on page boundaries (which
are guaranteed also to be cache line boundaries).

DMA_TO_DEVICE synchronisation must be done after the last modification
of the memory region by the software and before it is handed off to
the device.  Once this primitive is used, memory covered by this
primitive should be treated as read-only by the device.  If the device
may write to it at any point, it should be DMA_BIDIRECTIONAL (see
below).

DMA_FROM_DEVICE synchronisation must be done before the driver
accesses data that may be changed by the device.  This memory should
be treated as read-only by the driver.  If the driver needs to write
to it at any point, it should be DMA_BIDIRECTIONAL (see below).

DMA_BIDIRECTIONAL requires special handling: it means that the driver
isn't sure if the memory was modified before being handed off to the
device and also isn't sure if the device will modify it.  Thus, you
must always sync bidirectional memory twice: once before the memory
is handed off to the device (to make sure all memory changes are
flushed from the processor) and once before the data may be accessed
after being used by the device (to make sure any processor cache
lines are updated with data that the device may have changed).

void
dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
                 enum dma_data_direction direction)
void
pci_unmap_single(struct pci_dev *hwdev, dma_addr_t dma_addr,
                 size_t size, int direction)

Unmaps the region previously mapped.  All the parameters must be
identical to those passed in (and returned) by the mapping API.

dma_addr_t
dma_map_page(struct device *dev, struct page *page,
                    unsigned long offset, size_t size,
                    enum dma_data_direction direction)
dma_addr_t
pci_map_page(struct pci_dev *hwdev, struct page *page,
                    unsigned long offset, size_t size, int direction)
void
dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size,
               enum dma_data_direction direction)
void
pci_unmap_page(struct pci_dev *hwdev, dma_addr_t dma_address,
               size_t size, int direction)

API for mapping and unmapping for pages.  All the notes and warnings
for the other mapping APIs apply here.  Also, although the <offset>
and <size> parameters are provided to do partial page mapping, it is
recommended that you never use these unless you really know what the
cache width is.

int
dma_mapping_error(dma_addr_t dma_addr)

int
pci_dma_mapping_error(dma_addr_t dma_addr)

In some circumstances dma_map_single and dma_map_page will fail to create
a mapping. A driver can check for these errors by testing the returned
dma address with dma_mapping_error(). A non-zero return value means the mapping
could not be created and the driver should take appropriate action (e.g.
reduce current DMA mapping usage or delay and try again later).
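
A sketch of the usual streaming-mapping pattern for a kmalloc'd
buffer; the buffer length BUF_LEN is hypothetical:

        void *buf = kmalloc(BUF_LEN, GFP_KERNEL);  /* physically contiguous */
        dma_addr_t dma_addr;

        if (!buf)
                return -ENOMEM;

        dma_addr = dma_map_single(dev, buf, BUF_LEN, DMA_TO_DEVICE);
        if (dma_mapping_error(dma_addr)) {
                /* mapping failed: back off, retry later, or fail the I/O */
                kfree(buf);
                return -ENOMEM;
        }

        /* ... tell the device to read BUF_LEN bytes from dma_addr ... */

        dma_unmap_single(dev, dma_addr, BUF_LEN, DMA_TO_DEVICE);
        kfree(buf);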

        int
        dma_map_sg(struct device *dev, struct scatterlist *sg,
                int nents, enum dma_data_direction direction)
        int
        pci_map_sg(struct pci_dev *hwdev, struct scatterlist *sg,
                int nents, int direction)

Maps a scatter/gather list from the block layer.

Returns: the number of physical segments mapped (this may be shorter
than <nents> passed in if the block layer determines that some
elements of the scatter/gather list are physically adjacent and thus
may be mapped with a single entry).

Please note that the sg cannot be mapped again if it has been mapped once.
The mapping process is allowed to destroy information in the sg.

As with the other mapping interfaces, dma_map_sg can fail. When it
does, 0 is returned and a driver must take appropriate action. It is
critical that the driver do something; in the case of a block driver,
aborting the request or even oopsing is better than doing nothing and
corrupting the filesystem.

With scatterlists, you use the resulting mapping like this:

        int i, count = dma_map_sg(dev, sglist, nents, direction);
        struct scatterlist *sg;

        for (i = 0, sg = sglist; i < count; i++, sg++) {
                hw_address[i] = sg_dma_address(sg);
                hw_len[i] = sg_dma_len(sg);
        }

where nents is the number of entries in the sglist.

The implementation is free to merge several consecutive sglist entries
into one (e.g. with an IOMMU, or if several pages just happen to be
physically contiguous) and returns the actual number of sg entries it
mapped them to.  On failure, 0 is returned.

Then you should loop count times (note: this can be less than nents times)
and use sg_dma_address() and sg_dma_len() macros where you previously
accessed sg->address and sg->length as shown above.

        void
        dma_unmap_sg(struct device *dev, struct scatterlist *sg,
                int nhwentries, enum dma_data_direction direction)
        void
        pci_unmap_sg(struct pci_dev *hwdev, struct scatterlist *sg,
                int nents, int direction)

Unmap the previously mapped scatter/gather list.  All the parameters
must be the same as those passed in to the scatter/gather mapping
API.

Note: <nents> must be the number you passed in, *not* the number of
physical entries returned.
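
A sketch of the full map/use/unmap sequence, reusing the sglist and
nents variables from the example above; note that the unmap call takes
the original nents, not the count that dma_map_sg() returned:

        count = dma_map_sg(dev, sglist, nents, DMA_FROM_DEVICE);
        if (count == 0)
                return -EIO;            /* abort rather than continue */

        /* ... program count segments into the device ... */

        dma_unmap_sg(dev, sglist, nents, DMA_FROM_DEVICE);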

void
dma_sync_single(struct device *dev, dma_addr_t dma_handle, size_t size,
                enum dma_data_direction direction)
void
pci_dma_sync_single(struct pci_dev *hwdev, dma_addr_t dma_handle,
                           size_t size, int direction)
void
dma_sync_sg(struct device *dev, struct scatterlist *sg, int nelems,
                          enum dma_data_direction direction)
void
pci_dma_sync_sg(struct pci_dev *hwdev, struct scatterlist *sg,
                       int nelems, int direction)

Synchronise a single contiguous or scatter/gather mapping.  All the
parameters must be the same as those passed into the corresponding
mapping API.

Notes:  You must do this:

- Before reading values that have been written by DMA from the device
  (use the DMA_FROM_DEVICE direction)
- After writing values that will be written to the device using DMA
  (use the DMA_TO_DEVICE direction)
- Before *and* after handing memory to the device if the memory is
  DMA_BIDIRECTIONAL

See also dma_map_single().
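
For example, a sketch of reusing one long-lived DMA_FROM_DEVICE
mapping across many transfers; the buffer, handle, length, and
process_data() helper are all hypothetical:

        /* device signalled that it finished writing into the buffer */
        dma_sync_single(dev, dma_handle, BUF_LEN, DMA_FROM_DEVICE);

        /* the CPU may now safely read the data */
        process_data(buf, BUF_LEN);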


Part II - Advanced dma_ usage
-----------------------------

Warning: These pieces of the DMA API have no PCI equivalent.  They
should also not be used in the majority of cases, since they cater for
unlikely corner cases that don't belong in usual drivers.

If you don't understand how cache line coherency works between a
processor and an I/O device, you should not be using this part of the
API at all.

void *
dma_alloc_noncoherent(struct device *dev, size_t size,
                               dma_addr_t *dma_handle, gfp_t flag)

Identical to dma_alloc_coherent() except that the platform will
choose to return either consistent or non-consistent memory as it sees
fit.  By using this API, you are guaranteeing to the platform that you
have all the correct and necessary sync points for this memory in the
driver should it choose to return non-consistent memory.

Note: where the platform can return consistent memory, it will
guarantee that the sync points become nops.

Warning:  Handling non-consistent memory is a real pain.  You should
only ever use this API if you positively know your driver will be
required to work on one of the rare (usually non-PCI) architectures
that simply cannot make consistent memory.

void
dma_free_noncoherent(struct device *dev, size_t size, void *cpu_addr,
                              dma_addr_t dma_handle)

Free memory allocated by the nonconsistent API.  All parameters must
be identical to those passed in (and returned) by
dma_alloc_noncoherent().

int
dma_is_consistent(struct device *dev, dma_addr_t dma_handle)

Returns true if the device dev is performing consistent DMA on the memory
area pointed to by the dma_handle.

int
dma_get_cache_alignment(void)

Returns the processor cache alignment.  This is the absolute minimum
alignment *and* width that you must observe when either mapping
memory or doing partial flushes.

Notes: This API may return a number *larger* than the actual cache
line, but it will guarantee that one or more cache lines fit exactly
into the width returned by this call.  It will also always be a power
of two for easy alignment.
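
For instance, a sketch of rounding a partial-sync length up to a safe
boundary, using the kernel's ALIGN() power-of-two rounding macro:

        int align = dma_get_cache_alignment();
        size_t sync_len = ALIGN(len, align);    /* whole cache lines only */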

void
dma_sync_single_range(struct device *dev, dma_addr_t dma_handle,
                      unsigned long offset, size_t size,
                      enum dma_data_direction direction)

Does a partial sync, starting at offset and continuing for size.  You
must be careful to observe the cache alignment and width when doing
anything like this.  You must also be extra careful about accessing
memory you intend to sync partially.

void
dma_cache_sync(struct device *dev, void *vaddr, size_t size,
               enum dma_data_direction direction)

Do a partial sync of memory that was allocated by
dma_alloc_noncoherent(), starting at virtual address vaddr and
continuing on for size.  Again, you *must* observe the cache line
boundaries when doing this.
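
A sketch of the noncoherent allocator paired with its sync primitive;
a whole page is used so the cache-line boundary rules are trivially
respected, and the names are hypothetical:

        void *vaddr;
        dma_addr_t handle;

        vaddr = dma_alloc_noncoherent(dev, PAGE_SIZE, &handle, GFP_KERNEL);
        if (!vaddr)
                return -ENOMEM;

        /* CPU fills the buffer, then flushes it toward the device */
        memset(vaddr, 0, PAGE_SIZE);
        dma_cache_sync(dev, vaddr, PAGE_SIZE, DMA_TO_DEVICE);

        /* ... device reads from handle ... */

        dma_free_noncoherent(dev, PAGE_SIZE, vaddr, handle);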

int
dma_declare_coherent_memory(struct device *dev, dma_addr_t bus_addr,
                            dma_addr_t device_addr, size_t size, int
                            flags)

Declare a region of memory to be handed out by dma_alloc_coherent()
when it's asked for coherent memory for this device.

bus_addr is the physical address to which the memory is currently
assigned in the bus responding region (this will be used by the
platform to perform the mapping).

device_addr is the physical address the device needs to be programmed
with to actually address this memory (this will be handed out as the
dma_addr_t in dma_alloc_coherent()).

size is the size of the area (must be a multiple of PAGE_SIZE).

flags can be or'd together and are:

DMA_MEMORY_MAP - request that the memory returned from
dma_alloc_coherent() be directly writable.

DMA_MEMORY_IO - request that the memory returned from
dma_alloc_coherent() be addressable using read/write/memcpy_toio etc.

One or both of these flags must be present.

DMA_MEMORY_INCLUDES_CHILDREN - make the declared memory available for
dma_alloc_coherent() allocations by any child device of this one (for
memory residing on a bridge).

DMA_MEMORY_EXCLUSIVE - only allocate memory from the declared regions.
Do not allow dma_alloc_coherent() to fall back to system memory when
it's out of memory in the declared region.

The return value will be either DMA_MEMORY_MAP or DMA_MEMORY_IO and
must correspond to a passed in flag (i.e. no returning DMA_MEMORY_IO
if only DMA_MEMORY_MAP was passed in) for success or zero for
failure.

Note, for DMA_MEMORY_IO returns, all subsequent memory returned by
dma_alloc_coherent() may no longer be accessed directly, but instead
must be accessed using the correct bus functions.  If your driver
isn't prepared to handle this contingency, it should not specify
DMA_MEMORY_IO in the input flags.

As a simplification for the platforms, only *one* such region of
memory may be declared per device.

For reasons of efficiency, most platforms choose to track the declared
region only at the granularity of a page.  For smaller allocations,
you should use the dma_pool() API.
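
A sketch of declaring a device-local SRAM window at probe time; the
addresses and size below are entirely hypothetical:

        /* hypothetical 64KB of device-local memory */
        #define MYDEV_SRAM_BUS_ADDR     0x88000000      /* CPU-visible */
        #define MYDEV_SRAM_DEV_ADDR     0x00000000      /* device-visible */
        #define MYDEV_SRAM_SIZE         0x10000

        if (dma_declare_coherent_memory(dev, MYDEV_SRAM_BUS_ADDR,
                                        MYDEV_SRAM_DEV_ADDR,
                                        MYDEV_SRAM_SIZE,
                                        DMA_MEMORY_MAP | DMA_MEMORY_EXCLUSIVE)
            != DMA_MEMORY_MAP)
                return -ENOMEM;

        /* dma_alloc_coherent() on this device now hands out SRAM */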

void
dma_release_declared_memory(struct device *dev)

Remove the memory region previously declared from the system.  This
API performs *no* in-use checking for this region and will return
unconditionally having removed all the required structures.  It is the
driver's job to ensure that no parts of this memory region are
currently in use.

void *
dma_mark_declared_memory_occupied(struct device *dev,
                                  dma_addr_t device_addr, size_t size)

This is used to occupy specific regions of the declared space
(dma_alloc_coherent() will hand out the first free region it finds).

device_addr is the *device* address of the region requested.

size is the size (and should be a page-sized multiple).

The return value will be either a pointer to the processor virtual
address of the memory, or an error (via PTR_ERR()) if any part of the
region is occupied.
