Dynamic DMA mapping using the generic device
============================================

James E.J. Bottomley <James.Bottomley@HansenPartnership.com>

This document describes the DMA API. For a more gentle introduction
of the API (and actual examples), see Documentation/DMA-API-HOWTO.txt.

This API is split into two pieces. Part I describes the basic API.
Part II describes extensions for supporting non-consistent memory
machines. Unless you know that your driver absolutely has to support
non-consistent platforms (this is usually only legacy platforms) you
should only use the API described in part I.

Part I - dma_ API
-------------------------------------

To get the dma_ API, you must #include <linux/dma-mapping.h>. This
provides dma_addr_t and the interfaces described below.

A dma_addr_t can hold any valid DMA address for the platform. It can be
given to a device to use as a DMA source or target. A CPU cannot reference
a dma_addr_t directly because there may be translation between its physical
address space and the DMA address space.

Part Ia - Using large DMA-coherent buffers
------------------------------------------

void *
dma_alloc_coherent(struct device *dev, size_t size,
		   dma_addr_t *dma_handle, gfp_t flag)

Consistent memory is memory for which a write by either the device or
the processor can immediately be read by the processor or device
without having to worry about caching effects. (You may however need
to make sure to flush the processor's write buffers before telling
devices to read that memory.)

This routine allocates a region of <size> bytes of consistent memory.

It returns a pointer to the allocated region (in the processor's virtual
address space) or NULL if the allocation failed.

It also returns a <dma_handle> which may be cast to an unsigned integer the
same width as the bus and given to the device as the DMA address base of
the region.

Note: consistent memory can be expensive on some platforms, and the
minimum allocation length may be as big as a page, so you should
consolidate your requests for consistent memory as much as possible.
The simplest way to do that is to use the dma_pool calls (see below).

The flag parameter (dma_alloc_coherent() only) allows the caller to
specify the GFP_ flags (see kmalloc()) for the allocation (the
implementation may choose to ignore flags that affect the location of
the returned memory, like GFP_DMA).
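
As a minimal sketch (the struct and function names here are
illustrative, not from this document), allocating and freeing a
coherent buffer typically looks like this:

	struct my_ring {		/* hypothetical driver state */
		void		*vaddr;
		dma_addr_t	dma;
	};

	static int my_ring_alloc(struct device *dev, struct my_ring *ring)
	{
		/* vaddr is for the CPU; ring->dma is given to the device */
		ring->vaddr = dma_alloc_coherent(dev, PAGE_SIZE,
						 &ring->dma, GFP_KERNEL);
		if (!ring->vaddr)
			return -ENOMEM;
		return 0;
	}

	static void my_ring_free(struct device *dev, struct my_ring *ring)
	{
		/* same dev, size and handle as the allocation */
		dma_free_coherent(dev, PAGE_SIZE, ring->vaddr, ring->dma);
	}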

void *
dma_zalloc_coherent(struct device *dev, size_t size,
		    dma_addr_t *dma_handle, gfp_t flag)

Wraps dma_alloc_coherent() and also zeroes the returned memory if the
allocation attempt succeeded.

void
dma_free_coherent(struct device *dev, size_t size, void *cpu_addr,
		  dma_addr_t dma_handle)

Free a region of consistent memory you previously allocated. dev,
size and dma_handle must all be the same as those passed into
dma_alloc_coherent(). cpu_addr must be the virtual address returned by
the dma_alloc_coherent().

Note that unlike their sibling allocation calls, these routines
may only be called with IRQs enabled.


Part Ib - Using small DMA-coherent buffers
------------------------------------------

To get this part of the dma_ API, you must #include <linux/dmapool.h>

Many drivers need lots of small DMA-coherent memory regions for DMA
descriptors or I/O buffers. Rather than allocating in units of a page
or more using dma_alloc_coherent(), you can use DMA pools. These work
much like a struct kmem_cache, except that they use the DMA-coherent allocator,
not __get_free_pages(). Also, they understand common hardware constraints
for alignment, like queue heads needing to be aligned on N-byte boundaries.


struct dma_pool *
dma_pool_create(const char *name, struct device *dev,
		size_t size, size_t align, size_t alloc);

dma_pool_create() initializes a pool of DMA-coherent buffers
for use with a given device. It must be called in a context which
can sleep.

The "name" is for diagnostics (like a struct kmem_cache name); dev and size
are like what you'd pass to dma_alloc_coherent(). The device's hardware
alignment requirement for this type of data is "align" (which is expressed
in bytes, and must be a power of two). If your device has no boundary
crossing restrictions, pass 0 for alloc; passing 4096 says memory allocated
from this pool must not cross 4KByte boundaries.


void *dma_pool_zalloc(struct dma_pool *pool, gfp_t mem_flags,
		      dma_addr_t *handle)

Wraps dma_pool_alloc() and also zeroes the returned memory if the
allocation attempt succeeded.


void *dma_pool_alloc(struct dma_pool *pool, gfp_t gfp_flags,
		     dma_addr_t *dma_handle);

This allocates memory from the pool; the returned memory will meet the
size and alignment requirements specified at creation time. Pass
GFP_ATOMIC to prevent blocking, or if it's permitted (not
in_interrupt, not holding SMP locks), pass GFP_KERNEL to allow
blocking. Like dma_alloc_coherent(), this returns two values: an
address usable by the CPU, and the DMA address usable by the pool's
device.


void dma_pool_free(struct dma_pool *pool, void *vaddr,
		   dma_addr_t addr);

This puts memory back into the pool. The pool is what was passed to
dma_pool_alloc(); the CPU (vaddr) and DMA addresses are what
were returned when that routine allocated the memory being freed.


void dma_pool_destroy(struct dma_pool *pool);

dma_pool_destroy() frees the resources of the pool. It must be
called in a context which can sleep. Make sure you've freed all allocated
memory back to the pool before you destroy it.
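
Putting the pool calls together (a sketch; the pool name, descriptor
size and alignment are illustrative), the usual lifecycle is:

	struct dma_pool *pool;
	void *vaddr;
	dma_addr_t dma;

	/* 64-byte descriptors, 64-byte aligned, no boundary restriction */
	pool = dma_pool_create("my-descs", dev, 64, 64, 0);
	if (!pool)
		return -ENOMEM;

	vaddr = dma_pool_zalloc(pool, GFP_KERNEL, &dma);
	if (!vaddr) {
		dma_pool_destroy(pool);
		return -ENOMEM;
	}

	/* ... program the device with "dma", access it via "vaddr" ... */

	dma_pool_free(pool, vaddr, dma);
	dma_pool_destroy(pool);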


Part Ic - DMA addressing limitations
------------------------------------

int
dma_set_mask_and_coherent(struct device *dev, u64 mask)

Checks to see if the mask is possible and updates the device
streaming and coherent DMA mask parameters if it is.

Returns: 0 if successful and a negative error if not.

int
dma_set_mask(struct device *dev, u64 mask)

Checks to see if the mask is possible and updates the device
parameters if it is.

Returns: 0 if successful and a negative error if not.

int
dma_set_coherent_mask(struct device *dev, u64 mask)

Checks to see if the mask is possible and updates the device
parameters if it is.

Returns: 0 if successful and a negative error if not.

u64
dma_get_required_mask(struct device *dev)

This API returns the mask that the platform requires to
operate efficiently. Usually this means the returned mask
is the minimum required to cover all of memory. Examining the
required mask gives drivers with variable descriptor sizes the
opportunity to use smaller descriptors as necessary.

Requesting the required mask does not alter the current mask. If you
wish to take advantage of it, you should issue a dma_set_mask()
call to set the mask to the value returned.
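
As an example (a common probe-time pattern; the error handling is
abbreviated), a driver that prefers 64-bit addressing but can fall
back to 32 bits might do:

	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64))) {
		if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32)))
			return -EIO;	/* no usable DMA addressing */
	}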


Part Id - Streaming DMA mappings
--------------------------------

dma_addr_t
dma_map_single(struct device *dev, void *cpu_addr, size_t size,
	       enum dma_data_direction direction)

Maps a piece of processor virtual memory so it can be accessed by the
device and returns the DMA address of the memory.

The dma_ API uses a strongly typed enumerator for its direction:

DMA_NONE		no direction (used for debugging)
DMA_TO_DEVICE		data is going from the memory to the device
DMA_FROM_DEVICE		data is coming from the device to the memory
DMA_BIDIRECTIONAL	direction isn't known

Notes: Not all memory regions in a machine can be mapped by this API.
Further, memory that is contiguous in kernel virtual space may not be
contiguous in physical memory. Since this API does not provide any
scatter/gather capability, it will fail if the user tries to map a
non-physically contiguous piece of memory. For this reason, memory to
be mapped by this API should be obtained from sources which guarantee
it to be physically contiguous (like kmalloc).

Further, the DMA address of the memory must be within the
dma_mask of the device (the dma_mask is a bit mask of the
addressable region for the device, i.e., if the DMA address of
the memory ANDed with the dma_mask is still equal to the DMA
address, then the device can perform DMA to the memory). To
ensure that the memory allocated by kmalloc is within the dma_mask,
the driver may specify various platform-dependent flags to restrict
the DMA address range of the allocation (e.g., on x86, GFP_DMA
guarantees to be within the first 16MB of available DMA addresses,
as required by ISA devices).

Note also that the above constraints on physical contiguity and
dma_mask may not apply if the platform has an IOMMU (a device which
maps an I/O DMA address to a physical memory address). However, to be
portable, device driver writers may *not* assume that such an IOMMU
exists.

Warnings: Memory coherency operates at a granularity called the cache
line width. In order for memory mapped by this API to operate
correctly, the mapped region must begin exactly on a cache line
boundary and end exactly on one (to prevent two separately mapped
regions from sharing a single cache line). Since the cache line size
may not be known at compile time, the API will not enforce this
requirement. Therefore, it is recommended that driver writers who
don't take special care to determine the cache line size at run time
only map virtual regions that begin and end on page boundaries (which
are guaranteed also to be cache line boundaries).

DMA_TO_DEVICE synchronisation must be done after the last modification
of the memory region by the software and before it is handed off to
the device. Once this primitive is used, memory covered by this
primitive should be treated as read-only by the device. If the device
may write to it at any point, it should be DMA_BIDIRECTIONAL (see
below).

DMA_FROM_DEVICE synchronisation must be done before the driver
accesses data that may be changed by the device. This memory should
be treated as read-only by the driver. If the driver needs to write
to it at any point, it should be DMA_BIDIRECTIONAL (see below).

DMA_BIDIRECTIONAL requires special handling: it means that the driver
isn't sure if the memory was modified before being handed off to the
device and also isn't sure if the device will also modify it. Thus,
you must always sync bidirectional memory twice: once before the
memory is handed off to the device (to make sure all memory changes
are flushed from the processor) and once before the data may be
accessed after being used by the device (to make sure any processor
cache lines are updated with data that the device may have changed).

void
dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
		 enum dma_data_direction direction)

Unmaps the region previously mapped. All the parameters must be
identical to those passed in (and returned) by the mapping API.

dma_addr_t
dma_map_page(struct device *dev, struct page *page,
	     unsigned long offset, size_t size,
	     enum dma_data_direction direction)
void
dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size,
	       enum dma_data_direction direction)

API for mapping and unmapping for pages. All the notes and warnings
for the other mapping APIs apply here. Also, although the <offset>
and <size> parameters are provided to do partial page mapping, it is
recommended that you never use these unless you really know what the
cache line width is.

int
dma_mapping_error(struct device *dev, dma_addr_t dma_addr)

In some circumstances dma_map_single() and dma_map_page() will fail to create
a mapping. A driver can check for these errors by testing the returned
DMA address with dma_mapping_error(). A non-zero return value means the mapping
could not be created and the driver should take appropriate action (e.g.
reduce current DMA mapping usage or delay and try again later).
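
Putting these together (a sketch; "buf" and "len" come from the
surrounding driver code), a typical single-buffer transmit path is:

	dma_addr_t dma;

	dma = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, dma))
		return -ENOMEM;		/* back off and retry later */

	/* ... hand "dma" to the device and wait for the transfer ... */

	dma_unmap_single(dev, dma, len, DMA_TO_DEVICE);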

int
dma_map_sg(struct device *dev, struct scatterlist *sg,
	   int nents, enum dma_data_direction direction)

Returns: the number of DMA address segments mapped (this may be shorter
than <nents> passed in if some elements of the scatter/gather list are
physically or virtually adjacent and an IOMMU maps them with a single
entry).

Please note that an sg list cannot be mapped again once it has been
mapped. The mapping process is allowed to destroy information in
the list.

As with the other mapping interfaces, dma_map_sg() can fail. When it
does, 0 is returned and the driver must take appropriate action. It is
critical that the driver do something; in the case of a block driver,
aborting the request or even oopsing is better than doing nothing and
corrupting the filesystem.

With scatterlists, you use the resulting mapping like this:

	int i, count = dma_map_sg(dev, sglist, nents, direction);
	struct scatterlist *sg;

	for_each_sg(sglist, sg, count, i) {
		hw_address[i] = sg_dma_address(sg);
		hw_len[i] = sg_dma_len(sg);
	}

where nents is the number of entries in the sglist.

The implementation is free to merge several consecutive sglist entries
into one (e.g. with an IOMMU, or if several pages just happen to be
physically contiguous) and returns the actual number of sg entries it
mapped them to. On failure, 0 is returned.

Then you should loop count times (note: this can be less than nents times)
and use sg_dma_address() and sg_dma_len() macros where you previously
accessed sg->address and sg->length as shown above.

void
dma_unmap_sg(struct device *dev, struct scatterlist *sg,
	     int nents, enum dma_data_direction direction)

Unmap the previously mapped scatter/gather list. All the parameters
must be the same as those passed into the scatter/gather mapping
API.

Note: <nents> must be the number you passed in, *not* the number of
DMA address entries returned.
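
For instance (continuing the scatterlist sketch above), the unmap must
use the original nents, not the returned count:

	int count = dma_map_sg(dev, sglist, nents, DMA_TO_DEVICE);

	if (count == 0)
		return -ENOMEM;

	/* ... use the "count" merged segments ... */

	dma_unmap_sg(dev, sglist, nents, DMA_TO_DEVICE);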

void
dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle, size_t size,
			enum dma_data_direction direction)
void
dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle, size_t size,
			   enum dma_data_direction direction)
void
dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg, int nents,
		    enum dma_data_direction direction)
void
dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg, int nents,
		       enum dma_data_direction direction)

Synchronise a single contiguous or scatter/gather mapping for the CPU
and device. With the sync_sg API, all the parameters must be the same
as those passed into the scatter/gather mapping API. With the sync_single
API, you can use dma_handle and size parameters that aren't identical to
those passed into the single mapping API to do a partial sync.

Notes: You must do this:

- Before reading values that have been written by DMA from the device
  (use the DMA_FROM_DEVICE direction)
- After writing values that will be written to the device using DMA
  (use the DMA_TO_DEVICE direction)
- Before *and* after handing memory to the device if the memory is
  DMA_BIDIRECTIONAL

See also dma_map_single().
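
As an illustration (a sketch of a receive buffer that stays mapped and
is reused; "dma" and "len" are from the earlier mapping), a partial
sync pair looks like this:

	/* buffer was mapped with dma_map_single(..., DMA_FROM_DEVICE) */

	dma_sync_single_for_cpu(dev, dma, len, DMA_FROM_DEVICE);
	/* ... the CPU may now read what the device wrote ... */

	dma_sync_single_for_device(dev, dma, len, DMA_FROM_DEVICE);
	/* ... the device may now write into the buffer again ... */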

dma_addr_t
dma_map_single_attrs(struct device *dev, void *cpu_addr, size_t size,
		     enum dma_data_direction dir,
		     struct dma_attrs *attrs)

void
dma_unmap_single_attrs(struct device *dev, dma_addr_t dma_addr,
		       size_t size, enum dma_data_direction dir,
		       struct dma_attrs *attrs)

int
dma_map_sg_attrs(struct device *dev, struct scatterlist *sgl,
		 int nents, enum dma_data_direction dir,
		 struct dma_attrs *attrs)

void
dma_unmap_sg_attrs(struct device *dev, struct scatterlist *sgl,
		   int nents, enum dma_data_direction dir,
		   struct dma_attrs *attrs)

The four functions above are just like the counterpart functions
without the _attrs suffixes, except that they pass an optional
struct dma_attrs*.

struct dma_attrs encapsulates a set of "DMA attributes". For the
definition of struct dma_attrs see linux/dma-attrs.h.

The interpretation of DMA attributes is architecture-specific, and
each attribute should be documented in Documentation/DMA-attributes.txt.

If struct dma_attrs* is NULL, the semantics of each of these
functions is identical to those of the corresponding function
without the _attrs suffix. As a result dma_map_single_attrs()
can generally replace dma_map_single(), etc.

As an example of the use of the *_attrs functions, here's how
you could pass an attribute DMA_ATTR_FOO when mapping memory
for DMA:

#include <linux/dma-attrs.h>
/* DMA_ATTR_FOO should be defined in linux/dma-attrs.h and
 * documented in Documentation/DMA-attributes.txt */
...

	DEFINE_DMA_ATTRS(attrs);
	dma_set_attr(DMA_ATTR_FOO, &attrs);
	....
	n = dma_map_sg_attrs(dev, sg, nents, DMA_TO_DEVICE, &attrs);
	....

Architectures that care about DMA_ATTR_FOO would check for its
presence in their implementations of the mapping and unmapping
routines, e.g.:

void whizco_dma_map_sg_attrs(struct device *dev, dma_addr_t dma_addr,
			     size_t size, enum dma_data_direction dir,
			     struct dma_attrs *attrs)
{
	....
	int foo = dma_get_attr(DMA_ATTR_FOO, attrs);
	....
	if (foo)
		/* twizzle the frobnozzle */
	....
}


Part II - Advanced dma_ usage
-----------------------------

Warning: These pieces of the DMA API should not be used in the
majority of cases, since they cater for unlikely corner cases that
don't belong in usual drivers.

If you don't understand how cache line coherency works between a
processor and an I/O device, you should not be using this part of the
API at all.

void *
dma_alloc_noncoherent(struct device *dev, size_t size,
		      dma_addr_t *dma_handle, gfp_t flag)

Identical to dma_alloc_coherent() except that the platform will
choose to return either consistent or non-consistent memory as it sees
fit. By using this API, you are guaranteeing to the platform that you
have all the correct and necessary sync points for this memory in the
driver should it choose to return non-consistent memory.

Note: where the platform can return consistent memory, it will
guarantee that the sync points become nops.

Warning: Handling non-consistent memory is a real pain. You should
only use this API if you positively know your driver will be
required to work on one of the rare (usually non-PCI) architectures
that simply cannot make consistent memory.

void
dma_free_noncoherent(struct device *dev, size_t size, void *cpu_addr,
		     dma_addr_t dma_handle)

Free memory allocated by the nonconsistent API. All parameters must
be identical to those passed in (and returned by
dma_alloc_noncoherent()).

int
dma_get_cache_alignment(void)

Returns the processor cache alignment. This is the absolute minimum
alignment *and* width that you must observe when either mapping
memory or doing partial flushes.

Notes: This API may return a number *larger* than the actual cache
line, but it will guarantee that one or more cache lines fit exactly
into the width returned by this call. It will also always be a power
of two for easy alignment.
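
For example (a sketch), a driver can round a partial-flush length up
to this alignment with the kernel's ALIGN() macro:

	int align = dma_get_cache_alignment();
	size_t flush_len = ALIGN(len, align);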

void
dma_cache_sync(struct device *dev, void *vaddr, size_t size,
	       enum dma_data_direction direction)

Do a partial sync of memory that was allocated by
dma_alloc_noncoherent(), starting at virtual address vaddr and
continuing on for size. Again, you *must* observe the cache line
boundaries when doing this.
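
A sketch of the required pairing (the size is illustrative; note the
sync before the device reads what the CPU wrote):

	void *vaddr;
	dma_addr_t dma;

	vaddr = dma_alloc_noncoherent(dev, PAGE_SIZE, &dma, GFP_KERNEL);
	if (!vaddr)
		return -ENOMEM;

	/* CPU writes, then pushes the data toward the device */
	memset(vaddr, 0, PAGE_SIZE);
	dma_cache_sync(dev, vaddr, PAGE_SIZE, DMA_TO_DEVICE);

	/* ... later ... */
	dma_free_noncoherent(dev, PAGE_SIZE, vaddr, dma);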

int
dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
			    dma_addr_t device_addr, size_t size, int flags)

Declare region of memory to be handed out by dma_alloc_coherent() when
it's asked for coherent memory for this device.

phys_addr is the CPU physical address to which the memory is currently
assigned (this will be ioremapped so the CPU can access the region).

device_addr is the DMA address the device needs to be programmed
with to actually address this memory (this will be handed out as the
dma_addr_t in dma_alloc_coherent()).

size is the size of the area (must be a multiple of PAGE_SIZE).

flags can be ORed together and are:

DMA_MEMORY_MAP - request that the memory returned from
dma_alloc_coherent() be directly writable.

DMA_MEMORY_IO - request that the memory returned from
dma_alloc_coherent() be addressable using read()/write()/memcpy_toio() etc.

One or both of these flags must be present.

DMA_MEMORY_INCLUDES_CHILDREN - make the declared memory be allocated by
dma_alloc_coherent() of any child devices of this one (for memory residing
on a bridge).

DMA_MEMORY_EXCLUSIVE - only allocate memory from the declared regions.
Do not allow dma_alloc_coherent() to fall back to system memory when
it's out of memory in the declared region.

The return value will be either DMA_MEMORY_MAP or DMA_MEMORY_IO and
must correspond to a passed in flag (i.e. no returning DMA_MEMORY_IO
if only DMA_MEMORY_MAP were passed in) for success or zero for
failure.

Note, for DMA_MEMORY_IO returns, all subsequent memory returned by
dma_alloc_coherent() may no longer be accessed directly, but instead
must be accessed using the correct bus functions. If your driver
isn't prepared to handle this contingency, it should not specify
DMA_MEMORY_IO in the input flags.

As a simplification for the platforms, only *one* such region of
memory may be declared per device.

For reasons of efficiency, most platforms choose to track the declared
region only at the granularity of a page. For smaller allocations,
you should use the dma_pool() API.
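
As an example (the addresses here are illustrative; a real driver
would take them from its bus resources), a device with 1MB of local
memory might declare it like this:

	if (!dma_declare_coherent_memory(dev, mem_phys, mem_device_addr,
					 1024 * 1024,
					 DMA_MEMORY_MAP | DMA_MEMORY_EXCLUSIVE))
		return -ENXIO;

	/* dma_alloc_coherent() for this device now allocates from there */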

void
dma_release_declared_memory(struct device *dev)

Remove the memory region previously declared from the system. This
API performs *no* in-use checking for this region and will return
unconditionally having removed all the required structures. It is the
driver's job to ensure that no parts of this memory region are
currently in use.

void *
dma_mark_declared_memory_occupied(struct device *dev,
				  dma_addr_t device_addr, size_t size)

This is used to occupy specific regions of the declared space
(dma_alloc_coherent() will hand out the first free region it finds).

device_addr is the *device* address of the region requested.

size is the size (and should be a page-sized multiple).

The return value will be either a pointer to the processor virtual
address of the memory, or an error (via PTR_ERR()) if any part of the
region is occupied.
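
For example (a sketch; "dev_addr" is the device address of the region
the driver needs), the result should be checked with IS_ERR():

	void *vaddr;

	vaddr = dma_mark_declared_memory_occupied(dev, dev_addr, PAGE_SIZE);
	if (IS_ERR(vaddr))
		return PTR_ERR(vaddr);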

Part III - Debug drivers use of the DMA-API
-------------------------------------------

The DMA-API as described above has some constraints. For example, DMA
addresses must be released with the corresponding function and with the same
size. With the advent of hardware IOMMUs it becomes more and more important
that drivers do not violate those constraints. In the worst case such a
violation can result in data corruption up to destroyed filesystems.

To debug drivers and find bugs in the usage of the DMA-API, checking code can
be compiled into the kernel which will tell the developer about those
violations. If your architecture supports it, you can select the "Enable
debugging of DMA-API usage" option in your kernel configuration. Enabling this
option has a performance impact. Do not enable it in production kernels.

If you boot the resulting kernel, it will contain code which does some
bookkeeping about what DMA memory was allocated for which device. If this
code detects an error it prints a warning message with some details into your
kernel log. An example warning message may look like this:

------------[ cut here ]------------
WARNING: at /data2/repos/linux-2.6-iommu/lib/dma-debug.c:448
	check_unmap+0x203/0x490()
Hardware name:
forcedeth 0000:00:08.0: DMA-API: device driver frees DMA memory with wrong
	function [device address=0x00000000640444be] [size=66 bytes] [mapped as
	single] [unmapped as page]
Modules linked in: nfsd exportfs bridge stp llc r8169
Pid: 0, comm: swapper Tainted: G        W  2.6.28-dmatest-09289-g8bb99c0 #1
Call Trace:
<IRQ>  [<ffffffff80240b22>] warn_slowpath+0xf2/0x130
[<ffffffff80647b70>] _spin_unlock+0x10/0x30
[<ffffffff80537e75>] usb_hcd_link_urb_to_ep+0x75/0xc0
[<ffffffff80647c22>] _spin_unlock_irqrestore+0x12/0x40
[<ffffffff8055347f>] ohci_urb_enqueue+0x19f/0x7c0
[<ffffffff80252f96>] queue_work+0x56/0x60
[<ffffffff80237e10>] enqueue_task_fair+0x20/0x50
[<ffffffff80539279>] usb_hcd_submit_urb+0x379/0xbc0
[<ffffffff803b78c3>] cpumask_next_and+0x23/0x40
[<ffffffff80235177>] find_busiest_group+0x207/0x8a0
[<ffffffff8064784f>] _spin_lock_irqsave+0x1f/0x50
[<ffffffff803c7ea3>] check_unmap+0x203/0x490
[<ffffffff803c8259>] debug_dma_unmap_page+0x49/0x50
[<ffffffff80485f26>] nv_tx_done_optimized+0xc6/0x2c0
[<ffffffff80486c13>] nv_nic_irq_optimized+0x73/0x2b0
[<ffffffff8026df84>] handle_IRQ_event+0x34/0x70
[<ffffffff8026ffe9>] handle_edge_irq+0xc9/0x150
[<ffffffff8020e3ab>] do_IRQ+0xcb/0x1c0
[<ffffffff8020c093>] ret_from_intr+0x0/0xa
<EOI> <4>---[ end trace f6435a98e2a38c0e ]---

The driver developer can find the driver and the device including a stacktrace
of the DMA-API call which caused this warning.

By default only the first error will result in a warning message. All other
errors will only be counted silently. This limitation exists to prevent the
code from flooding your kernel log. To support debugging a device driver,
this can be disabled via debugfs. See the debugfs interface documentation
below for details.

The debugfs directory for the DMA-API debugging code is called dma-api/. In
this directory the following files can currently be found:

	dma-api/all_errors	This file contains a numeric value. If this
				value is not equal to zero the debugging code
				will print a warning for every error it finds
				into the kernel log. Be careful with this
				option, as it can easily flood your logs.

	dma-api/disabled	This read-only file contains the character 'Y'
				if the debugging code is disabled. This can
				happen when it runs out of memory or if it was
				disabled at boot time.

	dma-api/error_count	This file is read-only and shows the total
				number of errors found.

	dma-api/num_errors	The number in this file shows how many
				warnings will be printed to the kernel log
				before it stops. This number is initialized to
				one at system boot and can be set by writing
				into this file.

	dma-api/min_free_entries
				This read-only file can be read to get the
				minimum number of free dma_debug_entries the
				allocator has ever seen. If this value goes
				down to zero the code will disable itself
				because it is no longer reliable.

	dma-api/num_free_entries
				The current number of free dma_debug_entries
				in the allocator.

	dma-api/driver-filter
				You can write a name of a driver into this file
				to limit the debug output to requests from that
				particular driver. Write an empty string to
				that file to disable the filter and see
				all errors again.

If you have this code compiled into your kernel it will be enabled by default.
If you want to boot without the bookkeeping anyway you can provide
'dma_debug=off' as a boot parameter. This will disable DMA-API debugging.
Notice that you cannot enable it again at runtime. You have to reboot to do
so.

If you want to see debug messages only for a special device driver you can
specify the dma_debug_driver=<drivername> parameter. This will enable the
driver filter at boot time. The debug code will only print errors for that
driver afterwards. This filter can be disabled or changed later using debugfs.

When the code disables itself at runtime this is most likely because it ran
out of dma_debug_entries. These entries are preallocated at boot. The number
of preallocated entries is defined per architecture. If it is too low for
you, boot with 'dma_debug_entries=<your_desired_number>' to overwrite the
architectural default.

void debug_dma_mapping_error(struct device *dev, dma_addr_t dma_addr);

The dma-debug interface debug_dma_mapping_error() helps debug drivers that
fail to check for DMA mapping errors on addresses returned by the
dma_map_single() and dma_map_page() interfaces. This interface clears a flag
set by debug_dma_map_page() to indicate that dma_mapping_error() has been
called by the driver. When the driver does the unmap, debug_dma_unmap()
checks the flag and, if it is still set, prints a warning message that
includes the call trace leading up to the unmap. This interface can be called
from dma_mapping_error() routines to enable DMA mapping error check
debugging.