1 Title : Kernel Probes (Kprobes)
2 Authors : Jim Keniston <jkenisto@us.ibm.com>
3 : Prasanna S Panchamukhi <prasanna.panchamukhi@gmail.com>
4 : Masami Hiramatsu <mhiramat@redhat.com>
5
6 CONTENTS
7
8 1. Concepts: Kprobes, Jprobes, Return Probes
9 2. Architectures Supported
10 3. Configuring Kprobes
11 4. API Reference
12 5. Kprobes Features and Limitations
13 6. Probe Overhead
14 7. TODO
15 8. Kprobes Example
16 9. Jprobes Example
17 10. Kretprobes Example
18 Appendix A: The kprobes debugfs interface
19 Appendix B: The kprobes sysctl interface
20
21 1. Concepts: Kprobes, Jprobes, Return Probes
22
23 Kprobes enables you to dynamically break into any kernel routine and
24 collect debugging and performance information non-disruptively. You
25 can trap at almost any kernel code address(*), specifying a handler
26 routine to be invoked when the breakpoint is hit.
27 (*: some parts of the kernel code cannot be trapped; see section 1.5, Blacklist)
28
29 There are currently three types of probes: kprobes, jprobes, and
30 kretprobes (also called return probes). A kprobe can be inserted
31 on virtually any instruction in the kernel. A jprobe is inserted at
32 the entry to a kernel function, and provides convenient access to the
33 function's arguments. A return probe fires when a specified function
34 returns.
35
36 In the typical case, Kprobes-based instrumentation is packaged as
37 a kernel module. The module's init function installs ("registers")
38 one or more probes, and the exit function unregisters them. A
39 registration function such as register_kprobe() specifies where
40 the probe is to be inserted and what handler is to be called when
41 the probe is hit.
42
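As a minimal sketch of such a module (the probed symbol "vfs_read" and all
function names here are illustrative; see samples/kprobes/kprobe_example.c
for the complete, authoritative example):

#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/kprobes.h>

/* Illustrative pre_handler: runs just before the probed instruction. */
static int my_pre_handler(struct kprobe *p, struct pt_regs *regs)
{
	pr_info("kprobe hit at %p\n", p->addr);
	return 0;
}

static struct kprobe my_kprobe = {
	.symbol_name = "vfs_read",	/* example probe point */
	.pre_handler = my_pre_handler,
};

static int __init my_kprobe_init(void)
{
	return register_kprobe(&my_kprobe);	/* install ("register") */
}

static void __exit my_kprobe_exit(void)
{
	unregister_kprobe(&my_kprobe);		/* remove on module unload */
}

module_init(my_kprobe_init);
module_exit(my_kprobe_exit);
MODULE_LICENSE("GPL");
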
43 There are also register_/unregister_*probes() functions for batch
44 registration/unregistration of a group of *probes. These functions
45 can speed up the unregistration process when you have to unregister
46 a lot of probes at once.
47
48 The next four subsections explain how the different types of
49 probes work and how jump optimization works. They explain certain
50 things that you'll need to know in order to make the best use of
51 Kprobes -- e.g., the difference between a pre_handler and
52 a post_handler, and how to use the maxactive and nmissed fields of
53 a kretprobe. But if you're in a hurry to start using Kprobes, you
54 can skip ahead to section 2.
55
56 1.1 How Does a Kprobe Work?
57
58 When a kprobe is registered, Kprobes makes a copy of the probed
59 instruction and replaces the first byte(s) of the probed instruction
60 with a breakpoint instruction (e.g., int3 on i386 and x86_64).
61
62 When a CPU hits the breakpoint instruction, a trap occurs, the CPU's
63 registers are saved, and control passes to Kprobes via the
64 notifier_call_chain mechanism. Kprobes executes the "pre_handler"
65 associated with the kprobe, passing the handler the addresses of the
66 kprobe struct and the saved registers.
67
68 Next, Kprobes single-steps its copy of the probed instruction.
69 (It would be simpler to single-step the actual instruction in place,
70 but then Kprobes would have to temporarily remove the breakpoint
71 instruction. This would open a small time window when another CPU
72 could sail right past the probepoint.)
73
74 After the instruction is single-stepped, Kprobes executes the
75 "post_handler," if any, that is associated with the kprobe.
76 Execution then continues with the instruction following the probepoint.
77
78 1.2 How Does a Jprobe Work?
79
80 A jprobe is implemented using a kprobe that is placed on a function's
81 entry point. It employs a simple mirroring principle to allow
82 seamless access to the probed function's arguments. The jprobe
83 handler routine should have the same signature (arg list and return
84 type) as the function being probed, and must always end by calling
85 the Kprobes function jprobe_return().
86
87 Here's how it works. When the probe is hit, Kprobes makes a copy of
88 the saved registers and a generous portion of the stack (see below).
89 Kprobes then points the saved instruction pointer at the jprobe's
90 handler routine, and returns from the trap. As a result, control
91 passes to the handler, which is presented with the same register and
92 stack contents as the probed function. When it is done, the handler
93 calls jprobe_return(), which traps again to restore the original stack
94 contents and processor state and switch to the probed function.
95
96 By convention, the callee owns its arguments, so gcc may produce code
97 that unexpectedly modifies that portion of the stack. This is why
98 Kprobes saves a copy of the stack and restores it after the jprobe
99 handler has run. Up to MAX_STACK_SIZE bytes are copied -- e.g.,
100 64 bytes on i386.
101
102 Note that the probed function's args may be passed on the stack
103 or in registers. The jprobe will work in either case, so long as the
104 handler's prototype matches that of the probed function.
105
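As a sketch, suppose we want to probe a kernel function with the hypothetical
prototype long my_subsys_write(int fd, const char *buf, size_t count). The
jprobe handler mirrors that prototype and ends with jprobe_return() (the exact
initialization of the .entry field may differ slightly between kernel versions):

#include <linux/kernel.h>
#include <linux/kprobes.h>

/* The handler mirrors the (hypothetical) probed function's prototype. */
static long jmy_subsys_write(int fd, const char *buf, size_t count)
{
	pr_info("my_subsys_write: fd=%d count=%zu\n", fd, count);
	jprobe_return();	/* mandatory; control goes back to Kprobes */
	return 0;		/* never reached */
}

static struct jprobe my_jprobe = {
	.entry = (void *)jmy_subsys_write,
	.kp = {
		.symbol_name = "my_subsys_write",  /* hypothetical symbol */
	},
};

/* register_jprobe(&my_jprobe) in the module's init function,
   unregister_jprobe(&my_jprobe) in its exit function. */
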
106 1.3 Return Probes
107
108 1.3.1 How Does a Return Probe Work?
109
110 When you call register_kretprobe(), Kprobes establishes a kprobe at
111 the entry to the function. When the probed function is called and this
112 probe is hit, Kprobes saves a copy of the return address, and replaces
113 the return address with the address of a "trampoline." The trampoline
114 is an arbitrary piece of code -- typically just a nop instruction.
115 At boot time, Kprobes registers a kprobe at the trampoline.
116
117 When the probed function executes its return instruction, control
118 passes to the trampoline and that probe is hit. Kprobes' trampoline
119 handler calls the user-specified return handler associated with the
120 kretprobe, then sets the saved instruction pointer to the saved return
121 address, and that's where execution resumes upon return from the trap.
122
123 While the probed function is executing, its return address is
124 stored in an object of type kretprobe_instance. Before calling
125 register_kretprobe(), the user sets the maxactive field of the
126 kretprobe struct to specify how many instances of the specified
127 function can be probed simultaneously. register_kretprobe()
128 pre-allocates the indicated number of kretprobe_instance objects.
129
130 For example, if the function is non-recursive and is called with a
131 spinlock held, maxactive = 1 should be enough. If the function is
132 non-recursive and can never relinquish the CPU (e.g., via a semaphore
133 or preemption), NR_CPUS should be enough. If maxactive <= 0, it is
134 set to a default value. If CONFIG_PREEMPT is enabled, the default
135 is max(10, 2*NR_CPUS). Otherwise, the default is NR_CPUS.
136
137 It's not a disaster if you set maxactive too low; you'll just miss
138 some probes. In the kretprobe struct, the nmissed field is set to
139 zero when the return probe is registered, and is incremented every
140 time the probed function is entered but there is no kretprobe_instance
141 object available for establishing the return probe.
142
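A minimal sketch of this bookkeeping (the probed symbol and the names are
illustrative):

#include <linux/kprobes.h>

static int my_ret_handler(struct kretprobe_instance *ri, struct pt_regs *regs)
{
	return 0;	/* runs when the probed function returns */
}

static struct kretprobe my_kretprobe = {
	.handler   = my_ret_handler,
	.maxactive = 20,		/* concurrent instances to allow */
	.kp = {
		.symbol_name = "do_fork",	/* example probe point */
	},
};

/* After register_kretprobe(&my_kretprobe), my_kretprobe.nmissed counts the
   times the function was entered with no kretprobe_instance available. */
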
143 1.3.2 Kretprobe entry-handler
144
145 Kretprobes also provides an optional user-specified handler which runs
146 on function entry. This handler is specified by setting the entry_handler
147 field of the kretprobe struct. Whenever the kprobe placed by kretprobe at the
148 function entry is hit, the user-defined entry_handler, if any, is invoked.
149 If the entry_handler returns 0 (success) then a corresponding return handler
150 is guaranteed to be called upon function return. If the entry_handler
151 returns a non-zero error then Kprobes leaves the return address as is, and
152 the kretprobe has no further effect for that particular function instance.
153
154 Multiple entry and return handler invocations are matched using the unique
155 kretprobe_instance object associated with them. Additionally, a user
156 may also specify per return-instance private data to be part of each
157 kretprobe_instance object. This is especially useful when sharing private
158 data between corresponding user entry and return handlers. The size of each
159 private data object can be specified at kretprobe registration time by
160 setting the data_size field of the kretprobe struct. This data can be
161 accessed through the data field of each kretprobe_instance object.
162
163 If the probed function is entered but no kretprobe_instance object
164 is available, then in addition to incrementing the nmissed count,
165 the user entry_handler invocation is also skipped.
166
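For example, the entry handler can timestamp function entry in the
per-instance data and the return handler can report the elapsed time; the
sketch below is modeled on samples/kprobes/kretprobe_example.c, with
illustrative names and probe point:

#include <linux/kernel.h>
#include <linux/kprobes.h>
#include <linux/ktime.h>

struct my_data {
	ktime_t entry_stamp;		/* per return-instance private data */
};

static int my_entry_handler(struct kretprobe_instance *ri, struct pt_regs *regs)
{
	struct my_data *data = (struct my_data *)ri->data;

	data->entry_stamp = ktime_get();
	return 0;	/* 0 => the return handler is guaranteed to run */
}

static int my_ret_handler(struct kretprobe_instance *ri, struct pt_regs *regs)
{
	struct my_data *data = (struct my_data *)ri->data;
	s64 delta = ktime_to_ns(ktime_sub(ktime_get(), data->entry_stamp));

	pr_info("probed function took %lld ns to return\n", (long long)delta);
	return 0;
}

static struct kretprobe my_kretprobe = {
	.entry_handler = my_entry_handler,
	.handler       = my_ret_handler,
	.data_size     = sizeof(struct my_data), /* size of ri->data */
	.maxactive     = 20,
	.kp.symbol_name = "do_fork",		 /* example probe point */
};
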
167 1.4 How Does Jump Optimization Work?
168
169 If your kernel is built with CONFIG_OPTPROBES=y (currently this flag
170 is automatically set to 'y' on x86/x86-64 for non-preemptive kernels) and
171 the "debug.kprobes_optimization" sysctl parameter is set to 1 (see
172 sysctl(8)), Kprobes tries to reduce probe-hit overhead by using a jump
173 instruction instead of a breakpoint instruction at each probepoint.
174
175 1.4.1 Init a Kprobe
176
177 When a probe is registered, before attempting this optimization,
178 Kprobes inserts an ordinary, breakpoint-based kprobe at the specified
179 address. So, even if it's not possible to optimize this particular
180 probepoint, there'll be a probe there.
181
182 1.4.2 Safety Check
183
184 Before optimizing a probe, Kprobes performs the following safety checks:
185
186 - Kprobes verifies that the region that will be replaced by the jump
187 instruction (the "optimized region") lies entirely within one function.
188 (A jump instruction is multiple bytes, and so may overlay multiple
189 instructions.)
190
191 - Kprobes analyzes the entire function and verifies that there is no
192 jump into the optimized region. Specifically:
193 - the function contains no indirect jump;
194 - the function contains no instruction that causes an exception (since
195 the fixup code triggered by the exception could jump back into the
196 optimized region -- Kprobes checks the exception tables to verify this);
197 and
198 - there is no near jump to the optimized region (other than to the first
199 byte).
200
201 - For each instruction in the optimized region, Kprobes verifies that
202 the instruction can be executed out of line.
203
204 1.4.3 Preparing Detour Buffer
205
206 Next, Kprobes prepares a "detour" buffer, which contains the following
207 instruction sequence:
208 - code to push the CPU's registers (emulating a breakpoint trap)
209 - a call to the trampoline code, which calls the user's probe handlers.
210 - code to restore registers
211 - the instructions from the optimized region
212 - a jump back to the original execution path.
213
214 1.4.4 Pre-optimization
215
216 After preparing the detour buffer, Kprobes verifies that none of the
217 following situations exist:
218 - The probe has either a break_handler (i.e., it's a jprobe) or a
219 post_handler.
220 - Other instructions in the optimized region are probed.
221 - The probe is disabled.
222 In any of the above cases, Kprobes won't start optimizing the probe.
223 Since these are temporary situations, Kprobes tries to start
224 optimizing it again once the situation changes.
225
226 If the kprobe can be optimized, Kprobes enqueues the kprobe to an
227 optimizing list, and kicks the kprobe-optimizer workqueue to optimize
228 it. If the to-be-optimized probepoint is hit before being optimized,
229 Kprobes returns control to the original instruction path by setting
230 the CPU's instruction pointer to the copied code in the detour buffer
231 -- thus at least avoiding the single-step.
232
233 1.4.5 Optimization
234
235 The Kprobe-optimizer doesn't insert the jump instruction immediately;
236 rather, it calls synchronize_sched() for safety first, because it's
237 possible for a CPU to be interrupted in the middle of executing the
238 optimized region(*). As you know, synchronize_sched() can ensure
239 that all interruptions that were active when synchronize_sched()
240 was called are done, but only if CONFIG_PREEMPT=n. So, this version
241 of kprobe optimization supports only kernels with CONFIG_PREEMPT=n.(**)
242
243 After that, the Kprobe-optimizer calls stop_machine() to replace
244 the optimized region with a jump instruction to the detour buffer,
245 using text_poke_smp().
246
247 1.4.6 Unoptimization
248
249 When an optimized kprobe is unregistered, disabled, or blocked by
250 another kprobe, it will be unoptimized. If this happens before
251 the optimization is complete, the kprobe is just dequeued from the
252 optimized list. If the optimization has been done, the jump is
253 replaced with the original code (except for an int3 breakpoint in
254 the first byte) by using text_poke_smp().
255
256 (*)Please imagine that the 2nd instruction is interrupted and then
257 the optimizer replaces the 2nd instruction with the jump *address*
258 while the interrupt handler is running. When the interrupt
259 returns to the original address, there is no valid instruction
260 there, and the result is unpredictable.
261
262 (**)This optimization-safety checking may be replaced with the
263 stop-machine method that ksplice uses for supporting a CONFIG_PREEMPT=y
264 kernel.
265
266 NOTE for geeks:
267 The jump optimization changes the kprobe's pre_handler behavior.
268 Without optimization, the pre_handler can change the kernel's execution
269 path by changing regs->ip and returning 1. However, when the probe
270 is optimized, that modification is ignored. Thus, if you want to
271 tweak the kernel's execution path, you need to suppress optimization,
272 using one of the following techniques:
273 - Specify an empty function for the kprobe's post_handler or break_handler.
274 or
275 - Execute 'sysctl -w debug.kprobes_optimization=0'
276
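For instance, registering the kprobe with a do-nothing post_handler is enough
to keep that probepoint unoptimized (a sketch; the symbol and names are
illustrative):

#include <linux/kprobes.h>

static int my_pre_handler(struct kprobe *p, struct pt_regs *regs)
{
	/* A pre_handler that modifies regs->ip and returns 1 only takes
	   effect if the probe is not jump-optimized. */
	return 0;
}

/* Empty post_handler: its mere presence suppresses jump optimization. */
static void my_post_handler(struct kprobe *p, struct pt_regs *regs,
			    unsigned long flags)
{
}

static struct kprobe my_kprobe = {
	.symbol_name  = "vfs_read",		/* example probe point */
	.pre_handler  = my_pre_handler,
	.post_handler = my_post_handler,
};
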
277 1.5 Blacklist
278
279 Kprobes can probe most of the kernel except itself. This means
280 that there are some functions that kprobes cannot probe. Probing
281 (trapping) such functions can cause a recursive trap (e.g., a double
282 fault), or the nested probe handler may never be called.
283 Kprobes manages such functions as a blacklist.
284 If you want to add a function to the blacklist, you just need
285 to (1) include linux/kprobes.h and (2) use the NOKPROBE_SYMBOL() macro
286 to specify the blacklisted function.
287 Kprobes checks the given probe address against the blacklist and
288 rejects registering it if the given address is in the blacklist.
289
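For example, a subsystem can blacklist one of its own functions like this
(my_trap_path_helper is a hypothetical function name):

#include <linux/kprobes.h>

static void my_trap_path_helper(void)
{
	/* code that must never be probed, e.g. used in the trap path */
}
NOKPROBE_SYMBOL(my_trap_path_helper);	/* adds it to the kprobes blacklist */
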
290 2. Architectures Supported
291
292 Kprobes, jprobes, and return probes are implemented on the following
293 architectures:
294
295 - i386 (Supports jump optimization)
296 - x86_64 (AMD-64, EM64T) (Supports jump optimization)
297 - ppc64
298 - ia64 (Does not support probes on instruction slot1.)
299 - sparc64 (Return probes not yet implemented.)
300 - arm
301 - ppc
302 - mips
303 - s390
304
305 3. Configuring Kprobes
306
307 When configuring the kernel using make menuconfig/xconfig/oldconfig,
308 ensure that CONFIG_KPROBES is set to "y". Under "General setup", look
309 for "Kprobes".
310
311 So that you can load and unload Kprobes-based instrumentation modules,
312 make sure "Loadable module support" (CONFIG_MODULES) and "Module
313 unloading" (CONFIG_MODULE_UNLOAD) are set to "y".
314
315 Also make sure that CONFIG_KALLSYMS and perhaps even CONFIG_KALLSYMS_ALL
316 are set to "y", since kallsyms_lookup_name() is used by the in-kernel
317 kprobe address resolution code.
318
319 If you need to insert a probe in the middle of a function, you may find
320 it useful to "Compile the kernel with debug info" (CONFIG_DEBUG_INFO),
321 so you can use "objdump -d -l vmlinux" to see the source-to-object
322 code mapping.
323
324 4. API Reference
325
326 The Kprobes API includes a "register" function and an "unregister"
327 function for each type of probe. The API also includes "register_*probes"
328 and "unregister_*probes" functions for (un)registering arrays of probes.
329 Here are terse, mini-man-page specifications for these functions and
330 the associated probe handlers that you'll write. See the files in the
331 samples/kprobes/ sub-directory for examples.
332
333 4.1 register_kprobe
334
335 #include <linux/kprobes.h>
336 int register_kprobe(struct kprobe *kp);
337
338 Sets a breakpoint at the address kp->addr. When the breakpoint is
339 hit, Kprobes calls kp->pre_handler. After the probed instruction
340 is single-stepped, Kprobes calls kp->post_handler. If a fault
341 occurs during execution of kp->pre_handler or kp->post_handler,
342 or during single-stepping of the probed instruction, Kprobes calls
343 kp->fault_handler. Any or all handlers can be NULL. If KPROBE_FLAG_DISABLED
344 is set in kp->flags, the kprobe is registered but disabled, so its
345 handlers are not invoked until enable_kprobe(kp) is called.
346
347 NOTE:
348 1. With the introduction of the "symbol_name" field to struct kprobe,
349 the probepoint address resolution will now be taken care of by the kernel.
350 The following will now work:
351
352 kp.symbol_name = "symbol_name";
353
354 (64-bit powerpc intricacies such as function descriptors are handled
355 transparently)
356
357 2. Use the "offset" field of struct kprobe if the offset into the symbol
358 to install a probepoint is known. This field is used to calculate the
359 probepoint.
360
361 3. Specify either the kprobe "symbol_name" OR the "addr". If both are
362 specified, kprobe registration will fail with -EINVAL.
363
364 4. With CISC architectures (such as i386 and x86_64), the kprobes code
365 does not validate whether kprobe.addr lies on an instruction boundary.
366 Use "offset" with caution.
367
368 register_kprobe() returns 0 on success, or a negative errno otherwise.
369
370 User's pre-handler (kp->pre_handler):
371 #include <linux/kprobes.h>
372 #include <linux/ptrace.h>
373 int pre_handler(struct kprobe *p, struct pt_regs *regs);
374
375 Called with p pointing to the kprobe associated with the breakpoint,
376 and regs pointing to the struct containing the registers saved when
377 the breakpoint was hit. Return 0 here unless you're a Kprobes geek.
378
379 User's post-handler (kp->post_handler):
380 #include <linux/kprobes.h>
381 #include <linux/ptrace.h>
382 void post_handler(struct kprobe *p, struct pt_regs *regs,
383 unsigned long flags);
384
385 p and regs are as described for the pre_handler. flags always seems
386 to be zero.
387
388 User's fault-handler (kp->fault_handler):
389 #include <linux/kprobes.h>
390 #include <linux/ptrace.h>
391 int fault_handler(struct kprobe *p, struct pt_regs *regs, int trapnr);
392
393 p and regs are as described for the pre_handler. trapnr is the
394 architecture-specific trap number associated with the fault (e.g.,
395 on i386, 13 for a general protection fault or 14 for a page fault).
396 Returns 1 if it successfully handled the exception.
397
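Putting these pieces together, a sketch of a kprobe with all three handlers
(the symbol name, offset and handler bodies are illustrative):

#include <linux/kernel.h>
#include <linux/kprobes.h>
#include <linux/ptrace.h>

static int my_pre(struct kprobe *p, struct pt_regs *regs)
{
	pr_info("pre_handler: probe at %p hit\n", p->addr);
	return 0;
}

static void my_post(struct kprobe *p, struct pt_regs *regs,
		    unsigned long flags)
{
	pr_info("post_handler: single-step of %p done\n", p->addr);
}

static int my_fault(struct kprobe *p, struct pt_regs *regs, int trapnr)
{
	pr_info("fault_handler: trap %d while handling %p\n", trapnr, p->addr);
	return 0;	/* 0 => let the kernel handle the fault as usual */
}

static struct kprobe my_kprobe = {
	.symbol_name   = "vfs_read",	/* example symbol */
	.offset        = 0,		/* probe at the function entry */
	.pre_handler   = my_pre,
	.post_handler  = my_post,
	.fault_handler = my_fault,
};

/* register_kprobe(&my_kprobe) returns 0 on success, negative errno on error. */
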
398 4.2 register_jprobe
399
400 #include <linux/kprobes.h>
401 int register_jprobe(struct jprobe *jp)
402
403 Sets a breakpoint at the address jp->kp.addr, which must be the address
404 of the first instruction of a function. When the breakpoint is hit,
405 Kprobes runs the handler whose address is jp->entry.
406
407 The handler should have the same arg list and return type as the probed
408 function; and just before it returns, it must call jprobe_return().
409 (The handler never actually returns, since jprobe_return() returns
410 control to Kprobes.) If the probed function is declared asmlinkage
411 or anything else that affects how args are passed, the handler's
412 declaration must match.
413
414 register_jprobe() returns 0 on success, or a negative errno otherwise.
415
416 4.3 register_kretprobe
417
418 #include <linux/kprobes.h>
419 int register_kretprobe(struct kretprobe *rp);
420
421 Establishes a return probe for the function whose address is
422 rp->kp.addr. When that function returns, Kprobes calls rp->handler.
423 You must set rp->maxactive appropriately before you call
424 register_kretprobe(); see "How Does a Return Probe Work?" for details.
425
426 register_kretprobe() returns 0 on success, or a negative errno
427 otherwise.
428
429 User's return-probe handler (rp->handler):
430 #include <linux/kprobes.h>
431 #include <linux/ptrace.h>
432 int kretprobe_handler(struct kretprobe_instance *ri, struct pt_regs *regs);
433
434 regs is as described for kprobe.pre_handler. ri points to the
435 kretprobe_instance object, of which the following fields may be
436 of interest:
437 - ret_addr: the return address
438 - rp: points to the corresponding kretprobe object
439 - task: points to the corresponding task struct
440 - data: points to per return-instance private data; see "Kretprobe
441 entry-handler" for details.
442
443 The regs_return_value(regs) macro provides a simple abstraction to
444 extract the return value from the appropriate register as defined by
445 the architecture's ABI.
446
447 The handler's return value is currently ignored.
448
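A sketch of a return handler that reports the return value via these fields
(the kretprobe itself is assumed to be set up and registered elsewhere; names
are illustrative):

#include <linux/kernel.h>
#include <linux/kprobes.h>
#include <linux/ptrace.h>

/* Illustrative rp->handler: report where we returned from and what. */
static int my_ret_handler(struct kretprobe_instance *ri, struct pt_regs *regs)
{
	pr_info("%s returned %lu (return address %p)\n",
		ri->rp->kp.symbol_name,		/* corresponding kretprobe */
		regs_return_value(regs),	/* return value, per the ABI */
		ri->ret_addr);			/* the real return address */
	return 0;				/* return value is ignored */
}
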
449 4.4 unregister_*probe
450
451 #include <linux/kprobes.h>
452 void unregister_kprobe(struct kprobe *kp);
453 void unregister_jprobe(struct jprobe *jp);
454 void unregister_kretprobe(struct kretprobe *rp);
455
456 Removes the specified probe. The unregister function can be called
457 at any time after the probe has been registered.
458
459 NOTE:
460 If the functions find an incorrect probe (e.g., an unregistered probe),
461 they clear the addr field of the probe.
462
463 4.5 register_*probes
464
465 #include <linux/kprobes.h>
466 int register_kprobes(struct kprobe **kps, int num);
467 int register_kretprobes(struct kretprobe **rps, int num);
468 int register_jprobes(struct jprobe **jps, int num);
469
470 Registers each of the num probes in the specified array. If any
471 error occurs during registration, all probes in the array, up to
472 the bad probe, are safely unregistered before the register_*probes
473 function returns.
474 - kps/rps/jps: an array of pointers to *probe data structures
475 - num: the number of the array entries.
476
477 NOTE:
478 You have to allocate (or define) an array of pointers and set all
479 of the array entries before using these functions.
480
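A sketch of batch registration with two probes (symbols and names are
illustrative):

#include <linux/kernel.h>
#include <linux/kprobes.h>

static int count_hit(struct kprobe *p, struct pt_regs *regs)
{
	return 0;	/* trivial pre_handler shared by both probes */
}

static struct kprobe probe_a = {
	.symbol_name = "vfs_read",	/* example symbols */
	.pre_handler = count_hit,
};

static struct kprobe probe_b = {
	.symbol_name = "vfs_write",
	.pre_handler = count_hit,
};

/* The array of pointers must be fully set up before registration. */
static struct kprobe *my_probes[] = { &probe_a, &probe_b };

static int my_register_all(void)
{
	/* On error, probes registered so far are unregistered again. */
	return register_kprobes(my_probes, ARRAY_SIZE(my_probes));
}

static void my_unregister_all(void)
{
	unregister_kprobes(my_probes, ARRAY_SIZE(my_probes));
}
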
481 4.6 unregister_*probes
482
483 #include <linux/kprobes.h>
484 void unregister_kprobes(struct kprobe **kps, int num);
485 void unregister_kretprobes(struct kretprobe **rps, int num);
486 void unregister_jprobes(struct jprobe **jps, int num);
487
488 Removes each of the num probes in the specified array at once.
489
490 NOTE:
491 If the functions find some incorrect probes (e.g., unregistered
492 probes) in the specified array, they clear the addr field of those
493 incorrect probes. However, other probes in the array are
494 unregistered correctly.
495
496 4.7 disable_*probe
497
498 #include <linux/kprobes.h>
499 int disable_kprobe(struct kprobe *kp);
500 int disable_kretprobe(struct kretprobe *rp);
501 int disable_jprobe(struct jprobe *jp);
502
503 Temporarily disables the specified *probe. You can enable it again by using
504 enable_*probe(). The probe must already have been registered.
505
506 4.8 enable_*probe
507
508 #include <linux/kprobes.h>
509 int enable_kprobe(struct kprobe *kp);
510 int enable_kretprobe(struct kretprobe *rp);
511 int enable_jprobe(struct jprobe *jp);
512
513 Enables a *probe which has been disabled by disable_*probe(). The probe
514 must already have been registered.
515
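A sketch of temporarily muting and re-arming a probe that was registered
earlier (names are illustrative; error handling kept minimal):

#include <linux/kernel.h>
#include <linux/kprobes.h>

static struct kprobe my_kprobe = {
	.symbol_name = "vfs_read",	/* example; registered elsewhere */
};

static void my_pause_probing(void)
{
	int ret = disable_kprobe(&my_kprobe);	/* handlers no longer invoked */

	if (ret < 0)
		pr_err("disable_kprobe failed: %d\n", ret);
}

static void my_resume_probing(void)
{
	int ret = enable_kprobe(&my_kprobe);	/* handlers invoked again */

	if (ret < 0)
		pr_err("enable_kprobe failed: %d\n", ret);
}
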
516 5. Kprobes Features and Limitations
517
518 Kprobes allows multiple probes at the same address. Currently,
519 however, there cannot be multiple jprobes on the same function at
520 the same time. Also, a probepoint for which there is a jprobe or
521 a post_handler cannot be optimized. So if you install a jprobe,
522 or a kprobe with a post_handler, at an optimized probepoint, the
523 probepoint will be unoptimized automatically.
524
525 In general, you can install a probe anywhere in the kernel.
526 In particular, you can probe interrupt handlers. Known exceptions
527 are discussed in this section.
528
529 The register_*probe functions will return -EINVAL if you attempt
530 to install a probe in the code that implements Kprobes (mostly
531 kernel/kprobes.c and arch/*/kernel/kprobes.c, but also functions such
532 as do_page_fault and notifier_call_chain).
533
534 If you install a probe in an inline-able function, Kprobes makes
535 no attempt to chase down all inline instances of the function and
536 install probes there. gcc may inline a function without being asked,
537 so keep this in mind if you're not seeing the probe hits you expect.
538
539 A probe handler can modify the environment of the probed function
540 -- e.g., by modifying kernel data structures, or by modifying the
541 contents of the pt_regs struct (which are restored to the registers
542 upon return from the breakpoint). So Kprobes can be used, for example,
543 to install a bug fix or to inject faults for testing. Kprobes, of
544 course, has no way to distinguish the deliberately injected faults
545 from the accidental ones. Don't drink and probe.
546
547 Kprobes makes no attempt to prevent probe handlers from stepping on
548 each other -- e.g., probing printk() and then calling printk() from a
549 probe handler. If a probe handler hits a probe, that second probe's
550 handlers won't be run in that instance, and the kprobe.nmissed member
551 of the second probe will be incremented.
552
553 As of Linux v2.6.15-rc1, multiple handlers (or multiple instances of
554 the same handler) may run concurrently on different CPUs.
555
556 Kprobes does not use mutexes or allocate memory except during
557 registration and unregistration.
558
559 Probe handlers are run with preemption disabled. Depending on the
560 architecture and optimization state, handlers may also run with
561 interrupts disabled (although kretprobe handlers and optimized kprobe
562 handlers run with interrupts enabled on x86/x86-64). In any case,
563 your handler should not yield the CPU (e.g., by attempting to acquire
564 a semaphore).
565
566 Since a return probe is implemented by replacing the return
567 address with the trampoline's address, stack backtraces and calls
568 to __builtin_return_address() will typically yield the trampoline's
569 address instead of the real return address for kretprobed functions.
570 (As far as we can tell, __builtin_return_address() is used only
571 for instrumentation and error reporting.)
572
573 If the number of times a function is called does not match the number
574 of times it returns, registering a return probe on that function may
575 produce undesirable results. In such a case, a line:
576 kretprobe BUG!: Processing kretprobe d000000000041aa8 @ c00000000004f48c
577 gets printed. With this information, one will be able to correlate the
578 exact instance of the kretprobe that caused the problem. We have the
579 do_exit() case covered. do_execve() and do_fork() are not an issue.
580 We're unaware of other specific cases where this could be a problem.
581
582 If, upon entry to or exit from a function, the CPU is running on
583 a stack other than that of the current task, registering a return
584 probe on that function may produce undesirable results. For this
585 reason, Kprobes doesn't support return probes (or kprobes or jprobes)
586 on the x86_64 version of __switch_to(); the registration functions
587 return -EINVAL.
588
589 On x86/x86-64, since the Jump Optimization of Kprobes modifies
590 instructions widely, there are some limitations to optimization. To
591 explain it, we introduce some terminology. Imagine a 3-instruction
592 sequence consisting of two 2-byte instructions and one 3-byte
593 instruction.
594
595 IA
596 |
597 [-2][-1][0][1][2][3][4][5][6][7]
598 [ins1][ins2][ ins3 ]
599 [<- DCR ->]
600 [<- JTPR ->]
601
602 ins1: 1st Instruction
603 ins2: 2nd Instruction
604 ins3: 3rd Instruction
605 IA: Insertion Address
606 JTPR: Jump Target Prohibition Region
607 DCR: Detoured Code Region
608
609 The instructions in DCR are copied to the out-of-line buffer
610 of the kprobe, because the bytes in DCR are replaced by
611 a 5-byte jump instruction. So there are several limitations.
612
613 a) The instructions in DCR must be relocatable.
614 b) The instructions in DCR must not include a call instruction.
615 c) JTPR must not be targeted by any jump or call instruction.
616 d) DCR must not straddle the border between functions.
617
618 Anyway, these limitations are checked by the in-kernel instruction
619 decoder, so you don't need to worry about that.
620
621 6. Probe Overhead
622
623 On a typical CPU in use in 2005, a kprobe hit takes 0.5 to 1.0
624 microseconds to process. Specifically, a benchmark that hits the same
625 probepoint repeatedly, firing a simple handler each time, reports 1-2
626 million hits per second, depending on the architecture. A jprobe or
627 return-probe hit typically takes 50-75% longer than a kprobe hit.
628 When you have a return probe set on a function, adding a kprobe at
629 the entry to that function adds essentially no overhead.
630
631 Here are sample overhead figures (in usec) for different architectures.
632 k = kprobe; j = jprobe; r = return probe; kr = kprobe + return probe
633 on same function; jr = jprobe + return probe on same function
634
635 i386: Intel Pentium M, 1495 MHz, 2957.31 bogomips
636 k = 0.57 usec; j = 1.00; r = 0.92; kr = 0.99; jr = 1.40
637
638 x86_64: AMD Opteron 246, 1994 MHz, 3971.48 bogomips
639 k = 0.49 usec; j = 0.76; r = 0.80; kr = 0.82; jr = 1.07
640
641 ppc64: POWER5 (gr), 1656 MHz (SMT disabled, 1 virtual CPU per physical CPU)
642 k = 0.77 usec; j = 1.31; r = 1.26; kr = 1.45; jr = 1.99
643
644 6.1 Optimized Probe Overhead
645
646 Typically, an optimized kprobe hit takes 0.07 to 0.1 microseconds to
647 process. Here are sample overhead figures (in usec) for x86 architectures.
648 k = unoptimized kprobe, b = boosted (single-step skipped), o = optimized kprobe,
649 r = unoptimized kretprobe, rb = boosted kretprobe, ro = optimized kretprobe.
650
651 i386: Intel(R) Xeon(R) E5410, 2.33GHz, 4656.90 bogomips
652 k = 0.80 usec; b = 0.33; o = 0.05; r = 1.10; rb = 0.61; ro = 0.33
653
654 x86-64: Intel(R) Xeon(R) E5410, 2.33GHz, 4656.90 bogomips
655 k = 0.99 usec; b = 0.43; o = 0.06; r = 1.24; rb = 0.68; ro = 0.30
656
657 7. TODO
658
659 a. SystemTap (http://sourceware.org/systemtap): Provides a simplified
660 programming interface for probe-based instrumentation. Try it out.
661 b. Kernel return probes for sparc64.
662 c. Support for other architectures.
663 d. User-space probes.
664 e. Watchpoint probes (which fire on data references).
665
666 8. Kprobes Example
667
668 See samples/kprobes/kprobe_example.c
669
670 9. Jprobes Example
671
672 See samples/kprobes/jprobe_example.c
673
674 10. Kretprobes Example
675
676 See samples/kprobes/kretprobe_example.c
677
678 For additional information on Kprobes, refer to the following URLs:
679 http://www-106.ibm.com/developerworks/library/l-kprobes.html?ca=dgr-lnxw42Kprobe
680 http://www.redhat.com/magazine/005mar05/features/kprobes/
681 http://www-users.cs.umn.edu/~boutcher/kprobes/
682 http://www.linuxsymposium.org/2006/linuxsymposium_procv2.pdf (pages 101-115)
683
684
685 Appendix A: The kprobes debugfs interface
686
687 With recent kernels (> 2.6.20) the list of registered kprobes is visible
688 under the /sys/kernel/debug/kprobes/ directory (assuming debugfs is mounted at /sys/kernel/debug).
689
690 /sys/kernel/debug/kprobes/list: Lists all registered probes on the system
691
692 c015d71a k vfs_read+0x0
693 c011a316 j do_fork+0x0
694 c03dedc5 r tcp_v4_rcv+0x0
695
696 The first column provides the kernel address where the probe is inserted.
697 The second column identifies the type of probe (k - kprobe, r - kretprobe
698 and j - jprobe), while the third column specifies the symbol+offset of
699 the probe. If the probed function belongs to a module, the module name
700 is also specified. The following columns show the probe status. If the probe is on
701 a virtual address that is no longer valid (module init sections, module
702 virtual addresses that correspond to modules that've been unloaded),
703 such probes are marked with [GONE]. If the probe is temporarily disabled,
704 such probes are marked with [DISABLED]. If the probe is optimized, it is
705 marked with [OPTIMIZED]. If the probe is ftrace-based, it is marked with
706 [FTRACE].
707
708 /sys/kernel/debug/kprobes/enabled: Turn kprobes ON/OFF forcibly.
709
710 Provides a knob to globally and forcibly turn registered kprobes ON or OFF.
711 By default, all kprobes are enabled. By echoing "0" to this file, all
712 registered probes will be disarmed, until a "1" is echoed to this
713 file. Note that this knob just disarms and arms all kprobes and doesn't
714 change each probe's disabling state. This means that disabled kprobes (marked
715 [DISABLED]) will not be enabled if you turn ON all kprobes with this knob.
716
717
718 Appendix B: The kprobes sysctl interface
719
720 /proc/sys/debug/kprobes-optimization: Turn kprobes optimization ON/OFF.
721
722 When CONFIG_OPTPROBES=y, this sysctl interface appears and it provides
723 a knob to globally and forcibly turn jump optimization (see section
724 1.4) ON or OFF. By default, jump optimization is allowed (ON).
725 If you echo "0" to this file or set "debug.kprobes_optimization" to
726 0 via sysctl, all optimized probes will be unoptimized, and any new
727 probes registered after that will not be optimized. Note that this
728 knob *changes* the optimized state. This means that optimized probes
729 (marked [OPTIMIZED]) will be unoptimized ([OPTIMIZED] tag will be
730 removed). If the knob is turned on, they will be optimized again.
731