1\input texinfo @c -*-texinfo-*-
2@setfilename gprof.info
3@settitle GNU gprof
4@setchapternewpage odd
5
6@ifinfo
7@c This is a dir.info fragment to support semi-automated addition of
8@c manuals to an info tree. zoo@cygnus.com is developing this facility.
9@format
10START-INFO-DIR-ENTRY
11* gprof: (gprof). Profiling your program's execution
12END-INFO-DIR-ENTRY
13@end format
14@end ifinfo
15
16@ifinfo
17This file documents the gprof profiler of the GNU system.
18
19Copyright (C) 1988, 1992, 1997, 1998, 1999 Free Software Foundation, Inc.
20
21Permission is granted to make and distribute verbatim copies of
22this manual provided the copyright notice and this permission notice
23are preserved on all copies.
24
25@ignore
26Permission is granted to process this file through TeX and print the
27results, provided the printed document carries copying permission
28notice identical to this one except for the removal of this paragraph
29(this paragraph not being relevant to the printed manual).
30
31@end ignore
32Permission is granted to copy and distribute modified versions of this
33manual under the conditions for verbatim copying, provided that the entire
34resulting derived work is distributed under the terms of a permission
35notice identical to this one.
36
37Permission is granted to copy and distribute translations of this manual
38into another language, under the above conditions for modified versions.
39@end ifinfo
40
41@finalout
42@smallbook
43
44@titlepage
45@title GNU gprof
46@subtitle The @sc{gnu} Profiler
47@author Jay Fenlason and Richard Stallman
48
49@page
50
51This manual describes the @sc{gnu} profiler, @code{gprof}, and how you
52can use it to determine which parts of a program are taking most of the
53execution time. We assume that you know how to write, compile, and
54execute programs. @sc{gnu} @code{gprof} was written by Jay Fenlason.
55
56This manual was edited January 1993 by Jeffrey Osier
57and updated September 1997 by Brent Baccala.
58
59@vskip 0pt plus 1filll
60Copyright @copyright{} 1988, 1992, 1997, 1998 Free Software Foundation, Inc.
61
62Permission is granted to make and distribute verbatim copies of
63this manual provided the copyright notice and this permission notice
64are preserved on all copies.
65
66@ignore
67Permission is granted to process this file through TeX and print the
68results, provided the printed document carries copying permission
69notice identical to this one except for the removal of this paragraph
70(this paragraph not being relevant to the printed manual).
71
72@end ignore
73Permission is granted to copy and distribute modified versions of this
74manual under the conditions for verbatim copying, provided that the entire
75resulting derived work is distributed under the terms of a permission
76notice identical to this one.
77
78Permission is granted to copy and distribute translations of this manual
79into another language, under the same conditions as for modified versions.
80
81@end titlepage
82
83@ifinfo
84@node Top
85@top Profiling a Program: Where Does It Spend Its Time?
86
87This manual describes the @sc{gnu} profiler, @code{gprof}, and how you
88can use it to determine which parts of a program are taking most of the
89execution time. We assume that you know how to write, compile, and
90execute programs. @sc{gnu} @code{gprof} was written by Jay Fenlason.
91
92This manual was updated August 1997 by Brent Baccala.
93
94@menu
95* Introduction:: What profiling means, and why it is useful.
96
97* Compiling:: How to compile your program for profiling.
98* Executing:: Executing your program to generate profile data
99* Invoking:: How to run @code{gprof}, and its options
100
101* Output:: Interpreting @code{gprof}'s output
102
103* Inaccuracy:: Potential problems you should be aware of
104* How do I?:: Answers to common questions
105* Incompatibilities:: (between @sc{gnu} @code{gprof} and Unix @code{gprof}.)
106* Details:: Details of how profiling is done
107@end menu
108@end ifinfo
109
110@node Introduction
111@chapter Introduction to Profiling
112
113Profiling allows you to learn where your program spent its time and which
114functions called which other functions while it was executing. This
115information can show you which pieces of your program are slower than you
116expected, and might be candidates for rewriting to make your program
117execute faster. It can also tell you which functions are being called more
118or less often than you expected.  This may help you spot bugs that might
119otherwise have gone unnoticed.
120
121Since the profiler uses information collected during the actual execution
122of your program, it can be used on programs that are too large or too
123complex to analyze by reading the source. However, how your program is run
124will affect the information that shows up in the profile data. If you
125don't use some feature of your program while it is being profiled, no
126profile information will be generated for that feature.
127
128Profiling has several steps:
129
130@itemize @bullet
131@item
132You must compile and link your program with profiling enabled.
133@xref{Compiling}.
134
135@item
136You must execute your program to generate a profile data file.
137@xref{Executing}.
138
139@item
140You must run @code{gprof} to analyze the profile data.
141@xref{Invoking}.
142@end itemize
143
144The next three chapters explain these steps in greater detail.
145
146Several forms of output are available from the analysis.
147
148The @dfn{flat profile} shows how much time your program spent in each function,
149and how many times that function was called. If you simply want to know
150which functions burn most of the cycles, it is stated concisely here.
151@xref{Flat Profile}.
152
153The @dfn{call graph} shows, for each function, which functions called it, which
154other functions it called, and how many times. There is also an estimate
155of how much time was spent in the subroutines of each function. This can
156suggest places where you might try to eliminate function calls that use a
157lot of time. @xref{Call Graph}.
158
159The @dfn{annotated source} listing is a copy of the program's
160source code, labeled with the number of times each line of the
161program was executed. @xref{Annotated Source}.
162
163To better understand how profiling works, you may wish to read
164a description of its implementation.
165@xref{Implementation}.
166
167@node Compiling
168@chapter Compiling a Program for Profiling
169
170The first step in generating profile information for your program is
171to compile and link it with profiling enabled.
172
173To compile a source file for profiling, specify the @samp{-pg} option when
174you run the compiler. (This is in addition to the options you normally
175use.)
176
177To link the program for profiling, if you use a compiler such as @code{cc}
178to do the linking, simply specify @samp{-pg} in addition to your usual
179options. The same option, @samp{-pg}, alters either compilation or linking
180to do what is necessary for profiling. Here are examples:
181
182@example
183cc -g -c myprog.c utils.c -pg
184cc -o myprog myprog.o utils.o -pg
185@end example
186
187The @samp{-pg} option also works with a command that both compiles and links:
188
189@example
190cc -o myprog myprog.c utils.c -g -pg
191@end example
192
193If you run the linker @code{ld} directly instead of through a compiler
194such as @code{cc}, you may have to specify a profiling startup file
195@file{gcrt0.o} as the first input file instead of the usual startup
196file @file{crt0.o}. In addition, you would probably want to
197specify the profiling C library, @file{libc_p.a}, by writing
198@samp{-lc_p} instead of the usual @samp{-lc}. This is not absolutely
199necessary, but doing this gives you number-of-calls information for
200standard library functions such as @code{read} and @code{open}. For
201example:
202
203@example
204ld -o myprog /lib/gcrt0.o myprog.o utils.o -lc_p
205@end example
206
207If you compile only some of the modules of the program with @samp{-pg}, you
208can still profile the program, but you won't get complete information about
209the modules that were compiled without @samp{-pg}. The only information
210you get for the functions in those modules is the total time spent in them;
211there is no record of how many times they were called, or from where. This
212will not affect the flat profile (except that the @code{calls} field for
213the functions will be blank), but will greatly reduce the usefulness of the
214call graph.
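
For instance, continuing the @file{myprog} example above, a build in
which only @file{myprog.c} is compiled for profiling might look like
this:

@example
cc -g -c myprog.c -pg
cc -g -c utils.c
cc -o myprog myprog.o utils.o -pg
@end example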
215
216If you wish to perform line-by-line profiling,
217you will also need to specify the @samp{-g} option,
218instructing the compiler to insert debugging symbols into the program
219that match program addresses to source code lines.
220@xref{Line-by-line}.
221
222In addition to the @samp{-pg} and @samp{-g} options,
223you may also wish to specify the @samp{-a} option when compiling.
224This will instrument
225the program to perform basic-block counting. As the program runs,
226it will count how many times it executed each branch of each @samp{if}
227statement, each iteration of each @samp{do} loop, etc. This will
228enable @code{gprof} to construct an annotated source code
229listing showing how many times each line of code was executed.
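
For example, assuming a compiler that accepts @samp{-a}, a complete
build of the @file{myprog} example above, instrumented for both
profiling and basic-block counting, might look like this:

@example
cc -o myprog myprog.c utils.c -g -pg -a
@end example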
230
231@node Executing
232@chapter Executing the Program
233
234Once the program is compiled for profiling, you must run it in order to
235generate the information that @code{gprof} needs. Simply run the program
236as usual, using the normal arguments, file names, etc. The program should
237run normally, producing the same output as usual. It will, however, run
238somewhat slower than normal because of the time spent collecting and
239writing the profile data.
240
241The way you run the program---the arguments and input that you give
242it---may have a dramatic effect on what the profile information shows. The
243profile data will describe the parts of the program that were activated for
244the particular input you use. For example, if the first command you give
245to your program is to quit, the profile data will show the time used in
246initialization and in cleanup, but not much else.
247
248Your program will write the profile data into a file called @file{gmon.out}
249just before exiting. If there is already a file called @file{gmon.out},
250its contents are overwritten. There is currently no way to tell the
251program to write the profile data under a different name, but you can rename
252the file afterward if you are concerned that it may be overwritten.
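
For example, one simple way to keep the data from several runs is to
rename the file after each run; the data file and input names used here
are only illustrative:

@example
./myprog < run1.input
mv gmon.out gmon-run1.out
./myprog < run2.input
mv gmon.out gmon-run2.out
@end example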
253
254In order to write the @file{gmon.out} file properly, your program must exit
255normally: by returning from @code{main} or by calling @code{exit}. Calling
256the low-level function @code{_exit} does not write the profile data, and
257neither does abnormal termination due to an unhandled signal.
258
259The @file{gmon.out} file is written in the program's @emph{current working
260directory} at the time it exits. This means that if your program calls
261@code{chdir}, the @file{gmon.out} file will be left in the last directory
262your program @code{chdir}'d to. If you don't have permission to write in
263this directory, the file is not written, and you will get an error message.
264
265Older versions of the @sc{gnu} profiling library may also write a file
266called @file{bb.out}.  This file, if present, contains a human-readable
267listing of the basic-block execution counts. Unfortunately, the
268appearance of a human-readable @file{bb.out} means the basic-block
269counts didn't get written into @file{gmon.out}.
270The Perl script @code{bbconv.pl}, included with the @code{gprof}
271source distribution, will convert a @file{bb.out} file into
272a format readable by @code{gprof}.
273
274@node Invoking
275@chapter @code{gprof} Command Summary
276
277After you have a profile data file @file{gmon.out}, you can run @code{gprof}
278to interpret the information in it. The @code{gprof} program prints a
279flat profile and a call graph on standard output. Typically you would
280redirect the output of @code{gprof} into a file with @samp{>}.
281
282You run @code{gprof} like this:
283
284@smallexample
285gprof @var{options} [@var{executable-file} [@var{profile-data-files}@dots{}]] [> @var{outfile}]
286@end smallexample
287
288@noindent
289Here square brackets indicate optional arguments.
290
291If you omit the executable file name, the file @file{a.out} is used. If
292you give no profile data file name, the file @file{gmon.out} is used. If
293any file is not in the proper format, or if the profile data file does not
294appear to belong to the executable file, an error message is printed.
295
296You can give more than one profile data file by entering all their names
297after the executable file name; then the statistics in all the data files
298are summed together.
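
For example, assuming two data files saved from earlier runs under the
purely illustrative names @file{gmon-run1.out} and @file{gmon-run2.out},
the combined statistics could be produced like this:

@smallexample
gprof myprog gmon-run1.out gmon-run2.out > combined-profile
@end smallexample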
299
300The order of these options does not matter.
301
302@menu
303* Output Options:: Controlling @code{gprof}'s output style
304* Analysis Options:: Controlling how @code{gprof} analyses its data
305* Miscellaneous Options::
306* Deprecated Options::        Options you no longer need to use, but which
307 have been retained for compatibility
308* Symspecs:: Specifying functions to include or exclude
309@end menu
310
311@node Output Options,Analysis Options,,Invoking
312@section Output Options
313
314These options specify which of several output formats
315@code{gprof} should produce.
316
317Many of these options take an optional @dfn{symspec} to specify
318functions to be included or excluded. These options can be
319specified multiple times, with different symspecs, to include
320or exclude sets of symbols. @xref{Symspecs}.
321
322Specifying any of these options overrides the default (@samp{-p -q}),
323which prints a flat profile and call graph analysis
324for all functions.
325
326@table @code
327
328@item -A[@var{symspec}]
329@itemx --annotated-source[=@var{symspec}]
330The @samp{-A} option causes @code{gprof} to print annotated source code.
331If @var{symspec} is specified, print output only for matching symbols.
332@xref{Annotated Source}.
333
334@item -b
335@itemx --brief
336If the @samp{-b} option is given, @code{gprof} doesn't print the
337verbose blurbs that try to explain the meaning of all of the fields in
338the tables. This is useful if you intend to print out the output, or
339are tired of seeing the blurbs.
340
341@item -C[@var{symspec}]
342@itemx --exec-counts[=@var{symspec}]
343The @samp{-C} option causes @code{gprof} to
344print a tally of functions and the number of times each was called.
345If @var{symspec} is specified, print tally only for matching symbols.
346
347If the profile data file contains basic-block count records, specifying
348the @samp{-l} option, along with @samp{-C}, will cause basic-block
349execution counts to be tallied and displayed.
350
351@item -i
352@itemx --file-info
353The @samp{-i} option causes @code{gprof} to display summary information
354about the profile data file(s) and then exit. The number of histogram,
355call graph, and basic-block count records is displayed.
356
357@item -I @var{dirs}
358@itemx --directory-path=@var{dirs}
359The @samp{-I} option specifies a list of search directories in
360which to find source files. Environment variable @var{GPROF_PATH}
361can also be used to convey this information.
362Used mostly for annotated source output.
363
364@item -J[@var{symspec}]
365@itemx --no-annotated-source[=@var{symspec}]
366The @samp{-J} option causes @code{gprof} not to
367print annotated source code.
368If @var{symspec} is specified, @code{gprof} prints annotated source,
369but excludes matching symbols.
370
371@item -L
372@itemx --print-path
373Normally, source filenames are printed with the path
374component suppressed. The @samp{-L} option causes @code{gprof}
375to print the full pathname of
376source filenames, which is determined
377from symbolic debugging information in the image file
378and is relative to the directory in which the compiler
379was invoked.
380
381@item -p[@var{symspec}]
382@itemx --flat-profile[=@var{symspec}]
383The @samp{-p} option causes @code{gprof} to print a flat profile.
384If @var{symspec} is specified, print flat profile only for matching symbols.
385@xref{Flat Profile}.
386
387@item -P[@var{symspec}]
388@itemx --no-flat-profile[=@var{symspec}]
389The @samp{-P} option causes @code{gprof} to suppress printing a flat profile.
390If @var{symspec} is specified, @code{gprof} prints a flat profile,
391but excludes matching symbols.
392
393@item -q[@var{symspec}]
394@itemx --graph[=@var{symspec}]
395The @samp{-q} option causes @code{gprof} to print the call graph analysis.
396If @var{symspec} is specified, print call graph only for matching symbols
397and their children.
398@xref{Call Graph}.
399
400@item -Q[@var{symspec}]
401@itemx --no-graph[=@var{symspec}]
402The @samp{-Q} option causes @code{gprof} to suppress printing the
403call graph.
404If @var{symspec} is specified, @code{gprof} prints a call graph,
405but excludes matching symbols.
406
407@item -y
408@itemx --separate-files
409This option affects annotated source output only.
410Normally, gprof prints annotated source files
411to standard output.  If this option is specified,
412annotated source for a file named @file{path/filename}
413is generated in the file @file{filename-ann}.
414
415@item -Z[@var{symspec}]
416@itemx --no-exec-counts[=@var{symspec}]
417The @samp{-Z} option causes @code{gprof} not to
418print a tally of functions and the number of times each was called.
419If @var{symspec} is specified, print tally, but exclude matching symbols.
420
421@item --function-ordering
422The @samp{--function-ordering} option causes @code{gprof} to print a
423suggested function ordering for the program based on profiling data.
424This option suggests an ordering which may improve paging, TLB and
425cache behavior for the program on systems which support arbitrary
426ordering of functions in an executable.
427
428The exact details of how to force the linker to place functions
429in a particular order is system dependent and out of the scope of this
430manual.
431
432@item --file-ordering @var{map_file}
433The @samp{--file-ordering} option causes @code{gprof} to print a
434suggested .o link line ordering for the program based on profiling data.
435This option suggests an ordering which may improve paging, TLB and
436cache behavior for the program on systems which do not support arbitrary
437ordering of functions in an executable.
438
439Use of the @samp{-a} argument is highly recommended with this option.
440
441The @var{map_file} argument is a pathname to a file which provides
442function name to object file mappings. The format of the file is similar to
443the output of the program @code{nm}.
444
445@smallexample
446@group
447c-parse.o:00000000 T yyparse
448c-parse.o:00000004 C yyerrflag
449c-lang.o:00000000 T maybe_objc_method_name
450c-lang.o:00000000 T print_lang_statistics
451c-lang.o:00000000 T recognize_objc_keyword
452c-decl.o:00000000 T print_lang_identifier
453c-decl.o:00000000 T print_lang_type
454@dots{}
455
456@end group
457@end smallexample
458
459GNU @code{nm} @samp{--extern-only} @samp{--defined-only} @samp{-v} @samp{--print-file-name} can be used to create @var{map_file}.
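
Put together, the whole sequence might look something like this (the
program name and output file names are only illustrative):

@smallexample
nm --extern-only --defined-only -v --print-file-name myprog > map_file
gprof --file-ordering map_file myprog gmon.out > file-ordering-output
@end smallexample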
460
461@item -T
462@itemx --traditional
463The @samp{-T} option causes @code{gprof} to print its output in
464``traditional'' BSD style.
465
466@item -w @var{width}
467@itemx --width=@var{width}
468Sets width of output lines to @var{width}.
469Currently only used when printing the function index at the bottom
470of the call graph.
471
472@item -x
473@itemx --all-lines
474This option affects annotated source output only.
475By default, only the lines at the beginning of a basic-block
476are annotated. If this option is specified, every line in
477a basic-block is annotated by repeating the annotation for the
478first line. This behavior is similar to @code{tcov}'s @samp{-a}.
479
480@item --demangle
481@itemx --no-demangle
482These options control whether C++ symbol names should be demangled when
483printing output. The default is to demangle symbols. The
484@code{--no-demangle} option may be used to turn off demangling.
485
486@end table
487
488@node Analysis Options,Miscellaneous Options,Output Options,Invoking
489@section Analysis Options
490
491@table @code
492
493@item -a
494@itemx --no-static
495The @samp{-a} option causes @code{gprof} to suppress the printing of
496statically declared (private) functions. (These are functions whose
497names are not listed as global, and which are not visible outside the
498file/function/block where they were defined.) Time spent in these
499functions, calls to/from them, etc., will all be attributed to the
500function that was loaded directly before them in the executable file.
501@c This is compatible with Unix @code{gprof}, but a bad idea.
502This option affects both the flat profile and the call graph.
503
504@item -c
505@itemx --static-call-graph
506The @samp{-c} option causes the call graph of the program to be
507augmented by a heuristic which examines the text space of the object
508file and identifies function calls in the binary machine code.
509Since normal call graph records are only generated when functions are
510entered, this option identifies children that could have been called,
511but never were. Calls to functions that were not compiled with
512profiling enabled are also identified, but only if symbol table
513entries are present for them.
514Calls to dynamic library routines are typically @emph{not} found
515by this option.
516Parents or children identified via this heuristic
517are indicated in the call graph with call counts of @samp{0}.
518
519@item -D
520@itemx --ignore-non-functions
521The @samp{-D} option causes @code{gprof} to ignore symbols which
522are not known to be functions. This option will give more accurate
523profile data on systems where it is supported (Solaris and HPUX for
524example).
525
526@item -k @var{from}/@var{to}
527The @samp{-k} option allows you to delete from the call graph any arcs from
528symbols matching symspec @var{from} to those matching symspec @var{to}.
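
For instance, to drop any arcs from a function named @code{main} to a
function named @code{boring} (both names merely illustrative), one might
write:

@smallexample
gprof -k main/boring myprog gmon.out
@end smallexample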
529
530@item -l
531@itemx --line
532The @samp{-l} option enables line-by-line profiling, which causes
533histogram hits to be charged to individual source code lines,
534instead of functions.
535If the program was compiled with basic-block counting enabled,
536this option will also identify how many times each line of
537code was executed.
538While line-by-line profiling can help isolate where in a large function
539a program is spending its time, it also significantly increases
540the running time of @code{gprof}, and magnifies statistical
541inaccuracies.
542@xref{Sampling Error}.
543
544@item -m @var{num}
545@itemx --min-count=@var{num}
546This option affects execution count output only.
547Symbols that are executed less than @var{num} times are suppressed.
548
549@item -n[@var{symspec}]
550@itemx --time[=@var{symspec}]
551The @samp{-n} option causes @code{gprof}, in its call graph analysis,
552to only propagate times for symbols matching @var{symspec}.
553
554@item -N[@var{symspec}]
555@itemx --no-time[=@var{symspec}]
556The @samp{-N} option causes @code{gprof}, in its call graph analysis,
557not to propagate times for symbols matching @var{symspec}.
558
559@item -z
560@itemx --display-unused-functions
561If you give the @samp{-z} option, @code{gprof} will mention all
562functions in the flat profile, even those that were never called, and
563that had no time spent in them. This is useful in conjunction with the
564@samp{-c} option for discovering which routines were never called.
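
A possible invocation combining the two options might be (the output
file name is only illustrative):

@smallexample
gprof -z -c myprog gmon.out > all-functions-profile
@end smallexample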
565
566@end table
567
568@node Miscellaneous Options,Deprecated Options,Analysis Options,Invoking
569@section Miscellaneous Options
570
571@table @code
572
573@item -d[@var{num}]
574@itemx --debug[=@var{num}]
575The @samp{-d @var{num}} option specifies debugging options.
576If @var{num} is not specified, enable all debugging.
577@xref{Debugging}.
578
579@item -O@var{name}
580@itemx --file-format=@var{name}
581Selects the format of the profile data files. Recognized formats are
582@samp{auto} (the default), @samp{bsd}, @samp{4.4bsd}, @samp{magic}, and
583@samp{prof} (not yet supported).
584
585@item -s
586@itemx --sum
587The @samp{-s} option causes @code{gprof} to summarize the information
588in the profile data files it read in, and write out a profile data
589file called @file{gmon.sum}, which contains all the information from
590the profile data files that @code{gprof} read in. The file @file{gmon.sum}
591may be one of the specified input files; the effect of this is to
592merge the data in the other input files into @file{gmon.sum}.
593
594Eventually you can run @code{gprof} again without @samp{-s} to analyze the
595cumulative data in the file @file{gmon.sum}.
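
For example, a run that folds a fresh @file{gmon.out} into an existing
@file{gmon.sum}, followed by an analysis of the merged data, might look
like this (the output file name is only illustrative):

@smallexample
gprof -s myprog gmon.out gmon.sum
gprof myprog gmon.sum > cumulative-profile
@end smallexample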
596
597@item -v
598@itemx --version
599The @samp{-v} flag causes @code{gprof} to print the current version
600number, and then exit.
601
602@end table
603
604@node Deprecated Options,Symspecs,Miscellaneous Options,Invoking
605@section Deprecated Options
606
607These options have been replaced with newer versions that use symspecs.
608
609@table @code
610
611@item -e @var{function_name}
612The @samp{-e @var{function}} option tells @code{gprof} to not print
613information about the function @var{function_name} (and its
614children@dots{}) in the call graph. The function will still be listed
615as a child of any functions that call it, but its index number will be
616shown as @samp{[not printed]}. More than one @samp{-e} option may be
617given; only one @var{function_name} may be indicated with each @samp{-e}
618option.
619
620@item -E @var{function_name}
621The @code{-E @var{function}} option works like the @code{-e} option, but
622time spent in the function (and children who were not called from
623anywhere else), will not be used to compute the percentages-of-time for
624the call graph. More than one @samp{-E} option may be given; only one
625@var{function_name} may be indicated with each @samp{-E} option.
626
627@item -f @var{function_name}
628The @samp{-f @var{function}} option causes @code{gprof} to limit the
629call graph to the function @var{function_name} and its children (and
630their children@dots{}). More than one @samp{-f} option may be given;
631only one @var{function_name} may be indicated with each @samp{-f}
632option.
633
634@item -F @var{function_name}
635The @samp{-F @var{function}} option works like the @code{-f} option, but
636only time spent in the function and its children (and their
637children@dots{}) will be used to determine total-time and
638percentages-of-time for the call graph. More than one @samp{-F} option
639may be given; only one @var{function_name} may be indicated with each
640@samp{-F} option. The @samp{-F} option overrides the @samp{-E} option.
641
642@end table
643
644Note that only one function can be specified with each @code{-e},
645@code{-E}, @code{-f} or @code{-F} option. To specify more than one
646function, use multiple options. For example, this command:
647
648@example
649gprof -e boring -f foo -f bar myprogram > gprof.output
650@end example
651
652@noindent
653lists in the call graph all functions that were reached from either
654@code{foo} or @code{bar} and were not reachable from @code{boring}.
655
656@node Symspecs,,Deprecated Options,Invoking
657@section Symspecs
658
659Many of the output options allow functions to be included or excluded
660using @dfn{symspecs} (symbol specifications), which observe the
661following syntax:
662
663@example
664 filename_containing_a_dot
665| funcname_not_containing_a_dot
666| linenumber
667| ( [ any_filename ] `:' ( any_funcname | linenumber ) )
668@end example
669
670Here are some sample symspecs:
671
672@table @samp
673@item main.c
674Selects everything in file @file{main.c}---the
675dot in the string tells gprof to interpret
676the string as a filename, rather than as
677a function name. To select a file whose
678name does not contain a dot, a trailing colon
679should be specified. For example, @samp{odd:} is
680interpreted as the file named @file{odd}.
681
682@item main
683Selects all functions named @samp{main}.
684
685Note that there may be multiple instances of the same function name
686because some of the definitions may be local (i.e., static). Unless a
687function name is unique in a program, you must use the colon notation
688explained below to specify a function from a specific source file.
689
690Sometimes, function names contain dots.  In such cases, it is necessary
691to add a leading colon to the name. For example, @samp{:.mul} selects
692function @samp{.mul}.
693
694In some object file formats, symbols have a leading underscore. gprof
695will normally not print these underscores. However, you must use the
696underscore when you name a symbol in a symspec. You can use the
697@code{nm} program to see whether symbols have underscores for the object
698file format you are using.
699
700@item main.c:main
701Selects function @samp{main} in file @file{main.c}.
702
703@item main.c:134
704Selects line 134 in file @file{main.c}.
705@end table
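
As the option descriptions above suggest, a symspec is written as part
of the option that takes it.  For example, one might restrict the flat
profile to file @file{main.c}, or the call graph to functions named
@samp{main}, like this (program and data file names as in the earlier
examples):

@smallexample
gprof -pmain.c myprog gmon.out
gprof -qmain myprog gmon.out
@end smallexample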
706
707@node Output
708@chapter Interpreting @code{gprof}'s Output
709
710@code{gprof} can produce several different output styles, the
711most important of which are described below. The simplest output
712styles (file information, execution count, and function and file ordering)
713are not described here, but are documented with the respective options
714that trigger them.
715@xref{Output Options}.
716
717@menu
718* Flat Profile:: The flat profile shows how much time was spent
719 executing directly in each function.
720* Call Graph:: The call graph shows which functions called which
721 others, and how much time each function used
722 when its subroutine calls are included.
723* Line-by-line:: @code{gprof} can analyze individual source code lines
724* Annotated Source:: The annotated source listing displays source code
725 labeled with execution counts
726@end menu
727
728
729@node Flat Profile,Call Graph,,Output
730@section The Flat Profile
731@cindex flat profile
732
733The @dfn{flat profile} shows the total amount of time your program
734spent executing each function. Unless the @samp{-z} option is given,
735functions with no apparent time spent in them, and no apparent calls
736to them, are not mentioned. Note that if a function was not compiled
737for profiling, and didn't run long enough to show up on the program
738counter histogram, it will be indistinguishable from a function that
739was never called.
740
741This is part of a flat profile for a small program:
742
743@smallexample
744@group
745Flat profile:
746
747Each sample counts as 0.01 seconds.
748 % cumulative self self total
749 time seconds seconds calls ms/call ms/call name
750 33.34 0.02 0.02 7208 0.00 0.00 open
751 16.67 0.03 0.01 244 0.04 0.12 offtime
752 16.67 0.04 0.01 8 1.25 1.25 memccpy
753 16.67 0.05 0.01 7 1.43 1.43 write
754 16.67 0.06 0.01 mcount
755 0.00 0.06 0.00 236 0.00 0.00 tzset
756 0.00 0.06 0.00 192 0.00 0.00 tolower
757 0.00 0.06 0.00 47 0.00 0.00 strlen
758 0.00 0.06 0.00 45 0.00 0.00 strchr
759 0.00 0.06 0.00 1 0.00 50.00 main
760 0.00 0.06 0.00 1 0.00 0.00 memcpy
761 0.00 0.06 0.00 1 0.00 10.11 print
762 0.00 0.06 0.00 1 0.00 0.00 profil
763 0.00 0.06 0.00 1 0.00 50.00 report
764@dots{}
765@end group
766@end smallexample
767
768@noindent
769The functions are sorted first by decreasing run-time spent in them,
770then by decreasing number of calls, then alphabetically by name. The
771functions @samp{mcount} and @samp{profil} are part of the profiling
772apparatus and appear in every flat profile; their time gives a measure of
773the amount of overhead due to profiling.
774
775Just before the column headers, a statement appears indicating
776how much time each sample counted as.
777This @dfn{sampling period} estimates the margin of error in each of the time
778figures. A time figure that is not much larger than this is not
779reliable. In this example, each sample counted as 0.01 seconds,
780suggesting a 100 Hz sampling rate.
781The program's total execution time was 0.06
782seconds, as indicated by the @samp{cumulative seconds} field. Since
783each sample counted for 0.01 seconds, this means only six samples
784were taken during the run.  Two of the samples occurred while the
785program was in the @samp{open} function, as indicated by the
786@samp{self seconds} field.  The other four samples occurred
787one each in @samp{offtime}, @samp{memccpy}, @samp{write},
788and @samp{mcount}.
789Since only six samples were taken, none of these values can
790be regarded as particularly reliable.
791In another run,
792the @samp{self seconds} field for
793@samp{mcount} might well be @samp{0.00} or @samp{0.02}.
794@xref{Sampling Error}, for a complete discussion.
795
796The remaining functions in the listing (those whose
797@samp{self seconds} field is @samp{0.00}) didn't appear
798in the histogram samples at all. However, the call graph
799indicated that they were called, so therefore they are listed,
800sorted in decreasing order by the @samp{calls} field.
801Clearly some time was spent executing these functions,
802but the paucity of histogram samples prevents any
803determination of how much time each took.
804
805Here is what the fields in each line mean:
806
807@table @code
808@item % time
809This is the percentage of the total execution time your program spent
810in this function. These should all add up to 100%.
811
812@item cumulative seconds
813This is the cumulative total number of seconds the computer spent
814executing this function, plus the time spent in all the functions
815above this one in this table.
816
817@item self seconds
818This is the number of seconds accounted for by this function alone.
819The flat profile listing is sorted first by this number.
820
821@item calls
822This is the total number of times the function was called. If the
823function was never called, or the number of times it was called cannot
824be determined (probably because the function was not compiled with
825profiling enabled), the @dfn{calls} field is blank.
826
827@item self ms/call
828This represents the average number of milliseconds spent in this
829function per call, if this function is profiled. Otherwise, this field
830is blank for this function.
831
832@item total ms/call
833This represents the average number of milliseconds spent in this
834function and its descendants per call, if this function is profiled.
835Otherwise, this field is blank for this function.
836This is the only field in the flat profile that uses call graph analysis.
837
838@item name
839This is the name of the function. The flat profile is sorted by this
840field alphabetically after the @dfn{self seconds} and @dfn{calls}
841fields are sorted.
842@end table
843
844@node Call Graph,Line-by-line,Flat Profile,Output
845@section The Call Graph
846@cindex call graph
847
848The @dfn{call graph} shows how much time was spent in each function
849and its children. From this information, you can find functions that,
850while they themselves may not have used much time, called other
851functions that did use unusual amounts of time.
852
853Here is a sample call graph from a small program.  This call graph came
854from the same @code{gprof} run as the flat profile example in the
855previous section.
856
857@smallexample
858@group
859granularity: each sample hit covers 2 byte(s) for 20.00% of 0.05 seconds
860
861index % time self children called name
862 <spontaneous>
863[1] 100.0 0.00 0.05 start [1]
864 0.00 0.05 1/1 main [2]
865 0.00 0.00 1/2 on_exit [28]
866 0.00 0.00 1/1 exit [59]
867-----------------------------------------------
868 0.00 0.05 1/1 start [1]
869[2] 100.0 0.00 0.05 1 main [2]
870 0.00 0.05 1/1 report [3]
871-----------------------------------------------
872 0.00 0.05 1/1 main [2]
873[3] 100.0 0.00 0.05 1 report [3]
874 0.00 0.03 8/8 timelocal [6]
875 0.00 0.01 1/1 print [9]
876 0.00 0.01 9/9 fgets [12]
877 0.00 0.00 12/34 strncmp <cycle 1> [40]
878 0.00 0.00 8/8 lookup [20]
879 0.00 0.00 1/1 fopen [21]
880 0.00 0.00 8/8 chewtime [24]
881 0.00 0.00 8/16 skipspace [44]
882-----------------------------------------------
883[4] 59.8 0.01 0.02 8+472 <cycle 2 as a whole> [4]
884 0.01 0.02 244+260 offtime <cycle 2> [7]
885 0.00 0.00 236+1 tzset <cycle 2> [26]
886-----------------------------------------------
887@end group
888@end smallexample
889
890The lines full of dashes divide this table into @dfn{entries}, one for each
891function. Each entry has one or more lines.
892
893In each entry, the primary line is the one that starts with an index number
894in square brackets. The end of this line says which function the entry is
895for. The preceding lines in the entry describe the callers of this
896function and the following lines describe its subroutines (also called
897@dfn{children} when we speak of the call graph).
898
899The entries are sorted by time spent in the function and its subroutines.
900
901The internal profiling function @code{mcount} (@pxref{Flat Profile})
902is never mentioned in the call graph.
903
904@menu
905* Primary:: Details of the primary line's contents.
906* Callers:: Details of caller-lines' contents.
907* Subroutines:: Details of subroutine-lines' contents.
908* Cycles:: When there are cycles of recursion,
909 such as @code{a} calls @code{b} calls @code{a}@dots{}
910@end menu
911
912@node Primary
913@subsection The Primary Line
914
915The @dfn{primary line} in a call graph entry is the line that
916describes the function which the entry is about and gives the overall
917statistics for this function.
918
919For reference, we repeat the primary line from the entry for function
920@code{report} in our main example, together with the heading line that
921shows the names of the fields:
922
923@smallexample
924@group
925index % time self children called name
926@dots{}
927[3] 100.0 0.00 0.05 1 report [3]
928@end group
929@end smallexample
930
931Here is what the fields in the primary line mean:
932
933@table @code
934@item index
935Entries are numbered with consecutive integers. Each function
936therefore has an index number, which appears at the beginning of its
937primary line.
938
939Each cross-reference to a function, as a caller or subroutine of
940another, gives its index number as well as its name. The index number
941guides you if you wish to look for the entry for that function.
942
943@item % time
944This is the percentage of the total time that was spent in this
945function, including time spent in subroutines called from this
946function.
947
948The time spent in this function is counted again for the callers of
949this function. Therefore, adding up these percentages is meaningless.
950
951@item self
952This is the total amount of time spent in this function. This
953should be identical to the number printed in the @code{seconds} field
954for this function in the flat profile.
955
956@item children
957This is the total amount of time spent in the subroutine calls made by
958this function. This should be equal to the sum of all the @code{self}
959and @code{children} entries of the children listed directly below this
960function.
961
962@item called
963This is the number of times the function was called.
964
965If the function called itself recursively, there are two numbers,
966separated by a @samp{+}. The first number counts non-recursive calls,
967and the second counts recursive calls.
968
969In the example above, the function @code{report} was called once from
970@code{main}.
971
972@item name
973This is the name of the current function. The index number is
974repeated after it.
975
976If the function is part of a cycle of recursion, the cycle number is
977printed between the function's name and the index number
978(@pxref{Cycles}). For example, if function @code{gnurr} is part of
979cycle number one, and has index number twelve, its primary line would
980end like this:
981
982@example
983gnurr <cycle 1> [12]
984@end example
985@end table
986
987@node Callers, Subroutines, Primary, Call Graph
988@subsection Lines for a Function's Callers
989
990A function's entry has a line for each function it was called by.
991These lines' fields correspond to the fields of the primary line, but
992their meanings are different because of the difference in context.
993
994For reference, we repeat two lines from the entry for the function
995@code{report}, the primary line and one caller-line preceding it, together
996with the heading line that shows the names of the fields:
997
998@smallexample
999index % time self children called name
1000@dots{}
1001 0.00 0.05 1/1 main [2]
1002[3] 100.0 0.00 0.05 1 report [3]
1003@end smallexample
1004
1005Here are the meanings of the fields in the caller-line for @code{report}
1006called from @code{main}:
1007
1008@table @code
1009@item self
1010An estimate of the amount of time spent in @code{report} itself when it was
1011called from @code{main}.
1012
1013@item children
1014An estimate of the amount of time spent in subroutines of @code{report}
1015when @code{report} was called from @code{main}.
1016
1017The sum of the @code{self} and @code{children} fields is an estimate
1018of the amount of time spent within calls to @code{report} from @code{main}.
1019
1020@item called
1021Two numbers: the number of times @code{report} was called from @code{main},
1022followed by the total number of nonrecursive calls to @code{report} from
1023all its callers.
1024
1025@item name and index number
1026The name of the caller of @code{report} to which this line applies,
1027followed by the caller's index number.
1028
1029Not all functions have entries in the call graph; some
1030options to @code{gprof} request the omission of certain functions.
1031When a caller has no entry of its own, it still has caller-lines
1032in the entries of the functions it calls.
1033
1034If the caller is part of a recursion cycle, the cycle number is
1035printed between the name and the index number.
1036@end table
1037
1038If the identity of the callers of a function cannot be determined, a
1039dummy caller-line is printed which has @samp{<spontaneous>} as the
1040``caller's name'' and all other fields blank. This can happen for
1041signal handlers.
1042@c What if some calls have determinable callers' names but not all?
1043@c FIXME - still relevant?
1044
1045@node Subroutines, Cycles, Callers, Call Graph
1046@subsection Lines for a Function's Subroutines
1047
1048A function's entry has a line for each of its subroutines---in other
1049words, a line for each other function that it called. These lines'
1050fields correspond to the fields of the primary line, but their meanings
1051are different because of the difference in context.
1052
1053For reference, we repeat two lines from the entry for the function
1054@code{main}, the primary line and a line for a subroutine, together
1055with the heading line that shows the names of the fields:
1056
1057@smallexample
1058index % time self children called name
1059@dots{}
1060[2] 100.0 0.00 0.05 1 main [2]
1061 0.00 0.05 1/1 report [3]
1062@end smallexample
1063
1064Here are the meanings of the fields in the subroutine-line for @code{main}
1065calling @code{report}:
1066
1067@table @code
1068@item self
1069An estimate of the amount of time spent directly within @code{report}
1070when @code{report} was called from @code{main}.
1071
1072@item children
1073An estimate of the amount of time spent in subroutines of @code{report}
1074when @code{report} was called from @code{main}.
1075
1076The sum of the @code{self} and @code{children} fields is an estimate
1077of the total time spent in calls to @code{report} from @code{main}.
1078
1079@item called
1080Two numbers, the number of calls to @code{report} from @code{main}
1081followed by the total number of nonrecursive calls to @code{report}.
1082This ratio is used to determine how much of @code{report}'s @code{self}
1083and @code{children} time gets credited to @code{main}.
1084@xref{Assumptions}.
1085
1086@item name
1087The name of the subroutine of @code{main} to which this line applies,
1088followed by the subroutine's index number.
1089
1090If the caller is part of a recursion cycle, the cycle number is
1091printed between the name and the index number.
1092@end table
1093
1094@node Cycles,, Subroutines, Call Graph
1095@subsection How Mutually Recursive Functions Are Described
1096@cindex cycle
1097@cindex recursion cycle
1098
1099The graph may be complicated by the presence of @dfn{cycles of
1100recursion} in the call graph. A cycle exists if a function calls
1101another function that (directly or indirectly) calls (or appears to
1102call) the original function. For example: if @code{a} calls @code{b},
1103and @code{b} calls @code{a}, then @code{a} and @code{b} form a cycle.
1104
1105Whenever there are call paths both ways between a pair of functions, they
1106belong to the same cycle. If @code{a} and @code{b} call each other and
1107@code{b} and @code{c} call each other, all three make one cycle. Note that
1108even if @code{b} only calls @code{a} when it was not called from @code{a},
1109@code{gprof} cannot determine this, so @code{a} and @code{b} are still
1110considered a cycle.
1111
1112The cycles are numbered with consecutive integers. When a function
1113belongs to a cycle, each time the function name appears in the call graph
1114it is followed by @samp{<cycle @var{number}>}.
1115
1116The reason cycles matter is that they make the time values in the call
1117graph paradoxical. The ``time spent in children'' of @code{a} should
1118include the time spent in its subroutine @code{b} and in @code{b}'s
1119subroutines---but one of @code{b}'s subroutines is @code{a}! How much of
1120@code{a}'s time should be included in the children of @code{a}, when
1121@code{a} is indirectly recursive?
1122
1123The way @code{gprof} resolves this paradox is by creating a single entry
1124for the cycle as a whole. The primary line of this entry describes the
1125total time spent directly in the functions of the cycle. The
1126``subroutines'' of the cycle are the individual functions of the cycle, and
1127all other functions that were called directly by them. The ``callers'' of
1128the cycle are the functions, outside the cycle, that called functions in
1129the cycle.
1130
1131Here is an example portion of a call graph which shows a cycle containing
1132functions @code{a} and @code{b}. The cycle was entered by a call to
1133@code{a} from @code{main}; both @code{a} and @code{b} called @code{c}.
1134
1135@smallexample
1136index % time self children called name
1137----------------------------------------
1138 1.77 0 1/1 main [2]
1139[3] 91.71 1.77 0 1+5 <cycle 1 as a whole> [3]
1140 1.02 0 3 b <cycle 1> [4]
1141 0.75 0 2 a <cycle 1> [5]
1142----------------------------------------
1143 3 a <cycle 1> [5]
1144[4] 52.85 1.02 0 0 b <cycle 1> [4]
1145 2 a <cycle 1> [5]
1146 0 0 3/6 c [6]
1147----------------------------------------
1148 1.77 0 1/1 main [2]
1149 2 b <cycle 1> [4]
1150[5] 38.86 0.75 0 1 a <cycle 1> [5]
1151 3 b <cycle 1> [4]
1152 0 0 3/6 c [6]
1153----------------------------------------
1154@end smallexample
1155
1156@noindent
1157(The entire call graph for this program contains in addition an entry for
1158@code{main}, which calls @code{a}, and an entry for @code{c}, with callers
1159@code{a} and @code{b}.)
1160
1161@smallexample
1162index % time self children called name
1163 <spontaneous>
1164[1] 100.00 0 1.93 0 start [1]
1165 0.16 1.77 1/1 main [2]
1166----------------------------------------
1167 0.16 1.77 1/1 start [1]
1168[2] 100.00 0.16 1.77 1 main [2]
1169 1.77 0 1/1 a <cycle 1> [5]
1170----------------------------------------
1171 1.77 0 1/1 main [2]
1172[3] 91.71 1.77 0 1+5 <cycle 1 as a whole> [3]
1173 1.02 0 3 b <cycle 1> [4]
1174 0.75 0 2 a <cycle 1> [5]
1175 0 0 6/6 c [6]
1176----------------------------------------
1177 3 a <cycle 1> [5]
1178[4] 52.85 1.02 0 0 b <cycle 1> [4]
1179 2 a <cycle 1> [5]
1180 0 0 3/6 c [6]
1181----------------------------------------
1182 1.77 0 1/1 main [2]
1183 2 b <cycle 1> [4]
1184[5] 38.86 0.75 0 1 a <cycle 1> [5]
1185 3 b <cycle 1> [4]
1186 0 0 3/6 c [6]
1187----------------------------------------
1188 0 0 3/6 b <cycle 1> [4]
1189 0 0 3/6 a <cycle 1> [5]
1190[6] 0.00 0 0 6 c [6]
1191----------------------------------------
1192@end smallexample
1193
1194The @code{self} field of the cycle's primary line is the total time
1195spent in all the functions of the cycle. It equals the sum of the
1196@code{self} fields for the individual functions in the cycle, found
1197in the subroutine lines for these functions in the cycle's entry.
1198
1199The @code{children} fields of the cycle's primary line and subroutine lines
1200count only subroutines outside the cycle. Even though @code{a} calls
1201@code{b}, the time spent in those calls to @code{b} is not counted in
1202@code{a}'s @code{children} time. Thus, we do not encounter the problem of
1203what to do when the time in those calls to @code{b} includes indirect
1204recursive calls back to @code{a}.
1205
1206The @code{children} field of a caller-line in the cycle's entry estimates
1207the amount of time spent @emph{in the whole cycle}, and its other
1208subroutines, on the times when that caller called a function in the cycle.
1209
1210The @code{calls} field in the primary line for the cycle has two numbers:
1211first, the number of times functions in the cycle were called by functions
1212outside the cycle; second, the number of times they were called by
1213functions in the cycle (including times when a function in the cycle calls
1214itself). This is a generalization of the usual split into nonrecursive and
1215recursive calls.
1216
1217The @code{calls} field of a subroutine-line for a cycle member in the
1218cycle's entry says how many times that function was called from functions in
1219the cycle. The total of all these is the second number in the primary line's
1220@code{calls} field.
1221
1222In the individual entry for a function in a cycle, the other functions in
1223the same cycle can appear as subroutines and as callers. These lines show
1224how many times each function in the cycle called or was called from each other
1225function in the cycle. The @code{self} and @code{children} fields in these
1226lines are blank because of the difficulty of defining meanings for them
1227when recursion is going on.
1228
1229@node Line-by-line,Annotated Source,Call Graph,Output
1230@section Line-by-line Profiling
1231
1232@code{gprof}'s @samp{-l} option causes the program to perform
1233@dfn{line-by-line} profiling. In this mode, histogram
1234samples are assigned not to functions, but to individual
1235lines of source code. The program usually must be compiled
1236with a @samp{-g} option, in addition to @samp{-pg}, in order
1237to generate debugging symbols for tracking source code lines.
1238
1239The flat profile is the most useful output table
1240in line-by-line mode.
1241The call graph isn't as useful as normal, since
1242the current version of @code{gprof} does not propagate
1243call graph arcs from source code lines to the enclosing function.
1244The call graph does, however, show each line of code
1245that called each function, along with a count.
1246
1247Here is a section of @code{gprof}'s output, without line-by-line profiling.
1248Note that @code{ct_init} accounted for four histogram hits, and
124913327 calls to @code{init_block}.
1250
1251@smallexample
1252Flat profile:
1253
1254Each sample counts as 0.01 seconds.
1255 % cumulative self self total
1256 time seconds seconds calls us/call us/call name
1257 30.77 0.13 0.04 6335 6.31 6.31 ct_init
1258
1259
1260 Call graph (explanation follows)
1261
1262
1263granularity: each sample hit covers 4 byte(s) for 7.69% of 0.13 seconds
1264
1265index % time self children called name
1266
1267 0.00 0.00 1/13496 name_too_long
1268 0.00 0.00 40/13496 deflate
1269 0.00 0.00 128/13496 deflate_fast
1270 0.00 0.00 13327/13496 ct_init
1271[7] 0.0 0.00 0.00 13496 init_block
1272
1273@end smallexample
1274
1275Now let's look at some of @code{gprof}'s output from the same program run,
1276this time with line-by-line profiling enabled. Note that @code{ct_init}'s
1277four histogram hits are broken down into four lines of source code - one hit
1278occurred on each of lines 349, 351, 382 and 385.  In the call graph,
1279note how
1280@code{ct_init}'s 13327 calls to @code{init_block} are broken down
1281into one call from line 396, 3071 calls from line 384, 3730 calls
1282from line 385, and 6525 calls from line 387.
1283
1284@smallexample
1285Flat profile:
1286
1287Each sample counts as 0.01 seconds.
1288 % cumulative self
1289 time seconds seconds calls name
1290 7.69 0.10 0.01 ct_init (trees.c:349)
1291 7.69 0.11 0.01 ct_init (trees.c:351)
1292 7.69 0.12 0.01 ct_init (trees.c:382)
1293 7.69 0.13 0.01 ct_init (trees.c:385)
1294
1295
1296 Call graph (explanation follows)
1297
1298
1299granularity: each sample hit covers 4 byte(s) for 7.69% of 0.13 seconds
1300
1301 % time self children called name
1302
1303 0.00 0.00 1/13496 name_too_long (gzip.c:1440)
1304 0.00 0.00 1/13496 deflate (deflate.c:763)
1305 0.00 0.00 1/13496 ct_init (trees.c:396)
1306 0.00 0.00 2/13496 deflate (deflate.c:727)
1307 0.00 0.00 4/13496 deflate (deflate.c:686)
1308 0.00 0.00 5/13496 deflate (deflate.c:675)
1309 0.00 0.00 12/13496 deflate (deflate.c:679)
1310 0.00 0.00 16/13496 deflate (deflate.c:730)
1311 0.00 0.00 128/13496 deflate_fast (deflate.c:654)
1312 0.00 0.00 3071/13496 ct_init (trees.c:384)
1313 0.00 0.00 3730/13496 ct_init (trees.c:385)
1314 0.00 0.00 6525/13496 ct_init (trees.c:387)
1315[6] 0.0 0.00 0.00 13496 init_block (trees.c:408)
1316
1317@end smallexample
1318
1319
1320@node Annotated Source,,Line-by-line,Output
1321@section The Annotated Source Listing
1322
1323@code{gprof}'s @samp{-A} option triggers an annotated source listing,
1324which lists the program's source code, each function labeled with the
1325number of times it was called. You may also need to specify the
1326@samp{-I} option, if @code{gprof} can't find the source code files.
1327
1328Compiling with @samp{gcc @dots{} -g -pg -a} augments your program
1329with basic-block counting code, in addition to function counting code.
1330This enables @code{gprof} to determine how many times each line
1331of code was executed.
1332For example, consider the following function, taken from gzip,
1333with line numbers added:
1334
1335@smallexample
1336 1 ulg updcrc(s, n)
1337 2 uch *s;
1338 3 unsigned n;
1339 4 @{
1340 5 register ulg c;
1341 6
1342 7 static ulg crc = (ulg)0xffffffffL;
1343 8
1344 9 if (s == NULL) @{
134510 c = 0xffffffffL;
134611 @} else @{
134712 c = crc;
134813 if (n) do @{
134914 c = crc_32_tab[...];
135015 @} while (--n);
135116 @}
135217 crc = c;
135318 return c ^ 0xffffffffL;
135419 @}
1355
1356@end smallexample
1357
1358@code{updcrc} has at least five basic-blocks.
1359One is the function itself. The
1360@code{if} statement on line 9 generates two more basic-blocks, one
1361for each branch of the @code{if}. A fourth basic-block results from
1362the @code{if} on line 13, and the contents of the @code{do} loop form
1363the fifth basic-block. The compiler may also generate additional
1364basic-blocks to handle various special cases.
1365
1366A program augmented for basic-block counting can be analyzed with
1367@code{gprof -l -A}. I also suggest use of the @samp{-x} option,
1368which ensures that each line of code is labeled at least once.
1369Here is @code{updcrc}'s
1370annotated source listing for a sample @code{gzip} run:
1371
1372@smallexample
1373 ulg updcrc(s, n)
1374 uch *s;
1375 unsigned n;
1376 2 ->@{
1377 register ulg c;
1378
1379 static ulg crc = (ulg)0xffffffffL;
1380
1381 2 -> if (s == NULL) @{
1382 1 -> c = 0xffffffffL;
1383 1 -> @} else @{
1384 1 -> c = crc;
1385 1 -> if (n) do @{
1386 26312 -> c = crc_32_tab[...];
138726312,1,26311 -> @} while (--n);
1388 @}
1389 2 -> crc = c;
1390 2 -> return c ^ 0xffffffffL;
1391 2 ->@}
1392@end smallexample
1393
1394In this example, the function was called twice, passing once through
1395each branch of the @code{if} statement. The body of the @code{do}
1396loop was executed a total of 26312 times. Note how the @code{while}
1397statement is annotated. It began execution 26312 times, once for
1398each iteration through the loop. One of those times (the last time)
1399it exited, while it branched back to the beginning of the loop 26311 times.
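
To summarize, a complete session producing such a listing might look
roughly like the following; the file names are only meant as an
illustration:

@smallexample
gcc -g -pg -a -o gzip gzip.c trees.c @dots{}
./gzip @var{some-input-file}
gprof -l -A -x gzip gmon.out > gzip-annotated
@end smallexample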
1400
1401@node Inaccuracy
1402@chapter Inaccuracy of @code{gprof} Output
1403
1404@menu
1405* Sampling Error:: Statistical margins of error
1406* Assumptions:: Estimating children times
1407@end menu
1408
1409@node Sampling Error,Assumptions,,Inaccuracy
1410@section Statistical Sampling Error
1411
1412The run-time figures that @code{gprof} gives you are based on a sampling
1413process, so they are subject to statistical inaccuracy. If a function runs
1414only a small amount of time, so that on the average the sampling process
1415ought to catch that function in the act only once, there is a pretty good
1416chance it will actually find that function zero times, or twice.
1417
1418By contrast, the number-of-calls and basic-block figures
1419are derived by counting, not
1420sampling. They are completely accurate and will not vary from run to run
1421if your program is deterministic.
1422
1423The @dfn{sampling period} that is printed at the beginning of the flat
1424profile says how often samples are taken. The rule of thumb is that a
1425run-time figure is accurate if it is considerably bigger than the sampling
1426period.
1427
1428The actual amount of error can be predicted.
1429For @var{n} samples, the @emph{expected} error
1430is the square-root of @var{n}. For example,
1431if the sampling period is 0.01 seconds and @code{foo}'s run-time is 1 second,
1432@var{n} is 100 samples (1 second/0.01 seconds), sqrt(@var{n}) is 10 samples, so
1433the expected error in @code{foo}'s run-time is 0.1 seconds (10*0.01 seconds),
1434or ten percent of the observed value.
1435Again, if the sampling period is 0.01 seconds and @code{bar}'s run-time is
1436100 seconds, @var{n} is 10000 samples, sqrt(@var{n}) is 100 samples, so
1437the expected error in @code{bar}'s run-time is 1 second,
1438or one percent of the observed value.
1439It is likely to
1440vary this much @emph{on the average} from one profiling run to the next.
1441(@emph{Sometimes} it will vary more.)
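
As a worked sketch of that arithmetic (purely illustrative, and not
part of @code{gprof} itself), the expected error follows directly from
the run time and the sampling period:

@smallexample
/* Expected sampling error: sqrt(number of samples) * sampling period.
   For example, 1 second of run time at a 0.01 second period
   gives an expected error of 0.1 seconds.  */
#include <math.h>

double
expected_error (double run_time, double sampling_period)
@{
  double n = run_time / sampling_period;   /* number of samples taken */
  return sqrt (n) * sampling_period;
@}
@end smallexample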
1442
1443This does not mean that a small run-time figure is devoid of information.
1444If the program's @emph{total} run-time is large, a small run-time for one
1445function does tell you that that function used an insignificant fraction of
1446the whole program's time. Usually this means it is not worth optimizing.
1447
1448One way to get more accuracy is to give your program more (but similar)
1449input data so it will take longer. Another way is to combine the data from
1450several runs, using the @samp{-s} option of @code{gprof}. Here is how:
1451
1452@enumerate
1453@item
1454Run your program once.
1455
1456@item
1457Issue the command @samp{mv gmon.out gmon.sum}.
1458
1459@item
1460Run your program again, the same as before.
1461
1462@item
1463Merge the new data in @file{gmon.out} into @file{gmon.sum} with this command:
1464
1465@example
1466gprof -s @var{executable-file} gmon.out gmon.sum
1467@end example
1468
1469@item
1470Repeat the last two steps as often as you wish.
1471
1472@item
1473Analyze the cumulative data using this command:
1474
1475@example
1476gprof @var{executable-file} gmon.sum > @var{output-file}
1477@end example
1478@end enumerate
1479
1480@node Assumptions,,Sampling Error,Inaccuracy
1481@section Estimating @code{children} Times
1482
1483Some of the figures in the call graph are estimates---for example, the
@code{children} time values and all the time figures in caller and
1485subroutine lines.
1486
1487There is no direct information about these measurements in the profile
1488data itself. Instead, @code{gprof} estimates them by making an assumption
1489about your program that might or might not be true.
1490
1491The assumption made is that the average time spent in each call to any
1492function @code{foo} is not correlated with who called @code{foo}. If
1493@code{foo} used 5 seconds in all, and 2/5 of the calls to @code{foo} came
1494from @code{a}, then @code{foo} contributes 2 seconds to @code{a}'s
1495@code{children} time, by assumption.
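
That proportional charge can be written out as follows; this is only
an illustration of the assumption, with made-up names, not
@code{gprof}'s own code:

@smallexample
/* Charge a caller with a share of foo's total time proportional to
   its share of the calls to foo, as the assumption above dictates.
   For example, 5 seconds * 2/5 of the calls = 2 seconds.  */
double
children_time_charged (double foo_total_time,
                       unsigned long calls_from_caller,
                       unsigned long total_calls_to_foo)
@{
  return foo_total_time * (double) calls_from_caller
                        / (double) total_calls_to_foo;
@}
@end smallexample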
1496
1497This assumption is usually true enough, but for some programs it is far
1498from true. Suppose that @code{foo} returns very quickly when its argument
1499is zero; suppose that @code{a} always passes zero as an argument, while
1500other callers of @code{foo} pass other arguments. In this program, all the
1501time spent in @code{foo} is in the calls from callers other than @code{a}.
1502But @code{gprof} has no way of knowing this; it will blindly and
1503incorrectly charge 2 seconds of time in @code{foo} to the children of
1504@code{a}.
1505
1506@c FIXME - has this been fixed?
1507We hope some day to put more complete data into @file{gmon.out}, so that
1508this assumption is no longer needed, if we can figure out how. For the
1509nonce, the estimated figures are usually more useful than misleading.
1510
1511@node How do I?
1512@chapter Answers to Common Questions
1513
1514@table @asis
1515@item How do I find which lines in my program were executed the most times?
1516
1517Compile your program with basic-block counting enabled, run it, then
1518use the following pipeline:
1519
1520@example
1521gprof -l -C @var{objfile} | sort -k 3 -n -r
1522@end example
1523
1524This listing will show you the lines in your code executed most often,
1525but not necessarily those that consumed the most time.
1526
1527@item How do I find which lines in my program called a particular function?
1528
Use @code{gprof -l} and look up the function in the call graph.
1530The callers will be broken down by function and line number.
1531
1532@item How do I analyze a program that runs for less than a second?
1533
1534Try using a shell script like this one:
1535
1536@example
1537for i in `seq 1 100`; do
1538 fastprog
1539 mv gmon.out gmon.out.$i
1540done
1541
1542gprof -s fastprog gmon.out.*
1543
1544gprof fastprog gmon.sum
1545@end example
1546
1547If your program is completely deterministic, all the call counts
1548will be simple multiples of 100 (i.e. a function called once in
1549each run will appear with a call count of 100).
1550
1551@end table
1552
1553@node Incompatibilities
1554@chapter Incompatibilities with Unix @code{gprof}
1555
1556@sc{gnu} @code{gprof} and Berkeley Unix @code{gprof} use the same data
1557file @file{gmon.out}, and provide essentially the same information. But
1558there are a few differences.
1559
1560@itemize @bullet
1561@item
1562@sc{gnu} @code{gprof} uses a new, generalized file format with support
1563for basic-block execution counts and non-realtime histograms. A magic
cookie and version number allow @code{gprof} to easily identify
new-style files. Old BSD-style files can still be read.
1566@xref{File Format}.
1567
1568@item
1569For a recursive function, Unix @code{gprof} lists the function as a
1570parent and as a child, with a @code{calls} field that lists the number
1571of recursive calls. @sc{gnu} @code{gprof} omits these lines and puts
1572the number of recursive calls in the primary line.
1573
1574@item
1575When a function is suppressed from the call graph with @samp{-e}, @sc{gnu}
1576@code{gprof} still lists it as a subroutine of functions that call it.
1577
1578@item
@sc{gnu} @code{gprof} accepts the @samp{-k} option with its argument
1580in the form @samp{from/to}, instead of @samp{from to}.
1581
1582@item
1583In the annotated source listing,
1584if there are multiple basic blocks on the same line,
@sc{gnu} @code{gprof} prints all of their counts, separated by commas.
1586
1587@ignore - it does this now
1588@item
1589The function names printed in @sc{gnu} @code{gprof} output do not include
1590the leading underscores that are added internally to the front of all
1591C identifiers on many operating systems.
1592@end ignore
1593
1594@item
1595The blurbs, field widths, and output formats are different. @sc{gnu}
1596@code{gprof} prints blurbs after the tables, so that you can see the
1597tables without skipping the blurbs.
1598@end itemize
1599
1600@node Details
1601@chapter Details of Profiling
1602
1603@menu
* Implementation::        How a program collects profiling information
1605* File Format:: Format of @samp{gmon.out} files
1606* Internals:: @code{gprof}'s internal operation
1607* Debugging:: Using @code{gprof}'s @samp{-d} option
1608@end menu
1609
1610@node Implementation,File Format,,Details
1611@section Implementation of Profiling
1612
1613Profiling works by changing how every function in your program is compiled
1614so that when it is called, it will stash away some information about where
1615it was called from. From this, the profiler can figure out what function
1616called it, and can count how many times it was called. This change is made
1617by the compiler when your program is compiled with the @samp{-pg} option,
1618which causes every function to call @code{mcount}
1619(or @code{_mcount}, or @code{__mcount}, depending on the OS and compiler)
1620as one of its first operations.
1621
1622The @code{mcount} routine, included in the profiling library,
1623is responsible for recording in an in-memory call graph table
1624both its parent routine (the child) and its parent's parent. This is
1625typically done by examining the stack frame to find both
1626the address of the child, and the return address in the original parent.
Since this is a very machine-dependent operation, @code{mcount}
1628itself is typically a short assembly-language stub routine
1629that extracts the required
1630information, and then calls @code{__mcount_internal}
1631(a normal C function) with two arguments - @code{frompc} and @code{selfpc}.
1632@code{__mcount_internal} is responsible for maintaining
1633the in-memory call graph, which records @code{frompc}, @code{selfpc},
and the number of times each of these call arcs was traversed.
1635
1636GCC Version 2 provides a magical function (@code{__builtin_return_address}),
1637which allows a generic @code{mcount} function to extract the
1638required information from the stack frame. However, on some
1639architectures, most notably the SPARC, using this builtin can be
1640very computationally expensive, and an assembly language version
1641of @code{mcount} is used for performance reasons.
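
As a rough illustration (this is not the actual library code, the
prototype shown for @code{__mcount_internal} is an assumption, and
frame level 1 is not available on every architecture), a generic
@code{mcount} written in C might look like this:

@smallexample
/* Hypothetical generic mcount using GCC's __builtin_return_address.
   Real versions are usually short assembly stubs.  */
extern void __mcount_internal (unsigned long frompc, unsigned long selfpc);

void
mcount (void)
@{
  /* Return address into the profiled function (the "child").  */
  unsigned long selfpc = (unsigned long) __builtin_return_address (0);
  /* Return address into the child's caller (the "parent").  */
  unsigned long frompc = (unsigned long) __builtin_return_address (1);

  __mcount_internal (frompc, selfpc);   /* record the frompc->selfpc arc */
@}
@end smallexample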
1642
1643Number-of-calls information for library routines is collected by using a
1644special version of the C library. The programs in it are the same as in
1645the usual C library, but they were compiled with @samp{-pg}. If you
1646link your program with @samp{gcc @dots{} -pg}, it automatically uses the
1647profiling version of the library.
1648
1649Profiling also involves watching your program as it runs, and keeping a
1650histogram of where the program counter happens to be every now and then.
1651Typically the program counter is looked at around 100 times per second of
1652run time, but the exact frequency may vary from system to system.
1653
This is done in one of two ways. Most UNIX-like operating systems
1655provide a @code{profil()} system call, which registers a memory
1656array with the kernel, along with a scale
1657factor that determines how the program's address space maps
1658into the array.
1659Typical scaling values cause every 2 to 8 bytes of address space
1660to map into a single array slot.
1661On every tick of the system clock
1662(assuming the profiled program is running), the value of the
1663program counter is examined and the corresponding slot in
1664the memory array is incremented. Since this is done in the kernel,
which has to interrupt the process anyway to handle the clock
1666interrupt, very little additional system overhead is required.
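
A minimal sketch of this mechanism follows, assuming a BSD/glibc-style
@code{profil()}; the text-segment bounds and the scale computation are
illustrative placeholders, since the fixed-point interpretation of the
scale argument is system-dependent:

@smallexample
#include <stdlib.h>
#include <unistd.h>

extern char _start[], etext[];   /* illustrative text-segment bounds */

void
start_pc_sampling (void)
@{
  size_t text_size = (size_t) (etext - _start);
  size_t n_bins = text_size / 4;               /* ~4 bytes of text per bin */
  unsigned short *buf = calloc (n_bins, sizeof *buf);

  /* Fixed-point fraction relating pc offsets to histogram offsets.  */
  unsigned int scale = (unsigned int)
    (((unsigned long long) n_bins * sizeof *buf << 16) / text_size);

  /* Ask the kernel to bump a slot in buf on every profiling clock tick.  */
  profil (buf, n_bins * sizeof *buf, (size_t) _start, scale);
@}
@end smallexample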
1667
1668However, some operating systems, most notably Linux 2.0 (and earlier),
1669do not provide a @code{profil()} system call. On such a system,
1670arrangements are made for the kernel to periodically deliver
1671a signal to the process (typically via @code{setitimer()}),
1672which then performs the same operation of examining the
1673program counter and incrementing a slot in the memory array.
1674Since this method requires a signal to be delivered to
user space every time a sample is taken, it incurs considerably
1676more overhead than kernel-based profiling. Also, due to the
1677added delay required to deliver the signal, this method is
1678less accurate as well.
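
The signal-based fallback can be sketched along the same lines;
extracting the interrupted program counter from the signal context is
machine-dependent, so a hypothetical helper stands in for it here:

@smallexample
#include <signal.h>
#include <string.h>
#include <sys/time.h>
#include <stddef.h>

#define N_BINS 4096
static unsigned int hist[N_BINS];
static size_t text_low, bytes_per_bin;      /* set up elsewhere */

/* Hypothetical, machine-dependent helper.  */
extern size_t pc_from_ucontext (void *uc);

static void
profile_tick (int sig, siginfo_t *si, void *uc)
@{
  size_t bin = (pc_from_ucontext (uc) - text_low) / bytes_per_bin;
  if (bin < N_BINS)
    hist[bin]++;
@}

void
start_signal_sampling (void)
@{
  struct sigaction sa;
  memset (&sa, 0, sizeof sa);
  sa.sa_sigaction = profile_tick;
  sa.sa_flags = SA_SIGINFO | SA_RESTART;
  sigaction (SIGPROF, &sa, NULL);

  /* Fire roughly 100 times per second of run time.  */
  struct itimerval it = @{ @{ 0, 10000 @}, @{ 0, 10000 @} @};
  setitimer (ITIMER_PROF, &it, NULL);
@}
@end smallexample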
1679
1680A special startup routine allocates memory for the histogram and
1681either calls @code{profil()} or sets up
1682a clock signal handler.
1683This routine (@code{monstartup}) can be invoked in several ways.
1684On Linux systems, a special profiling startup file @code{gcrt0.o},
1685which invokes @code{monstartup} before @code{main},
1686is used instead of the default @code{crt0.o}.
1687Use of this special startup file is one of the effects
1688of using @samp{gcc @dots{} -pg} to link.
1689On SPARC systems, no special startup files are used.
1690Rather, the @code{mcount} routine, when it is invoked for
1691the first time (typically when @code{main} is called),
1692calls @code{monstartup}.
1693
1694If the compiler's @samp{-a} option was used, basic-block counting
1695is also enabled. Each object file is then compiled with a static array
1696of counts, initially zero.
1697In the executable code, every time a new basic-block begins
1698(i.e. when an @code{if} statement appears), an extra instruction
1699is inserted to increment the corresponding count in the array.
At compile time, a paired array is constructed that records
the starting address of each basic-block. Taken together,
1702the two arrays record the starting address of every basic-block,
1703along with the number of times it was executed.
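
Conceptually, the instrumentation amounts to something like the
following for each object file; the array and function names here are
invented for illustration, since the real arrays are generated and
referenced directly by the compiler:

@smallexample
#define N_BLOCKS 5                        /* basic blocks in this file */

static unsigned long bb_counts[N_BLOCKS]; /* execution counts, start at 0 */
static void *bb_addrs[N_BLOCKS];          /* starting address of each block */

/* Entering basic block i is compiled into, in effect:  */
static void
enter_block (int i)
@{
  bb_counts[i]++;
@}
@end smallexample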
1704
1705The profiling library also includes a function (@code{mcleanup}) which is
1706typically registered using @code{atexit()} to be called as the
1707program exits, and is responsible for writing the file @file{gmon.out}.
1708Profiling is turned off, various headers are output, and the histogram
1709is written, followed by the call-graph arcs and the basic-block counts.
1710
1711The output from @code{gprof} gives no indication of parts of your program that
1712are limited by I/O or swapping bandwidth. This is because samples of the
1713program counter are taken at fixed intervals of the program's run time.
1714Therefore, the
1715time measurements in @code{gprof} output say nothing about time that your
1716program was not running. For example, a part of the program that creates
1717so much data that it cannot all fit in physical memory at once may run very
1718slowly due to thrashing, but @code{gprof} will say it uses little time. On
1719the other hand, sampling by run time has the advantage that the amount of
1720load due to other users won't directly affect the output you get.
1721
1722@node File Format,Internals,Implementation,Details
1723@section Profiling Data File Format
1724
1725The old BSD-derived file format used for profile data does not contain a
magic cookie that allows one to check whether a data file really is a
@code{gprof} file. Furthermore, it does not provide a version number, thus
1728rendering changes to the file format almost impossible. @sc{gnu} @code{gprof}
1729uses a new file format that provides these features. For backward
1730compatibility, @sc{gnu} @code{gprof} continues to support the old BSD-derived
1731format, but not all features are supported with it. For example,
1732basic-block execution counts cannot be accommodated by the old file
1733format.
1734
1735The new file format is defined in header file @file{gmon_out.h}. It
1736consists of a header containing the magic cookie and a version number,
1737as well as some spare bytes available for future extensions. All data
1738in a profile data file is in the native format of the host on which
1739the profile was collected. @sc{gnu} @code{gprof} adapts automatically to the
1740byte-order in use.
1741
1742In the new file format, the header is followed by a sequence of
1743records. Currently, there are three different record types: histogram
1744records, call-graph arc records, and basic-block execution count
1745records. Each file can contain any number of each record type. When
1746reading a file, @sc{gnu} @code{gprof} will ensure records of the same type are
1747compatible with each other and compute the union of all records. For
1748example, for basic-block execution counts, the union is simply the sum
1749of all execution counts for each basic-block.
1750
1751@subsection Histogram Records
1752
1753Histogram records consist of a header that is followed by an array of
1754bins. The header contains the text-segment range that the histogram
1755spans, the size of the histogram in bytes (unlike in the old BSD
1756format, this does not include the size of the header), the rate of the
1757profiling clock, and the physical dimension that the bin counts
1758represent after being scaled by the profiling clock rate. The
1759physical dimension is specified in two parts: a long name of up to 15
1760characters and a single character abbreviation. For example, a
1761histogram representing real-time would specify the long name as
1762"seconds" and the abbreviation as "s". This feature is useful for
1763architectures that support performance monitor hardware (which,
1764fortunately, is becoming increasingly common). For example, under DEC
1765OSF/1, the "uprofile" command can be used to produce a histogram of,
1766say, instruction cache misses. In this case, the dimension in the
1767histogram header could be set to "i-cache misses" and the abbreviation
1768could be set to "1" (because it is simply a count, not a physical
1769dimension). Also, the profiling rate would have to be set to 1 in
1770this case.
1771
Histogram bins are 16-bit numbers and each bin represents an equal
1773amount of text-space. For example, if the text-segment is one
1774thousand bytes long and if there are ten bins in the histogram, each
1775bin represents one hundred bytes.
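
The header described above corresponds roughly to the following
structure; this is a simplified approximation, and @file{gmon_out.h}
remains the authoritative definition (the fields are stored as byte
arrays in the host's native format):

@smallexample
struct hist_hdr                    /* illustrative approximation */
@{
  char low_pc[sizeof (char *)];    /* low end of text-segment range */
  char high_pc[sizeof (char *)];   /* high end of text-segment range */
  char hist_size[4];               /* size of the histogram (header excluded) */
  char prof_rate[4];               /* profiling clock rate */
  char dimen[15];                  /* physical dimension, e.g. "seconds" */
  char dimen_abbrev;               /* abbreviation, e.g. 's' */
@};
/* ...followed by the 16-bit bins.  */
@end smallexample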
1776
1777
1778@subsection Call-Graph Records
1779
1780Call-graph records have a format that is identical to the one used in
1781the BSD-derived file format. It consists of an arc in the call graph
1782and a count indicating the number of times the arc was traversed
1783during program execution. Arcs are specified by a pair of addresses:
the first must be within the caller's function and the second must be
1785within the callee's function. When performing profiling at the
1786function level, these addresses can point anywhere within the
1787respective function. However, when profiling at the line-level, it is
1788better if the addresses are as close to the call-site/entry-point as
1789possible. This will ensure that the line-level call-graph is able to
1790identify exactly which line of source code performed calls to a
1791function.
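
In outline, each call-graph record carries just three fields; again,
this is an illustrative approximation of the layout defined in
@file{gmon_out.h}:

@smallexample
struct cg_arc_record               /* illustrative approximation */
@{
  char from_pc[sizeof (char *)];   /* address within the caller */
  char self_pc[sizeof (char *)];   /* address within the callee */
  char count[4];                   /* times this arc was traversed */
@};
@end smallexample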
1792
1793@subsection Basic-Block Execution Count Records
1794
1795Basic-block execution count records consist of a header followed by a
1796sequence of address/count pairs. The header simply specifies the
1797length of the sequence. In an address/count pair, the address
1798identifies a basic-block and the count specifies the number of times
that basic-block was executed. Any address within the basic-block can
1800be used.
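
Schematically, a basic-block execution count record can be pictured as
follows (the field widths are illustrative, not the exact on-disk
layout):

@smallexample
struct bb_record_pair              /* one address/count pair */
@{
  char address[sizeof (char *)];   /* an address within the basic-block */
  char count[4];                   /* times that basic-block was executed */
@};
/* A record is a length field followed by that many such pairs.  */
@end smallexample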
1801
1802@node Internals,Debugging,File Format,Details
1803@section @code{gprof}'s Internal Operation
1804
1805Like most programs, @code{gprof} begins by processing its options.
During this stage, it may build its symspec list
1807(@code{sym_ids.c:sym_id_add}), if
1808options are specified which use symspecs.
1809@code{gprof} maintains a single linked list of symspecs,
1810which will eventually get turned into 12 symbol tables,
1811organized into six include/exclude pairs - one
1812pair each for the flat profile (INCL_FLAT/EXCL_FLAT),
1813the call graph arcs (INCL_ARCS/EXCL_ARCS),
1814printing in the call graph (INCL_GRAPH/EXCL_GRAPH),
1815timing propagation in the call graph (INCL_TIME/EXCL_TIME),
1816the annotated source listing (INCL_ANNO/EXCL_ANNO),
1817and the execution count listing (INCL_EXEC/EXCL_EXEC).
1818
1819After option processing, @code{gprof} finishes
1820building the symspec list by adding all the symspecs in
1821@code{default_excluded_list} to the exclude lists
1822EXCL_TIME and EXCL_GRAPH, and if line-by-line profiling is specified,
1823EXCL_FLAT as well.
1824These default excludes are not added to EXCL_ANNO, EXCL_ARCS, and EXCL_EXEC.
1825
1826Next, the BFD library is called to open the object file,
1827verify that it is an object file,
1828and read its symbol table (@code{core.c:core_init}),
1829using @code{bfd_canonicalize_symtab} after mallocing
an appropriately sized array of asymbols. At this point,
1831function mappings are read (if the @samp{--file-ordering} option
1832has been specified), and the core text space is read into
1833memory (if the @samp{-c} option was given).
1834
1835@code{gprof}'s own symbol table, an array of Sym structures,
1836is now built.
1837This is done in one of two ways, by one of two routines, depending
1838on whether line-by-line profiling (@samp{-l} option) has been
1839enabled.
1840For normal profiling, the BFD canonical symbol table is scanned.
1841For line-by-line profiling, every
1842text space address is examined, and a new symbol table entry
1843gets created every time the line number changes.
1844In either case, two passes are made through the symbol
1845table - one to count the size of the symbol table required,
1846and the other to actually read the symbols. In between the
1847two passes, a single array of type @code{Sym} is created of
the appropriate length.
1849Finally, @code{symtab.c:symtab_finalize}
1850is called to sort the symbol table and remove duplicate entries
1851(entries with the same memory address).
1852
1853The symbol table must be a contiguous array for two reasons.
1854First, the @code{qsort} library function (which sorts an array)
1855will be used to sort the symbol table.
1856Also, the symbol lookup routine (@code{symtab.c:sym_lookup}),
1857which finds symbols
1858based on memory address, uses a binary search algorithm
1859which requires the symbol table to be a sorted array.
1860Function symbols are indicated with an @code{is_func} flag.
1861Line number symbols have no special flags set.
1862Additionally, a symbol can have an @code{is_static} flag
1863to indicate that it is a local symbol.
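
The lookup can be sketched as follows; this is a simplified
illustration rather than the actual @code{sym_lookup} code, and the
@code{Sym} fields shown are placeholders:

@smallexample
typedef struct @{ unsigned long addr; /* ... */ @} Sym;

/* Binary-search a sorted, contiguous symbol array for the symbol
   whose address range contains pc (a symbol's range is taken to end
   where the next symbol begins).  */
static Sym *
lookup_pc (Sym *syms, long count, unsigned long pc)
@{
  long lo = 0, hi = count - 1;

  while (lo <= hi)
    @{
      long mid = lo + (hi - lo) / 2;

      if (pc < syms[mid].addr)
        hi = mid - 1;
      else if (mid + 1 < count && pc >= syms[mid + 1].addr)
        lo = mid + 1;
      else
        return &syms[mid];
    @}
  return 0;                        /* pc precedes every symbol */
@}
@end smallexample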
1864
1865With the symbol table read, the symspecs can now be translated
1866into Syms (@code{sym_ids.c:sym_id_parse}). Remember that a single
1867symspec can match multiple symbols.
1868An array of symbol tables
1869(@code{syms}) is created, each entry of which is a symbol table
1870of Syms to be included or excluded from a particular listing.
1871The master symbol table and the symspecs are examined by nested
1872loops, and every symbol that matches a symspec is inserted
1873into the appropriate syms table. This is done twice, once to
1874count the size of each required symbol table, and again to build
1875the tables, which have been malloced between passes.
1876From now on, to determine whether a symbol is on an include
1877or exclude symspec list, @code{gprof} simply uses its
1878standard symbol lookup routine on the appropriate table
1879in the @code{syms} array.
1880
1881Now the profile data file(s) themselves are read
1882(@code{gmon_io.c:gmon_out_read}),
1883first by checking for a new-style @samp{gmon.out} header,
1884then assuming this is an old-style BSD @samp{gmon.out}
1885if the magic number test failed.
1886
1887New-style histogram records are read by @code{hist.c:hist_read_rec}.
1888For the first histogram record, allocate a memory array to hold
1889all the bins, and read them in.
1890When multiple profile data files (or files with multiple histogram
1891records) are read, the starting address, ending address, number
1892of bins and sampling rate must match between the various histograms,
1893or a fatal error will result.
1894If everything matches, just sum the additional histograms into
1895the existing in-memory array.
1896
1897As each call graph record is read (@code{call_graph.c:cg_read_rec}),
1898the parent and child addresses
1899are matched to symbol table entries, and a call graph arc is
1900created by @code{cg_arcs.c:arc_add}, unless the arc fails a symspec
1901check against INCL_ARCS/EXCL_ARCS. As each arc is added,
1902a linked list is maintained of the parent's child arcs, and of the child's
1903parent arcs.
1904Both the child's call count and the arc's call count are
1905incremented by the record's call count.
1906
1907Basic-block records are read (@code{basic_blocks.c:bb_read_rec}),
1908but only if line-by-line profiling has been selected.
1909Each basic-block address is matched to a corresponding line
1910symbol in the symbol table, and an entry made in the symbol's
1911bb_addr and bb_calls arrays. Again, if multiple basic-block
1912records are present for the same address, the call counts
1913are cumulative.
1914
1915A gmon.sum file is dumped, if requested (@code{gmon_io.c:gmon_out_write}).
1916
1917If histograms were present in the data files, assign them to symbols
1918(@code{hist.c:hist_assign_samples}) by iterating over all the sample
1919bins and assigning them to symbols. Since the symbol table
1920is sorted in order of ascending memory addresses, we can
simply follow along in the symbol table as we make our pass
1922over the sample bins.
1923This step includes a symspec check against INCL_FLAT/EXCL_FLAT.
1924Depending on the histogram
1925scale factor, a sample bin may span multiple symbols,
1926in which case a fraction of the sample count is allocated
1927to each symbol, proportional to the degree of overlap.
1928This effect is rare for normal profiling, but overlaps
1929are more common during line-by-line profiling, and can
1930cause each of two adjacent lines to be credited with half
1931a hit, for example.
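
The proportional split works out to a small piece of arithmetic,
sketched here for illustration (this is not @code{hist.c}'s actual
code):

@smallexample
/* Share of a bin's samples credited to a symbol, proportional to how
   much of the bin's address range [bin_lo, bin_hi) overlaps the
   symbol's range [sym_lo, sym_hi).  */
static double
credit_for_overlap (double bin_count,
                    unsigned long bin_lo, unsigned long bin_hi,
                    unsigned long sym_lo, unsigned long sym_hi)
@{
  unsigned long lo = bin_lo > sym_lo ? bin_lo : sym_lo;
  unsigned long hi = bin_hi < sym_hi ? bin_hi : sym_hi;

  if (hi <= lo)
    return 0.0;                    /* no overlap */
  return bin_count * (double) (hi - lo) / (double) (bin_hi - bin_lo);
@}
@end smallexample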
1932
1933If call graph data is present, @code{cg_arcs.c:cg_assemble} is called.
First, if @samp{-c} was specified, a machine-dependent
1935routine (@code{find_call}) scans through each symbol's machine code,
1936looking for subroutine call instructions, and adding them
1937to the call graph with a zero call count.
1938A topological sort is performed by depth-first numbering
1939all the symbols (@code{cg_dfn.c:cg_dfn}), so that
1940children are always numbered less than their parents,
then making an array of pointers into the symbol table and sorting it into
1942numerical order, which is reverse topological
1943order (children appear before parents).
1944Cycles are also detected at this point, all members
1945of which are assigned the same topological number.
1946Two passes are now made through this sorted array of symbol pointers.
1947The first pass, from end to beginning (parents to children),
computes the fraction of child time to propagate to each parent
1949and a print flag.
1950The print flag reflects symspec handling of INCL_GRAPH/EXCL_GRAPH,
1951with a parent's include or exclude (print or no print) property
1952being propagated to its children, unless they themselves explicitly appear
1953in INCL_GRAPH or EXCL_GRAPH.
A second pass, from beginning to end (children to parents), actually
propagates the timings along the call graph, subject
1956to a check against INCL_TIME/EXCL_TIME.
With the print flag, fractions, and timings now stored in the symbol
structures, the topological sort array is discarded, and a
1959new array of pointers is assembled, this time sorted by propagated time.
1960
1961Finally, print the various outputs the user requested, which is now fairly
1962straightforward. The call graph (@code{cg_print.c:cg_print}) and
1963flat profile (@code{hist.c:hist_print}) are regurgitations of values
1964already computed. The annotated source listing
1965(@code{basic_blocks.c:print_annotated_source}) uses basic-block
1966information, if present, to label each line of code with call counts,
1967otherwise only the function call counts are presented.
1968
1969The function ordering code is marginally well documented
1970in the source code itself (@code{cg_print.c}). Basically,
1971the functions with the most use and the most parents are
1972placed first, followed by other functions with the most use,
1973followed by lower use functions, followed by unused functions
1974at the end.
1975
1976@node Debugging,,Internals,Details
@section Debugging @code{gprof}
1978
1979If @code{gprof} was compiled with debugging enabled,
1980the @samp{-d} option triggers debugging output
1981(to stdout) which can be helpful in understanding its operation.
1982The debugging number specified is interpreted as a sum of the following
1983options:
1984
1985@table @asis
1986@item 2 - Topological sort
1987Monitor depth-first numbering of symbols during call graph analysis
1988@item 4 - Cycles
1989Shows symbols as they are identified as cycle heads
1990@item 16 - Tallying
1991As the call graph arcs are read, show each arc and how
1992the total calls to each function are tallied
1993@item 32 - Call graph arc sorting
1994Details sorting individual parents/children within each call graph entry
1995@item 64 - Reading histogram and call graph records
1996Shows address ranges of histograms as they are read, and each
1997call graph arc
1998@item 128 - Symbol table
1999Reading, classifying, and sorting the symbol table from the object file.
2000For line-by-line profiling (@samp{-l} option), also shows line numbers
2001being assigned to memory addresses.
2002@item 256 - Static call graph
2003Trace operation of @samp{-c} option
2004@item 512 - Symbol table and arc table lookups
2005Detail operation of lookup routines
2006@item 1024 - Call graph propagation
2007Shows how function times are propagated along the call graph
2008@item 2048 - Basic-blocks
2009Shows basic-block records as they are read from profile data
2010(only meaningful with @samp{-l} option)
2011@item 4096 - Symspecs
2012Shows symspec-to-symbol pattern matching operation
2013@item 8192 - Annotate source
2014Tracks operation of @samp{-A} option
2015@end table
2016
2017@contents
2018@bye
2019
2020NEEDS AN INDEX
2021
2022-T - "traditional BSD style": How is it different? Should the
2023differences be documented?
2024
2025example flat file adds up to 100.01%...
2026
2027note: time estimates now only go out to one decimal place (0.0), where
2028they used to extend two (78.67).