\input texinfo @c -*-texinfo-*-
@setfilename gprof.info
@settitle GNU gprof
@setchapternewpage odd

@ifinfo
@c This is a dir.info fragment to support semi-automated addition of
@c manuals to an info tree. zoo@cygnus.com is developing this facility.
@format
START-INFO-DIR-ENTRY
* gprof: (gprof).                Profiling your program's execution
END-INFO-DIR-ENTRY
@end format
@end ifinfo

@ifinfo
This file documents the gprof profiler of the GNU system.

Copyright (C) 1988, 1992, 1997, 1998 Free Software Foundation, Inc.

Permission is granted to make and distribute verbatim copies of
this manual provided the copyright notice and this permission notice
are preserved on all copies.

@ignore
Permission is granted to process this file through TeX and print the
results, provided the printed document carries copying permission
notice identical to this one except for the removal of this paragraph
(this paragraph not being relevant to the printed manual).

@end ignore
Permission is granted to copy and distribute modified versions of this
manual under the conditions for verbatim copying, provided that the entire
resulting derived work is distributed under the terms of a permission
notice identical to this one.

Permission is granted to copy and distribute translations of this manual
into another language, under the above conditions for modified versions.
@end ifinfo

@finalout
@smallbook

@titlepage
@title GNU gprof
@subtitle The @sc{gnu} Profiler
@author Jay Fenlason and Richard Stallman

@page

This manual describes the @sc{gnu} profiler, @code{gprof}, and how you
can use it to determine which parts of a program are taking most of the
execution time. We assume that you know how to write, compile, and
execute programs. @sc{gnu} @code{gprof} was written by Jay Fenlason.

This manual was edited January 1993 by Jeffrey Osier
and updated September 1997 by Brent Baccala.

@vskip 0pt plus 1filll
Copyright @copyright{} 1988, 1992, 1997, 1998 Free Software Foundation, Inc.

Permission is granted to make and distribute verbatim copies of
this manual provided the copyright notice and this permission notice
are preserved on all copies.

@ignore
Permission is granted to process this file through TeX and print the
results, provided the printed document carries copying permission
notice identical to this one except for the removal of this paragraph
(this paragraph not being relevant to the printed manual).

@end ignore
Permission is granted to copy and distribute modified versions of this
manual under the conditions for verbatim copying, provided that the entire
resulting derived work is distributed under the terms of a permission
notice identical to this one.

Permission is granted to copy and distribute translations of this manual
into another language, under the same conditions as for modified versions.

@end titlepage

@ifinfo
@node Top
@top Profiling a Program: Where Does It Spend Its Time?

This manual describes the @sc{gnu} profiler, @code{gprof}, and how you
can use it to determine which parts of a program are taking most of the
execution time. We assume that you know how to write, compile, and
execute programs. @sc{gnu} @code{gprof} was written by Jay Fenlason.

This manual was updated August 1997 by Brent Baccala.

@menu
* Introduction::        What profiling means, and why it is useful.

* Compiling::           How to compile your program for profiling.
* Executing::           Executing your program to generate profile data.
* Invoking::            How to run @code{gprof}, and its options.

* Output::              Interpreting @code{gprof}'s output.

* Inaccuracy::          Potential problems you should be aware of.
* How do I?::           Answers to common questions.
* Incompatibilities::   Incompatibilities between @sc{gnu} @code{gprof} and Unix @code{gprof}.
* Details::             Details of how profiling is done.
@end menu
@end ifinfo

@node Introduction
@chapter Introduction to Profiling

Profiling allows you to learn where your program spent its time and which
functions called which other functions while it was executing. This
information can show you which pieces of your program are slower than you
expected, and might be candidates for rewriting to make your program
execute faster. It can also tell you which functions are being called more
or less often than you expected. This may help you spot bugs that might
otherwise have gone unnoticed.

Since the profiler uses information collected during the actual execution
of your program, it can be used on programs that are too large or too
complex to analyze by reading the source. However, how your program is run
will affect the information that shows up in the profile data. If you
don't use some feature of your program while it is being profiled, no
profile information will be generated for that feature.

Profiling has several steps:

@itemize @bullet
@item
You must compile and link your program with profiling enabled.
@xref{Compiling}.

@item
You must execute your program to generate a profile data file.
@xref{Executing}.

@item
You must run @code{gprof} to analyze the profile data.
@xref{Invoking}.
@end itemize

The next three chapters explain these steps in greater detail.

Several forms of output are available from the analysis.

The @dfn{flat profile} shows how much time your program spent in each function,
and how many times that function was called. If you simply want to know
which functions burn most of the cycles, it is stated concisely here.
@xref{Flat Profile}.

The @dfn{call graph} shows, for each function, which functions called it, which
other functions it called, and how many times. There is also an estimate
of how much time was spent in the subroutines of each function. This can
suggest places where you might try to eliminate function calls that use a
lot of time. @xref{Call Graph}.

The @dfn{annotated source} listing is a copy of the program's
source code, labeled with the number of times each line of the
program was executed. @xref{Annotated Source}.

To better understand how profiling works, you may wish to read
a description of its implementation.
@xref{Implementation}.

@node Compiling
@chapter Compiling a Program for Profiling

The first step in generating profile information for your program is
to compile and link it with profiling enabled.

To compile a source file for profiling, specify the @samp{-pg} option when
you run the compiler. (This is in addition to the options you normally
use.)

To link the program for profiling, if you use a compiler such as @code{cc}
to do the linking, simply specify @samp{-pg} in addition to your usual
options. The same option, @samp{-pg}, alters either compilation or linking
to do what is necessary for profiling. Here are examples:

@example
cc -g -c myprog.c utils.c -pg
cc -o myprog myprog.o utils.o -pg
@end example

The @samp{-pg} option also works with a command that both compiles and links:

@example
cc -o myprog myprog.c utils.c -g -pg
@end example

If you run the linker @code{ld} directly instead of through a compiler
such as @code{cc}, you may have to specify a profiling startup file
@file{gcrt0.o} as the first input file instead of the usual startup
file @file{crt0.o}. In addition, you would probably want to
specify the profiling C library, @file{libc_p.a}, by writing
@samp{-lc_p} instead of the usual @samp{-lc}. This is not absolutely
necessary, but doing this gives you number-of-calls information for
standard library functions such as @code{read} and @code{open}. For
example:

@example
ld -o myprog /lib/gcrt0.o myprog.o utils.o -lc_p
@end example

If you compile only some of the modules of the program with @samp{-pg}, you
can still profile the program, but you won't get complete information about
the modules that were compiled without @samp{-pg}. The only information
you get for the functions in those modules is the total time spent in them;
there is no record of how many times they were called, or from where. This
will not affect the flat profile (except that the @code{calls} field for
the functions will be blank), but will greatly reduce the usefulness of the
call graph.

If you wish to perform line-by-line profiling,
you will also need to specify the @samp{-g} option,
instructing the compiler to insert debugging symbols into the program
that match program addresses to source code lines.
@xref{Line-by-line}.

In addition to the @samp{-pg} and @samp{-g} options,
you may also wish to specify the @samp{-a} option when compiling.
This will instrument
the program to perform basic-block counting. As the program runs,
it will count how many times it executed each branch of each @samp{if}
statement, each iteration of each @samp{do} loop, etc. This will
enable @code{gprof} to construct an annotated source code
listing showing how many times each line of code was executed.

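For example, with @sc{gnu} @code{gcc} and a single hypothetical source file
@file{myprog.c}, a compile-and-link command that enables call counting,
debugging symbols, and basic-block counting might look like this (only a
sketch; adjust it to your own compiler and sources):

@example
gcc -g -pg -a -o myprog myprog.c
@end example
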
@node Executing
@chapter Executing the Program

Once the program is compiled for profiling, you must run it in order to
generate the information that @code{gprof} needs. Simply run the program
as usual, using the normal arguments, file names, etc. The program should
run normally, producing the same output as usual. It will, however, run
somewhat slower than normal because of the time spent collecting and
writing the profile data.

The way you run the program---the arguments and input that you give
it---may have a dramatic effect on what the profile information shows. The
profile data will describe the parts of the program that were activated for
the particular input you use. For example, if the first command you give
to your program is to quit, the profile data will show the time used in
initialization and in cleanup, but not much else.

Your program will write the profile data into a file called @file{gmon.out}
just before exiting. If there is already a file called @file{gmon.out},
its contents are overwritten. There is currently no way to tell the
program to write the profile data under a different name, but you can rename
the file afterward if you are concerned that it may be overwritten.

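If you want to keep the profile data from several runs, a simple approach
is to rename the file after each run; the file and input names here are
only illustrative:

@example
./myprog < input1
mv gmon.out gmon.run1
./myprog < input2
mv gmon.out gmon.run2
@end example
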
In order to write the @file{gmon.out} file properly, your program must exit
normally: by returning from @code{main} or by calling @code{exit}. Calling
the low-level function @code{_exit} does not write the profile data, and
neither does abnormal termination due to an unhandled signal.

The @file{gmon.out} file is written in the program's @emph{current working
directory} at the time it exits. This means that if your program calls
@code{chdir}, the @file{gmon.out} file will be left in the last directory
your program @code{chdir}'d to. If you don't have permission to write in
this directory, the file is not written, and you will get an error message.

Older versions of the @sc{gnu} profiling library may also write a file
called @file{bb.out}. This file, if present, contains a human-readable
listing of the basic-block execution counts. Unfortunately, the
appearance of a human-readable @file{bb.out} means the basic-block
counts didn't get written into @file{gmon.out}.
The Perl script @code{bbconv.pl}, included with the @code{gprof}
source distribution, will convert a @file{bb.out} file into
a format readable by @code{gprof}.

@node Invoking
@chapter @code{gprof} Command Summary

After you have a profile data file @file{gmon.out}, you can run @code{gprof}
to interpret the information in it. The @code{gprof} program prints a
flat profile and a call graph on standard output. Typically you would
redirect the output of @code{gprof} into a file with @samp{>}.

You run @code{gprof} like this:

@smallexample
gprof @var{options} [@var{executable-file} [@var{profile-data-files}@dots{}]] [> @var{outfile}]
@end smallexample

@noindent
Here square brackets indicate optional arguments.

If you omit the executable file name, the file @file{a.out} is used. If
you give no profile data file name, the file @file{gmon.out} is used. If
any file is not in the proper format, or if the profile data file does not
appear to belong to the executable file, an error message is printed.

You can give more than one profile data file by entering all their names
after the executable file name; then the statistics in all the data files
are summed together.

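For example, to analyze a hypothetical executable @file{myprog} using two
renamed data files from earlier runs, summing their statistics, you might
type:

@smallexample
gprof myprog gmon.run1 gmon.run2 > profile.txt
@end smallexample
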
The order of these options does not matter.

@menu
* Output Options::          Controlling @code{gprof}'s output style
* Analysis Options::        Controlling how @code{gprof} analyzes its data
* Miscellaneous Options::
* Deprecated Options::      Options you no longer need to use, but which
                            have been retained for compatibility
* Symspecs::                Specifying functions to include or exclude
@end menu

@node Output Options,Analysis Options,,Invoking
@section Output Options

These options specify which of several output formats
@code{gprof} should produce.

Many of these options take an optional @dfn{symspec} to specify
functions to be included or excluded. These options can be
specified multiple times, with different symspecs, to include
or exclude sets of symbols. @xref{Symspecs}.

Specifying any of these options overrides the default (@samp{-p -q}),
which prints a flat profile and call graph analysis
for all functions.

@table @code

@item -A[@var{symspec}]
@itemx --annotated-source[=@var{symspec}]
The @samp{-A} option causes @code{gprof} to print annotated source code.
If @var{symspec} is specified, print output only for matching symbols.
@xref{Annotated Source}.

@item -b
@itemx --brief
If the @samp{-b} option is given, @code{gprof} doesn't print the
verbose blurbs that try to explain the meaning of all of the fields in
the tables. This is useful if you intend to print out the output, or
are tired of seeing the blurbs.

@item -C[@var{symspec}]
@itemx --exec-counts[=@var{symspec}]
The @samp{-C} option causes @code{gprof} to
print a tally of functions and the number of times each was called.
If @var{symspec} is specified, print tally only for matching symbols.

If the profile data file contains basic-block count records, specifying
the @samp{-l} option, along with @samp{-C}, will cause basic-block
execution counts to be tallied and displayed.

@item -i
@itemx --file-info
The @samp{-i} option causes @code{gprof} to display summary information
about the profile data file(s) and then exit. The number of histogram,
call graph, and basic-block count records is displayed.

@item -I @var{dirs}
@itemx --directory-path=@var{dirs}
The @samp{-I} option specifies a list of search directories in
which to find source files. The environment variable @code{GPROF_PATH}
can also be used to convey this information.
Used mostly for annotated source output.

@item -J[@var{symspec}]
@itemx --no-annotated-source[=@var{symspec}]
The @samp{-J} option causes @code{gprof} not to
print annotated source code.
If @var{symspec} is specified, @code{gprof} prints annotated source,
but excludes matching symbols.

@item -L
@itemx --print-path
Normally, source filenames are printed with the path
component suppressed. The @samp{-L} option causes @code{gprof}
to print the full pathname of
source filenames, which is determined
from symbolic debugging information in the image file
and is relative to the directory in which the compiler
was invoked.

@item -p[@var{symspec}]
@itemx --flat-profile[=@var{symspec}]
The @samp{-p} option causes @code{gprof} to print a flat profile.
If @var{symspec} is specified, print flat profile only for matching symbols.
@xref{Flat Profile}.

@item -P[@var{symspec}]
@itemx --no-flat-profile[=@var{symspec}]
The @samp{-P} option causes @code{gprof} to suppress printing a flat profile.
If @var{symspec} is specified, @code{gprof} prints a flat profile,
but excludes matching symbols.

@item -q[@var{symspec}]
@itemx --graph[=@var{symspec}]
The @samp{-q} option causes @code{gprof} to print the call graph analysis.
If @var{symspec} is specified, print call graph only for matching symbols
and their children.
@xref{Call Graph}.

@item -Q[@var{symspec}]
@itemx --no-graph[=@var{symspec}]
The @samp{-Q} option causes @code{gprof} to suppress printing the
call graph.
If @var{symspec} is specified, @code{gprof} prints a call graph,
but excludes matching symbols.

@item -y
@itemx --separate-files
This option affects annotated source output only.
Normally, @code{gprof} prints annotated source files
to standard output. If this option is specified,
annotated source for a file named @file{path/filename}
is generated in the file @file{filename-ann}.

@item -Z[@var{symspec}]
@itemx --no-exec-counts[=@var{symspec}]
The @samp{-Z} option causes @code{gprof} not to
print a tally of functions and the number of times each was called.
If @var{symspec} is specified, print tally, but exclude matching symbols.

@item --function-ordering
The @samp{--function-ordering} option causes @code{gprof} to print a
suggested function ordering for the program based on profiling data.
This option suggests an ordering which may improve paging, TLB, and
cache behavior for the program on systems which support arbitrary
ordering of functions in an executable.

The exact details of how to force the linker to place functions
in a particular order are system-dependent and outside the scope of this
manual.

@item --file-ordering @var{map_file}
The @samp{--file-ordering} option causes @code{gprof} to print a
suggested @file{.o} link line ordering for the program based on profiling data.
This option suggests an ordering which may improve paging, TLB, and
cache behavior for the program on systems which do not support arbitrary
ordering of functions in an executable.

Use of the @samp{-a} argument is highly recommended with this option.

The @var{map_file} argument is a pathname to a file which provides
function name to object file mappings. The format of the file is similar to
the output of the program @code{nm}.

@smallexample
@group
c-parse.o:00000000 T yyparse
c-parse.o:00000004 C yyerrflag
c-lang.o:00000000 T maybe_objc_method_name
c-lang.o:00000000 T print_lang_statistics
c-lang.o:00000000 T recognize_objc_keyword
c-decl.o:00000000 T print_lang_identifier
c-decl.o:00000000 T print_lang_type
@dots{}

@end group
@end smallexample

@sc{gnu} @code{nm}'s @samp{--extern-only}, @samp{--defined-only}, @samp{-v},
and @samp{--print-file-name} options can be used to create @var{map_file}.

@item -T
@itemx --traditional
The @samp{-T} option causes @code{gprof} to print its output in
``traditional'' BSD style.

@item -w @var{width}
@itemx --width=@var{width}
Sets width of output lines to @var{width}.
Currently only used when printing the function index at the bottom
of the call graph.

@item -x
@itemx --all-lines
This option affects annotated source output only.
By default, only the lines at the beginning of a basic-block
are annotated. If this option is specified, every line in
a basic-block is annotated by repeating the annotation for the
first line. This behavior is similar to @code{tcov}'s @samp{-a}.

@item --demangle
@itemx --no-demangle
These options control whether C++ symbol names should be demangled when
printing output. The default is to demangle symbols. The
@samp{--no-demangle} option may be used to turn off demangling.

@end table

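For example, to print only a flat profile for a hypothetical executable
@file{myprog}, overriding the default @samp{-p -q} behavior, you might use:

@smallexample
gprof -p myprog gmon.out > flat.txt
@end smallexample
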
@node Analysis Options,Miscellaneous Options,Output Options,Invoking
@section Analysis Options

@table @code

@item -a
@itemx --no-static
The @samp{-a} option causes @code{gprof} to suppress the printing of
statically declared (private) functions. (These are functions whose
names are not listed as global, and which are not visible outside the
file/function/block where they were defined.) Time spent in these
functions, calls to/from them, etc., will all be attributed to the
function that was loaded directly before it in the executable file.
@c This is compatible with Unix @code{gprof}, but a bad idea.
This option affects both the flat profile and the call graph.

@item -c
@itemx --static-call-graph
The @samp{-c} option causes the call graph of the program to be
augmented by a heuristic which examines the text space of the object
file and identifies function calls in the binary machine code.
Since normal call graph records are only generated when functions are
entered, this option identifies children that could have been called,
but never were. Calls to functions that were not compiled with
profiling enabled are also identified, but only if symbol table
entries are present for them.
Calls to dynamic library routines are typically @emph{not} found
by this option.
Parents or children identified via this heuristic
are indicated in the call graph with call counts of @samp{0}.

@item -D
@itemx --ignore-non-functions
The @samp{-D} option causes @code{gprof} to ignore symbols which
are not known to be functions. This option will give more accurate
profile data on systems where it is supported (Solaris and HP-UX, for
example).

@item -k @var{from}/@var{to}
The @samp{-k} option allows you to delete from the call graph any arcs from
symbols matching symspec @var{from} to those matching symspec @var{to}.

@item -l
@itemx --line
The @samp{-l} option enables line-by-line profiling, which causes
histogram hits to be charged to individual source code lines,
instead of functions.
If the program was compiled with basic-block counting enabled,
this option will also identify how many times each line of
code was executed.
While line-by-line profiling can help isolate where in a large function
a program is spending its time, it also significantly increases
the running time of @code{gprof}, and magnifies statistical
inaccuracies.
@xref{Sampling Error}.

@item -m @var{num}
@itemx --min-count=@var{num}
This option affects execution count output only.
Symbols that are executed fewer than @var{num} times are suppressed.

@item -n[@var{symspec}]
@itemx --time[=@var{symspec}]
The @samp{-n} option causes @code{gprof}, in its call graph analysis,
to only propagate times for symbols matching @var{symspec}.

@item -N[@var{symspec}]
@itemx --no-time[=@var{symspec}]
The @samp{-N} option causes @code{gprof}, in its call graph analysis,
not to propagate times for symbols matching @var{symspec}.

@item -z
@itemx --display-unused-functions
If you give the @samp{-z} option, @code{gprof} will mention all
functions in the flat profile, even those that were never called, and
that had no time spent in them. This is useful in conjunction with the
@samp{-c} option for discovering which routines were never called.

@end table

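For example, assuming the program was compiled with @samp{-g -pg}, a
line-by-line flat profile for a hypothetical executable @file{myprog}
could be requested with:

@smallexample
gprof -l myprog gmon.out > lines.txt
@end smallexample
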
@node Miscellaneous Options,Deprecated Options,Analysis Options,Invoking
@section Miscellaneous Options

@table @code

@item -d[@var{num}]
@itemx --debug[=@var{num}]
The @samp{-d @var{num}} option specifies debugging options.
If @var{num} is not specified, all debugging options are enabled.
@xref{Debugging}.

@item -O@var{name}
@itemx --file-format=@var{name}
Selects the format of the profile data files.
Recognized formats are @samp{auto} (the default), @samp{bsd}, @samp{magic},
and @samp{prof} (not yet supported).

@item -s
@itemx --sum
The @samp{-s} option causes @code{gprof} to summarize the information
in the profile data files it read in, and write out a profile data
file called @file{gmon.sum}, which contains all the information from
the profile data files that @code{gprof} read in. The file @file{gmon.sum}
may be one of the specified input files; the effect of this is to
merge the data in the other input files into @file{gmon.sum}.

Eventually you can run @code{gprof} again without @samp{-s} to analyze the
cumulative data in the file @file{gmon.sum}.

@item -v
@itemx --version
The @samp{-v} flag causes @code{gprof} to print the current version
number, and then exit.

@end table

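For example, using the illustrative file names from earlier, you could
merge two runs into @file{gmon.sum} and then analyze the merged data:

@smallexample
gprof -s myprog gmon.run1 gmon.run2
gprof myprog gmon.sum > merged.txt
@end smallexample
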
@node Deprecated Options,Symspecs,Miscellaneous Options,Invoking
@section Deprecated Options

These options have been replaced with newer versions that use symspecs.

@table @code

@item -e @var{function_name}
The @samp{-e @var{function}} option tells @code{gprof} not to print
information about the function @var{function_name} (and its
children@dots{}) in the call graph. The function will still be listed
as a child of any functions that call it, but its index number will be
shown as @samp{[not printed]}. More than one @samp{-e} option may be
given; only one @var{function_name} may be indicated with each @samp{-e}
option.

@item -E @var{function_name}
The @samp{-E @var{function}} option works like the @samp{-e} option, but
time spent in the function (and children who were not called from
anywhere else) will not be used to compute the percentages-of-time for
the call graph. More than one @samp{-E} option may be given; only one
@var{function_name} may be indicated with each @samp{-E} option.

@item -f @var{function_name}
The @samp{-f @var{function}} option causes @code{gprof} to limit the
call graph to the function @var{function_name} and its children (and
their children@dots{}). More than one @samp{-f} option may be given;
only one @var{function_name} may be indicated with each @samp{-f}
option.

@item -F @var{function_name}
The @samp{-F @var{function}} option works like the @samp{-f} option, but
only time spent in the function and its children (and their
children@dots{}) will be used to determine total-time and
percentages-of-time for the call graph. More than one @samp{-F} option
may be given; only one @var{function_name} may be indicated with each
@samp{-F} option. The @samp{-F} option overrides the @samp{-E} option.

@end table

Note that only one function can be specified with each @samp{-e},
@samp{-E}, @samp{-f} or @samp{-F} option. To specify more than one
function, use multiple options. For example, this command:

@example
gprof -e boring -f foo -f bar myprogram > gprof.output
@end example

@noindent
lists in the call graph all functions that were reached from either
@code{foo} or @code{bar} and were not reachable from @code{boring}.

@node Symspecs,,Deprecated Options,Invoking
@section Symspecs

Many of the output options allow functions to be included or excluded
using @dfn{symspecs} (symbol specifications), which observe the
following syntax:

@example
  filename_containing_a_dot
| funcname_not_containing_a_dot
| linenumber
| ( [ any_filename ] `:' ( any_funcname | linenumber ) )
@end example

Here are some sample symspecs:

@table @code
@item main.c
Selects everything in file @file{main.c}---the
dot in the string tells @code{gprof} to interpret
the string as a filename, rather than as
a function name. To select a file whose
name does not contain a dot, a trailing colon
should be specified. For example, @samp{odd:} is
interpreted as the file named @file{odd}.

@item main
Selects all functions named @code{main}. Notice
that there may be multiple instances of the
same function name because some of the
definitions may be local (i.e., static).
Unless a function name is unique in a program,
you must use the colon notation explained
below to specify a function from a specific
source file. Sometimes, function names contain
dots. In such cases, it is necessary to
add a leading colon to the name. For example,
@samp{:.mul} selects function @code{.mul}.

@item main.c:main
Selects function @code{main} in file @file{main.c}.

@item main.c:134
Selects line 134 in file @file{main.c}.
@end table

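For example, the following invocations combine symspecs with the output
options described earlier; the executable name is hypothetical, and the
attached-argument form shown here is one way of supplying the optional
symspec:

@smallexample
gprof -pmain.c myprog gmon.out
gprof -qmain.c:main myprog gmon.out
@end smallexample

The first prints a flat profile restricted to the functions of
@file{main.c}; the second prints a call graph only for @code{main} in
@file{main.c} and its children.
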
@node Output
@chapter Interpreting @code{gprof}'s Output

@code{gprof} can produce several different output styles, the
most important of which are described below. The simplest output
styles (file information, execution count, and function and file ordering)
are not described here, but are documented with the respective options
that trigger them.
@xref{Output Options}.

@menu
* Flat Profile::        The flat profile shows how much time was spent
                        executing directly in each function.
* Call Graph::          The call graph shows which functions called which
                        others, and how much time each function used
                        when its subroutine calls are included.
* Line-by-line::        @code{gprof} can analyze individual source code lines.
* Annotated Source::    The annotated source listing displays source code
                        labeled with execution counts.
@end menu

@node Flat Profile,Call Graph,,Output
@section The Flat Profile
@cindex flat profile

The @dfn{flat profile} shows the total amount of time your program
spent executing each function. Unless the @samp{-z} option is given,
functions with no apparent time spent in them, and no apparent calls
to them, are not mentioned. Note that if a function was not compiled
for profiling, and didn't run long enough to show up on the program
counter histogram, it will be indistinguishable from a function that
was never called.

This is part of a flat profile for a small program:

@smallexample
@group
Flat profile:

Each sample counts as 0.01 seconds.
  %   cumulative   self              self     total
 time   seconds   seconds    calls  ms/call  ms/call  name
 33.34      0.02     0.02     7208     0.00     0.00  open
 16.67      0.03     0.01      244     0.04     0.12  offtime
 16.67      0.04     0.01        8     1.25     1.25  memccpy
 16.67      0.05     0.01        7     1.43     1.43  write
 16.67      0.06     0.01                             mcount
  0.00      0.06     0.00      236     0.00     0.00  tzset
  0.00      0.06     0.00      192     0.00     0.00  tolower
  0.00      0.06     0.00       47     0.00     0.00  strlen
  0.00      0.06     0.00       45     0.00     0.00  strchr
  0.00      0.06     0.00        1     0.00    50.00  main
  0.00      0.06     0.00        1     0.00     0.00  memcpy
  0.00      0.06     0.00        1     0.00    10.11  print
  0.00      0.06     0.00        1     0.00    50.00  report
@dots{}
@end group
@end smallexample

@noindent
The functions are sorted first by decreasing run-time spent in them,
then by decreasing number of calls, then alphabetically by name. The
functions @samp{mcount} and @samp{profil} are part of the profiling
apparatus and appear in every flat profile; their time gives a measure of
the amount of overhead due to profiling.

Just before the column headers, a statement appears indicating
how much time each sample counted as.
This @dfn{sampling period} estimates the margin of error in each of the time
figures. A time figure that is not much larger than this is not
reliable. In this example, each sample counted as 0.01 seconds,
suggesting a 100 Hz sampling rate.
The program's total execution time was 0.06
seconds, as indicated by the @samp{cumulative seconds} field. Since
each sample counted for 0.01 seconds, this means only six samples
were taken during the run. Two of the samples occurred while the
program was in the @samp{open} function, as indicated by the
@samp{self seconds} field. Each of the other four samples
occurred, one each, in @samp{offtime}, @samp{memccpy}, @samp{write},
and @samp{mcount}.
Since only six samples were taken, none of these values can
be regarded as particularly reliable.
In another run,
the @samp{self seconds} field for
@samp{mcount} might well be @samp{0.00} or @samp{0.02}.
@xref{Sampling Error}, for a complete discussion.

The remaining functions in the listing (those whose
@samp{self seconds} field is @samp{0.00}) didn't appear
in the histogram samples at all. However, the call graph
indicated that they were called, so they are listed,
sorted in decreasing order by the @samp{calls} field.
Clearly some time was spent executing these functions,
but the paucity of histogram samples prevents any
determination of how much time each took.

Here is what the fields in each line mean:

@table @code
@item % time
This is the percentage of the total execution time your program spent
in this function. These should all add up to 100%.

@item cumulative seconds
This is the cumulative total number of seconds the computer spent
executing this function, plus the time spent in all the functions
above this one in this table.

@item self seconds
This is the number of seconds accounted for by this function alone.
The flat profile listing is sorted first by this number.

@item calls
This is the total number of times the function was called. If the
function was never called, or the number of times it was called cannot
be determined (probably because the function was not compiled with
profiling enabled), the @dfn{calls} field is blank.

@item self ms/call
This represents the average number of milliseconds spent in this
function per call, if this function is profiled. Otherwise, this field
is blank for this function.

@item total ms/call
This represents the average number of milliseconds spent in this
function and its descendants per call, if this function is profiled.
Otherwise, this field is blank for this function.
This is the only field in the flat profile that uses call graph analysis.

@item name
This is the name of the function. The flat profile is sorted by this
field alphabetically after the @dfn{self seconds} and @dfn{calls}
fields are sorted.
@end table

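As a cross-check on these definitions, consider the sample flat profile
above: @samp{offtime} shows 0.01 seconds of self time spread over 244
calls, or roughly 0.04 milliseconds per call, which is the value printed
in its @samp{self ms/call} column; likewise @samp{main}'s single call,
with 0.05 seconds spent in it and its descendants, accounts for the
50.00 shown under @samp{total ms/call}.
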
@node Call Graph,Line-by-line,Flat Profile,Output
@section The Call Graph
@cindex call graph

The @dfn{call graph} shows how much time was spent in each function
and its children. From this information, you can find functions that,
while they themselves may not have used much time, called other
functions that did use unusual amounts of time.

Here is a sample call graph from a small program. This profile came from the
same @code{gprof} run as the flat profile example in the previous
section.

@smallexample
@group
granularity: each sample hit covers 2 byte(s) for 20.00% of 0.05 seconds

index % time    self  children    called     name
                                                 <spontaneous>
[1]    100.0    0.00    0.05                 start [1]
                0.00    0.05       1/1           main [2]
                0.00    0.00       1/2           on_exit [28]
                0.00    0.00       1/1           exit [59]
-----------------------------------------------
                0.00    0.05       1/1           start [1]
[2]    100.0    0.00    0.05       1         main [2]
                0.00    0.05       1/1           report [3]
-----------------------------------------------
                0.00    0.05       1/1           main [2]
[3]    100.0    0.00    0.05       1         report [3]
                0.00    0.03       8/8           timelocal [6]
                0.00    0.01       1/1           print [9]
                0.00    0.01       9/9           fgets [12]
                0.00    0.00      12/34          strncmp <cycle 1> [40]
                0.00    0.00       8/8           lookup [20]
                0.00    0.00       1/1           fopen [21]
                0.00    0.00       8/8           chewtime [24]
                0.00    0.00       8/16          skipspace [44]
-----------------------------------------------
[4]     59.8    0.01    0.02       8+472     <cycle 2 as a whole> [4]
                0.01    0.02     244+260         offtime <cycle 2> [7]
                0.00    0.00     236+1           tzset <cycle 2> [26]
-----------------------------------------------
@end group
@end smallexample

The lines full of dashes divide this table into @dfn{entries}, one for each
function. Each entry has one or more lines.

In each entry, the primary line is the one that starts with an index number
in square brackets. The end of this line says which function the entry is
for. The preceding lines in the entry describe the callers of this
function and the following lines describe its subroutines (also called
@dfn{children} when we speak of the call graph).

The entries are sorted by time spent in the function and its subroutines.

The internal profiling function @code{mcount} (@pxref{Flat Profile})
is never mentioned in the call graph.

@menu
* Primary::       Details of the primary line's contents.
* Callers::       Details of caller-lines' contents.
* Subroutines::   Details of subroutine-lines' contents.
* Cycles::        When there are cycles of recursion,
                  such as @code{a} calls @code{b} calls @code{a}@dots{}
@end menu

@node Primary
@subsection The Primary Line

The @dfn{primary line} in a call graph entry is the line that
describes the function which the entry is about and gives the overall
statistics for this function.

For reference, we repeat the primary line from the entry for function
@code{report} in our main example, together with the heading line that
shows the names of the fields:

@smallexample
@group
index % time    self  children    called     name
@dots{}
[3]    100.0    0.00    0.05       1         report [3]
@end group
@end smallexample

Here is what the fields in the primary line mean:

@table @code
@item index
Entries are numbered with consecutive integers. Each function
therefore has an index number, which appears at the beginning of its
primary line.

Each cross-reference to a function, as a caller or subroutine of
another, gives its index number as well as its name. The index number
guides you if you wish to look for the entry for that function.

@item % time
This is the percentage of the total time that was spent in this
function, including time spent in subroutines called from this
function.

The time spent in this function is counted again for the callers of
this function. Therefore, adding up these percentages is meaningless.

@item self
This is the total amount of time spent in this function. This
should be identical to the number printed in the @code{seconds} field
for this function in the flat profile.

@item children
This is the total amount of time spent in the subroutine calls made by
this function. This should be equal to the sum of all the @code{self}
and @code{children} entries of the children listed directly below this
function.

@item called
This is the number of times the function was called.

If the function called itself recursively, there are two numbers,
separated by a @samp{+}. The first number counts non-recursive calls,
and the second counts recursive calls.

In the example above, the function @code{report} was called once from
@code{main}.

@item name
This is the name of the current function. The index number is
repeated after it.

If the function is part of a cycle of recursion, the cycle number is
printed between the function's name and the index number
(@pxref{Cycles}). For example, if function @code{gnurr} is part of
cycle number one, and has index number twelve, its primary line would
end like this:

@example
gnurr <cycle 1> [12]
@end example
@end table

@node Callers, Subroutines, Primary, Call Graph
@subsection Lines for a Function's Callers

A function's entry has a line for each function it was called by.
These lines' fields correspond to the fields of the primary line, but
their meanings are different because of the difference in context.

For reference, we repeat two lines from the entry for the function
@code{report}, the primary line and one caller-line preceding it, together
with the heading line that shows the names of the fields:

@smallexample
index % time    self  children    called     name
@dots{}
                0.00    0.05       1/1           main [2]
[3]    100.0    0.00    0.05       1         report [3]
@end smallexample

Here are the meanings of the fields in the caller-line for @code{report}
called from @code{main}:

@table @code
@item self
An estimate of the amount of time spent in @code{report} itself when it was
called from @code{main}.

@item children
An estimate of the amount of time spent in subroutines of @code{report}
when @code{report} was called from @code{main}.

The sum of the @code{self} and @code{children} fields is an estimate
of the amount of time spent within calls to @code{report} from @code{main}.

@item called
Two numbers: the number of times @code{report} was called from @code{main},
followed by the total number of nonrecursive calls to @code{report} from
all its callers.

@item name and index number
The name of the caller of @code{report} to which this line applies,
followed by the caller's index number.

Not all functions have entries in the call graph; some
options to @code{gprof} request the omission of certain functions.
When a caller has no entry of its own, it still has caller-lines
in the entries of the functions it calls.

If the caller is part of a recursion cycle, the cycle number is
printed between the name and the index number.
@end table

If the identity of the callers of a function cannot be determined, a
dummy caller-line is printed which has @samp{<spontaneous>} as the
``caller's name'' and all other fields blank. This can happen for
signal handlers.
@c What if some calls have determinable callers' names but not all?
@c FIXME - still relevant?

@node Subroutines, Cycles, Callers, Call Graph
@subsection Lines for a Function's Subroutines

A function's entry has a line for each of its subroutines---in other
words, a line for each other function that it called. These lines'
fields correspond to the fields of the primary line, but their meanings
are different because of the difference in context.

For reference, we repeat two lines from the entry for the function
@code{main}, the primary line and a line for a subroutine, together
with the heading line that shows the names of the fields:

@smallexample
index % time    self  children    called     name
@dots{}
[2]    100.0    0.00    0.05       1         main [2]
                0.00    0.05       1/1           report [3]
@end smallexample

Here are the meanings of the fields in the subroutine-line for @code{main}
calling @code{report}:

@table @code
@item self
An estimate of the amount of time spent directly within @code{report}
when @code{report} was called from @code{main}.

@item children
An estimate of the amount of time spent in subroutines of @code{report}
when @code{report} was called from @code{main}.

The sum of the @code{self} and @code{children} fields is an estimate
of the total time spent in calls to @code{report} from @code{main}.

@item called
Two numbers: the number of calls to @code{report} from @code{main},
followed by the total number of nonrecursive calls to @code{report}.
This ratio is used to determine how much of @code{report}'s @code{self}
and @code{children} time gets credited to @code{main}.
@xref{Assumptions}.

@item name
The name of the subroutine of @code{main} to which this line applies,
followed by the subroutine's index number.

If the caller is part of a recursion cycle, the cycle number is
printed between the name and the index number.
@end table

@node Cycles,, Subroutines, Call Graph
@subsection How Mutually Recursive Functions Are Described
@cindex cycle
@cindex recursion cycle

The graph may be complicated by the presence of @dfn{cycles of
recursion} in the call graph. A cycle exists if a function calls
another function that (directly or indirectly) calls (or appears to
call) the original function. For example: if @code{a} calls @code{b},
and @code{b} calls @code{a}, then @code{a} and @code{b} form a cycle.

Whenever there are call paths both ways between a pair of functions, they
belong to the same cycle. If @code{a} and @code{b} call each other and
@code{b} and @code{c} call each other, all three make one cycle. Note that
even if @code{b} only calls @code{a} if it was not called from @code{a},
@code{gprof} cannot determine this, so @code{a} and @code{b} are still
considered a cycle.

The cycles are numbered with consecutive integers. When a function
belongs to a cycle, each time the function name appears in the call graph
it is followed by @samp{<cycle @var{number}>}.

The reason cycles matter is that they make the time values in the call
graph paradoxical. The ``time spent in children'' of @code{a} should
include the time spent in its subroutine @code{b} and in @code{b}'s
subroutines---but one of @code{b}'s subroutines is @code{a}! How much of
@code{a}'s time should be included in the children of @code{a}, when
@code{a} is indirectly recursive?

The way @code{gprof} resolves this paradox is by creating a single entry
for the cycle as a whole. The primary line of this entry describes the
total time spent directly in the functions of the cycle. The
``subroutines'' of the cycle are the individual functions of the cycle, and
all other functions that were called directly by them. The ``callers'' of
the cycle are the functions, outside the cycle, that called functions in
the cycle.

Here is an example portion of a call graph which shows a cycle containing
functions @code{a} and @code{b}. The cycle was entered by a call to
@code{a} from @code{main}; both @code{a} and @code{b} called @code{c}.

@smallexample
index  % time    self  children  called     name
----------------------------------------
                 1.77        0     1/1        main [2]
[3]     91.71    1.77        0     1+5    <cycle 1 as a whole> [3]
                 1.02        0       3        b <cycle 1> [4]
                 0.75        0       2        a <cycle 1> [5]
----------------------------------------
                                   3          a <cycle 1> [5]
[4]     52.85    1.02        0     0      b <cycle 1> [4]
                                   2          a <cycle 1> [5]
                    0        0     3/6        c [6]
----------------------------------------
                 1.77        0     1/1        main [2]
                                   2          b <cycle 1> [4]
[5]     38.86    0.75        0     1      a <cycle 1> [5]
                                   3          b <cycle 1> [4]
                    0        0     3/6        c [6]
----------------------------------------
@end smallexample

@noindent
(The entire call graph for this program contains in addition an entry for
@code{main}, which calls @code{a}, and an entry for @code{c}, with callers
@code{a} and @code{b}.)

@smallexample
index  % time    self  children  called     name
                                              <spontaneous>
[1]    100.00       0      1.93    0      start [1]
                 0.16      1.77    1/1        main [2]
----------------------------------------
                 0.16      1.77    1/1        start [1]
[2]    100.00    0.16      1.77    1      main [2]
                 1.77         0    1/1        a <cycle 1> [5]
----------------------------------------
                 1.77         0    1/1        main [2]
[3]     91.71    1.77         0    1+5    <cycle 1 as a whole> [3]
                 1.02         0      3        b <cycle 1> [4]
                 0.75         0      2        a <cycle 1> [5]
                    0         0    6/6        c [6]
----------------------------------------
                                   3          a <cycle 1> [5]
[4]     52.85    1.02         0    0      b <cycle 1> [4]
                                   2          a <cycle 1> [5]
                    0         0    3/6        c [6]
----------------------------------------
                 1.77         0    1/1        main [2]
                                   2          b <cycle 1> [4]
[5]     38.86    0.75         0    1      a <cycle 1> [5]
                                   3          b <cycle 1> [4]
                    0         0    3/6        c [6]
----------------------------------------
                    0         0    3/6        b <cycle 1> [4]
                    0         0    3/6        a <cycle 1> [5]
[6]      0.00       0         0    6      c [6]
----------------------------------------
@end smallexample

The @code{self} field of the cycle's primary line is the total time
spent in all the functions of the cycle. It equals the sum of the
@code{self} fields for the individual functions in the cycle, found
in the entry in the subroutine lines for these functions.

The @code{children} fields of the cycle's primary line and subroutine lines
count only subroutines outside the cycle. Even though @code{a} calls
@code{b}, the time spent in those calls to @code{b} is not counted in
@code{a}'s @code{children} time. Thus, we do not encounter the problem of
what to do when the time in those calls to @code{b} includes indirect
recursive calls back to @code{a}.

The @code{children} field of a caller-line in the cycle's entry estimates
the amount of time spent @emph{in the whole cycle}, and its other
subroutines, on the times when that caller called a function in the cycle.

The @code{calls} field in the primary line for the cycle has two numbers:
first, the number of times functions in the cycle were called by functions
outside the cycle; second, the number of times they were called by
functions in the cycle (including times when a function in the cycle calls
itself). This is a generalization of the usual split into nonrecursive and
recursive calls.

The @code{calls} field of a subroutine-line for a cycle member in the
cycle's entry says how many times that function was called from functions in
the cycle. The total of all these is the second number in the primary line's
@code{calls} field.

In the individual entry for a function in a cycle, the other functions in
the same cycle can appear as subroutines and as callers. These lines show
how many times each function in the cycle called or was called from each other
function in the cycle. The @code{self} and @code{children} fields in these
lines are blank because of the difficulty of defining meanings for them
when recursion is going on.

@node Line-by-line,Annotated Source,Call Graph,Output
@section Line-by-line Profiling

@code{gprof}'s @samp{-l} option causes the program to perform
@dfn{line-by-line} profiling. In this mode, histogram
samples are assigned not to functions, but to individual
lines of source code. The program usually must be compiled
with a @samp{-g} option, in addition to @samp{-pg}, in order
to generate debugging symbols for tracking source code lines.

The flat profile is the most useful output table
in line-by-line mode.
The call graph isn't as useful as normal, since
the current version of @code{gprof} does not propagate
call graph arcs from source code lines to the enclosing function.
The call graph does, however, show each line of code
that called each function, along with a count.

Here is a section of @code{gprof}'s output, without line-by-line profiling.
Note that @code{ct_init} accounted for four histogram hits, and
13327 calls to @code{init_block}.

@smallexample
Flat profile:

Each sample counts as 0.01 seconds.
  %   cumulative   self              self     total
 time   seconds   seconds    calls  us/call  us/call  name
 30.77      0.13     0.04     6335     6.31     6.31  ct_init


                     Call graph (explanation follows)


granularity: each sample hit covers 4 byte(s) for 7.69% of 0.13 seconds

index % time    self  children    called     name

                0.00    0.00       1/13496       name_too_long
                0.00    0.00      40/13496       deflate
                0.00    0.00     128/13496       deflate_fast
                0.00    0.00   13327/13496       ct_init
[7]      0.0    0.00    0.00   13496         init_block

@end smallexample

Now let's look at some of @code{gprof}'s output from the same program run,
this time with line-by-line profiling enabled. Note that @code{ct_init}'s
four histogram hits are broken down into four lines of source code: one hit
occurred on each of lines 349, 351, 382 and 385. In the call graph,
note how
@code{ct_init}'s 13327 calls to @code{init_block} are broken down
into one call from line 396, 3071 calls from line 384, 3730 calls
from line 385, and 6525 calls from line 387.

@smallexample
Flat profile:

Each sample counts as 0.01 seconds.
  %   cumulative   self
 time   seconds   seconds    calls  name
  7.69      0.10     0.01           ct_init (trees.c:349)
  7.69      0.11     0.01           ct_init (trees.c:351)
  7.69      0.12     0.01           ct_init (trees.c:382)
  7.69      0.13     0.01           ct_init (trees.c:385)


                     Call graph (explanation follows)


granularity: each sample hit covers 4 byte(s) for 7.69% of 0.13 seconds

  % time    self  children    called     name

    0.00    0.00       1/13496       name_too_long (gzip.c:1440)
    0.00    0.00       1/13496       deflate (deflate.c:763)
    0.00    0.00       1/13496       ct_init (trees.c:396)
    0.00    0.00       2/13496       deflate (deflate.c:727)
    0.00    0.00       4/13496       deflate (deflate.c:686)
    0.00    0.00       5/13496       deflate (deflate.c:675)
    0.00    0.00      12/13496       deflate (deflate.c:679)
    0.00    0.00      16/13496       deflate (deflate.c:730)
    0.00    0.00     128/13496       deflate_fast (deflate.c:654)
    0.00    0.00    3071/13496       ct_init (trees.c:384)
    0.00    0.00    3730/13496       ct_init (trees.c:385)
    0.00    0.00    6525/13496       ct_init (trees.c:387)
[6]      0.0    0.00    0.00   13496         init_block (trees.c:408)

@end smallexample


@node Annotated Source,,Line-by-line,Output
@section The Annotated Source Listing

@code{gprof}'s @samp{-A} option triggers an annotated source listing,
which lists the program's source code, each function labeled with the
number of times it was called. You may also need to specify the
@samp{-I} option, if @code{gprof} can't find the source code files.

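For instance, assuming the hypothetical executable @file{myprog} and
sources kept in a sibling @file{../src} directory, an annotated source
listing could be produced with:

@smallexample
gprof -A -x -I ../src myprog gmon.out > myprog.ann
@end smallexample
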
1323Compiling with @samp{gcc @dots{} -g -pg -a} augments your program
1324with basic-block counting code, in addition to function counting code.
1325This enables @code{gprof} to determine how many times each line
1326of code was exeucted.
1327For example, consider the following function, taken from gzip,
1328with line numbers added:
1329
1330@smallexample
1331 1 ulg updcrc(s, n)
1332 2 uch *s;
1333 3 unsigned n;
1334 4 @{
1335 5 register ulg c;
1336 6
1337 7 static ulg crc = (ulg)0xffffffffL;
1338 8
1339 9 if (s == NULL) @{
134010 c = 0xffffffffL;
134111 @} else @{
134212 c = crc;
134313 if (n) do @{
134414 c = crc_32_tab[...];
134515 @} while (--n);
134616 @}
134717 crc = c;
134818 return c ^ 0xffffffffL;
134919 @}
1350
1351@end smallexample
1352
1353@code{updcrc} has at least five basic-blocks.
1354One is the function itself. The
1355@code{if} statement on line 9 generates two more basic-blocks, one
1356for each branch of the @code{if}. A fourth basic-block results from
1357the @code{if} on line 13, and the contents of the @code{do} loop form
1358the fifth basic-block. The compiler may also generate additional
1359basic-blocks to handle various special cases.
1360
1361A program augmented for basic-block counting can be analyzed with
1362@code{gprof -l -A}. I also suggest use of the @samp{-x} option,
1363which ensures that each line of code is labeled at least once.
1364Here is @code{updcrc}'s
1365annotated source listing for a sample @code{gzip} run:
1366
1367@smallexample
1368 ulg updcrc(s, n)
1369 uch *s;
1370 unsigned n;
1371 2 ->@{
1372 register ulg c;
1373
1374 static ulg crc = (ulg)0xffffffffL;
1375
1376 2 -> if (s == NULL) @{
1377 1 -> c = 0xffffffffL;
1378 1 -> @} else @{
1379 1 -> c = crc;
1380 1 -> if (n) do @{
1381 26312 -> c = crc_32_tab[...];
138226312,1,26311 -> @} while (--n);
1383 @}
1384 2 -> crc = c;
1385 2 -> return c ^ 0xffffffffL;
1386 2 ->@}
1387@end smallexample
1388
1389In this example, the function was called twice, passing once through
1390each branch of the @code{if} statement. The body of the @code{do}
1391loop was executed a total of 26312 times. Note how the @code{while}
1392statement is annotated. It began execution 26312 times, once for
1393each iteration through the loop. One of those times (the last time)
1394it exited, while it branched back to the beginning of the loop 26311 times.
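
For reference, a listing like the one above could be produced with a
command sequence along the following lines.  The file names here are
those of the @code{gzip} example and are otherwise only illustrative.

@example
gcc -g -pg -a -o gzip gzip.c trees.c @dots{}
./gzip some-test-file
gprof -l -A -x gzip gmon.out > annotated-listing
@end example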
1395
1396@node Inaccuracy
1397@chapter Inaccuracy of @code{gprof} Output
1398
1399@menu
1400* Sampling Error:: Statistical margins of error
1401* Assumptions:: Estimating children times
1402@end menu
1403
1404@node Sampling Error,Assumptions,,Inaccuracy
1405@section Statistical Sampling Error
1406
1407The run-time figures that @code{gprof} gives you are based on a sampling
1408process, so they are subject to statistical inaccuracy. If a function runs
1409for only a small amount of time, so that on the average the sampling process
1410ought to catch that function in the act only once, there is a pretty good
1411chance it will actually find that function zero times, or twice.
1412
1413By contrast, the number-of-calls and basic-block figures
1414are derived by counting, not
1415sampling. They are completely accurate and will not vary from run to run
1416if your program is deterministic.
1417
1418The @dfn{sampling period} that is printed at the beginning of the flat
1419profile says how often samples are taken. The rule of thumb is that a
1420run-time figure is accurate if it is considerably bigger than the sampling
1421period.
1422
1423The actual amount of error can be predicted.
1424For @var{n} samples, the @emph{expected} error
1425is the square-root of @var{n}. For example,
1426if the sampling period is 0.01 seconds and @code{foo}'s run-time is 1 second,
1427@var{n} is 100 samples (1 second/0.01 seconds), sqrt(@var{n}) is 10 samples, so
1428the expected error in @code{foo}'s run-time is 0.1 seconds (10*0.01 seconds),
1429or ten percent of the observed value.
1430Again, if the sampling period is 0.01 seconds and @code{bar}'s run-time is
1431100 seconds, @var{n} is 10000 samples, sqrt(@var{n}) is 100 samples, so
1432the expected error in @code{bar}'s run-time is 1 second,
1433or one percent of the observed value.
1434It is likely to
1435vary this much @emph{on the average} from one profiling run to the next.
1436(@emph{Sometimes} it will vary more.)
1437
1438This does not mean that a small run-time figure is devoid of information.
1439If the program's @emph{total} run-time is large, a small run-time for one
1440function does tell you that that function used an insignificant fraction of
1441the whole program's time. Usually this means it is not worth optimizing.
1442
1443One way to get more accuracy is to give your program more (but similar)
1444input data so it will take longer. Another way is to combine the data from
1445several runs, using the @samp{-s} option of @code{gprof}. Here is how:
1446
1447@enumerate
1448@item
1449Run your program once.
1450
1451@item
1452Issue the command @samp{mv gmon.out gmon.sum}.
1453
1454@item
1455Run your program again, the same as before.
1456
1457@item
1458Merge the new data in @file{gmon.out} into @file{gmon.sum} with this command:
1459
1460@example
1461gprof -s @var{executable-file} gmon.out gmon.sum
1462@end example
1463
1464@item
1465Repeat the last two steps as often as you wish.
1466
1467@item
1468Analyze the cumulative data using this command:
1469
1470@example
1471gprof @var{executable-file} gmon.sum > @var{output-file}
1472@end example
1473@end enumerate
1474
1475@node Assumptions,,Sampling Error,Inaccuracy
1476@section Estimating @code{children} Times
1477
1478Some of the figures in the call graph are estimates---for example, the
1479@code{children} time values and all the time figures in caller and
1480subroutine lines.
1481
1482There is no direct information about these measurements in the profile
1483data itself. Instead, @code{gprof} estimates them by making an assumption
1484about your program that might or might not be true.
1485
1486The assumption made is that the average time spent in each call to any
1487function @code{foo} is not correlated with who called @code{foo}. If
1488@code{foo} used 5 seconds in all, and 2/5 of the calls to @code{foo} came
1489from @code{a}, then @code{foo} contributes 2 seconds to @code{a}'s
1490@code{children} time, by assumption.
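
Written out as arithmetic, the estimate in this example is simply:

@example
time charged to a's children = time in foo * (calls from a / all calls)
                             = 5 seconds * 2/5
                             = 2 seconds
@end example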
1491
1492This assumption is usually true enough, but for some programs it is far
1493from true. Suppose that @code{foo} returns very quickly when its argument
1494is zero; suppose that @code{a} always passes zero as an argument, while
1495other callers of @code{foo} pass other arguments. In this program, all the
1496time spent in @code{foo} is in the calls from callers other than @code{a}.
1497But @code{gprof} has no way of knowing this; it will blindly and
1498incorrectly charge 2 seconds of time in @code{foo} to the children of
1499@code{a}.
1500
1501@c FIXME - has this been fixed?
1502We hope some day to put more complete data into @file{gmon.out}, so that
1503this assumption is no longer needed, if we can figure out how. For the
1504nonce, the estimated figures are usually more useful than misleading.
1505
1506@node How do I?
1507@chapter Answers to Common Questions
1508
1509@table @asis
1510@item How do I find which lines in my program were executed the most times?
1511
1512Compile your program with basic-block counting enabled, run it, then
1513use the following pipeline:
1514
1515@example
1516gprof -l -C @var{objfile} | sort -k 3 -n -r
1517@end example
1518
1519This listing will show you the lines in your code executed most often,
1520but not necessarily those that consumed the most time.
1521
1522@item How do I find which lines in my program called a particular function?
1523
1524Use @code{gprof -l} and look up the function in the call graph.
1525The callers will be broken down by function and line number.
1526
1527@item How do I analyze a program that runs for less than a second?
1528
1529Try using a shell script like this one:
1530
1531@example
1532for i in `seq 1 100`; do
1533 fastprog
1534 mv gmon.out gmon.out.$i
1535done
1536
1537gprof -s fastprog gmon.out.*
1538
1539gprof fastprog gmon.sum
1540@end example
1541
1542If your program is completely deterministic, all the call counts
1543will be simple multiples of 100 (i.e. a function called once in
1544each run will appear with a call count of 100).
1545
1546@end table
1547
1548@node Incompatibilities
1549@chapter Incompatibilities with Unix @code{gprof}
1550
1551@sc{gnu} @code{gprof} and Berkeley Unix @code{gprof} use the same data
1552file @file{gmon.out}, and provide essentially the same information. But
1553there are a few differences.
1554
1555@itemize @bullet
1556@item
1557@sc{gnu} @code{gprof} uses a new, generalized file format with support
1558for basic-block execution counts and non-realtime histograms. A magic
1559cookie and version number allow @code{gprof} to easily identify
1560new-style files. Old BSD-style files can still be read.
1561@xref{File Format}.
1562
1563@item
1564For a recursive function, Unix @code{gprof} lists the function as a
1565parent and as a child, with a @code{calls} field that lists the number
1566of recursive calls. @sc{gnu} @code{gprof} omits these lines and puts
1567the number of recursive calls in the primary line.
1568
1569@item
1570When a function is suppressed from the call graph with @samp{-e}, @sc{gnu}
1571@code{gprof} still lists it as a subroutine of functions that call it.
1572
1573@item
1574@sc{gnu} @code{gprof} accepts the @samp{-k} option with its argument
1575in the form @samp{from/to}, instead of @samp{from to}.
1576
1577@item
1578In the annotated source listing,
1579if there are multiple basic blocks on the same line,
1580@sc{gnu} @code{gprof} prints all of their counts, separated by commas.
1581
1582@ignore - it does this now
1583@item
1584The function names printed in @sc{gnu} @code{gprof} output do not include
1585the leading underscores that are added internally to the front of all
1586C identifiers on many operating systems.
1587@end ignore
1588
1589@item
1590The blurbs, field widths, and output formats are different. @sc{gnu}
1591@code{gprof} prints blurbs after the tables, so that you can see the
1592tables without skipping the blurbs.
1593@end itemize
1594
1595@node Details
1596@chapter Details of Profiling
1597
1598@menu
1599* Implementation:: How a program collects profiling information
1600* File Format:: Format of @samp{gmon.out} files
1601* Internals:: @code{gprof}'s internal operation
1602* Debugging:: Using @code{gprof}'s @samp{-d} option
1603@end menu
1604
1605@node Implementation,File Format,,Details
1606@section Implementation of Profiling
1607
1608Profiling works by changing how every function in your program is compiled
1609so that when it is called, it will stash away some information about where
1610it was called from. From this, the profiler can figure out what function
1611called it, and can count how many times it was called. This change is made
1612by the compiler when your program is compiled with the @samp{-pg} option,
1613which causes every function to call @code{mcount}
1614(or @code{_mcount}, or @code{__mcount}, depending on the OS and compiler)
1615as one of its first operations.
1616
1617The @code{mcount} routine, included in the profiling library,
1618is responsible for recording in an in-memory call graph table
1619both its calling routine (the child) and its caller's caller (the parent). This is
1620typically done by examining the stack frame to find both
1621the address of the child, and the return address in the original parent.
1622Since this is a very machine-dependent operation, @code{mcount}
1623itself is typically a short assembly-language stub routine
1624that extracts the required
1625information, and then calls @code{__mcount_internal}
1626(a normal C function) with two arguments - @code{frompc} and @code{selfpc}.
1627@code{__mcount_internal} is responsible for maintaining
1628the in-memory call graph, which records @code{frompc}, @code{selfpc},
1629and the number of times each of these call arcs was traversed.
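
In outline, and ignoring details such as hashing and table overflow
handling, @code{__mcount_internal} behaves roughly like the following
sketch.  The structure, array, and constant names are purely
illustrative; the real profiling library is organized differently.

@smallexample
/* Illustrative sketch only: not the actual profiling library source.  */
struct arc
@{
  unsigned long frompc;   /* call site within the parent */
  unsigned long selfpc;   /* entry point of the child */
  unsigned long count;    /* times this arc was traversed */
@};

#define MAX_ARCS 4096
static struct arc arcs[MAX_ARCS];
static int n_arcs;

void
__mcount_internal (unsigned long frompc, unsigned long selfpc)
@{
  int i;

  /* Find an existing arc for this caller/callee pair; a real
     implementation uses a hash table rather than a linear scan.  */
  for (i = 0; i < n_arcs; i++)
    if (arcs[i].frompc == frompc && arcs[i].selfpc == selfpc)
      @{
        arcs[i].count++;
        return;
      @}

  /* Otherwise record a new arc, if there is room for one.  */
  if (n_arcs < MAX_ARCS)
    @{
      arcs[n_arcs].frompc = frompc;
      arcs[n_arcs].selfpc = selfpc;
      arcs[n_arcs].count = 1;
      n_arcs++;
    @}
@}
@end smallexample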
1630
1631GCC Version 2 provides a magical function (@code{__builtin_return_address}),
1632which allows a generic @code{mcount} function to extract the
1633required information from the stack frame. However, on some
1634architectures, most notably the SPARC, using this builtin can be
1635very computationally expensive, and an assembly language version
1636of @code{mcount} is used for performance reasons.
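
With that builtin, a generic C version of @code{mcount} might look
roughly like the sketch below.  This is only an illustration; as noted
above, production versions are normally short assembly-language stubs.

@smallexample
extern void __mcount_internal (unsigned long frompc, unsigned long selfpc);

void
mcount (void)
@{
  /* Return address of mcount itself: an address inside the child,
     just after the compiler-inserted call to mcount.  */
  unsigned long selfpc = (unsigned long) __builtin_return_address (0);

  /* Return address one frame further up: an address inside the parent,
     i.e. the original call site.  This requires frame pointers.  */
  unsigned long frompc = (unsigned long) __builtin_return_address (1);

  __mcount_internal (frompc, selfpc);
@}
@end smallexample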
1637
1638Number-of-calls information for library routines is collected by using a
1639special version of the C library. The programs in it are the same as in
1640the usual C library, but they were compiled with @samp{-pg}. If you
1641link your program with @samp{gcc @dots{} -pg}, it automatically uses the
1642profiling version of the library.
1643
1644Profiling also involves watching your program as it runs, and keeping a
1645histogram of where the program counter happens to be every now and then.
1646Typically the program counter is looked at around 100 times per second of
1647run time, but the exact frequency may vary from system to system.
1648
1649This is done in one of two ways. Most UNIX-like operating systems
1650provide a @code{profil()} system call, which registers a memory
1651array with the kernel, along with a scale
1652factor that determines how the program's address space maps
1653into the array.
1654Typical scaling values cause every 2 to 8 bytes of address space
1655to map into a single array slot.
1656On every tick of the system clock
1657(assuming the profiled program is running), the value of the
1658program counter is examined and the corresponding slot in
1659the memory array is incremented. Since this is done in the kernel,
1660which had to interrupt the process anyway to handle the clock
1661interrupt, very little additional system overhead is required.
1662
1663However, some operating systems, most notably Linux 2.0 (and earlier),
1664do not provide a @code{profil()} system call. On such a system,
1665arrangements are made for the kernel to periodically deliver
1666a signal to the process (typically via @code{setitimer()}),
1667which then performs the same operation of examining the
1668program counter and incrementing a slot in the memory array.
1669Since this method requires a signal to be delivered to
1670user space every time a sample is taken, it incurs considerably
1671more overhead than kernel-based profiling. Also, due to the
1672added delay required to deliver the signal, this method is
1673less accurate as well.
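
As a rough illustration of this second approach, the profiling startup
code might do something like the following.  This is only a sketch: the
histogram parameters are made up, and extracting the interrupted
program counter inside a signal handler is machine-dependent, so it is
represented here by a hypothetical helper, @code{get_sample_pc}.

@smallexample
#include <signal.h>
#include <sys/time.h>

#define NBINS 32768
#define BYTES_PER_BIN 4   /* illustrative scale: 4 bytes of text per bin */

static unsigned short hist_bins[NBINS];
static unsigned long hist_lowpc;

extern unsigned long get_sample_pc (void);  /* hypothetical helper */

static void
sample_handler (int sig)
@{
  unsigned long pc = get_sample_pc ();
  unsigned long slot = (pc - hist_lowpc) / BYTES_PER_BIN;

  /* Count the sample in the slot covering the sampled address.  */
  if (pc >= hist_lowpc && slot < NBINS)
    hist_bins[slot]++;
@}

static void
start_sampling (void)
@{
  struct itimerval interval;

  signal (SIGPROF, sample_handler);
  interval.it_interval.tv_sec = 0;
  interval.it_interval.tv_usec = 10000;   /* roughly 100 samples per second */
  interval.it_value = interval.it_interval;
  setitimer (ITIMER_PROF, &interval, (struct itimerval *) 0);
@}
@end smallexample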
1674
1675A special startup routine allocates memory for the histogram and
1676either calls @code{profil()} or sets up
1677a clock signal handler.
1678This routine (@code{monstartup}) can be invoked in several ways.
1679On Linux systems, a special profiling startup file @code{gcrt0.o},
1680which invokes @code{monstartup} before @code{main},
1681is used instead of the default @code{crt0.o}.
1682Use of this special startup file is one of the effects
1683of using @samp{gcc @dots{} -pg} to link.
1684On SPARC systems, no special startup files are used.
1685Rather, the @code{mcount} routine, when it is invoked for
1686the first time (typically when @code{main} is called),
1687calls @code{monstartup}.
1688
1689If the compiler's @samp{-a} option was used, basic-block counting
1690is also enabled. Each object file is then compiled with a static array
1691of counts, initially zero.
1692In the executable code, every time a new basic-block begins
1693(i.e. when an @code{if} statement appears), an extra instruction
1694is inserted to increment the corresponding count in the array.
1695At compile time, a paired array was constructed that recorded
1696the starting address of each basic-block. Taken together,
1697the two arrays record the starting address of every basic-block,
1698along with the number of times it was executed.
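
Conceptually, the effect of @samp{-a} on a small function is something
like the hand-instrumented version below.  The counter array name is
made up for this illustration; the structures the compiler actually
emits are different, and the paired address array is not shown.

@smallexample
static long bb_counts[3];   /* one counter per basic-block, initially zero */

int
absolute_value (int x)
@{
  bb_counts[0]++;           /* entry basic-block */
  if (x < 0)
    @{
      bb_counts[1]++;       /* basic-block for the branch taken when x < 0 */
      x = -x;
    @}
  bb_counts[2]++;           /* basic-block following the if */
  return x;
@}
@end smallexample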
1699
1700The profiling library also includes a function (@code{mcleanup}) which is
1701typically registered using @code{atexit()} to be called as the
1702program exits, and is responsible for writing the file @file{gmon.out}.
1703Profiling is turned off, various headers are output, and the histogram
1704is written, followed by the call-graph arcs and the basic-block counts.
1705
1706The output from @code{gprof} gives no indication of parts of your program that
1707are limited by I/O or swapping bandwidth. This is because samples of the
1708program counter are taken at fixed intervals of the program's run time.
1709Therefore, the
1710time measurements in @code{gprof} output say nothing about time that your
1711program was not running. For example, a part of the program that creates
1712so much data that it cannot all fit in physical memory at once may run very
1713slowly due to thrashing, but @code{gprof} will say it uses little time. On
1714the other hand, sampling by run time has the advantage that the amount of
1715load due to other users won't directly affect the output you get.
1716
1717@node File Format,Internals,Implementation,Details
1718@section Profiling Data File Format
1719
1720The old BSD-derived file format used for profile data does not contain a
1721magic cookie that allows one to check whether a data file really is a
1722@code{gprof} file. Furthermore, it does not provide a version number, thus
1723rendering changes to the file format almost impossible. @sc{gnu} @code{gprof}
1724uses a new file format that provides these features. For backward
1725compatibility, @sc{gnu} @code{gprof} continues to support the old BSD-derived
1726format, but not all features are supported with it. For example,
1727basic-block execution counts cannot be accommodated by the old file
1728format.
1729
1730The new file format is defined in header file @file{gmon_out.h}. It
1731consists of a header containing the magic cookie and a version number,
1732as well as some spare bytes available for future extensions. All data
1733in a profile data file is in the native format of the host on which
1734the profile was collected. @sc{gnu} @code{gprof} adapts automatically to the
1735byte-order in use.
1736
1737In the new file format, the header is followed by a sequence of
1738records. Currently, there are three different record types: histogram
1739records, call-graph arc records, and basic-block execution count
1740records. Each file can contain any number of each record type. When
1741reading a file, @sc{gnu} @code{gprof} will ensure records of the same type are
1742compatible with each other and compute the union of all records. For
1743example, for basic-block execution counts, the union is simply the sum
1744of all execution counts for each basic-block.
1745
1746@subsection Histogram Records
1747
1748Histogram records consist of a header that is followed by an array of
1749bins. The header contains the text-segment range that the histogram
1750spans, the size of the histogram in bytes (unlike in the old BSD
1751format, this does not include the size of the header), the rate of the
1752profiling clock, and the physical dimension that the bin counts
1753represent after being scaled by the profiling clock rate. The
1754physical dimension is specified in two parts: a long name of up to 15
1755characters and a single character abbreviation. For example, a
1756histogram representing real-time would specify the long name as
1757"seconds" and the abbreviation as "s". This feature is useful for
1758architectures that support performance monitor hardware (which,
1759fortunately, is becoming increasingly common). For example, under DEC
1760OSF/1, the "uprofile" command can be used to produce a histogram of,
1761say, instruction cache misses. In this case, the dimension in the
1762histogram header could be set to "i-cache misses" and the abbreviation
1763could be set to "1" (because it is simply a count, not a physical
1764dimension). Also, the profiling rate would have to be set to 1 in
1765this case.
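
A conceptual sketch of the fields in a histogram record header, based
on the description above, is shown below.  The field names are
illustrative; @file{gmon_out.h} remains the authoritative definition.

@smallexample
struct histogram_header_sketch
@{
  char low_pc[sizeof (char *)];   /* start of the text range covered */
  char high_pc[sizeof (char *)];  /* end of the text range covered */
  char hist_size[4];              /* size of the histogram, excluding this header */
  char prof_rate[4];              /* rate of the profiling clock */
  char dimen[15];                 /* physical dimension, e.g. "seconds" */
  char dimen_abbrev;              /* one-character abbreviation, e.g. 's' */
@};
@end smallexample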
1766
1767Histogram bins are 16-bit numbers and each bin represents an equal
1768amount of text-space. For example, if the text-segment is one
1769thousand bytes long and if there are ten bins in the histogram, each
1770bin represents one hundred bytes.
1771
1772
1773@subsection Call-Graph Records
1774
1775Call-graph records have a format that is identical to the one used in
1776the BSD-derived file format. It consists of an arc in the call graph
1777and a count indicating the number of times the arc was traversed
1778during program execution. Arcs are specified by a pair of addresses:
1779the first must be within the caller's function and the second must be
1780within the callee's function. When performing profiling at the
1781function level, these addresses can point anywhere within the
1782respective function. However, when profiling at the line-level, it is
1783better if the addresses are as close to the call-site/entry-point as
1784possible. This will ensure that the line-level call-graph is able to
1785identify exactly which line of source code performed calls to a
1786function.
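
Sketched in the same illustrative style as above, an arc record
carries just these three pieces of information:

@smallexample
struct arc_record_sketch
@{
  char from_pc[sizeof (char *)];  /* address within the caller */
  char self_pc[sizeof (char *)];  /* address within the callee */
  char count[4];                  /* number of times the arc was traversed */
@};
@end smallexample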
1787
1788@subsection Basic-Block Execution Count Records
1789
1790Basic-block execution count records consist of a header followed by a
1791sequence of address/count pairs. The header simply specifies the
1792length of the sequence. In an address/count pair, the address
1793identifies a basic-block and the count specifies the number of times
1794that basic-block was executed. Any address within the basic-block can
1795be used.
1796
1797@node Internals,Debugging,File Format,Details
1798@section @code{gprof}'s Internal Operation
1799
1800Like most programs, @code{gprof} begins by processing its options.
1801During this stage, it may build its symspec list
1802(@code{sym_ids.c:sym_id_add}), if
1803options are specified which use symspecs.
1804@code{gprof} maintains a single linked list of symspecs,
1805which will eventually get turned into 12 symbol tables,
1806organized into six include/exclude pairs - one
1807pair each for the flat profile (INCL_FLAT/EXCL_FLAT),
1808the call graph arcs (INCL_ARCS/EXCL_ARCS),
1809printing in the call graph (INCL_GRAPH/EXCL_GRAPH),
1810timing propagation in the call graph (INCL_TIME/EXCL_TIME),
1811the annotated source listing (INCL_ANNO/EXCL_ANNO),
1812and the execution count listing (INCL_EXEC/EXCL_EXEC).
1813
1814After option processing, @code{gprof} finishes
1815building the symspec list by adding all the symspecs in
1816@code{default_excluded_list} to the exclude lists
1817EXCL_TIME and EXCL_GRAPH, and if line-by-line profiling is specified,
1818EXCL_FLAT as well.
1819These default excludes are not added to EXCL_ANNO, EXCL_ARCS, and EXCL_EXEC.
1820
1821Next, the BFD library is called to open the object file,
1822verify that it is an object file,
1823and read its symbol table (@code{core.c:core_init}),
1824using @code{bfd_canonicalize_symtab} after mallocing
1825an appropriately sized array of asymbols. At this point,
1826function mappings are read (if the @samp{--file-ordering} option
1827has been specified), and the core text space is read into
1828memory (if the @samp{-c} option was given).
1829
1830@code{gprof}'s own symbol table, an array of Sym structures,
1831is now built.
1832This is done in one of two ways, by one of two routines, depending
1833on whether line-by-line profiling (@samp{-l} option) has been
1834enabled.
1835For normal profiling, the BFD canonical symbol table is scanned.
1836For line-by-line profiling, every
1837text space address is examined, and a new symbol table entry
1838gets created every time the line number changes.
1839In either case, two passes are made through the symbol
1840table - one to count the size of the symbol table required,
1841and the other to actually read the symbols. In between the
1842two passes, a single array of type @code{Sym} is created of
1843the appropriate length.
1844Finally, @code{symtab.c:symtab_finalize}
1845is called to sort the symbol table and remove duplicate entries
1846(entries with the same memory address).
1847
1848The symbol table must be a contiguous array for two reasons.
1849First, the @code{qsort} library function (which sorts an array)
1850will be used to sort the symbol table.
1851Also, the symbol lookup routine (@code{symtab.c:sym_lookup}),
1852which finds symbols
1853based on memory address, uses a binary search algorithm
1854which requires the symbol table to be a sorted array.
1855Function symbols are indicated with an @code{is_func} flag.
1856Line number symbols have no special flags set.
1857Additionally, a symbol can have an @code{is_static} flag
1858to indicate that it is a local symbol.
1859
1860With the symbol table read, the symspecs can now be translated
1861into Syms (@code{sym_ids.c:sym_id_parse}). Remember that a single
1862symspec can match multiple symbols.
1863An array of symbol tables
1864(@code{syms}) is created, each entry of which is a symbol table
1865of Syms to be included or excluded from a particular listing.
1866The master symbol table and the symspecs are examined by nested
1867loops, and every symbol that matches a symspec is inserted
1868into the appropriate syms table. This is done twice, once to
1869count the size of each required symbol table, and again to build
1870the tables, which have been malloced between passes.
1871From now on, to determine whether a symbol is on an include
1872or exclude symspec list, @code{gprof} simply uses its
1873standard symbol lookup routine on the appropriate table
1874in the @code{syms} array.
1875
1876Now the profile data file(s) themselves are read
1877(@code{gmon_io.c:gmon_out_read}),
1878first by checking for a new-style @samp{gmon.out} header,
1879then assuming this is an old-style BSD @samp{gmon.out}
1880if the magic number test failed.
1881
1882New-style histogram records are read by @code{hist.c:hist_read_rec}.
1883For the first histogram record, allocate a memory array to hold
1884all the bins, and read them in.
1885When multiple profile data files (or files with multiple histogram
1886records) are read, the starting address, ending address, number
1887of bins and sampling rate must match between the various histograms,
1888or a fatal error will result.
1889If everything matches, just sum the additional histograms into
1890the existing in-memory array.
1891
1892As each call graph record is read (@code{call_graph.c:cg_read_rec}),
1893the parent and child addresses
1894are matched to symbol table entries, and a call graph arc is
1895created by @code{cg_arcs.c:arc_add}, unless the arc fails a symspec
1896check against INCL_ARCS/EXCL_ARCS. As each arc is added,
1897a linked list is maintained of the parent's child arcs, and of the child's
1898parent arcs.
1899Both the child's call count and the arc's call count are
1900incremented by the record's call count.
1901
1902Basic-block records are read (@code{basic_blocks.c:bb_read_rec}),
1903but only if line-by-line profiling has been selected.
1904Each basic-block address is matched to a corresponding line
1905symbol in the symbol table, and an entry made in the symbol's
1906bb_addr and bb_calls arrays. Again, if multiple basic-block
1907records are present for the same address, the call counts
1908are cumulative.
1909
1910A gmon.sum file is dumped, if requested (@code{gmon_io.c:gmon_out_write}).
1911
1912If histograms were present in the data files, assign them to symbols
1913(@code{hist.c:hist_assign_samples}) by iterating over all the sample
1914bins and assigning them to symbols. Since the symbol table
1915is sorted in order of ascending memory addresses, we can
1916simply follow along in the symbol table as we make our pass
1917over the sample bins.
1918This step includes a symspec check against INCL_FLAT/EXCL_FLAT.
1919Depending on the histogram
1920scale factor, a sample bin may span multiple symbols,
1921in which case a fraction of the sample count is allocated
1922to each symbol, proportional to the degree of overlap.
1923This effect is rare for normal profiling, but overlaps
1924are more common during line-by-line profiling, and can
1925cause each of two adjacent lines to be credited with half
1926a hit, for example.
1927
1928If call graph data is present, @code{cg_arcs.c:cg_assemble} is called.
1929First, if @samp{-c} was specified, a machine-dependent
1930routine (@code{find_call}) scans through each symbol's machine code,
1931looking for subroutine call instructions, and adding them
1932to the call graph with a zero call count.
1933A topological sort is performed by depth-first numbering
1934all the symbols (@code{cg_dfn.c:cg_dfn}), so that
1935children are always numbered less than their parents,
1936then making an array of pointers into the symbol table and sorting it into
1937numerical order, which is reverse topological
1938order (children appear before parents).
1939Cycles are also detected at this point, all members
1940of which are assigned the same topological number.
1941Two passes are now made through this sorted array of symbol pointers.
1942The first pass, from end to beginning (parents to children),
1943computes the fraction of child time to propagate to each parent
1944and a print flag.
1945The print flag reflects symspec handling of INCL_GRAPH/EXCL_GRAPH,
1946with a parent's include or exclude (print or no print) property
1947being propagated to its children, unless they themselves explicitly appear
1948in INCL_GRAPH or EXCL_GRAPH.
1949A second pass, from beginning to end (children to parents) actually
1950propagates the timings along the call graph, subject
1951to a check against INCL_TIME/EXCL_TIME.
1952With the print flag, fractions, and timings now stored in the symbol
1953structures, the topological sort array is now discarded, and a
1954new array of pointers is assembled, this time sorted by propagated time.
1955
1956Finally, print the various outputs the user requested, which is now fairly
1957straightforward. The call graph (@code{cg_print.c:cg_print}) and
1958flat profile (@code{hist.c:hist_print}) are regurgitations of values
1959already computed. The annotated source listing
1960(@code{basic_blocks.c:print_annotated_source}) uses basic-block
1961information, if present, to label each line of code with call counts,
1962otherwise only the function call counts are presented.
1963
1964The function ordering code is marginally well documented
1965in the source code itself (@code{cg_print.c}). Basically,
1966the functions with the most use and the most parents are
1967placed first, followed by other functions with the most use,
1968followed by lower use functions, followed by unused functions
1969at the end.
1970
1971@node Debugging,,Internals,Details
1972@section Debugging @code{gprof}
1973
1974If @code{gprof} was compiled with debugging enabled,
1975the @samp{-d} option triggers debugging output
1976(to stdout) which can be helpful in understanding its operation.
1977The debugging number specified is interpreted as a sum of the following
1978options:
1979
1980@table @asis
1981@item 2 - Topological sort
1982Monitor depth-first numbering of symbols during call graph analysis
1983@item 4 - Cycles
1984Shows symbols as they are identified as cycle heads
1985@item 16 - Tallying
1986As the call graph arcs are read, show each arc and how
1987the total calls to each function are tallied
1988@item 32 - Call graph arc sorting
1989Details sorting individual parents/children within each call graph entry
1990@item 64 - Reading histogram and call graph records
1991Shows address ranges of histograms as they are read, and each
1992call graph arc
1993@item 128 - Symbol table
1994Reading, classifying, and sorting the symbol table from the object file.
1995For line-by-line profiling (@samp{-l} option), also shows line numbers
1996being assigned to memory addresses.
1997@item 256 - Static call graph
1998Trace operation of @samp{-c} option
1999@item 512 - Symbol table and arc table lookups
2000Detail operation of lookup routines
2001@item 1024 - Call graph propagation
2002Shows how function times are propagated along the call graph
2003@item 2048 - Basic-blocks
2004Shows basic-block records as they are read from profile data
2005(only meaningful with @samp{-l} option)
2006@item 4096 - Symspecs
2007Shows symspec-to-symbol pattern matching operation
2008@item 8192 - Annotate source
2009Tracks operation of @samp{-A} option
2010@end table
2011
2012@contents
2013@bye
2014
2015NEEDS AN INDEX
2016
2017-T - "traditional BSD style": How is it different? Should the
2018differences be documented?
2019
be4e1cd5
JO
2020example flat file adds up to 100.01%...
2021
2022note: time estimates now only go out to one decimal place (0.0), where
2023they used to extend two (78.67).