The GDB Performance Testsuite
=============================

This README contains notes on hacking on GDB's performance testsuite.
For notes on GDB's regular testsuite or how to run the performance testsuite,
see ../README.

Generated tests
***************

The testcase generator lets us easily test GDB on large programs.
The "monster" tests are mocks of real programs where GDB's
performance has been a problem. Often it is difficult to build
these monster programs, but when measuring performance one doesn't
need the "real" program: all one needs is something that looks like
the real program along the axis one is measuring; for example, the
number of CUs (compilation units).

Structure of generated tests
****************************

Generated tests consist of a binary and potentially any number of
shared libraries. One of these shared libraries, called "tail", is
special. It is used to provide mocks of system-provided code, and
contains no generated code. Typically, system-provided libraries
are searched last, which can have significant performance consequences,
so we provide a means to exercise that.

The binary and the generated shared libraries can have a mix of
manually written and generated code. Manually written code is
specified with the {binary,gen_shlib}_extra_sources config parameters,
which are lists of source files in testsuite/gdb.perf. Generated
files are controlled with various configuration knobs.

Once a large test program is built, it makes sense to use it as much
as possible (i.e., with multiple tests). Therefore perf data collection
for generated tests is split into two passes: the first pass builds
all the generated tests, and the second pass runs all the performance
tests. The first pass is called "build-perf" and the second pass is
called "check-perf". See ../README for instructions on running the tests.

Generated test directory layout
*******************************

All output lives under testsuite/gdb.perf in the build directory.

Because some of the tests can get really large (and take potentially
minutes to compile), parallelism is built into their compilation.
Note however that we don't run the tests in parallel as it can skew
the results.

To keep things simple and stay consistent, we use the same
mechanism used by "make check-parallel". There is one catch: we need
one .exp for each "worker", but the .exp file must come from the source
tree. To avoid generating .exp files for each worker, we invoke
lib/build-piece.exp for each worker with different arguments.
The file build-piece.exp lives in "lib" to prevent dejagnu from finding
it when it goes to look for .exp scripts to run.

Another catch is that each parallel build worker needs its own directory
so that their gdb.{log,sum} files don't collide. On the other hand,
it's easier if their output (all the object files and shared libraries)
is in the same directory.

The above considerations yield the resulting layout:

$objdir/testsuite/gdb.perf/

    gdb.log, gdb.sum: result of doing final link and running tests

    workers/

        gdb.log, gdb.sum: result of gen-workers step

        $program_name/

            ${program_name}-0.worker
            ...
            ${program_name}-N.worker: input to build-pieces step

    outputs/

        ${program_name}/

            ${program_name}-0/
            ...
            ${program_name}-N/

                gdb.log, gdb.sum: for each build-piece worker

            pieces/

                generated sources, object files, shlibs

            ${run_name_1}: binary for test config #1
            ...
            ${run_name_N}: binary for test config #N

Generated test configuration knobs
**********************************

The monster program generator provides various knobs for building
different kinds of monster programs. For a list of the knobs, see the
function GenPerfTest::init_testcase in testsuite/lib/perftest.exp.
Most knobs are self-explanatory; here is a description of the less
obvious ones.

binary_extra_sources

    This is the list of non-machine generated sources that go
    into the test binary. There must be at least one: the one
    with main.

class_specs

    List of pairs of keys and values.
    Supported keys are:
      count: number of classes
        Default: 1
      name: list of namespaces and class name prefix
        E.g., { ns0 ns1 foo } -> ns0::ns1::foo_<cu#>_{0,1,...}
        There is no default; this value must be specified.
      nr_members: number of members
        Default: 0
      nr_static_members: number of static members
        Default: 0
      nr_methods: number of methods
        Default: 0
      nr_inline_methods: number of inline methods
        Default: 0
      nr_static_methods: number of static methods
        Default: 0
      nr_static_inline_methods: number of static inline methods
        Default: 0

    E.g.,
      class foo {};
      namespace ns1 { class bar {}; }
    would be represented as:
      {
        { count 1 name { foo } }
        { count 1 name { ns1 bar } }
      }

    Each class is named "<name>_<cu_nr>_<class_nr>", where <name> is
    the class name prefix given by the "name" key, <cu_nr> is the
    number of the compilation unit the class is defined in, and
    <class_nr> is the number of the class within that compilation unit.

    There's currently no support for nesting classes in classes,
    or for specifying baseclasses or templates.

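    For illustration (this spec is made up, not taken from an existing
    test), a class_specs entry asking for two classes per compilation
    unit, each with two members and one ordinary method, and all other
    keys left at their defaults, could be written as:
      {
        { count 2 name { ns0 ns1 foo } nr_members 2 nr_methods 1 }
      }
    Per the naming scheme above, each compilation unit would then
    define classes ns0::ns1::foo_<cu#>_0 and ns0::ns1::foo_<cu#>_1.
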
Misc. configuration knobs
*************************

These knobs control building or running of the test and are specified
like any global Tcl variable.

CAT_PROGRAM

    Default is /bin/cat; you shouldn't need to change this.

SHA1SUM_PROGRAM

    Default is /usr/bin/sha1sum.

PERF_TEST_COMPILE_PARALLELISM

    An integer specifying the amount of parallelism in the builds,
    akin to make's -j flag. The default is 10.

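Since these are ordinary global Tcl variables, one way to override
them is via RUNTESTFLAGS (this relies on runtest's usual NAME=VALUE
argument handling); for example, to reduce the build parallelism to 4:

bash$ make build-perf RUNTESTFLAGS="PERF_TEST_COMPILE_PARALLELISM=4 gmonster1.exp"
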
Writing a generated test program
********************************

The best way to write a generated test program is to take an existing
one as boilerplate. Two good examples are gmonster1.exp and gmonster2.exp.
gmonster1.exp builds a big binary with various custom manually written
code, and gmonster2 is (essentially) the equivalent binary split up over
several shared libraries.

Writing a performance test that uses a generated program
********************************************************

The best way to write a test is to take an existing one as boilerplate.
Good examples are gmonster1-*.exp and gmonster2-*.exp.

The naming used thus far is that "foo.exp" builds the test program
and there is one "foo-bar.exp" file for each performance test
that uses test program "foo".

In addition to writing the test driver .exp script, one must also
write a python script that is used to run the test.
The contents of this script are defined by the performance testsuite
harness. It defines a class, which is a subclass of one of the
classes in gdb.perf/lib/perftest/perftest.py.
See gmonster-null-lookup.py for an example.

Note: Since gmonster1 and gmonster2 are treated as being variations of
the same program, each test shares the same python script.
E.g., gmonster1-null-lookup.exp and gmonster2-null-lookup.exp
both use gmonster-null-lookup.py.

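To give a feel for the shape of these scripts, here is a minimal
sketch modeled on the classes in lib/perftest/perftest.py. The class
name, test name, and measured operation are purely illustrative (this
is not the contents of gmonster-null-lookup.py); consult the real
*.py files in this directory for authoritative examples.

import gdb  # These scripts always run inside GDB's Python interpreter.

from perftest import perftest


class NullLookup(perftest.TestCaseWithBasicMeasurements):
    """Time lookups of a symbol that doesn't exist (illustrative only)."""

    def __init__(self, run_name):
        # The name passed here labels the results in the perf report.
        super(NullLookup, self).__init__("null-lookup-%s" % run_name)

    def warm_up(self):
        # One untimed pass so that caches, lazily expanded symtabs,
        # etc. don't distort the first measurement.
        self._lookup()

    def _lookup(self):
        # Looking up a symbol that is not defined anywhere forces GDB
        # to search everything it knows about.
        gdb.lookup_global_symbol("symbol_not_defined_anywhere")

    def execute_test(self):
        # The framework records CPU time, wall time and memory usage
        # around each call made through self.measure.measure.
        self.measure.measure(self._lookup, "lookup")

The driver .exp file then arranges for GDB to load the script and
call the class's run method (inherited from the framework), which
roughly amounts to running warm_up followed by execute_test; see the
gmonster1-*.exp files for how this is wired up.
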
Running performance tests for generated programs
************************************************

There are two steps: build and run.

Example:

bash$ make -j10 build-perf RUNTESTFLAGS="gmonster1.exp"
bash$ make -j10 check-perf RUNTESTFLAGS="gmonster1-null-lookup.exp" \
          GDB_PERFTEST_MODE=run