X-Git-Url: http://git.efficios.com/?a=blobdiff_plain;f=doc%2Forg.eclipse.tracecompass.doc.user%2Fdoc%2FUser-Guide.mediawiki;h=4e5720fd1e25a67206af284b73d668e5ac7e7ba7;hb=6e3c2bb9431b4c3c7710d3d4aced14040b6222b5;hp=fc36e38884204b119da99eadca8e5c706280f480;hpb=b37a85cce1b5510f5bbedf9e1d82b800d1b48d81;p=deliverable%2Ftracecompass.git
diff --git a/doc/org.eclipse.tracecompass.doc.user/doc/User-Guide.mediawiki b/doc/org.eclipse.tracecompass.doc.user/doc/User-Guide.mediawiki
index fc36e38884..4e5720fd1e 100644
--- a/doc/org.eclipse.tracecompass.doc.user/doc/User-Guide.mediawiki
+++ b/doc/org.eclipse.tracecompass.doc.user/doc/User-Guide.mediawiki
@@ -58,12 +58,20 @@ At present, the LTTng plug-ins support the following kernel-oriented views:
* ''Control Flow'' - to visualize process state transitions
* ''Resources'' - to visualize system resource state transitions
-* ''CPU usage'' - to visualize the usage of the processor with respect to the time in traces
+* ''CPU Usage'' - to visualize the usage of the processor with respect to the time in traces
+* ''Kernel Memory Usage'' - to visualize the relative usage of system memory
+* ''IO Usage'' - to visualize the usage of input/output devices
+* ''System Calls'' - presents all the system calls in a table view
+* ''System Call Statistics'' - presents statistics for all the system calls
+* ''System Call Density'' - to visualize the distribution of system calls by duration
+* ''System Call vs Time'' - to visualize when system calls occur
Also, the LTTng plug-ins support the following user space trace views:
* ''Memory Usage'' - to visualize the memory usage per thread with respect to time in the traces
* ''Call Stack'' - to visualize the call stack's evolution over time
+* ''Function Duration Density'' - to visualize the distribution of function calls by duration
+* ''Flame Graph'' - to visualize why the CPU is busy
Finally, the LTTng plug-ins support the following Control views:
* ''Control'' - to control the tracer and configure the tracepoints
@@ -574,7 +582,7 @@ The header displays the current trace (or experiment) name.
The columns of the table are defined by the fields (aspects) of the specific trace type. These are the defaults:
* '''Timestamp''': the event timestamp
-* '''Type''': the event type
+* '''Event Type''': the event type
* '''Contents''': the fields (or payload) of this event
The first row of the table is the header row, a.k.a. the Search and Filter row.
@@ -587,17 +595,21 @@ An event range can be selected by holding the '''Shift''' key while clicking ano
The Events editor can be closed, disposing a trace. When this is done, all the views displaying the information will be updated with the trace data of the next event editor tab. If all the editor tabs are closed, then the views will display their empty states.
+Column order and size are preserved when changed. If a column is lost because it was resized to 0 pixels, open the context menu with a right-click and select '''Show All'''; the column will be restored to a visible size.
+
=== Searching and Filtering ===
Searching and filtering of events in the table can be performed by entering matching conditions in one or multiple columns in the header row (the first row below the column header).
-To toggle between searching and filtering, click on the 'search' ([[Image:images/TmfEventSearch.gif]]) or 'filter' ([[Image:images/TmfEventFilter.gif]]) icon in the header row's left margin, or right-click on the header row and select '''Show Filter Bar''' or '''Show Search Bar''' in the context menu.
+To apply a matching condition to a specific column, click on the column's header row cell and type in a [http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html regular expression]. You can also enter a simple text string, and it will automatically be replaced with a 'contains' regular expression.
+
+Press the '''Enter''' key to apply the condition as a search condition. It will be added to any existing search conditions.
-To apply a matching condition to a specific column, click on the column's header row cell, type in a [http://download.oracle.com/javase/7/docs/api/java/util/regex/Pattern.html regular expression] and press the '''ENTER''' key. You can also enter a simple text string and it will be automatically be replaced with a 'contains' regular expression.
+Press the '''Ctrl+Enter''' key to immediately add the condition (and any other existing search conditions) as a filter instead.
When matching conditions are applied to two or more columns, all conditions must be met for the event to match (i.e. 'and' behavior).
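The 'contains' substitution can be pictured with an ordinary shell regex tool; the event-type names below are purely illustrative:

```shell
# A plain string such as "sched" behaves like the regular expression
# ".*sched.*", i.e. it matches any value containing that substring.
printf 'sched_switch\nsyscall_entry_open\nsched_wakeup\n' | grep 'sched'
```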
-To clear all matching conditions in the header row, press the '''DEL''' key.
+A preset filter created in the [[#Filters_View | Filters]] view can also be applied by right-clicking on the table and selecting '''Apply preset filter...''' > ''filter name''.
==== Searching ====
@@ -607,25 +619,33 @@ All matching events will have a 'search match' icon in their left margin. Non-ma
[[Image:images/TraceEditor-Search.png]]
-Pressing the '''ENTER''' key will search and select the next matching event. Pressing the '''SHIFT-ENTER''' key will search and select the previous matching event. Wrapping will occur in both directions.
+Pressing the '''Enter''' key will search and select the next matching event. Pressing the '''Shift+Enter''' key will search and select the previous matching event. Wrapping will occur in both directions.
-Press '''ESC''' to cancel an ongoing search.
+Press '''Esc''' to cancel an ongoing search.
-Press '''DEL''' to clear the header row and reset all events to normal.
+To add the currently applied search condition(s) as filter(s), click the '''Add as Filter''' [[Image:images/filter_add.gif]] button in the header row margin, or press the '''Ctrl+Enter''' key.
+
+Press '''Delete''' to clear the header row and reset all events to normal.
==== Filtering ====
-When a filtering condition is entered in the head row, the table will clear all events and fill itself with matching events as they are found from the beginning of the trace. The characters in each column which match the regular expression will be highlighted.
+When a new filter is applied, the table will clear all events and fill itself with matching events as they are found from the beginning of the trace. The characters in each column which match the regular expression will be highlighted.
A status row will be displayed before and after the matching events, dynamically showing how many matching events were found and how many events were processed so far. Once the filtering is completed, the status row icon in the left margin will change from a 'stop' to a 'filter' icon.
[[Image:images/TraceEditor-Filter.png]]
-Press '''ESC''' to stop an ongoing filtering. In this case the status row icon will remain as a 'stop' icon to indicate that not all events were processed.
+Press '''Esc''' to stop an ongoing filtering. In this case the status row icon will remain as a 'stop' icon to indicate that not all events were processed.
+
+The header bar will be displayed above the table and will show a label for each applied filter. Clicking on a label will highlight the matching strings in the events that correspond to this filter condition. Pressing the '''Delete''' key will clear this highlighting.
-Press '''DEL''' or right-click on the table and select '''Clear Filters''' from the context menu to clear the header row and remove the filtering. All trace events will be now shown in the table. Note that the currently selected event will remain selected even after the filter is removed.
+To remove a specific filter, click on the [[Image:images/delete_button.gif]] icon on its label in the header bar. The table will be updated with the events matching the remaining filters.
-You can also search on the subset of filtered events by toggling the header row to the Search Bar while a filter is applied. Searching and filtering conditions are independent of each other.
+The header bar can be collapsed and expanded by clicking on the [[Image:images/expanded_ovr.gif]][[Image:images/collapsed_ovr.gif]] icons in the top-left corner or on its background. In collapsed mode, a minimized version of the filter labels will be shown that can also be used to highlight or remove the corresponding filter.
+
+Right-click on the table and select '''Clear Filters''' from the context menu to remove all filters. All trace events will now be shown in the table. Note that the currently selected event will remain selected even after the filters are removed.
+
+You can also search on the subset of filtered events by entering a search condition in the header row while a filter is applied. Searching and filtering conditions are independent of each other.
==== Bookmarking ====
@@ -647,7 +667,7 @@ The text of selected events can be copied to the clipboard by right-clicking on
=== Event Source Lookup ===
-For CTF traces using specification v1.8.2 or above, information can optionally be embedded in the trace to indicate the source of a trace event. This is accessed through the event context menu by right-clicking on an event in the table.
+Some trace types can optionally embed information in the trace to indicate the source of a trace event. This is accessed through the event context menu by right-clicking on an event in the table.
==== Source Code ====
@@ -664,6 +684,7 @@ It is possible to export the content of the trace to a text file based on the co
''Note'': The columns in the text file are separated by tabs.
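Because the export is tab-separated, it can be post-processed with standard tools; ''export.txt'' and the sample row are made-up values:

```shell
# Create a sample export in the tab-separated layout, then keep only the
# Timestamp and Event Type columns:
printf 'Timestamp\tEvent Type\tContents\n10:02:31.0500\tsched_switch\tcpu=0\n' > export.txt
cut -f1,2 export.txt
```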
=== Refreshing of Trace ===
+
It's possible to refresh the content of the trace and resume indexing in case the currently open trace was updated on the media. To refresh the trace, right-click in the table and select the menu item '''Refresh'''. Alternatively, press the '''F5''' key.
=== Collapsing of Repetitive Events ===
@@ -678,7 +699,7 @@ A status row will be displayed before and after the events, dynamically showing
[[Image:images/TablePostCollapse.png]]
-To clear collapsing, press the right mouse button in the table and select menu item '''Clear Filters''' in the context sensitive menu. ''Note'' that collapsing is also removed when another filter is applied to the table.
+To remove the collapse filter, press the ([[Image:images/delete_button.gif]]) icon on the '''Collapse''' label in the header bar, or press the right mouse button in the table and select menu item '''Clear Filters''' in the context sensitive menu (this will also remove any other filters).
=== Customization ===
@@ -746,9 +767,17 @@ In each histogram, the following keys are handled:
== Statistics View ==
-The Statistics View displays the various event counters that are collected when analyzing a trace. The data is organized per trace. After opening a trace, the element '''Statistics''' is added under the '''Tmf Statistics Analysis''' tree element in the Project Explorer. To open the view, double-click the '''Statistics''' tree element. Alternatively, select '''Statistics''' under '''Tracing''' within the '''Show View''' window ('''Window''' -> '''Show View''' -> '''Other...'''). This view shows 3 columns: ''Level'' ''Events total'' and ''Events in selected time range''. After parsing a trace the view will display the number of events per event type in the second column and in the third, the currently selected time range's event type distribution is shown. The cells where the number of events are printed also contain a colored bar with a number that indicates the percentage of the event count in relation to the total number of events. The statistics is collected for the whole trace. This view is part of the '''Tracing and Monitoring Framework (TMF)''' and is generic. It will work for any trace type extensions. For the LTTng 2.0 integration the Statistics view will display statistics as shown below.:
+The Statistics View displays the various event counters that are collected when analyzing a trace. After opening a trace, the element '''Statistics''' is added under the '''Tmf Statistics Analysis''' tree element in the Project Explorer. To open the view, double-click the '''Statistics''' tree element. Alternatively, select '''Statistics''' under '''Tracing''' within the '''Show View''' window ('''Window''' -> '''Show View''' -> '''Other...'''). The statistics are collected for the whole trace. This view is part of the '''Tracing and Monitoring Framework (TMF)''' and is generic. It will work for any trace type extension.
-[[Image:images/LTTng2StatisticsView.png]]
+The view is divided into two parts. The left side presents the statistics in a table with 3 columns: ''Level'', ''Events total'' and ''Events in selected time range''. The data is organized per trace. After parsing a trace, the view displays the number of events per event type in the second column; the third column shows the event type distribution of the currently selected time range. The cells where the number of events is printed also contain a colored bar with a number that indicates the percentage of the event count in relation to the total number of events.
+
+[[Image:images/LTTng2StatisticsTableView.png]]
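The percentage shown in the colored bar is simply the event count over the total event count; a quick sketch with made-up counts:

```shell
# E.g. 1250 occurrences of one event type out of 5000 events in total
# (the numbers are purely illustrative):
echo "$((1250 * 100 / 5000))%"   # prints "25%"
```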
+
+The right side shows the proportion of event types in two pie charts. The legend of each pie chart gives the meaning of each color in the chart.
+* The ''Global'' pie chart displays the general proportion of the events in the trace.
+* When there is a range selection, the ''Events in selection'' pie chart appears next to the ''Global'' pie chart and displays the proportion of events in the selected range of the trace.
+
+[[Image:images/LTTng2StatisticsPieChartView.png]]
By default, the statistics use a state system and will therefore load very quickly once the state system is written to the disk as a supplementary file.
@@ -824,6 +853,74 @@ The view shows a tree of currently selected traces and their registered state sy
To modify the time of attributes shown in the view, select a different current time in other views that support time synchronization (e.g. event table, histogram view). When a time range is selected, this view uses the begin time.
+== External Analyses ==
+
+Trace Compass supports the execution of '''external analyses''' conforming to the [https://github.com/lttng/lami-spec/blob/v1.0.1/lami.adoc LAMI 1.0.x specification]. This includes recent versions of the [https://github.com/lttng/lttng-analyses LTTng-Analyses project].
+
+An external analysis is a [[#Running an External Analysis|program executed by Trace Compass]]. When the program is done analyzing, Trace Compass generates a '''[[#Opening a Report|report]]''' containing its results. A report contains one or more tables which can also be viewed as bar and scatter [[#Creating a Chart from a Result Table|charts]].
+
+'''Note''': The program to execute is found by searching the directories listed in the standard <code>$PATH</code> environment variable when no path separator (<code>/</code> on Unix and OS X, <code>\</code> on Windows) is found in its command.
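The resolution rule can be sketched in shell (Unix-style only; ''my-analysis'' is a hypothetical command name):

```shell
# No path separator -> the command is searched in $PATH;
# otherwise it is treated as a file path as-is.
resolve() {
  case "$1" in
    */*) echo "file path: $1" ;;
    *)   echo "searched in \$PATH: $1" ;;
  esac
}
resolve my-analysis      # searched in $PATH
resolve ./my-analysis    # used as a file path
```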
+
+Trace Compass ships with a default list of ''descriptors'' of external analyses (not the analyses themselves), including the descriptors of the [http://github.com/lttng/lttng-analyses LTTng analyses]. If the LTTng analyses project is installed, its analyses are available when opening or importing an LTTng kernel trace.
+
+=== Running an External Analysis ===
+
+To run an external analysis:
+
+# [[#Importing Traces to the Project|Import a trace to the project]].
+# Make sure the trace is opened by double-clicking its name in the [[#Project Explorer View]].
+# Under the trace in the [[#Project Explorer View]], expand ''External Analyses'' to view the list of available external analyses. The external analyses which are either missing or not compatible with the trace are struck out and cannot be executed. [[Image:images/externalAnalyses/external-analyses-list.png]]
+# '''Optional''': If you want the external analysis to analyze a specific time range of the current trace, make a time range selection. You can use views like the [[#Histogram View]] and the [[#Control Flow View]] (if it's available for this trace) to make a time range selection. External analyses are executed on the current time range selection if there is one, or on the whole trace otherwise.
+# Right-click the external analysis to run and click '''Run External Analysis'''. [[Image:images/externalAnalyses/run-external-analysis.png]]
+# In the opened ''External Analysis Parameters'' window, optionally enter extra parameters to pass to the program. [[Image:images/externalAnalyses/external-analysis-parameters-dialog.png]]
+# Click '''OK''' to start the analysis.
+
+Note that many external analyses can be started concurrently.
+
+When the external analysis is done analyzing, its results are saved as a [[#Opening a Report|report]] in Trace Compass. The tables contained in this report are also automatically opened into a new report view when the analysis is finished.
+
+=== Opening a Report ===
+
+A '''report''' is created after a successful [[#Running an External Analysis|execution of an external analysis]].
+
+To open a report:
+
+* Under ''Reports'', under a trace in the [[#Project Explorer View]], double-click the report to open. Each result table generated by the external analysis is shown in its own tab in the opened report view. [[Image:images/externalAnalyses/report-view.png]]
+
+=== Creating a Chart from a Result Table ===
+
+To create a bar or a scatter chart from the data of a given result table:
+
+# [[#Opening a Report|Open the report]] containing the result table to use for creating the chart.
+# In the opened report view, click the tab of the result table to use for creating the chart.
+# Click the ''View Menu'' button, then click either '''New custom bar chart''' or '''New custom scatter chart'''. [[Image:images/externalAnalyses/new-custom-scatter-chart-menu.png]]
+# In the opened ''Bar chart series creation'' or ''Scatter chart series creation'' window, under ''Series creator'', select a column to use for the X axis of the chart, and one or more columns to use for the Y axis of the chart, then click '''Add''' to create a series. [[Image:images/externalAnalyses/chart-configuration-dialog.png]] Repeat this step to create more series.
+# Click '''OK''' to create the chart. The chart is created and shown at the right of its source result table. [[Image:images/externalAnalyses/table-and-chart.png]]
+
+=== Showing or Hiding a Result Table ===
+
+To show or hide a result table once a [[#Creating a Chart from a Result Table|chart]] has been created:
+
+* In the report view, click the ''Toggle the Table view of the results'' button. [[Image:images/externalAnalyses/table-and-chart-toggle-button.png]] If the result table was visible, it is now hidden: [[Image:images/externalAnalyses/chart-only.png]]
+
+=== Adding and Removing a User-Defined External Analysis ===
+
+You can add a user-defined external analysis to the current list of external analyses. Note that the command to invoke must conform to the machine interface of [http://github.com/lttng/lttng-analyses LTTng analyses] 0.4.
+
+'''Note''': If you want to create your own external analysis, consider following the [http://lttng.org/files/lami/lami-1.0.1.html LAMI 1.0 specification], which is supported by later versions of Trace Compass.
+
+To add a user-defined external analysis:
+
+# Under any trace in the [[#Project Explorer View]], right-click ''External Analyses'' and click '''Add External Analysis'''. [[Image:images/externalAnalyses/add-external-analysis.png]]
+# In the opened ''Add External Analysis'' window, enter the name of the new external analysis and the associated command to run. [[Image:images/externalAnalyses/add-external-analysis-dialog.png]] The name is the title of the external analysis as shown under ''External Analyses'' in the [[#Project Explorer View]]. The command is the complete command line to execute. You can put arguments containing spaces or other special characters in double quotes. '''Note''': If the command is not a file system path, then it must be found in the directories listed in the <code>$PATH</code> environment variable. A user-defined external analysis with a green icon is created under ''External Analyses'' in the [[#Project Explorer View]]. [[Image:images/externalAnalyses/user-defined-external-analysis.png]]
+
+'''Note''': The new external analysis entry is saved in the workspace.
+
+To remove a user-defined external analysis:
+
+* Under ''External Analyses'' in the [[#Project Explorer View]], right-click the external analysis to remove and click '''Remove External Analysis'''. [[Image:images/externalAnalyses/remove-external-analysis.png]] '''Note''': Only user-defined (green icon) external analyses can be removed.
+
== Custom Parsers ==

Custom parser wizards allow the user to define their own parsers for text or XML traces. The user defines how the input should be parsed into internal trace events and identifies the event fields that should be created and displayed. Traces created using a custom parser can be correlated with other built-in traces or traces added by plug-in extension.

@@ -840,23 +937,26 @@ The '''New Custom Text Parser''' wizard can be used to create a custom parser fo

Fill out the first wizard page with the following information:

* '''Category:''' Enter a category name for the trace type.
-* '''Trace type:''' Enter a name for the trace type, which is also the name of the custom parser.
-* '''Time Stamp format:''' Enter the date and time pattern that will be used to output the Time Stamp.

-**<pre>lttng add-context -u -t vtid -t procname</pre>
+* Set up a tracing session with the ''vpid'', ''vtid'' and ''procname'' contexts. See the [[#Enabling UST Events On Session Level]] and [[#Adding Contexts to Channels and Events of a Domain]] sections. Or, if using the command line:
+**<pre>lttng enable-event -u -a</pre>
+**<pre>lttng add-context -u -t vpid -t vtid -t procname</pre>
* Preload the ''liblttng-ust-cyg-profile'' library when running your program:
**<pre>LD_PRELOAD=/usr/lib/liblttng-ust-cyg-profile.so ./myprogram</pre>

-Once you load the resulting trace, making sure it's set to the ''Common Trace Format - LTTng UST Trace'' type, the Callstack View should be populated with the relevant information. However, since GCC's cyg-profile instrumentation only provides function addresses, and not names, an additional step is required to get the function names showing in the view. The following section explains how to do so.
+Once you load the resulting trace, the Callstack View should be populated with
+the relevant information.

-=== Importing a function name mapping file for LTTng-UST traces ===
+Note that for non-trivial applications, ''liblttng-ust-cyg-profile'' generates a
+'''lot''' of events! You may need to increase the channel's subbuffer size to
+avoid lost events. Refer to the
+[http://lttng.org/docs/#doc-fine-tuning-channels LTTng documentation].

-If you followed the steps in the previous section, you should have a Callstack View populated with function entries and exits. However, the view will display the function addresses instead of names in the intervals, which are not very useful by themselves. To get the actual function names, you need to:
+For traces taken with LTTng-UST 2.8 or later, the Callstack View should show the
+function names automatically, since it will make use of the debug information
+statedump events (which are enabled when using ''enable-event -u -a'').
+For traces taken with prior versions of UST, you would need to set the path to
+the binary file or mapping manually:
+
+=== Importing a binary or function name mapping file (for LTTng-UST <2.8 traces) ===
+
+If you followed the steps in the previous section, you should have a Callstack
+View populated with function entries and exits. However, the view will display
+the function addresses instead of names in the intervals, which are not very
+useful by themselves. To get the actual function names, you need to:
+
+* Click the '''Import Mapping File''' ([[Image:images/import.gif]]) button in the Callstack View.
+
+Then either:
+* Point to the binary that was used for taking the trace
+OR
* Generate a mapping file from the binary, using:
**<pre>nm myprogram > mapping.txt</pre>
-* Click the '''Import Mapping File''' ([[Image:images/import.gif]]) button in the Callstack View, and select the ''mapping.txt'' file that was just created.
+** Select the ''mapping.txt'' file that was just created.
+
+(If you are dealing with C++ executables, you may want to use ''nm --demangle''
+instead to get readable function names.)
+
+The view should now update to display the function names instead. Make sure the
+binary used for taking the trace is the one used for this step too (otherwise,
+there is a good chance of the addresses not being the same).
+
+=== Navigation ===
+
+See Control Flow View's '''[[#Using_the_mouse | Using the mouse]]''', '''[[#Using_the_keyboard | Using the keyboard]]''' and '''[[#Zoom_region | Zoom region]]'''.
+
+=== Marker Axis ===
+
+See Control Flow View's '''[[#Marker_Axis | Marker Axis]]'''.
+
+== Flame Graph View ==
+
+This is an aggregate view of the function calls from the '''Call Stack View'''. It shows a bird's eye view of the main
+time sinks in the traced applications. Each entry in the '''Flame Graph''' represents an aggregation of all the calls to a function
+at a certain depth of the call stack having the same caller. So, functions in the '''Flame Graph''' are aggregated by depth and
+caller. This enables the user to find the most executed code path easily.
+
+* In a '''Flame Graph''', each entry (box) represents a function in the stack.
+* If one takes a single vertical line in the view, it represents a full call stack with parents calling children.
+* The ''x-axis'' represents total duration (execution time) and not absolute time, so it is not aligned with the other views.
+* The width of an entry is the total time spent in that function, including the time spent calling the children.
+* The total time can exceed the longest duration, if the program is pre-empted and not running during its trace time.
+* Each thread traced makes its own flame graph.
+
+The function name is visible on each Flame Graph event if the size permits. Each box in the '''Flame Graph'''
+has the same color as the box representing the same function in the '''Call Stack'''.
+
+To open this view, select a trace, expand it in the '''Project Explorer''', then expand the
+'''Call Graph Analysis''' (the trace must be loaded) and open the '''Flame Graph'''.
+It's also possible to go to '''Window''' -> '''Show View''' -> '''Tracing''' and then
+select '''Flame Graph''' in the list.
+
+[[Image:images/Flame_Graph.png|Flame Graph View]]
+
+To use the '''Flame Graph''', one can navigate it and find which function is consuming the most self-time.
+This can be seen as a large plateau. Then the entry can be inspected. At this point, the worst offender in
+terms of CPU usage will be highlighted; however, it is not a single call to investigate, but rather the
+aggregation of all the calls. Right mouse-clicking on that entry will open a context-sensitive menu.
+Selecting '''Go to minimum''' or '''Go to maximum''' will take the user to the minimum or maximum
+recorded times in the trace. This is interesting to compare and contrast the two.
+
+Hovering over a function will show a tooltip with the statistics on a per-function basis. One can see the total and self times
+(''worst-case'', ''best-case'', ''average'', ''total time'', ''standard deviation'', ''number of calls'') for that function.
+
+=== How to use a Flame Graph ===
+
+Observing the time spent in each function can show where most of the time is spent and where one could optimize.
+An example in the image above: one can see that ''mp_sort'' is a recursive sort function; it takes approximately
+40% of the execution time of the program. That means that perfectly parallelizing it can yield a gain of 20% for 2 threads, 33% for 3
+and so forth. Looking at the function '''print_current_files''', it takes about 30% of the time, and it has a child ''print_many_per_line'' that has a large
+self time (above 10%). This could be another area that can be targeted for optimization. Knowing this in advance helps developers
+know where to aim their efforts.
+
+It is recommended to have a kernel trace as well as a user space trace in an experiment
+while using the '''Flame Graph''', as it will show what is causing the largest delays.
+When using the '''Flame Graph''' together with a call stack and a kernel trace,
+an example workflow would be to find the worst offender in terms of time taken for a function
+that seems to be taking too long. Then, using the context menu '''Go to maximum''', one can navigate
+to the maximum duration and see if the OS is, for example, preempting the function for too long,
+or if the issue is in the code being executed.
+
+=== Using the mouse ===
+
+* '''Double-click on the duration ruler''': zoom the graph to the selected duration range
+* '''Shift-left-click or drag''': extend or shrink the selection range
+* '''Mouse wheel up/down''': scroll up or down
+* '''Shift-mouse wheel up/down''': scroll left or right
+* '''Ctrl-mouse wheel up/down''': zoom in or out horizontally
+* '''Shift-Ctrl-mouse wheel up/down''': zoom in or out vertically
+
+When the mouse cursor is over entries (left pane):
+
+* '''-''': Collapse the '''Flame Graph''' of the selected thread
+* '''+''': Expand the '''Flame Graph''' of the selected thread
+
+=== Using the keyboard ===
+
+The following keyboard shortcuts are available:
+
+* '''Down Arrow''': selects the next stack depth
+* '''Up Arrow''': selects the previous stack depth
+* '''Home''': selects the first thread's '''Flame Graph'''
+* '''End''': selects the last thread's '''Flame Graph''''s deepest depth
+* '''Enter''': toggles the expansion state of the current thread in the tree
+* '''Ctrl + +''': Zoom in vertically
+* '''Ctrl + -''': Zoom out vertically
+* '''Ctrl + 0''': Reset the vertical zoom
+
+=== Toolbar ===
+
+{|
+| [[Image:images/sort_alpha.gif]]
+| Sort by thread name
+| Sort the threads by thread name. Clicking the icon a second time will sort the threads by name in reverse order and change the icon to [[Image:images/sort_alpha_rev.gif]].
+|-
+| [[Image:images/sort_num.gif]]
+| Sort by thread id
+| Sort the threads by thread ID. Clicking the icon a second time will sort the threads by ID in reverse order and change the icon to [[Image:images/sort_num_rev.gif]].
+|}
+
+=== Importing a binary or function name mapping file (for LTTng-UST <2.8 traces) ===

-(If you are dealing with C++ executables, you may want to use ''nm --demangle'' instead to get readable function names.)
+See Call Stack View's '''[[#Call Stack View | Importing a binary or function name mapping file (for LTTng-UST <2.8 traces) ]]'''.

-The view should now update to display the function names instead. Make sure the binary used for taking the trace is the one used for this step too (otherwise, there is a good chance of the addresses not being the same).
+== Function Duration Density ==
+The '''Function Duration Density''' view shows the distribution of function call durations for the current active time window range. This is useful to find global outliers.
+
+[[Image:images/FunctionDensityView.png|Function Duration Density View]]
+
+Dragging horizontally with the right mouse button will update the table and graph to show only the density for the selected durations. Durations outside the selection range will be filtered out. Using the toolbar button [[Image:images/zoomout_nav.gif]], the zoom range will be reset.

== Memory Usage ==

@@ -2066,11 +2623,13 @@ Please note this view will not show shared memory or stack memory usage.

The Memory Usage chart is usable with the mouse. The following actions are set:

* '''left-click''': select a time or time range begin time
-* '''left-drag horizontally''': select a time range or change the time range begin or end time
-* '''middle-drag''': pan left or right
-* '''right-drag horizontally''': zoom region
-* '''mouse wheel up/down''': zoom in or out
+* '''Shift-left-click or drag''': Extend or shrink the selection range
+* '''left-drag horizontally''': select a time range or change the time range begin or end time
+* '''middle-drag or Ctrl-left-drag horizontally''': pan left or right
+* '''right-drag horizontally''': [[#Zoom region|zoom region]]
+* '''Shift-mouse wheel up/down''': scroll left or right
+* '''Ctrl-mouse wheel up/down''': zoom in or out horizontally

=== Toolbar ===

@@ -2087,6 +2646,99 @@ The Memory Usage View '''toolbar''', located at the top right of the view, has s

Please note this view will not show shared memory or stack memory usage.

+== Source Lookup (for LTTng-UST 2.8+) ==
+
+Starting with LTTng 2.8, the tracer can now provide enough information to
+associate trace events with their location in the original source code.
+
+To make use of this feature, first make sure your binaries are compiled with
+debug information (-g), so that the instruction pointers can be mapped to source
+code locations. This lookup is made using the ''addr2line'' command-line utility,
+which needs to be installed and on the '''$PATH''' of the system running Trace
+Compass. ''addr2line'' is available in most Linux distributions, on Mac OS X, and on Windows using Cygwin, among others.
The following trace events need to be present in the trace:

* lttng_ust_statedump:start
* lttng_ust_statedump:end
* lttng_ust_statedump:bin_info
* lttng_ust_statedump:build_id

as well as the following contexts:

* vpid
* ip

For ease of use, you can simply enable all the UST events when setting up your session:

 lttng enable-event -u -a
 lttng add-context -u -t vpid -t ip

Note that you can also create and configure your session using the [[#Control View | Control View]].

If you want to track source locations in shared libraries loaded by the application, you also need to enable the "lttng_ust_dl:*" events, as well as preload the UST library providing them when running your program:

 LD_PRELOAD=/path/to/liblttng-ust-dl.so ./myprogram

If all the required information is present, then the ''Source Location'' column of the Event Table should be populated accordingly, and the ''Open Source Code'' action should be available. Refer to the section [[#Event Source Lookup]] for more details.

The ''Binary Location'' information should be present even if the original binaries are not available, since it only makes use of information found in the trace. A '''+''' denotes a relative address (i.e. an offset within the object itself), whereas a '''@''' denotes an absolute address, for non-position-independent objects.

[[Image:images/sourceLookup/trace-with-debug-info.png]]

''Example of a trace with debug info and corresponding Source Lookup information, showing a tracepoint originating from a shared library''

=== Binary file location configuration ===

To resolve addresses to function names and source code locations, the analysis makes use of the binary files (executables or shared libraries) present on the system. By default, it will look for the file paths as they are found in the trace, which means it should work out of the box if the trace was taken on the same machine that Trace Compass is running on.
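To illustrate the kind of lookup described above, here is a minimal, hypothetical sketch (not Trace Compass code) of resolving a binary path recorded in a trace, optionally under a configured root directory:

```python
import os.path

def resolve_binary_path(trace_path, root_dir=None):
    """Resolve a binary path recorded in a trace to a local file path.

    Hypothetical helper: by default the path is used as-is; if a root
    directory is configured, it is prepended to the trace path.
    """
    if root_dir is None:
        return trace_path
    # Strip the leading '/' so os.path.join keeps the root prefix
    return os.path.join(root_dir, trace_path.lstrip("/"))

print(resolve_binary_path("/usr/bin/program"))
print(resolve_binary_path("/usr/bin/program", "/home/user/project/image"))
```

The function name and behavior are illustrative only; the actual resolution logic lives inside Trace Compass's symbol providers.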
It is possible to configure a ''root directory'' that will be used as a prefix for all file path resolutions. The button to open the configuration dialog is called '''Configure how addresses are mapped to function names''' and is currently located in the [[#Call Stack View]]. Note that the Call Stack View will also make use of this configuration to resolve its function names.

[[Image:images/sourceLookup/symbol-mapping-config-ust28.png]]

''The symbol configuration dialog for LTTng-UST 2.8+ traces''

This can be useful if a trace was taken on a remote target and an image of that target is available locally.

If a binary file is being traced on a target, the paths in the trace will refer to the paths on the target. For example, if they are:

* /usr/bin/program
* /usr/lib/libsomething.so
* /usr/local/lib/libcustom.so

and an image of that target is copied locally on the system at ''/home/user/project/image'', so that the binaries above end up at:

* /home/user/project/image/usr/bin/program
* /home/user/project/image/usr/lib/libsomething.so
* /home/user/project/image/usr/local/lib/libcustom.so

then selecting the ''/home/user/project/image'' directory in the configuration dialog above will allow Trace Compass to read the debug symbols correctly.

Note that this path prefix will apply to both binary file and source file locations, which may or may not be desirable.

= Trace synchronization =

It is possible to synchronize traces from different machines so that they have the same time reference. Events from the reference trace will have their usual timestamps, while events from the traces synchronized with it will have their timestamps transformed according to the formula obtained after synchronization.
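The transformation obtained by synchronization is typically linear (ts' = a * ts + b). A hedged sketch of applying such a formula, with made-up coefficients (this is not the synchronization algorithm itself, only the final transform step):

```python
def make_sync_transform(slope, offset_ns):
    """Return a function applying a linear timestamp transform
    ts' = slope * ts + offset_ns. Coefficients are illustrative;
    real ones come from the synchronization algorithm."""
    def transform(ts_ns):
        return round(slope * ts_ns + offset_ns)
    return transform

# Hypothetical coefficients computed by a synchronization run
sync = make_sync_transform(1.000000015, -2_500)
print(sync(1_000_000_000))
```

Every event timestamp of the synchronized trace would be passed through such a function before being displayed alongside the reference trace.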
Click the '''Import''' button and select a file from the opened file dialog to import an XML analysis.

* Export

Select an XML file from the list, click the '''Export''' button and enter or select a file in the opened file dialog to export the XML analysis. Note that if an existing file containing an analysis is selected, its content will be replaced with the analysis to export.

* Edit

Select an XML file from the list and click '''Edit''' to open the XML editor. When the file is saved after being modified, it is validated, and the traces affected by this file are closed.

* Delete

Select an XML file from the list and click the '''Delete''' button to remove the XML file. Deleting an XML file will close all the traces to which this analysis applies and remove the analysis.

If predefined values will be used in the state provider, they must be defined before the state providers. They can then be referred to in the state changes by name, preceded by the '$' sign. It is not necessary to use predefined values; a state change can use values like (100, 101, 102) directly.
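As a sketch, a predefined value declaration and a state change referring to it might look like this (names are invented for illustration; check the exact element and attribute names against the XSD):

```xml
<definedValue name="PROCESS_STATUS_RUN_USERMODE" value="100" />

<!-- Later, in a state change, the value is referenced with '$' -->
<stateChange>
    <stateAttribute type="constant" value="Status" />
    <stateValue type="int" value="$PROCESS_STATUS_RUN_USERMODE" />
</stateChange>
```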
If modifications are made to the XML state provider after it has been "published", the '''version''' attribute of the '''xmlStateProvider''' element should be updated. This avoids having to delete each trace's supplementary file manually. If the saved state system used a previous version, it will automatically be rebuilt from the XML file.

== Defining an XML pattern provider ==

Patterns exist within an execution trace that can provide high-level details about the system execution. A '''pattern''' is a particular combination of events or states that are expected to occur within a trace. It may be composed of several state machines that inherit from or communicate through a common state system.

A pattern may have multiple instances (scenarios) of a running state machine. Each scenario has its own path in the state system and can generate segments to populate the data-driven views.

=== The state system structure ===

The pattern analysis generates a predefined attribute tree, described as follows:

 |- state machines
 |  |- state machine 0
 |  |  |- scenario 0
 |  |  |  |- status
 |  |  |  |- state
 |  |  |  |  |- start
 |  |  |  |  ...
 |  |  |  |- storedFields
 |  |  |  |  |- field 1
 |  |  |  |  ...
 |  |  |  |- startTime
 |  |  |  ...
 |  |  |- scenario 1
 |  |  ...
 |  |- state machine 1
 |  ...

The user can add custom data in this tree, or define their own attribute tree beside this one.

=== Writing the XML pattern provider ===

Details about the XML structure are available in the XSD files.

First define the pattern element. Like the state provider element described in [[#Writing_the_XML_state_provider | Writing the XML state provider]], it has a "version" attribute and an "id" attribute.

Optional header information as well as predefined values, as described in [[#Writing_the_XML_state_provider | Writing the XML state provider]], can be added.
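To give an idea of the overall shape, here is a hedged sketch of a pattern element matching system calls (element and attribute names are approximate and should be checked against the XSD files):

```xml
<pattern version="0" id="my.syscall.pattern">
    <head>
        <traceType id="org.eclipse.linuxtools.lttng2.kernel.tracetype" />
        <label value="System call pattern" />
    </head>
    <patternHandler>
        <!-- Tests, actions and FSMs describing the pattern go here -->
        <fsm id="syscall" initial="start">
            <state id="start">
                <transition event="syscall_entry_*" target="in_progress" />
            </state>
            <state id="in_progress">
                <transition event="syscall_exit_*" target="done" />
            </state>
            <final id="done" />
        </fsm>
    </patternHandler>
</pattern>
```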
Stored values can be added before the pattern handler. The predefined action '''saveStoredField''' triggers the update of the stored fields, and the predefined action '''clearStoredFields''' resets the values.

The behavior of the pattern and the models it needs are described in the pattern handler element.

The structure of the state machines (FSM) is based on the SCXML structure. For example, an FSM can be written that matches all the system calls in an LTTng kernel trace.

The value of the cond attribute corresponds to the 'id' of a test element described in the XML file and is a reference to it. Similarly, the value of the action attribute corresponds to the 'id' of an action element described in the XML file and is a reference to it.

Conditions are used in the transitions to switch between the states of an FSM. They are defined under the '''test''' element. Two types of conditions are allowed: '''data conditions''' and '''time conditions'''. It is possible to combine several conditions using a logical operator (OR, AND, ...).

Data conditions test the ongoing event's information against the data in the state system or against constant values. For example, a data condition can test whether the current thread running on the CPU is also the ongoing scenario's thread.

Two types of time conditions are available:

* Time range conditions test whether the ongoing event happens within a specific range of time, e.g. between 1 nanosecond and 3 nanoseconds.

* Elapsed time conditions test the time spent since a specific state of an FSM, e.g. whether the ongoing event happens less than 3 nanoseconds after the scenario reached the state "syscall_entry_x".

Two types of actions are allowed:

* State changes update values of attributes in the state system.
A state change can, for example, set the value of the thread for the current scenario.

* Generate segments, for example a segment representing a system call.

When they exist, the stored fields will be added as fields of the generated segments.

A complete XML file can be obtained by combining all the example models above.

=== Representing the scenarios ===

Segments generated by the pattern analysis are used to populate latency views. A description of these views can be found in [[#Latency_Analyses | Latency Analyses]].

The full system call pattern analysis described above generates the following views:

* Latency Table

[[Image:images/XMLPatternAnalysis/LatencyTable.png| Latency Table example - System Call pattern]]

* Latency vs Time

[[Image:images/XMLPatternAnalysis/LatencyVSTime.png| Latency vs Time example - System Call pattern]]

* Latency Statistics

[[Image:images/XMLPatternAnalysis/LatencyStatistics.png| Latency Statistics example - System Call pattern]]

* Latency vs Count

[[Image:images/XMLPatternAnalysis/LatencyVSCount.png| Latency vs Count example - System Call pattern]]

== Defining an XML time graph view ==

A time graph view is a view divided in two, with a tree viewer on the left showing information on the different entries to display and a Gantt-like viewer on the right, showing the state of the entries over time. The [[#Control_Flow_View | Control Flow View]] is an example of a time graph view.

Such views can be defined in XML using the data in the state system.
The state system itself could have been built by an XML-defined state provider or by any predefined Java analysis. It only requires knowing the structure of the state system, which can be explored using the [[#State System Explorer View | State System Explorer View]] (or programmatically using the methods in ''ITmfStateSystem'').

In the example above, suppose we want to display the status for each task. In the state system, this means the path of the entries to display is "Tasks/*". The attribute whose value should be shown in the Gantt chart is the entry attribute itself, so the XML view definition would use "Tasks/*" as the entry path and display the entry attribute.

The following screenshot shows the result of the preceding example on a test trace.

[[Image:images/Xml_analysis_screenshot.png| XML analysis with view]]

==== Using the keyboard ====

*'''Ctrl + F''': Search in the view. (see [[#Searching in Time Graph Views | Searching in Time Graph Views]])

== Defining an XML XY chart ==

An XY chart displays series as a set of numerical values over time. The X-axis represents the time and is synchronized with the trace's current time range. The Y-axis can be any numerical value.

Such views can be defined in XML using the data in the state system. The state system itself could have been built by an XML-defined state provider or by any predefined Java analysis. It only requires knowing the structure of the state system, which can be explored using the [[#State System Explorer View | State System Explorer View]] (or programmatically using the methods in ''ITmfStateSystem'').
We will use the Linux Kernel Analysis on LTTng kernel traces to show an example XY chart. In this state system, the status of each CPU is a numerical value. We will display this value as the Y axis of the series, with one series per CPU. Like for the time graph views, optional header information can be added to the view definition.

= Latency Analyses =

Trace Compass offers a feature called Latency analysis. This allows an analysis to return intervals, and these intervals will be displayed in four different views. An example analysis is provided, with kernel system call latencies. The available views are:

* System Call Latencies

A '''table''' of the raw latencies. This view is useful to inspect individual latencies.

[[Image:images/LatenciesTable.png| Latency Table example - System Call Latencies]]

* System Call Latency vs Time

A time aligned '''scatter chart''' of the latencies with respect to the current time range.

* System Call Latency Statistics

A view of the total '''statistics''' of the latencies. These show the ''minimum'', ''maximum'', ''average'', ''standard deviation'', and ''count'' of the latencies when applicable. This tool is useful for finding outliers on a per-category basis.
Right-clicking an entry of the table and selecting '''Go to minimum''' selects the range of the minimum latency for that entry and synchronizes the other views to this time range. Similarly, selecting '''Go to maximum''' selects the range of the maximum latency for that entry and synchronizes the other views to this time range.

[[Image:images/LatenciesStatistics.png| Latency Statistics example - System Call Latency Statistics]]

* System Call Density

A '''density''' view, analyzing the current time range. This is useful to find global outliers. Selecting a duration in the table synchronizes the other views to this time range.

[[Image:images/LatenciesDensity.png| Latency Densities example - System Call Density]]

Dragging horizontally with the right mouse button updates the table and graph to show only the density for the selected durations. Durations outside the selection range are filtered out. The toolbar button [[Image:images/zoomout_nav.gif]] resets the zoom range.

= Virtual Machine Analysis =

Virtual environments are usually composed of host machines, each running a hypervisor program on which one or many guests can run. Tracing a guest machine alone can often yield strange results: from its point of view, it has full use of the resources, but in reality most resources are shared with the host and other guests.

The entries for each thread of the machine correspond to the ones from the [[#Control_Flow_View | Control Flow View]].

[[Image:images/vmAnalysis/VM_CPU_view.png | Virtual CPU view]]

==== Using the keyboard ====

*'''Ctrl + F''': Search in the view.
(see [[#Searching in Time Graph Views | Searching in Time Graph Views]])

== Hypervisor-specific Tracing ==

In order to be able to correlate data from the guest and host traces, each hypervisor supported by Trace Compass requires some specific events that are sometimes not available in the default installation of the tracer.

Host and guests can now be traced together and their traces added to an experiment. Because each guest has a different clock than the host, it is necessary to synchronize the traces together. Unfortunately, automatic synchronization with the virtual machine events is not completely implemented yet, so another kind of synchronization needs to be done, with TCP packets for instance. See the section on [[#Trace synchronization | trace synchronization]] for information on how to obtain synchronizable traces.

= Java Logging =

Trace Compass contains some Java Util Logging (JUL) tracepoints in various places in the code. To diagnose issues with Trace Compass, or when reporting problems with the application, a JUL trace may be useful to help pinpoint the problem. The following sections explain how to enable JUL logging in Trace Compass and how to use various handlers to handle the data.

== Enable JUL Logging ==

By default, all the logging of the Trace Compass namespace is disabled. To enable it, one needs to add the following property to the ''vmargs'': ''-Dorg.eclipse.tracecompass.logging=true''.

The log levels and components can be controlled via a configuration file whose path is also specified in the ''vmargs'', like this: ''-Djava.util.logging.config.file=/path/to/logger.properties''. An example configuration file can be found in the next section.

If running the RCP, these arguments can be appended at the end of the ''tracecompass.ini'' file located in the folder where the executable is located.
If running from Eclipse in development mode, the arguments should be added in the ''Run configurations...'' dialog, in the ''Arguments'' tab, in the ''VM args'' box.

== Configuring JUL logging ==

JUL logging can be fine-tuned to log only specific components or specific levels, but also to use different log handlers, different formats, etc. Otherwise, the default level is INFO and the default log handler is a ConsoleHandler, which displays all log messages to the console and can be quite cumbersome.

Here is an example ''logger.properties'' file to control what is being logged and where:

 # Specify the handlers to create in the root logger
 # (all loggers are children of the root logger)
 # These are example handlers

 # Console handler
 handlers = java.util.logging.ConsoleHandler
 # Console and file handlers
 #handlers = java.util.logging.ConsoleHandler, java.util.logging.FileHandler
 # No handler
 #handlers =

 # Set the default logging level for the root logger
 # Possible values: OFF, SEVERE, WARNING, INFO, CONFIG, FINE, FINER, FINEST, ALL
 .level = OFF

 # Fine-tune log levels for specific components
 # Use the INFO level for all of Trace Compass, but FINEST for the StateSystem component
 #org.eclipse.tracecompass.internal.statesystem.core.StateSystem.level = FINEST
 org.eclipse.tracecompass.level = INFO

== LTTng JUL log handler ==

The various log handlers have an overhead on the application. The ConsoleHandler has a visible impact on Trace Compass performance. The FileHandler also has an overhead, though a less visible one; but when logging from multiple threads at the same time, the file becomes a bottleneck, so the logged data cannot be used with accuracy for performance analysis. The [http://lttng.org/docs/#doc-java-application LTTng log handler] behaves much better in a multi-threaded context.

LTTng-UST comes with the Java JUL agent in most distributions.
Otherwise, it is possible to manually compile lttng-ust with the ''--enable-java-agent-jul'' option and install it:

 git clone git://git.lttng.org/lttng-ust.git
 cd lttng-ust
 ./bootstrap
 ./configure --enable-java-agent-jul
 make
 sudo make install

The necessary classes for the Java agent will have been installed on the system. Since Equinox (the OSGi implementation used by Eclipse and thus Trace Compass) uses its own classpath and, for security reasons, ignores any classpath entered on the command line, one needs to specify the agent class path with the bootclasspath argument:

 -Xbootclasspath/a:/usr/local/share/java/lttng-ust-agent-jul.jar:/usr/local/share/java/lttng-ust-agent-common.jar

Note that unlike the -classpath argument, -Xbootclasspath does not follow the dependencies specified by a jar's manifest, so it is required to list both the -jul and the -common jars here.

These classes need to load the LTTng JNI library. Because they were loaded from the boot class path by the boot ClassLoader, the library path entered on the command line is ignored. A workaround is to manually copy the library to the JVM's main library path, for example:

 sudo cp /usr/local/lib/liblttng-ust-jul-jni.so /usr/lib/jvm/java-8-openjdk/jre/lib/amd64/

Or to overwrite the JVM's library path with the following VM argument:

 -Dsun.boot.library.path=/usr/local/lib

''Disclaimer: this last method overwrites the main Java library path. It may have unknown side effects, though none have been found yet.''

LTTng can now be used as a handler for Trace Compass's JUL by adding the following line to the logger.properties file:

 handlers = org.lttng.ust.agent.jul.LttngLogHandler

The tracepoints recorded will be those allowed by the previously defined configuration file. Here is how to set up LTTng to handle JUL logging:

 lttng create
 lttng enable-event -j -a
 lttng start

= Limitations =

* When parsing text traces, the timestamps are assumed to be in the local time zone.
This means that when combining them with CTF binary traces, there could be offsets of a few hours, depending on where the traces were taken and where they were read.
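To see how large such an offset can be, this hedged sketch compares the same wall-clock time interpreted in two different time zones (requires the zoneinfo/tzdata database, Python 3.9+; the date and zones are arbitrary examples):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

# The same wall-clock timestamp, read in two different time zones
wall_clock = datetime(2023, 5, 1, 12, 0, 0)
as_paris = wall_clock.replace(tzinfo=ZoneInfo("Europe/Paris"))
as_utc = wall_clock.replace(tzinfo=ZoneInfo("UTC"))

# Interpreting a text-trace timestamp in the wrong zone shifts every
# event by the whole UTC offset (2 hours here, with daylight saving)
offset_hours = (as_utc.timestamp() - as_paris.timestamp()) / 3600
print(offset_hours)
```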