X-Git-Url: http://git.efficios.com/?a=blobdiff_plain;ds=sidebyside;f=doc%2Forg.eclipse.tracecompass.doc.user%2Fdoc%2FUser-Guide.mediawiki;h=b4a37db26ca8d31458dbbc5bb2b33986359ae4a7;hb=a212ec16563abd328568ff889e24608349a73ea7;hp=98197aa63028c5ac0757fbab3b0a17bbca79ac74;hpb=494d213d9f24f962ac538a065ad3bbc1049d3d6e;p=deliverable%2Ftracecompass.git
diff --git a/doc/org.eclipse.tracecompass.doc.user/doc/User-Guide.mediawiki b/doc/org.eclipse.tracecompass.doc.user/doc/User-Guide.mediawiki
index 98197aa630..b4a37db26c 100644
--- a/doc/org.eclipse.tracecompass.doc.user/doc/User-Guide.mediawiki
+++ b/doc/org.eclipse.tracecompass.doc.user/doc/User-Guide.mediawiki
@@ -246,6 +246,8 @@ Note that traces of certain types (e.g. LTTng Kernel) are actually a composite o
The option '''Preserve folder structure''' will create, if necessary, the structure of folders relative to (and excluding) the selected '''Root directory''' (or '''Archive file''') into the target trace folder.
+The option '''Create Experiment''' will create an experiment with all imported traces. By default, the experiment name is the '''Root directory''' name when importing from a directory, or the '''Archive file''' name when importing from an archive. One can change the experiment name by typing a new name in the text box beside the option.
+
[[Image:images/ProjectImportTraceDialog.png]]
If a trace already exists with the same name in the target trace folder, the user can choose to rename the imported trace, overwrite the original trace or skip the trace. When rename is chosen, a number is appended to the trace name, for example smalltrace becomes smalltrace(2).
@@ -330,6 +332,9 @@ If the wizard was opened using the File menu, the destination project has to be
When Finish is clicked, the trace is imported in the target folder. The folder structure from the trace package is restored in the target folder.
+=== Refreshing of Trace and Trace Folder ===
+Traces and trace folders in the workspace might have been updated on the storage media. To refresh the content, right-click the trace or trace folder and select the menu item '''Refresh'''. Alternatively, select the trace or trace folder and press the '''F5''' key.
+
=== Remote Fetching ===
It is possible to import traces automatically from one or more remote hosts according to a predefined remote profile by using the '''Fetch Remote Traces''' wizard.
@@ -569,7 +574,7 @@ The header displays the current trace (or experiment) name.
The columns of the table are defined by the fields (aspects) of the specific trace type. These are the defaults:
* '''Timestamp''': the event timestamp
-* '''Type''': the event type
+* '''Event Type''': the event type
* '''Contents''': the fields (or payload) of this event
The first row of the table is the header row, a.k.a. the Search and Filter row.
@@ -586,13 +591,15 @@ The Events editor can be closed, disposing a trace. When this is done, all the v
Searching and filtering of events in the table can be performed by entering matching conditions in one or multiple columns in the header row (the first row below the column header).
-To toggle between searching and filtering, click on the 'search' ([[Image:images/TmfEventSearch.gif]]) or 'filter' ([[Image:images/TmfEventFilter.gif]]) icon in the header row's left margin, or right-click on the header row and select '''Show Filter Bar''' or '''Show Search Bar''' in the context menu.
+To apply a matching condition to a specific column, click on the column's header row cell and type in a [http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html regular expression]. You can also enter a simple text string and it will automatically be replaced with a 'contains' regular expression.
-To apply a matching condition to a specific column, click on the column's header row cell, type in a [http://download.oracle.com/javase/7/docs/api/java/util/regex/Pattern.html regular expression] and press the '''ENTER''' key. You can also enter a simple text string and it will be automatically be replaced with a 'contains' regular expression.
+Press the '''Enter''' key to apply the condition as a search condition. It will be added to any existing search conditions.
+
+Press the '''Ctrl+Enter''' key to immediately add the condition (and any other existing search conditions) as a filter instead.
When matching conditions are applied to two or more columns, all conditions must be met for the event to match (i.e. 'and' behavior).
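The matching behavior described above (a plain string behaving like a 'contains' regular expression, with 'and' semantics across columns) can be sketched in Python. This is only an illustration of the behavior, not Trace Compass's actual implementation; the column names and events are hypothetical:

```python
import re

def to_condition(text):
    # A plain string works as a 'contains' match because re.search
    # scans anywhere in the value (illustration only).
    return re.compile(text)

def event_matches(event, conditions):
    # 'and' semantics: every column condition must match its cell.
    return all(cond.search(str(event.get(col, ""))) is not None
               for col, cond in conditions.items())

events = [
    {"Timestamp": "10:00:01.500", "Event Type": "sched_switch", "Contents": "prev=bash"},
    {"Timestamp": "10:00:02.100", "Event Type": "syscall_entry_open", "Contents": "file=/etc/passwd"},
]
conditions = {"Event Type": to_condition("sched"), "Contents": to_condition("bash")}
matching = [e for e in events if event_matches(e, conditions)]
```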
-To clear all matching conditions in the header row, press the '''DEL''' key.
+A preset filter created in the [[#Filters_View | Filters]] view can also be applied by right-clicking on the table and selecting '''Apply preset filter...''' > ''filter name''.
==== Searching ====
@@ -602,25 +609,33 @@ All matching events will have a 'search match' icon in their left margin. Non-ma
[[Image:images/TraceEditor-Search.png]]
-Pressing the '''ENTER''' key will search and select the next matching event. Pressing the '''SHIFT-ENTER''' key will search and select the previous matching event. Wrapping will occur in both directions.
+Pressing the '''Enter''' key will search and select the next matching event. Pressing the '''Shift+Enter''' key will search and select the previous matching event. Wrapping will occur in both directions.
+
+Press '''Esc''' to cancel an ongoing search.
-Press '''ESC''' to cancel an ongoing search.
+To add the currently applied search condition(s) as filter(s), click the '''Add as Filter''' [[Image:images/filter_add.gif]] button in the header row margin, or press the '''Ctrl+Enter''' key.
-Press '''DEL''' to clear the header row and reset all events to normal.
+Press '''Delete''' to clear the header row and reset all events to normal.
==== Filtering ====
-When a filtering condition is entered in the head row, the table will clear all events and fill itself with matching events as they are found from the beginning of the trace. The characters in each column which match the regular expression will be highlighted.
+When a new filter is applied, the table will clear all events and fill itself with matching events as they are found from the beginning of the trace. The characters in each column which match the regular expression will be highlighted.
A status row will be displayed before and after the matching events, dynamically showing how many matching events were found and how many events were processed so far. Once the filtering is completed, the status row icon in the left margin will change from a 'stop' to a 'filter' icon.
[[Image:images/TraceEditor-Filter.png]]
-Press '''ESC''' to stop an ongoing filtering. In this case the status row icon will remain as a 'stop' icon to indicate that not all events were processed.
+Press '''Esc''' to stop an ongoing filtering. In this case the status row icon will remain as a 'stop' icon to indicate that not all events were processed.
+
+The header bar will be displayed above the table and will show a label for each applied filter. Clicking on a label will highlight the matching strings in the events that correspond to this filter condition. Pressing the '''Delete''' key will clear this highlighting.
+
+To remove a specific filter, click on the [[Image:images/delete_button.gif]] icon on its label in the header bar. The table will be updated with the events matching the remaining filters.
+
+The header bar can be collapsed and expanded by clicking on the [[Image:images/expanded_ovr.gif]][[Image:images/collapsed_ovr.gif]] icons in the top-left corner or on its background. In collapsed mode, a minimized version of the filter labels will be shown that can also be used to highlight or remove the corresponding filter.
-Press '''DEL''' or right-click on the table and select '''Clear Filters''' from the context menu to clear the header row and remove the filtering. All trace events will be now shown in the table. Note that the currently selected event will remain selected even after the filter is removed.
+Right-click on the table and select '''Clear Filters''' from the context menu to remove all filters. All trace events will now be shown in the table. Note that the currently selected event will remain selected even after the filters are removed.
-You can also search on the subset of filtered events by toggling the header row to the Search Bar while a filter is applied. Searching and filtering conditions are independent of each other.
+You can also search on the subset of filtered events by entering a search condition in the header row while a filter is applied. Searching and filtering conditions are independent of each other.
==== Bookmarking ====
@@ -642,7 +657,7 @@ The text of selected events can be copied to the clipboard by right-clicking on
=== Event Source Lookup ===
-For CTF traces using specification v1.8.2 or above, information can optionally be embedded in the trace to indicate the source of a trace event. This is accessed through the event context menu by right-clicking on an event in the table.
+Some trace types can optionally embed information in the trace to indicate the source of a trace event. This is accessed through the event context menu by right-clicking on an event in the table.
==== Source Code ====
@@ -658,6 +673,10 @@ It is possible to export the content of the trace to a text file based on the co
''Note'': The columns in the text file are separated by tabs.
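Since the exported file is tab-separated, it can be post-processed with standard tools. A minimal Python sketch; the column names and rows below are hypothetical and would match whatever columns were visible in the table:

```python
import csv
import io

# Hypothetical excerpt of an exported trace text file (tab-separated).
exported = (
    "Timestamp\tEvent Type\tContents\n"
    "10:00:01.500\tsched_switch\tprev=bash\n"
    "10:00:02.100\tsyscall_entry_open\tfile=/etc/passwd\n"
)

# csv with delimiter='\t' parses the tab-separated export directly.
rows = list(csv.DictReader(io.StringIO(exported), delimiter="\t"))
```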
+=== Refreshing of Trace ===
+
+It is possible to refresh the content of the trace and resume indexing in case the currently open trace was updated on the storage media. To refresh the trace, right-click in the table and select the menu item '''Refresh'''. Alternatively, press the '''F5''' key.
+
=== Collapsing of Repetitive Events ===
The implementation for collapsing of repetitive events is trace type specific and is only available for certain trace types. For example, a trace type could allow collapsing of consecutive events that have the same event content but not the same timestamp. If a trace type supports this feature then it is possible to select the '''Collapse Events''' menu item after pressing the right mouse button in the table.
@@ -670,7 +689,7 @@ A status row will be displayed before and after the events, dynamically showing
[[Image:images/TablePostCollapse.png]]
-To clear collapsing, press the right mouse button in the table and select menu item '''Clear Filters''' in the context sensitive menu. ''Note'' that collapsing is also removed when another filter is applied to the table.
+To remove the collapse filter, click the ([[Image:images/delete_button.gif]]) icon on the '''Collapse''' label in the header bar, or press the right mouse button in the table and select the menu item '''Clear Filters''' in the context-sensitive menu (this will also remove any other filters).
=== Customization ===
@@ -738,9 +757,17 @@ In each histogram, the following keys are handled:
== Statistics View ==
-The Statistics View displays the various event counters that are collected when analyzing a trace. The data is organized per trace. After opening a trace, the element '''Statistics''' is added under the '''Tmf Statistics Analysis''' tree element in the Project Explorer. To open the view, double-click the '''Statistics''' tree element. Alternatively, select '''Statistics''' under '''Tracing''' within the '''Show View''' window ('''Window''' -> '''Show View''' -> '''Other...'''). This view shows 3 columns: ''Level'' ''Events total'' and ''Events in selected time range''. After parsing a trace the view will display the number of events per event type in the second column and in the third, the currently selected time range's event type distribution is shown. The cells where the number of events are printed also contain a colored bar with a number that indicates the percentage of the event count in relation to the total number of events. The statistics is collected for the whole trace. This view is part of the '''Tracing and Monitoring Framework (TMF)''' and is generic. It will work for any trace type extensions. For the LTTng 2.0 integration the Statistics view will display statistics as shown below.:
+The Statistics View displays the various event counters that are collected when analyzing a trace. After opening a trace, the element '''Statistics''' is added under the '''Tmf Statistics Analysis''' tree element in the Project Explorer. To open the view, double-click the '''Statistics''' tree element. Alternatively, select '''Statistics''' under '''Tracing''' within the '''Show View''' window ('''Window''' -> '''Show View''' -> '''Other...'''). The statistics are collected for the whole trace. This view is part of the '''Tracing and Monitoring Framework (TMF)''' and is generic: it will work for any trace type extension.
+
+The view is divided into two parts. The left side presents the statistics in a table with 3 columns: ''Level'', ''Events total'' and ''Events in selected time range''. The data is organized per trace. After parsing a trace, the view displays the number of events per event type in the second column and, in the third, the event type distribution of the currently selected time range. The cells where the number of events is printed also contain a colored bar with a number that indicates the percentage of the event count in relation to the total number of events.
-[[Image:images/LTTng2StatisticsView.png]]
+[[Image:images/LTTng2StatisticsTableView.png]]
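The per-type counts and percentage bars shown in the statistics table amount to a simple aggregation. A Python sketch with hypothetical event types (illustration only, not Trace Compass code):

```python
from collections import Counter

# Hypothetical list of event type names as they would appear in the table.
event_types = ["sched_switch"] * 6 + ["syscall_entry_open"] * 3 + ["irq_handler_entry"]

counts = Counter(event_types)
total = sum(counts.values())
# Percentage of each event type relative to the total number of events,
# as shown by the colored bars in the statistics table.
percentages = {t: 100.0 * n / total for t, n in counts.items()}
```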
+
+The right side illustrates the proportion of event types in two pie charts. The legend of each pie chart shows which event type each color represents.
+* The ''Global'' pie chart displays the overall proportion of the events in the trace.
+* When there is a range selection, the ''Events in selection'' pie chart appears next to the ''Global'' pie chart and displays the proportion of events in the selected time range of the trace.
+
+[[Image:images/LTTng2StatisticsPieChartView.png]]
By default, the statistics use a state system and will therefore load very quickly once the state system is written to disk as a supplementary file.
@@ -816,6 +843,74 @@ The view shows a tree of currently selected traces and their registered state sy
To modify the time of attributes shown in the view, select a different current time in other views that support time synchronization (e.g. event table, histogram view). When a time range is selected, this view uses the begin time.
+== External Analyses ==
+
+Trace Compass supports the execution of '''external analyses''' conforming to the machine interface of [https://github.com/lttng/lttng-analyses/releases/tag/v0.4.3 LTTng-Analyses 0.4.3], or any later [https://github.com/lttng/lttng-analyses/releases LTTng-Analyses 0.4.x] version. Later (0.5+) versions of LTTng-Analyses will be supported by later versions of Trace Compass.
+
+An external analysis is a [[#Run an External Analysis|program executed by Trace Compass]]. When the program is done analyzing, Trace Compass generates a '''[[#Open a Report|report]]''' containing its results. A report contains one or more tables which can also be viewed as bar and scatter [[#Create a Chart from a Result Table|charts]].
+
+'''Note''': The program to execute is found by searching the directories listed in the standard <code>$PATH</code> environment variable when no path separator (<code>/</code> on Unix and OS X, <code>\</code> on Windows) is found in its command.
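The lookup rule in the note above (use the command as-is when it contains a path separator, otherwise search `$PATH`) can be sketched as follows. `resolve_command` is a hypothetical helper written for illustration; `shutil.which` performs the actual `$PATH` search:

```python
import os
import shutil

def resolve_command(cmd):
    # A command containing a path separator is used as-is;
    # otherwise the directories listed in $PATH are searched.
    if os.sep in cmd or (os.altsep is not None and os.altsep in cmd):
        return cmd
    return shutil.which(cmd)

# './analysis.py' is a hypothetical relative path: it contains a path
# separator, so no $PATH search happens and it is returned unchanged.
relative = resolve_command("./analysis.py")
# A nonexistent bare command name is searched in $PATH and not found.
missing = resolve_command("definitely-not-a-real-command-zzz")
```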
+
+Trace Compass ships with a default list of ''descriptors'' of external analyses (not the analyses themselves), including the descriptors of the [http://github.com/lttng/lttng-analyses LTTng analyses]. If the LTTng analyses project is installed, its analyses are available when opening or importing an LTTng kernel trace.
+
+=== Run an External Analysis ===
+
+To run an external analysis:
+
+# [[#Importing Traces to the Project|Import a trace to the project]].
+# Make sure the trace is opened by double-clicking its name in the [[#Project Explorer View]].
+# Under the trace in the [[#Project Explorer View]], expand ''External Analyses'' to view the list of available external analyses.
+The external analyses which are either missing or not compatible with the trace are struck out and cannot be executed.
+[[Image:images/externalAnalyses/external-analyses-list.png]]
+# '''Optional''': If you want the external analysis to analyze a specific time range of the current trace, make a time range selection.
+You can use views like the [[#Histogram View]] and the [[#Control Flow View]] (if it's available for this trace) to make a time range selection.
+External analyses are executed on the current time range selection if there is one, or on the whole trace otherwise.
+# Right-click the external analysis to run and click '''Run External Analysis'''.[[Image:images/externalAnalyses/run-external-analysis.png]]
+# In the opened ''External Analysis Parameters'' window, optionally enter extra parameters to pass to the program.[[Image:images/externalAnalyses/external-analysis-parameters-dialog.png]]
+# Click '''OK''' to start the analysis.
+
+Note that many external analyses can be started concurrently.
+
+When the external analysis is done analyzing, its results are saved as a [[#Open a Report|report]] in Trace Compass. The tables contained in this report are also automatically opened into a new report view when the analysis is finished.
+
+=== Open a Report ===
+
+A '''report''' is created after a successful [[#Run an External Analysis|execution of an external analysis]].
+
+To open a report:
+
+* Under ''Reports'' under a trace in the [[#Project Explorer View]], double-click the report to open.
+Each result table generated by the external analysis is shown in its own tab in the opened report view.
+[[Image:images/externalAnalyses/report-view.png]]
+
+=== Create a Chart from a Result Table ===
+
+To create a bar or a scatter chart from the data of a given result table:
+
+# [[#Open a Report|Open the report]] containing the result table to use for creating the chart.
+# In the opened report view, click the tab of the result table to use for creating the chart.
+# Click the ''View Menu'' button, then click either '''New custom bar chart''' or '''New custom scatter chart'''.[[Image:images/externalAnalyses/new-custom-scatter-chart-menu.png]]
+# In the opened ''Bar chart series creation'' or ''Scatter chart series creation'' window, under ''Series creator'', select a column to use for the X axis of the chart, and one or more columns to use for the Y axis of the chart, then click '''Add''' to create a series.[[Image:images/externalAnalyses/chart-configuration-dialog.png]]
+Repeat this step to create more series.
+# Click '''OK''' to create the chart.
+The chart is created and shown at the right of its source result table.
+[[Image:images/externalAnalyses/table-and-chart.png]]
+
+=== Show or Hide a Result Table ===
+
+To show or hide a result table once a [[#Create a Chart from a Result Table|chart]] has been created:
+
+* In the report view, click the ''Toggle the Table view of the results'' button.[[Image:images/externalAnalyses/table-and-chart-toggle-button.png]]
+If the result table was visible, it is now hidden:
+[[Image:images/externalAnalyses/chart-only.png]]
+
+=== Add and Remove a User-Defined External Analysis ===
+
+You can add a user-defined external analysis to the current list of external analyses. Note that the command to invoke must conform to the machine interface of [http://github.com/lttng/lttng-analyses LTTng analyses] 0.4.
+
+'''Note''': If you want to create your own external analysis, consider following the [http://lttng.org/files/lami/lami-1.0.1.html LAMI 1.0 specification], which is supported by later versions of Trace Compass.
+
+To add a user-defined external analysis:
+
+# Under any trace in the [[#Project Explorer View]], right-click ''External Analyses'' and click '''Add External Analysis'''.[[Image:images/externalAnalyses/add-external-analysis.png]]
+# In the opened ''Add External Analysis'' window, enter the name of the new external analysis and the associated command to run.[[Image:images/externalAnalyses/add-external-analysis-dialog.png]]
+The name is the title of the external analysis as shown under ''External Analyses'' in the [[#Project Explorer View]].
+The command is the complete command line to execute. You can put arguments containing spaces or other special characters in double quotes.
+'''Note''': If the command is not a file system path, then it must be found in the directories listed in the <code>$PATH</code> environment variable.
+A user-defined external analysis with a green icon is created under ''External Analyses'' in the [[#Project Explorer View]].
+[[Image:images/externalAnalyses/user-defined-external-analysis.png]]
+
+'''Note''': The new external analysis entry is saved in the workspace.
+
+To remove a user-defined external analysis:
+
+* Under ''External Analyses'' in the [[#Project Explorer View]], right-click the external analysis to remove and click '''Remove External Analysis'''.[[Image:images/externalAnalyses/remove-external-analysis.png]]
+'''Note''': Only user-defined (green icon) external analyses can be removed.
+
== Custom Parsers ==

Custom parser wizards allow the user to define their own parsers for text or XML traces. The user defines how the input should be parsed into internal trace events and identifies the event fields that should be created and displayed. Traces created using a custom parser can be correlated with other built-in traces or traces added by plug-in extension.

@@ -832,23 +927,26 @@ The '''New Custom Text Parser''' wizard can be used to create a custom parser fo

Fill out the first wizard page with the following information:

* '''Category:''' Enter a category name for the trace type.
-* '''Trace type:''' Enter a name for the trace type, which is also the name of the custom parser.
+* '''Trace type:''' Enter a name for the trace type, which is also the name of the custom parser. This will also be the default event type name.
* '''Time Stamp format:''' Enter the date and time pattern that will be used to output the Time Stamp.

-<pre>lttng add-context -u -t vtid -t procname</pre>
+* Set up a tracing session with the ''vpid'', ''vtid'' and ''procname'' contexts. See the [[#Enabling UST Events On Session Level]] and [[#Adding Contexts to Channels and Events of a Domain]] sections. Or if using the command-line:
+** <pre>lttng enable-event -u -a</pre>
+** <pre>lttng add-context -u -t vpid -t vtid -t procname</pre>
* Preload the ''liblttng-ust-cyg-profile'' library when running your program:
** <pre>LD_PRELOAD=/usr/lib/liblttng-ust-cyg-profile.so ./myprogram</pre>

-Once you load the resulting trace, making sure it's set to the ''Common Trace Format - LTTng UST Trace'' type, the Callstack View should be populated with the relevant information. However, since GCC's cyg-profile instrumentation only provides function addresses, and not names, an additional step is required to get the function names showing in the view. The following section explains how to do so.
+Once you load the resulting trace, the Callstack View should be populated with
+the relevant information.
+
+Note that for non-trivial applications, ''liblttng-ust-cyg-profile'' generates a
+'''lot''' of events! You may need to increase the channel's subbuffer size to
+avoid lost events. Refer to the
+[http://lttng.org/docs/#doc-fine-tuning-channels LTTng documentation].
+
+For traces taken with LTTng-UST 2.8 or later, the Callstack View should show the
+function names automatically, since it will make use of the debug information
+statedump events (which are enabled when using ''enable-event -u -a'').
+
+For traces taken with prior versions of UST, you would need to set the path to
+the binary file or mapping manually:

-=== Importing a function name mapping file for LTTng-UST traces ===
+=== Importing a binary or function name mapping file (for LTTng-UST <2.8 traces) ===

-If you followed the steps in the previous section, you should have a Callstack View populated with function entries and exits. However, the view will display the function addresses instead of names in the intervals, which are not very useful by themselves. To get the actual function names, you need to:
+If you followed the steps in the previous section, you should have a Callstack
+View populated with function entries and exits. However, the view will display
+the function addresses instead of names in the intervals, which are not very
+useful by themselves. To get the actual function names, you need to:
+
+* Click the '''Import Mapping File''' ([[Image:images/import.gif]]) button in the Callstack View.
+
+Then either:
+* Point to the binary that was used for taking the trace
+OR
* Generate a mapping file from the binary, using:
** <pre>nm myprogram > mapping.txt</pre>
-* Click the '''Import Mapping File''' ([[Image:images/import.gif]]) button in the Callstack View, and select the ''mapping.txt'' file that was just created.
+** Select the ''mapping.txt'' file that was just created.
+
+(If you are dealing with C++ executables, you may want to use ''nm --demangle''
+instead to get readable function names.)
+
+The view should now update to display the function names instead. Make sure the
+binary used for taking the trace is the one used for this step too (otherwise,
+there is a good chance of the addresses not being the same).

-(If you are dealing with C++ executables, you may want to use ''nm --demangle'' instead to get readable function names.)
+=== Navigation ===
+
+See Control Flow View's '''[[#Using_the_mouse | Using the mouse]]''', '''[[#Using_the_keyboard | Using the keyboard]]''' and '''[[#Zoom_region | Zoom region]]'''.

-The view should now update to display the function names instead. Make sure the binary used for taking the trace is the one used for this step too (otherwise, there is a good chance of the addresses not being the same).
+=== Marker Axis ===
+
+See Control Flow View's '''[[#Marker_Axis | Marker Axis]]'''.

== Memory Usage ==

@@ -2027,11 +2402,13 @@ Please note this view will not show shared memory or stack memory usage.

The Memory Usage chart is usable with the mouse.
The following actions are set:

* '''left-click''': select a time or time range begin time
-* '''left-drag horizontally''': select a time range or change the time range begin or end time
-* '''middle-drag''': pan left or right
-* '''right-drag horizontally''': zoom region
-* '''mouse wheel up/down''': zoom in or out
+* '''Shift-left-click or drag''': Extend or shrink the selection range
+* '''left-drag horizontally''': select a time range or change the time range begin or end time
+* '''middle-drag or Ctrl-left-drag horizontally''': pan left or right
+* '''right-drag horizontally''': [[#Zoom region|zoom region]]
+* '''Shift-mouse wheel up/down''': scroll left or right
+* '''Ctrl-mouse wheel up/down''': zoom in or out horizontally

=== Toolbar ===

@@ -2048,6 +2425,99 @@ The Memory Usage View '''toolbar''', located at the top right of the view, has s

Please note this view will not show shared memory or stack memory usage.

+== Source Lookup (for LTTng-UST 2.8+) ==
+
+Starting with LTTng 2.8, the tracer can now provide enough information to
+associate trace events with their location in the original source code.
+
+To make use of this feature, first make sure your binaries are compiled with
+debug information (-g), so that the instruction pointers can be mapped to source
+code locations. This lookup is made using the ''addr2line'' command-line utility,
+which needs to be installed and on the '''$PATH''' of the system running Trace
+Compass. ''addr2line'' is available in most Linux distributions, on Mac OS X, and on Windows using Cygwin, among others.
+
+The following trace events need to be present in the trace:
+
+* lttng_ust_statedump:start
+* lttng_ust_statedump:end
+* lttng_ust_statedump:bin_info
+* lttng_ust_statedump:build_id
+
+as well as the following contexts:
+
+* vpid
+* ip
+
+For ease of use, you can simply enable all the UST events when setting up your
+session:
+
+  lttng enable-event -u -a
+  lttng add-context -u -t vpid -t ip
+
+Note that you can also create and configure your session using the [[#Control View | Control View]].
+
+If you want to track source locations in shared libraries loaded by the
+application, you also need to enable the "lttng_ust_dl:*" events, as well
+as preload the UST library providing them when running your program:
+
+  LD_PRELOAD=/path/to/liblttng-ust-dl.so ./myprogram
+
+If all the required information is present, then the ''Source Location'' column
+of the Event Table should be populated accordingly, and the ''Open Source Code''
+action should be available. Refer to the section [[#Event Source Lookup]] for
+more details.
+
+The ''Binary Location'' information should be present even if the original
+binaries are not available, since it only makes use of information found in the
+trace. A '''+''' denotes a relative address (i.e. an offset within the object
+itself), whereas a '''@''' denotes an absolute address, for
+non-position-independent objects.
+
+[[Image:images/sourceLookup/trace-with-debug-info.png]]
+
+''Example of a trace with debug info and corresponding Source Lookup information, showing a tracepoint originating from a shared library''
+
+=== Binary file location configuration ===
+
+To resolve addresses to function names and source code locations, the analysis
+makes use of the binary files (executables or shared libraries) present on the
+system. By default, it will look for the file paths as they are found in the
+trace, which means that it should work out-of-the-box if the trace was taken on
+the same machine on which Trace Compass is running.
+
+It is possible to configure a ''root directory'' that will be used as a prefix
+for all file path resolutions. The button to open the configuration dialog is
+called '''Configure how addresses are mapped to function names''' and is
+currently located in the [[#Call Stack View]]. Note that the Call Stack View
+will also make use of this configuration to resolve its function names.
+
+[[Image:images/sourceLookup/symbol-mapping-config-ust28.png]]
+
+''The symbol configuration dialog for LTTng-UST 2.8+ traces''
+
+This can be useful if a trace was taken on a remote target, and an image of that
+target is available locally.
+
+If a binary file is being traced on a target, the paths in the trace will refer
+to the paths on the target. For example, if they are:
+
+* /usr/bin/program
+* /usr/lib/libsomething.so
+* /usr/local/lib/libcustom.so
+
+and an image of that target is copied locally on the system at
+''/home/user/project/image'', which means the binaries above end up at:
+
+* /home/user/project/image/usr/bin/program
+* /home/user/project/image/usr/lib/libsomething.so
+* /home/user/project/image/usr/local/lib/libcustom.so
+
+Then selecting the ''/home/user/project/image'' directory in the configuration
+dialog above will allow Trace Compass to read the debug symbols correctly.
+
+Note that this path prefix will apply to both binary file and source file
+locations, which may or may not be desirable.
+
= Trace synchronization =

It is possible to synchronize traces from different machines so that they have the same time reference. Events from the reference trace will have the same timestamps as usual, but the events from traces synchronized with the first one will have their timestamps transformed according to the formula obtained after synchronization.

@@ -2217,15 +2687,34 @@ This will update all the displayed timestamps.

It is possible to define custom trace analyses and a way to view them in an XML format.
These kind of analyses allow doing more with the trace data than what the default analyses shipped with TMF offer. It can be customized to a specific problem, and fine-tuned to show exactly what you're looking for. -== Importing an XML file containing analysis == +== Managing XML files containing analyses == + +The '''Manage XML Analyses''' dialog is used to manage the list of XML files containing analysis. To open the dialog: -If you already have an XML file defining state providers and/or views, you can import it in your TMF workspace by right-clicking on the ''Traces'' or ''Experiments'' folder and selecting ''Import XML Analysis''. +* Open the '''Project Explorer''' view. +* Select '''Manage XML Analyses...''' from the '''Traces''' folder context menu. + +[[Image:images/ManageXMLAnalysis.png]] + +The list of currently defined XML analyses is displayed on the left side of the dialog. + +The following actions can be performed from this dialog: + +* Import + +Click the '''Import''' button and select a file from the opened file dialog to import an XML file containing an analysis. The file will be validated before importing it and if successful, the new analysis and views will be shown under the traces for which they apply. You will need to close any already opened traces and re-open them before the new analysis can be executed. If an invalid file is selected, an error message will be displayed to the user. -[[Image:images/import_XML_analysis.png| Import XML analysis menu]] +* Export -You will be prompted to select the file. It will be validated before importing it and if successful, the new analysis and views will be shown under the traces for which they apply. You will need to close any already opened traces and re-open them before the new analysis can be executed. +Select an XML file from the list, click the '''Export''' button and enter or select a file in the opened file dialog to export the XML analysis. 
Note that if an existing file containing an analysis is selected, its content will be replaced with the analysis to export.

* Edit

Select an XML file from the list and click the '''Edit''' button to open the XML editor. When the file is saved after being modified, it is validated and the traces that are affected by this file are closed.

* Delete

Select an XML file from the list and click the '''Delete''' button to remove the XML file. Deleting an XML file will close all the traces to which this analysis applies and remove the analysis.

== Defining XML components ==

If predefined values will be used in the state provider, they must be defined before the state providers. They can then be referred to in the state changes by name, preceded by the '$' sign. It is not necessary to use predefined values; the state change can use values like (100, 101, 102) directly.
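As a sketch of how this fits together (the analysis id, value names and event fields below are made up for illustration; the element names follow the XML analysis schema, for which the XSD files shipped with Trace Compass are authoritative), predefined values and their use in a state change could look like:

<pre>
<tmfxml xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <stateProvider id="my.test.state.provider" version="1">
    <!-- Predefined values, declared before the state providers -->
    <definedValue name="RUNNING" value="100"/>
    <definedValue name="WAITING" value="101"/>

    <eventHandler eventName="sched_switch">
      <stateChange>
        <stateAttribute type="constant" value="Tasks"/>
        <stateAttribute type="eventField" value="next_tid"/>
        <!-- Refer to a predefined value by name, preceded by '$' -->
        <stateValue type="int" value="$RUNNING"/>
      </stateChange>
    </eventHandler>
  </stateProvider>
</tmfxml>
</pre>

Using <code>value="100"</code> directly in the stateValue element above would be equivalent; the named form simply makes the file easier to read and maintain.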
If modifications are made to the XML state provider after it has been "published", the '''version''' attribute of the '''xmlStateProvider''' element should be updated. This avoids having to delete each trace's supplementary file manually. If the saved state system used a previous version, it will automatically be rebuilt from the XML file.

== Defining an XML pattern provider ==

An execution trace may contain patterns that provide high-level details about the system execution. A '''pattern''' is a particular combination of events or states that are expected to occur within a trace. It may be composed of several state machines that inherit or communicate through a common state system.

A state machine running within a pattern may have multiple instances (scenarios). Each scenario has its own path in the state system and can generate segments to populate the data-driven views.

=== The state system structure ===

The pattern analysis generates a predefined attribute tree described as follows:

<pre>
|- state machines
|  |- state machine 0
|  |  |- scenario 0
|  |  |  |- status
|  |  |  |- state
|  |  |  |  |- start
|  |  |  |  |- ...
|  |  |  |- storedFields
|  |  |  |  |- field 1
|  |  |  |  |- ...
|  |  |  |- startTime
|  |  |  |- ...
|  |  |- scenario 1
|  |  |  |- ...
|  |- state machine 1
|  |  |- ...
</pre>

The user can add custom data to this tree or define their own attribute tree beside this one.

=== Writing the XML pattern provider ===

Details about the XML structure are available in the XSD files.

First, define the pattern element. Like the state provider element described in [[#Writing_the_XML_state_provider | Writing the XML state provider]], it has a "version" attribute and an "id" attribute.

Optional header information as well as predefined values, as described in [[#Writing_the_XML_state_provider | Writing the XML state provider]], can be added.
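As a rough sketch (the id and label values here are invented; consult the XSD files for the authoritative structure), a pattern element with its optional header could look like:

<pre>
<tmfxml xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <pattern version="0" id="my.test.pattern">
    <head>
      <label value="My test pattern"/>
    </head>
    <!-- storedFields and the patternHandler element go here -->
  </pattern>
</tmfxml>
</pre>

As with state providers, the "version" attribute should be bumped whenever the published pattern definition changes, so that stale supplementary files are rebuilt automatically.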
Stored values can be added before the pattern handler. The predefined action '''saveStoredField''' triggers the update of the stored fields, and the predefined action '''clearStoredFields''' resets the values.

The behavior of the pattern and the models it needs are described in the pattern handler element.

The structure of the state machine (FSM) is based on the SCXML structure. The following example describes an FSM that matches all the system calls in an LTTng kernel trace.

The value of the target attribute corresponds to the 'id' of a test element described in the XML file and is a reference to it. Similarly, the value of the action attribute corresponds to the 'id' of an action element described in the XML file and is a reference to it.

Conditions are used in the transitions to switch between the states of an FSM. They are defined under the '''test''' element. Two types of conditions are allowed: '''data conditions''' and '''time conditions'''. It is possible to combine several conditions using a logical operator (OR, AND, ...).

Data conditions test the ongoing event's information against the data in the state system or against constant values. The following condition tests whether the current thread running on the CPU is also the ongoing scenario thread.

Two types of time conditions are available:

* Time range conditions test whether the ongoing event happens within a specific range of time. The following condition tests whether the ongoing event happens between 1 nanosecond and 3 nanoseconds.

* Elapsed time conditions test the time spent since a specific state of an FSM. The following condition tests whether the ongoing event happens less than 3 nanoseconds after the scenario reaches the state "syscall_entry_x".

Two types of actions are allowed:

* State changes update values of attributes in the state system.
The following example sets the value of the thread for the current scenario.

* Generate segments. The following example represents a system call segment.

If stored fields exist, they will be added as fields of the generated segments.

Here is the complete XML file, combining all the example models above:

=== Representing the scenarios ===

Segments generated by the pattern analysis are used to populate latency views. A description of these views can be found in [[#Latency_Analyses | Latency Analyses]].

The full XML analysis example described above will generate the following views:

* Latency Table

[[Image:images/XMLPatternAnalysis/LatencyTable.png| Latency Table example - System Call pattern]]

* Latency vs Time

[[Image:images/XMLPatternAnalysis/LatencyVSTime.png| Latency vs Time example - System Call pattern]]

* Latency Statistics

[[Image:images/XMLPatternAnalysis/LatencyStatistics.png| Latency Statistics example - System Call pattern]]

* Latency vs Count

[[Image:images/XMLPatternAnalysis/LatencyVSCount.png| Latency vs Count example - System Call pattern]]

== Defining an XML time graph view ==

A time graph view is a view divided in two, with a tree viewer on the left showing information on the different entries to display and a Gantt-like viewer on the right, showing the state of the entries over time. The [[#Control_Flow_View | Control Flow View]] is an example of a time graph view.

Such views can be defined in XML using the data in the state system.
The state system itself could have been built by an XML-defined state provider or by any predefined Java analysis. It only requires knowing the structure of the state system, which can be explored using the [[#State System Explorer View | State System Explorer View]] (or programmatically using the methods in ''ITmfStateSystem'').

In the example above, suppose we want to display the status for each task. In the state system, it means the path of the entries to display is "Tasks/*". The attribute whose value should be shown in the Gantt chart is the entry attribute itself. So the XML to display these entries would be as such:

The following screenshot shows the result of the preceding example on a test trace.

[[Image:images/Xml_analysis_screenshot.png| XML analysis with view]]

==== Using the keyboard ====
*'''Ctrl + F''': Search in the view. (see [[#Searching in Time Graph Views | Searching in Time Graph Views]])

== Defining an XML XY chart ==

An XY chart displays series as a set of numerical values over time. The X-axis represents the time and is synchronized with the trace's current time range. The Y-axis can be any numerical value.

Such views can be defined in XML using the data in the state system. The state system itself could have been built by an XML-defined state provider or by any predefined Java analysis. It only requires knowing the structure of the state system, which can be explored using the [[#State System Explorer View | State System Explorer View]] (or programmatically using the methods in ''ITmfStateSystem'').
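To sketch the general shape of such a view definition (the view id and label below are invented, and the analysis id is an assumption about the kernel analysis' identifier; the XSD files remain the authoritative reference), an XY view reading the per-CPU "Status" attribute could look like:

<pre>
<tmfxml xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <xyView id="my.test.xy.chart">
    <head>
      <!-- Assumed id of the Linux Kernel Analysis; verify against your install -->
      <analysis id="org.eclipse.tracecompass.analysis.os.linux.kernel"/>
      <label value="CPU status XY view"/>
    </head>
    <!-- One series per CPU: the numerical Status attribute becomes the Y value -->
    <entry path="CPUs/*">
      <display type="constant" value="Status"/>
      <name type="self"/>
    </entry>
  </xyView>
</tmfxml>
</pre>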
We will use the Linux Kernel Analysis on LTTng kernel traces to show an example XY chart. In this state system, the status of each CPU is a numerical value. We will display this value as the Y axis of the series. There will be one series per CPU. The XML to display these entries would be as such:

Like for the time graph views, optional header information can be added to the view.

The following screenshot shows the result of the preceding example on a LTTng Kernel trace.

[[Image:images/XML_xy_chart.png| XML XY chart]]

= Latency Analyses =

Trace Compass offers a feature called latency analysis, which allows an analysis to return intervals that are displayed in four different views. An example analysis computing kernel system call latencies is provided. The available views are:

* System Call Latencies

A '''table''' of the raw latencies. This view is useful to inspect individual latencies.

[[Image:images/LatenciesTable.png| Latency Table example - System Call Latencies]]

* System Call Latency vs Time

A time-aligned '''scatter chart''' of the latencies with respect to the current window range. This view is useful to see the overall form of the latencies as they arrive.

[[Image:images/LatenciesScatter.png| Latency Scatter Chart example - System Call Latency vs Time]]

* System Call Latency Statistics

A view of the total '''statistics''' of the latencies. These show the ''minimum'', ''maximum'', ''average'', ''standard deviation'', and ''count'' of the latencies when applicable. This tool is useful for finding the outliers on a per-category basis.
Right-clicking an entry of the table and selecting '''Go to minimum''' selects the time range of the minimum latency for the selected entry and synchronizes the other views to this time range.

Right-clicking an entry of the table and selecting '''Go to maximum''' selects the time range of the maximum latency for the selected entry and synchronizes the other views to this time range.

[[Image:images/LatenciesStatistics.png| Latency Statistics example - System Call Latency Statistics]]

* System Call Density

A '''density''' view, analyzing the current time range. This is useful to find global outliers.

[[Image:images/LatenciesDensity.png| Latency Densities example - System Call Density]]

= Virtual Machine Analysis =

Virtual environments are usually composed of host machines, each running a hypervisor program on which one or many guests can be run. Tracing a guest machine alone can often yield strange results because, from its point of view, it has full use of the resources, while in reality most resources are shared with the host and other guests.

To better understand what is happening in such an environment, it is necessary to trace all the machines involved, guests and hosts, and correlate this information in an experiment that will display a complete view of the virtualized environment.

== Virtual Machine Experiment ==

A trace has to be taken for each machine, guest and host, in the virtualized environment. The host trace is the most important to have: missing guests will only give an incomplete view of the system, but without the host trace it is usually impossible to identify the hypervisor or to determine when a guest is preempted from the host CPUs. The virtual machine analysis only makes sense if the host trace is available.

Once all the traces are imported in Trace Compass, they can be [[#Creating a Experiment | added to an experiment]].
The type of the experiment should be set to '''Virtual Machine Experiment''' by clicking the right mouse button over the experiment name, then selecting '''Select Experiment Type...'''.

[[Image:images/vmAnalysis/VM_experiment.png | Virtual Machine Experiment]]

Depending on the hypervisor used, traces might need to be [[#Trace synchronization | synchronized]] so that they have the same time reference and their events can be correctly correlated.

== Virtual CPU View ==

The Virtual CPU view shows the status of CPUs and threads on guests, augmented with the preemption and hypervisor data obtained from the host.

In the image below, we see that the virtual CPU status has a few more states than the CPUs in the [[#Resources View | Resources View]]: in red and purple respectively, when the virtual CPU is running hypervisor code and when the CPU is preempted on the host.

The entries for each thread of the machine correspond to the ones from the [[#Control flow | Control Flow View]], augmented with the data from the virtual CPU, so we can see that even though a thread is running from the guest's point of view, it is actually not running when the virtual CPU it runs on is in preempted or hypervisor mode.

[[Image:images/vmAnalysis/VM_CPU_view.png | Virtual CPU view]]

==== Using the keyboard ====
*'''Ctrl + F''': Search in the view. (see [[#Searching in Time Graph Views | Searching in Time Graph Views]])

== Hypervisor-specific Tracing ==

In order to be able to correlate data from the guest and host traces, each hypervisor supported by Trace Compass requires some specific events that are sometimes not available in the default installation of the tracer.

The following sections describe how to obtain traces for each hypervisor.
=== Qemu/KVM ===

The Qemu/KVM hypervisor requires extra tracepoints not yet shipped in LTTng, for both guests and hosts, as well as compilation with the full kernel source tree on the host, to have access to the kvm_entry/kvm_exit events on x86.

Obtain the source code with the extra tracepoints, along with lttng-modules:

 # git clone https://github.com/giraldeau/lttng-modules.git
 # cd lttng-modules

Check out the addons branch, then compile and install lttng-modules as per the lttng-modules documentation:

 # git checkout addons
 # make
 # sudo make modules_install
 # sudo depmod -a

On the host, to have complete kvm tracepoint support, the make command has to include the full kernel tree. So first, you'll need to obtain the kernel source tree; see your distribution's documentation on how to get it. This will compile extra modules, including lttng-probe-kvm-x86, which we need.

 # make KERNELDIR=/path/to/kernel/dir

The lttng addons modules must be inserted manually for the virtual machine extra tracepoints to be available:

 # sudo modprobe lttng-addons
 # sudo modprobe lttng-vmsync-host # on the host
 # sudo modprobe lttng-vmsync-guest # on the guest

The following tracepoints will then be available:

 # sudo lttng list -k
 Kernel events:
 -------------
 ...
 kvm_entry (loglevel: TRACE_EMERG (0)) (type: tracepoint)
 kvm_exit (loglevel: TRACE_EMERG (0)) (type: tracepoint)
 vmsync_gh_guest (loglevel: TRACE_EMERG (0)) (type: tracepoint) # on the guest
 vmsync_hg_guest (loglevel: TRACE_EMERG (0)) (type: tracepoint) # on the guest
 vmsync_gh_host (loglevel: TRACE_EMERG (0)) (type: tracepoint) # on the host
 vmsync_hg_host (loglevel: TRACE_EMERG (0)) (type: tracepoint) # on the host
 ...

Host and guests can now be traced together and their traces added to an experiment. Because each guest has a different clock than the host, it is necessary to synchronize the traces together.
Unfortunately, automatic synchronization with the virtual machine events is not completely implemented yet, so another kind of synchronization needs to be done, with TCP packets for instance. See the section on [[#Trace synchronization | trace synchronization]] for information on how to obtain synchronizable traces.

= Limitations =

* When parsing text traces, the timestamps are assumed to be in the local time zone. This means that when combining them with CTF binary traces, there could be offsets of a few hours depending on where the traces were taken and where they were read.