
= Table of Contents =

__TOC__

= Introduction =

The purpose of the '''Tracing Monitoring Framework (TMF)''' is to facilitate the integration of tracing and monitoring tools into Eclipse, to provide out-of-the-box generic functionalities/views, and to provide extension mechanisms of the base functionality for application-specific purposes.

= Implementing a New Trace Type =

The framework can easily be extended to support more trace types. To make a new trace type, one must define the following items:

* The event type
* The trace reader
* The trace context
* The trace location
* The ''org.eclipse.linuxtools.tmf.core.tracetype'' plug-in extension point
* (Optional) The ''org.eclipse.linuxtools.tmf.ui.tracetypeui'' plug-in extension point

The '''event type''' must implement ''ITmfEvent'' or extend a class that implements ''ITmfEvent''. Typically it will extend ''TmfEvent''. The event type must contain all the data of an event. The '''trace reader''' must be of an ''ITmfTrace'' type. The ''TmfTrace'' class supplies many background operations so that the reader only needs to implement certain functions. The '''trace context''' can be seen as the internals of an iterator. It is required by the trace reader to parse events as it iterates the trace and to keep track of its rank and location. It can have a timestamp, a rank, a file position, or any other element; it should be considered ephemeral. The '''trace location''' is an element that is cloned often to store checkpoints; it is generally persistent. It is used to rebuild a context, so it needs to contain enough information to unambiguously point to one and only one event. Finally, the ''tracetype'' plug-in extension associates a given trace, non-programmatically, with a trace type for use in the UI.

== An Example: Nexus-lite parser ==

=== Description of the file ===

This is a very small subset of the Nexus trace format, with some changes to make it easier to read. There is one file. The file starts with 64 strings containing the event names, followed by an arbitrarily large number of events. Each event is 64 bits long: the first 32 bits are the timestamp in microseconds, and the second 32 bits are split into 6 bits for the event type and 26 bits for the data payload.

The trace type is made of two parts. Part 1 is the event description: 64 strings, comma separated, followed by a line feed.

<pre>
Startup,Stop,Load,Add, ... ,reserved\n
</pre>

Then the events follow, in this format:

{| width= "85%"
|style="width: 50%; background-color: #ffffcc;"|timestamp (32 bits)
|style="width: 10%; background-color: #ffccff;"|type (6 bits)
|style="width: 40%; background-color: #ccffcc;"|payload (26 bits)
|-
|style="background-color: #ffcccc;" colspan="3"|64 bits total
|}

All events are the same size (64 bits).
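
To make the bit layout concrete, here is a minimal Java sketch of decoding one such 64-bit event word. The class and method names are illustrative only, and the exact bit order (type in the high 6 bits of the lower 32-bit word) is an assumption based on the description above:

```java
/**
 * Decodes one 64-bit Nexus-lite event word. Assumed layout, following the
 * description above: bits 63-32 = timestamp in microseconds, bits 31-26 =
 * event type (6 bits), bits 25-0 = data payload (26 bits).
 */
final class NexusEventWord {

    /** Timestamp in microseconds: the upper 32 bits, read as unsigned. */
    static long timestamp(long word) {
        return (word >>> 32) & 0xFFFFFFFFL;
    }

    /** Event type: the 6 bits just below the timestamp. */
    static int type(long word) {
        return (int) ((word >>> 26) & 0x3F);
    }

    /** Data payload: the low 26 bits. */
    static int payload(long word) {
        return (int) (word & 0x03FF_FFFFL);
    }

    /** Packs the three fields back into one word (handy for testing). */
    static long pack(long timestamp, int type, int payload) {
        return (timestamp << 32) | ((long) (type & 0x3F) << 26) | (payload & 0x03FF_FFFFL);
    }
}
```
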

=== NexusLite Plug-in ===

Create a '''New''' > '''Project...''' > '''Plug-in Project''', set the title to '''com.example.nexuslite''', click '''Next >''', then click '''Finish'''.

Now the structure for the Nexus trace plug-in is set up.

Add a dependency to TMF core and UI by opening the '''MANIFEST.MF''' in '''META-INF''', selecting the '''Dependencies''' tab and clicking '''Add ...''' to add '''org.eclipse.linuxtools.tmf.core''' and '''org.eclipse.linuxtools.tmf.ui'''.

[[Image:images/NTTAddDepend.png]]<br>
[[Image:images/NTTSelectProjects.png]]<br>

Now the project can access TMF classes.

=== Trace Event ===

The '''TmfEvent''' class will work for this example. No code is required.

=== Trace Reader ===

The trace reader will extend the '''TmfTrace''' class.

It will need to implement:

* validate (is the trace format valid?)

* initTrace (called as the trace is opened)

* seekEvent (go to a position in the trace and create a context)

* getNext (implemented in the base class)

* parseEvent (read the next element in the trace)

For reference, there is an example implementation of the Nexus trace file in
org.eclipse.linuxtools.tracing.examples.core.trace.nexus.NexusTrace.java.

In this example, the '''validate''' function first checks if the file
exists, then makes sure that it is really a file, and not a directory. Then we
attempt to read the file header, to make sure that it is really a Nexus trace.
If that check passes, we return a TmfValidationStatus with a confidence of 20.

Typically, TmfValidationStatus confidences should range from 1 to 100: 1 meaning
"there is a very small chance that this trace is of this type", and 100 meaning
"it is this type for sure, and cannot be anything else". At run-time, the
auto-detection will pick the type which returned the highest confidence. So
checks of the type "does the file exist?" should not return too high a
confidence.

Here we used a confidence of 20, to leave "room" for more specific trace types
in the Nexus format that could be defined in TMF.

The '''initTrace''' function will read the event names and find where the data starts. After this, the number of events is known, and since each event is 8 bytes long according to the specs, seeking is trivial.
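
As a sketch of that arithmetic (the names here are hypothetical helpers, not TMF API): once the header line has been consumed, the byte offset of any event follows directly from its rank, since every event occupies exactly 8 bytes.

```java
// Hypothetical helper illustrating the fixed-size event arithmetic that
// initTrace/seekEvent rely on for the Nexus-lite format described above.
final class NexusOffsets {
    static final int EVENT_SIZE = 8; // 64 bits per event

    /** Byte offset of the first event: one past the header's line feed. */
    static long dataStart(String headerLine) {
        // +1 for the '\n' that terminates the header
        return headerLine.getBytes(java.nio.charset.StandardCharsets.UTF_8).length + 1;
    }

    /** Byte offset of the event with the given rank. */
    static long offsetOf(long dataStart, long rank) {
        return dataStart + rank * EVENT_SIZE;
    }

    /** Number of events in a file of the given total size. */
    static long eventCount(long fileSize, long dataStart) {
        return (fileSize - dataStart) / EVENT_SIZE;
    }
}
```
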

The '''seek''' here will just reset the reader to the right location.

The '''parseEvent''' method needs to parse and return the current event and store the current location.

The '''getNext''' method (in the base class) will read the next event and update the context. It calls the '''parseEvent''' method to read the event and update the location. It does not need to be overridden, and in this example it is not. The sequence of actions necessary is: parse the next event from the trace, create an '''ITmfEvent''' with that data, update the current location, call '''updateAttributes''', update the context, then return the event.

Traces will typically implement an index to make seeking faster. The index can
be rebuilt every time the trace is opened. Alternatively, it can be saved to
disk to make future openings of the same trace quicker. To do so, the trace
object can implement the '''ITmfPersistentlyIndexable''' interface.

=== Trace Context ===

The trace context will be a '''TmfContext'''.

=== Trace Location ===

The trace location will be a long, representing the rank in the file. The '''TmfLongLocation''' will be used; once again, no code is required.

=== The ''org.eclipse.linuxtools.tmf.core.tracetype'' and ''org.eclipse.linuxtools.tmf.ui.tracetypeui'' plug-in extension points ===

One should implement the ''tmf.core.tracetype'' extension in their own plug-in.
In this example, the Nexus trace plug-in will be modified.

The '''plugin.xml''' file in the UI plug-in needs to be updated if one wants users to access the given event type. It can be updated in the Eclipse plug-in editor.

# In the Extensions tab, add the '''org.eclipse.linuxtools.tmf.core.tracetype''' extension point.
[[Image:images/NTTExtension.png]]<br>
[[Image:images/NTTTraceType.png]]<br>
[[Image:images/NTTExtensionPoint.png]]<br>
# Add a new type in the '''org.eclipse.linuxtools.tmf.core.tracetype''' extension. To do that, '''right click''' on the extension, then in the context menu go to '''New >''', '''type'''.

[[Image:images/NTTAddType.png]]<br>

The '''id''' is the unique identifier used to refer to the trace.

The '''name''' is the field that is displayed when a trace type is selected.

The '''trace type''' is the canonical path referring to the class of the trace.

The '''event type''' is the canonical path referring to the class of the events of a given trace.

The '''category''' (optional) is the container in which this trace type will be stored.

# (Optional) To also add UI-specific properties to your trace type, use the '''org.eclipse.linuxtools.tmf.ui.tracetypeui''' extension. To do that,
'''right click''' on the extension, then in the context menu go to
'''New >''', '''type'''.

The '''tracetype''' here is the '''id''' of the
''org.eclipse.linuxtools.tmf.core.tracetype'' mentioned above.

The '''icon''' is the image to associate with that trace type.

In the end, the extension menu should look like this.

[[Image:images/NTTPluginxmlComplete.png]]<br>

== Other Considerations ==
The ''org.eclipse.linuxtools.tmf.ui.viewers.events.TmfEventsTable'' provides additional features that are active when the event class (defined in '''event type''') implements certain additional interfaces.

=== Collapsing of repetitive events ===
By implementing the interface ''org.eclipse.linuxtools.tmf.core.event.collapse.ITmfCollapsibleEvent'', the events table allows collapsing repetitive events by selecting the menu item '''Collapse Events''' after pressing the right mouse button in the table.

== Best Practices ==

* Do not load the whole trace into RAM; it will limit the size of the trace that can be read.
* Reuse as much code as possible; it makes the trace format much easier to maintain.
* Use Eclipse's editor instead of editing the XML directly.
* Do not forget that Java supports only signed data types; special care may be needed to handle unsigned data.
* If the support for your trace has custom UI elements (like icons, views, etc.), split the core and UI parts into separate plug-ins, named identically except for a ''.core'' or ''.ui'' suffix.
** Implement the ''tmf.core.tracetype'' extension in the core plug-in, and the ''tmf.ui.tracetypeui'' extension in the UI plug-in if applicable.
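
To illustrate the note above about signed data types: a 32-bit value read from a binary trace into a Java ''int'' can appear negative, and masking it into a ''long'' recovers the unsigned value. This is a generic Java sketch, unrelated to any TMF API.

```java
// Java has no unsigned primitives, so a 32-bit field read into an int can
// look negative. Widening with a mask (equivalent to Integer.toUnsignedLong)
// recovers the intended unsigned value; the same idea applies to bytes.
final class UnsignedDemo {

    /** Reinterprets a signed int as the unsigned 32-bit value it encodes. */
    static long asUnsigned(int raw) {
        return raw & 0xFFFFFFFFL;
    }

    /** Same idea for a byte, e.g. when reading binary trace data. */
    static int asUnsigned(byte raw) {
        return raw & 0xFF;
    }
}
```
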

== Download the Code ==

The described example is available in the
org.eclipse.linuxtools.tracing.examples.(tests.)trace.nexus packages with a
trace generator and a quick test case.

== Optional Trace Type Attributes ==

After defining the trace type as described in the previous chapters, it is possible to define optional attributes for the trace type.

=== Default Editor ===

The '''defaultEditor''' attribute of the '''org.eclipse.linuxtools.tmf.ui.tracetypeui'''
extension point allows for configuring the editor to use for displaying the
events. If omitted, the ''TmfEventsEditor'' is used as the default.

To configure an editor, first add the '''defaultEditor''' attribute to the trace
type in the extension definition. This can be done by selecting the trace type
in the plug-in manifest editor. Then click the right mouse button and select
'''New -> defaultEditor''' in the context-sensitive menu. Then select the newly
added attribute. Now you can specify the editor id to use on the right side of
the manifest editor. For example, this attribute could be used to implement an
extension of the class ''org.eclipse.ui.part.MultiPageEditor''. The first page
could use the ''TmfEventsEditor'' to display the events in a table as usual, and
other pages can display other aspects of the trace.

=== Events Table Type ===

The '''eventsTableType''' attribute of the '''org.eclipse.linuxtools.tmf.ui.tracetypeui'''
extension point allows for configuring the events table class to use in the
default events editor. If omitted, the default events table will be used.

To configure a trace type specific events table, first add the
'''eventsTableType''' attribute to the trace type in the extension definition.
This can be done by selecting the trace type in the plug-in manifest editor.
Then click the right mouse button and select '''New -> eventsTableType''' in the
context-sensitive menu. Then select the newly added attribute and click on
''class'' on the right side of the manifest editor. The new class wizard will
open. The ''superclass'' field will already be filled with the class ''org.eclipse.linuxtools.tmf.ui.viewers.events.TmfEventsTable''.

By using this attribute, a table with different columns than the default columns
can be defined. See the class org.eclipse.linuxtools.internal.lttng2.kernel.ui.viewers.events.Lttng2EventsTable
for an example implementation.

= View Tutorial =

This tutorial describes how to create a simple view using the TMF framework and the SWTChart library. SWTChart is a library based on SWT that can draw several types of charts, including the line chart which we will use in this tutorial. We will create a view containing a line chart that displays time stamps on the X axis and the corresponding event values on the Y axis.

This tutorial will cover concepts like:

* Extending TmfView
* Signal handling (@TmfSignalHandler)
* Data requests (TmfEventRequest)
* SWTChart integration

'''Note''': TMF 3.0.0 provides base implementations for generating SWTChart viewers and views. For more details please refer to chapter [[#TMF Built-in Views and Viewers]].

=== Prerequisites ===

The tutorial is based on Eclipse 4.4 (Eclipse Luna), TMF 3.0.0 and SWTChart 0.7.0. If you are using TMF from the source repository, SWTChart is already included in the target definition file (see org.eclipse.linuxtools.lttng.target). You can also install it manually by using the Orbit update site: http://download.eclipse.org/tools/orbit/downloads/

=== Creating an Eclipse UI Plug-in ===

To create a new project with the name org.eclipse.linuxtools.tmf.sample.ui, select '''File -> New -> Project -> Plug-in Development -> Plug-in Project'''. <br>
[[Image:images/Screenshot-NewPlug-inProject1.png]]<br>

[[Image:images/Screenshot-NewPlug-inProject2.png]]<br>

[[Image:images/Screenshot-NewPlug-inProject3.png]]<br>

=== Creating a View ===

To open the plug-in manifest, double-click on the MANIFEST.MF file. <br>
[[Image:images/SelectManifest.png]]<br>

Change to the Dependencies tab and select '''Add...''' in the ''Required Plug-ins'' section. A new dialog box will open. Next find the plug-in ''org.eclipse.linuxtools.tmf.core'' and press '''OK'''.<br>
Following the same steps, add ''org.eclipse.linuxtools.tmf.ui'' and ''org.swtchart''.<br>
[[Image:images/AddDependencyTmfUi.png]]<br>

Change to the Extensions tab and select '''Add...''' in the ''All Extensions'' section. A new dialog box will open. Find the view extension ''org.eclipse.ui.views'' and press '''Finish'''.<br>
[[Image:images/AddViewExtension1.png]]<br>

To create a view, click the right mouse button. Then select '''New -> view'''.<br>
[[Image:images/AddViewExtension2.png]]<br>

A new view entry has been created. Fill in the fields ''id'' and ''name''. For ''class'', click on the '''class hyperlink''' and it will show the New Java Class dialog. Enter the name ''SampleView'', change the superclass to ''TmfView'' and click Finish. This will create the source file and fill in the ''class'' field in the process. We use TmfView as the superclass because it provides extra functionality, like getting the active trace and pinning, and it has support for signal handling between components.<br>
[[Image:images/FillSampleViewExtension.png]]<br>

This will generate an empty class. Once the quick fixes are applied, the following code is obtained:

<pre>
package org.eclipse.linuxtools.tmf.sample.ui;

import org.eclipse.linuxtools.tmf.ui.views.TmfView;
import org.eclipse.swt.widgets.Composite;

public class SampleView extends TmfView {

    public SampleView(String viewName) {
        super(viewName);
        // TODO Auto-generated constructor stub
    }

    @Override
    public void createPartControl(Composite parent) {
        // TODO Auto-generated method stub
    }

    @Override
    public void setFocus() {
        // TODO Auto-generated method stub
    }

}
</pre>

This creates an empty view; however, the basic structure is now in place.

=== Implementing a view ===

We will start by adding an empty chart; then it will need to be populated with the trace data. Finally, we will make the chart more visually pleasing by adjusting the range and formatting the time stamps.

==== Adding an Empty Chart ====

First, we can add an empty chart to the view and initialize some of its components.

<pre>
private static final String SERIES_NAME = "Series";
private static final String Y_AXIS_TITLE = "Signal";
private static final String X_AXIS_TITLE = "Time";
private static final String FIELD = "value"; // The name of the field that we want to display on the Y axis
private static final String VIEW_ID = "org.eclipse.linuxtools.tmf.sample.ui.view";
private Chart chart;
private ITmfTrace currentTrace;

public SampleView() {
    super(VIEW_ID);
}

@Override
public void createPartControl(Composite parent) {
    chart = new Chart(parent, SWT.BORDER);
    chart.getTitle().setVisible(false);
    chart.getAxisSet().getXAxis(0).getTitle().setText(X_AXIS_TITLE);
    chart.getAxisSet().getYAxis(0).getTitle().setText(Y_AXIS_TITLE);
    chart.getSeriesSet().createSeries(SeriesType.LINE, SERIES_NAME);
    chart.getLegend().setVisible(false);
}

@Override
public void setFocus() {
    chart.setFocus();
}
</pre>

The view is prepared. Run the example: to launch an Eclipse Application, select the ''Overview'' tab and click on '''Launch an Eclipse Application'''.<br>
[[Image:images/RunEclipseApplication.png]]<br>

A new Eclipse application window will show. In the new window go to '''Window -> Show View -> Other... -> Other -> Sample View'''.<br>
[[Image:images/ShowViewOther.png]]<br>

You should now see a view containing an empty chart.<br>
[[Image:images/EmptySampleView.png]]<br>

==== Signal Handling ====

We would like to populate the view when a trace is selected. To achieve this, we can use a signal handler, which is specified with the '''@TmfSignalHandler''' annotation.

<pre>
@TmfSignalHandler
public void traceSelected(final TmfTraceSelectedSignal signal) {

}
</pre>

==== Requesting Data ====

Then we need to actually gather data from the trace. This is done asynchronously using a ''TmfEventRequest''.

<pre>
@TmfSignalHandler
public void traceSelected(final TmfTraceSelectedSignal signal) {
    // Don't populate the view again if we're already showing this trace
    if (currentTrace == signal.getTrace()) {
        return;
    }
    currentTrace = signal.getTrace();

    // Create the request to get data from the trace

    TmfEventRequest req = new TmfEventRequest(TmfEvent.class,
            TmfTimeRange.ETERNITY, 0, ITmfEventRequest.ALL_DATA,
            ITmfEventRequest.ExecutionType.BACKGROUND) {

        @Override
        public void handleData(ITmfEvent data) {
            // Called for each event
            super.handleData(data);
        }

        @Override
        public void handleSuccess() {
            // Request successful, no more data available
            super.handleSuccess();
        }

        @Override
        public void handleFailure() {
            // Request failed, no more data available
            super.handleFailure();
        }
    };
    ITmfTrace trace = signal.getTrace();
    trace.sendRequest(req);
}
</pre>

==== Transferring Data to the Chart ====

The chart expects an array of doubles for both the X and Y axis values. To provide that, we can accumulate each event's time and value in their respective lists, then convert the lists to arrays when all events are processed.

<pre>
TmfEventRequest req = new TmfEventRequest(TmfEvent.class,
        TmfTimeRange.ETERNITY, 0, ITmfEventRequest.ALL_DATA,
        ITmfEventRequest.ExecutionType.BACKGROUND) {

    ArrayList<Double> xValues = new ArrayList<Double>();
    ArrayList<Double> yValues = new ArrayList<Double>();

    @Override
    public void handleData(ITmfEvent data) {
        // Called for each event
        super.handleData(data);
        ITmfEventField field = data.getContent().getField(FIELD);
        if (field != null) {
            yValues.add((Double) field.getValue());
            xValues.add((double) data.getTimestamp().getValue());
        }
    }

    @Override
    public void handleSuccess() {
        // Request successful, no more data available
        super.handleSuccess();

        final double x[] = toArray(xValues);
        final double y[] = toArray(yValues);

        // This part needs to run on the UI thread since it updates the chart SWT control
        Display.getDefault().asyncExec(new Runnable() {

            @Override
            public void run() {
                chart.getSeriesSet().getSeries()[0].setXSeries(x);
                chart.getSeriesSet().getSeries()[0].setYSeries(y);

                chart.redraw();
            }

        });
    }

    /**
     * Convert List<Double> to double[]
     */
    private double[] toArray(List<Double> list) {
        double[] d = new double[list.size()];
        for (int i = 0; i < list.size(); ++i) {
            d[i] = list.get(i);
        }

        return d;
    }
};
</pre>

==== Adjusting the Range ====

The chart now contains values, but they might be out of range and not visible. We can adjust the range of each axis by computing the minimum and maximum values as we add events.

<pre>
ArrayList<Double> xValues = new ArrayList<Double>();
ArrayList<Double> yValues = new ArrayList<Double>();
private double maxY = -Double.MAX_VALUE;
private double minY = Double.MAX_VALUE;
private double maxX = -Double.MAX_VALUE;
private double minX = Double.MAX_VALUE;

@Override
public void handleData(ITmfEvent data) {
    super.handleData(data);
    ITmfEventField field = data.getContent().getField(FIELD);
    if (field != null) {
        Double yValue = (Double) field.getValue();
        minY = Math.min(minY, yValue);
        maxY = Math.max(maxY, yValue);
        yValues.add(yValue);

        double xValue = (double) data.getTimestamp().getValue();
        xValues.add(xValue);
        minX = Math.min(minX, xValue);
        maxX = Math.max(maxX, xValue);
    }
}

@Override
public void handleSuccess() {
    super.handleSuccess();
    final double x[] = toArray(xValues);
    final double y[] = toArray(yValues);

    // This part needs to run on the UI thread since it updates the chart SWT control
    Display.getDefault().asyncExec(new Runnable() {

        @Override
        public void run() {
            chart.getSeriesSet().getSeries()[0].setXSeries(x);
            chart.getSeriesSet().getSeries()[0].setYSeries(y);

            // Set the new range
            if (!xValues.isEmpty() && !yValues.isEmpty()) {
                chart.getAxisSet().getXAxis(0).setRange(new Range(0, x[x.length - 1]));
                chart.getAxisSet().getYAxis(0).setRange(new Range(minY, maxY));
            } else {
                chart.getAxisSet().getXAxis(0).setRange(new Range(0, 1));
                chart.getAxisSet().getYAxis(0).setRange(new Range(0, 1));
            }
            chart.getAxisSet().adjustRange();

            chart.redraw();
        }
    });
}
</pre>

==== Formatting the Time Stamps ====

To display the time stamps on the X axis nicely, we need to specify a format, or else the time stamps will be displayed as ''long'' values. We use TmfTimestampFormat to make it consistent with the other TMF views. We also need to handle the '''TmfTimestampFormatUpdateSignal''' to make sure that the time stamps update when the preferences change.

<pre>
@Override
public void createPartControl(Composite parent) {
    ...

    chart.getAxisSet().getXAxis(0).getTick().setFormat(new TmfChartTimeStampFormat());
}

public class TmfChartTimeStampFormat extends SimpleDateFormat {
    private static final long serialVersionUID = 1L;

    @Override
    public StringBuffer format(Date date, StringBuffer toAppendTo, FieldPosition fieldPosition) {
        long time = date.getTime();
        toAppendTo.append(TmfTimestampFormat.getDefaulTimeFormat().format(time));
        return toAppendTo;
    }
}

@TmfSignalHandler
public void timestampFormatUpdated(TmfTimestampFormatUpdateSignal signal) {
    // Called when the time stamp preference is changed
    chart.getAxisSet().getXAxis(0).getTick().setFormat(new TmfChartTimeStampFormat());
    chart.redraw();
}
</pre>

We also need to populate the view when a trace is already selected and the view is opened. We can reuse the same code by having the view send the '''TmfTraceSelectedSignal''' to itself.

<pre>
@Override
public void createPartControl(Composite parent) {
    ...

    ITmfTrace trace = getActiveTrace();
    if (trace != null) {
        traceSelected(new TmfTraceSelectedSignal(this, trace));
    }
}
</pre>

The view is now ready, but we need a proper trace to test it. For this example, a trace was generated using LTTng-UST so that it would produce a sine function.<br>

[[Image:images/SampleView.png]]<br>

In summary, we have implemented a simple TMF view using the SWTChart library. We made use of signals and requests to populate the view at the appropriate time, and we formatted the time stamps nicely. We also made sure that the time stamp format is updated when the preferences change.

== TMF Built-in Views and Viewers ==

TMF provides base implementations for several types of views and viewers for generating custom X-Y charts, time graphs, or trees. They are well integrated with various TMF features, such as reading traces and time synchronization with other views. They also handle mouse events for navigating the trace and view, zooming, and presenting detailed information at the mouse position. The code can be found in the TMF UI plug-in ''org.eclipse.linuxtools.tmf.ui''. See below for a list of relevant Java packages:

* Generic
** ''org.eclipse.linuxtools.tmf.ui.views'': Common TMF view base classes
* X-Y-Chart
** ''org.eclipse.linuxtools.tmf.ui.viewers.xycharts'': Common base classes for X-Y-Chart viewers based on SWTChart
** ''org.eclipse.linuxtools.tmf.ui.viewers.xycharts.barcharts'': Base classes for bar charts
** ''org.eclipse.linuxtools.tmf.ui.viewers.xycharts.linecharts'': Base classes for line charts
* Time Graph View
** ''org.eclipse.linuxtools.tmf.ui.widgets.timegraph'': Base classes for time graphs, e.g. Gantt charts
* Tree Viewer
** ''org.eclipse.linuxtools.tmf.ui.viewers.tree'': Base classes for TMF-specific tree viewers

Several features in TMF and the Eclipse LTTng integration use this framework and can serve as examples for further development:
* X-Y-Chart
** ''org.eclipse.linuxtools.internal.lttng2.ust.ui.views.memusage.MemUsageView.java''
** ''org.eclipse.linuxtools.internal.lttng2.kernel.ui.views.cpuusage.CpuUsageView.java''
** ''org.eclipse.linuxtools.tracing.examples.ui.views.histogram.NewHistogramView.java''
* Time Graph View
** ''org.eclipse.linuxtools.internal.lttng2.kernel.ui.views.controlflow.ControlFlowView.java''
** ''org.eclipse.linuxtools.internal.lttng2.kernel.ui.views.resources.ResourcesView.java''
* Tree Viewer
** ''org.eclipse.linuxtools.tmf.ui.views.statesystem.TmfStateSystemExplorer.java''
** ''org.eclipse.linuxtools.internal.lttng2.kernel.ui.views.cpuusage.CpuUsageComposite.java''

= Component Interaction =

TMF provides a mechanism for different components to interact with each other using signals. The signals can carry information that is specific to each signal.

The TMF Signal Manager handles registration of components and the broadcasting of signals to their intended receivers.

Components can register as VIP receivers, which ensures they will receive the signal before non-VIP receivers.

== Sending Signals ==

In order to send a signal, an instance of the signal must be created and passed as an argument to the signal manager to be dispatched. Every component that can handle the signal will receive it. The receivers do not need to be known by the sender.

<pre>
TmfExampleSignal signal = new TmfExampleSignal(this, ...);
TmfSignalManager.dispatchSignal(signal);
</pre>

If the sender is an instance of the class TmfComponent, the broadcast method can be used:

<pre>
TmfExampleSignal signal = new TmfExampleSignal(this, ...);
broadcast(signal);
</pre>

== Receiving Signals ==

In order to receive any signal, the receiver must first be registered with the signal manager. The receiver can register as a normal or VIP receiver.

<pre>
TmfSignalManager.register(this);
TmfSignalManager.registerVIP(this);
</pre>

If the receiver is an instance of the class TmfComponent, it is automatically registered as a normal receiver in the constructor.

When the receiver is destroyed or disposed, it should deregister itself from the signal manager.

<pre>
TmfSignalManager.deregister(this);
</pre>

To actually receive and handle any specific signal, the receiver must use the @TmfSignalHandler annotation and implement a method that will be called when the signal is broadcast. The name of the method is irrelevant.

<pre>
@TmfSignalHandler
public void example(TmfExampleSignal signal) {
    ...
}
</pre>

If necessary, a component can use the source of the signal to filter out and ignore a signal that it broadcast itself, when the component is also a receiver of that signal but only needs to handle it when it was sent by another component or another instance of the component.
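
The pattern can be sketched with minimal stand-in classes (these are not TMF types; a real handler would compare ''signal.getSource()'' against ''this'' in the same way):

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in classes illustrating the source-filtering pattern described
// above; the Signal and Component types here are hypothetical, not TMF API.
final class SourceFilterDemo {

    static class Signal {
        private final Object source;
        Signal(Object source) { this.source = source; }
        Object getSource() { return source; }
    }

    static class Component {
        final List<Signal> handled = new ArrayList<>();

        // A handler that ignores signals this component broadcast itself
        void exampleHandler(Signal signal) {
            if (signal.getSource() == this) {
                return;
            }
            handled.add(signal);
        }
    }
}
```
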

== Signal Throttling ==

It is possible for a TmfComponent instance to buffer the dispatching of signals, so that only the last signal queued, after a specified delay without any other signal being queued, is sent to the receivers. All signals that are preempted by a newer signal within the delay are discarded.

The signal throttler must first be initialized:

<pre>
final int delay = 100; // in ms
TmfSignalThrottler throttler = new TmfSignalThrottler(this, delay);
</pre>

Then the sending of signals should be queued through the throttler:

<pre>
TmfExampleSignal signal = new TmfExampleSignal(this, ...);
throttler.queue(signal);
</pre>

When the throttler is no longer needed, it should be disposed:

<pre>
throttler.dispose();
</pre>
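
To illustrate the behaviour described above, here is a conceptual, stand-alone sketch of a throttler: each newly queued signal cancels the previously scheduled dispatch, so only the last signal queued within the delay window reaches the receiver. This is an illustration only, not the ''TmfSignalThrottler'' source.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

// Conceptual throttler sketch: queue() cancels any pending dispatch and
// schedules the new signal, so receivers only see the last signal queued
// within the delay window. Names are illustrative, not TMF API.
final class ThrottlerSketch<T> {
    private final ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
    private final Consumer<T> receiver;
    private final long delayMillis;
    private ScheduledFuture<?> pending;

    ThrottlerSketch(Consumer<T> receiver, long delayMillis) {
        this.receiver = receiver;
        this.delayMillis = delayMillis;
    }

    /** Queue a signal; an earlier, not-yet-delivered signal is discarded. */
    synchronized void queue(T signal) {
        if (pending != null) {
            pending.cancel(false);
        }
        pending = timer.schedule(() -> receiver.accept(signal), delayMillis, TimeUnit.MILLISECONDS);
    }

    /** Stop the internal timer when the throttler is no longer needed. */
    void dispose() {
        timer.shutdownNow();
    }
}
```
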
663
664 == Signal Reference ==
665
666 The following is a list of built-in signals defined in the framework.
667
668 === TmfStartSynchSignal ===
669
670 ''Purpose''
671
672 This signal is used to indicate the start of broadcasting of a signal. Internally, the data provider will not fire event requests until the corresponding TmfEndSynchSignal signal is received. This allows coalescing of requests triggered by multiple receivers of the broadcast signal.
673
674 ''Senders''
675
676 Sent by TmfSignalManager before dispatching a signal to all receivers.
677
678 ''Receivers''
679
680 Received by TmfDataProvider.
681
682 === TmfEndSynchSignal ===
683
684 ''Purpose''
685
686 This signal is used to indicate the end of broadcasting of a signal. Internally, the data provider fire all pending event requests that were received and buffered since the corresponding TmfStartSynchSignal signal was received. This allows coalescing of requests triggered by multiple receivers of the broadcast signal.
687
688 ''Senders''
689
690 Sent by TmfSignalManager after dispatching a signal to all receivers.
691
692 ''Receivers''
693
694 Received by TmfDataProvider.
695
=== TmfTraceOpenedSignal ===

''Purpose''

This signal is used to indicate that a trace has been opened in an editor.

''Senders''

Sent by a TmfEventsEditor instance when it is created.

''Receivers''

Received by TmfTrace, TmfExperiment, TmfTraceManager and every view that shows trace data. Components that show trace data should handle this signal.

=== TmfTraceSelectedSignal ===

''Purpose''

This signal is used to indicate that a trace has become the currently selected trace.

''Senders''

Sent by a TmfEventsEditor instance when it receives focus. Components can send this signal to bring a trace editor to the front.

''Receivers''

Received by TmfTraceManager and every view that shows trace data. Components that show trace data should handle this signal.

=== TmfTraceClosedSignal ===

''Purpose''

This signal is used to indicate that a trace editor has been closed.

''Senders''

Sent by a TmfEventsEditor instance when it is disposed.

''Receivers''

Received by TmfTraceManager and every view that shows trace data. Components that show trace data should handle this signal.

=== TmfTraceRangeUpdatedSignal ===

''Purpose''

This signal is used to indicate that the valid time range of a trace has been updated. This triggers indexing of the trace up to the end of the range. In the context of streaming, this end time is considered a safe time up to which all events are guaranteed to have been completely received. For non-streaming traces, the end time is set to infinity, indicating that all events can be read immediately. Any processing of trace events that wants to take advantage of request coalescing should be triggered by this signal.

''Senders''

Sent by TmfExperiment and non-streaming TmfTrace. Streaming traces should send this signal in the TmfTrace subclass when a new safe time is determined by a specific implementation.

''Receivers''

Received by TmfTrace, TmfExperiment and components that process trace events. Components that need to process trace events should handle this signal.

=== TmfTraceUpdatedSignal ===

''Purpose''

This signal is used to indicate that new events have been indexed for a trace.

''Senders''

Sent by TmfCheckpointIndexer when new events have been indexed and the number of events has changed.

''Receivers''

Received by components that need to be notified of a new trace event count.

=== TmfTimeSynchSignal ===

''Purpose''

This signal is used to indicate that a new time or time range has been selected. It contains a begin and end time. If a single time is selected then the begin and end time are the same.

''Senders''

Sent by any component that allows the user to select a time or time range.

''Receivers''

Received by any component that needs to be notified of the currently selected time or time range.

=== TmfRangeSynchSignal ===

''Purpose''

This signal is used to indicate that a new time range window has been set.

''Senders''

Sent by any component that allows the user to set a time range window.

''Receivers''

Received by any component that needs to be notified of the current visible time range window.

=== TmfEventFilterAppliedSignal ===

''Purpose''

This signal is used to indicate that a filter has been applied to a trace.

''Senders''

Sent by TmfEventsTable when a filter is applied.

''Receivers''

Received by any component that shows trace data and needs to be notified of applied filters.

=== TmfEventSearchAppliedSignal ===

''Purpose''

This signal is used to indicate that a search has been applied to a trace.

''Senders''

Sent by TmfEventsTable when a search is applied.

''Receivers''

Received by any component that shows trace data and needs to be notified of applied searches.

=== TmfTimestampFormatUpdateSignal ===

''Purpose''

This signal is used to indicate that the timestamp format preference has been updated.

''Senders''

Sent by TmfTimestampFormat when the default timestamp format preference is changed.

''Receivers''

Received by any component that needs to refresh its display for the new timestamp format.

=== TmfStatsUpdatedSignal ===

''Purpose''

This signal is used to indicate that the statistics data model has been updated.

''Senders''

Sent by statistics providers when new statistics data has been processed.

''Receivers''

Received by statistics viewers and any component that needs to be notified of a statistics update.

=== TmfPacketStreamSelected ===

''Purpose''

This signal is used to indicate that the user has selected a packet stream to analyze.

''Senders''

Sent by the Stream List View when the user selects a new packet stream.

''Receivers''

Received by views that analyze packet streams.

== Debugging ==

TMF has built-in Eclipse tracing support for the debugging of signal interaction between components. To enable it, open the '''Run/Debug Configuration...''' dialog, select a configuration, click the '''Tracing''' tab, select the plug-in '''org.eclipse.linuxtools.tmf.core''', and check the '''signal''' item.

All signals sent and received will be logged to the file TmfTrace.log, located in the Eclipse home directory.

= Generic State System =

== Introduction ==

The Generic State System is a utility available in TMF to track different states over the duration of a trace. It works by first sending some or all events of the trace into a state provider, which defines the state changes for a given trace type. Once built, views and analysis modules can then query the resulting database of states (called the "state history") to get information.

For example, let's suppose we have the following sequence of events in a kernel trace:

 10 s, sys_open, fd = 5, file = /home/user/myfile
 ...
 15 s, sys_read, fd = 5, size = 32
 ...
 20 s, sys_close, fd = 5

Now let's say we want to implement an analysis module which will track the amount of bytes read from and written to each file. Here, the sys_read event is of course the interesting one. However, by just looking at that event, we have no information on which file is being read; only its fd (5) is known. To get the mapping fd 5 = /home/user/myfile, we have to go back to the sys_open event, which happens 5 seconds earlier.

But since we don't know exactly where this sys_open event is, we would have to go back to the very start of the trace and look through events one by one! This is obviously not efficient, and will not scale well if we want to analyze many similar patterns, or very large traces.

A solution in this case would be to use the state system to keep track of the amount of bytes read/written for every ''filename'' (instead of every file descriptor, like we get from the events). Then the module could ask the state system "what is the amount of bytes read for file /home/user/myfile at time 16 s", and it would return the answer "32" (assuming there is no other read than the one shown).

== High-level components ==

The State System infrastructure is composed of three parts:
* The state provider
* The central state system
* The storage backend

The state provider is the customizable part. This is where the mapping from trace events to state changes is done. This is what you want to implement for your specific trace type and analysis type. It is represented by the ITmfStateProvider interface (with a threaded implementation in AbstractTmfStateProvider, which you can extend).

The core of the state system is exposed through the ITmfStateSystem and ITmfStateSystemBuilder interfaces. The former allows only read-only access and is typically used by views doing queries. The latter also allows writing to the state history, and is typically used by the state provider.

Finally, each state system has its own separate backend. This determines how the intervals, or the "state history", are saved (in RAM, on disk, etc.). You can select the type of backend at construction time in the TmfStateSystemFactory.

== Definitions ==

Before we dig into how to use the state system, we should go over some useful definitions:

=== Attribute ===

An attribute is the smallest element of the model that can be in any particular state. When we refer to the "full state", we mean the state of every single attribute of the model.

=== Attribute Tree ===

Attributes in the model can be placed in a tree-like structure, a bit like files and directories in a file system. However, note that an attribute can always have both a value and sub-attributes, so they are like files and directories at the same time. We are then able to refer to every single attribute by its path in the tree.

For example, in the attribute tree for LTTng kernel traces, we use the following attributes, among others:

<pre>
|- Processes
|  |- 1000
|  |  |- PPID
|  |  |- Exec_name
|  |- 1001
|  |  |- PPID
|  |  |- Exec_name
|  ...
|- CPUs
|  |- 0
|  |  |- Status
|  |  |- Current_pid
...
</pre>

In this model, the attribute "Processes/1000/PPID" refers to the PPID of the process with PID 1000. The attribute "CPUs/0/Status" represents the status (running, idle, etc.) of CPU 0. "Processes/1000/PPID" and "Processes/1001/PPID" are two different attributes, even though their base name is the same: the whole path is the unique identifier.

The value of each attribute can change over the duration of the trace, independently of the other ones, and independently of its position in the tree.

The tree-like organization is optional: all attributes could be at the same level. But it is possible to put them in a tree, and it helps make things clearer.

=== Quark ===

In addition to a given path, each attribute also has a unique integer identifier, called the "quark". To continue with the file system analogy, this is like the inode number. When a new attribute is created, a new unique quark will be assigned automatically. They are assigned incrementally, so they will normally be equal to their order of creation, starting at 0.

Methods are offered to get the quark of an attribute from its path. The API methods for inserting state changes and doing queries normally use quarks instead of paths. This is to encourage users to cache the quarks and re-use them, which avoids re-walking the attribute tree over and over, and thus avoids unneeded re-hashing of strings.
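
As a toy illustration of quark assignment (with invented class and method names; this is not the real TMF implementation), one can picture a registry that interns each attribute path into an incrementally assigned integer:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * Toy illustration of quark assignment (invented names, not the real TMF
 * code): each distinct attribute path is interned into a unique integer,
 * assigned incrementally starting at 0. Callers can then cache the int
 * and avoid re-hashing the path strings on every access.
 */
public class QuarkSketch {

    private final Map<String, Integer> quarkByPath = new HashMap<>();
    private final List<String> pathByQuark = new ArrayList<>();

    /** Like the -AndAdd methods: creates the attribute if it does not exist. */
    public int getQuarkAndAdd(String... path) {
        String key = String.join("/", path);
        return quarkByPath.computeIfAbsent(key, k -> {
            pathByQuark.add(k);
            return pathByQuark.size() - 1; // quarks follow creation order
        });
    }

    /** The reverse mapping, like resolving an inode back to a file name. */
    public String getFullPath(int quark) {
        return pathByQuark.get(quark);
    }
}
```

Asking for "Processes/1000/PPID" twice returns the same quark both times, which is exactly why callers are encouraged to cache it.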

=== State value ===

The path and quark of an attribute will remain constant for the whole duration of the trace. However, the value carried by the attribute will change. The value of a specific attribute at a specific time is called the state value.

In the TMF implementation, state values can be integers, longs, doubles, or strings. There is also a "null value" type, which is used to indicate that no particular value is active for this attribute at this time, but without resorting to a 'null' reference.

Any other type of value could be used, as long as the backend knows how to store it.

Note that the TMF implementation also forces every attribute to always carry the same type of state value. This is to make it simpler for views, so they can expect that an attribute will always use a given type, without having to check every single time. Null values are an exception: they are always allowed for all attributes, since they can safely be "unboxed" into all types.

=== State change ===

A state change is the element that is inserted in the state system. It consists of:
* a timestamp (the time at which the state change occurs)
* an attribute (the attribute whose value will change)
* a state value (the new value that the attribute will carry)

It is not an object per se in the TMF implementation (it is represented by a function call in the state provider). Typically, the state provider will insert zero, one or more state changes for every trace event, depending on its event type, payload, etc.

Note that we use "timestamp" here, but it is in fact a generic index. For example, if a given trace type has no notion of timestamp, the event rank could be used instead.

In the TMF implementation, the timestamp is a long (64-bit integer).

=== State interval ===

State changes are inserted into the state system, but state intervals are the objects that come out on the other side. Those are what is stored in the storage backend. A state interval represents a "state" of an attribute we want to track. When doing queries on the state system, intervals are what is returned. The components of a state interval are:
* Start time
* End time
* State value
* Quark

The start and end times represent the time range of the state. The state value is the same as the state value of the state change that started this interval. The interval also keeps a reference to its quark, although you normally know your quark in advance when you do queries.

=== State history ===

The state history is the name of the container for all the intervals created by the state system. The exact implementation (how the intervals are stored) is determined by the storage backend that is used.

Some backends will use a state history that is persistent on disk, others do not. When loading a trace, if a history file is available and the backend supports it, it will be loaded right away, skipping the need to go through another construction phase.

=== Construction phase ===

Before we can query a state system, we need to build the state history first. To do so, trace events are sent one by one through the state provider, which in turn sends state changes to the central component, which then creates intervals and stores them in the backend. This is called the construction phase.

Note that the state system needs to receive its events in chronological order. This phase will end once the end of the trace is reached.

Also note that it is possible to query the state system while it is being built. Any timestamp between the start of the trace and the current end time of the state system (available with ITmfStateSystem#getCurrentEndTime()) is a valid timestamp that can be queried.

=== Queries ===

As mentioned previously, when doing queries on the state system, the returned objects will be state intervals. In most cases it is the state ''value'' we are interested in, but since the backend has to instantiate the interval object anyway, there is no additional cost to return the interval instead. This way we also get the start and end times of the state "for free".

There are two types of queries that can be done on the state system:

==== Full queries ====

A full query means that we want to retrieve the whole state of the model for one given timestamp. As we remember, this means "the state of every single attribute in the model". As a parameter we only need to pass the timestamp (see the API methods below). The return value will be an array of intervals, where the offset in the array represents the quark of each attribute.

==== Single queries ====

In other cases, we might only be interested in the state of one particular attribute at one given timestamp. For these cases it is better to use a single query. For a single query, we need to pass both a timestamp and a quark as parameters. The return value will be a single interval, representing the state that this particular attribute was in at that time.

Single queries are typically faster than full queries (but once again, this depends on the backend that is used), although not by much. Even if you only want the state of, say, 10 attributes out of 200, it could be faster to use a full query and only read the ones you need. Single queries should be used for cases where you only want one attribute per timestamp (for example, if you follow the state of the same attribute over a time range).
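
To make the state change → interval → query pipeline concrete, here is a deliberately naive, self-contained sketch. The class names are invented and the queries are linear scans; the real backends are far more efficient, but the semantics (a state change closes one interval and opens the next; a full query returns one interval per quark) are the same:

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Toy model (invented names, not TMF code) of how state changes become
 * intervals, and how single vs. full queries differ. One inner list of
 * intervals per quark; queries are simple linear scans.
 */
public class StateSketch {

    /** A completed interval: [start, end] with a value (null = null value). */
    public static class Interval {
        public final long start, end;
        public final String value;
        Interval(long start, long end, String value) {
            this.start = start; this.end = end; this.value = value;
        }
    }

    private final List<List<Interval>> history = new ArrayList<>(); // per quark
    private final List<Long> curStart = new ArrayList<>();
    private final List<String> curValue = new ArrayList<>();

    /** Create a new attribute; returns its quark. */
    public int addAttribute(long startTime) {
        history.add(new ArrayList<>());
        curStart.add(startTime);
        curValue.add(null); // null value: nothing active yet
        return history.size() - 1;
    }

    /** A state change closes the previous interval and opens a new one. */
    public void modifyAttribute(long t, int quark, String value) {
        history.get(quark).add(new Interval(curStart.get(quark), t - 1, curValue.get(quark)));
        curStart.set(quark, t);
        curValue.set(quark, value);
    }

    /** End of the construction phase: close the last interval of every attribute. */
    public void closeHistory(long t) {
        for (int q = 0; q < history.size(); q++) {
            history.get(q).add(new Interval(curStart.get(q), t, curValue.get(q)));
        }
    }

    /** Single query: the interval containing time t for one quark. */
    public Interval querySingleState(long t, int quark) {
        for (Interval i : history.get(quark)) {
            if (i.start <= t && t <= i.end) {
                return i;
            }
        }
        throw new IllegalArgumentException("time out of range");
    }

    /** Full query: one interval per quark, indexed by quark. */
    public Interval[] queryFullState(long t) {
        Interval[] state = new Interval[history.size()];
        for (int q = 0; q < state.length; q++) {
            state[q] = querySingleState(t, q);
        }
        return state;
    }
}
```

For instance, after changing an attribute to "running" at t=10 and to "idle" at t=20, a single query at t=15 returns the "running" interval, and a full query at t=25 returns an array whose entry at that quark holds "idle".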


== Relevant interfaces/classes ==

This section describes the public interfaces and classes that can be used if you want to use the state system.

=== Main classes in org.eclipse.linuxtools.tmf.core.statesystem ===

==== ITmfStateProvider / AbstractTmfStateProvider ====

ITmfStateProvider is the interface you have to implement to define your state provider. This is where most of the work has to be done to use a state system for a custom trace type or analysis type.

For first-time users, it is recommended to extend AbstractTmfStateProvider instead. This class takes care of all the initialization mumbo-jumbo, and also runs the event handler in a separate thread. You will only need to implement eventHandle(), which is the callback that will be called for every event in the trace.

For an example, you can look at StatsStateProvider in the TMF tree, or at the small example below.

==== TmfStateSystemFactory ====

Once you have defined your state provider, you need to tell your trace type to build a state system with this provider during its initialization. This consists of overriding TmfTrace#buildStateSystems() and, in there, calling the method in TmfStateSystemFactory that corresponds to the storage backend you want to use (see the section [[#Comparison of state system backends]]).

You will have to pass as a parameter the state provider you want to use, which you should have defined already. Each backend can also ask for more configuration information.

You must then call registerStateSystem(id, statesystem) to make your state system visible to the trace objects and the views. The ID can be any string of your choosing. To access this particular state system, the views or modules will need to use this ID.

Also, don't forget to call super.buildStateSystems() in your implementation, unless you know for sure you want to skip the state providers built by the super-classes.

You can look at how LttngKernelTrace does it for an example. It would also be possible to build a state system only under certain conditions (for example, only if the trace contains certain event types).


==== ITmfStateSystem ====

ITmfStateSystem is the main interface through which views or analysis modules will access the state system. It offers a read-only view of the state system, which means that no states can be inserted and no attributes can be created. Calling TmfTrace#getStateSystems().get(id) will return you an ITmfStateSystem view of the requested state system. The main methods of interest are:

===== getQuarkAbsolute()/getQuarkRelative() =====

Those are the basic quark-getting methods. The goal of the state system is to return the state values of given attributes at given timestamps. As we've seen earlier, attributes can be described with a file-system-like path. The goal of these methods is to convert from the path representation of the attribute to its quark.

Since quarks are created on the fly, there is no guarantee that the same attributes will have the same quark for two traces of the same type. The views should always query their quarks when dealing with a new trace or a new state provider. Beyond that however, quarks should be cached and reused as much as possible, to avoid potentially costly string re-hashing.

getQuarkAbsolute() takes a variable number of Strings as parameters, which represent the full path to the attribute. Some of them can be constants, some can come programmatically, often from the event's fields.

getQuarkRelative() is to be used when you already know the quark of a certain attribute, and want to access one of its sub-attributes. Its first parameter is the origin quark, followed by String varargs which represent the relative path to the final attribute.

These two methods will throw an AttributeNotFoundException if trying to access an attribute that does not exist in the model.

These methods also imply that the view has knowledge of how the attribute tree is organized. This should be a reasonable hypothesis, since the same analysis plugin will normally ship both the state provider and the view, and they will have been written by the same person. In other cases, it is possible to use getSubAttributes() to explore the organization of the attribute tree first.

===== waitUntilBuilt() =====

This is a simple method used to block the caller until the construction phase of this state system is done. If the view prefers to wait until all information is available before starting to do queries (to get all known attributes right away, for example), this is the method to call.

===== queryFullState() =====

This is the method to do full queries. As mentioned earlier, you only need to pass a target timestamp as a parameter. It will return a List of state intervals, in which the offset corresponds to the attribute quark. This will represent the complete state of the model at the requested time.

===== querySingleState() =====

This is the method to do single queries. You pass both a timestamp and an attribute quark as parameters. It will return the single state matching this timestamp/attribute pair.

Other methods are available; you are encouraged to read their Javadoc and see if they can be potentially useful.

==== ITmfStateSystemBuilder ====

ITmfStateSystemBuilder is the read-write interface to the state system. It extends ITmfStateSystem itself, so all its methods are available. It then adds methods that can be used to write to the state system, either by creating new attributes or by inserting state changes.

It is normally reserved for the state provider and should not be visible to external components. However, it will be available in AbstractTmfStateProvider, in the field 'ss'. That way you can call ss.modifyAttribute() etc. in your state provider to write to the state.

The main methods of interest are:

===== getQuark*AndAdd() =====

getQuarkAbsoluteAndAdd() and getQuarkRelativeAndAdd() work exactly like their non-AndAdd counterparts in ITmfStateSystem. The difference is that the -AndAdd versions will not throw any exception: if the requested attribute path does not exist in the system, it will be created, and its newly-assigned quark will be returned.

When in a state provider, the -AndAdd versions should normally be used (unless you know for sure the attribute already exists and don't want to create it otherwise). This means that there is no need to define the whole attribute tree in advance; the attributes will be created on demand.

===== modifyAttribute() =====

This is the main state-change-insertion method. As was explained before, a state change is defined by a timestamp, an attribute and a state value. Those three elements need to be passed to modifyAttribute() as parameters.

Other state change insertion methods are available (increment-, push-, pop- and removeAttribute()), but those are simply convenience wrappers around modifyAttribute(). Check their Javadoc for more information.

===== closeHistory() =====

When the construction phase is done, do not forget to call closeHistory() to tell the backend that no more intervals will be received. Depending on the backend type, it might have to save files, close descriptors, etc. This ensures that a persistent file can then be re-used when the trace is opened again.

If you use the AbstractTmfStateProvider, it will call closeHistory() automatically when it reaches the end of the trace.

=== Other relevant interfaces ===

==== o.e.l.tmf.core.statevalue.ITmfStateValue ====

This is the interface used to represent state values. Those are used when inserting state changes in the provider, and are also part of the state intervals obtained when doing queries.

The abstract TmfStateValue class contains the factory methods to create new state values of either int, long, double or string types. To retrieve the real object inside the state value, one can use the .unbox* methods.

Note: do not instantiate null values manually; use TmfStateValue.nullValue().

==== o.e.l.tmf.core.interval.ITmfStateInterval ====

This is the interface to represent the state intervals, which are stored in the state history backend, and are returned when doing state system queries. A very simple implementation is available in TmfStateInterval. Its methods should be self-descriptive.

=== Exceptions ===

The following exceptions, found in o.e.l.tmf.core.exceptions, are related to state system activities.

==== AttributeNotFoundException ====

This is thrown by getQuarkRelative() and getQuarkAbsolute() (but not by the -AndAdd versions!) when passing an attribute path that is not present in the state system. This ensures that no new attribute is created when using these versions of the methods.

Views can expect some attributes to be present, but they should handle these exceptions for when the attributes end up not being in the state system (perhaps this particular trace didn't have a certain type of events, etc.).

==== StateValueTypeException ====

This exception will be thrown when trying to unbox a state value into a type different than its own. You should always check with ITmfStateValue#getType() beforehand if you are not sure about the type of a given state value.

==== TimeRangeException ====

This exception is thrown when trying to do a query on the state system for a timestamp that is outside of its range. To be safe, you should check with ITmfStateSystem#getStartTime() and #getCurrentEndTime() for the current valid range of the state system. This is especially important when doing queries on a state system that is currently being built.

==== StateSystemDisposedException ====

This exception is thrown when trying to access a state system that has been disposed, with its dispose() method. This can potentially happen at shutdown, since Eclipse is not always consistent with the order in which the components are closed.


== Comparison of state system backends ==

As we have seen in section [[#High-level components]], the state system needs a storage backend to save the intervals. Different implementations are available when building your state system from TmfStateSystemFactory.

Do not confuse full/single queries with full/partial history! All backend types should be able to handle any type of query defined in the ITmfStateSystem API, unless noted otherwise.

=== Full history ===

Available with TmfStateSystemFactory#newFullHistory(). The full history uses a History Tree data structure, which is an optimized structure to store state intervals on disk. Once built, it can respond to queries in a ''log(n)'' manner.

You need to specify a file at creation time, which will be the container for the history tree. Once it is completely built, it will remain on disk (until you delete the trace from the project). This way it can be reused from one session to another, which makes subsequent loading times much faster.

This is the backend used by the LTTng kernel plugin. It offers good scalability and performance, even at extreme sizes (it has been tested with traces of sizes up to 500 GB). Its main downside is the amount of disk space required: since every single interval is written to disk, the size of the history file can quite easily reach and even surpass the size of the trace itself.

=== Null history ===

Available with TmfStateSystemFactory#newNullHistory(). As its name implies, the null history is in fact an absence of state history. All its query methods will return null (see the Javadoc in NullBackend).

Obviously, no file is required, and almost no memory space is used.

It is meant to be used in cases where you are not interested in past states, but only in the "ongoing" one. It can also be useful for debugging and benchmarking.

=== In-memory history ===

Available with TmfStateSystemFactory#newInMemHistory(). This is a simple wrapper using a TreeSet to store all state intervals in memory. The implementation at the moment is quite simple: it will perform a binary search on entries when doing queries to find the ones that match.

The advantage of this method is that it is very quick to build and query, since all the information resides in memory. However, you are limited to 2^31 entries (roughly 2 billion), and depending on your state provider and trace type, that limit can be reached really fast!

There are no safeguards, so if you bust the limit you will end up with ArrayIndexOutOfBoundsExceptions everywhere. If your trace or state history can be arbitrarily big, it is probably safer to use a full history instead.
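
The sorted-structure-plus-binary-search idea described above can be sketched with a NavigableMap: keying the (non-overlapping) intervals of one attribute by their start time makes a floor lookup play the role of the binary search. This is an invented illustration, not the actual TMF in-memory backend:

```java
import java.util.Map;
import java.util.TreeMap;

/**
 * Sketch of the lookup idea behind an in-memory backend: the intervals of
 * a single attribute (assumed non-overlapping), keyed by start time in a
 * sorted map. A floor lookup finds the interval covering a query time in
 * O(log n). Invented names -- not the real TMF in-memory backend.
 */
public class InMemLookupSketch {

    /** start time -> { end time, value } */
    private final TreeMap<Long, long[]> byStart = new TreeMap<>();

    public void insert(long start, long end, long value) {
        byStart.put(start, new long[] { end, value });
    }

    /** Returns the value active at time t, or null if no interval covers t. */
    public Long query(long t) {
        Map.Entry<Long, long[]> e = byStart.floorEntry(t); // the binary search
        if (e == null || t > e.getValue()[0]) {
            return null; // t falls before the first interval, or in a gap
        }
        return e.getValue()[1];
    }
}
```

Each query is a single floor lookup, which is why in-memory queries stay fast even with many intervals.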

=== Partial history ===

Available with TmfStateSystemFactory#newPartialHistory(). The partial history is a more advanced form of the full history. Instead of writing all state intervals to disk like the full history does, we only write a small fraction of them, and go back to read the trace to recreate the states in between.

It has a big advantage over a full history in terms of disk space usage. It is quite possible to reduce the history tree file size by a factor of 1000, while keeping query times within a factor of two. Its main downside comes from the fact that you cannot do efficient single queries with it (they are implemented by doing full queries underneath).

This makes it a poor choice for views like the Control Flow view, where you do a lot of range queries and single queries. However, it is a perfect fit for cases like statistics, where you usually do full queries already, and you store lots of small states which are very easy to "compress".

However, it can't really be used until bug 409630 is fixed.

1404 == State System Operations ==
1405
1406 TmfStateSystemOperations is a static class that implements additional
1407 statistical operations that can be performed on attributes of the state system.
1408
1409 These operations require that the attribute be one of the numerical values
1410 (int, long or double).
1411
1412 The speed of these operations can be greatly improved for large data sets if
1413 the attribute was inserted in the state system as a mipmap attribute. Refer to
1414 the [[#Mipmap feature | Mipmap feature]] section.
1415
1416 ===== queryRangeMax() =====
1417
1418 This method returns the maximum numerical value of an attribute in the
1419 specified time range. The attribute must be of type int, long or double.
1420 Null values are ignored. The returned value will be of the same state value
1421 type as the base attribute, or a null value if there is no state interval
1422 stored in the given time range.
1423
1424 ===== queryRangeMin() =====
1425
1426 This method returns the minimum numerical value of an attribute in the
1427 specified time range. The attribute must be of type int, long or double.
1428 Null values are ignored. The returned value will be of the same state value
1429 type as the base attribute, or a null value if there is no state interval
1430 stored in the given time range.
1431
1432 ===== queryRangeAverage() =====
1433
1434 This method returns the average numerical value of an attribute in the
1435 specified time range. The attribute must be of type int, long or double.
1436 Each state interval value is weighted according to time. Null values are
1437 counted as zero. The returned value will be a double primitive, which will
1438 be zero if there is no state interval stored in the given time range.
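
To make the weighting rule concrete, here is a self-contained sketch of the time-weighted averaging that queryRangeAverage() describes: each interval's value is weighted by the time it overlaps the queried range, and null values count as zero. The ''Interval'' type and ''average()'' method are illustrative assumptions, not the Trace Compass API.

```java
/**
 * Sketch of time-weighted averaging over state intervals.
 * Hypothetical types, not the actual TmfStateSystemOperations code.
 */
public class RangeAverageSketch {

    /** An interval [start, end) with a nullable numeric value. */
    static class Interval {
        final long start, end;
        final Long value; // null mimics a null state value
        Interval(long start, long end, Long value) {
            this.start = start;
            this.end = end;
            this.value = value;
        }
    }

    /**
     * Average over [t1, t2], weighting each interval's value by the time
     * it overlaps the range; null values contribute zero.
     */
    static double average(Interval[] intervals, long t1, long t2) {
        double sum = 0;
        for (Interval iv : intervals) {
            long lo = Math.max(iv.start, t1);
            long hi = Math.min(iv.end, t2);
            if (hi > lo && iv.value != null) {
                sum += iv.value * (double) (hi - lo);
            }
        }
        return (t2 > t1) ? sum / (t2 - t1) : 0.0;
    }

    public static void main(String[] args) {
        Interval[] history = {
            new Interval(0, 10, 4L),    // value 4 for 10 time units
            new Interval(10, 20, null), // null counts as zero
            new Interval(20, 40, 2L),   // value 2 for 20 time units
        };
        // (4*10 + 0*10 + 2*20) / 40 = 2.0
        System.out.println(average(history, 0, 40)); // prints 2.0
    }
}
```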
1439
1440 == Code example ==
1441
1442 Here is a small example of code that will use the state system. For this
example, let's assume we want to track the state of all the CPUs in an LTTng
1444 kernel trace. To do so, we will watch for the "sched_switch" event in the state
1445 provider, and will update an attribute indicating if the associated CPU should
1446 be set to "running" or "idle".
1447
1448 We will use an attribute tree that looks like this:
1449 <pre>
1450 CPUs
1451 |--0
1452 | |--Status
1453 |
1454 |--1
1455 | |--Status
1456 |
|--2
1458 | |--Status
1459 ...
1460 </pre>
1461
1462 The second-level attributes will be named from the information available in the
1463 trace events. Only the "Status" attributes will carry a state value (this means
1464 we could have just used "1", "2", "3",... directly, but we'll do it in a tree
1465 for the example's sake).
1466
1467 Also, we will use integer state values to represent "running" or "idle", instead
1468 of saving the strings that would get repeated every time. This will help in
1469 reducing the size of the history file.
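
Internally, the state system maps each attribute path (such as "CPUs/0/Status") to a small integer called a quark. The interning idea behind getQuarkAbsoluteAndAdd() can be sketched as follows; the class below is a hypothetical illustration, not the real implementation.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Sketch of attribute-path interning: each full path is assigned a
 * stable integer (a "quark") the first time it is seen, mimicking the
 * behavior of getQuarkAbsoluteAndAdd(). Illustrative only.
 */
public class QuarkTableSketch {

    private final Map<String, Integer> quarks = new HashMap<>();

    /** Return the quark for this path, creating one if needed. */
    public int getQuarkAbsoluteAndAdd(String... path) {
        String key = String.join("/", path);
        Integer q = quarks.get(key);
        if (q == null) {
            q = quarks.size(); // next free quark
            quarks.put(key, q);
        }
        return q;
    }

    public static void main(String[] args) {
        QuarkTableSketch table = new QuarkTableSketch();
        int q0 = table.getQuarkAbsoluteAndAdd("CPUs", "0", "Status");
        int q1 = table.getQuarkAbsoluteAndAdd("CPUs", "1", "Status");
        // Asking again for the same path returns the same quark
        System.out.println(q0 == table.getQuarkAbsoluteAndAdd("CPUs", "0", "Status")); // prints true
        System.out.println(q0 != q1); // prints true
    }
}
```

Once a quark is known, all further accesses use the integer rather than the string path, which is what makes queries cheap.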
1470
1471 First we will define a state provider in MyStateProvider. Then, assuming we
1472 have already implemented a custom trace type extending CtfTmfTrace, we will add
1473 a section to it to make it build a state system using the provider we defined
1474 earlier. Finally, we will show some example code that can query the state
1475 system, which would normally go in a view or analysis module.
1476
1477 === State Provider ===
1478
1479 <pre>
1480 import org.eclipse.linuxtools.tmf.core.ctfadaptor.CtfTmfEvent;
1481 import org.eclipse.linuxtools.tmf.core.event.ITmfEvent;
1482 import org.eclipse.linuxtools.tmf.core.exceptions.AttributeNotFoundException;
1483 import org.eclipse.linuxtools.tmf.core.exceptions.StateValueTypeException;
1484 import org.eclipse.linuxtools.tmf.core.exceptions.TimeRangeException;
1485 import org.eclipse.linuxtools.tmf.core.statesystem.AbstractTmfStateProvider;
1486 import org.eclipse.linuxtools.tmf.core.statevalue.ITmfStateValue;
1487 import org.eclipse.linuxtools.tmf.core.statevalue.TmfStateValue;
1488 import org.eclipse.linuxtools.tmf.core.trace.ITmfTrace;
1489
1490 /**
1491 * Example state system provider.
1492 *
1493 * @author Alexandre Montplaisir
1494 */
1495 public class MyStateProvider extends AbstractTmfStateProvider {
1496
1497 /** State value representing the idle state */
1498 public static ITmfStateValue IDLE = TmfStateValue.newValueInt(0);
1499
1500 /** State value representing the running state */
1501 public static ITmfStateValue RUNNING = TmfStateValue.newValueInt(1);
1502
1503 /**
1504 * Constructor
1505 *
1506 * @param trace
1507 * The trace to which this state provider is associated
1508 */
1509 public MyStateProvider(ITmfTrace trace) {
1510 super(trace, CtfTmfEvent.class, "Example"); //$NON-NLS-1$
1511 /*
1512 * The third parameter here is not important, it's only used to name a
1513 * thread internally.
1514 */
1515 }
1516
1517 @Override
1518 public int getVersion() {
1519 /*
1520 * If the version of an existing file doesn't match the version supplied
1521 * in the provider, a rebuild of the history will be forced.
1522 */
1523 return 1;
1524 }
1525
1526 @Override
1527 public MyStateProvider getNewInstance() {
1528 return new MyStateProvider(getTrace());
1529 }
1530
1531 @Override
1532 protected void eventHandle(ITmfEvent ev) {
1533 /*
1534 * AbstractStateChangeInput should have already checked for the correct
1535 * class type.
1536 */
1537 CtfTmfEvent event = (CtfTmfEvent) ev;
1538
        final long ts = event.getTimestamp().getValue();

        try {

            if (event.getEventName().equals("sched_switch")) {
                /*
                 * Read the "next_tid" field only once we know this is a
                 * sched_switch event; other event types may not define it.
                 */
                Integer nextTid = ((Long) event.getContent().getField("next_tid").getValue()).intValue();

                int quark = ss.getQuarkAbsoluteAndAdd("CPUs", String.valueOf(event.getCPU()), "Status");
                ITmfStateValue value;
                if (nextTid > 0) {
                    value = RUNNING;
                } else {
                    value = IDLE;
                }
                ss.modifyAttribute(ts, value, quark);
            }
1554
1555 } catch (TimeRangeException e) {
1556 /*
1557 * This should not happen, since the timestamp comes from a trace
1558 * event.
1559 */
1560 throw new IllegalStateException(e);
1561 } catch (AttributeNotFoundException e) {
1562 /*
1563 * This should not happen either, since we're only accessing a quark
1564 * we just created.
1565 */
1566 throw new IllegalStateException(e);
1567 } catch (StateValueTypeException e) {
1568 /*
1569 * This wouldn't happen here, but could potentially happen if we try
1570 * to insert mismatching state value types in the same attribute.
1571 */
1572 e.printStackTrace();
1573 }
1574
1575 }
1576
1577 }
1578 </pre>
1579
1580 === Trace type definition ===
1581
1582 <pre>
1583 import java.io.File;
1584
1585 import org.eclipse.core.resources.IProject;
1586 import org.eclipse.core.runtime.IStatus;
1587 import org.eclipse.core.runtime.Status;
1588 import org.eclipse.linuxtools.tmf.core.ctfadaptor.CtfTmfTrace;
1589 import org.eclipse.linuxtools.tmf.core.exceptions.TmfTraceException;
1590 import org.eclipse.linuxtools.tmf.core.statesystem.ITmfStateProvider;
1591 import org.eclipse.linuxtools.tmf.core.statesystem.ITmfStateSystem;
1592 import org.eclipse.linuxtools.tmf.core.statesystem.TmfStateSystemFactory;
1593 import org.eclipse.linuxtools.tmf.core.trace.TmfTraceManager;
1594
1595 /**
1596 * Example of a custom trace type using a custom state provider.
1597 *
1598 * @author Alexandre Montplaisir
1599 */
1600 public class MyTraceType extends CtfTmfTrace {
1601
1602 /** The file name of the history file */
1603 public final static String HISTORY_FILE_NAME = "mystatefile.ht";
1604
1605 /** ID of the state system we will build */
1606 public static final String STATE_ID = "org.eclipse.linuxtools.lttng2.example";
1607
1608 /**
1609 * Default constructor
1610 */
1611 public MyTraceType() {
1612 super();
1613 }
1614
1615 @Override
1616 public IStatus validate(final IProject project, final String path) {
1617 /*
1618 * Add additional validation code here, and return a IStatus.ERROR if
1619 * validation fails.
1620 */
1621 return Status.OK_STATUS;
1622 }
1623
1624 @Override
1625 protected void buildStateSystem() throws TmfTraceException {
1626 super.buildStateSystem();
1627
1628 /* Build the custom state system for this trace */
1629 String directory = TmfTraceManager.getSupplementaryFileDir(this);
1630 final File htFile = new File(directory + HISTORY_FILE_NAME);
1631 final ITmfStateProvider htInput = new MyStateProvider(this);
1632
1633 ITmfStateSystem ss = TmfStateSystemFactory.newFullHistory(htFile, htInput, false);
1634 fStateSystems.put(STATE_ID, ss);
1635 }
1636
1637 }
1638 </pre>
1639
1640 === Query code ===
1641
1642 <pre>
1643 import java.util.List;
1644
1645 import org.eclipse.linuxtools.tmf.core.exceptions.AttributeNotFoundException;
1646 import org.eclipse.linuxtools.tmf.core.exceptions.StateSystemDisposedException;
1647 import org.eclipse.linuxtools.tmf.core.exceptions.TimeRangeException;
1648 import org.eclipse.linuxtools.tmf.core.interval.ITmfStateInterval;
1649 import org.eclipse.linuxtools.tmf.core.statesystem.ITmfStateSystem;
1650 import org.eclipse.linuxtools.tmf.core.statevalue.ITmfStateValue;
1651 import org.eclipse.linuxtools.tmf.core.trace.ITmfTrace;
1652
1653 /**
1654 * Class showing examples of state system queries.
1655 *
1656 * @author Alexandre Montplaisir
1657 */
1658 public class QueryExample {
1659
1660 private final ITmfStateSystem ss;
1661
1662 /**
1663 * Constructor
1664 *
1665 * @param trace
1666 * Trace that this "view" will display.
1667 */
1668 public QueryExample(ITmfTrace trace) {
1669 ss = trace.getStateSystems().get(MyTraceType.STATE_ID);
1670 }
1671
1672 /**
1673 * Example method of querying one attribute in the state system.
1674 *
1675 * We pass it a cpu and a timestamp, and it returns us if that cpu was
1676 * executing a process (true/false) at that time.
1677 *
1678 * @param cpu
1679 * The CPU to check
1680 * @param timestamp
1681 * The timestamp of the query
1682 * @return True if the CPU was running, false otherwise
1683 */
1684 public boolean cpuIsRunning(int cpu, long timestamp) {
1685 try {
1686 int quark = ss.getQuarkAbsolute("CPUs", String.valueOf(cpu), "Status");
1687 ITmfStateValue value = ss.querySingleState(timestamp, quark).getStateValue();
1688
1689 if (value.equals(MyStateProvider.RUNNING)) {
1690 return true;
1691 }
1692
1693 /*
1694 * Since at this level we have no guarantee on the contents of the state
1695 * system, it's important to handle these cases correctly.
1696 */
1697 } catch (AttributeNotFoundException e) {
1698 /*
1699 * Handle the case where the attribute does not exist in the state
1700 * system (no CPU with this number, etc.)
1701 */
1702 ...
1703 } catch (TimeRangeException e) {
1704 /*
1705 * Handle the case where 'timestamp' is outside of the range of the
1706 * history.
1707 */
1708 ...
1709 } catch (StateSystemDisposedException e) {
1710 /*
1711 * Handle the case where the state system is being disposed. If this
1712 * happens, it's normally when shutting down, so the view can just
1713 * return immediately and wait it out.
1714 */
1715 }
1716 return false;
1717 }
1718
1719
1720 /**
1721 * Example method of using a full query.
1722 *
1723 * We pass it a timestamp, and it returns us how many CPUs were executing a
1724 * process at that moment.
1725 *
1726 * @param timestamp
1727 * The target timestamp
1728 * @return The amount of CPUs that were running at that time
1729 */
1730 public int getNbRunningCpus(long timestamp) {
1731 int count = 0;
1732
1733 try {
1734 /* Get the list of the quarks we are interested in. */
1735 List<Integer> quarks = ss.getQuarks("CPUs", "*", "Status");
1736
1737 /*
1738 * Get the full state at our target timestamp (it's better than
1739 * doing an arbitrary number of single queries).
1740 */
1741 List<ITmfStateInterval> state = ss.queryFullState(timestamp);
1742
1743 /* Look at the value of the state for each quark */
1744 for (Integer quark : quarks) {
1745 ITmfStateValue value = state.get(quark).getStateValue();
1746 if (value.equals(MyStateProvider.RUNNING)) {
1747 count++;
1748 }
1749 }
1750
1751 } catch (TimeRangeException e) {
1752 /*
1753 * Handle the case where 'timestamp' is outside of the range of the
1754 * history.
1755 */
1756 ...
1757 } catch (StateSystemDisposedException e) {
1758 /* Handle the case where the state system is being disposed. */
1759 ...
1760 }
1761 return count;
1762 }
1763 }
1764 </pre>
1765
1766 == Mipmap feature ==
1767
1768 The mipmap feature allows attributes to be inserted into the state system with
1769 additional computations performed to automatically store sub-attributes that
1770 can later be used for statistical operations. The mipmap has a resolution which
1771 represents the number of state attribute changes that are used to compute the
1772 value at the next mipmap level.
1773
1774 The supported mipmap features are: max, min, and average. Each one of these
1775 features requires that the base attribute be a numerical state value (int, long
1776 or double). An attribute can be mipmapped for one or more of the features at
1777 the same time.
1778
1779 To use a mipmapped attribute in queries, call the corresponding methods of the
1780 static class [[#State System Operations | TmfStateSystemOperations]].
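
To make the notion of resolution concrete, here is a self-contained sketch (not the Trace Compass implementation) that builds one "max" mipmap level with a resolution of 4: every 4 consecutive base values are collapsed into a single level-1 value, and the same reduction could be repeated to build higher levels.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

/**
 * Sketch of one level of a "max" mipmap with a given resolution:
 * every `resolution` consecutive base values are reduced to their
 * maximum. Illustrative only, not the Trace Compass implementation.
 */
public class MipmapSketch {

    static List<Long> buildMaxLevel(List<Long> base, int resolution) {
        List<Long> level = new ArrayList<>();
        for (int i = 0; i < base.size(); i += resolution) {
            long max = Long.MIN_VALUE;
            for (int j = i; j < Math.min(i + resolution, base.size()); j++) {
                max = Math.max(max, base.get(j));
            }
            level.add(max);
        }
        return level;
    }

    public static void main(String[] args) {
        List<Long> base = Arrays.asList(3L, 7L, 1L, 5L, 9L, 2L, 8L, 4L);
        // With resolution 4: max(3,7,1,5) = 7 and max(9,2,8,4) = 9
        System.out.println(buildMaxLevel(base, 4)); // prints [7, 9]
    }
}
```

A range query can then consult the coarse level first and only descend to the base values at the edges of the range, which is what speeds up the statistical operations above.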
1781
1782 === AbstractTmfMipmapStateProvider ===
1783
AbstractTmfMipmapStateProvider is an abstract provider class that allows computing
mipmap features for a given attribute and storing them in a mipmap tree. It extends AbstractTmfStateProvider.
1786
1787 If a provider wants to add mipmapped attributes to its tree, it must extend
1788 AbstractTmfMipmapStateProvider and call modifyMipmapAttribute() in the event
1789 handler, specifying one or more mipmap features to compute. Then the structure
of the attribute tree will be:
1791
1792 <pre>
1793 |- <attribute>
1794 | |- <mipmapFeature> (min/max/avg)
1795 | | |- 1
1796 | | |- 2
1797 | | |- 3
1798 | | ...
1799 | | |- n (maximum mipmap level)
1800 | |- <mipmapFeature> (min/max/avg)
1801 | | |- 1
1802 | | |- 2
1803 | | |- 3
1804 | | ...
1805 | | |- n (maximum mipmap level)
1806 | ...
1807 </pre>
1808
1809 = UML2 Sequence Diagram Framework =
1810
1811 The purpose of the UML2 Sequence Diagram Framework of TMF is to provide a framework for generation of UML2 sequence diagrams. It provides
1812 *UML2 Sequence diagram drawing capabilities (i.e. lifelines, messages, activations, object creation and deletion)
1813 *a generic, re-usable Sequence Diagram View
1814 *Eclipse Extension Point for the creation of sequence diagrams
1815 *callback hooks for searching and filtering within the Sequence Diagram View
1816 *scalability<br>
1817 The following chapters describe the Sequence Diagram Framework as well as a reference implementation and its usage.
1818
1819 == TMF UML2 Sequence Diagram Extensions ==
1820
In the UML2 Sequence Diagram Framework an Eclipse extension point is defined so that other plug-ins can contribute code to create sequence diagrams.
1822
1823 '''Identifier''': org.eclipse.linuxtools.tmf.ui.uml2SDLoader<br>
1824 '''Since''': 1.0<br>
1825 '''Description''': This extension point aims to list and connect any UML2 Sequence Diagram loader.<br>
1826 '''Configuration Markup''':<br>
1827
1828 <pre>
1829 <!ELEMENT extension (uml2SDLoader)+>
1830 <!ATTLIST extension
1831 point CDATA #REQUIRED
1832 id CDATA #IMPLIED
1833 name CDATA #IMPLIED
1834 >
1835 </pre>
1836
1837 *point - A fully qualified identifier of the target extension point.
1838 *id - An optional identifier of the extension instance.
1839 *name - An optional name of the extension instance.
1840
1841 <pre>
1842 <!ELEMENT uml2SDLoader EMPTY>
1843 <!ATTLIST uml2SDLoader
1844 id CDATA #REQUIRED
1845 name CDATA #REQUIRED
1846 class CDATA #REQUIRED
1847 view CDATA #REQUIRED
default (true | false)
>
1849 </pre>
1850
*id - A unique identifier for this uml2SDLoader. It is optional as long as the provider plug-in does not need to retrieve the loader by id; the class attribute is the one on which the underlying algorithm relies.
*name - A name of the extension instance.
*class - The implementation of this UML2 SD viewer loader. The class must implement org.eclipse.linuxtools.tmf.ui.views.uml2sd.load.IUml2SDLoader.
*view - The view ID of the view that this loader aims to populate. Either org.eclipse.linuxtools.tmf.ui.views.uml2sd.SDView itself or a subclass of it.
*default - Set to true to make this loader the default one for the view; if several loaders are marked as default, the first one in the extension list is used.
1856
1857
1858 == Management of the Extension Point ==
1859
1860 The TMF UI plug-in is responsible for evaluating each contribution to the extension point.
1861 <br>
1862 <br>
With this extension point, a loader class is associated with a Sequence Diagram View. Multiple loaders can be associated to a single Sequence Diagram View. However, additional means have to be implemented to specify which loader should be used when opening the view. For example, an Eclipse action or command could be used for that. This additional code is not necessary if only one loader is associated with a given Sequence Diagram View and this loader has the attribute "default" set to "true". (see also [[#Using one Sequence Diagram View with Multiple Loaders | Using one Sequence Diagram View with Multiple Loaders]])
1864
1865 == Sequence Diagram View ==
1866
1867 For this extension point a Sequence Diagram View has to be defined as well. The Sequence Diagram View class implementation is provided by the plug-in ''org.eclipse.linuxtools.tmf.ui'' (''org.eclipse.linuxtools.tmf.ui.views.uml2sd.SDView'') and can be used as is or can also be sub-classed. For that, a view extension has to be added to the ''plugin.xml''.
1868
1869 === Supported Widgets ===
1870
1871 The loader class provides a frame containing all the UML2 widgets to be displayed. The following widgets exist:
1872
1873 *Lifeline
1874 *Activation
1875 *Synchronous Message
1876 *Asynchronous Message
1877 *Synchronous Message Return
1878 *Asynchronous Message Return
1879 *Stop
1880
1881 For a lifeline, a category can be defined. The lifeline category defines icons, which are displayed in the lifeline header.
1882
1883 === Zooming ===
1884
1885 The Sequence Diagram View allows the user to zoom in, zoom out and reset the zoom factor.
1886
1887 === Printing ===
1888
1889 It is possible to print the whole sequence diagram as well as part of it.
1890
1891 === Key Bindings ===
1892
1893 *SHIFT+ALT+ARROW-DOWN - to scroll down within sequence diagram one view page at a time
1894 *SHIFT+ALT+ARROW-UP - to scroll up within sequence diagram one view page at a time
1895 *SHIFT+ALT+ARROW-RIGHT - to scroll right within sequence diagram one view page at a time
1896 *SHIFT+ALT+ARROW-LEFT - to scroll left within sequence diagram one view page at a time
1897 *SHIFT+ALT+ARROW-HOME - to jump to the beginning of the selected message if not already visible in page
1898 *SHIFT+ALT+ARROW-END - to jump to the end of the selected message if not already visible in page
1899 *CTRL+F - to open find dialog if either the basic or extended find provider is defined (see [[#Using the Find Provider Interface | Using the Find Provider Interface]])
1900 *CTRL+P - to open print dialog
1901
1902 === Preferences ===
1903
The UML2 Sequence Diagram Framework provides preferences to customize the appearance of the Sequence Diagram View. The color of all widgets and text as well as the fonts of the text of all widgets can be adjusted. Among others, the default lifeline width can be altered. To change preferences select '''Window->Preferences->Tracing->UML2 Sequence Diagrams'''. The following preference page will show:<br>
1905 [[Image:images/SeqDiagramPref.png]] <br>
1906 After changing the preferences select '''OK'''.
1907
1908 === Callback hooks ===
1909
The Sequence Diagram View provides several callback hooks so that extensions can provide application-specific functionality. The following interfaces can be provided:
* Basic find provider or extended find provider<br> For finding within the sequence diagram
* Basic filter provider and extended filter provider<br> For filtering within the sequence diagram.
* Basic paging provider or advanced paging provider<br> For scalability reasons, used to limit the number of displayed messages
* Properties provider<br> To provide properties of selected elements
* Collapse provider <br> To collapse areas of the sequence diagram
1916
1917 == Tutorial ==
1918
This tutorial describes how to create a UML2 Sequence Diagram Loader extension and use this loader in Eclipse.
1920
1921 === Prerequisites ===
1922
1923 The tutorial is based on Eclipse 4.4 (Eclipse Luna) and TMF 3.0.0.
1924
1925 === Creating an Eclipse UI Plug-in ===
1926
1927 To create a new project with name org.eclipse.linuxtools.tmf.sample.ui select '''File -> New -> Project -> Plug-in Development -> Plug-in Project'''. <br>
1928 [[Image:images/Screenshot-NewPlug-inProject1.png]]<br>
1929
1930 [[Image:images/Screenshot-NewPlug-inProject2.png]]<br>
1931
1932 [[Image:images/Screenshot-NewPlug-inProject3.png]]<br>
1933
1934 === Creating a Sequence Diagram View ===
1935
1936 To open the plug-in manifest, double-click on the MANIFEST.MF file. <br>
1937 [[Image:images/SelectManifest.png]]<br>
1938
1939 Change to the Dependencies tab and select '''Add...''' of the ''Required Plug-ins'' section. A new dialog box will open. Next find plug-ins ''org.eclipse.linuxtools.tmf.ui'' and ''org.eclipse.linuxtools.tmf.core'' and then press '''OK'''<br>
1940 [[Image:images/AddDependencyTmfUi.png]]<br>
1941
Change to the Extensions tab and select '''Add...''' of the ''All Extensions'' section. A new dialog box will open. Find the view extension ''org.eclipse.ui.views'' and press '''Finish'''.<br>
1943 [[Image:images/AddViewExtension1.png]]<br>
1944
1945 To create a Sequence Diagram View, click the right mouse button. Then select '''New -> view'''<br>
1946 [[Image:images/AddViewExtension2.png]]<br>
1947
A new view entry has been created. Fill in the fields ''id'', ''name'' and ''class''. Note that for ''class'' the SD view implementation (''org.eclipse.linuxtools.tmf.ui.views.uml2sd.SDView'') of the TMF UI plug-in is used.<br>
1949 [[Image:images/FillSampleSeqDiagram.png]]<br>
1950
The view is prepared. Now run the example: to launch an Eclipse Application select the ''Overview'' tab and click on '''Launch an Eclipse Application'''<br>
1952 [[Image:images/RunEclipseApplication.png]]<br>
1953
A new Eclipse application window will show. In the new window go to '''Window -> Show View -> Other... -> Other -> Sample Sequence Diagram'''.<br>
1955 [[Image:images/ShowViewOther.png]]<br>
1956
The Sequence Diagram View will open with a blank page.<br>
1958 [[Image:images/BlankSampleSeqDiagram.png]]<br>
1959
1960 Close the Example Application.
1961
1962 === Defining the uml2SDLoader Extension ===
1963
1964 After defining the Sequence Diagram View it's time to create the ''uml2SDLoader'' Extension. <br>
1965
1966 Before doing that add a dependency to TMF. For that select '''Add...''' of the ''Required Plug-ins'' section. A new dialog box will open. Next find plug-in ''org.eclipse.linuxtools.tmf'' and press '''OK'''<br>
1967 [[Image:images/AddDependencyTmf.png]]<br>
1968
To create the loader extension, change to the Extensions tab and select '''Add...''' of the ''All Extensions'' section. A new dialog box will open. Find the extension ''org.eclipse.linuxtools.tmf.ui.uml2SDLoader'' and press '''Finish'''.<br>
1970 [[Image:images/AddTmfUml2SDLoader.png]]<br>
1971
A new ''uml2SDLoader'' extension has been created. Fill in the fields ''id'', ''name'', ''class'', ''view'' and ''default''. Set ''default'' to true for this example. For the view, add the id of the Sequence Diagram View of chapter [[#Creating a Sequence Diagram View | Creating a Sequence Diagram View]]. <br>
1973 [[Image:images/FillSampleLoader.png]]<br>
1974
1975 Then click on ''class'' (see above) to open the new class dialog box. Fill in the relevant fields and select '''Finish'''. <br>
1976 [[Image:images/NewSampleLoaderClass.png]]<br>
1977
1978 A new Java class will be created which implements the interface ''org.eclipse.linuxtools.tmf.ui.views.uml2sd.load.IUml2SDLoader''.<br>
1979
1980 <pre>
1981 package org.eclipse.linuxtools.tmf.sample.ui;
1982
1983 import org.eclipse.linuxtools.tmf.ui.views.uml2sd.SDView;
1984 import org.eclipse.linuxtools.tmf.ui.views.uml2sd.load.IUml2SDLoader;
1985
1986 public class SampleLoader implements IUml2SDLoader {
1987
1988 public SampleLoader() {
1989 // TODO Auto-generated constructor stub
1990 }
1991
1992 @Override
1993 public void dispose() {
1994 // TODO Auto-generated method stub
1995
1996 }
1997
1998 @Override
1999 public String getTitleString() {
2000 // TODO Auto-generated method stub
2001 return null;
2002 }
2003
2004 @Override
2005 public void setViewer(SDView arg0) {
2006 // TODO Auto-generated method stub
2007
	}
}
</pre>
2010
2011 === Implementing the Loader Class ===
2012
Next is to implement the methods of the ''IUml2SDLoader'' interface. The following code snippet shows how to create the major sequence diagram elements. Please note that no time information is stored.<br>
2014
2015 <pre>
2016 package org.eclipse.linuxtools.tmf.sample.ui;
2017
2018 import org.eclipse.linuxtools.tmf.ui.views.uml2sd.SDView;
2019 import org.eclipse.linuxtools.tmf.ui.views.uml2sd.core.AsyncMessage;
2020 import org.eclipse.linuxtools.tmf.ui.views.uml2sd.core.AsyncMessageReturn;
import org.eclipse.linuxtools.tmf.ui.views.uml2sd.core.EllipsisMessage;
import org.eclipse.linuxtools.tmf.ui.views.uml2sd.core.ExecutionOccurrence;
2022 import org.eclipse.linuxtools.tmf.ui.views.uml2sd.core.Frame;
2023 import org.eclipse.linuxtools.tmf.ui.views.uml2sd.core.Lifeline;
2024 import org.eclipse.linuxtools.tmf.ui.views.uml2sd.core.Stop;
2025 import org.eclipse.linuxtools.tmf.ui.views.uml2sd.core.SyncMessage;
2026 import org.eclipse.linuxtools.tmf.ui.views.uml2sd.core.SyncMessageReturn;
2027 import org.eclipse.linuxtools.tmf.ui.views.uml2sd.load.IUml2SDLoader;
2028
2029 public class SampleLoader implements IUml2SDLoader {
2030
2031 private SDView fSdView;
2032
2033 public SampleLoader() {
2034 }
2035
2036 @Override
2037 public void dispose() {
2038 }
2039
2040 @Override
2041 public String getTitleString() {
2042 return "Sample Diagram";
2043 }
2044
2045 @Override
2046 public void setViewer(SDView arg0) {
2047 fSdView = arg0;
2048 createFrame();
2049 }
2050
2051 private void createFrame() {
2052
2053 Frame testFrame = new Frame();
2054 testFrame.setName("Sample Frame");
2055
2056 /*
2057 * Create lifelines
2058 */
2059
2060 Lifeline lifeLine1 = new Lifeline();
2061 lifeLine1.setName("Object1");
2062 testFrame.addLifeLine(lifeLine1);
2063
2064 Lifeline lifeLine2 = new Lifeline();
2065 lifeLine2.setName("Object2");
2066 testFrame.addLifeLine(lifeLine2);
2067
2068
2069 /*
2070 * Create Sync Message
2071 */
2072 // Get new occurrence on lifelines
2073 lifeLine1.getNewEventOccurrence();
2074
2075 // Get Sync message instances
2076 SyncMessage start = new SyncMessage();
2077 start.setName("Start");
2078 start.setEndLifeline(lifeLine1);
2079 testFrame.addMessage(start);
2080
2081 /*
2082 * Create Sync Message
2083 */
2084 // Get new occurrence on lifelines
2085 lifeLine1.getNewEventOccurrence();
2086 lifeLine2.getNewEventOccurrence();
2087
2088 // Get Sync message instances
2089 SyncMessage syn1 = new SyncMessage();
2090 syn1.setName("Sync Message 1");
2091 syn1.setStartLifeline(lifeLine1);
2092 syn1.setEndLifeline(lifeLine2);
2093 testFrame.addMessage(syn1);
2094
2095 /*
2096 * Create corresponding Sync Message Return
2097 */
2098
2099 // Get new occurrence on lifelines
2100 lifeLine1.getNewEventOccurrence();
2101 lifeLine2.getNewEventOccurrence();
2102
2103 SyncMessageReturn synReturn1 = new SyncMessageReturn();
2104 synReturn1.setName("Sync Message Return 1");
2105 synReturn1.setStartLifeline(lifeLine2);
2106 synReturn1.setEndLifeline(lifeLine1);
2107 synReturn1.setMessage(syn1);
2108 testFrame.addMessage(synReturn1);
2109
2110 /*
2111 * Create Activations (Execution Occurrence)
2112 */
2113 ExecutionOccurrence occ1 = new ExecutionOccurrence();
2114 occ1.setStartOccurrence(start.getEventOccurrence());
2115 occ1.setEndOccurrence(synReturn1.getEventOccurrence());
2116 lifeLine1.addExecution(occ1);
2117 occ1.setName("Activation 1");
2118
2119 ExecutionOccurrence occ2 = new ExecutionOccurrence();
2120 occ2.setStartOccurrence(syn1.getEventOccurrence());
2121 occ2.setEndOccurrence(synReturn1.getEventOccurrence());
2122 lifeLine2.addExecution(occ2);
2123 occ2.setName("Activation 2");
2124
		/*
		 * Create Async Message
		 */
2128 // Get new occurrence on lifelines
2129 lifeLine1.getNewEventOccurrence();
2130 lifeLine2.getNewEventOccurrence();
2131
		// Get Async message instances
2133 AsyncMessage asyn1 = new AsyncMessage();
2134 asyn1.setName("Async Message 1");
2135 asyn1.setStartLifeline(lifeLine1);
2136 asyn1.setEndLifeline(lifeLine2);
2137 testFrame.addMessage(asyn1);
2138
2139 /*
	 * Create corresponding Async Message Return
2141 */
2142
2143 // Get new occurrence on lifelines
2144 lifeLine1.getNewEventOccurrence();
2145 lifeLine2.getNewEventOccurrence();
2146
2147 AsyncMessageReturn asynReturn1 = new AsyncMessageReturn();
2148 asynReturn1.setName("Async Message Return 1");
2149 asynReturn1.setStartLifeline(lifeLine2);
2150 asynReturn1.setEndLifeline(lifeLine1);
2151 asynReturn1.setMessage(asyn1);
2152 testFrame.addMessage(asynReturn1);
2153
2154 /*
2155 * Create a note
2156 */
2157
2158 // Get new occurrence on lifelines
2159 lifeLine1.getNewEventOccurrence();
2160
2161 EllipsisMessage info = new EllipsisMessage();
2162 info.setName("Object deletion");
2163 info.setStartLifeline(lifeLine2);
2164 testFrame.addNode(info);
2165
2166 /*
2167 * Create a Stop
2168 */
2169 Stop stop = new Stop();
2170 stop.setLifeline(lifeLine2);
2171 stop.setEventOccurrence(lifeLine2.getNewEventOccurrence());
2172 lifeLine2.addNode(stop);
2173
2174 fSdView.setFrame(testFrame);
2175 }
2176 }
2177 </pre>
2178
2179 Now it's time to run the example application. To launch the Example Application select the ''Overview'' tab and click on '''Launch an Eclipse Application'''<br>
2180 [[Image:images/SampleDiagram1.png]] <br>
2181
2182 === Adding time information ===
2183
To add time information to a sequence diagram, the timestamp has to be set for each message. The sequence diagram framework uses the ''TmfTimestamp'' class of plug-in ''org.eclipse.linuxtools.tmf.core''. Use ''setTime()'' on each ''SyncMessage'' since start and end time are the same. For each ''AsyncMessage'', set start and end time separately using ''setStartTime()'' and ''setEndTime()''. For example: <br>
2185
2186 <pre>
2187 private void createFrame() {
2188 //...
2189 start.setTime(new TmfTimestamp(1000, -3));
2190 syn1.setTime(new TmfTimestamp(1005, -3));
2191 synReturn1.setTime(new TmfTimestamp(1050, -3));
2192 asyn1.setStartTime(new TmfTimestamp(1060, -3));
2193 asyn1.setEndTime(new TmfTimestamp(1070, -3));
2194 asynReturn1.setStartTime(new TmfTimestamp(1060, -3));
2195 asynReturn1.setEndTime(new TmfTimestamp(1070, -3));
2196 //...
2197 }
2198 </pre>
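
For reference, a ''TmfTimestamp'' holds a value and a scale (a power-of-ten exponent), so ''new TmfTimestamp(1000, -3)'' above represents 1000 × 10^-3 = 1 second. A quick self-contained sketch of this encoding (hypothetical helper, not the TMF class itself):

```java
/**
 * Sketch of the (value, scale) timestamp encoding used above: a
 * timestamp represents value * 10^scale seconds. Illustrative only,
 * not the actual TmfTimestamp implementation.
 */
public class TimestampSketch {

    /** Convert a (value, scale) pair to seconds. */
    static double toSeconds(long value, int scale) {
        return value * Math.pow(10, scale);
    }

    public static void main(String[] args) {
        // new TmfTimestamp(1000, -3): 1000 * 10^-3 = 1.0 second
        System.out.println(toSeconds(1000, -3)); // prints 1.0
        // new TmfTimestamp(1050, -3): 1.05 seconds
        System.out.println(toSeconds(1050, -3));
    }
}
```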
2199
When running the example application, a time compression bar appears on the left which indicates the time elapsed between consecutive events. The time compression scale shows where the time falls between the minimum and maximum delta times. The intensity of the color is used to indicate the length of time: the deeper the intensity, the higher the delta time. The minimum and maximum delta times are configurable through the coolbar menu ''Configure Min Max''. The time compression bar and scale may provide an indication about which events consume the most time. By hovering over the time compression bar a tooltip appears containing more information. <br>
2201
2202 [[Image:images/SampleDiagramTimeComp.png]] <br>
2203
By hovering over a message, the time information is shown in a tooltip. For each ''SyncMessage'' it shows its time of occurrence and for each ''AsyncMessage'' it shows the start and end time.
2205
2206 [[Image:images/SampleDiagramSyncMessage.png]] <br>
2207 [[Image:images/SampleDiagramAsyncMessage.png]] <br>
2208
2209 To see the time elapsed between 2 messages, select one message and hover over a second message. A tooltip will show with the delta in time. Note if the second message is before the first then a negative delta is displayed. Note that for ''AsyncMessage'' the end time is used for the delta calculation.<br>
2210 [[Image:images/SampleDiagramMessageDelta.png]] <br>

=== Default Coolbar and Menu Items ===

The Sequence Diagram View comes with default coolbar and menu items. By default, each sequence diagram shows the following actions:
* Zoom in
* Zoom out
* Reset Zoom Factor
* Selection
* Configure Min Max (drop-down menu only)
* Navigation -> Show the node end (drop-down menu only)
* Navigation -> Show the node start (drop-down menu only)

[[Image:images/DefaultCoolbarMenu.png]]<br>

=== Implementing Optional Callbacks ===

The following chapters describe how to use all supported provider interfaces.

==== Using the Paging Provider Interface ====

For scalability reasons, the paging provider interfaces exist to limit the number of messages displayed in the Sequence Diagram View at a time. For that, two interfaces exist: the basic paging provider and the advanced paging provider. When using the basic paging interface, actions for traversing page by page through the sequence diagram of a trace will be provided.
<br>
To use the basic paging provider, first the interface methods of the ''ISDPagingProvider'' have to be implemented by a class (i.e. ''hasNextPage()'', ''hasPrevPage()'', ''nextPage()'', ''prevPage()'', ''firstPage()'' and ''lastPage()''). Typically, this is implemented in the loader class. Secondly, the provider has to be set in the Sequence Diagram View. This is done in the ''setViewer()'' method of the loader class. Lastly, the paging provider has to be removed from the view when the ''dispose()'' method of the loader class is called.

<pre>
public class SampleLoader implements IUml2SDLoader, ISDPagingProvider {
    //...
    private int page = 0;

    @Override
    public void dispose() {
        if (fSdView != null) {
            fSdView.resetProviders();
        }
    }

    @Override
    public void setViewer(SDView arg0) {
        fSdView = arg0;
        fSdView.setSDPagingProvider(this);
        createFrame();
    }

    private void createSecondFrame() {
        Frame testFrame = new Frame();
        testFrame.setName("SecondFrame");
        Lifeline lifeline = new Lifeline();
        lifeline.setName("LifeLine 0");
        testFrame.addLifeLine(lifeline);
        lifeline = new Lifeline();
        lifeline.setName("LifeLine 1");
        testFrame.addLifeLine(lifeline);
        for (int i = 1; i < 5; i++) {
            SyncMessage message = new SyncMessage();
            message.autoSetStartLifeline(testFrame.getLifeline(0));
            message.autoSetEndLifeline(testFrame.getLifeline(0));
            message.setName((new StringBuilder("Message ")).append(i).toString());
            testFrame.addMessage(message);

            SyncMessageReturn messageReturn = new SyncMessageReturn();
            messageReturn.autoSetStartLifeline(testFrame.getLifeline(0));
            messageReturn.autoSetEndLifeline(testFrame.getLifeline(0));

            testFrame.addMessage(messageReturn);
            messageReturn.setName((new StringBuilder("Message return ")).append(i).toString());
            ExecutionOccurrence occ = new ExecutionOccurrence();
            occ.setStartOccurrence(testFrame.getSyncMessage(i - 1).getEventOccurrence());
            occ.setEndOccurrence(testFrame.getSyncMessageReturn(i - 1).getEventOccurrence());
            testFrame.getLifeline(0).addExecution(occ);
        }
        fSdView.setFrame(testFrame);
    }

    @Override
    public boolean hasNextPage() {
        return page == 0;
    }

    @Override
    public boolean hasPrevPage() {
        return page == 1;
    }

    @Override
    public void nextPage() {
        page = 1;
        createSecondFrame();
    }

    @Override
    public void prevPage() {
        page = 0;
        createFrame();
    }

    @Override
    public void firstPage() {
        page = 0;
        createFrame();
    }

    @Override
    public void lastPage() {
        page = 1;
        createSecondFrame();
    }
    //...
}

</pre>

When running the example application, new actions will be shown in the coolbar and the coolbar menu. <br>

[[Image:images/PageProviderAdded.png]]

<br><br>
To use the advanced paging provider, the interface ''ISDAdvancedPagingProvider'' has to be implemented. It extends the basic paging provider. The methods ''currentPage()'', ''pagesCount()'' and ''pageNumberChanged()'' have to be implemented in addition.
<br>

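Building on the two-page example above, the additional methods can be sketched as follows. This is an illustration only: the local ''AdvancedPagingProvider'' interface below merely mirrors the three methods mentioned, it is not the framework's ''ISDAdvancedPagingProvider''.

```java
// Illustration only: a local stand-in mirroring the methods of the
// framework's ISDAdvancedPagingProvider that are discussed above.
interface AdvancedPagingProvider {
    int currentPage();
    int pagesCount();
    void pageNumberChanged(int pageNumber);
}

class AdvancedPagingSketch implements AdvancedPagingProvider {
    private final int fNumPages;
    private int fPage = 0;

    AdvancedPagingSketch(int numPages) {
        fNumPages = numPages;
    }

    @Override
    public int currentPage() {
        return fPage;
    }

    @Override
    public int pagesCount() {
        return fNumPages;
    }

    @Override
    public void pageNumberChanged(int pageNumber) {
        // Clamp to the valid range; a real loader would then rebuild the
        // frame for the new page (compare createFrame()/createSecondFrame()).
        fPage = Math.max(0, Math.min(fNumPages - 1, pageNumber));
    }
}
```

With these three methods the view can offer direct navigation to an arbitrary page instead of only next/previous traversal.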
==== Using the Find Provider Interface ====

For finding nodes in a sequence diagram, two interfaces exist: one for basic finding and one for extended finding. The basic find comes with a dialog box for entering find criteria as regular expressions. These criteria are used to execute the find. Find criteria are persisted in the Eclipse workspace.
<br>
For the extended find provider interface, a ''org.eclipse.jface.action.Action'' class has to be provided. The actual find handling has to be implemented and triggered by the action.
<br>
Only one at a time can be active. If the extended find provider is defined, it supersedes the basic find provider.
<br>
To use the basic find provider, first the interface methods of the ''ISDFindProvider'' have to be implemented by a class. Typically, this is implemented in the loader class. Add the ''ISDFindProvider'' to the list of implemented interfaces, implement the methods ''find()'' and ''cancel()'', and set the provider in the ''setViewer()'' method as well as remove the provider in the ''dispose()'' method of the loader class. Please note that the ''ISDFindProvider'' extends the interface ''ISDGraphNodeSupporter'', whose methods (''isNodeSupported()'' and ''getNodeName()'') have to be implemented, too. The following shows an example implementation. Please note that only searching for lifelines and sync messages is supported. The find itself will always find only the first occurrence of the pattern to match.

<pre>
public class SampleLoader implements IUml2SDLoader, ISDPagingProvider, ISDFindProvider {

    //...
    @Override
    public void dispose() {
        if (fSdView != null) {
            fSdView.resetProviders();
        }
    }

    @Override
    public void setViewer(SDView arg0) {
        fSdView = arg0;
        fSdView.setSDPagingProvider(this);
        fSdView.setSDFindProvider(this);
        createFrame();
    }

    @Override
    public boolean isNodeSupported(int nodeType) {
        switch (nodeType) {
        case ISDGraphNodeSupporter.LIFELINE:
        case ISDGraphNodeSupporter.SYNCMESSAGE:
            return true;

        default:
            break;
        }
        return false;
    }

    @Override
    public String getNodeName(int nodeType, String loaderClassName) {
        switch (nodeType) {
        case ISDGraphNodeSupporter.LIFELINE:
            return "Lifeline";
        case ISDGraphNodeSupporter.SYNCMESSAGE:
            return "Sync Message";
        }
        return "";
    }

    @Override
    public boolean find(Criteria criteria) {
        Frame frame = fSdView.getFrame();
        if (criteria.isLifeLineSelected()) {
            for (int i = 0; i < frame.lifeLinesCount(); i++) {
                if (criteria.matches(frame.getLifeline(i).getName())) {
                    fSdView.getSDWidget().moveTo(frame.getLifeline(i));
                    return true;
                }
            }
        }
        if (criteria.isSyncMessageSelected()) {
            for (int i = 0; i < frame.syncMessageCount(); i++) {
                if (criteria.matches(frame.getSyncMessage(i).getName())) {
                    fSdView.getSDWidget().moveTo(frame.getSyncMessage(i));
                    return true;
                }
            }
        }
        return false;
    }

    @Override
    public void cancel() {
        // reset find parameters
    }
    //...
}
</pre>

When running the example application, the find action will be shown in the coolbar and the coolbar menu. <br>
[[Image:images/FindProviderAdded.png]]

To find a sequence diagram node, press the find button of the coolbar (see above). A new dialog box will open. Enter a regular expression in the ''Matching String'' text box, select the node types (e.g. Sync Message) and press '''Find'''. If a match is found, the corresponding node will be selected. If not, the dialog box will indicate that nothing was found. <br>
[[Image:images/FindDialog.png]]<br>

Note that the find dialog can also be opened with the keyboard shortcut CTRL+F.

==== Using the Filter Provider Interface ====

For filtering of sequence diagram elements, two interfaces exist: one for basic filtering and one for extended filtering. The basic filtering comes with two dialog boxes: one for entering filter criteria as regular expressions and one for selecting the filters to be used. Multiple filters can be active at a time. Filter criteria are persisted in the Eclipse workspace.
<br>
To use the basic filter provider, first the interface method of the ''ISDFilterProvider'' has to be implemented by a class. Typically, this is implemented in the loader class. Add the ''ISDFilterProvider'' to the list of implemented interfaces, implement the method ''filter()'' and set the provider in the ''setViewer()'' method as well as remove the provider in the ''dispose()'' method of the loader class. Please note that the ''ISDFilterProvider'' extends the interface ''ISDGraphNodeSupporter'', whose methods (''isNodeSupported()'' and ''getNodeName()'') have to be implemented, too. <br>
Note that the skeleton below does not implement an actual filter algorithm; ''filter()'' simply returns ''false''.
<br>

<pre>
public class SampleLoader implements IUml2SDLoader, ISDPagingProvider, ISDFindProvider, ISDFilterProvider {

    //...
    @Override
    public void dispose() {
        if (fSdView != null) {
            fSdView.resetProviders();
        }
    }

    @Override
    public void setViewer(SDView arg0) {
        fSdView = arg0;
        fSdView.setSDPagingProvider(this);
        fSdView.setSDFindProvider(this);
        fSdView.setSDFilterProvider(this);
        createFrame();
    }

    @Override
    public boolean filter(List<?> list) {
        return false;
    }
    //...
}
</pre>
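The framework does not prescribe a particular filter algorithm. Purely as an illustration (this is not the reference implementation), deciding which messages a ''filter()'' implementation should hide, given regular-expression criteria, could boil down to a helper like this:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

// Illustration only: decides which message names a hypothetical filter()
// implementation would hide, given regular-expression filter criteria.
class FilterSketch {
    private final List<Pattern> fCriteria = new ArrayList<>();

    FilterSketch(List<String> regexes) {
        for (String regex : regexes) {
            fCriteria.add(Pattern.compile(regex));
        }
    }

    /** Returns true if the whole message name matches any active criterion. */
    boolean isHidden(String messageName) {
        for (Pattern p : fCriteria) {
            if (p.matcher(messageName).matches()) {
                return true;
            }
        }
        return false;
    }
}
```

A loader would then rebuild the frame, skipping every message for which ''isHidden()'' returns true.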

When running the example application, the filter action will be shown in the coolbar menu. <br>
[[Image:images/HidePatternsMenuItem.png]]

To filter, select '''Hide Patterns...''' from the coolbar menu. A new dialog box will open. <br>
[[Image:images/DialogHidePatterns.png]]

To add a new filter, press '''Add...'''. A new dialog box will open. Enter a regular expression in the ''Matching String'' text box, select the node types (e.g. Sync Message) and press '''Create'''. <br>
[[Image:images/DialogHidePatterns.png]] <br>

Back in the Hide Patterns dialog, select one or more filters and press '''OK'''.

To use the extended filter provider, the interface ''ISDExtendedFilterProvider'' has to be implemented. It will provide a ''org.eclipse.jface.action.Action'' class containing the actual filter handling and filter algorithm.

==== Using the Extended Action Bar Provider Interface ====

The extended action bar provider can be used to add customized actions to the Sequence Diagram View.
To use the extended action bar provider, first the interface method of the ''ISDExtendedActionBarProvider'' has to be implemented by a class. Typically, this is implemented in the loader class. Add the ''ISDExtendedActionBarProvider'' to the list of implemented interfaces, implement the method ''supplementCoolbarContent()'' and set the provider in the ''setViewer()'' method as well as remove the provider in the ''dispose()'' method of the loader class. <br>

<pre>
public class SampleLoader implements IUml2SDLoader, ISDPagingProvider, ISDFindProvider, ISDFilterProvider, ISDExtendedActionBarProvider {
    //...

    @Override
    public void dispose() {
        if (fSdView != null) {
            fSdView.resetProviders();
        }
    }

    @Override
    public void setViewer(SDView arg0) {
        fSdView = arg0;
        fSdView.setSDPagingProvider(this);
        fSdView.setSDFindProvider(this);
        fSdView.setSDFilterProvider(this);
        fSdView.setSDExtendedActionBarProvider(this);
        createFrame();
    }

    @Override
    public void supplementCoolbarContent(IActionBars iactionbars) {
        Action action = new Action("Refresh") {
            @Override
            public void run() {
                System.out.println("Refreshing...");
            }
        };
        iactionbars.getMenuManager().add(action);
        iactionbars.getToolBarManager().add(action);
    }
    //...
}
</pre>

When running the example application, all new actions will be added to the coolbar and coolbar menu according to the implementation of ''supplementCoolbarContent()''.<br>
For the example above, the coolbar and coolbar menu will look as follows.

[[Image:images/SupplCoolbar.png]]

==== Using the Properties Provider Interface ====

This interface can be used to provide property information. A property provider which returns an ''IPropertySheetEntry'' (see plug-in ''org.eclipse.ui.views'') has to be implemented and set in the Sequence Diagram View. <br>

To use the property provider, first the interface method of the ''ISDPropertiesProvider'' has to be implemented by a class. Typically, this is implemented in the loader class. Add the ''ISDPropertiesProvider'' to the list of implemented interfaces, implement the method ''getPropertySheetEntry()'' and set the provider in the ''setViewer()'' method as well as remove the provider in the ''dispose()'' method of the loader class. Please note that no example is provided here.

Please refer to the following Eclipse articles for more information about properties and tabbed properties.
*[http://www.eclipse.org/articles/Article-Properties-View/properties-view.html Take control of your properties]
*[http://www.eclipse.org/articles/Article-Tabbed-Properties/tabbed_properties_view.html The Eclipse Tabbed Properties View]

==== Using the Collapse Provider Interface ====

This interface can be used to define a provider whose responsibility is to collapse two selected lifelines. This can be used to hide a pair of lifelines.

To use the collapse provider, first the interface method of the ''ISDCollapseProvider'' has to be implemented by a class. Typically, this is implemented in the loader class. Add the ''ISDCollapseProvider'' to the list of implemented interfaces, implement the method ''collapseTwoLifelines()'' and set the provider in the ''setViewer()'' method as well as remove the provider in the ''dispose()'' method of the loader class. Please note that no example is provided here.

==== Using the Selection Provider Service ====

The Sequence Diagram View comes with a built-in selection provider service to which listeners can be added. To use the selection provider service, the interface ''ISelectionListener'' of plug-in ''org.eclipse.ui'' has to be implemented. Typically, this is implemented in the loader class. First, add the ''ISelectionListener'' interface to the list of implemented interfaces, implement the method ''selectionChanged()'' and register the listener in the ''setViewer()'' method as well as remove the listener in the ''dispose()'' method of the loader class.

<pre>
public class SampleLoader implements IUml2SDLoader, ISDPagingProvider, ISDFindProvider, ISDFilterProvider, ISDExtendedActionBarProvider, ISelectionListener {

    //...
    @Override
    public void dispose() {
        if (fSdView != null) {
            PlatformUI.getWorkbench().getActiveWorkbenchWindow().getSelectionService().removePostSelectionListener(this);
            fSdView.resetProviders();
        }
    }

    @Override
    public String getTitleString() {
        return "Sample Diagram";
    }

    @Override
    public void setViewer(SDView arg0) {
        fSdView = arg0;
        PlatformUI.getWorkbench().getActiveWorkbenchWindow().getSelectionService().addPostSelectionListener(this);
        fSdView.setSDPagingProvider(this);
        fSdView.setSDFindProvider(this);
        fSdView.setSDFilterProvider(this);
        fSdView.setSDExtendedActionBarProvider(this);

        createFrame();
    }

    @Override
    public void selectionChanged(IWorkbenchPart part, ISelection selection) {
        ISelection sel = PlatformUI.getWorkbench().getActiveWorkbenchWindow().getSelectionService().getSelection();
        if (sel != null && (sel instanceof StructuredSelection)) {
            StructuredSelection stSel = (StructuredSelection) sel;
            if (stSel.getFirstElement() instanceof BaseMessage) {
                BaseMessage syncMsg = ((BaseMessage) stSel.getFirstElement());
                System.out.println("Message '" + syncMsg.getName() + "' selected.");
            }
        }
    }

    //...
}
</pre>

=== Printing a Sequence Diagram ===

To print the whole sequence diagram or only parts of it, select the Sequence Diagram View and select '''File -> Print...''' or type the key combination ''CTRL+P''. A new print dialog will open. <br>

[[Image:images/PrintDialog.png]] <br>

Fill in all the relevant information, select '''Printer...''' to choose the printer and then press '''OK'''.

=== Using one Sequence Diagram View with Multiple Loaders ===

A Sequence Diagram View definition can be used with multiple sequence diagram loaders. However, the active loader to be used when opening the view has to be set. For this, define an Eclipse action or command and assign the current loader to the view. Here is a code snippet for that:

<pre>
public class OpenSDView extends AbstractHandler {
    @Override
    public Object execute(ExecutionEvent event) throws ExecutionException {
        try {
            IWorkbenchPage persp = TmfUiPlugin.getDefault().getWorkbench().getActiveWorkbenchWindow().getActivePage();
            SDView view = (SDView) persp.showView("org.eclipse.linuxtools.ust.examples.ui.componentinteraction");
            LoadersManager.getLoadersManager().createLoader("org.eclipse.linuxtools.tmf.ui.views.uml2sd.impl.TmfUml2SDSyncLoader", view);
        } catch (PartInitException e) {
            throw new ExecutionException("PartInitException caught: ", e);
        }
        return null;
    }
}
</pre>

=== Downloading the Tutorial ===

Use the following link to download the source code of the tutorial: [http://wiki.eclipse.org/images/e/e6/SamplePlugin.zip Plug-in of Tutorial].

== Integration of Tracing and Monitoring Framework with Sequence Diagram Framework ==

In the previous sections the Sequence Diagram Framework has been described and a tutorial was provided. In the following sections the integration of the Sequence Diagram Framework with other features of TMF will be described. Together they form a powerful framework to analyze and visualize the content of traces. The integration is explained using the reference implementation of a UML2 sequence diagram loader which is part of the TMF UI delivery. The reference implementation can be used as is, can be sub-classed, or can simply serve as an example for other sequence diagram loaders.

=== Reference Implementation ===

A Sequence Diagram View Extension is defined in the plug-in TMF UI as well as a uml2SDLoader Extension with the reference loader.

[[Image:images/ReferenceExtensions.png]]

=== Used Sequence Diagram Features ===

Besides the default features of the Sequence Diagram Framework, the reference implementation uses the following additional features:
*Advanced paging
*Basic finding
*Basic filtering
*Selection Service

==== Advanced paging ====

The reference loader implements the ''ISDAdvancedPagingProvider'' interface. Please refer to section [[#Using the Paging Provider Interface | Using the Paging Provider Interface]] for more details about the advanced paging feature.

==== Basic finding ====

The reference loader implements the ''ISDFindProvider'' interface. The user can search for ''Lifelines'' and ''Interactions''. The find is done across pages. If the expression to match is not on the current page, a new thread is started to search on other pages. If the expression is found, the corresponding page is shown and the searched item is selected. If it is not found, a message is displayed in the ''Progress View'' of Eclipse. Please refer to section [[#Using the Find Provider Interface | Using the Find Provider Interface]] for more details about the basic find feature.

==== Basic filtering ====

The reference loader implements the ''ISDFilterProvider'' interface. The user can filter on ''Lifelines'' and ''Interactions''. Please refer to section [[#Using the Filter Provider Interface | Using the Filter Provider Interface]] for more details about the basic filter feature.

==== Selection Service ====

The reference loader implements the ''ISelectionListener'' interface. When an interaction is selected, a ''TmfTimeSynchSignal'' is broadcast (see [[#TMF Signal Framework | TMF Signal Framework]]). Please also refer to section [[#Using the Selection Provider Service | Using the Selection Provider Service]] for more details about the selection service.

=== Used TMF Features ===

The reference implementation uses the following features of TMF:
*TMF Experiment and Trace for accessing traces
*Event Request Framework to request TMF events from the experiment and respective traces
*Signal Framework for broadcasting and receiving TMF signals for synchronization purposes

==== TMF Experiment and Trace for accessing traces ====

The reference loader uses TMF Experiments to access traces and to request data from the traces.

==== TMF Event Request Framework ====

The reference loader uses the TMF Event Request Framework to request events from the experiment and its traces.

When opening a trace (which is triggered by the signal ''TmfExperimentSelected'') or when opening the Sequence Diagram View after a trace had been opened previously, a TMF background request is initiated to index the trace and to fill in the first page of the sequence diagram. The purpose of the indexing is to store time ranges for pages, with 10000 messages per page. This makes it possible to move quickly to a certain page in a trace without having to re-parse it from the beginning. This request is called the indexing request.

When switching pages, a TMF foreground event request is initiated to retrieve the corresponding events from the experiment. It uses the time range stored in the index for the respective page.

A third type of event request is issued for finding specific data across pages.
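The indexing scheme described above can be illustrated with a small self-contained sketch (not the reference code): it records a time range for every block of messages, here with a configurable page size instead of the fixed 10000, and later answers which page holds a given timestamp.

```java
import java.util.ArrayList;
import java.util.List;

// Illustration of the paging index described above: every pageSize-th
// message closes a page and records that page's [start, end] time range.
class PageIndexSketch {
    private final int fPageSize;
    private final List<long[]> fRanges = new ArrayList<>(); // {startTime, endTime} per page
    private long fPageStart = -1;
    private long fLastTime = -1;
    private int fCount = 0;

    PageIndexSketch(int pageSize) {
        fPageSize = pageSize;
    }

    /** Called once per sequence diagram message, in timestamp order. */
    void addMessage(long timestamp) {
        if (fCount == 0) {
            fPageStart = timestamp;
        }
        fLastTime = timestamp;
        if (++fCount == fPageSize) {
            fRanges.add(new long[] { fPageStart, fLastTime });
            fCount = 0;
        }
    }

    /** Closes a partially filled last page. */
    void finish() {
        if (fCount > 0) {
            fRanges.add(new long[] { fPageStart, fLastTime });
            fCount = 0;
        }
    }

    int pagesCount() {
        return fRanges.size();
    }

    /** Returns the page whose time range contains the timestamp, or -1. */
    int pageOf(long timestamp) {
        for (int i = 0; i < fRanges.size(); i++) {
            if (timestamp >= fRanges.get(i)[0] && timestamp <= fRanges.get(i)[1]) {
                return i;
            }
        }
        return -1;
    }
}
```

A page switch then only needs the stored time range to issue a foreground event request for exactly that page.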

==== TMF Signal Framework ====

The reference loader extends the class ''TmfComponent''. By doing so, the loader is registered as a TMF signal handler for sending and receiving TMF signals. The loader implements signal handlers for the following TMF signals:
*''TmfTraceSelectedSignal''
This signal indicates that a trace or experiment was selected. When receiving this signal, the indexing request is initiated and the first page is displayed after receiving the relevant information.
*''TmfTraceClosedSignal''
This signal indicates that a trace or experiment was closed. When receiving this signal, the loader resets its data and a blank page is loaded in the Sequence Diagram View.
*''TmfTimeSynchSignal''
This signal is used to indicate that a new time or time range has been selected. It contains a begin and end time. If a single time is selected, then the begin and end time are the same. When receiving this signal, the corresponding message matching the begin time is selected in the Sequence Diagram View. If necessary, the page is changed.
*''TmfRangeSynchSignal''
This signal indicates that a new time range is in focus. When receiving this signal, the loader loads the page which corresponds to the start time of the time range signal. The message with the start time will be in focus.

Besides acting on received signals, the reference loader also sends signals. A ''TmfTimeSynchSignal'' is broadcast with the timestamp of the message which was selected in the Sequence Diagram View. A ''TmfRangeSynchSignal'' is sent when a page is changed in the Sequence Diagram View. The start timestamp of the time range sent is the timestamp of the first message. The end timestamp sent is the timestamp of the first message plus the current time range window. The current time range window is the time window that was indicated in the last received ''TmfRangeSynchSignal''.

=== Supported Traces ===

The reference implementation is able to analyze traces from a single component that traces the interaction with other components. For example, a server node could have trace information about its interaction with client nodes. The server node could be traced and then analyzed using TMF, and the Sequence Diagram Framework of TMF could be used to visualize the interactions with the client nodes.<br>

Note that combined traces of multiple components that contain the trace information about the same interactions are not supported in the reference implementation!

=== Trace Format ===

The reference implementation in class ''TmfUml2SDSyncLoader'' in package ''org.eclipse.linuxtools.tmf.ui.views.uml2sd.impl'' analyzes events of type ''ITmfEvent'' and creates events of type ''ITmfSyncSequenceDiagramEvent'' if the ''ITmfEvent'' contains all relevant information. The parsing algorithm looks as follows:

<pre>
/**
 * @param tmfEvent Event to parse for sequence diagram event details
 * @return sequence diagram event if details are available else null
 */
protected ITmfSyncSequenceDiagramEvent getSequenceDiagramEvent(ITmfEvent tmfEvent){
    //type = .*RECEIVE.* or .*SEND.*
    //content = sender:<sender name>:receiver:<receiver name>,signal:<signal name>
    String eventType = tmfEvent.getType().toString();
    if (eventType.contains(Messages.TmfUml2SDSyncLoader_EventTypeSend) || eventType.contains(Messages.TmfUml2SDSyncLoader_EventTypeReceive)) {
        Object sender = tmfEvent.getContent().getField(Messages.TmfUml2SDSyncLoader_FieldSender);
        Object receiver = tmfEvent.getContent().getField(Messages.TmfUml2SDSyncLoader_FieldReceiver);
        Object name = tmfEvent.getContent().getField(Messages.TmfUml2SDSyncLoader_FieldSignal);
        if ((sender instanceof ITmfEventField) && (receiver instanceof ITmfEventField) && (name instanceof ITmfEventField)) {
            ITmfSyncSequenceDiagramEvent sdEvent = new TmfSyncSequenceDiagramEvent(tmfEvent,
                    ((ITmfEventField) sender).getValue().toString(),
                    ((ITmfEventField) receiver).getValue().toString(),
                    ((ITmfEventField) name).getValue().toString());

            return sdEvent;
        }
    }
    return null;
}
</pre>

The analysis looks for event type strings containing ''SEND'' and ''RECEIVE''. If the event type matches these keywords, the analyzer will look for the strings ''sender'', ''receiver'' and ''signal'' in the event fields of type ''ITmfEventField''. If all the data is found, a sequence diagram event can be created using this information. Note that Sync Messages are assumed, which means start and end time are the same.

=== How to use the Reference Implementation ===

An example CTF (Common Trace Format) trace is provided that contains trace events with sequence diagram information. To download the reference trace, use the following link: [https://wiki.eclipse.org/images/3/35/ReferenceTrace.zip Reference Trace].

Run an Eclipse application with TMF 3.0 or later installed. To open the Reference Sequence Diagram View, select '''Window -> Show View -> Other... -> TMF -> Sequence Diagram'''. <br>
[[Image:images/ShowTmfSDView.png]]<br>

A blank Sequence Diagram View will open.

Then import the reference trace into the '''Project Explorer''' using the '''Import Trace Package...''' menu option.<br>
[[Image:images/ImportTracePackage.png]]

Next, open the trace by double-clicking on the trace element in the '''Project Explorer'''. The trace will be opened and the Sequence Diagram View will be filled. <br>
[[Image:images/ReferenceSeqDiagram.png]]<br>

Now the reference implementation can be explored. To demonstrate the view features, try the following things:
*Select a message in the Sequence Diagram. As a result, the corresponding event will be selected in the Events View.
*Select an event in the Events View. As a result, the corresponding message in the Sequence Diagram View will be selected. If necessary, the page will be changed.
*In the Events View, press the ''End'' key. As a result, the Sequence Diagram View will jump to the last page.
*In the Events View, press the ''Home'' key. As a result, the Sequence Diagram View will jump to the first page.
*In the Sequence Diagram View, select the find button. Enter the expression '''REGISTER.*''', select '''Search for Interaction''' and press '''Find'''. As a result, the corresponding message will be selected in the Sequence Diagram and the corresponding event will be selected in the Events View. Press '''Find''' again and the next occurrence will be selected. Since the second occurrence is on a different page than the first, the corresponding page will be loaded.
* In the Sequence Diagram View, select menu item '''Hide Patterns...'''. Add the filter '''BALL.*''' for '''Interaction''' only and select '''OK'''. As a result, all messages with names ''BALL_REQUEST'' and ''BALL_REPLY'' will be hidden. To remove the filter, select menu item '''Hide Patterns...''', deselect the corresponding filter and press '''OK'''. All the messages will be shown again.<br>

=== Extending the Reference Loader ===

In some cases it might be necessary to change the implementation of the analysis of each ''TmfEvent'' for the generation of ''Sequence Diagram Events''. For that, just extend the class ''TmfUml2SDSyncLoader'' and override the method ''protected ITmfSyncSequenceDiagramEvent getSequenceDiagramEvent(ITmfEvent tmfEvent)'' with your own implementation.

= CTF Parser =

== CTF Format ==
CTF is a format used to store traces. It is self-describing, binary and made to be easy to write to.
Before going further, note that the full specification of the CTF file format can be found at http://www.efficios.com/ .

For the purpose of the reader, some basic description will be given here. A CTF trace is typically made of several files, all in the same folder.

These files can be split into two types:
* Metadata
* Event streams

=== Metadata ===
The metadata is either raw text or packetized text. It is encoded in TSDL (Trace Stream Description Language) and contains a description of the types of data in the event streams. It can grow over time if new events are added to a trace, but it will never overwrite what is already there.

=== Event Streams ===
The event streams consist of one file per stream per CPU. These streams are binary and packet based. The streams store events and event information (i.e. lost events). The event data is stored in headers and field payloads.

So if you have two streams (channels) "channel1" and "channel2" and 4 cores, you will have the following files in your trace directory: "channel1_0", "channel1_1", "channel1_2", "channel1_3", "channel2_0", "channel2_1", "channel2_2" and "channel2_3".

2766
2767 == Reading a trace ==
In order to read a CTF trace, two steps are required:
* The metadata must be read to know how to read the events.
* The events must be read.
2771
The metadata is written in a subset of the C language called TSDL. To read it, it is first depacketized (if it is not in plain text), then the raw text is parsed by an ANTLR grammar. The parsing is done in two phases. There is a lexer (CTFLexer.g) which separates the metadata text into tokens. The tokens are then pattern matched using the parser (CTFParser.g) to form an AST. This AST is walked through using "IOStructGen.java" to populate streams and traces in the trace parent object.
2773
2774 When the metadata is loaded and read, the trace object will be populated with 3 items:
2775 * the event definitions available per stream: a definition is a description of the datatype.
* the event declarations available per stream: this saves creating declarations on a per-event basis. They will all be created in advance, just not populated.
2777 * the beginning of a packet index.
2778
Now all the trace readers for the event streams have everything they need to read a trace. They will each point to one file, and read the file packet by packet. Every time the trace reader changes packets, the index is updated with the new packet's information. The readers are in a priority queue sorted by timestamp. This ensures that the events are read in sequential order. They are also sorted by file name so that, in the eventuality that two events occur at the same time, they stay in the same order.
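
The ordering just described can be sketched with plain Java. This is a conceptual illustration only, not the actual CTF reader code; the ''StreamReaderStub'' class here is a hypothetical stand-in for a per-file stream reader positioned on its current event.

```java
import java.util.Comparator;
import java.util.PriorityQueue;

/* Hypothetical stand-in for a per-file stream reader. */
class StreamReaderStub {
    final String fileName;
    final long currentTimestamp; // timestamp of the reader's current event

    StreamReaderStub(String fileName, long currentTimestamp) {
        this.fileName = fileName;
        this.currentTimestamp = currentTimestamp;
    }
}

public class ReaderQueue {
    /* Returns the file name of the reader whose event should be read first:
     * readers are ordered by current timestamp, then by file name so that
     * simultaneous events always come out in the same order. */
    public static String firstOf(StreamReaderStub... readers) {
        PriorityQueue<StreamReaderStub> queue = new PriorityQueue<>(
                Comparator.<StreamReaderStub> comparingLong(r -> r.currentTimestamp)
                        .thenComparing(r -> r.fileName));
        for (StreamReaderStub reader : readers) {
            queue.add(reader);
        }
        return queue.peek().fileName;
    }
}
```

After each event is read, the real reader re-inserts itself into the queue with its new current timestamp, which keeps the merged stream sorted.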
2780
2781 == Seeking in a trace ==
The reason for maintaining an index is to speed up seeks. When a user wishes to seek to a certain timestamp, they just have to find the index entry that contains the timestamp, go there, and iterate within that packet until the proper event is found. This reduces the search time by a factor on the order of 8000 for a 256 kB packet size (the kernel default).
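
The index lookup itself can be pictured as a binary search over packet start timestamps, followed by a linear scan inside the single matching packet. The following is a simplified, self-contained sketch; the class and method names are hypothetical and do not match the actual index implementation.

```java
/* Hypothetical sketch of an index-based seek: find the packet whose time
 * range contains the requested timestamp; only that packet's events then
 * need to be scanned linearly. */
public class PacketIndex {

    /**
     * @param packetStartTimes sorted start timestamps, one per packet
     * @param timestamp the timestamp to seek to
     * @return the index of the packet containing the timestamp
     */
    public static int findPacket(long[] packetStartTimes, long timestamp) {
        int low = 0;
        int high = packetStartTimes.length - 1;
        while (low < high) {
            int mid = (low + high + 1) >>> 1;
            if (packetStartTimes[mid] <= timestamp) {
                low = mid; // this packet starts at or before the timestamp
            } else {
                high = mid - 1;
            }
        }
        return low;
    }
}
```

Scanning only the events of one packet instead of the whole stream is what produces the large speedup mentioned above.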
2783
2784 == Interfacing to TMF ==
2785 The trace can be read easily now but the data is still awkward to extract.
2786
2787 === CtfLocation ===
A location in a given trace; it is currently the timestamp of an event and the index of the event. The index indicates, for a given timestamp, whether it is the first, second or nth element.
2789
2790 === CtfTmfTrace ===
The CtfTmfTrace is a wrapper for the standard CTF trace that allows it to perform the following actions:
* '''initTrace()''' creates a trace
* '''validateTrace()''' checks whether the trace is a CTF trace
* '''getLocationRatio()''' tells how far into the trace a given location is
* '''seekEvent()''' sets the cursor to a certain point in the trace
* '''readNextEvent()''' reads the next event and then advances the cursor
* '''getTraceProperties()''' gets the 'env' structures of the metadata
2798
2799 === CtfIterator ===
The CtfIterator is a wrapper around the CTF file reader. It behaves like an iterator on a trace. However, it contains a file pointer and thus cannot be duplicated too often, or the system will run out of file handles. To alleviate the situation, a pool of iterators is created at the very beginning and stored in the CtfTmfTrace. They can be retrieved by calling the getIterator() method.
2801
2802 === CtfIteratorManager ===
Since each CtfIterator holds a file reader, the OS will run out of handles if too many iterators are spawned. The solution is to use the iterator manager, which allows the user to get an iterator. If there is a context at the requested position, the manager returns that one; if not, a context is selected at random and set to the correct location. Using random replacement minimizes contention, as it settles quickly at a new balance point.
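
The pooling and random-replacement idea can be sketched as follows. This is a conceptual model, not the actual CtfIteratorManager code; a String stands in for a real iterator and the seek itself is elided.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Random;

/* Conceptual sketch of a bounded iterator pool with random replacement.
 * A String stands in for a real iterator; the actual seek is elided. */
public class IteratorPool {
    private final int fCapacity;
    private final Map<Long, String> fPool = new HashMap<>(); // rank -> iterator
    private final Random fRandom = new Random();

    public IteratorPool(int capacity) {
        fCapacity = capacity;
    }

    /* Returns an iterator positioned at the requested rank, evicting a
     * random entry when the pool is full. */
    public String getIterator(long rank) {
        String iterator = fPool.get(rank);
        if (iterator != null) {
            return iterator; // an iterator is already at that position, reuse it
        }
        if (fPool.size() >= fCapacity) {
            /* Evict a random entry: random replacement minimizes contention
             * and settles quickly at a new balance point. */
            Long[] keys = fPool.keySet().toArray(new Long[0]);
            fPool.remove(keys[fRandom.nextInt(keys.length)]);
        }
        iterator = "iterator@" + rank; // stand-in for seeking a real iterator
        fPool.put(rank, iterator);
        return iterator;
    }

    public int size() {
        return fPool.size();
    }
}
```

The key property is that the number of open file handles stays bounded by the pool capacity no matter how many positions are requested.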
2804
2805 === CtfTmfContext ===
The CtfTmfContext implements the ITmfContext interface. It is the CTF equivalent of TmfContext. It has a CtfLocation and points to an iterator in the CtfTmfTrace iterator pool, as well as to the parent trace. It is made to be cloned easily without affecting system resources much. Contexts behave much like C file pointers (FILE*), but they can be copied until one runs out of RAM.
2807
2808 === CtfTmfTimestamp ===
The CtfTmfTimestamp takes a CTF time (normally a long int) and formats it as a TmfTimestamp, allowing it to be compared to other timestamps. The time is stored with the UTC offset already applied. It also features a simple toString() function that can output the time in more human-readable ways: "yyyy/mm/dd/hh:mm:ss.nnnnnnnnn ns" for example. An additional feature is the getDelta() function that allows two timestamps to be subtracted, showing the time difference between A and B.
2810
2811 === CtfTmfEvent ===
The CtfTmfEvent is an ITmfEvent that is used to wrap event declarations and event definitions from the CTF side into easier to read and parse chunks of information. It is a final class with final fields, made to be instantiated very often without incurring performance costs. Most of the information is already available. It should be noted that one special type of event, the "lost event", can appear; these are synthetic events that do not exist in the trace. They will not appear in other trace readers such as babeltrace.
2813
2814 === Other ===
There are other helper files that format given events for views; they are simpler and the architecture does not depend on them.
2816
2817 === Limitations ===
For the moment, live trace reading is not supported, as there are no sources of traces to test it on.
2819
2820 = Event matching and trace synchronization =
2821
2822 Event matching consists in taking an event from a trace and linking it to another event in a possibly different trace. The example that comes to mind is matching network packets sent from one traced machine to another traced machine. These matches can be used to synchronize traces.
2823
2824 Trace synchronization consists in taking traces, taken on different machines, with a different time reference, and finding the formula to transform the timestamps of some of the traces, so that they all have the same time reference.
2825
2826 == Event matching interfaces ==
2827
2828 Here's a description of the major parts involved in event matching. These classes are all in the ''org.eclipse.linuxtools.tmf.core.event.matching'' package:
2829
2830 * '''ITmfEventMatching''': Controls the event matching process
2831 * '''ITmfMatchEventDefinition''': Describes how events are matched
2832 * '''IMatchProcessingUnit''': Processes the matched events
2833
2834 == Implementation details and how to extend it ==
2835
2836 === ITmfEventMatching interface and derived classes ===
2837
This interface and its default abstract implementation '''TmfEventMatching''' control the event matching itself. Their only public method is ''matchEvents''. The class needs to manage how to set up the traces, and any initialization or finalization procedures.
2839
The abstract class generates an event request for each trace from which events are matched and waits for the request to complete before calling the one from another trace. The ''handleData'' method from the request calls the ''matchEvent'' method that needs to be implemented in child classes.
2841
Class '''TmfNetworkEventMatching''' is a concrete implementation of this interface. It applies to all use cases where an ''in'' event can be matched with an ''out'' event (''in'' and ''out'' can be the same event, with different data). It creates a '''TmfEventDependency''' between the source and destination events. The dependency is added to the processing unit.
2843
2844 To match events requiring other mechanisms (for instance, a series of events can be matched with another series of events), one would need to implement another class either extending '''TmfEventMatching''' or implementing '''ITmfEventMatching'''. It would most probably also require a new '''ITmfMatchEventDefinition''' implementation.
2845
2846 === ITmfMatchEventDefinition interface and its derived classes ===
2847
2848 These are the classes that describe how to actually match specific events together.
2849
2850 The '''canMatchTrace''' method will tell if a definition is compatible with a given trace.
2851
2852 The '''getUniqueField''' method will return a list of field values that uniquely identify this event and can be used to find a previous event to match with.
2853
2854 Typically, there would be a match definition abstract class/interface per event matching type.
2855
The interface '''ITmfNetworkMatchDefinition''' adds the ''getDirection'' method to indicate whether this event is an ''in'' or ''out'' event to be matched with one from the opposite direction.
2857
As examples, two concrete network match definitions have been implemented in the ''org.eclipse.linuxtools.lttng2.kernel.core.event.matching'' package for two compatible methods of matching TCP packets (see the LTTng User Guide on ''trace synchronization'' for information on those matching methods). Each one tells which events need to be present in the metadata of a CTF trace for this matching method to be applicable. It also returns the field values from each event that will uniquely match two events together.
2859
2860 === IMatchProcessingUnit interface and derived classes ===
2861
While matching events is an exercise in itself, it is what to do with the match that really makes this functionality interesting. This is the job of the '''IMatchProcessingUnit''' interface.
2863
'''TmfEventMatches''' provides a default implementation that only stores the matches to count them. When a new match is obtained, the ''addMatch'' method is called with the match, and the processing unit can do whatever needs to be done with it.
2865
2866 A match processing unit can be an analysis in itself. For example, trace synchronization is done through such a processing unit. One just needs to set the processing unit in the TmfEventMatching constructor.
2867
2868 == Code examples ==
2869
2870 === Using network packets matching in an analysis ===
2871
This example shows how one can create a processing unit inline to create a link between two events. In this example, the code already uses an event request, so there is no need to call the ''matchEvents'' method, which would only create another request.
2873
2874 <pre>
2875 class MyAnalysis extends TmfAbstractAnalysisModule {
2876
2877 private TmfNetworkEventMatching tcpMatching;
2878
2879 ...
2880
    @Override
    protected boolean executeAnalysis(final IProgressMonitor monitor) {

        IMatchProcessingUnit matchProcessing = new IMatchProcessingUnit() {
            @Override
            public void matchingEnded() {
            }

            @Override
            public void init(ITmfTrace[] fTraces) {
            }

            @Override
            public int countMatches() {
                return 0;
            }

            @Override
            public void addMatch(TmfEventDependency match) {
                log.debug("we got a tcp match! " + match.getSourceEvent().getContent() + " " + match.getDestinationEvent().getContent());
                TmfEvent source = match.getSourceEvent();
                TmfEvent destination = match.getDestinationEvent();
                /* Create a link between the two events */
            }
        };

        ITmfTrace[] traces = { getTrace() };
        tcpMatching = new TmfNetworkEventMatching(traces, matchProcessing);
        tcpMatching.initMatching();

        MyEventRequest request = new MyEventRequest(this, 0); /* single trace, number 0 */
        getTrace().sendRequest(request);

        return true;
    }
2913
    public void analyzeEvent(ITmfEvent event) {
2915 ...
2916 tcpMatching.matchEvent(event, 0);
2917 ...
2918 }
2919
2920 ...
2921
2922 }
2923
2924 class MyEventRequest extends TmfEventRequest {
2925
2926 private final MyAnalysis analysis;
2927
2928 MyEventRequest(MyAnalysis analysis, int traceno) {
2929 super(CtfTmfEvent.class,
2930 TmfTimeRange.ETERNITY,
2931 0,
2932 TmfDataRequest.ALL_DATA,
2933 ITmfDataRequest.ExecutionType.FOREGROUND);
2934 this.analysis = analysis;
2935 }
2936
2937 @Override
2938 public void handleData(final ITmfEvent event) {
2939 super.handleData(event);
2940 if (event != null) {
2941 analysis.analyzeEvent(event);
2942 }
2943 }
2944 }
2945 </pre>
2946
2947 === Match network events from UST traces ===
2948
Suppose a client-server application is instrumented using LTTng-UST. Traces are collected on the server and on some clients, on different machines. The traces can be synchronized using network event matching.
2950
2951 The following metadata describes the events:
2952
2953 <pre>
2954 event {
2955 name = "myapp:send";
2956 id = 0;
2957 stream_id = 0;
2958 loglevel = 13;
2959 fields := struct {
2960 integer { size = 32; align = 8; signed = 1; encoding = none; base = 10; } _sendto;
2961 integer { size = 64; align = 8; signed = 1; encoding = none; base = 10; } _messageid;
2962 integer { size = 64; align = 8; signed = 1; encoding = none; base = 10; } _data;
2963 };
2964 };
2965
2966 event {
2967 name = "myapp:receive";
2968 id = 1;
2969 stream_id = 0;
2970 loglevel = 13;
2971 fields := struct {
2972 integer { size = 32; align = 8; signed = 1; encoding = none; base = 10; } _from;
2973 integer { size = 64; align = 8; signed = 1; encoding = none; base = 10; } _messageid;
2974 integer { size = 64; align = 8; signed = 1; encoding = none; base = 10; } _data;
2975 };
2976 };
2977 </pre>
2978
One would need to write an event match definition for these two events as follows:
2980
2981 <pre>
2982 public class MyAppUstEventMatching implements ITmfNetworkMatchDefinition {
2983
2984 @Override
2985 public Direction getDirection(ITmfEvent event) {
2986 String evname = event.getType().getName();
2987 if (evname.equals("myapp:receive")) {
2988 return Direction.IN;
2989 } else if (evname.equals("myapp:send")) {
2990 return Direction.OUT;
2991 }
2992 return null;
2993 }
2994
2995 @Override
    public List<Object> getUniqueField(ITmfEvent event) {
        List<Object> keys = new ArrayList<Object>();
        String evname = event.getType().getName();

        if (evname.equals("myapp:receive")) {
3000 keys.add(event.getContent().getField("from").getValue());
3001 keys.add(event.getContent().getField("messageid").getValue());
3002 } else {
3003 keys.add(event.getContent().getField("sendto").getValue());
3004 keys.add(event.getContent().getField("messageid").getValue());
3005 }
3006
3007 return keys;
3008 }
3009
3010 @Override
3011 public boolean canMatchTrace(ITmfTrace trace) {
3012 if (!(trace instanceof CtfTmfTrace)) {
3013 return false;
3014 }
3015 CtfTmfTrace ktrace = (CtfTmfTrace) trace;
3016 String[] events = { "myapp:receive", "myapp:send" };
3017 return ktrace.hasAtLeastOneOfEvents(events);
3018 }
3019
3020 @Override
3021 public MatchingType[] getApplicableMatchingTypes() {
3022 MatchingType[] types = { MatchingType.NETWORK };
3023 return types;
3024 }
3025
3026 }
3027 </pre>
3028
3029 Somewhere in code that will be executed at the start of the plugin (like in the Activator), the following code will have to be run:
3030
3031 <pre>
3032 TmfEventMatching.registerMatchObject(new MyAppUstEventMatching());
3033 </pre>
3034
Now, simply adding the traces to an experiment and clicking the '''Synchronize traces''' menu item will synchronize the traces using the new definition for event matching.
3036
3037 == Trace synchronization ==
3038
3039 Trace synchronization classes and interfaces are located in the ''org.eclipse.linuxtools.tmf.core.synchronization'' package.
3040
3041 === Synchronization algorithm ===
3042
3043 Synchronization algorithms are used to synchronize traces from events matched between traces. After synchronization, traces taken on different machines with different time references see their timestamps modified such that they all use the same time reference (typically, the time of at least one of the traces). With traces from different machines, it is impossible to have perfect synchronization, so the result is a best approximation that takes network latency into account.
3044
The abstract class '''SynchronizationAlgorithm''' is a processing unit for matches. New synchronization algorithms must extend this class; it already contains the functions to get the timestamp transforms for different traces.
3046
3047 The ''fully incremental convex hull'' synchronization algorithm is the default synchronization algorithm.
3048
While the synchronization system provisions for more synchronization algorithms, there is not yet a way to select one; the experiment's trace synchronization uses the default algorithm. To test a new synchronization algorithm, the synchronization should be called directly, like this:
3050
3051 <pre>
3052 SynchronizationAlgorithm syncAlgo = new MyNewSynchronizationAlgorithm();
3053 syncAlgo = SynchronizationManager.synchronizeTraces(syncFile, traces, syncAlgo, true);
3054 </pre>
3055
3056 === Timestamp transforms ===
3057
3058 Timestamp transforms are the formulae used to transform the timestamps from a trace into the reference time. The '''ITmfTimestampTransform''' is the interface to implement to add a new transform.
3059
3060 The following classes implement this interface:
3061
* '''TmfTimestampTransform''': the default transform. It cannot be instantiated; it has a single static object, TmfTimestampTransform.IDENTITY, which returns the original timestamp.
3063 * '''TmfTimestampTransformLinear''': transforms the timestamp using a linear formula: ''f(t) = at + b'', where ''a'' and ''b'' are computed by the synchronization algorithm.
3064
3065 One could extend the interface for other timestamp transforms, for instance to have a transform where the formula would change over the course of the trace.
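
To make the linear case concrete, here is a minimal sketch of such a transform. The class name is hypothetical and plain doubles are used for clarity; the real '''TmfTimestampTransformLinear''', whose coefficients are computed by the synchronization algorithm, may use higher-precision arithmetic.

```java
/* Simplified sketch of a linear timestamp transform f(t) = a*t + b. */
public class LinearTransformSketch {
    private final double fAlpha; // slope, computed by the synchronization algorithm
    private final double fBeta;  // offset, computed by the synchronization algorithm

    public LinearTransformSketch(double alpha, double beta) {
        fAlpha = alpha;
        fBeta = beta;
    }

    /* Transforms a timestamp into the reference trace's time base. */
    public long transform(long timestamp) {
        return Math.round(fAlpha * timestamp + fBeta);
    }

    /* Composes this transform with another one: returns h such that
     * h(t) = f(g(t)), which stays linear. */
    public LinearTransformSketch composeWith(LinearTransformSketch g) {
        return new LinearTransformSketch(fAlpha * g.fAlpha, fAlpha * g.fBeta + fBeta);
    }
}
```

Composition matters because a trace may be synchronized against a reference trace that is itself synchronized against another one.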
3066
3067 == Todo ==
3068
3069 Here's a list of features not yet implemented that would enhance trace synchronization and event matching:
3070
3071 * Ability to select a synchronization algorithm
3072 * Implement a better way to select the reference trace instead of arbitrarily taking the first in alphabetical order (for instance, the minimum spanning tree algorithm by Masoume Jabbarifar (article on the subject not published yet))
3073 * Ability to join traces from the same host so that even if one of the traces is not synchronized with the reference trace, it will take the same timestamp transform as the one on the same machine.
3074 * Instead of having the timestamp transforms per trace, have the timestamp transform as part of an experiment context, so that the trace's specific analysis, like the state system, are in the original trace, but are transformed only when needed for an experiment analysis.
3075 * Add more views to display the synchronization information (only textual statistics are available for now)
3076
3077 = Analysis Framework =
3078
3079 Analysis modules are useful to tell the user exactly what can be done with a trace. The analysis framework provides an easy way to access and execute the modules and open the various outputs available.
3080
3081 Analyses can have parameters they can use in their code. They also have outputs registered to them to display the results from their execution.
3082
3083 == Creating a new module ==
3084
All analysis modules must implement the '''IAnalysisModule''' interface from the ''org.eclipse.linuxtools.tmf.core'' project. An abstract class, '''TmfAbstractAnalysisModule''', provides a good base implementation. It is strongly suggested to use it as a superclass of any new analysis.
3086
3087 === Example ===
3088
3089 This example shows how to add a simple analysis module for an LTTng kernel trace with two parameters.
3090
3091 <pre>
3092 public class MyLttngKernelAnalysis extends TmfAbstractAnalysisModule {
3093
3094 public static final String PARAM1 = "myparam";
3095 public static final String PARAM2 = "myotherparam";
3096
3097 @Override
3098 public boolean canExecute(ITmfTrace trace) {
3099 /* This just makes sure the trace is an Lttng kernel trace, though
3100 usually that should have been done by specifying the trace type
3101 this analysis module applies to */
3102 if (!LttngKernelTrace.class.isAssignableFrom(trace.getClass())) {
3103 return false;
3104 }
3105
3106 /* Does the trace contain the appropriate events? */
3107 String[] events = { "sched_switch", "sched_wakeup" };
3108 return ((LttngKernelTrace) trace).hasAllEvents(events);
3109 }
3110
3111 @Override
3112 protected void canceling() {
3113 /* The job I am running in is being cancelled, let's clean up */
3114 }
3115
3116 @Override
3117 protected boolean executeAnalysis(final IProgressMonitor monitor) {
3118 /*
3119 * I am running in an Eclipse job, and I already know I can execute
3120 * on a given trace.
3121 *
3122 * In the end, I will return true if I was successfully completed or
3123 * false if I was either interrupted or something wrong occurred.
3124 */
3125 Object param1 = getParameter(PARAM1);
        int param2 = (Integer) getParameter(PARAM2);

        /* ... actual analysis code using param1 and param2 ... */

        return true;
    }
3128
3129 @Override
3130 public Object getParameter(String name) {
3131 Object value = super.getParameter(name);
3132 /* Make sure the value of param2 is of the right type. For sake of
3133 simplicity, the full parameter format validation is not presented
3134 here */
3135 if ((value != null) && name.equals(PARAM2) && (value instanceof String)) {
3136 return Integer.parseInt((String) value);
3137 }
3138 return value;
3139 }
3140
3141 }
3142 </pre>
3143
3144 === Available base analysis classes and interfaces ===
3145
The following are available as base classes for analysis modules. They also extend the abstract '''TmfAbstractAnalysisModule''' class.
3147
* '''TmfStateSystemAnalysisModule''': A base analysis module that builds one state system. A module extending this class only needs to provide a state provider and the type of state system backend to use. All state systems should now use this base class as it also contains all the methods to actually create the state system with a given backend.
3149
3150 The following interfaces can optionally be implemented by analysis modules if they use their functionalities. For instance, some utility views, like the State System Explorer, may have access to the module's data through these interfaces.
3151
3152 * '''ITmfAnalysisModuleWithStateSystems''': Modules implementing this have one or more state systems included in them. For example, a module may "hide" 2 state system modules for its internal workings. By implementing this interface, it tells that it has state systems and can return them if required.
3153
3154 === How it works ===
3155
3156 Analyses are managed through the '''TmfAnalysisManager'''. The analysis manager is a singleton in the application and keeps track of all available analysis modules, with the help of '''IAnalysisModuleHelper'''. It can be queried to get the available analysis modules, either all of them or only those for a given tracetype. The helpers contain the non-trace specific information on an analysis module: its id, its name, the tracetypes it applies to, etc.
3157
When a trace is opened, the helpers for the applicable analyses create new instances of the analysis modules. The analyses are then kept in a field of the trace and can be executed automatically or on demand.
3159
3160 The analysis is executed by calling the '''IAnalysisModule#schedule()''' method. This method makes sure the analysis is executed only once and, if it is already running, it won't start again. The analysis itself is run inside an Eclipse job that can be cancelled by the user or the application. The developer must consider the progress monitor that comes as a parameter of the '''executeAnalysis()''' method, to handle the proper cancellation of the processing. The '''IAnalysisModule#waitForCompletion()''' method will block the calling thread until the analysis is completed. The method will return whether the analysis was successfully completed or if it was cancelled.
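
A typical client-side sequence, as a hedged sketch (how the module instance is obtained from the trace is elided here, since the accessor is not part of this example):

```java
IAnalysisModule module = /* module instance kept by the trace */;
module.schedule();                 // starts the Eclipse job; no-op if already running
if (module.waitForCompletion()) {  // blocks the calling thread
    /* the analysis completed successfully; query its results */
}
```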
3161
3162 A running analysis can be cancelled by calling the '''IAnalysisModule#cancel()''' method. This will set the analysis as done, so it cannot start again unless it is explicitly reset. This is done by calling the protected method '''resetAnalysis'''.
3163
3164 == Telling TMF about the analysis module ==
3165
Now that the analysis module class exists, it is time to hook it to the rest of TMF so that it appears under the traces in the project explorer. The way to do so is to add an extension of type ''org.eclipse.linuxtools.tmf.core.analysis'' to a plugin, either through the ''Extensions'' tab of the Plug-in Manifest Editor or by directly editing the plugin.xml file.
3167
3168 The following code shows what the resulting plugin.xml file should look like.
3169
3170 <pre>
3171 <extension
3172 point="org.eclipse.linuxtools.tmf.core.analysis">
3173 <module
3174 id="my.lttng.kernel.analysis.id"
3175 name="My LTTng Kernel Analysis"
3176 analysis_module="my.plugin.package.MyLttngKernelAnalysis"
3177 automatic="true">
3178 <parameter
3179 name="myparam">
3180 </parameter>
      <parameter
            default_value="3"
            name="myotherparam">
      </parameter>
      <tracetype
            class="org.eclipse.linuxtools.lttng2.kernel.core.trace.LttngKernelTrace">
      </tracetype>
</module>
3188 </extension>
3189 </pre>
3190
This defines an analysis module where the ''analysis_module'' attribute corresponds to the module class and must implement IAnalysisModule. This module has two parameters: ''myparam'' and ''myotherparam'', which has a default value of 3. The ''tracetype'' element tells which tracetypes this analysis applies to; there can be many tracetypes. Also, the ''automatic'' attribute of the module indicates whether this analysis should be run when the trace is opened, or wait for the user's explicit request.
3192
3193 Note that with these extension points, it is possible to use the same module class for more than one analysis (with different ids and names). That is a desirable behavior. For instance, a third party plugin may add a new tracetype different from the one the module is meant for, but on which the analysis can run. Also, different analyses could provide different results with the same module class but with different default values of parameters.
3194
3195 == Attaching outputs and views to the analysis module ==
3196
3197 Analyses will typically produce outputs the user can examine. Outputs can be a text dump, a .dot file, an XML file, a view, etc. All output types must implement the '''IAnalysisOutput''' interface.
3198
3199 An output can be registered to an analysis module at any moment by calling the '''IAnalysisModule#registerOutput()''' method. Analyses themselves may know what outputs are available and may register them in the analysis constructor or after analysis completion.
3200
3201 The various concrete output types are:
3202
3203 * '''TmfAnalysisViewOutput''': It takes a view ID as parameter and, when selected, opens the view.
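
For example, an analysis module could register such an output from its constructor. This is a hedged sketch; the view ID below is a placeholder:

```java
/* The view ID is hypothetical; use the ID of your own view. */
registerOutput(new TmfAnalysisViewOutput("my.plugin.package.ui.views.myView"));
```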
3204
3205 === Using the extension point to add outputs ===
3206
3207 Analysis outputs can also be hooked to an analysis using the same extension point ''org.eclipse.linuxtools.tmf.core.analysis'' in the plugin.xml file. Outputs can be matched either to a specific analysis identified by an ID, or to all analysis modules extending or implementing a given class or interface.
3208
The following code shows how to add a view output to the analysis defined above directly in the plugin.xml file. This extension does not have to be in the same plugin as the extension defining the analysis. Typically, an analysis module can be defined in a core plugin, along with some outputs that do not require UI elements. Other outputs, like views, which need UI elements, will be defined in a UI plugin.
3210
3211 <pre>
3212 <extension
3213 point="org.eclipse.linuxtools.tmf.core.analysis">
3214 <output
3215 class="org.eclipse.linuxtools.tmf.ui.analysis.TmfAnalysisViewOutput"
3216 id="my.plugin.package.ui.views.myView">
3217 <analysisId
3218 id="my.lttng.kernel.analysis.id">
3219 </analysisId>
3220 </output>
3221 <output
3222 class="org.eclipse.linuxtools.tmf.ui.analysis.TmfAnalysisViewOutput"
3223 id="my.plugin.package.ui.views.myMoreGenericView">
3224 <analysisModuleClass
3225 class="my.plugin.package.core.MyAnalysisModuleClass">
3226 </analysisModuleClass>
3227 </output>
3228 </extension>
3229 </pre>
3230
3231 == Providing help for the module ==
3232
For now, the only way to provide a meaningful help message to the user is by overriding the '''IAnalysisModule#getHelpText()''' method to return a string that will be displayed in a message box.
3234
What still needs to be implemented is a way to add full user/developer documentation as a mediawiki text file for each module and automatically add it to Eclipse Help. Clicking on the Help menu item of an analysis module would then open the corresponding page in the help.
3236
3237 == Using analysis parameter providers ==
3238
3239 An analysis may have parameters that can be used during its execution. Default values can be set when describing the analysis module in the plugin.xml file, or they can use the '''IAnalysisParameterProvider''' interface to provide values for parameters. '''TmfAbstractAnalysisParamProvider''' provides an abstract implementation of this interface, that automatically notifies the module of a parameter change.
3240
3241 === Example parameter provider ===
3242
The following example shows how to have a parameter provider listen to a selection in the LTTng kernel Control Flow view and send the thread ID to the analysis.
3244
3245 <pre>
3246 public class MyLttngKernelParameterProvider extends TmfAbstractAnalysisParamProvider {
3247
3248 private ControlFlowEntry fCurrentEntry = null;
3249
3250 private static final String NAME = "My Lttng kernel parameter provider"; //$NON-NLS-1$
3251
3252 private ISelectionListener selListener = new ISelectionListener() {
3253 @Override
3254 public void selectionChanged(IWorkbenchPart part, ISelection selection) {
3255 if (selection instanceof IStructuredSelection) {
3256 Object element = ((IStructuredSelection) selection).getFirstElement();
3257 if (element instanceof ControlFlowEntry) {
3258 ControlFlowEntry entry = (ControlFlowEntry) element;
3259 setCurrentThreadEntry(entry);
3260 }
3261 }
3262 }
3263 };
3264
3265 /*
3266 * Constructor
3267 */
    public MyLttngKernelParameterProvider() {
3269 super();
3270 registerListener();
3271 }
3272
3273 @Override
3274 public String getName() {
3275 return NAME;
3276 }
3277
3278 @Override
3279 public Object getParameter(String name) {
3280 if (fCurrentEntry == null) {
3281 return null;
3282 }
3283 if (name.equals(MyLttngKernelAnalysis.PARAM1)) {
            return fCurrentEntry.getThreadId();
3285 }
3286 return null;
3287 }
3288
3289 @Override
3290 public boolean appliesToTrace(ITmfTrace trace) {
3291 return (trace instanceof LttngKernelTrace);
3292 }
3293
3294 private void setCurrentThreadEntry(ControlFlowEntry entry) {
3295 if (!entry.equals(fCurrentEntry)) {
3296 fCurrentEntry = entry;
3297 this.notifyParameterChanged(MyLttngKernelAnalysis.PARAM1);
3298 }
3299 }
3300
3301 private void registerListener() {
3302 final IWorkbench wb = PlatformUI.getWorkbench();
3303
3304 final IWorkbenchPage activePage = wb.getActiveWorkbenchWindow().getActivePage();
3305
3306 /* Add the listener to the control flow view */
        final IViewPart view = activePage.findView(ControlFlowView.ID);
3308 if (view != null) {
3309 view.getSite().getWorkbenchWindow().getSelectionService().addPostSelectionListener(selListener);
3311 }
3312 }
3313
3314 }
3315 </pre>
3316
3317 === Register the parameter provider to the analysis ===
3318
3319 To have the parameter provider class register to analysis modules, it must first register through the analysis manager. It can be done in a plugin's activator as follows:
3320
3321 <pre>
3322 @Override
3323 public void start(BundleContext context) throws Exception {
3324 /* ... */
    TmfAnalysisManager.registerParameterProvider("my.lttng.kernel.analysis.id", MyLttngKernelParameterProvider.class);
3326 }
3327 </pre>
3328
where '''MyLttngKernelParameterProvider''' will be registered to analysis ''"my.lttng.kernel.analysis.id"''. When the analysis module is created, the new module will register automatically to the singleton parameter provider instance. Only one module is registered to a parameter provider at a given time: the one corresponding to the currently selected trace.
3330
3331 == Providing requirements to analyses ==
3332
3333 === Analysis requirement provider API ===
3334
3335 A requirement defines the needs of an analysis. For example, an analysis could need an event named ''"sched_switch"'' in order to be properly executed. The requirements are represented by the class '''TmfAnalysisRequirement'''. Since '''IAnalysisModule''' extends the '''IAnalysisRequirementProvider''' interface, all analysis modules must provide their requirements. If the analysis module extends '''TmfAbstractAnalysisModule''', overriding the requirements getter ('''IAnalysisRequirementProvider#getAnalysisRequirements()''') is optional, since the abstract class returns an empty collection (no requirements) by default.
3336
3337 === Requirement values ===
3338
3339 When instantiating a requirement, the developer needs to specify a type to which all the values added to the requirement will be linked. In the earlier example, the type would be ''"event"'' or ''"eventName"''. The type is represented by a string, like all values added to the requirement object. With an ''"event"'' type requirement, a trace generator like the LTTng Control could automatically enable the required events, by using the '''TmfAnalysisRequirementHelper''' class. Another point to take into consideration is the priority level of each value added to the requirement object. The enum '''TmfAnalysisRequirement#ValuePriorityLevel''' offers the choice between '''ValuePriorityLevel#MANDATORY''' and '''ValuePriorityLevel#OPTIONAL''', so that we can tell whether an analysis can run without a given value or not. To add values, one must call '''TmfAnalysisRequirement#addValue()'''.
3340
3341 Moreover, information can be added to requirements. That way, the developer can explicitly give help details at the requirement level instead of at the analysis level (which would just be a general help text). To add information to a requirement, the method '''TmfAnalysisRequirement#addInformation()''' must be called. Adding information is not mandatory.
3342
3343 === Example of providing requirements ===
3344
3345 In this example, we will implement a method that initializes a requirement object and returns it in the '''IAnalysisRequirementProvider#getAnalysisRequirements()''' getter. The example method will return a set with two requirements: the first one indicates the events needed by a specific analysis and the second one indicates the domain type to which the analysis applies. In the event type requirement, we will indicate that the analysis needs a mandatory event and an optional one.
3346
3347 <pre>
3348 @Override
3349 public Iterable<TmfAnalysisRequirement> getAnalysisRequirements() {
3350 Set<TmfAnalysisRequirement> requirements = new HashSet<>();
3351
3352 /* Create requirements of type 'event' and 'domain' */
3353 TmfAnalysisRequirement eventRequirement = new TmfAnalysisRequirement("event");
3354 TmfAnalysisRequirement domainRequirement = new TmfAnalysisRequirement("domain");
3355
3356 /* Add the values */
3357 domainRequirement.addValue("kernel", TmfAnalysisRequirement.ValuePriorityLevel.MANDATORY);
3358 eventRequirement.addValue("sched_switch", TmfAnalysisRequirement.ValuePriorityLevel.MANDATORY);
3359 eventRequirement.addValue("sched_wakeup", TmfAnalysisRequirement.ValuePriorityLevel.OPTIONAL);
3360
3361 /* Information about the events */
3362 eventRequirement.addInformation("The event sched_wakeup is optional because it's not properly handled by this analysis yet.");
3363
3364 /* Add them to the set */
3365 requirements.add(domainRequirement);
3366 requirements.add(eventRequirement);
3367
3368 return requirements;
3369 }
3370 </pre>
3371
3372
3373 == TODO ==
3374
3375 Here's a list of features not yet implemented that would improve the analysis module user experience:
3376
3377 * Implement help using the Eclipse Help facility (without forgetting a possible command line request)
3378 * The abstract class '''TmfAbstractAnalysisModule''' executes an analysis as a job, but nothing compels a developer to do so for an analysis implementing the '''IAnalysisModule''' interface. We should force the execution of the analysis as a job, either from the trace itself, using the TmfAnalysisManager, or by some other means.
3379 * Views and outputs are often registered by the analyses themselves (often forcing them to be in the .ui packages because of the views), because there is no other easy way to do so. We should extend the analysis extension point so that .ui plugins or other third-party plugins can add outputs to a given analysis that resides in the core.
3380 * Improve the user experience with the analysis:
3381 ** Allow the user to select which analyses should be available, per trace or per project.
3382 ** Allow the user to view all available analyses even if no traces have been imported.
3383 ** Allow the user to generate traces for a given analysis, or generate a template to generate the trace that can be sent as parameter to the tracer.
3384 ** Give the user a visual status of the analysis: not executed, in progress, completed, error.
3385 ** Give a small screenshot of the output as icon for it.
3386 ** Allow specifying parameter values from the GUI.
3387 * Add the possibility for an analysis requirement to be composed of another requirement.
3388 * Generate a trace session from analysis requirements.
3389
3390
3391 = Performance Tests =
3392
3393 Performance testing makes it possible to measure metrics (CPU time, memory usage, etc.) for some part of the code during its execution. These metrics can then be used as is for information on the system's execution, or they can be compared either with other execution scenarios, or with previous runs of the same scenario, for instance after some optimization has been done on the code.
3394
3395 For automatic performance metric computation, we use the ''org.eclipse.test.performance'' plugin, provided by the Eclipse Test Feature.
3396
3397 == Add performance tests ==
3398
3399 === Where ===
3400
3401 Performance tests are unit tests and they are added to the corresponding unit test plug-in. To separate performance tests from unit tests, a separate source folder, typically named ''perf'', is added to the plug-in.
3402
3403 Tests are to be added to a package under the ''perf'' directory; the package name would typically match the name of the package it is testing. For each package, a class named '''AllPerfTests''' would list all the performance test classes inside this package. As for unit tests, a class named '''AllPerfTests''' for the plug-in would list all the packages' '''AllPerfTests''' classes.
3404
3405 When adding performance tests for the first time in a plug-in, the plug-in's '''AllPerfTests''' class should be added to the global list of performance tests, found in package ''org.eclipse.linuxtools.lttng.alltests'', in class '''RunAllPerfTests'''. This will ensure that performance tests for the plug-in are run along with the other performance tests.
3406
3407 === How ===
3408
3409 TMF uses the org.eclipse.test.performance framework for performance tests. With it, performance metrics are automatically taken and, if the test is run multiple times, the average and standard deviation are automatically computed. Results can optionally be stored in a database for later use.
3410
3411 Here is an example of how to use the test framework in a performance test:
3412
3413 <pre>
3414 public class AnalysisBenchmark {
3415
3416 private static final String TEST_ID = "org.eclipse.linuxtools#LTTng kernel analysis";
3417 private static final CtfTmfTestTrace testTrace = CtfTmfTestTrace.TRACE2;
3418 private static final int LOOP_COUNT = 10;
3419
3420 /**
3421 * Performance test
3422 */
3423 @Test
3424 public void testTrace() {
3425 assumeTrue(testTrace.exists());
3426
3427 /** Create a new performance meter for this scenario */
3428 Performance perf = Performance.getDefault();
3429 PerformanceMeter pm = perf.createPerformanceMeter(TEST_ID);
3430
3431 /** Optionally, tag this test for summary or global summary on a given dimension */
3432 perf.tagAsSummary(pm, "LTTng Kernel Analysis", Dimension.CPU_TIME);
3433 perf.tagAsGlobalSummary(pm, "LTTng Kernel Analysis", Dimension.CPU_TIME);
3434
3435 /** The test will be run LOOP_COUNT times */
3436 for (int i = 0; i < LOOP_COUNT; i++) {
3437
3438 /** Start each run of the test with new objects to avoid different code paths */
3439 try (IAnalysisModule module = new LttngKernelAnalysisModule();
3440 LttngKernelTrace trace = new LttngKernelTrace()) {
3441 module.setId("test");
3442 trace.initTrace(null, testTrace.getPath(), CtfTmfEvent.class);
3443 module.setTrace(trace);
3444
3445 /** The analysis execution is being tested, so performance metrics
3446 * are taken before and after the execution */
3447 pm.start();
3448 TmfTestHelper.executeAnalysis(module);
3449 pm.stop();
3450
3451 /*
3452 * Delete the supplementary files, so next iteration rebuilds
3453 * the state system.
3454 */
3455 File suppDir = new File(TmfTraceManager.getSupplementaryFileDir(trace));
3456 for (File file : suppDir.listFiles()) {
3457 file.delete();
3458 }
3459
3460 } catch (TmfAnalysisException | TmfTraceException e) {
3461 fail(e.getMessage());
3462 }
3463 }
3464
3465 /** Once the test has been run many times, committing the results will
3466 * calculate average, standard deviation, and, if configured, save the
3467 * data to a database */
3468 pm.commit();
3469 }
3470 }
3471
3472 </pre>
3473
3474 For more information, see [http://wiki.eclipse.org/Performance/Automated_Tests The Eclipse Performance Test How-to]
3475
3476 Some rules to help write performance tests are explained in section [[#ABC of performance testing | ABC of performance testing]].
3477
3478 === Run a performance test ===
3479
3480 Performance tests are unit tests, so, just like unit tests, they can be run by right-clicking on a performance test class and selecting ''Run As'' -> ''JUnit Plug-in Test''.
3481
3482 By default, if no database has been configured, results will be displayed in the Console at the end of the test.
3483
3484 Here is the sample output from the test described in the previous section. It shows all the metrics that have been calculated during the test.
3485
3486 <pre>
3487 Scenario 'org.eclipse.linuxtools#LTTng kernel analysis' (average over 10 samples):
3488 System Time: 3.04s (95% in [2.77s, 3.3s]) Measurable effect: 464ms (1.3 SDs) (required sample size for an effect of 5% of mean: 94)
3489 Used Java Heap: -1.43M (95% in [-33.67M, 30.81M]) Measurable effect: 57.01M (1.3 SDs) (required sample size for an effect of 5% of stdev: 6401)
3490 Working Set: 14.43M (95% in [-966.01K, 29.81M]) Measurable effect: 27.19M (1.3 SDs) (required sample size for an effect of 5% of stdev: 6400)
3491 Elapsed Process: 3.04s (95% in [2.77s, 3.3s]) Measurable effect: 464ms (1.3 SDs) (required sample size for an effect of 5% of mean: 94)
3492 Kernel time: 621ms (95% in [586ms, 655ms]) Measurable effect: 60ms (1.3 SDs) (required sample size for an effect of 5% of mean: 39)
3493 CPU Time: 6.06s (95% in [5.02s, 7.09s]) Measurable effect: 1.83s (1.3 SDs) (required sample size for an effect of 5% of mean: 365)
3494 Hard Page Faults: 0 (95% in [0, 0]) Measurable effect: 0 (1.3 SDs) (required sample size for an effect of 5% of stdev: 6400)
3495 Soft Page Faults: 9.27K (95% in [3.28K, 15.27K]) Measurable effect: 10.6K (1.3 SDs) (required sample size for an effect of 5% of mean: 5224)
3496 Text Size: 0 (95% in [0, 0])
3497 Data Size: 0 (95% in [0, 0])
3498 Library Size: 32.5M (95% in [-12.69M, 77.69M]) Measurable effect: 79.91M (1.3 SDs) (required sample size for an effect of 5% of stdev: 6401)
3499 </pre>
3500
3501 Results from performance tests can be saved automatically to a derby database. Derby can be run either in embedded mode, locally on a machine, or on a server. More information on setting up derby for performance tests can be found here: [http://wiki.eclipse.org/Performance/Automated_Tests The Eclipse Performance Test How-to]. The following documentation will show how to configure an Eclipse run configuration to store results on a derby database located on a server.
3502
3503 Note that to store results in a derby database, the ''org.apache.derby'' plug-in must be available within your Eclipse. Since it is an optional dependency, it is not included in the target definition. It can be installed via the '''Orbit''' repository, in ''Help'' -> ''Install new software...''. If the '''Orbit''' repository is not listed, click on the latest one from [http://download.eclipse.org/tools/orbit/downloads/] and copy the link under ''Orbit Build Repository''.
3504
3505 To store the data in a database, it needs to be configured in the run configuration. In ''Run'' -> ''Run Configurations...'', under ''JUnit Plug-in Test'', find the run configuration that corresponds to the test you wish to run, or create one if it is not present yet.
3506
3507 In the ''Arguments'' tab, in the box under ''VM Arguments'', add on separate lines the following information
3508
3509 <pre>
3510 -Declipse.perf.dbloc=//javaderby.dorsal.polymtl.ca
3511 -Declipse.perf.config=build=mybuild;host=myhost;config=linux;jvm=1.7
3512 </pre>
3513
3514 The ''eclipse.perf.dbloc'' parameter is the url (or filename) of the derby database. The database is by default named ''perfDB'', with username and password ''guest''/''guest''. If the database does not exist, it will be created, initialized and populated.
3515
3516 The ''eclipse.perf.config'' parameter identifies a '''variation''': it typically identifies the build on which it is run (commitId and/or build date, etc.), the machine (host) on which it is run, the configuration of the system (for example Linux or Windows), the JVM, etc. That parameter is a list of ';'-separated key-value pairs. To be backward-compatible with the Eclipse Performance Tests Framework, the 4 keys mentioned above are mandatory, but any key-value pairs can be used.
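For illustration, the ''eclipse.perf.config'' value is an ordinary ';'-separated key-value list. The following plain-Java sketch shows the expected shape of the parameter; the class and method names are made up for this example and are not part of the framework:

```java
import java.util.LinkedHashMap;
import java.util.Map;

/*
 * Hypothetical helper illustrating the format of eclipse.perf.config:
 * a ';'-separated list of key=value pairs.
 */
public class PerfConfigParser {

    /** Splits "k1=v1;k2=v2" into an ordered key/value map. */
    public static Map<String, String> parse(String config) {
        Map<String, String> result = new LinkedHashMap<>();
        for (String pair : config.split(";")) {
            int eq = pair.indexOf('=');
            if (eq > 0) {
                result.put(pair.substring(0, eq), pair.substring(eq + 1));
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, String> vars = parse("build=mybuild;host=myhost;config=linux;jvm=1.7");
        System.out.println(vars); // {build=mybuild, host=myhost, config=linux, jvm=1.7}
    }
}
```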
3517
3518 == ABC of performance testing ==
3519
3520 Here follow some rules to help design good and meaningful performance tests.
3521
3522 === Determine what to test ===
3523
3524 For tests to be significant, it is important to choose what exactly is to be tested and make sure it is reproducible in every run. To limit the amount of noise caused by the TMF framework, the performance test code should be tweaked so that only the method under test is run. For instance, a trace should not be "opened" (by calling the ''traceOpened()'' method) to test an analysis, since the ''traceOpened'' method will also trigger the indexing and the execution of all applicable automatic analyses.
3525
3526 For each code path to test, multiple scenarios can be defined. For instance, an analysis could be run on different traces, with different sizes. The results will show how the system scales and/or varies depending on the objects it is executed on.
3527
3528 The number of '''samples''' used to compute the results is also important. The code to test will typically be inside a '''for''' loop that runs exactly the same code a given number of times. All objects used for the test must start in the same state at each iteration of the loop. For instance, any trace used during an execution should be disposed of at the end of the loop, and any supplementary file that may have been generated in the run should be deleted.
3529
3530 Before submitting a performance test to the code review, you should run it a few times (with results in the Console) and see if the standard deviation is not too large and if the results are reproducible.
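The average and standard deviation reported over the samples are ordinary statistics. As a reference only, here is a plain-Java sketch of those computations (illustrative class and method names; the framework computes these internally and may use the sample rather than the population standard deviation):

```java
/*
 * Minimal sketch of the statistics computed over LOOP_COUNT samples:
 * arithmetic mean and (population) standard deviation.
 */
public class SampleStats {

    /** Returns the arithmetic mean of the samples. */
    public static double mean(double[] samples) {
        double sum = 0;
        for (double s : samples) {
            sum += s;
        }
        return sum / samples.length;
    }

    /** Returns the population standard deviation of the samples. */
    public static double stdDev(double[] samples) {
        double mean = mean(samples);
        double sumSq = 0;
        for (double s : samples) {
            sumSq += (s - mean) * (s - mean);
        }
        return Math.sqrt(sumSq / samples.length);
    }

    public static void main(String[] args) {
        double[] runTimes = { 3.0, 3.1, 2.9, 3.0 }; // e.g. elapsed seconds per run
        System.out.printf("mean=%.3f stddev=%.3f%n", mean(runTimes), stdDev(runTimes));
    }
}
```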
3531
3532 === Metrics descriptions and considerations ===
3533
3534 CPU time: CPU time represents the total time spent on CPU by the current process during the test execution. It is the sum of the time spent by all threads. On one hand, it is more significant than the elapsed time, since it should be the same no matter how many CPU cores the computer has. But since it includes the time of every thread, one has to make sure that only threads related to what is being tested are executed during that time, or else the results will include the times of those other threads. For an application like TMF, it is hard to control all the threads, and empirically, the CPU time is found to vary a lot more than the system time from one run to the other.
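On a JVM, this kind of CPU-time counter is exposed by the standard ''ThreadMXBean''. The sketch below only reads the current thread's counter, whereas the framework accounts for the whole process; it is an illustration of the metric, not of the framework's implementation:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

/*
 * Illustrative reading of per-thread CPU time via the standard
 * java.lang.management API.
 */
public class CpuTimeSample {

    /** Returns the CPU time used by the current thread in nanoseconds, or -1 if unsupported. */
    public static long currentThreadCpuNanos() {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        return bean.isCurrentThreadCpuTimeSupported() ? bean.getCurrentThreadCpuTime() : -1;
    }

    public static void main(String[] args) {
        long before = currentThreadCpuNanos();
        // Burn a little CPU so the counter visibly advances.
        double acc = 0;
        for (int i = 1; i < 1_000_000; i++) {
            acc += Math.sqrt(i);
        }
        long after = currentThreadCpuNanos();
        System.out.println("accumulated=" + acc + ", cpuDelta=" + (after - before) + " ns");
    }
}
```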
3535
3536 System time (Elapsed time): The time between the start and the end of the execution. It will vary depending on the parallelisation of the threads and the load of the machine.
3537
3538 Kernel time: Time spent in kernel mode.
3539
3540 Used Java Heap: It is the difference between the memory used at the beginning of the execution and at the end. This metric may be useful to calculate the overall size occupied by the data generated by the test run, by forcing a garbage collection before taking the metrics at the beginning and at the end of the execution. But it will not show the memory used throughout the execution. There can be a large standard deviation. The reason for this is that when benchmarking methods that trigger tasks in different threads, like signals and/or analysis, these other threads might be in various states at each run of the test, which will impact the memory usage calculated. When using this metric, either make sure the method to test does not trigger external threads or make sure you wait for them to finish.
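The measurement described above (force a garbage collection, read used heap, run the code, collect and read again) can be sketched with the standard ''Runtime'' API. This is purely illustrative; the framework performs this measurement itself:

```java
/*
 * Sketch of a "used Java heap" delta measurement: GC, read, run the
 * code under test, GC, read again, and compare.
 */
public class HeapDeltaSample {

    /** Returns the currently used heap in bytes, after suggesting a GC. */
    public static long usedHeapAfterGc() {
        Runtime rt = Runtime.getRuntime();
        rt.gc();
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        long before = usedHeapAfterGc();
        byte[] retained = new byte[8 * 1024 * 1024]; // stands in for data generated by the test
        long after = usedHeapAfterGc();
        System.out.println("approx. retained: " + (after - before) + " bytes"
                + " (array length " + retained.length + ")");
    }
}
```

Note that `gc()` is only a hint to the JVM, which is one reason this metric can show a large standard deviation.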
3541
3542 = Network Tracing =
3543
3544 == Adding a protocol ==
3545
3546 Supporting a new network protocol in TMF is straightforward and requires only minimal effort. In this tutorial, the UDP protocol will be added to the list of supported protocols.
3547
3548 === Architecture ===
3549
3550 All the TMF pcap-related code is divided in three projects (not considering the tests plugins):
3551 * '''org.eclipse.linuxtools.pcap.core''', which contains the parser that reads pcap files and constructs the different packets from a ByteBuffer. It also contains means to build packet streams, which are conversations (lists of packets) between two endpoints. To add a protocol, almost all of the work will be in that project.
3552 * '''org.eclipse.linuxtools.tmf.pcap.core''', which contains TMF-specific concepts and acts as a wrapper between TMF and the pcap parsing library. It only depends on org.eclipse.linuxtools.tmf.core and org.eclipse.linuxtools.pcap.core. To add a protocol, one file must be edited in this project.
3553 * '''org.eclipse.linuxtools.tmf.pcap.ui''', which contains all TMF pcap UI-specific concepts, such as the views and perspectives. No work is needed in that project.
3554
3555 === UDP Packet Structure ===
3556
3557 UDP is a transport-layer protocol that guarantees neither message delivery nor in-order message reception. A UDP packet (datagram) has the following [http://en.wikipedia.org/wiki/User_Datagram_Protocol#Packet_structure structure]:
3558
3559 {| class="wikitable" style="margin: 0 auto; text-align: center;"
3560 |-
3561 ! style="border-bottom:none; border-right:none;"| ''Offsets''
3562 ! style="border-left:none;"| Octet
3563 ! colspan="8" | 0
3564 ! colspan="8" | 1
3565 ! colspan="8" | 2
3566 ! colspan="8" | 3
3567 |-
3568 ! style="border-top: none" | Octet
3569 ! <tt>Bit</tt>!!<tt>&nbsp;0</tt>!!<tt>&nbsp;1</tt>!!<tt>&nbsp;2</tt>!!<tt>&nbsp;3</tt>!!<tt>&nbsp;4</tt>!!<tt>&nbsp;5</tt>!!<tt>&nbsp;6</tt>!!<tt>&nbsp;7</tt>!!<tt>&nbsp;8</tt>!!<tt>&nbsp;9</tt>!!<tt>10</tt>!!<tt>11</tt>!!<tt>12</tt>!!<tt>13</tt>!!<tt>14</tt>!!<tt>15</tt>!!<tt>16</tt>!!<tt>17</tt>!!<tt>18</tt>!!<tt>19</tt>!!<tt>20</tt>!!<tt>21</tt>!!<tt>22</tt>!!<tt>23</tt>!!<tt>24</tt>!!<tt>25</tt>!!<tt>26</tt>!!<tt>27</tt>!!<tt>28</tt>!!<tt>29</tt>!!<tt>30</tt>!!<tt>31</tt>
3570 |-
3571 ! 0
3572 !<tt> 0</tt>
3573 | colspan="16" style="background:#fdd;"| Source port || colspan="16"| Destination port
3574 |-
3575 ! 4
3576 !<tt>32</tt>
3577 | colspan="16"| Length || colspan="16" style="background:#fdd;"| Checksum
3578 |}
3579
3580 Knowing that, we can define a UDPPacket class that contains those fields.
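Before turning to the library classes, the header layout above can be exercised in plain Java. In this sketch, the standard ''Short.toUnsignedInt()'' stands in for the library's ''ConversionHelper.unsignedShortToInt()'', and the class name is made up:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

/*
 * Standalone sketch of reading the four 16-bit UDP header fields from a
 * ByteBuffer, as the UDPPacket constructor will do below.
 */
public class UdpHeaderSketch {

    /** Returns {sourcePort, destinationPort, totalLength, checksum}. */
    public static int[] readHeader(ByteBuffer packet) {
        packet.order(ByteOrder.BIG_ENDIAN); // network byte order
        packet.position(0);
        int sourcePort = Short.toUnsignedInt(packet.getShort());
        int destinationPort = Short.toUnsignedInt(packet.getShort());
        int totalLength = Short.toUnsignedInt(packet.getShort());
        int checksum = Short.toUnsignedInt(packet.getShort());
        return new int[] { sourcePort, destinationPort, totalLength, checksum };
    }

    public static void main(String[] args) {
        // 8-byte header: src=53, dst=49152, length=8, checksum=0xFFFF
        ByteBuffer header = ByteBuffer.wrap(new byte[] {
                0x00, 0x35, (byte) 0xC0, 0x00, 0x00, 0x08, (byte) 0xFF, (byte) 0xFF });
        int[] fields = readHeader(header);
        System.out.println("src=" + fields[0] + " dst=" + fields[1]
                + " len=" + fields[2] + " csum=0x" + Integer.toHexString(fields[3]));
    }
}
```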
3581
3582 === Creating the UDPPacket ===
3583
3584 First, in org.eclipse.linuxtools.pcap.core, create a new package named '''org.eclipse.linuxtools.pcap.core.protocol.name''', with ''name'' being the name of the new protocol. In our case, the name is udp, so we create the package '''org.eclipse.linuxtools.pcap.core.protocol.udp'''. All our work is going in this package.
3585
3586 In this package, we create a new class named UDPPacket that extends Packet. Every new protocol must define a packet type that extends the abstract class Packet. We also add different fields:
3587 * ''Packet'' '''fChildPacket''', which is the packet encapsulated by this UDP packet, if it exists. This field will be initialized by findChildPacket().
3588 * ''ByteBuffer'' '''fPayload''', which is the payload of this packet. Basically, it is the UDP packet without its header.
3589 * ''int'' '''fSourcePort''', which is an unsigned 16-bit field that contains the source port of the packet (see packet structure).
3590 * ''int'' '''fDestinationPort''', which is an unsigned 16-bit field that contains the destination port of the packet (see packet structure).
3591 * ''int'' '''fTotalLength''', which is an unsigned 16-bit field that contains the total length (header + payload) of the packet.
3592 * ''int'' '''fChecksum''', which is an unsigned 16-bit field that contains a checksum to verify the integrity of the data.
3593 * ''UDPEndpoint'' '''fSourceEndpoint''', which contains the source endpoint of the UDPPacket. The UDPEndpoint class will be created later in this tutorial.
3594 * ''UDPEndpoint'' '''fDestinationEndpoint''', which contains the destination endpoint of the UDPPacket.
3595 * ''ImmutableMap<String, String>'' '''fFields''', which is a map that contains all the packet fields (see the data structure above), mapping each field name to its value. Those values will be displayed in the UI.
3596
3597 We also create the UDPPacket(PcapFile file, @Nullable Packet parent, ByteBuffer packet) constructor. The parameters are:
3598 * ''PcapFile'' '''file''', which is the pcap file to which this packet belongs.
3599 * ''Packet'' '''parent''', which is the packet encapsulating this UDPPacket.
3600 * ''ByteBuffer'' '''packet''', which is a ByteBuffer that contains all the data necessary to initialize the fields of this UDPPacket. We will retrieve bytes from it during object construction.
3601
3602 The following class is obtained:
3603
3604 <pre>
3605 package org.eclipse.linuxtools.pcap.core.protocol.udp;
3606
3607 import java.nio.ByteBuffer;
3608 import java.util.Map;
3609
3610 import org.eclipse.jdt.annotation.Nullable;
3611 import org.eclipse.linuxtools.internal.pcap.core.endpoint.ProtocolEndpoint;
3612 import org.eclipse.linuxtools.internal.pcap.core.packet.BadPacketException;
import org.eclipse.linuxtools.internal.pcap.core.packet.Packet;

import com.google.common.collect.ImmutableMap;
3613
3614 public class UDPPacket extends Packet {
3615
3616 private final @Nullable Packet fChildPacket;
3617 private final @Nullable ByteBuffer fPayload;
3618
3619 private final int fSourcePort;
3620 private final int fDestinationPort;
3621 private final int fTotalLength;
3622 private final int fChecksum;
3623
3624 private @Nullable UDPEndpoint fSourceEndpoint;
3625 private @Nullable UDPEndpoint fDestinationEndpoint;
3626
3627 private @Nullable ImmutableMap<String, String> fFields;
3628
3629 /**
3630 * Constructor of the UDP Packet class.
3631 *
3632 * @param file
3633 * The file that contains this packet.
3634 * @param parent
3635 * The parent packet of this packet (the encapsulating packet).
3636 * @param packet
3637 * The entire packet (header and payload).
3638 * @throws BadPacketException
3639 * Thrown when the packet is erroneous.
3640 */
3641 public UDPPacket(PcapFile file, @Nullable Packet parent, ByteBuffer packet) throws BadPacketException {
3642 super(file, parent, Protocol.UDP);
3643 // TODO Auto-generated constructor stub
3644 }
3645
3646
3647 @Override
3648 public Packet getChildPacket() {
3649 // TODO Auto-generated method stub
3650 return null;
3651 }
3652
3653 @Override
3654 public ByteBuffer getPayload() {
3655 // TODO Auto-generated method stub
3656 return null;
3657 }
3658
3659 @Override
3660 public boolean validate() {
3661 // TODO Auto-generated method stub
3662 return false;
3663 }
3664
3665 @Override
3666 protected Packet findChildPacket() throws BadPacketException {
3667 // TODO Auto-generated method stub
3668 return null;
3669 }
3670
3671 @Override
3672 public ProtocolEndpoint getSourceEndpoint() {
3673 // TODO Auto-generated method stub
3674 return null;
3675 }
3676
3677 @Override
3678 public ProtocolEndpoint getDestinationEndpoint() {
3679 // TODO Auto-generated method stub
3680 return null;
3681 }
3682
3683 @Override
3684 public Map<String, String> getFields() {
3685 // TODO Auto-generated method stub
3686 return null;
3687 }
3688
3689 @Override
3690 public String getLocalSummaryString() {
3691 // TODO Auto-generated method stub
3692 return null;
3693 }
3694
3695 @Override
3696 protected String getSignificationString() {
3697 // TODO Auto-generated method stub
3698 return null;
3699 }
3700
3701 @Override
3702 public boolean equals(Object obj) {
3703 // TODO Auto-generated method stub
3704 return false;
3705 }
3706
3707 @Override
3708 public int hashCode() {
3709 // TODO Auto-generated method stub
3710 return 0;
3711 }
3712
3713 }
3714 </pre>
3715
3716 Now, we implement the constructor. It is done in four steps:
3717 * We initialize fSourceEndpoint, fDestinationEndpoint and fFields to null, since those are lazy-loaded. This allows faster construction of the packet and thus faster parsing.
3718 * We initialize fSourcePort, fDestinationPort, fTotalLength, fChecksum using the ByteBuffer packet. Thanks to the packet data structure, we can simply call packet.getShort() to get each value. Since there is no unsigned in Java, special care is taken to avoid negative numbers. We use the utility method ConversionHelper.unsignedShortToInt() to convert it to an integer, and initialize the fields.
3719 * Now that the header is parsed, we take the rest of the ByteBuffer packet to initialize the payload, if there is one. To do this, we simply generate a new ByteBuffer starting from the current position.
3720 * We initialize the field fChildPacket using the method findChildPacket().
3721
3722 The following constructor is obtained:
3723 <pre>
3724 public UDPPacket(PcapFile file, @Nullable Packet parent, ByteBuffer packet) throws BadPacketException {
3725 super(file, parent, Protocol.UDP);
3726
3727 // The endpoints and fFields are lazy loaded. They are defined in the get*Endpoint()
3728 // methods.
3729 fSourceEndpoint = null;
3730 fDestinationEndpoint = null;
3731 fFields = null;
3732
3733 // Initialize the fields from the ByteBuffer
3734 packet.order(ByteOrder.BIG_ENDIAN);
3735 packet.position(0);
3736
3737 fSourcePort = ConversionHelper.unsignedShortToInt(packet.getShort());
3738 fDestinationPort = ConversionHelper.unsignedShortToInt(packet.getShort());
3739 fTotalLength = ConversionHelper.unsignedShortToInt(packet.getShort());
3740 fChecksum = ConversionHelper.unsignedShortToInt(packet.getShort());
3741
3742 // Initialize the payload
3743 if (packet.array().length - packet.position() > 0) {
3744 byte[] array = new byte[packet.array().length - packet.position()];
3745 packet.get(array);
3746
3747 ByteBuffer payload = ByteBuffer.wrap(array);
3748 payload.order(ByteOrder.BIG_ENDIAN);
3749 payload.position(0);
3750 fPayload = payload;
3751 } else {
3752 fPayload = null;
3753 }
3754
3755 // Find child
3756 fChildPacket = findChildPacket();
3757
3758 }
3759 </pre>
3760
3761 Then, we implement the following methods:
3762 * ''public Packet'' '''getChildPacket()''': simple getter of fChildPacket
3763 * ''public ByteBuffer'' '''getPayload()''': simple getter of fPayload
3764 * ''public boolean'' '''validate()''': method that checks if the packet is valid. In our case, the packet is valid if the retrieved checksum fChecksum and the real checksum (that we can compute using the fields and payload of UDPPacket) are the same.
3765 * ''protected Packet'' '''findChildPacket()''': method that creates a new packet if an encapsulated protocol is found. For instance, based on the fDestinationPort, it could determine what the encapsulated protocol is and create a new packet object.
3766 * ''public ProtocolEndpoint'' '''getSourceEndpoint()''': method that initializes and returns the source endpoint.
3767 * ''public ProtocolEndpoint'' '''getDestinationEndpoint()''': method that initializes and returns the destination endpoint.
3768 * ''public Map<String, String>'' '''getFields()''': method that initializes and returns the map containing the fields matched to their value.
3769 * ''public String'' '''getLocalSummaryString()''': method that returns a string summarizing the most important fields of the packet. There is no need to list all the fields, just the most important ones. This will be displayed in the UI.
3770 * ''protected String'' '''getSignificationString()''': method that returns a string describing the meaning of the packet. If there is no particular meaning, it is possible to return getLocalSummaryString().
3771 * ''public boolean'' '''equals(Object obj)''': Object's equals method.
3772 * ''public int'' '''hashCode()''': Object's hashCode method.
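The checksum comparison mentioned for '''validate()''' boils down to the Internet ones' complement checksum (RFC 1071). The standalone sketch below deliberately omits the IP pseudo-header that real UDP validation also covers, so it only illustrates the core algorithm; the class name is made up:

```java
/*
 * Sketch of the RFC 1071 ones' complement checksum over a byte array.
 * Real UDP validation would also sum an IP pseudo-header (omitted here).
 */
public class ChecksumSketch {

    /** Returns the 16-bit ones' complement checksum of the data. */
    public static int checksum(byte[] data) {
        long sum = 0;
        for (int i = 0; i < data.length; i += 2) {
            int hi = data[i] & 0xFF;
            int lo = (i + 1 < data.length) ? data[i + 1] & 0xFF : 0; // odd length: pad with zero
            sum += (hi << 8) | lo;
        }
        while ((sum >> 16) != 0) { // fold the carries back into the low 16 bits
            sum = (sum & 0xFFFF) + (sum >> 16);
        }
        return (int) (~sum & 0xFFFF);
    }

    public static void main(String[] args) {
        // Example data from RFC 1071, section 3: checksum is 0x220D
        byte[] data = { 0x00, 0x01, (byte) 0xF2, 0x03, (byte) 0xF4, (byte) 0xF5, (byte) 0xF6, (byte) 0xF7 };
        System.out.printf("0x%04X%n", checksum(data));
    }
}
```

With this in place, validate() could recompute the checksum over the packet (with the checksum field zeroed) and compare it to fChecksum.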
3773
3774 We get the following code:
3775 <pre>
3776 @Override
3777 public @Nullable Packet getChildPacket() {
3778 return fChildPacket;
3779 }
3780
3781 @Override
3782 public @Nullable ByteBuffer getPayload() {
3783 return fPayload;
3784 }
3785
3786 /**
3787 * Getter method that returns the UDP Source Port.
3788 *
3789 * @return The source Port.
3790 */
3791 public int getSourcePort() {
3792 return fSourcePort;
3793 }
3794
3795 /**
3796 * Getter method that returns the UDP Destination Port.
3797 *
3798 * @return The destination Port.
3799 */
3800 public int getDestinationPort() {
3801 return fDestinationPort;
3802 }
3803
3804 /**
3805 * {@inheritDoc}
 *
 * See http://www.iana.org/assignments/service-names-port-numbers/service-
 * names-port-numbers.xhtml or
 * http://en.wikipedia.org/wiki/List_of_TCP_and_UDP_port_numbers
 */
@Override
protected @Nullable Packet findChildPacket() throws BadPacketException {
    // When more protocols are implemented, we can simply do a switch on the fDestinationPort field to find the child packet.
    // For instance, if the destination port is 80, then chances are the HTTP protocol is encapsulated. We can create a new HTTP
    // packet (after some verification that it is indeed the HTTP protocol).
    ByteBuffer payload = fPayload;
    if (payload == null) {
        return null;
    }

    return new UnknownPacket(getPcapFile(), this, payload);
}

@Override
public boolean validate() {
    // Not yet implemented. ATM, we consider that all packets are valid.
    // TODO Implement it. We can compute the real checksum and compare it to fChecksum.
    return true;
}

@Override
public UDPEndpoint getSourceEndpoint() {
    @Nullable
    UDPEndpoint endpoint = fSourceEndpoint;
    if (endpoint == null) {
        endpoint = new UDPEndpoint(this, true);
    }
    fSourceEndpoint = endpoint;
    return fSourceEndpoint;
}

@Override
public UDPEndpoint getDestinationEndpoint() {
    @Nullable UDPEndpoint endpoint = fDestinationEndpoint;
    if (endpoint == null) {
        endpoint = new UDPEndpoint(this, false);
    }
    fDestinationEndpoint = endpoint;
    return fDestinationEndpoint;
}

@Override
public Map<String, String> getFields() {
    ImmutableMap<String, String> map = fFields;
    if (map == null) {
        @SuppressWarnings("null")
        @NonNull ImmutableMap<String, String> newMap = ImmutableMap.<String, String> builder()
                .put("Source Port", String.valueOf(fSourcePort)) //$NON-NLS-1$
                .put("Destination Port", String.valueOf(fDestinationPort)) //$NON-NLS-1$
                .put("Length", String.valueOf(fTotalLength) + " bytes") //$NON-NLS-1$ //$NON-NLS-2$
                .put("Checksum", String.format("%s%04x", "0x", fChecksum)) //$NON-NLS-1$ //$NON-NLS-2$ //$NON-NLS-3$
                .build();
        fFields = newMap;
        return newMap;
    }
    return map;
}

@Override
public String getLocalSummaryString() {
    return "Src Port: " + fSourcePort + ", Dst Port: " + fDestinationPort; //$NON-NLS-1$ //$NON-NLS-2$
}

@Override
protected String getSignificationString() {
    return "Source Port: " + fSourcePort + ", Destination Port: " + fDestinationPort; //$NON-NLS-1$ //$NON-NLS-2$
}

@Override
public int hashCode() {
    final int prime = 31;
    int result = 1;
    result = prime * result + fChecksum;
    final Packet child = fChildPacket;
    if (child != null) {
        result = prime * result + child.hashCode();
    } else {
        result = prime * result;
    }
    result = prime * result + fDestinationPort;
    final ByteBuffer payload = fPayload;
    if (payload != null) {
        result = prime * result + payload.hashCode();
    } else {
        result = prime * result;
    }
    result = prime * result + fSourcePort;
    result = prime * result + fTotalLength;
    return result;
}

@Override
public boolean equals(@Nullable Object obj) {
    if (this == obj) {
        return true;
    }
    if (obj == null) {
        return false;
    }
    if (getClass() != obj.getClass()) {
        return false;
    }
    UDPPacket other = (UDPPacket) obj;
    if (fChecksum != other.fChecksum) {
        return false;
    }
    final Packet child = fChildPacket;
    if (child != null) {
        if (!child.equals(other.fChildPacket)) {
            return false;
        }
    } else {
        if (other.fChildPacket != null) {
            return false;
        }
    }
    if (fDestinationPort != other.fDestinationPort) {
        return false;
    }
    final ByteBuffer payload = fPayload;
    if (payload != null) {
        if (!payload.equals(other.fPayload)) {
            return false;
        }
    } else {
        if (other.fPayload != null) {
            return false;
        }
    }
    if (fSourcePort != other.fSourcePort) {
        return false;
    }
    if (fTotalLength != other.fTotalLength) {
        return false;
    }
    return true;
}
</pre>
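The validate() method above is left as a stub: as its TODO notes, a real implementation could compute the datagram's checksum and compare it to fChecksum. As a rough illustration, here is a self-contained sketch of the UDP checksum algorithm from RFC 768 (ones'-complement sum over an IPv4 pseudo-header plus the datagram). The class name and structure are ours for illustration, not part of the pcap core:

```java
// Hypothetical helper class sketching the RFC 768 UDP checksum; not part of the pcap core.
public final class UdpChecksumSketch {

    private UdpChecksumSketch() {
        // Utility class, no instances
    }

    /** Adds a 16-bit word in ones'-complement arithmetic, folding the carry back in. */
    private static int add16(int sum, int word) {
        int result = sum + word;
        return (result & 0xFFFF) + (result >>> 16);
    }

    /**
     * Computes the UDP checksum of a datagram (UDP header + payload, with the
     * checksum field zeroed) over the IPv4 pseudo-header, as per RFC 768.
     */
    public static int udpChecksum(byte[] srcIp, byte[] dstIp, byte[] datagram) {
        int sum = 0;
        // Pseudo-header: source IP, destination IP, protocol number (17), UDP length
        sum = add16(sum, ((srcIp[0] & 0xFF) << 8) | (srcIp[1] & 0xFF));
        sum = add16(sum, ((srcIp[2] & 0xFF) << 8) | (srcIp[3] & 0xFF));
        sum = add16(sum, ((dstIp[0] & 0xFF) << 8) | (dstIp[1] & 0xFF));
        sum = add16(sum, ((dstIp[2] & 0xFF) << 8) | (dstIp[3] & 0xFF));
        sum = add16(sum, 17);
        sum = add16(sum, datagram.length);
        // The datagram itself, 16 bits at a time, zero-padded if the length is odd
        for (int i = 0; i < datagram.length; i += 2) {
            int hi = (datagram[i] & 0xFF) << 8;
            int lo = (i + 1 < datagram.length) ? (datagram[i + 1] & 0xFF) : 0;
            sum = add16(sum, hi | lo);
        }
        // The checksum is the ones' complement of the final sum
        return ~sum & 0xFFFF;
    }
}
```

A validate() implementation could compute this value with the checksum field zeroed and compare it to fChecksum (or, equivalently, check that including the transmitted checksum in the sum yields 0xFFFF).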

The UDPPacket class is implemented. We now have to define the UDPEndpoint.

=== Creating the UDPEndpoint ===

For the UDP protocol, an endpoint is its source or its destination port, depending on whether it is the source endpoint or the destination endpoint. Knowing that, we can create our UDPEndpoint class.

We create in our package a new class named UDPEndpoint that extends ProtocolEndpoint. We also add a field: fPort, which contains the source or destination port. We finally add a constructor public UDPEndpoint(Packet packet, boolean isSourceEndpoint):
* ''Packet'' '''packet''': the packet to build the endpoint from.
* ''boolean'' '''isSourceEndpoint''': whether the endpoint is the source endpoint or the destination endpoint.

We obtain the following unimplemented class:

<pre>
package org.eclipse.linuxtools.pcap.core.protocol.udp;

import org.eclipse.linuxtools.internal.pcap.core.endpoint.ProtocolEndpoint;
import org.eclipse.linuxtools.internal.pcap.core.packet.Packet;

public class UDPEndpoint extends ProtocolEndpoint {

    private final int fPort;

    public UDPEndpoint(Packet packet, boolean isSourceEndpoint) {
        super(packet, isSourceEndpoint);
        // TODO Auto-generated constructor stub
    }

    @Override
    public int hashCode() {
        // TODO Auto-generated method stub
        return 0;
    }

    @Override
    public boolean equals(Object obj) {
        // TODO Auto-generated method stub
        return false;
    }

    @Override
    public String toString() {
        // TODO Auto-generated method stub
        return null;
    }

}
</pre>

For the constructor, we simply initialize fPort. If isSourceEndpoint is true, then we take packet.getSourcePort(), else we take packet.getDestinationPort().

<pre>
/**
 * Constructor of the {@link UDPEndpoint} class. It takes a packet to get
 * its endpoint. Since every packet has two endpoints (source and
 * destination), the isSourceEndpoint parameter is used to specify which
 * endpoint to take.
 *
 * @param packet
 *            The packet that contains the endpoints.
 * @param isSourceEndpoint
 *            Whether to take the source or the destination endpoint of the
 *            packet.
 */
public UDPEndpoint(UDPPacket packet, boolean isSourceEndpoint) {
    super(packet, isSourceEndpoint);
    fPort = isSourceEndpoint ? packet.getSourcePort() : packet.getDestinationPort();
}
</pre>

Then we implement the methods:
* ''public int'' '''hashCode()''': method that returns an integer based on the field values. In our case, it will return an integer depending on fPort and on the parent endpoint, which we can retrieve with getParentEndpoint().
* ''public boolean'' '''equals(Object obj)''': method that returns true if two objects are equal. In our case, two UDPEndpoints are equal if they both have the same fPort and the same parent endpoint, which we can retrieve with getParentEndpoint().
* ''public String'' '''toString()''': method that returns a description of the UDPEndpoint as a string. In our case, it will be a concatenation of the string of the parent endpoint and fPort as a string.

<pre>
@Override
public int hashCode() {
    final int prime = 31;
    int result = 1;
    ProtocolEndpoint endpoint = getParentEndpoint();
    if (endpoint == null) {
        result = 0;
    } else {
        result = endpoint.hashCode();
    }
    result = prime * result + fPort;
    return result;
}

@Override
public boolean equals(@Nullable Object obj) {
    if (this == obj) {
        return true;
    }
    if (!(obj instanceof UDPEndpoint)) {
        return false;
    }

    UDPEndpoint other = (UDPEndpoint) obj;

    // Check on layer
    boolean localEquals = (fPort == other.fPort);
    if (!localEquals) {
        return false;
    }

    // Check above layers.
    ProtocolEndpoint endpoint = getParentEndpoint();
    if (endpoint != null) {
        return endpoint.equals(other.getParentEndpoint());
    }
    return true;
}

@Override
public String toString() {
    ProtocolEndpoint endpoint = getParentEndpoint();
    if (endpoint == null) {
        @SuppressWarnings("null")
        @NonNull String ret = String.valueOf(fPort);
        return ret;
    }
    return endpoint.toString() + '/' + fPort;
}
</pre>
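Note how equals() first checks the current layer (fPort) and then delegates to the parent endpoint, so two endpoints are only equal if the whole protocol stack below them matches, and toString() builds the "parent/port" form the same way. The following self-contained sketch (a hypothetical SimpleEndpoint class, not part of the pcap core) shows the same chaining pattern:

```java
// Hypothetical, simplified endpoint mirroring the parent-chaining pattern above.
class SimpleEndpoint {

    private final SimpleEndpoint fParent; // endpoint of the encapsulating protocol, or null
    private final String fId;             // e.g. a MAC address, an IP address or a port

    SimpleEndpoint(SimpleEndpoint parent, String id) {
        fParent = parent;
        fId = id;
    }

    @Override
    public int hashCode() {
        // Combine the parent's hash with this layer's identifier
        int result = (fParent == null) ? 0 : fParent.hashCode();
        return 31 * result + fId.hashCode();
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj) {
            return true;
        }
        if (!(obj instanceof SimpleEndpoint)) {
            return false;
        }
        SimpleEndpoint other = (SimpleEndpoint) obj;
        if (!fId.equals(other.fId)) {
            return false; // check this layer first
        }
        // Then recurse into the layers above
        return (fParent == null) ? other.fParent == null : fParent.equals(other.fParent);
    }

    @Override
    public String toString() {
        return (fParent == null) ? fId : fParent + "/" + fId;
    }
}
```

For instance, chaining an IP-level endpoint and a port yields the "192.168.0.1/80" form. One design detail worth noting: in this sketch, an endpoint with a null parent only equals another endpoint whose parent is also null, which keeps equals() symmetric across stacks of different depths.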

=== Registering the UDP protocol ===

The last step is to register the new protocol. There are three places where the protocol has to be registered. First, the parser has to know that a new protocol has been added. This is defined in the enum org.eclipse.linuxtools.pcap.core.protocol.PcapProtocol. Simply add the protocol name here, along with a few arguments:
* ''String'' '''longName''', which is the long name of the protocol. In our case, it is "User Datagram Protocol".
* ''String'' '''shortName''', which is the shortened name of the protocol. In our case, it is "udp".
* ''Layer'' '''layer''', which is the layer to which the protocol belongs in the OSI model. In our case, this is layer 4.
* ''boolean'' '''supportsStream''', which defines whether or not the protocol supports packet streams. In our case, this is set to true.

Thus, the following line is added in the PcapProtocol enum:
<pre>
UDP("User Datagram Protocol", "udp", Layer.LAYER_4, true),
</pre>

Also, TMF has to know about the new protocol. This is defined in org.eclipse.linuxtools.tmf.pcap.core.protocol.TmfPcapProtocol. We simply add it, with a reference to the corresponding protocol in PcapProtocol. Thus, the following line is added in the TmfPcapProtocol enum:
<pre>
UDP(PcapProtocol.UDP),
</pre>

You will also have to update the ''ProtocolConversion'' class to register the protocol in the switch statements. Thus, for UDP, we add:
<pre>
case UDP:
    return TmfPcapProtocol.UDP;
</pre>
and
<pre>
case UDP:
    return PcapProtocol.UDP;
</pre>
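The two case labels above belong to a pair of switch statements that convert between the enums in both directions. As a self-contained illustration of the pattern (using stand-in enums rather than the real PcapProtocol and TmfPcapProtocol classes), the round trip looks roughly like this:

```java
// Stand-in enums illustrating the two-way mapping done by ProtocolConversion.
enum CoreProtocol { ETHERNET, IPV4, TCP, UDP }
enum TmfProtocol { ETHERNET, IPV4, TCP, UDP }

final class ProtocolConversionSketch {

    // Maps a parser-level protocol to its TMF-level counterpart
    static TmfProtocol wrap(CoreProtocol p) {
        switch (p) {
        case ETHERNET: return TmfProtocol.ETHERNET;
        case IPV4:     return TmfProtocol.IPV4;
        case TCP:      return TmfProtocol.TCP;
        case UDP:      return TmfProtocol.UDP;   // the case added for UDP
        default: throw new IllegalArgumentException(String.valueOf(p));
        }
    }

    // Maps a TMF-level protocol back to the parser-level enum
    static CoreProtocol unwrap(TmfProtocol p) {
        switch (p) {
        case ETHERNET: return CoreProtocol.ETHERNET;
        case IPV4:     return CoreProtocol.IPV4;
        case TCP:      return CoreProtocol.TCP;
        case UDP:      return CoreProtocol.UDP;  // the case added for UDP
        default: throw new IllegalArgumentException(String.valueOf(p));
        }
    }
}
```

Forgetting one of the two cases leaves the mapping one-directional, so it is worth checking that every value survives the round trip when a protocol is added.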

Finally, all the protocols that could be the parent of the new protocol (in our case, IPv4 and IPv6) have to be notified of the new protocol. This is done by modifying the findChildPacket() method of the packet class of those protocols. For instance, in IPv4Packet, we add a case in the switch statement of findChildPacket(), if the protocol number matches UDP's protocol number at the network layer:
<pre>
@Override
protected @Nullable Packet findChildPacket() throws BadPacketException {
    ByteBuffer payload = fPayload;
    if (payload == null) {
        return null;
    }

    switch (fIpDatagramProtocol) {
    case IPProtocolNumberHelper.PROTOCOL_NUMBER_TCP:
        return new TCPPacket(getPcapFile(), this, payload);
    case IPProtocolNumberHelper.PROTOCOL_NUMBER_UDP:
        return new UDPPacket(getPcapFile(), this, payload);
    default:
        return new UnknownPacket(getPcapFile(), this, payload);
    }
}
</pre>

The new protocol has been added. Running TMF should work just fine, and the new protocol is now recognized.

== Adding stream-based views ==

To add a stream-based view, simply monitor the TmfPacketStreamSelectedSignal in your view. It contains the new stream, which you can retrieve with signal.getStream(). You must then make an event request to the current trace to get the events, and use the stream to filter the events of interest. Therefore, you must also monitor TmfTraceOpenedSignal, TmfTraceClosedSignal and TmfTraceSelectedSignal. Examples of stream-based views include a view that represents the packets as a sequence diagram, or one that shows the TCP connection state based on the packets' SYN/ACK/FIN/RST flags. A (very early) draft of such a view can be found at https://git.eclipse.org/r/#/c/31054/.

== TODO ==

* Add more protocols. At the moment, only four protocols are supported. The following protocols would need to be implemented: ARP, SLL, WLAN, USB, IPv6, ICMP, ICMPv6, IGMP, IGMPv6, SCTP, DNS, FTP, HTTP, RTP, SIP, SSH and Telnet. Other VoIP protocols would be nice.
* Add a network graph view. It would be useful to produce graphs that are meaningful to network engineers, and that they could use (for presentation purposes, for instance). We could use the XML-based analysis to do that!
* Add a Stream Diagram view. This view would represent a stream as a sequence diagram. It would be updated when a TmfNewPacketStreamSignal is thrown. It would make it easy to see the packet exchanges and the time delta between each packet. Also, when a packet is selected in the Stream Diagram, it should be selected in the event table and its content should be shown in the Properties view. See https://git.eclipse.org/r/#/c/31054/ for a draft of such a view.
* Make adding protocols more "plugin-ish", via extension points for instance. This would make it easier to support new protocols, without modifying the source code.
* Control dumpcap directly from Eclipse, similar to how LTTng is controlled in the Control View.
* Support pcapng. See http://www.winpcap.org/ntar/draft/PCAP-DumpFileFormat.html for the file format.
* Add SWTBot tests to org.eclipse.linuxtools.tmf.pcap.ui.
* Add a Raw Viewer, similar to Wireshark's. We could use the "Show Raw" option in the event editor to do that.
* Externalize the strings in org.eclipse.linuxtools.pcap.core. At the moment, all the strings are hardcoded. It would be good to externalize them all.