1
2 = Table of Contents =
3
4 __TOC__
5
6 = Introduction =
7
8 The purpose of '''Trace Compass''' is to facilitate the integration of tracing
9 and monitoring tools into Eclipse, to provide out-of-the-box generic
10 functionalities/views and provide extension mechanisms of the base
11 functionalities for application specific purposes.
12
13 This guide goes over the internal components of the Trace Compass framework. It
14 should help developers trying to add new capabilities (support for new trace
15 type, new analysis or views, etc.) to the framework. End-users, using the RCP
16 for example, should not have to worry about the concepts explained here.
17
18 = Implementing a New Trace Type =
19
20 The framework can easily be extended to support more trace types. To make a new
21 trace type, one must define the following items:
22
23 * The event type
24 * The trace reader
25 * The trace context
26 * The trace location
27 * The ''org.eclipse.linuxtools.tmf.core.tracetype'' plug-in extension point
28 * (Optional) The ''org.eclipse.linuxtools.tmf.ui.tracetypeui'' plug-in extension point
29
The '''event type''' must implement the ''ITmfEvent'' interface or extend a class
that implements it, typically ''TmfEvent''. The event type must contain all the
data of an event.
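As a minimal sketch, a custom event type could extend ''TmfEvent'' and carry the trace-specific payload. The field names here are our own illustration, not part of TMF, and the ''TmfEvent'' constructor signature may vary between versions:

<pre>
// Sketch only: a custom event type extending TmfEvent.
// The eventType/payload fields are hypothetical, not prescribed by TMF.
public class NexusEvent extends TmfEvent {

    private final int fEventType; // 6-bit event type
    private final int fPayload;   // 26-bit data payload

    public NexusEvent(ITmfTrace trace, long rank, ITmfTimestamp timestamp,
            ITmfEventType type, ITmfEventField content,
            int eventType, int payload) {
        super(trace, rank, timestamp, type, content);
        fEventType = eventType;
        fPayload = payload;
    }

    public int getEventType() {
        return fEventType;
    }

    public int getPayload() {
        return fPayload;
    }
}
</pre>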
33
34 The '''trace reader''' must be of an ''ITmfTrace'' type. The ''TmfTrace'' class
35 will supply many background operations so that the reader only needs to
36 implement certain functions.
37
The '''trace context''' can be seen as the internals of an iterator. It is
required by the trace reader to parse events as it iterates the trace and to
keep track of its rank and location. It can hold a timestamp, a rank, a file
position, or any other element; it should be considered ephemeral.
42
The '''trace location''' is an element that is cloned often to store
checkpoints, and it is generally persistent. It is used to rebuild a context,
so it needs to contain enough information to unambiguously point to one
and only one event. Finally, the ''tracetype'' plug-in extension associates a
given trace, non-programmatically, with a trace type for use in the UI.
48
49 == Optional Trace Type Attributes ==
50
After defining the trace type as described in the previous sections, it is
possible to define optional attributes for the trace type.
53
54 === Default Editor ===
55
56 The '''defaultEditor''' attribute of the '''org.eclipse.linuxtools.tmf.ui.tracetypeui'''
57 extension point allows for configuring the editor to use for displaying the
58 events. If omitted, the ''TmfEventsEditor'' is used as default.
59
To configure an editor, first add the '''defaultEditor''' attribute to the trace
type in the extension definition. This can be done by selecting the trace type
in the plug-in manifest editor, then right-clicking and selecting
'''New -> defaultEditor''' in the context-sensitive menu. Then select the newly
added attribute. Now you can specify the editor id to use on the right side of
the manifest editor. For example, this attribute could be used to implement an
extension of the class ''org.eclipse.ui.part.MultiPageEditor''. The first page
could use the ''TmfEventsEditor'' to display the events in a table as usual and
other pages could display other aspects of the trace.
69
70 === Events Table Type ===
71
72 The '''eventsTableType''' attribute of the '''org.eclipse.linuxtools.tmf.ui.tracetypeui'''
73 extension point allows for configuring the events table class to use in the
74 default events editor. If omitted, the default events table will be used.
75
To configure a trace type specific events table, first add the
'''eventsTableType''' attribute to the trace type in the extension definition.
This can be done by selecting the trace type in the plug-in manifest editor,
then right-clicking and selecting '''New -> eventsTableType''' in the
context-sensitive menu. Then select the newly added attribute and click on
''class'' on the right side of the manifest editor. The new class wizard will
open, with the ''superclass'' field already filled in with the class ''org.eclipse.tracecompass.tmf.ui.viewers.events.TmfEventsTable''.
83
84 By using this attribute, a table with different columns than the default columns
85 can be defined. See the class
86 ''org.eclipse.tracecompass.internal.gdbtrace.ui.views.events.GdbEventsTable''
87 for an example implementation.
88
89 == Other Considerations ==
90
91 Other views and components may provide additional features that are active only
92 when the event or trace type class implements certain additional interfaces.
93
94 === Collapsing of repetitive events ===
95
By implementing the interface
''org.eclipse.tracecompass.tmf.core.event.collapse.ITmfCollapsibleEvent'', the
events table will allow the user to collapse repetitive events by selecting the
menu item '''Collapse Events''' after pressing the right mouse button in the
table.
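A minimal sketch of such an event follows. The interface only requires the ''isCollapsibleWith'' method; the choice of which fields make two events "similar" is our assumption, not prescribed by TMF:

<pre>
public class MyCollapsibleEvent extends TmfEvent implements ITmfCollapsibleEvent {

    @Override
    public boolean isCollapsibleWith(ITmfEvent otherEvent) {
        if (!(otherEvent instanceof MyCollapsibleEvent)) {
            return false;
        }
        // Hypothetical similarity criterion: same type and content,
        // timestamps intentionally ignored
        return getType().equals(otherEvent.getType())
                && getContent().equals(otherEvent.getContent());
    }
}
</pre>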
100
101 == Best Practices ==
102
* Do not load the whole trace into RAM; doing so limits the size of the traces that can be read.
* Reuse as much code as possible; it makes the trace format much easier to maintain.
* Use Eclipse's plug-in manifest editor instead of editing the XML directly.
* Do not forget that Java supports only signed data types; special care may be needed to handle unsigned data.
* If the support for your trace has custom UI elements (like icons, views, etc.), split the core and UI parts into separate plug-ins, named identically except for a ''.core'' or ''.ui'' suffix.
** Implement the ''tmf.core.tracetype'' extension in the core plug-in, and the ''tmf.ui.tracetypeui'' extension in the UI plug-in, if applicable.
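The unsigned-data caveat can be illustrated with plain Java, independent of TMF: widening the value into the next larger signed type (or using the standard library's unsigned helpers) recovers the unsigned interpretation of bytes read from a trace.

```java
class UnsignedDemo {
    /** Interpret a byte read from a trace as an unsigned 8-bit value. */
    static int unsignedByte(byte b) {
        return b & 0xFF;
    }

    /** Interpret an int read from a trace as an unsigned 32-bit value. */
    static long unsignedInt(int i) {
        return Integer.toUnsignedLong(i);
    }

    public static void main(String[] args) {
        byte raw = (byte) 0xF0;                       // -16 as a signed byte
        System.out.println(unsignedByte(raw));        // 240
        System.out.println(unsignedInt(0xFFFFFFFF));  // 4294967295
    }
}
```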
109
110 == An Example: Nexus-lite parser ==
111
112 === Description of the file ===
113
This is a very small subset of the Nexus trace format, with some changes to make
it easier to read. There is one file. This file starts with 64 strings
containing the event names, followed by an arbitrarily large number of events.
Each event is 64 bits long: the first 32 bits are the timestamp in
microseconds, and the second 32 bits are split into 6 bits for the event type
and 26 bits for the data payload.
120
The trace type is made of two parts. Part 1 is the event description: 64
strings, comma-separated and followed by a line feed.
123
124 <pre>
125 Startup,Stop,Load,Add, ... ,reserved\n
126 </pre>
127
Then there will be the events, in this format:
129
130 {| width= "85%"
131 |style="width: 50%; background-color: #ffffcc;"|timestamp (32 bits)
132 |style="width: 10%; background-color: #ffccff;"|type (6 bits)
133 |style="width: 40%; background-color: #ccffcc;"|payload (26 bits)
134 |-
135 |style="background-color: #ffcccc;" colspan="3"|64 bits total
136 |}
137
All events are the same size (64 bits).
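The layout above can be decoded with plain bit operations. The following self-contained sketch assumes the timestamp occupies the upper 32 bits and the type sits above the payload in the lower 32 bits (the format description does not fix the bit order, so this ordering is our assumption; the class and method names are ours too):

```java
class NexusEventWord {
    /** Assumed layout: timestamp in the upper 32 bits. */
    static long timestampMicros(long word) {
        return (word >>> 32) & 0xFFFFFFFFL;
    }

    /** 6-bit event type, assumed to be the top bits of the lower word. */
    static int eventType(long word) {
        return (int) ((word >>> 26) & 0x3F);
    }

    /** 26-bit data payload in the lowest bits. */
    static int payload(long word) {
        return (int) (word & 0x03FFFFFF);
    }

    public static void main(String[] args) {
        // Build a word: timestamp = 1000 us, type = 2, payload = 42
        long word = (1000L << 32) | (2L << 26) | 42L;
        System.out.println(timestampMicros(word)); // 1000
        System.out.println(eventType(word));       // 2
        System.out.println(payload(word));         // 42
    }
}
```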
139
140 === NexusLite Plug-in ===
141
Select '''New''' -> '''Project...''' -> '''Plug-in Project''', set the title to
'''com.example.nexuslite''', click '''Next >''' and then '''Finish'''.
144
145 Now the structure for the Nexus trace Plug-in is set up.
146
Add a dependency to TMF core and UI by opening the '''MANIFEST.MF''' file in
'''META-INF''', selecting the '''Dependencies''' tab, clicking '''Add ...''' and
choosing '''org.eclipse.tracecompass.tmf.core''' and '''org.eclipse.tracecompass.tmf.ui'''.
150
151 [[Image:images/NTTAddDepend.png]]<br>
152 [[Image:images/NTTSelectProjects.png]]<br>
153
154 Now the project can access TMF classes.
155
156 === Trace Event ===
157
158 The '''TmfEvent''' class will work for this example. No code required.
159
160 === Trace Reader ===
161
The trace reader will extend the '''TmfTrace''' class.

It will need to implement:

* validate (is the trace format valid?)
* initTrace (called as the trace is opened)
* seekEvent (go to a position in the trace and create a context)
* getNext (implemented in the base class)
* parseEvent (read the next element in the trace)
175
176 For reference, there is an example implementation of the Nexus Trace file in
177 org.eclipse.tracecompass.tracing.examples.core.trace.nexus.NexusTrace.java.
178
179 In this example, the '''validate''' function first checks if the file
180 exists, then makes sure that it is really a file, and not a directory. Then we
181 attempt to read the file header, to make sure that it is really a Nexus Trace.
182 If that check passes, we return a TraceValidationStatus with a confidence of 20.
183
Typically, TraceValidationStatus confidences should range from 1 to 100: 1
meaning "there is a very small chance that this trace is of this type", and 100
meaning "it is this type for sure, and cannot be anything else". At run-time,
the auto-detection will pick the type which returned the highest confidence, so
checks of the type "does the file exist?" should not return too high a
confidence. If a confidence of 0 is returned, the auto-detection will not pick
this type.
190
191 Here we used a confidence of 20, to leave "room" for more specific trace types
192 in the Nexus format that could be defined in TMF.
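A ''validate'' implementation along these lines could look as follows. This is a sketch under the assumptions of this example; ''hasValidHeader'' and ''PLUGIN_ID'' are hypothetical stand-ins, and the NexusTrace example class remains the reference implementation:

<pre>
// Sketch of validate() returning a confidence level.
@Override
public IStatus validate(IProject project, String path) {
    File file = new File(path);
    if (!file.exists() || !file.isFile()) {
        return new Status(IStatus.ERROR, PLUGIN_ID, "File does not exist or is a directory");
    }
    if (!hasValidHeader(file)) { // hypothetical helper reading the 64 event names
        return new Status(IStatus.ERROR, PLUGIN_ID, "Not a Nexus trace");
    }
    // Confidence 20: leave room for more specific Nexus-based trace types
    return new TraceValidationStatus(20, PLUGIN_ID);
}
</pre>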
193
194 The '''initTrace''' function will read the event names, and find where the data
195 starts. After this, the number of events is known, and since each event is 8
196 bytes long according to the specs, the seek is then trivial.
197
198 The '''seek''' here will just reset the reader to the right location.
199
200 The '''parseEvent''' method needs to parse and return the current event and
201 store the current location.
202
The '''getNext''' method (in the base class) will read the next event and update
the context. It calls the '''parseEvent''' method to read the event and update
the location. It does not need to be overridden, and in this example it is not.
The sequence of actions is: parse the next event from the trace, create an
'''ITmfEvent''' with that data, update the current location, call
'''updateAttributes''', update the context, then return the event.
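A '''parseEvent''' body following that sequence could be sketched as below. This is a sketch only: ''readEventWord'', ''lookupType'', ''buildContent'' and the ''fCurrentLocation'' field are hypothetical helpers standing in for the actual file access code, and the ''TmfEvent''/''TmfTimestamp'' constructor signatures may differ between TMF versions:

<pre>
@Override
public ITmfEvent parseEvent(ITmfContext context) {
    long rank = context.getRank();
    if (rank >= getNbEvents()) {
        return null; // past the end of the trace
    }
    long word = readEventWord(rank); // hypothetical helper: read the 8-byte event
    ITmfTimestamp timestamp = new TmfTimestamp(word >>> 32, ITmfTimestamp.MICROSECOND_SCALE);
    // Store the current location so getCurrentLocation() can report it
    fCurrentLocation = new TmfLongLocation(rank); // hypothetical field
    // lookupType() and buildContent() map the 6-bit type to an event name
    // and wrap the 26-bit payload as an event field
    return new TmfEvent(this, rank, timestamp, lookupType(word), buildContent(word));
}
</pre>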
209
210 Traces will typically implement an index, to make seeking faster. The index can
211 be rebuilt every time the trace is opened. Alternatively, it can be saved to
212 disk, to make future openings of the same trace quicker. To do so, the trace
213 object can implement the '''ITmfPersistentlyIndexable''' interface.
214
215 === Trace Context ===
216
The trace context will be a '''TmfContext'''.
218
219 === Trace Location ===
220
The trace location will be a long, representing the rank in the file. The
'''TmfLongLocation''' will be used; once again, no code is required.
223
224 === The ''org.eclipse.linuxtools.tmf.core.tracetype'' and ''org.eclipse.linuxtools.tmf.ui.tracetypeui'' plug-in extension points ===
225
You should use the ''tmf.core.tracetype'' extension point in your own plug-in.
In this example, the Nexus trace plug-in will be modified.
228
229 The '''plugin.xml''' file in the ui plug-in needs to be updated if one wants
230 users to access the given event type. It can be updated in the Eclipse plug-in
231 editor.
232
# In the Extensions tab, add the '''org.eclipse.linuxtools.tmf.core.tracetype''' extension point.
234 [[Image:images/NTTExtension.png]]<br>
235 [[Image:images/NTTTraceType.png]]<br>
236 [[Image:images/NTTExtensionPoint.png]]<br>
237
# Add a new type in the '''org.eclipse.linuxtools.tmf.core.tracetype''' extension. To do that, '''right click''' on the extension, then in the context menu go to '''New >''', '''type'''.
239
240 [[Image:images/NTTAddType.png]]<br>
241
242 The '''id''' is the unique identifier used to refer to the trace.
243
244 The '''name''' is the field that shall be displayed when a trace type is selected.
245
The '''trace type''' is the canonical path referring to the class of the trace.
247
The '''event type''' is the canonical path referring to the class of the events of a given trace.
249
250 The '''category''' (optional) is the container in which this trace type will be stored.
251
252 # (Optional) To also add UI-specific properties to your trace type, use the '''org.eclipse.linuxtools.tmf.ui.tracetypeui''' extension. To do that, '''right click''' on the extension then in the context menu, go to '''New >''', '''type'''.
253
254 The '''tracetype''' here is the '''id''' of the
255 ''org.eclipse.linuxtools.tmf.core.tracetype'' mentioned above.
256
257 The '''icon''' is the image to associate with that trace type.
258
259 In the end, the extension menu should look like this.
260
261 [[Image:images/NTTPluginxmlComplete.png]]<br>
262
263 = View Tutorial =
264
265 This tutorial describes how to create a simple view using the TMF framework and the SWTChart library. SWTChart is a library based on SWT that can draw several types of charts including a line chart which we will use in this tutorial. We will create a view containing a line chart that displays time stamps on the X axis and the corresponding event values on the Y axis.
266
267 This tutorial will cover concepts like:
268
269 * Extending TmfView
270 * Signal handling (@TmfSignalHandler)
271 * Data requests (TmfEventRequest)
272 * SWTChart integration
273
274 '''Note''': Trace Compass 0.1.0 provides base implementations for generating SWTChart viewers and views. For more details please refer to chapter [[#TMF Built-in Views and Viewers]].
275
276 === Prerequisites ===
277
The tutorial is based on Eclipse 4.4 (Eclipse Luna), Trace Compass 0.1.0 and SWTChart 0.7.0. If you are using TMF from the source repository, SWTChart is already included in the target definition file (see org.eclipse.tracecompass.target). You can also install it manually from the Orbit update site: http://download.eclipse.org/tools/orbit/downloads/
279
280 === Creating an Eclipse UI Plug-in ===
281
282 To create a new project with name org.eclipse.tracecompass.tmf.sample.ui select '''File -> New -> Project -> Plug-in Development -> Plug-in Project'''. <br>
283 [[Image:images/Screenshot-NewPlug-inProject1.png]]<br>
284
285 [[Image:images/Screenshot-NewPlug-inProject2.png]]<br>
286
287 [[Image:images/Screenshot-NewPlug-inProject3.png]]<br>
288
289 === Creating a View ===
290
291 To open the plug-in manifest, double-click on the MANIFEST.MF file. <br>
292 [[Image:images/SelectManifest.png]]<br>
293
Change to the Dependencies tab and select '''Add...''' in the ''Required Plug-ins'' section. A new dialog box will open. Next find the plug-in ''org.eclipse.tracecompass.tmf.core'' and press '''OK'''.<br>
295 Following the same steps, add ''org.eclipse.tracecompass.tmf.ui'' and ''org.swtchart''.<br>
296 [[Image:images/AddDependencyTmfUi.png]]<br>
297
Change to the Extensions tab and select '''Add...''' in the ''All Extensions'' section. A new dialog box will open. Find the view extension ''org.eclipse.ui.views'' and press '''Finish'''.<br>
299 [[Image:images/AddViewExtension1.png]]<br>
300
To create a view, right-click on the extension, then select '''New -> view'''.<br>
302 [[Image:images/AddViewExtension2.png]]<br>
303
A new view entry has been created. Fill in the fields ''id'' and ''name''. For ''class'', click on the '''class hyperlink''' and the New Java Class dialog will open. Enter the name ''SampleView'', change the superclass to ''TmfView'' and click Finish. This will create the source file and fill in the ''class'' field in the process. We use TmfView as the superclass because it provides extra functionality, such as getting the active trace, pinning, and support for signal handling between components.<br>
305 [[Image:images/FillSampleViewExtension.png]]<br>
306
307 This will generate an empty class. Once the quick fixes are applied, the following code is obtained:
308
<pre>
package org.eclipse.tracecompass.tmf.sample.ui;

import org.eclipse.swt.widgets.Composite;
import org.eclipse.tracecompass.tmf.ui.views.TmfView;

public class SampleView extends TmfView {

    public SampleView(String viewName) {
        super(viewName);
        // TODO Auto-generated constructor stub
    }

    @Override
    public void createPartControl(Composite parent) {
        // TODO Auto-generated method stub
    }

    @Override
    public void setFocus() {
        // TODO Auto-generated method stub
    }

}
</pre>
336
This creates an empty view; however, the basic structure is now in place.
338
339 === Implementing a view ===
340
We will start by adding an empty chart; then it will need to be populated with the trace data. Finally, we will make the chart more visually pleasing by adjusting the range and formatting the time stamps.
342
343 ==== Adding an Empty Chart ====
344
345 First, we can add an empty chart to the view and initialize some of its components.
346
<pre>
private static final String SERIES_NAME = "Series";
private static final String Y_AXIS_TITLE = "Signal";
private static final String X_AXIS_TITLE = "Time";
private static final String FIELD = "value"; // The name of the field that we want to display on the Y axis
private static final String VIEW_ID = "org.eclipse.tracecompass.tmf.sample.ui.view";
private Chart chart;
private ITmfTrace currentTrace;

public SampleView() {
    super(VIEW_ID);
}

@Override
public void createPartControl(Composite parent) {
    chart = new Chart(parent, SWT.BORDER);
    chart.getTitle().setVisible(false);
    chart.getAxisSet().getXAxis(0).getTitle().setText(X_AXIS_TITLE);
    chart.getAxisSet().getYAxis(0).getTitle().setText(Y_AXIS_TITLE);
    chart.getSeriesSet().createSeries(SeriesType.LINE, SERIES_NAME);
    chart.getLegend().setVisible(false);
}

@Override
public void setFocus() {
    chart.setFocus();
}
</pre>
375
The view is prepared. Run the example: to launch an Eclipse application, select the ''Overview'' tab and click on '''Launch an Eclipse Application'''.<br>
377 [[Image:images/RunEclipseApplication.png]]<br>
378
A new Eclipse application window will open. In the new window, go to '''Window -> Show View -> Other... -> Other -> Sample View'''.<br>
380 [[Image:images/ShowViewOther.png]]<br>
381
You should now see a view containing an empty chart.<br>
383 [[Image:images/EmptySampleView.png]]<br>
384
385 ==== Signal Handling ====
386
We would like to populate the view when a trace is selected. To achieve this, we can use a signal handler, which is specified with the '''@TmfSignalHandler''' annotation.
388
389 <pre>
390 @TmfSignalHandler
391 public void traceSelected(final TmfTraceSelectedSignal signal) {
392
393 }
394 </pre>
395
396 ==== Requesting Data ====
397
Then we need to actually gather data from the trace. This is done asynchronously using a ''TmfEventRequest''.
399
<pre>
@TmfSignalHandler
public void traceSelected(final TmfTraceSelectedSignal signal) {
    // Don't populate the view again if we're already showing this trace
    if (currentTrace == signal.getTrace()) {
        return;
    }
    currentTrace = signal.getTrace();

    // Create the request to get data from the trace
    TmfEventRequest req = new TmfEventRequest(TmfEvent.class,
            TmfTimeRange.ETERNITY, 0, ITmfEventRequest.ALL_DATA,
            ITmfEventRequest.ExecutionType.BACKGROUND) {

        @Override
        public void handleData(ITmfEvent data) {
            // Called for each event
            super.handleData(data);
        }

        @Override
        public void handleSuccess() {
            // Request successful, no more data available
            super.handleSuccess();
        }

        @Override
        public void handleFailure() {
            // Request failed, no more data available
            super.handleFailure();
        }
    };
    ITmfTrace trace = signal.getTrace();
    trace.sendRequest(req);
}
</pre>
437
438 ==== Transferring Data to the Chart ====
439
The chart expects an array of doubles for both the X and Y axis values. To provide that, we can accumulate each event's time and value in their respective lists, then convert the lists to arrays when all events are processed.
441
<pre>
TmfEventRequest req = new TmfEventRequest(TmfEvent.class,
        TmfTimeRange.ETERNITY, 0, ITmfEventRequest.ALL_DATA,
        ITmfEventRequest.ExecutionType.BACKGROUND) {

    ArrayList<Double> xValues = new ArrayList<Double>();
    ArrayList<Double> yValues = new ArrayList<Double>();

    @Override
    public void handleData(ITmfEvent data) {
        // Called for each event
        super.handleData(data);
        ITmfEventField field = data.getContent().getField(FIELD);
        if (field != null) {
            yValues.add((Double) field.getValue());
            xValues.add((double) data.getTimestamp().getValue());
        }
    }

    @Override
    public void handleSuccess() {
        // Request successful, no more data available
        super.handleSuccess();

        final double x[] = toArray(xValues);
        final double y[] = toArray(yValues);

        // This part needs to run on the UI thread since it updates the chart SWT control
        Display.getDefault().asyncExec(new Runnable() {

            @Override
            public void run() {
                chart.getSeriesSet().getSeries()[0].setXSeries(x);
                chart.getSeriesSet().getSeries()[0].setYSeries(y);

                chart.redraw();
            }
        });
    }

    /**
     * Convert List<Double> to double[]
     */
    private double[] toArray(List<Double> list) {
        double[] d = new double[list.size()];
        for (int i = 0; i < list.size(); ++i) {
            d[i] = list.get(i);
        }
        return d;
    }
};
</pre>
496
497 ==== Adjusting the Range ====
498
499 The chart now contains values but they might be out of range and not visible. We can adjust the range of each axis by computing the minimum and maximum values as we add events.
500
<pre>
ArrayList<Double> xValues = new ArrayList<Double>();
ArrayList<Double> yValues = new ArrayList<Double>();
private double maxY = -Double.MAX_VALUE;
private double minY = Double.MAX_VALUE;
private double maxX = -Double.MAX_VALUE;
private double minX = Double.MAX_VALUE;

@Override
public void handleData(ITmfEvent data) {
    super.handleData(data);
    ITmfEventField field = data.getContent().getField(FIELD);
    if (field != null) {
        Double yValue = (Double) field.getValue();
        minY = Math.min(minY, yValue);
        maxY = Math.max(maxY, yValue);
        yValues.add(yValue);

        double xValue = (double) data.getTimestamp().getValue();
        xValues.add(xValue);
        minX = Math.min(minX, xValue);
        maxX = Math.max(maxX, xValue);
    }
}

@Override
public void handleSuccess() {
    super.handleSuccess();
    final double x[] = toArray(xValues);
    final double y[] = toArray(yValues);

    // This part needs to run on the UI thread since it updates the chart SWT control
    Display.getDefault().asyncExec(new Runnable() {

        @Override
        public void run() {
            chart.getSeriesSet().getSeries()[0].setXSeries(x);
            chart.getSeriesSet().getSeries()[0].setYSeries(y);

            // Set the new range
            if (!xValues.isEmpty() && !yValues.isEmpty()) {
                chart.getAxisSet().getXAxis(0).setRange(new Range(0, x[x.length - 1]));
                chart.getAxisSet().getYAxis(0).setRange(new Range(minY, maxY));
            } else {
                chart.getAxisSet().getXAxis(0).setRange(new Range(0, 1));
                chart.getAxisSet().getYAxis(0).setRange(new Range(0, 1));
            }
            chart.getAxisSet().adjustRange();

            chart.redraw();
        }
    });
}
</pre>
556
557 ==== Formatting the Time Stamps ====
558
To display the time stamps on the X axis nicely, we need to specify a format, or else the time stamps will be displayed as raw ''long'' values. We use TmfTimestampFormat to make it consistent with the other TMF views. We also need to handle the '''TmfTimestampFormatUpdateSignal''' to make sure that the time stamps update when the preferences change.
560
<pre>
@Override
public void createPartControl(Composite parent) {
    ...

    chart.getAxisSet().getXAxis(0).getTick().setFormat(new TmfChartTimeStampFormat());
}

public class TmfChartTimeStampFormat extends SimpleDateFormat {
    private static final long serialVersionUID = 1L;

    @Override
    public StringBuffer format(Date date, StringBuffer toAppendTo, FieldPosition fieldPosition) {
        long time = date.getTime();
        toAppendTo.append(TmfTimestampFormat.getDefaulTimeFormat().format(time));
        return toAppendTo;
    }
}

@TmfSignalHandler
public void timestampFormatUpdated(TmfTimestampFormatUpdateSignal signal) {
    // Called when the time stamp preference is changed
    chart.getAxisSet().getXAxis(0).getTick().setFormat(new TmfChartTimeStampFormat());
    chart.redraw();
}
</pre>
586
587 We also need to populate the view when a trace is already selected and the view is opened. We can reuse the same code by having the view send the '''TmfTraceSelectedSignal''' to itself.
588
<pre>
@Override
public void createPartControl(Composite parent) {
    ...

    ITmfTrace trace = getActiveTrace();
    if (trace != null) {
        traceSelected(new TmfTraceSelectedSignal(this, trace));
    }
}
</pre>
600
601 The view is now ready but we need a proper trace to test it. For this example, a trace was generated using LTTng-UST so that it would produce a sine function.<br>
602
603 [[Image:images/SampleView.png]]<br>
604
In summary, we have implemented a simple TMF view using the SWTChart library. We made use of signals and requests to populate the view at the appropriate time, and we formatted the time stamps nicely. We also made sure that the time stamp format is updated when the preferences change.
606
607 == TMF Built-in Views and Viewers ==
608
TMF provides base implementations for several types of views and viewers for generating custom X-Y-Charts, Time Graphs, or Trees. They are well integrated with various TMF features such as reading traces and time synchronization with other views. They also handle mouse events for navigating the trace and view, zooming, or presenting detailed information at the mouse position. The code can be found in the TMF UI plug-in ''org.eclipse.tracecompass.tmf.ui''. See below for a list of relevant Java packages:
610
611 * Generic
612 ** ''org.eclipse.tracecompass.tmf.ui.views'': Common TMF view base classes
613 * X-Y-Chart
614 ** ''org.eclipse.tracecompass.tmf.ui.viewers.xycharts'': Common base classes for X-Y-Chart viewers based on SWTChart
615 ** ''org.eclipse.tracecompass.tmf.ui.viewers.xycharts.barcharts'': Base classes for bar charts
616 ** ''org.eclipse.tracecompass.tmf.ui.viewers.xycharts.linecharts'': Base classes for line charts
617 * Time Graph View
** ''org.eclipse.tracecompass.tmf.ui.widgets.timegraph'': Base classes for time graphs, e.g. Gantt charts
619 * Tree Viewer
620 ** ''org.eclipse.tracecompass.tmf.ui.viewers.tree'': Base classes for TMF specific tree viewers
621
Several features in TMF and the Eclipse LTTng integration use this framework and can serve as examples for further development:
* X-Y-Chart
624 ** ''org.eclipse.tracecompass.internal.lttng2.ust.ui.views.memusage.MemUsageView.java''
625 ** ''org.eclipse.tracecompass.analysis.os.linux.ui.views.cpuusage.CpuUsageView.java''
626 ** ''org.eclipse.tracecompass.tracing.examples.ui.views.histogram.NewHistogramView.java''
627 * Time Graph View
628 ** ''org.eclipse.tracecompass.analysis.os.linux.ui.views.controlflow.ControlFlowView.java''
629 ** ''org.eclipse.tracecompass.analysis.os.linux.ui.views.resources.ResourcesView.java''
630 * Tree Viewer
631 ** ''org.eclipse.tracecompass.tmf.ui.views.statesystem.TmfStateSystemExplorer.java''
632 ** ''org.eclipse.tracecompass.analysis.os.linux.ui.views.cpuusage.CpuUsageComposite.java''
633
634 = Component Interaction =
635
636 TMF provides a mechanism for different components to interact with each other using signals. The signals can carry information that is specific to each signal.
637
638 The TMF Signal Manager handles registration of components and the broadcasting of signals to their intended receivers.
639
640 Components can register as VIP receivers which will ensure they will receive the signal before non-VIP receivers.
641
642 == Sending Signals ==
643
644 In order to send a signal, an instance of the signal must be created and passed as argument to the signal manager to be dispatched. Every component that can handle the signal will receive it. The receivers do not need to be known by the sender.
645
646 <pre>
647 TmfExampleSignal signal = new TmfExampleSignal(this, ...);
648 TmfSignalManager.dispatchSignal(signal);
649 </pre>
650
651 If the sender is an instance of the class TmfComponent, the broadcast method can be used:
652
653 <pre>
654 TmfExampleSignal signal = new TmfExampleSignal(this, ...);
655 broadcast(signal);
656 </pre>
657
658 == Receiving Signals ==
659
660 In order to receive any signal, the receiver must first be registered with the signal manager. The receiver can register as a normal or VIP receiver.
661
662 <pre>
663 TmfSignalManager.register(this);
664 TmfSignalManager.registerVIP(this);
665 </pre>
666
667 If the receiver is an instance of the class TmfComponent, it is automatically registered as a normal receiver in the constructor.
668
669 When the receiver is destroyed or disposed, it should deregister itself from the signal manager.
670
671 <pre>
672 TmfSignalManager.deregister(this);
673 </pre>
674
675 To actually receive and handle any specific signal, the receiver must use the @TmfSignalHandler annotation and implement a method that will be called when the signal is broadcast. The name of the method is irrelevant.
676
677 <pre>
678 @TmfSignalHandler
679 public void example(TmfExampleSignal signal) {
680 ...
681 }
682 </pre>
683
The source of the signal can be used, if necessary, by a component to filter out a signal that it broadcast itself. This is useful when the component is also a receiver of the signal but only needs to handle it when it was sent by another component or another instance of the component.
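A handler can do this by comparing the signal's source against itself. A sketch, using the placeholder ''TmfExampleSignal'' class from the examples above:

<pre>
@TmfSignalHandler
public void example(TmfExampleSignal signal) {
    if (signal.getSource() == this) {
        // Ignore signals that this component broadcast itself
        return;
    }
    // Handle signals sent by other components or instances
    ...
}
</pre>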
685
686 == Signal Throttling ==
687
It is possible for a TmfComponent instance to buffer the dispatching of signals so that, after a specified delay with no newer signal queued, only the last queued signal is sent to the receivers. All signals that are preempted by a newer signal within the delay are discarded.
689
690 The signal throttler must first be initialized:
691
692 <pre>
693 final int delay = 100; // in ms
694 TmfSignalThrottler throttler = new TmfSignalThrottler(this, delay);
695 </pre>
696
697 Then the sending of signals should be queued through the throttler:
698
699 <pre>
700 TmfExampleSignal signal = new TmfExampleSignal(this, ...);
701 throttler.queue(signal);
702 </pre>
703
704 When the throttler is no longer needed, it should be disposed:
705
706 <pre>
707 throttler.dispose();
708 </pre>
709
710 == Signal Reference ==
711
712 The following is a list of built-in signals defined in the framework.
713
714 === TmfStartSynchSignal ===
715
716 ''Purpose''
717
718 This signal is used to indicate the start of broadcasting of a signal. Internally, the data provider will not fire event requests until the corresponding TmfEndSynchSignal signal is received. This allows coalescing of requests triggered by multiple receivers of the broadcast signal.
719
720 ''Senders''
721
722 Sent by TmfSignalManager before dispatching a signal to all receivers.
723
724 ''Receivers''
725
726 Received by TmfDataProvider.
727
728 === TmfEndSynchSignal ===
729
730 ''Purpose''
731
This signal is used to indicate the end of broadcasting of a signal. Internally, the data provider fires all pending event requests that were received and buffered since the corresponding TmfStartSynchSignal was received. This allows coalescing of requests triggered by multiple receivers of the broadcast signal.
733
734 ''Senders''
735
736 Sent by TmfSignalManager after dispatching a signal to all receivers.
737
738 ''Receivers''
739
740 Received by TmfDataProvider.
741
742 === TmfTraceOpenedSignal ===
743
744 ''Purpose''
745
746 This signal is used to indicate that a trace has been opened in an editor.
747
748 ''Senders''
749
750 Sent by a TmfEventsEditor instance when it is created.
751
752 ''Receivers''
753
754 Received by TmfTrace, TmfExperiment, TmfTraceManager and every view that shows trace data. Components that show trace data should handle this signal.
755
756 === TmfTraceSelectedSignal ===
757
758 ''Purpose''
759
760 This signal is used to indicate that a trace has become the currently selected trace.
761
762 ''Senders''
763
Sent by a TmfEventsEditor instance when it receives focus. Components can send this signal to bring a trace editor to the front.
765
766 ''Receivers''
767
768 Received by TmfTraceManager and every view that shows trace data. Components that show trace data should handle this signal.
769
770 === TmfTraceClosedSignal ===
771
772 ''Purpose''
773
774 This signal is used to indicate that a trace editor has been closed.
775
776 ''Senders''
777
778 Sent by a TmfEventsEditor instance when it is disposed.
779
780 ''Receivers''
781
782 Received by TmfTraceManager and every view that shows trace data. Components that show trace data should handle this signal.
783
784 === TmfTraceRangeUpdatedSignal ===
785
786 ''Purpose''
787
788 This signal is used to indicate that the valid time range of a trace has been updated. This triggers indexing of the trace up to the end of the range. In the context of streaming, this end time is considered a safe time up to which all events are guaranteed to have been completely received. For non-streaming traces, the end time is set to infinity indicating that all events can be read immediately. Any processing of trace events that wants to take advantage of request coalescing should be triggered by this signal.
789
790 ''Senders''
791
792 Sent by TmfExperiment and non-streaming TmfTrace. Streaming traces should send this signal in the TmfTrace subclass when a new safe time is determined by a specific implementation.
793
794 ''Receivers''
795
796 Received by TmfTrace, TmfExperiment and components that process trace events. Components that need to process trace events should handle this signal.
797
798 === TmfTraceUpdatedSignal ===
799
800 ''Purpose''
801
802 This signal is used to indicate that new events have been indexed for a trace.
803
804 ''Senders''
805
806 Sent by TmfCheckpointIndexer when new events have been indexed and the number of events has changed.
807
808 ''Receivers''
809
810 Received by components that need to be notified of a new trace event count.
811
812 === TmfTimeSynchSignal ===
813
814 ''Purpose''
815
816 This signal is used to indicate that a new time or time range has been
817 selected. It contains a begin and end time. If a single time is selected then
818 the begin and end time are the same.
819
820 ''Senders''
821
822 Sent by any component that allows the user to select a time or time range.
823
824 ''Receivers''
825
826 Received by any component that needs to be notified of the currently selected time or time range.
827
828 === TmfRangeSynchSignal ===
829
830 ''Purpose''
831
832 This signal is used to indicate that a new time range window has been set.
833
834 ''Senders''
835
836 Sent by any component that allows the user to set a time range window.
837
838 ''Receivers''
839
840 Received by any component that needs to be notified of the current visible time range window.
841
842 === TmfEventFilterAppliedSignal ===
843
844 ''Purpose''
845
846 This signal is used to indicate that a filter has been applied to a trace.
847
848 ''Senders''
849
850 Sent by TmfEventsTable when a filter is applied.
851
852 ''Receivers''
853
854 Received by any component that shows trace data and needs to be notified of applied filters.
855
856 === TmfEventSearchAppliedSignal ===
857
858 ''Purpose''
859
860 This signal is used to indicate that a search has been applied to a trace.
861
862 ''Senders''
863
864 Sent by TmfEventsTable when a search is applied.
865
866 ''Receivers''
867
868 Received by any component that shows trace data and needs to be notified of applied searches.
869
870 === TmfTimestampFormatUpdateSignal ===
871
872 ''Purpose''
873
874 This signal is used to indicate that the timestamp format preference has been updated.
875
876 ''Senders''
877
878 Sent by TmfTimestampFormat when the default timestamp format preference is changed.
879
880 ''Receivers''
881
882 Received by any component that needs to refresh its display for the new timestamp format.
883
884 === TmfStatsUpdatedSignal ===
885
886 ''Purpose''
887
888 This signal is used to indicate that the statistics data model has been updated.
889
890 ''Senders''
891
892 Sent by statistic providers when new statistics data has been processed.
893
894 ''Receivers''
895
896 Received by statistics viewers and any component that needs to be notified of a statistics update.
897
898 === TmfPacketStreamSelected ===
899
900 ''Purpose''
901
902 This signal is used to indicate that the user has selected a packet stream to analyze.
903
904 ''Senders''
905
906 Sent by the Stream List View when the user selects a new packet stream.
907
908 ''Receivers''
909
910 Received by views that analyze packet streams.
911
912 == Debugging ==
913
914 TMF has built-in Eclipse tracing support for the debugging of signal interaction between components. To enable it, open the '''Run/Debug Configuration...''' dialog, select a configuration, click the '''Tracing''' tab, select the plug-in '''org.eclipse.tracecompass.tmf.core''', and check the '''signal''' item.
915
916 All signals sent and received will be logged to the file TmfTrace.log located in the Eclipse home directory.
917
918 = Generic State System =
919
920 == Introduction ==
921
922 The Generic State System is a utility available in TMF to track different states
923 over the duration of a trace. It works by first sending some or all events of
924 the trace into a state provider, which defines the state changes for a given
925 trace type. Once built, views and analysis modules can then query the resulting
926 database of states (called "state history") to get information.
927
928 For example, let's suppose we have the following sequence of events in a kernel
929 trace:
930
931 10 s, sys_open, fd = 5, file = /home/user/myfile
932 ...
933 15 s, sys_read, fd = 5, size=32
934 ...
935 20 s, sys_close, fd = 5
936
937 Now let's say we want to implement an analysis module which will track the
amount of bytes read and written to each file. Here, of course, the sys_read
event is the interesting one. However, by just looking at that event, we have no
information on which file is being read; only its fd (5) is known. To get the
match fd 5 = /home/user/myfile, we have to go back to the sys_open event which happens
942 5 seconds earlier.
943
944 But since we don't know exactly where this sys_open event is, we will have to go
945 back to the very start of the trace, and look through events one by one! This is
946 obviously not efficient, and will not scale well if we want to analyze many
947 similar patterns, or for very large traces.
948
949 A solution in this case would be to use the state system to keep track of the
amount of bytes read/written to every ''filename'' (instead of every file
descriptor, like we get from the events). Then the module could ask the state
system "what is the amount of bytes read for file /home/user/myfile at time
16 s", and it would return the answer "32" (assuming there is no other read
954 than the one shown).
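
To make the idea concrete, here is a minimal, self-contained sketch of this kind of state tracking in plain Java. The class and the event layout are invented for illustration (this is not the TMF API): the events are processed once, in order, the fd-to-filename mapping is maintained as the "current state", and the statistic accumulates as we go.

```java
import java.util.HashMap;
import java.util.Map;

// Toy illustration (not TMF API): track bytes read per filename by
// maintaining the fd -> filename mapping as state while scanning once.
public class BytesReadSketch {

    public record Event(long time, String name, int fd, String file, int size) {}

    public static Map<String, Integer> bytesReadPerFile(Event[] events) {
        Map<Integer, String> fdToFile = new HashMap<>();   // current state: fd -> filename
        Map<String, Integer> bytesRead = new HashMap<>();  // accumulated statistic

        for (Event e : events) {
            switch (e.name()) {
            case "sys_open" -> fdToFile.put(e.fd(), e.file());
            case "sys_read" -> bytesRead.merge(fdToFile.get(e.fd()), e.size(), Integer::sum);
            case "sys_close" -> fdToFile.remove(e.fd());
            }
        }
        return bytesRead;
    }

    public static void main(String[] args) {
        Event[] trace = {
            new Event(10, "sys_open", 5, "/home/user/myfile", 0),
            new Event(15, "sys_read", 5, null, 32),
            new Event(20, "sys_close", 5, null, 0),
        };
        System.out.println(bytesReadPerFile(trace).get("/home/user/myfile")); // prints 32
    }
}
```

A real state provider does essentially this, except the "current state" lives in the state system's attributes instead of a local map, so past values remain queryable afterwards.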
955
956 == High-level components ==
957
958 The State System infrastructure is composed of 3 parts:
959 * The state provider
960 * The central state system
961 * The storage backend
962
963 The state provider is the customizable part. This is where the mapping from
964 trace events to state changes is done. This is what you want to implement for
965 your specific trace type and analysis type. It's represented by the
966 ITmfStateProvider interface (with a threaded implementation in
967 AbstractTmfStateProvider, which you can extend).
968
969 The core of the state system is exposed through the ITmfStateSystem and
970 ITmfStateSystemBuilder interfaces. The former allows only read-only access and
971 is typically used for views doing queries. The latter also allows writing to the
972 state history, and is typically used by the state provider.
973
974 Finally, each state system has its own separate backend. This determines how the
intervals, or the "state history", are saved (in RAM, on disk, etc.). You can
976 select the type of backend at construction time in the TmfStateSystemFactory.
977
978 == Definitions ==
979
980 Before we dig into how to use the state system, we should go over some useful
981 definitions:
982
983 === Attribute ===
984
985 An attribute is the smallest element of the model that can be in any particular
986 state. When we refer to the "full state", in fact it means we are interested in
987 the state of every single attribute of the model.
988
989 === Attribute Tree ===
990
991 Attributes in the model can be placed in a tree-like structure, a bit like files
992 and directories in a file system. However, note that an attribute can always
993 have both a value and sub-attributes, so they are like files and directories at
994 the same time. We are then able to refer to every single attribute with its
995 path in the tree.
996
997 For example, in the attribute tree for Linux kernel traces, we use the following
998 attributes, among others:
999
1000 <pre>
1001 |- Processes
1002 | |- 1000
1003 | | |- PPID
1004 | | |- Exec_name
1005 | |- 1001
1006 | | |- PPID
1007 | | |- Exec_name
1008 | ...
1009 |- CPUs
1010 |- 0
1011 | |- Status
1012 | |- Current_pid
1013 ...
1014 </pre>
1015
1016 In this model, the attribute "Processes/1000/PPID" refers to the PPID of process
1017 with PID 1000. The attribute "CPUs/0/Status" represents the status (running,
1018 idle, etc.) of CPU 0. "Processes/1000/PPID" and "Processes/1001/PPID" are two
different attributes, even though their base name is the same: the whole path is
1020 the unique identifier.
1021
1022 The value of each attribute can change over the duration of the trace,
1023 independently of the other ones, and independently of its position in the tree.
1024
The tree-like organization is optional: all attributes could be at the same
level. But it's possible to put them in a tree, and doing so helps make things
clearer.
1028
1029 === Quark ===
1030
1031 In addition to a given path, each attribute also has a unique integer
1032 identifier, called the "quark". To continue with the file system analogy, this
1033 is like the inode number. When a new attribute is created, a new unique quark
1034 will be assigned automatically. They are assigned incrementally, so they will
1035 normally be equal to their order of creation, starting at 0.
1036
1037 Methods are offered to get the quark of an attribute from its path. The API
1038 methods for inserting state changes and doing queries normally use quarks
1039 instead of paths. This is to encourage users to cache the quarks and re-use
them, which avoids re-walking the attribute tree over and over and thus avoids
unneeded hashing of strings.
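
The quark mechanism is essentially string interning. The following self-contained sketch (plain Java, not the TMF classes; the method name mimics getQuarkAbsoluteAndAdd() for illustration) shows the behavior described above: incremental assignment starting at 0, and the same path always mapping back to the same quark.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy illustration (not TMF API): each attribute path gets a unique,
// incrementing integer identifier (its "quark") the first time it is seen.
public class QuarkSketch {

    private final Map<String, Integer> quarks = new HashMap<>();
    private final List<String> paths = new ArrayList<>(); // quark -> path

    /** Mimics getQuarkAbsoluteAndAdd(): creates the attribute if needed. */
    public int getQuarkAndAdd(String... path) {
        String key = String.join("/", path);
        return quarks.computeIfAbsent(key, k -> {
            paths.add(k);
            return paths.size() - 1; // quarks start at 0 and increment
        });
    }

    public static void main(String[] args) {
        QuarkSketch ss = new QuarkSketch();
        int q0 = ss.getQuarkAndAdd("Processes", "1000", "PPID"); // first attribute: 0
        int q1 = ss.getQuarkAndAdd("CPUs", "0", "Status");       // second attribute: 1
        int q2 = ss.getQuarkAndAdd("Processes", "1000", "PPID"); // cached: 0 again
        System.out.println(q0 + " " + q1 + " " + q2); // prints 0 1 0
    }
}
```

Once a view has looked up `q0`, it should keep using the integer directly, which is exactly the caching the paragraph above recommends.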
1042
1043 === State value ===
1044
1045 The path and quark of an attribute will remain constant for the whole duration
1046 of the trace. However, the value carried by the attribute will change. The value
1047 of a specific attribute at a specific time is called the state value.
1048
1049 In the TMF implementation, state values can be integers, longs, doubles, or strings.
1050 There is also a "null value" type, which is used to indicate that no particular
1051 value is active for this attribute at this time, but without resorting to a
1052 'null' reference.
1053
1054 Any other type of value could be used, as long as the backend knows how to store
1055 it.
1056
1057 Note that the TMF implementation also forces every attribute to always carry the
1058 same type of state value. This is to make it simpler for views, so they can
1059 expect that an attribute will always use a given type, without having to check
1060 every single time. Null values are an exception, they are always allowed for all
1061 attributes, since they can safely be "unboxed" into all types.
1062
1063 === State change ===
1064
1065 A state change is the element that is inserted in the state system. It consists
1066 of:
1067 * a timestamp (the time at which the state change occurs)
1068 * an attribute (the attribute whose value will change)
1069 * a state value (the new value that the attribute will carry)
1070
1071 It's not an object per se in the TMF implementation (it's represented by a
1072 function call in the state provider). Typically, the state provider will insert
1073 zero, one or more state changes for every trace event, depending on its event
1074 type, payload, etc.
1075
Note that we use "timestamp" here, but it's in fact a generic term that could
also be called an "index". For example, if a given trace type has no notion of
1078 timestamp, the event rank could be used.
1079
1080 In the TMF implementation, the timestamp is a long (64-bit integer).
1081
1082 === State interval ===
1083
1084 State changes are inserted into the state system, but state intervals are the
objects that come out on the other side. Those are stored in the storage
1086 backend. A state interval represents a "state" of an attribute we want to track.
1087 When doing queries on the state system, intervals are what is returned. The
1088 components of a state interval are:
1089 * Start time
1090 * End time
1091 * State value
1092 * Quark
1093
1094 The start and end times represent the time range of the state. The state value
1095 is the same as the state value in the state change that started this interval.
1096 The interval also keeps a reference to its quark, although you normally know
1097 your quark in advance when you do queries.
1098
1099 === State history ===
1100
1101 The state history is the name of the container for all the intervals created by
1102 the state system. The exact implementation (how the intervals are stored) is
1103 determined by the storage backend that is used.
1104
Some backends will use a state history that is persistent on disk; others do not.
1106 When loading a trace, if a history file is available and the backend supports
1107 it, it will be loaded right away, skipping the need to go through another
1108 construction phase.
1109
1110 === Construction phase ===
1111
1112 Before we can query a state system, we need to build the state history first. To
1113 do so, trace events are sent one-by-one through the state provider, which in
1114 turn sends state changes to the central component, which then creates intervals
1115 and stores them in the backend. This is called the construction phase.
1116
Note that the state system needs to receive its events in chronological order.
1118 This phase will end once the end of the trace is reached.
1119
Also note that it is possible to query the state system while it is being built.
1121 Any timestamp between the start of the trace and the current end time of the
1122 state system (available with ITmfStateSystem#getCurrentEndTime()) is a valid
1123 timestamp that can be queried.
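
The construction phase can be sketched in a few lines of self-contained Java (a toy model, not the TMF classes, with method names chosen to mirror modifyAttribute() and closeHistory()): a state change closes the attribute's ongoing state, producing an interval, and starts a new one.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy illustration (not TMF API) of the construction phase: each state
// change turns the attribute's ongoing state into a finished interval,
// and closeHistory() flushes whatever is still ongoing.
public class ConstructionSketch {

    public record Interval(long start, long end, int quark, Object value) {}
    public record Ongoing(long start, Object value) {}

    private final Map<Integer, Ongoing> ongoing = new HashMap<>();
    private final List<Interval> history = new ArrayList<>(); // the "backend"

    public void modifyAttribute(long t, int quark, Object value) {
        Ongoing prev = ongoing.get(quark);
        if (prev != null) {
            // The previous state ends just before the new one begins
            history.add(new Interval(prev.start(), t - 1, quark, prev.value()));
        }
        ongoing.put(quark, new Ongoing(t, value));
    }

    public void closeHistory(long endTime) {
        ongoing.forEach((q, o) -> history.add(new Interval(o.start(), endTime, q, o.value())));
        ongoing.clear();
    }

    public List<Interval> getHistory() { return history; }

    public static void main(String[] args) {
        ConstructionSketch ss = new ConstructionSketch();
        int statusQuark = 0; // e.g. "CPUs/0/Status"
        ss.modifyAttribute(10, statusQuark, "RUNNING");
        ss.modifyAttribute(20, statusQuark, "IDLE");
        ss.closeHistory(30);
        Interval first = ss.getHistory().get(0);
        System.out.println(first.start() + ".." + first.end()); // prints 10..19
    }
}
```

This also illustrates why queries during construction are limited to the current end time: an ongoing state has no end yet, so only intervals up to the latest processed timestamp are meaningful.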
1124
1125 === Queries ===
1126
1127 As mentioned previously, when doing queries on the state system, the returned
objects will be state intervals. In most cases it's the state ''value'' we are
1129 interested in, but since the backend has to instantiate the interval object
1130 anyway, there is no additional cost to return the interval instead. This way we
1131 also get the start and end times of the state "for free".
1132
1133 There are two types of queries that can be done on the state system:
1134
1135 ==== Full queries ====
1136
1137 A full query means that we want to retrieve the whole state of the model for one
1138 given timestamp. As we remember, this means "the state of every single attribute
1139 in the model". As parameter we only need to pass the timestamp (see the API
1140 methods below). The return value will be an array of intervals, where the offset
1141 in the array represents the quark of each attribute.
1142
1143 ==== Single queries ====
1144
1145 In other cases, we might only be interested in the state of one particular
1146 attribute at one given timestamp. For these cases it's better to use a
single query. For a single query, we need to pass both a timestamp and a
1148 quark in parameter. The return value will be a single interval, representing
1149 the state that this particular attribute was at that time.
1150
Single queries are typically faster than full queries (but once again, this
depends on the backend that is used), though not by much. Even if you only want
the state of, say, 10 attributes out of 200, it could be faster to use a full
query and only read the ones you need. Single queries should be used for cases
where you only want one attribute per timestamp (for example, if you follow the
state of the same attribute over a time range).
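
The two query types can be sketched over a tiny pre-built history (again a self-contained toy, not the TMF API; a real backend would use an indexed structure rather than a linear scan, which is how it achieves its query-time guarantees):

```java
import java.util.List;

// Toy illustration (not TMF API) of the two query types over a small
// history. Full queries return an array indexed by quark; single queries
// return one interval for one attribute.
public class QuerySketch {

    public record Interval(long start, long end, int quark, Object value) {}

    /** Single query: the state of one attribute at one timestamp. */
    public static Interval querySingleState(List<Interval> history, long t, int quark) {
        return history.stream()
                .filter(i -> i.quark() == quark && i.start() <= t && t <= i.end())
                .findFirst().orElse(null);
    }

    /** Full query: the state of every attribute at one timestamp,
     *  returned as an array where the offset is the quark. */
    public static Interval[] queryFullState(List<Interval> history, long t, int nbAttributes) {
        Interval[] state = new Interval[nbAttributes];
        for (Interval i : history) {
            if (i.start() <= t && t <= i.end()) {
                state[i.quark()] = i;
            }
        }
        return state;
    }

    public static void main(String[] args) {
        List<Interval> history = List.of(
                new Interval(10, 19, 0, "RUNNING"),  // quark 0: "CPUs/0/Status"
                new Interval(20, 30, 0, "IDLE"),
                new Interval(10, 30, 1, 1000));      // quark 1: "CPUs/0/Current_pid"
        System.out.println(querySingleState(history, 15, 0).value()); // prints RUNNING
        System.out.println(queryFullState(history, 25, 2)[0].value()); // prints IDLE
    }
}
```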
1157
1158
1159 == Relevant interfaces/classes ==
1160
1161 This section will describe the public interface and classes that can be used if
1162 you want to use the state system.
1163
1164 === Main classes in org.eclipse.tracecompass.tmf.core.statesystem ===
1165
1166 ==== ITmfStateProvider / AbstractTmfStateProvider ====
1167
1168 ITmfStateProvider is the interface you have to implement to define your state
1169 provider. This is where most of the work has to be done to use a state system
1170 for a custom trace type or analysis type.
1171
1172 For first-time users, it's recommended to extend AbstractTmfStateProvider
1173 instead. This class takes care of all the initialization mumbo-jumbo, and also
1174 runs the event handler in a separate thread. You will only need to implement
1175 eventHandle, which is the call-back that will be called for every event in the
1176 trace.
1177
1178 For an example, you can look at StatsStateProvider in the TMF tree, or at the
1179 small example below.
1180
1181 ==== TmfStateSystemFactory ====
1182
1183 Once you have defined your state provider, you need to tell your trace type to
1184 build a state system with this provider during its initialization. This consists
1185 of overriding TmfTrace#buildStateSystems() and in there of calling the method in
1186 TmfStateSystemFactory that corresponds to the storage backend you want to use
1187 (see the section [[#Comparison of state system backends]]).
1188
1189 You will have to pass in parameter the state provider you want to use, which you
1190 should have defined already. Each backend can also ask for more configuration
1191 information.
1192
1193 You must then call registerStateSystem(id, statesystem) to make your state
1194 system visible to the trace objects and the views. The ID can be any string of
1195 your choosing. To access this particular state system, the views or modules will
1196 need to use this ID.
1197
1198 Also, don't forget to call super.buildStateSystems() in your implementation,
1199 unless you know for sure you want to skip the state providers built by the
1200 super-classes.
1201
1202 You can look at how LttngKernelTrace does it for an example. It could also be
1203 possible to build a state system only under certain conditions (like only if the
1204 trace contains certain event types).
1205
1206
1207 ==== ITmfStateSystem ====
1208
1209 ITmfStateSystem is the main interface through which views or analysis modules
1210 will access the state system. It offers a read-only view of the state system,
1211 which means that no states can be inserted, and no attributes can be created.
1212 Calling TmfTrace#getStateSystems().get(id) will return you a ITmfStateSystem
1213 view of the requested state system. The main methods of interest are:
1214
1215 ===== getQuarkAbsolute()/getQuarkRelative() =====
1216
1217 Those are the basic quark-getting methods. The goal of the state system is to
1218 return the state values of given attributes at given timestamps. As we've seen
1219 earlier, attributes can be described with a file-system-like path. The goal of
1220 these methods is to convert from the path representation of the attribute to its
1221 quark.
1222
1223 Since quarks are created on-the-fly, there is no guarantee that the same
1224 attributes will have the same quark for two traces of the same type. The views
1225 should always query their quarks when dealing with a new trace or a new state
1226 provider. Beyond that however, quarks should be cached and reused as much as
1227 possible, to avoid potentially costly string re-hashing.
1228
getQuarkAbsolute() takes a variable number of Strings in parameter, which
represent the full path to the attribute. Some of them can be constants, some
can come programmatically, often from the event's fields.
1232
getQuarkRelative() is to be used when you already know the quark of a certain
attribute, and want to access one of its sub-attributes. Its first parameter is
the origin quark, followed by a String varargs which represents the relative
path to the final attribute.
1237
1238 These two methods will throw an AttributeNotFoundException if trying to access
1239 an attribute that does not exist in the model.
1240
1241 These methods also imply that the view has the knowledge of how the attribute
1242 tree is organized. This should be a reasonable hypothesis, since the same
1243 analysis plugin will normally ship both the state provider and the view, and
1244 they will have been written by the same person. In other cases, it's possible to
1245 use getSubAttributes() to explore the organization of the attribute tree first.
1246
1247 ===== waitUntilBuilt() =====
1248
1249 This is a simple method used to block the caller until the construction phase of
1250 this state system is done. If the view prefers to wait until all information is
1251 available before starting to do queries (to get all known attributes right away,
for example), this is the method to call.
1253
1254 ===== queryFullState() =====
1255
1256 This is the method to do full queries. As mentioned earlier, you only need to
1257 pass a target timestamp in parameter. It will return a List of state intervals,
1258 in which the offset corresponds to the attribute quark. This will represent the
1259 complete state of the model at the requested time.
1260
1261 ===== querySingleState() =====
1262
1263 The method to do single queries. You pass in parameter both a timestamp and an
1264 attribute quark. This will return the single state matching this
1265 timestamp/attribute pair.
1266
1267 Other methods are available, you are encouraged to read their Javadoc and see if
1268 they can be potentially useful.
1269
1270 ==== ITmfStateSystemBuilder ====
1271
1272 ITmfStateSystemBuilder is the read-write interface to the state system. It
1273 extends ITmfStateSystem itself, so all its methods are available. It then adds
1274 methods that can be used to write to the state system, either by creating new
attributes or inserting state changes.
1276
1277 It is normally reserved for the state provider and should not be visible to
1278 external components. However it will be available in AbstractTmfStateProvider,
1279 in the field 'ss'. That way you can call ss.modifyAttribute() etc. in your state
1280 provider to write to the state.
1281
1282 The main methods of interest are:
1283
1284 ===== getQuark*AndAdd() =====
1285
1286 getQuarkAbsoluteAndAdd() and getQuarkRelativeAndAdd() work exactly like their
1287 non-AndAdd counterparts in ITmfStateSystem. The difference is that the -AndAdd
1288 versions will not throw any exception: if the requested attribute path does not
1289 exist in the system, it will be created, and its newly-assigned quark will be
1290 returned.
1291
1292 When in a state provider, the -AndAdd version should normally be used (unless
you know for sure the attribute already exists and don't want to create it
1294 otherwise). This means that there is no need to define the whole attribute tree
1295 in advance, the attributes will be created on-demand.
1296
1297 ===== modifyAttribute() =====
1298
1299 This is the main state-change-insertion method. As was explained before, a state
1300 change is defined by a timestamp, an attribute and a state value. Those three
1301 elements need to be passed to modifyAttribute as parameters.
1302
1303 Other state change insertion methods are available (increment-, push-, pop- and
1304 removeAttribute()), but those are simply convenience wrappers around
1305 modifyAttribute(). Check their Javadoc for more information.
1306
1307 ===== closeHistory() =====
1308
1309 When the construction phase is done, do not forget to call closeHistory() to
1310 tell the backend that no more intervals will be received. Depending on the
1311 backend type, it might have to save files, close descriptors, etc. This ensures
that a persistent file can then be re-used when the trace is opened again.
1313
1314 If you use the AbstractTmfStateProvider, it will call closeHistory()
1315 automatically when it reaches the end of the trace.
1316
1317 === Other relevant interfaces ===
1318
1319 ==== ITmfStateValue ====
1320
1321 This is the interface used to represent state values. Those are used when
1322 inserting state changes in the provider, and is also part of the state intervals
1323 obtained when doing queries.
1324
1325 The abstract TmfStateValue class contains the factory methods to create new
1326 state values of either int, long, double or string types. To retrieve the real
1327 object inside the state value, one can use the .unbox* methods.
1328
1329 Note: Do not instantiate null values manually, use TmfStateValue.nullValue()
1330
1331 ==== ITmfStateInterval ====
1332
1333 This is the interface to represent the state intervals, which are stored in the
1334 state history backend, and are returned when doing state system queries. A very
1335 simple implementation is available in TmfStateInterval. Its methods should be
1336 self-descriptive.
1337
1338 === Exceptions ===
1339
1340 The following exceptions, found in o.e.t.statesystem.core.exceptions, are related to
1341 state system activities.
1342
1343 ==== AttributeNotFoundException ====
1344
This is thrown by getQuarkRelative() and getQuarkAbsolute() (but not by the
1346 -AndAdd versions!) when passing an attribute path that is not present in the
1347 state system. This is to ensure that no new attribute is created when using
1348 these versions of the methods.
1349
1350 Views can expect some attributes to be present, but they should handle these
1351 exceptions for when the attributes end up not being in the state system (perhaps
1352 this particular trace didn't have a certain type of events, etc.)
1353
1354 ==== StateValueTypeException ====
1355
1356 This exception will be thrown when trying to unbox a state value into a type
1357 different than its own. You should always check with ITmfStateValue#getType()
1358 beforehand if you are not sure about the type of a given state value.
1359
1360 ==== TimeRangeException ====
1361
1362 This exception is thrown when trying to do a query on the state system for a
1363 timestamp that is outside of its range. To be safe, you should check with
1364 ITmfStateSystem#getStartTime() and #getCurrentEndTime() for the current valid
1365 range of the state system. This is especially important when doing queries on
1366 a state system that is currently being built.
1367
1368 ==== StateSystemDisposedException ====
1369
1370 This exception is thrown when trying to access a state system that has been
1371 disposed, with its dispose() method. This can potentially happen at shutdown,
1372 since Eclipse is not always consistent with the order in which the components
1373 are closed.
1374
1375
1376 == Comparison of state system backends ==
1377
1378 As we have seen in section [[#High-level components]], the state system needs
1379 a storage backend to save the intervals. Different implementations are
1380 available when building your state system from TmfStateSystemFactory.
1381
1382 Do not confuse full/single queries with full/partial history! All backend types
1383 should be able to handle any type of queries defined in the ITmfStateSystem API,
1384 unless noted otherwise.
1385
1386 === Full history ===
1387
1388 Available with TmfStateSystemFactory#newFullHistory(). The full history uses a
History Tree data structure, which is an optimized structure to store state
1390 intervals on disk. Once built, it can respond to queries in a ''log(n)'' manner.
1391
1392 You need to specify a file at creation time, which will be the container for
1393 the history tree. Once it's completely built, it will remain on disk (until you
1394 delete the trace from the project). This way it can be reused from one session
1395 to another, which makes subsequent loading time much faster.
1396
This is the backend used by the LTTng kernel plugin. It offers good scalability and
1398 performance, even at extreme sizes (it's been tested with traces of sizes up to
1399 500 GB). Its main downside is the amount of disk space required: since every
1400 single interval is written to disk, the size of the history file can quite
1401 easily reach and even surpass the size of the trace itself.
1402
1403 === Null history ===
1404
1405 Available with TmfStateSystemFactory#newNullHistory(). As its name implies the
1406 null history is in fact an absence of state history. All its query methods will
1407 return null (see the Javadoc in NullBackend).
1408
1409 Obviously, no file is required, and almost no memory space is used.
1410
1411 It's meant to be used in cases where you are not interested in past states, but
1412 only in the "ongoing" one. It can also be useful for debugging and benchmarking.
1413
1414 === In-memory history ===
1415
1416 Available with TmfStateSystemFactory#newInMemHistory(). This is a simple wrapper
1417 using a TreeSet to store all state intervals in memory. The implementation at
the moment is quite simple: it will perform a binary search on entries when
1419 doing queries to find the ones that match.
1420
1421 The advantage of this method is that it's very quick to build and query, since
all the information resides in memory. However, you are limited to 2^31 entries
(roughly 2 billion), and depending on your state provider and trace type, that
limit can be reached really fast!
1425
There are no safeguards, so if you bust the limit you will end up with
ArrayIndexOutOfBoundsExceptions everywhere. If your trace or state history can
be arbitrarily big, it's probably safer to use a Full History instead.
1429
1430 === Partial history ===
1431
1432 Available with TmfStateSystemFactory#newPartialHistory(). The partial history is
1433 a more advanced form of the full history. Instead of writing all state intervals
1434 to disk like with the full history, we only write a small fraction of them, and
1435 go back to read the trace to recreate the states in-between.
1436
1437 It has a big advantage over a full history in terms of disk space usage. It's
1438 very possible to reduce the history tree file size by a factor of 1000, while
1439 keeping query times within a factor of two. Its main downside comes from the
1440 fact that you cannot do efficient single queries with it (they are implemented
1441 by doing full queries underneath).
1442
1443 This makes it a poor choice for views like the Control Flow view, where you do
1444 a lot of range queries and single queries. However, it is a perfect fit for
1445 cases like statistics, where you usually do full queries already, and you store
1446 lots of small states which are very easy to "compress".
1447
1448 However, it can't really be used until bug 409630 is fixed.
1449
1450 == State System Operations ==
1451
1452 TmfStateSystemOperations is a static class that implements additional
1453 statistical operations that can be performed on attributes of the state system.
1454
1455 These operations require that the attribute be one of the numerical values
1456 (int, long or double).
1457
1458 The speed of these operations can be greatly improved for large data sets if
1459 the attribute was inserted in the state system as a mipmap attribute. Refer to
1460 the [[#Mipmap feature | Mipmap feature]] section.
1461
1462 ===== queryRangeMax() =====
1463
1464 This method returns the maximum numerical value of an attribute in the
1465 specified time range. The attribute must be of type int, long or double.
1466 Null values are ignored. The returned value will be of the same state value
1467 type as the base attribute, or a null value if there is no state interval
1468 stored in the given time range.
1469
1470 ===== queryRangeMin() =====
1471
1472 This method returns the minimum numerical value of an attribute in the
1473 specified time range. The attribute must be of type int, long or double.
1474 Null values are ignored. The returned value will be of the same state value
1475 type as the base attribute, or a null value if there is no state interval
1476 stored in the given time range.
1477
1478 ===== queryRangeAverage() =====
1479
1480 This method returns the average numerical value of an attribute in the
1481 specified time range. The attribute must be of type int, long or double.
1482 Each state interval value is weighted according to time. Null values are
1483 counted as zero. The returned value will be a double primitive, which will
1484 be zero if there is no state interval stored in the given time range.
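The weighting rule of queryRangeAverage() can be made concrete with a small standalone computation. This is an illustration of the semantics described above (values weighted by time, null counted as zero), not the Trace Compass code itself; the Interval record is a stand-in for real state intervals and uses half-open [start, end) ranges to keep the arithmetic simple:

```java
/**
 * Illustrative computation behind queryRangeAverage(): each interval's
 * value is weighted by how long it overlaps the queried range, and
 * null (missing) values count as zero. Hypothetical stand-in types.
 */
public class RangeAverage {

    /** One state interval [start, end); 'value' may be null. */
    public record Interval(long start, long end, Double value) {}

    public static double queryRangeAverage(Interval[] intervals, long t1, long t2) {
        double weightedSum = 0.0;
        for (Interval iv : intervals) {
            // Length of the overlap between this interval and [t1, t2)
            long overlap = Math.min(iv.end(), t2) - Math.max(iv.start(), t1);
            if (overlap > 0 && iv.value() != null) {
                weightedSum += iv.value() * overlap;
            }
        }
        long duration = t2 - t1;
        return duration > 0 ? weightedSum / duration : 0.0;
    }
}
```

For example, an attribute worth 4 for the first half of the range and 8 for the second half averages to 6; if the second half is null, it counts as zero and the average drops to 2.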
1485
1486 == Code example ==
1487
1488 Here is a small example of code that will use the state system. For this
1489 example, let's assume we want to track the state of all the CPUs in a LTTng
1490 kernel trace. To do so, we will watch for the "sched_switch" event in the state
1491 provider, and will update an attribute indicating if the associated CPU should
1492 be set to "running" or "idle".
1493
1494 We will use an attribute tree that looks like this:
1495 <pre>
1496 CPUs
1497 |--0
1498 | |--Status
1499 |
1500 |--1
1501 | |--Status
1502 |
1503   |--2
1504 | |--Status
1505 ...
1506 </pre>
1507
1508 The second-level attributes will be named from the information available in the
1509 trace events. Only the "Status" attributes will carry a state value (this means
1510 we could have just used "1", "2", "3",... directly, but we'll do it in a tree
1511 for the example's sake).
1512
1513 Also, we will use integer state values to represent "running" or "idle", instead
1514 of saving the strings that would get repeated every time. This will help in
1515 reducing the size of the history file.
1516
1517 First we will define a state provider in MyStateProvider. Then, we define an
1518 analysis module that takes care of creating the state provider. The analysis
1519 module will also contain code that can query the state system.
1520
1521 === State Provider ===
1522
1523 <pre>
1524 import org.eclipse.tracecompass.statesystem.core.ITmfStateSystemBuilder;
import org.eclipse.tracecompass.statesystem.core.exceptions.AttributeNotFoundException;
1525 import org.eclipse.tracecompass.statesystem.core.exceptions.StateValueTypeException;
1526 import org.eclipse.tracecompass.statesystem.core.exceptions.TimeRangeException;
1527 import org.eclipse.tracecompass.statesystem.core.statevalue.ITmfStateValue;
1528 import org.eclipse.tracecompass.statesystem.core.statevalue.TmfStateValue;
1529 import org.eclipse.tracecompass.tmf.core.event.ITmfEvent;
1530 import org.eclipse.tracecompass.tmf.core.statesystem.AbstractTmfStateProvider;
1531 import org.eclipse.tracecompass.tmf.core.trace.ITmfTrace;
1532 import org.eclipse.tracecompass.tmf.ctf.core.event.CtfTmfEvent;
1533
1534 /**
1535 * Example state system provider.
1536 *
1537 * @author Alexandre Montplaisir
1538 */
1539 public class MyStateProvider extends AbstractTmfStateProvider {
1540
1541 /** State value representing the idle state */
1542 public static ITmfStateValue IDLE = TmfStateValue.newValueInt(0);
1543
1544 /** State value representing the running state */
1545 public static ITmfStateValue RUNNING = TmfStateValue.newValueInt(1);
1546
1547 /**
1548 * Constructor
1549 *
1550 * @param trace
1551 * The trace to which this state provider is associated
1552 */
1553 public MyStateProvider(ITmfTrace trace) {
1554 super(trace, CtfTmfEvent.class, "Example"); //$NON-NLS-1$
1555 /*
1556 * The third parameter here is not important, it's only used to name a
1557 * thread internally.
1558 */
1559 }
1560
1561 @Override
1562 public int getVersion() {
1563 /*
1564 * If the version of an existing file doesn't match the version supplied
1565 * in the provider, a rebuild of the history will be forced.
1566 */
1567 return 1;
1568 }
1569
1570 @Override
1571 public MyStateProvider getNewInstance() {
1572 return new MyStateProvider(getTrace());
1573 }
1574
1575 @Override
1576 protected void eventHandle(ITmfEvent ev) {
1577 /*
1578          * AbstractTmfStateProvider should have already checked for the correct
1579 * class type.
1580 */
1581 CtfTmfEvent event = (CtfTmfEvent) ev;
1582
1583         final long ts = event.getTimestamp().getValue();
1584 
1585         try {
1586 
1587             if (event.getType().getName().equals("sched_switch")) {
1588                 Integer nextTid = ((Long) event.getContent().getField("next_tid").getValue()).intValue();
1589 ITmfStateSystemBuilder ss = getStateSystemBuilder();
1590 int quark = ss.getQuarkAbsoluteAndAdd("CPUs", String.valueOf(event.getCPU()), "Status");
1591 ITmfStateValue value;
1592 if (nextTid > 0) {
1593 value = RUNNING;
1594 } else {
1595 value = IDLE;
1596 }
1597 ss.modifyAttribute(ts, value, quark);
1598 }
1599
1600 } catch (TimeRangeException e) {
1601 /*
1602 * This should not happen, since the timestamp comes from a trace
1603 * event.
1604 */
1605 throw new IllegalStateException(e);
1606 } catch (AttributeNotFoundException e) {
1607 /*
1608 * This should not happen either, since we're only accessing a quark
1609 * we just created.
1610 */
1611 throw new IllegalStateException(e);
1612 } catch (StateValueTypeException e) {
1613 /*
1614 * This wouldn't happen here, but could potentially happen if we try
1615 * to insert mismatching state value types in the same attribute.
1616 */
1617 e.printStackTrace();
1618 }
1619
1620 }
1621
1622 }
1623 </pre>
1624
1625 === Analysis module definition ===
1626
1627 <pre>
1628 import static org.eclipse.tracecompass.common.core.NonNullUtils.checkNotNull;
1629
1630 import java.util.List;
1631
1632 import org.eclipse.tracecompass.statesystem.core.exceptions.AttributeNotFoundException;
1633 import org.eclipse.tracecompass.statesystem.core.exceptions.StateSystemDisposedException;
1634 import org.eclipse.tracecompass.statesystem.core.exceptions.TimeRangeException;
1635 import org.eclipse.tracecompass.statesystem.core.interval.ITmfStateInterval;
1636 import org.eclipse.tracecompass.statesystem.core.statevalue.ITmfStateValue;
1637 import org.eclipse.tracecompass.tmf.core.statesystem.ITmfStateProvider;
1638 import org.eclipse.tracecompass.tmf.core.statesystem.TmfStateSystemAnalysisModule;
1639 import org.eclipse.tracecompass.tmf.core.trace.ITmfTrace;
1640
1641 /**
1642 * Class showing examples of a StateSystemAnalysisModule with state system queries.
1643 *
1644 * @author Alexandre Montplaisir
1645 */
1646 public class MyStateSystemAnalysisModule extends TmfStateSystemAnalysisModule {
1647
1648 @Override
1649 protected ITmfStateProvider createStateProvider() {
1650 ITmfTrace trace = checkNotNull(getTrace());
1651 return new MyStateProvider(trace);
1652 }
1653
1654 @Override
1655 protected StateSystemBackendType getBackendType() {
1656 return StateSystemBackendType.FULL;
1657 }
1658
1659 /**
1660 * Example method of querying one attribute in the state system.
1661 *
1662 * We pass it a cpu and a timestamp, and it returns us if that cpu was
1663 * executing a process (true/false) at that time.
1664 *
1665 * @param cpu
1666 * The CPU to check
1667 * @param timestamp
1668 * The timestamp of the query
1669 * @return True if the CPU was running, false otherwise
1670 */
1671 public boolean cpuIsRunning(int cpu, long timestamp) {
1672 try {
1673 int quark = getStateSystem().getQuarkAbsolute("CPUs", String.valueOf(cpu), "Status");
1674 ITmfStateValue value = getStateSystem().querySingleState(timestamp, quark).getStateValue();
1675
1676 if (value.equals(MyStateProvider.RUNNING)) {
1677 return true;
1678 }
1679
1680 /*
1681 * Since at this level we have no guarantee on the contents of the state
1682 * system, it's important to handle these cases correctly.
1683 */
1684 } catch (AttributeNotFoundException e) {
1685 /*
1686 * Handle the case where the attribute does not exist in the state
1687 * system (no CPU with this number, etc.)
1688 */
1689 } catch (TimeRangeException e) {
1690 /*
1691 * Handle the case where 'timestamp' is outside of the range of the
1692 * history.
1693 */
1694 } catch (StateSystemDisposedException e) {
1695 /*
1696 * Handle the case where the state system is being disposed. If this
1697 * happens, it's normally when shutting down, so the view can just
1698 * return immediately and wait it out.
1699 */
1700 }
1701 return false;
1702 }
1703
1704
1705 /**
1706 * Example method of using a full query.
1707 *
1708 * We pass it a timestamp, and it returns us how many CPUs were executing a
1709 * process at that moment.
1710 *
1711 * @param timestamp
1712 * The target timestamp
1713 * @return The amount of CPUs that were running at that time
1714 */
1715 public int getNbRunningCpus(long timestamp) {
1716 int count = 0;
1717
1718 try {
1719 /* Get the list of the quarks we are interested in. */
1720 List<Integer> quarks = getStateSystem().getQuarks("CPUs", "*", "Status");
1721
1722 /*
1723 * Get the full state at our target timestamp (it's better than
1724 * doing an arbitrary number of single queries).
1725 */
1726 List<ITmfStateInterval> state = getStateSystem().queryFullState(timestamp);
1727
1728 /* Look at the value of the state for each quark */
1729 for (Integer quark : quarks) {
1730 ITmfStateValue value = state.get(quark).getStateValue();
1731 if (value.equals(MyStateProvider.RUNNING)) {
1732 count++;
1733 }
1734 }
1735
1736 } catch (TimeRangeException e) {
1737 /*
1738 * Handle the case where 'timestamp' is outside of the range of the
1739 * history.
1740 */
1741 } catch (StateSystemDisposedException e) {
1742 /* Handle the case where the state system is being disposed. */
1743 }
1744 return count;
1745 }
1746 }
1747 </pre>
1748
1749 == Mipmap feature ==
1750
1751 The mipmap feature allows attributes to be inserted into the state system with
1752 additional computations performed to automatically store sub-attributes that
1753 can later be used for statistical operations. The mipmap has a resolution which
1754 represents the number of state attribute changes that are used to compute the
1755 value at the next mipmap level.
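To make the role of the resolution concrete, here is a standalone sketch that builds the levels for the "max" feature: every ''resolution'' values at one level are summarized into a single value at the next, and the levels stack until one value remains. This is illustrative plain Java, not the actual mipmap implementation:

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Sketch of the mipmap idea for the "max" feature: groups of
 * 'resolution' values are reduced to their maximum at the next level.
 * Illustrative only; the real feature is provided by
 * AbstractTmfMipmapStateProvider in Trace Compass.
 */
public class MipmapMax {

    public static List<List<Integer>> buildLevels(List<Integer> base, int resolution) {
        List<List<Integer>> levels = new ArrayList<>();
        levels.add(base);
        List<Integer> current = base;
        while (current.size() > 1) {
            List<Integer> next = new ArrayList<>();
            for (int i = 0; i < current.size(); i += resolution) {
                // Maximum over one group of 'resolution' values
                int max = Integer.MIN_VALUE;
                for (int j = i; j < Math.min(i + resolution, current.size()); j++) {
                    max = Math.max(max, current.get(j));
                }
                next.add(max);
            }
            levels.add(next);
            current = next;
        }
        return levels;
    }
}
```

A range-maximum query can then consult the highest level that fully covers the queried range instead of scanning every base value, which is what makes the operations in TmfStateSystemOperations fast on mipmapped attributes.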
1756
1757 The supported mipmap features are: max, min, and average. Each one of these
1758 features requires that the base attribute be a numerical state value (int, long
1759 or double). An attribute can be mipmapped for one or more of the features at
1760 the same time.
1761
1762 To use a mipmapped attribute in queries, call the corresponding methods of the
1763 static class [[#State System Operations | TmfStateSystemOperations]].
1764
1765 === AbstractTmfMipmapStateProvider ===
1766
1767 AbstractTmfMipmapStateProvider is an abstract provider class that allows adding
1768 mipmap features to a specific attribute of the tree. It extends AbstractTmfStateProvider.
1769
1770 If a provider wants to add mipmapped attributes to its tree, it must extend
1771 AbstractTmfMipmapStateProvider and call modifyMipmapAttribute() in the event
1772 handler, specifying one or more mipmap features to compute. Then the structure
1773 of the attribute tree will be:
1774
1775 <pre>
1776 |- <attribute>
1777 | |- <mipmapFeature> (min/max/avg)
1778 | | |- 1
1779 | | |- 2
1780 | | |- 3
1781 | | ...
1782 | | |- n (maximum mipmap level)
1783 | |- <mipmapFeature> (min/max/avg)
1784 | | |- 1
1785 | | |- 2
1786 | | |- 3
1787 | | ...
1788 | | |- n (maximum mipmap level)
1789 | ...
1790 </pre>
1791
1792 = UML2 Sequence Diagram Framework =
1793
1794 The purpose of the UML2 Sequence Diagram Framework of TMF is to provide a framework for generation of UML2 sequence diagrams. It provides
1795 *UML2 Sequence diagram drawing capabilities (i.e. lifelines, messages, activations, object creation and deletion)
1796 *a generic, re-usable Sequence Diagram View
1797 *Eclipse Extension Point for the creation of sequence diagrams
1798 *callback hooks for searching and filtering within the Sequence Diagram View
1799 *scalability<br>
1800 The following chapters describe the Sequence Diagram Framework as well as a reference implementation and its usage.
1801
1802 == TMF UML2 Sequence Diagram Extensions ==
1803
1804 In the UML2 Sequence Diagram Framework an Eclipse extension point is defined so that other plug-ins can contribute code to create sequence diagrams.
1805
1806 '''Identifier''': org.eclipse.linuxtools.tmf.ui.uml2SDLoader<br>
1807 '''Description''': This extension point aims to list and connect any UML2 Sequence Diagram loader.<br>
1808 '''Configuration Markup''':<br>
1809
1810 <pre>
1811 <!ELEMENT extension (uml2SDLoader)+>
1812 <!ATTLIST extension
1813 point CDATA #REQUIRED
1814 id CDATA #IMPLIED
1815 name CDATA #IMPLIED
1816 >
1817 </pre>
1818
1819 *point - A fully qualified identifier of the target extension point.
1820 *id - An optional identifier of the extension instance.
1821 *name - An optional name of the extension instance.
1822
1823 <pre>
1824 <!ELEMENT uml2SDLoader EMPTY>
1825 <!ATTLIST uml2SDLoader
1826 id CDATA #REQUIRED
1827 name CDATA #REQUIRED
1828 class CDATA #REQUIRED
1829 view CDATA #REQUIRED
1830 default (true | false)
>
1831 </pre>
1832
1833 *id - A unique identifier for this uml2SDLoader. It is not mandatory as long as the id does not need to be retrieved by the provider plug-in. The class attribute is the one on which the underlying algorithm relies.
1834 *name - A name of the extension instance.
1835 *class - The implementation of this UML2 SD viewer loader. The class must implement org.eclipse.tracecompass.tmf.ui.views.uml2sd.load.IUml2SDLoader.
1836 *view - The view ID of the view that this loader aims to populate. Either org.eclipse.tracecompass.tmf.ui.views.uml2sd.SDView itself or an extension of org.eclipse.tracecompass.tmf.ui.views.uml2sd.SDView.
1837 *default - Set to true to make this loader the default one for the view; in case of several default loaders, the first one coming from the extensions list is taken.
1838
1839
1840 == Management of the Extension Point ==
1841
1842 The TMF UI plug-in is responsible for evaluating each contribution to the extension point.
1843 <br>
1844 <br>
1845 With this extension point, a loader class is associated with a Sequence Diagram View. Multiple loaders can be associated with a single Sequence Diagram View. However, additional means have to be implemented to specify which loader should be used when opening the view. For example, an Eclipse action or command could be used for that. This additional code is not necessary if only one loader is associated with a given Sequence Diagram View and this loader has the attribute "default" set to "true". (see also [[#Using one Sequence Diagram View with Multiple Loaders | Using one Sequence Diagram View with Multiple Loaders]])
1846
1847 == Sequence Diagram View ==
1848
1849 For this extension point a Sequence Diagram View has to be defined as well. The Sequence Diagram View class implementation is provided by the plug-in ''org.eclipse.tracecompass.tmf.ui'' (''org.eclipse.tracecompass.tmf.ui.views.uml2sd.SDView'') and can be used as is or can also be sub-classed. For that, a view extension has to be added to the ''plugin.xml''.
1850
1851 === Supported Widgets ===
1852
1853 The loader class provides a frame containing all the UML2 widgets to be displayed. The following widgets exist:
1854
1855 *Lifeline
1856 *Activation
1857 *Synchronous Message
1858 *Asynchronous Message
1859 *Synchronous Message Return
1860 *Asynchronous Message Return
1861 *Stop
1862
1863 For a lifeline, a category can be defined. The lifeline category defines icons, which are displayed in the lifeline header.
1864
1865 === Zooming ===
1866
1867 The Sequence Diagram View allows the user to zoom in, zoom out and reset the zoom factor.
1868
1869 === Printing ===
1870
1871 It is possible to print the whole sequence diagram as well as part of it.
1872
1873 === Key Bindings ===
1874
1875 *SHIFT+ALT+ARROW-DOWN - to scroll down within sequence diagram one view page at a time
1876 *SHIFT+ALT+ARROW-UP - to scroll up within sequence diagram one view page at a time
1877 *SHIFT+ALT+ARROW-RIGHT - to scroll right within sequence diagram one view page at a time
1878 *SHIFT+ALT+ARROW-LEFT - to scroll left within sequence diagram one view page at a time
1879 *SHIFT+ALT+ARROW-HOME - to jump to the beginning of the selected message if not already visible in page
1880 *SHIFT+ALT+ARROW-END - to jump to the end of the selected message if not already visible in page
1881 *CTRL+F - to open find dialog if either the basic or extended find provider is defined (see [[#Using the Find Provider Interface | Using the Find Provider Interface]])
1882 *CTRL+P - to open print dialog
1883
1884 === Preferences ===
1885
1886 The UML2 Sequence Diagram Framework provides preferences to customize the appearance of the Sequence Diagram View. The color of all widgets and text, as well as the fonts of the text of all widgets, can be adjusted. Among others, the default lifeline width can be altered. To change preferences select '''Window->Preferences->Tracing->UML2 Sequence Diagrams'''. The following preference page will show:<br>
1887 [[Image:images/SeqDiagramPref.png]] <br>
1888 After changing the preferences select '''OK'''.
1889
1890 === Callback hooks ===
1891
1892 The Sequence Diagram View provides several callback hooks so that extensions can provide application-specific functionality. The following interfaces can be provided:
1893 * Basic find provider or extended find provider<br> For finding within the sequence diagram
1894 * Basic filter provider or extended filter provider<br> For filtering within the sequence diagram
1895 * Basic paging provider or advanced paging provider<br> For scalability reasons, used to limit the number of displayed messages
1896 * Properties provider<br> To provide properties of selected elements
1897 * Collapse provider <br> To collapse areas of the sequence diagram
1898
1899 == Tutorial ==
1900
1901 This tutorial describes how to create a UML2 Sequence Diagram Loader extension and use this loader in Eclipse.
1902
1903 === Prerequisites ===
1904
1905 The tutorial is based on Eclipse 4.4 (Eclipse Luna) and TMF 3.0.0.
1906
1907 === Creating an Eclipse UI Plug-in ===
1908
1909 To create a new project with name org.eclipse.tracecompass.tmf.sample.ui select '''File -> New -> Project -> Plug-in Development -> Plug-in Project'''. <br>
1910 [[Image:images/Screenshot-NewPlug-inProject1.png]]<br>
1911
1912 [[Image:images/Screenshot-NewPlug-inProject2.png]]<br>
1913
1914 [[Image:images/Screenshot-NewPlug-inProject3.png]]<br>
1915
1916 === Creating a Sequence Diagram View ===
1917
1918 To open the plug-in manifest, double-click on the MANIFEST.MF file. <br>
1919 [[Image:images/SelectManifest.png]]<br>
1920
1921 Change to the Dependencies tab and select '''Add...''' of the ''Required Plug-ins'' section. A new dialog box will open. Next find plug-ins ''org.eclipse.tracecompass.tmf.ui'' and ''org.eclipse.tracecompass.tmf.core'' and then press '''OK'''<br>
1922 [[Image:images/AddDependencyTmfUi.png]]<br>
1923
1924 Change to the Extensions tab and select '''Add...''' of the ''All Extensions'' section. A new dialog box will open. Find the view extension ''org.eclipse.ui.views'' and press '''Finish'''.<br>
1925 [[Image:images/AddViewExtension1.png]]<br>
1926
1927 To create a Sequence Diagram View, click the right mouse button. Then select '''New -> view'''<br>
1928 [[Image:images/AddViewExtension2.png]]<br>
1929
1930 A new view entry has been created. Fill in the fields ''id'', ''name'' and ''class''. Note that for ''class'' the SD view implementation (''org.eclipse.tracecompass.tmf.ui.views.uml2sd.SDView'') of the TMF UI plug-in is used.<br>
1931 [[Image:images/FillSampleSeqDiagram.png]]<br>
1932
1933 The view is prepared. Now run the example. To launch an Eclipse application, select the ''Overview'' tab and click on '''Launch an Eclipse Application'''<br>
1934 [[Image:images/RunEclipseApplication.png]]<br>
1935
1936 A new Eclipse application window will show. In the new window go to '''Window -> Show View -> Other... -> Other -> Sample Sequence Diagram'''.<br>
1937 [[Image:images/ShowViewOther.png]]<br>
1938
1939 The Sequence Diagram View will open with a blank page.<br>
1940 [[Image:images/BlankSampleSeqDiagram.png]]<br>
1941
1942 Close the Example Application.
1943
1944 === Defining the uml2SDLoader Extension ===
1945
1946 After defining the Sequence Diagram View it's time to create the ''uml2SDLoader'' Extension. <br>
1947
1948 To create the loader extension, change to the Extensions tab and select '''Add...''' of the ''All Extensions'' section. A new dialog box will open. Find the extension ''org.eclipse.linuxtools.tmf.ui.uml2SDLoader'' and press '''Finish'''.<br>
1949 [[Image:images/AddTmfUml2SDLoader.png]]<br>
1950
1951 A new ''uml2SDLoader'' extension has been created. Fill in the fields ''id'', ''name'', ''class'', ''view'' and ''default''. Set ''default'' to true for this example. For the view, add the id of the Sequence Diagram View of chapter [[#Creating a Sequence Diagram View | Creating a Sequence Diagram View]]. <br>
1952 [[Image:images/FillSampleLoader.png]]<br>
1953
1954 Then click on ''class'' (see above) to open the new class dialog box. Fill in the relevant fields and select '''Finish'''. <br>
1955 [[Image:images/NewSampleLoaderClass.png]]<br>
1956
1957 A new Java class will be created which implements the interface ''org.eclipse.tracecompass.tmf.ui.views.uml2sd.load.IUml2SDLoader''.<br>
1958
1959 <pre>
1960 package org.eclipse.tracecompass.tmf.sample.ui;
1961
1962 import org.eclipse.tracecompass.tmf.ui.views.uml2sd.SDView;
1963 import org.eclipse.tracecompass.tmf.ui.views.uml2sd.load.IUml2SDLoader;
1964
1965 public class SampleLoader implements IUml2SDLoader {
1966
1967 public SampleLoader() {
1968 // TODO Auto-generated constructor stub
1969 }
1970
1971 @Override
1972 public void dispose() {
1973 // TODO Auto-generated method stub
1974
1975 }
1976
1977 @Override
1978 public String getTitleString() {
1979 // TODO Auto-generated method stub
1980 return null;
1981 }
1982
1983 @Override
1984 public void setViewer(SDView arg0) {
1985 // TODO Auto-generated method stub
1986
1987 }
}
1988 </pre>
1989
1990 === Implementing the Loader Class ===
1991
1992 Next is to implement the methods of the ''IUml2SDLoader'' interface. The following code snippet shows how to create the major sequence diagram elements. Please note that no time information is stored.<br>
1993
1994 <pre>
1995 package org.eclipse.tracecompass.tmf.sample.ui;
1996
1997 import org.eclipse.tracecompass.tmf.ui.views.uml2sd.SDView;
1998 import org.eclipse.tracecompass.tmf.ui.views.uml2sd.core.AsyncMessage;
1999 import org.eclipse.tracecompass.tmf.ui.views.uml2sd.core.AsyncMessageReturn;
2000 import org.eclipse.tracecompass.tmf.ui.views.uml2sd.core.EllipsisMessage;
2001 import org.eclipse.tracecompass.tmf.ui.views.uml2sd.core.ExecutionOccurrence;
2002 import org.eclipse.tracecompass.tmf.ui.views.uml2sd.core.Frame;
2003 import org.eclipse.tracecompass.tmf.ui.views.uml2sd.core.Lifeline;
2004 import org.eclipse.tracecompass.tmf.ui.views.uml2sd.core.Stop;
2005 import org.eclipse.tracecompass.tmf.ui.views.uml2sd.core.SyncMessage;
2006 import org.eclipse.tracecompass.tmf.ui.views.uml2sd.core.SyncMessageReturn;
2007 import org.eclipse.tracecompass.tmf.ui.views.uml2sd.load.IUml2SDLoader;
2008
2009 public class SampleLoader implements IUml2SDLoader {
2010
2011 private SDView fSdView;
2012
2013 public SampleLoader() {
2014 }
2015
2016 @Override
2017 public void dispose() {
2018 }
2019
2020 @Override
2021 public String getTitleString() {
2022 return "Sample Diagram";
2023 }
2024
2025 @Override
2026 public void setViewer(SDView arg0) {
2027 fSdView = arg0;
2028 createFrame();
2029 }
2030
2031 private void createFrame() {
2032
2033 Frame testFrame = new Frame();
2034 testFrame.setName("Sample Frame");
2035
2036 /*
2037 * Create lifelines
2038 */
2039
2040 Lifeline lifeLine1 = new Lifeline();
2041 lifeLine1.setName("Object1");
2042 testFrame.addLifeLine(lifeLine1);
2043
2044 Lifeline lifeLine2 = new Lifeline();
2045 lifeLine2.setName("Object2");
2046 testFrame.addLifeLine(lifeLine2);
2047
2048
2049 /*
2050 * Create Sync Message
2051 */
2052 // Get new occurrence on lifelines
2053 lifeLine1.getNewEventOccurrence();
2054
2055 // Get Sync message instances
2056 SyncMessage start = new SyncMessage();
2057 start.setName("Start");
2058 start.setEndLifeline(lifeLine1);
2059 testFrame.addMessage(start);
2060
2061 /*
2062 * Create Sync Message
2063 */
2064 // Get new occurrence on lifelines
2065 lifeLine1.getNewEventOccurrence();
2066 lifeLine2.getNewEventOccurrence();
2067
2068 // Get Sync message instances
2069 SyncMessage syn1 = new SyncMessage();
2070 syn1.setName("Sync Message 1");
2071 syn1.setStartLifeline(lifeLine1);
2072 syn1.setEndLifeline(lifeLine2);
2073 testFrame.addMessage(syn1);
2074
2075 /*
2076 * Create corresponding Sync Message Return
2077 */
2078
2079 // Get new occurrence on lifelines
2080 lifeLine1.getNewEventOccurrence();
2081 lifeLine2.getNewEventOccurrence();
2082
2083 SyncMessageReturn synReturn1 = new SyncMessageReturn();
2084 synReturn1.setName("Sync Message Return 1");
2085 synReturn1.setStartLifeline(lifeLine2);
2086 synReturn1.setEndLifeline(lifeLine1);
2087 synReturn1.setMessage(syn1);
2088 testFrame.addMessage(synReturn1);
2089
2090 /*
2091 * Create Activations (Execution Occurrence)
2092 */
2093 ExecutionOccurrence occ1 = new ExecutionOccurrence();
2094 occ1.setStartOccurrence(start.getEventOccurrence());
2095 occ1.setEndOccurrence(synReturn1.getEventOccurrence());
2096 lifeLine1.addExecution(occ1);
2097 occ1.setName("Activation 1");
2098
2099 ExecutionOccurrence occ2 = new ExecutionOccurrence();
2100 occ2.setStartOccurrence(syn1.getEventOccurrence());
2101 occ2.setEndOccurrence(synReturn1.getEventOccurrence());
2102 lifeLine2.addExecution(occ2);
2103 occ2.setName("Activation 2");
2104
2105 /*
2106      * Create Async Message
2107 */
2108 // Get new occurrence on lifelines
2109 lifeLine1.getNewEventOccurrence();
2110 lifeLine2.getNewEventOccurrence();
2111
2112         // Get Async message instances
2113 AsyncMessage asyn1 = new AsyncMessage();
2114 asyn1.setName("Async Message 1");
2115 asyn1.setStartLifeline(lifeLine1);
2116 asyn1.setEndLifeline(lifeLine2);
2117 testFrame.addMessage(asyn1);
2118
2119 /*
2120      * Create corresponding Async Message Return
2121 */
2122
2123 // Get new occurrence on lifelines
2124 lifeLine1.getNewEventOccurrence();
2125 lifeLine2.getNewEventOccurrence();
2126
2127 AsyncMessageReturn asynReturn1 = new AsyncMessageReturn();
2128 asynReturn1.setName("Async Message Return 1");
2129 asynReturn1.setStartLifeline(lifeLine2);
2130 asynReturn1.setEndLifeline(lifeLine1);
2131 asynReturn1.setMessage(asyn1);
2132 testFrame.addMessage(asynReturn1);
2133
2134 /*
2135 * Create a note
2136 */
2137
2138 // Get new occurrence on lifelines
2139 lifeLine1.getNewEventOccurrence();
2140
2141 EllipsisMessage info = new EllipsisMessage();
2142 info.setName("Object deletion");
2143 info.setStartLifeline(lifeLine2);
2144 testFrame.addNode(info);
2145
2146 /*
2147 * Create a Stop
2148 */
2149 Stop stop = new Stop();
2150 stop.setLifeline(lifeLine2);
2151 stop.setEventOccurrence(lifeLine2.getNewEventOccurrence());
2152 lifeLine2.addNode(stop);
2153
2154 fSdView.setFrame(testFrame);
2155 }
2156 }
2157 </pre>
2158
2159 Now it's time to run the example application. To launch the Example Application select the ''Overview'' tab and click on '''Launch an Eclipse Application'''<br>
2160 [[Image:images/SampleDiagram1.png]] <br>
2161
2162 === Adding time information ===
2163
2164 To add time information to a sequence diagram, the timestamp has to be set for each message. The sequence diagram framework uses the ''TmfTimestamp'' class of plug-in ''org.eclipse.tracecompass.tmf.core''. Use ''setTime()'' on each ''SyncMessage'', since its start and end time are the same. For each ''AsyncMessage'', set the start and end time separately by using the methods ''setStartTime'' and ''setEndTime''. For example: <br>
2165
2166 <pre>
2167 private void createFrame() {
2168 //...
2169 start.setTime(new TmfTimestamp(1000, -3));
2170 syn1.setTime(new TmfTimestamp(1005, -3));
2171 synReturn1.setTime(new TmfTimestamp(1050, -3));
2172 asyn1.setStartTime(new TmfTimestamp(1060, -3));
2173 asyn1.setEndTime(new TmfTimestamp(1070, -3));
2174 asynReturn1.setStartTime(new TmfTimestamp(1060, -3));
2175 asynReturn1.setEndTime(new TmfTimestamp(1070, -3));
2176 //...
2177 }
2178 </pre>
2179
2180 When running the example application, a time compression bar appears on the left which indicates the time elapsed between consecutive events. The time compression scale shows where the time falls between the minimum and maximum delta times. The intensity of the color is used to indicate the length of time: the deeper the intensity, the higher the delta time. The minimum and maximum delta times are configurable through the coolbar menu ''Configure Min Max''. The time compression bar and scale may provide an indication about which events consume the most time. By hovering over the time compression bar a tooltip appears containing more information. <br>
2181
2182 [[Image:images/SampleDiagramTimeComp.png]] <br>
2183
2184 Hovering over a message shows its time information in a tooltip. For each ''SyncMessage'' it shows its time of occurrence and for each ''AsyncMessage'' it shows the start and end time.
2185
2186 [[Image:images/SampleDiagramSyncMessage.png]] <br>
2187 [[Image:images/SampleDiagramAsyncMessage.png]] <br>
2188
2189 To see the time elapsed between two messages, select one message and hover over a second message. A tooltip will show the delta in time. Note that if the second message precedes the first, a negative delta is displayed. Note that for ''AsyncMessage'' the end time is used for the delta calculation.<br>
2190 [[Image:images/SampleDiagramMessageDelta.png]] <br>
2191
2192 === Default Coolbar and Menu Items ===
2193
2194 The Sequence Diagram View comes with default coolbar and menu items. By default, each sequence diagram shows the following actions:
2195 * Zoom in
2196 * Zoom out
2197 * Reset Zoom Factor
2198 * Selection
2199 * Configure Min Max (drop-down menu only)
2200 * Navigation -> Show the node end (drop-down menu only)
2201 * Navigation -> Show the node start (drop-down menu only)
2202
2203 [[Image:images/DefaultCoolbarMenu.png]]<br>
2204
2205 === Implementing Optional Callbacks ===
2206
2207 The following chapters describe how to use all supported provider interfaces.
2208
2209 ==== Using the Paging Provider Interface ====
2210
2211 For scalability reasons, the paging provider interfaces exist to limit the number of messages displayed in the Sequence Diagram View at a time. For that, two interfaces exist: the basic paging provider and the advanced paging provider. When using the basic paging interface, actions for traversing page by page through the sequence diagram of a trace will be provided.
2212 <br>
2213 To use the basic paging provider, first the interface methods of the ''ISDPagingProvider'' have to be implemented by a class. (i.e. ''hasNextPage()'', ''hasPrevPage()'', ''nextPage()'', ''prevPage()'', ''firstPage()'' and ''endPage()''. Typically, this is implemented in the loader class. Secondly, the provider has to be set in the Sequence Diagram View. This will be done in the ''setViewer()'' method of the loader class. Lastly, the paging provider has to be removed from the view, when the ''dispose()'' method of the loader class is called.
2214
2215 <pre>
public class SampleLoader implements IUml2SDLoader, ISDPagingProvider {
    //...
    private int page = 0;

    @Override
    public void dispose() {
        if (fSdView != null) {
            fSdView.resetProviders();
        }
    }

    @Override
    public void setViewer(SDView arg0) {
        fSdView = arg0;
        fSdView.setSDPagingProvider(this);
        createFrame();
    }

    private void createSecondFrame() {
        Frame testFrame = new Frame();
        testFrame.setName("SecondFrame");
        Lifeline lifeline = new Lifeline();
        lifeline.setName("LifeLine 0");
        testFrame.addLifeLine(lifeline);
        lifeline = new Lifeline();
        lifeline.setName("LifeLine 1");
        testFrame.addLifeLine(lifeline);
        for (int i = 1; i < 5; i++) {
            SyncMessage message = new SyncMessage();
            message.autoSetStartLifeline(testFrame.getLifeline(0));
            message.autoSetEndLifeline(testFrame.getLifeline(0));
            message.setName("Message " + i);
            testFrame.addMessage(message);

            SyncMessageReturn messageReturn = new SyncMessageReturn();
            messageReturn.autoSetStartLifeline(testFrame.getLifeline(0));
            messageReturn.autoSetEndLifeline(testFrame.getLifeline(0));
            messageReturn.setName("Message return " + i);
            testFrame.addMessage(messageReturn);

            ExecutionOccurrence occ = new ExecutionOccurrence();
            occ.setStartOccurrence(testFrame.getSyncMessage(i - 1).getEventOccurrence());
            occ.setEndOccurrence(testFrame.getSyncMessageReturn(i - 1).getEventOccurrence());
            testFrame.getLifeline(0).addExecution(occ);
        }
        fSdView.setFrame(testFrame);
    }

    @Override
    public boolean hasNextPage() {
        return page == 0;
    }

    @Override
    public boolean hasPrevPage() {
        return page == 1;
    }

    @Override
    public void nextPage() {
        page = 1;
        createSecondFrame();
    }

    @Override
    public void prevPage() {
        page = 0;
        createFrame();
    }

    @Override
    public void firstPage() {
        page = 0;
        createFrame();
    }

    @Override
    public void lastPage() {
        page = 1;
        createSecondFrame();
    }
    //...
}

</pre>

When running the example application, new actions will be shown in the coolbar and the coolbar menu. <br>

[[Image:images/PageProviderAdded.png]]

<br><br>
To use the advanced paging provider, the interface ''ISDAdvancedPagingProvider'' has to be implemented. It extends the basic paging provider. The methods ''currentPage()'', ''pagesCount()'' and ''pageNumberChanged()'' have to be added.
<br>
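Building on the two-page example above, a minimal sketch of these additional methods could look as follows. This is illustrative only: a real loader would compute the page count from its indexed data instead of hard-coding it.

<pre>
public class SampleLoader implements IUml2SDLoader, ISDAdvancedPagingProvider {
    //...
    private int page = 0;

    @Override
    public int currentPage() {
        return page;
    }

    @Override
    public int pagesCount() {
        // The sample only ever creates two frames
        return 2;
    }

    @Override
    public void pageNumberChanged(int pageNumber) {
        page = pageNumber;
        if (page == 0) {
            createFrame();
        } else {
            createSecondFrame();
        }
    }
    //...
}
</pre>

The provider is still set with ''setSDPagingProvider()'' in ''setViewer()'', so that the view can offer direct page navigation in addition to the page-by-page actions.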

==== Using the Find Provider Interface ====

For finding nodes in a sequence diagram two interfaces exist: one for basic finding and one for extended finding. The basic find comes with a dialog box for entering find criteria as regular expressions. These find criteria can be used to execute the find. Find criteria are persisted in the Eclipse workspace.
<br>
For the extended find provider interface an ''org.eclipse.jface.action.Action'' class has to be provided. The actual find handling has to be implemented and triggered by the action.
<br>
Only one can be active at a time. If the extended find provider is defined, it supersedes the basic find provider.
<br>
To use the basic find provider, first the interface methods of the ''ISDFindProvider'' have to be implemented by a class. Typically, this is implemented in the loader class. Add the ''ISDFindProvider'' to the list of implemented interfaces, implement the methods ''find()'' and ''cancel()'', and set the provider in the ''setViewer()'' method as well as remove the provider in the ''dispose()'' method of the loader class. Please note that the ''ISDFindProvider'' extends the interface ''ISDGraphNodeSupporter'', whose methods (''isNodeSupported()'' and ''getNodeName()'') have to be implemented, too. The following shows an example implementation. Please note that only searching for lifelines and Sync Messages is supported, and that the find will always find only the first occurrence of the pattern to match.

<pre>
public class SampleLoader implements IUml2SDLoader, ISDPagingProvider, ISDFindProvider {
    //...
    @Override
    public void dispose() {
        if (fSdView != null) {
            fSdView.resetProviders();
        }
    }

    @Override
    public void setViewer(SDView arg0) {
        fSdView = arg0;
        fSdView.setSDPagingProvider(this);
        fSdView.setSDFindProvider(this);
        createFrame();
    }

    @Override
    public boolean isNodeSupported(int nodeType) {
        switch (nodeType) {
        case ISDGraphNodeSupporter.LIFELINE:
        case ISDGraphNodeSupporter.SYNCMESSAGE:
            return true;
        default:
            return false;
        }
    }

    @Override
    public String getNodeName(int nodeType, String loaderClassName) {
        switch (nodeType) {
        case ISDGraphNodeSupporter.LIFELINE:
            return "Lifeline";
        case ISDGraphNodeSupporter.SYNCMESSAGE:
            return "Sync Message";
        default:
            return "";
        }
    }

    @Override
    public boolean find(Criteria criteria) {
        Frame frame = fSdView.getFrame();
        if (criteria.isLifeLineSelected()) {
            for (int i = 0; i < frame.lifeLinesCount(); i++) {
                if (criteria.matches(frame.getLifeline(i).getName())) {
                    fSdView.getSDWidget().moveTo(frame.getLifeline(i));
                    return true;
                }
            }
        }
        if (criteria.isSyncMessageSelected()) {
            for (int i = 0; i < frame.syncMessageCount(); i++) {
                if (criteria.matches(frame.getSyncMessage(i).getName())) {
                    fSdView.getSDWidget().moveTo(frame.getSyncMessage(i));
                    return true;
                }
            }
        }
        return false;
    }

    @Override
    public void cancel() {
        // reset find parameters
    }
    //...
}
</pre>

When running the example application, the find action will be shown in the coolbar and the coolbar menu. <br>
[[Image:images/FindProviderAdded.png]]

To find a sequence diagram node press the find button of the coolbar (see above). A new dialog box will open. Enter a regular expression in the ''Matching String'' text box, select the node types (e.g. Sync Message) and press '''Find'''. If found, the corresponding node will be selected. If not found, the dialog box will indicate so. <br>
[[Image:images/FindDialog.png]]<br>

Note that the find dialog can also be opened with the keyboard shortcut CTRL+F.

==== Using the Filter Provider Interface ====

For filtering of sequence diagram elements two interfaces exist: one for basic filtering and one for extended filtering. The basic filtering comes with two dialogs, one for entering filter criteria as regular expressions and one for selecting the filters to be used. Multiple filters can be active at a time. Filter criteria are persisted in the Eclipse workspace.
<br>
To use the basic filter provider, first the interface method of the ''ISDFilterProvider'' has to be implemented by a class. Typically, this is implemented in the loader class. Add the ''ISDFilterProvider'' to the list of implemented interfaces, implement the method ''filter()'' and set the provider in the ''setViewer()'' method as well as remove the provider in the ''dispose()'' method of the loader class. Please note that the ''ISDFilterProvider'' extends the interface ''ISDGraphNodeSupporter'', whose methods (''isNodeSupported()'' and ''getNodeName()'') have to be implemented, too. <br>
Note that no example implementation of ''filter()'' is provided.
<br>

<pre>
public class SampleLoader implements IUml2SDLoader, ISDPagingProvider, ISDFindProvider, ISDFilterProvider {
    //...
    @Override
    public void dispose() {
        if (fSdView != null) {
            fSdView.resetProviders();
        }
    }

    @Override
    public void setViewer(SDView arg0) {
        fSdView = arg0;
        fSdView.setSDPagingProvider(this);
        fSdView.setSDFindProvider(this);
        fSdView.setSDFilterProvider(this);
        createFrame();
    }

    @Override
    public boolean filter(List<FilterCriteria> list) {
        return false;
    }
    //...
}
</pre>

When running the example application, the filter action will be shown in the coolbar menu. <br>
[[Image:images/HidePatternsMenuItem.png]]

To filter, select '''Hide Patterns...''' in the coolbar menu. A new dialog box will open. <br>
[[Image:images/DialogHidePatterns.png]]

To add a new filter press '''Add...'''. A new dialog box will open. Enter a regular expression in the ''Matching String'' text box, select the node types (e.g. Sync Message) and press '''Create'''. <br>
[[Image:images/DialogHidePatterns.png]] <br>

Back in the Hide Patterns dialog, select one or more filters and press '''OK'''.

To use the extended filter provider, the interface ''ISDExtendedFilterProvider'' has to be implemented. It will provide an ''org.eclipse.jface.action.Action'' class containing the actual filter handling and filter algorithm.

==== Using the Extended Action Bar Provider Interface ====

The extended action bar provider can be used to add customized actions to the Sequence Diagram View.
To use the extended action bar provider, first the interface method of the interface ''ISDExtendedActionBarProvider'' has to be implemented by a class. Typically, this is implemented in the loader class. Add the ''ISDExtendedActionBarProvider'' to the list of implemented interfaces, implement the method ''supplementCoolbarContent()'' and set the provider in the ''setViewer()'' method as well as remove the provider in the ''dispose()'' method of the loader class. <br>

<pre>
public class SampleLoader implements IUml2SDLoader, ISDPagingProvider, ISDFindProvider, ISDFilterProvider, ISDExtendedActionBarProvider {
    //...
    @Override
    public void dispose() {
        if (fSdView != null) {
            fSdView.resetProviders();
        }
    }

    @Override
    public void setViewer(SDView arg0) {
        fSdView = arg0;
        fSdView.setSDPagingProvider(this);
        fSdView.setSDFindProvider(this);
        fSdView.setSDFilterProvider(this);
        fSdView.setSDExtendedActionBarProvider(this);
        createFrame();
    }

    @Override
    public void supplementCoolbarContent(IActionBars iactionbars) {
        Action action = new Action("Refresh") {
            @Override
            public void run() {
                System.out.println("Refreshing...");
            }
        };
        iactionbars.getMenuManager().add(action);
        iactionbars.getToolBarManager().add(action);
    }
    //...
}
</pre>

When running the example application, all new actions will be added to the coolbar and coolbar menu according to the implementation of ''supplementCoolbarContent()''. <br>
For the example above the coolbar and coolbar menu will look as follows.

[[Image:images/SupplCoolbar.png]]

==== Using the Properties Provider Interface ====

This interface can be used to provide property information. A property provider which returns an ''IPropertySheetPage'' (see plug-in ''org.eclipse.ui.views'') has to be implemented and set in the Sequence Diagram View. <br>

To use the property provider, first the interface method of the ''ISDPropertiesProvider'' has to be implemented by a class. Typically, this is implemented in the loader class. Add the ''ISDPropertiesProvider'' to the list of implemented interfaces, implement the method ''getPropertySheetEntry()'' and set the provider in the ''setViewer()'' method as well as remove the provider in the ''dispose()'' method of the loader class.
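A minimal, hypothetical sketch could look as follows. It assumes that the setter on ''SDView'' is named ''setSDPropertiesProvider()'', following the naming pattern of the other providers, and it simply returns a standard ''PropertySheetPage'':

<pre>
public class SampleLoader implements IUml2SDLoader, ISDPropertiesProvider {
    //...
    @Override
    public IPropertySheetPage getPropertySheetEntry() {
        // Return a standard or custom property sheet page that shows
        // the properties of the selected sequence diagram node
        return new PropertySheetPage();
    }

    @Override
    public void setViewer(SDView arg0) {
        fSdView = arg0;
        fSdView.setSDPropertiesProvider(this); // assumed setter name
        createFrame();
    }
    //...
}
</pre>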

Please refer to the following Eclipse articles for more information about properties and tabbed properties.
*[http://www.eclipse.org/articles/Article-Properties-View/properties-view.html Take control of your properties]
*[http://www.eclipse.org/articles/Article-Tabbed-Properties/tabbed_properties_view.html The Eclipse Tabbed Properties View]

==== Using the Collapse Provider Interface ====

This interface can be used to define a provider whose responsibility is to collapse two selected lifelines. This can be used to hide a pair of lifelines.

To use the collapse provider, first the interface method of the ''ISDCollapseProvider'' has to be implemented by a class. Typically, this is implemented in the loader class. Add the ''ISDCollapseProvider'' to the list of implemented interfaces, implement the method ''collapseTwoLifelines()'' and set the provider in the ''setViewer()'' method as well as remove the provider in the ''dispose()'' method of the loader class.
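As an illustrative sketch only (the collapse behavior itself is application specific and not part of the tutorial plug-in):

<pre>
public class SampleLoader implements IUml2SDLoader, ISDCollapseProvider {
    //...
    @Override
    public void collapseTwoLifelines(Lifeline first, Lifeline second) {
        // Application specific: hide the two selected lifelines, for
        // example by rebuilding the frame without them or by showing
        // one combined lifeline in their place
    }
    //...
}
</pre>

As with the other providers, the collapse provider is set in ''setViewer()'' and removed in ''dispose()''.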

==== Using the Selection Provider Service ====

The Sequence Diagram View comes with a built-in selection provider service to which listeners can be added. To use the selection provider service, the interface ''ISelectionListener'' of plug-in ''org.eclipse.ui'' has to be implemented. Typically, this is implemented in the loader class. Firstly, add the ''ISelectionListener'' interface to the list of implemented interfaces, implement the method ''selectionChanged()'' and set the listener in the ''setViewer()'' method as well as remove the listener in the ''dispose()'' method of the loader class.

<pre>
public class SampleLoader implements IUml2SDLoader, ISDPagingProvider, ISDFindProvider, ISDFilterProvider, ISDExtendedActionBarProvider, ISelectionListener {
    //...
    @Override
    public void dispose() {
        if (fSdView != null) {
            PlatformUI.getWorkbench().getActiveWorkbenchWindow().getSelectionService().removePostSelectionListener(this);
            fSdView.resetProviders();
        }
    }

    @Override
    public String getTitleString() {
        return "Sample Diagram";
    }

    @Override
    public void setViewer(SDView arg0) {
        fSdView = arg0;
        PlatformUI.getWorkbench().getActiveWorkbenchWindow().getSelectionService().addPostSelectionListener(this);
        fSdView.setSDPagingProvider(this);
        fSdView.setSDFindProvider(this);
        fSdView.setSDFilterProvider(this);
        fSdView.setSDExtendedActionBarProvider(this);
        createFrame();
    }

    @Override
    public void selectionChanged(IWorkbenchPart part, ISelection selection) {
        ISelection sel = PlatformUI.getWorkbench().getActiveWorkbenchWindow().getSelectionService().getSelection();
        if (sel instanceof StructuredSelection) {
            StructuredSelection stSel = (StructuredSelection) sel;
            if (stSel.getFirstElement() instanceof BaseMessage) {
                BaseMessage msg = (BaseMessage) stSel.getFirstElement();
                System.out.println("Message '" + msg.getName() + "' selected.");
            }
        }
    }
    //...
}
</pre>

=== Printing a Sequence Diagram ===

To print the whole sequence diagram or only parts of it, select the Sequence Diagram View and select '''File -> Print...''' or type the key combination ''CTRL+P''. A new print dialog will open. <br>

[[Image:images/PrintDialog.png]] <br>

Fill in all the relevant information, select '''Printer...''' to choose the printer and then press '''OK'''.

=== Using one Sequence Diagram View with Multiple Loaders ===

A Sequence Diagram View definition can be used with multiple sequence diagram loaders. However, the active loader to be used when opening the view has to be set. For this, define an Eclipse action or command and assign the current loader to the view. Here is a code snippet for that:

<pre>
public class OpenSDView extends AbstractHandler {
    @Override
    public Object execute(ExecutionEvent event) throws ExecutionException {
        try {
            IWorkbenchPage persp = TmfUiPlugin.getDefault().getWorkbench().getActiveWorkbenchWindow().getActivePage();
            SDView view = (SDView) persp.showView("org.eclipse.linuxtools.ust.examples.ui.componentinteraction");
            LoadersManager.getLoadersManager().createLoader("org.eclipse.tracecompass.tmf.ui.views.uml2sd.impl.TmfUml2SDSyncLoader", view);
        } catch (PartInitException e) {
            throw new ExecutionException("PartInitException caught: ", e);
        }
        return null;
    }
}
</pre>

=== Downloading the Tutorial ===

Use the following link to download the source code of the tutorial: [https://wiki.eclipse.org/images/7/79/SamplePluginTC.zip Plug-in of Tutorial].

== Integration of Tracing and Monitoring Framework with Sequence Diagram Framework ==

In the previous sections the Sequence Diagram Framework has been described and a tutorial was provided. The following sections describe the integration of the Sequence Diagram Framework with other features of TMF. Together they form a powerful framework to analyze and visualize the content of traces. The integration is explained using the reference implementation of a UML2 sequence diagram loader which is part of the TMF UI delivery. The reference implementation can be used as is, can be sub-classed, or can simply serve as an example for other sequence diagram loaders.

=== Reference Implementation ===

A Sequence Diagram View Extension is defined in the plug-in TMF UI as well as a uml2SDLoader Extension with the reference loader.

[[Image:images/ReferenceExtensions.png]]

=== Used Sequence Diagram Features ===

Besides the default features of the Sequence Diagram Framework, the reference implementation uses the following additional features:
*Advanced paging
*Basic finding
*Basic filtering
*Selection Service

==== Advanced paging ====

The reference loader implements the ''ISDAdvancedPagingProvider'' interface. Please refer to section [[#Using the Paging Provider Interface | Using the Paging Provider Interface]] for more details about the advanced paging feature.

==== Basic finding ====

The reference loader implements the ''ISDFindProvider'' interface. The user can search for ''Lifelines'' and ''Interactions''. The find is done across pages. If the expression to match is not on the current page, a new thread is started to search on other pages. If the expression is found, the corresponding page is shown and the searched item is selected. If not found, a message is displayed in the ''Progress View'' of Eclipse. Please refer to section [[#Using the Find Provider Interface | Using the Find Provider Interface]] for more details about the basic find feature.

==== Basic filtering ====

The reference loader implements the ''ISDFilterProvider'' interface. The user can filter on ''Lifelines'' and ''Interactions''. Please refer to section [[#Using the Filter Provider Interface | Using the Filter Provider Interface]] for more details about the basic filter feature.

==== Selection Service ====

The reference loader implements the ''ISelectionListener'' interface. When an interaction is selected, a ''TmfTimeSynchSignal'' is broadcast (see [[#TMF Signal Framework | TMF Signal Framework]]). Please also refer to section [[#Using the Selection Provider Service | Using the Selection Provider Service]] for more details about the selection service.

=== Used TMF Features ===

The reference implementation uses the following features of TMF:
*TMF Experiment and Trace for accessing traces
*Event Request Framework to request TMF events from the experiment and respective traces
*Signal Framework for broadcasting and receiving TMF signals for synchronization purposes

==== TMF Experiment and Trace for accessing traces ====

The reference loader uses TMF Experiments to access traces and to request data from the traces.

==== TMF Event Request Framework ====

The reference loader uses the TMF Event Request Framework to request events from the experiment and its traces.

When opening a trace (which is triggered by the signal ''TmfTraceSelectedSignal'') or when opening the Sequence Diagram View after a trace had been opened previously, a TMF background request is initiated to index the trace and to fill in the first page of the sequence diagram. The purpose of the indexing is to store time ranges for pages of 10000 messages per page. This allows moving quickly to a certain page in a trace without having to re-parse it from the beginning. This request is called the indexing request.
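The paging arithmetic described above can be sketched in isolation. The following self-contained example (not part of the reference loader; the names are illustrative) shows how, with a fixed page size of 10000 messages, the page containing a given message rank and the total page count can be computed:

```java
// Illustrative paging arithmetic for a fixed page size of 10000 messages
public class PageIndex {
    static final int MESSAGES_PER_PAGE = 10000;

    /** Page (0-based) that contains the message with the given rank. */
    static int pageOf(long messageRank) {
        return (int) (messageRank / MESSAGES_PER_PAGE);
    }

    /** Number of pages needed for the given total message count. */
    static int pageCount(long totalMessages) {
        return (int) ((totalMessages + MESSAGES_PER_PAGE - 1) / MESSAGES_PER_PAGE);
    }

    public static void main(String[] args) {
        System.out.println(pageOf(25000));    // page 2 (0-based)
        System.out.println(pageCount(25000)); // 3 pages
    }
}
```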

When switching pages, a TMF foreground event request is initiated to retrieve the corresponding events from the experiment. It uses the time range stored in the index for the respective page.

A third type of event request is issued for finding specific data across pages.

==== TMF Signal Framework ====

The reference loader extends the class ''TmfComponent''. By doing so, the loader is registered as a TMF signal handler for sending and receiving TMF signals. The loader implements signal handlers for the following TMF signals:
*''TmfTraceSelectedSignal''
This signal indicates that a trace or experiment was selected. When receiving this signal the indexing request is initiated and the first page is displayed after receiving the relevant information.
*''TmfTraceClosedSignal''
This signal indicates that a trace or experiment was closed. When receiving this signal the loader resets its data and a blank page is loaded in the Sequence Diagram View.
*''TmfTimeSynchSignal''
This signal indicates that a new time or time range has been selected. It contains a begin and end time. If a single time is selected then the begin and end time are the same. When receiving this signal the corresponding message matching the begin time is selected in the Sequence Diagram View. If necessary, the page is changed.
*''TmfRangeSynchSignal''
This signal indicates that a new time range is in focus. When receiving this signal the loader loads the page which corresponds to the start time of the time range signal. The message with the start time will be in focus.

Besides acting on received signals, the reference loader also sends signals. A ''TmfTimeSynchSignal'' is broadcast with the timestamp of the message which was selected in the Sequence Diagram View. A ''TmfRangeSynchSignal'' is sent when a page is changed in the Sequence Diagram View. The start timestamp of the time range sent is the timestamp of the first message. The end timestamp sent is the timestamp of the first message plus the current time range window. The current time range window is the time window that was indicated in the last received ''TmfRangeSynchSignal''.

=== Supported Traces ===

The reference implementation is able to analyze traces from a single component that traces the interaction with other components. For example, a server node could have trace information about its interaction with client nodes. The server node could be traced and then analyzed using TMF, and the Sequence Diagram Framework of TMF could be used to visualize the interactions with the client nodes.<br>

Note that combined traces of multiple components that contain trace information about the same interactions are not supported in the reference implementation!

=== Trace Format ===

The reference implementation in class ''TmfUml2SDSyncLoader'' in package ''org.eclipse.tracecompass.tmf.ui.views.uml2sd.impl'' analyzes events of type ''ITmfEvent'' and creates events of type ''ITmfSyncSequenceDiagramEvent'' if the ''ITmfEvent'' contains all relevant information. The parsing algorithm looks as follows:

<pre>
/**
 * @param tmfEvent Event to parse for sequence diagram event details
 * @return sequence diagram event if details are available else null
 */
protected ITmfSyncSequenceDiagramEvent getSequenceDiagramEvent(ITmfEvent tmfEvent) {
    // type = .*RECEIVE.* or .*SEND.*
    // content = sender:<sender name>,receiver:<receiver name>,signal:<signal name>
    String eventType = tmfEvent.getType().toString();
    if (eventType.contains(Messages.TmfUml2SDSyncLoader_EventTypeSend) || eventType.contains(Messages.TmfUml2SDSyncLoader_EventTypeReceive)) {
        Object sender = tmfEvent.getContent().getField(Messages.TmfUml2SDSyncLoader_FieldSender);
        Object receiver = tmfEvent.getContent().getField(Messages.TmfUml2SDSyncLoader_FieldReceiver);
        Object name = tmfEvent.getContent().getField(Messages.TmfUml2SDSyncLoader_FieldSignal);
        if ((sender instanceof ITmfEventField) && (receiver instanceof ITmfEventField) && (name instanceof ITmfEventField)) {
            return new TmfSyncSequenceDiagramEvent(tmfEvent,
                    ((ITmfEventField) sender).getValue().toString(),
                    ((ITmfEventField) receiver).getValue().toString(),
                    ((ITmfEventField) name).getValue().toString());
        }
    }
    return null;
}
</pre>

The analysis looks for event type strings containing ''SEND'' and ''RECEIVE''. If the event type matches these keywords, the analyzer will look for the strings ''sender'', ''receiver'' and ''signal'' in the event fields of type ''ITmfEventField''. If all the data is found a sequence diagram event can be created using this information. Note that Sync Messages are assumed, which means start and end time are the same.

=== How to use the Reference Implementation ===

An example CTF (Common Trace Format) trace is provided that contains trace events with sequence diagram information. To download the reference trace, use the following link: [https://wiki.eclipse.org/images/3/35/ReferenceTrace.zip Reference Trace].

Run an Eclipse application with Trace Compass 0.1.0 or later installed. To open the Reference Sequence Diagram View, select '''Window -> Show View -> Other... -> Tracing -> Sequence Diagram'''. <br>
[[Image:images/ShowTmfSDView.png]]<br>

A blank Sequence Diagram View will open.

Then import the reference trace to the '''Project Explorer''' using the '''Import Trace Package...''' menu option.<br>
[[Image:images/ImportTracePackage.png]]

Next, open the trace by double-clicking on the trace element in the '''Project Explorer'''. The trace will be opened and the Sequence Diagram view will be filled.<br>
[[Image:images/ReferenceSeqDiagram.png]]<br>

Now the reference implementation can be explored. To demonstrate the view features try the following things:
*Select a message in the Sequence Diagram. As a result the corresponding event will be selected in the Events View.
*Select an event in the Events View. As a result the corresponding message in the Sequence Diagram View will be selected. If necessary, the page will be changed.
*In the Events View, press the ''End'' key. As a result, the Sequence Diagram view will jump to the last page.
*In the Events View, press the ''Home'' key. As a result, the Sequence Diagram view will jump to the first page.
*In the Sequence Diagram View select the find button. Enter the expression '''REGISTER.*''', select '''Search for Interaction''' and press '''Find'''. As a result the corresponding message will be selected in the Sequence Diagram and the corresponding event in the Events View will be selected. Select '''Find''' again and the next occurrence will be selected. Since the second occurrence is on a different page than the first, the corresponding page will be loaded.
*In the Sequence Diagram View, select menu item '''Hide Patterns...'''. Add the filter '''BALL.*''' for '''Interaction''' only and select '''OK'''. As a result all messages with name ''BALL_REQUEST'' and ''BALL_REPLY'' will be hidden. To remove the filter, select menu item '''Hide Patterns...''', deselect the corresponding filter and press '''OK'''. All the messages will be shown again.<br>

=== Extending the Reference Loader ===

In some cases it might be necessary to change the implementation of the analysis of each ''TmfEvent'' for the generation of ''Sequence Diagram Events''. For that, just extend the class ''TmfUml2SDSyncLoader'' and override the method ''protected ITmfSyncSequenceDiagramEvent getSequenceDiagramEvent(ITmfEvent tmfEvent)'' with your own implementation.
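A hypothetical subclass could look as follows; the field names ''source'', ''destination'' and ''operation'' are made up for the example and would have to match the custom trace type:

<pre>
public class SampleUml2SDLoader extends TmfUml2SDSyncLoader {

    @Override
    protected ITmfSyncSequenceDiagramEvent getSequenceDiagramEvent(ITmfEvent tmfEvent) {
        // Hypothetical field names of a custom trace type
        Object sender = tmfEvent.getContent().getField("source");
        Object receiver = tmfEvent.getContent().getField("destination");
        Object name = tmfEvent.getContent().getField("operation");
        if ((sender instanceof ITmfEventField) && (receiver instanceof ITmfEventField) && (name instanceof ITmfEventField)) {
            return new TmfSyncSequenceDiagramEvent(tmfEvent,
                    ((ITmfEventField) sender).getValue().toString(),
                    ((ITmfEventField) receiver).getValue().toString(),
                    ((ITmfEventField) name).getValue().toString());
        }
        return null;
    }
}
</pre>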

= CTF Parser =

== CTF Format ==
CTF is a format used to store traces. It is self-describing, binary and made to be easy to write to.
Before going further, note that the full specification of the CTF file format can be found at http://www.efficios.com/ .

For the purpose of the reader some basic description will be given. A CTF trace is typically made of several files, all in the same folder.

These files can be split into two types:
* Metadata
* Event streams

=== Metadata ===
The metadata is either raw text or packetized text. It is TSDL encoded. It contains a description of the type of data in the event streams. It can grow over time if new events are added to a trace, but it will never overwrite what is already there.

=== Event Streams ===
The event streams are one file per stream per CPU. These streams are binary and packet based. The streams store events and event information (i.e. lost events). The event data is stored in headers and field payloads.

So if you have two streams (channels) "channel1" and "channel2" and 4 cores, you will have the following files in your trace directory: "channel1_0", "channel1_1", "channel1_2", "channel1_3", "channel2_0", "channel2_1", "channel2_2" and "channel2_3"

== Reading a trace ==
In order to read a CTF trace, two steps are required:
* The metadata must be read to know how to read the events.
* The events must be read.

The metadata is written in a subset of the C language called TSDL. To read it, it is first depacketized (if it is not in plain text), then the raw text is parsed by an ANTLR grammar. The parsing is done in two phases. There is a lexer (CTFLexer.g) which separates the metadata text into tokens. The tokens are then pattern matched using the parser (CTFParser.g) to form an AST. This AST is walked through using "IOStructGen.java" to populate streams and traces in the trace parent object.

When the metadata is loaded and read, the trace object will be populated with 3 items:
* the event definitions available per stream: a definition is a description of the datatype.
* the event declarations available per stream: this will save declaration creation on a per event basis. They will all be created in advance, just not populated.
* the beginning of a packet index.

Now all the trace readers for the event streams have everything they need to read a trace. They will each point to one file, and read the file from packet to packet. Every time the trace reader changes packet, the index is updated with the new packet's information. The readers are in a priority queue and sorted by timestamp. This ensures that the events are read in a sequential order. They are also sorted by file name so that in the eventuality that two events occur at the same time, they stay in the same order.
2760
2761 == Seeking in a trace ==
2762 The reason for maintaining an index is to speed up seeks. In the case that a user wishes to seek to a certain timestamp, they just have to find the index entry that contains the timestamp, and go there to iterate in that packet until the proper event is found. This will reduce the search time by a factor of about 8000 for a 256k packet size (kernel default).
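The index-assisted seek amounts to a binary search over packet start timestamps, after which only the events inside the one matching packet are scanned. ''PacketIndex'' below is a hypothetical simplification of the real index:

```java
import java.util.Arrays;

// Sketch of an index-assisted seek: each index entry records the first
// timestamp of a packet, so locating the packet containing a timestamp is
// a binary search instead of a linear scan of every event.
class PacketIndex {
    /** Start timestamps of each packet, in ascending order. */
    final long[] packetStartTimes;

    PacketIndex(long[] packetStartTimes) {
        this.packetStartTimes = packetStartTimes;
    }

    /** Returns the index of the packet that contains the given timestamp. */
    int packetFor(long timestamp) {
        int pos = Arrays.binarySearch(packetStartTimes, timestamp);
        if (pos >= 0) {
            return pos;
        }
        // binarySearch returns -(insertionPoint) - 1 on a miss; the packet
        // starting at or before the timestamp is at insertionPoint - 1.
        return Math.max(0, -pos - 2);
    }
}
```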
2763
2764 == Interfacing to TMF ==
2765 The trace can be read easily now but the data is still awkward to extract.
2766
2767 === CtfLocation ===
2768 A location in a given trace: it is currently the timestamp of an event and the index of the event. The index shows, for a given timestamp, whether it is the first, second or nth element with that timestamp.
2769
2770 === CtfTmfTrace ===
2771 The CtfTmfTrace is a wrapper for the standard CTF trace that allows it to perform the following actions:
2772 * '''initTrace()''' creates a trace
2773 * '''validateTrace()''' checks whether the trace is a CTF trace
2774 * '''getLocationRatio()''' tells how far into the trace a given location is
2775 * '''seekEvent()''' sets the cursor to a certain point in a trace
2776 * '''readNextEvent()''' reads the next event and then advances the cursor
2777 * '''getTraceProperties()''' gets the 'env' structures of the metadata
2778
2779 === CtfIterator ===
2780 The CtfIterator is a wrapper to the CTF file reader. It behaves like an iterator on a trace. However, it contains a file pointer and thus cannot be duplicated too often or the system will run out of file handles. To alleviate the situation, a pool of iterators is created at the very beginning and stored in the CtfTmfTrace. They can be retrieved by calling the getIterator() method.
2781
2782 === CtfIteratorManager ===
2783 Since each CtfIterator will have a file reader, the OS will run out of handles if too many iterators are spawned. The solution is to use the iterator manager. This will allow the user to get an iterator. If there is a context at the requested position, the manager will return that one, if not, a context will be selected at random and set to the correct location. Using random replacement minimizes contention as it will settle quickly at a new balance point.
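The random-replacement pool can be sketched as follows; ''FakeIterator'' and ''IteratorPool'' are invented stand-ins, not the real CtfIteratorManager API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Random;

// Stand-in for the file-backed CtfIterator (which holds an open handle).
class FakeIterator {
    long position;
}

// Bounded pool of iterators keyed by position. On a miss with a full pool,
// a random entry is evicted and re-seeked, keeping the number of open
// "handles" constant, as described for the CtfIteratorManager.
class IteratorPool {
    private final int capacity;
    private final Map<Long, FakeIterator> pool = new HashMap<>();
    private final Random random = new Random();

    IteratorPool(int capacity) {
        this.capacity = capacity;
    }

    FakeIterator getIterator(long position) {
        FakeIterator it = pool.remove(position);
        if (it == null) {
            if (pool.size() < capacity) {
                it = new FakeIterator();   // still room: open a new iterator
            } else {
                // Evict a random entry and reuse (re-seek) its iterator.
                Long victim = pool.keySet().stream()
                        .skip(random.nextInt(pool.size())).findFirst().get();
                it = pool.remove(victim);
            }
            it.position = position;        // "seek" to the requested spot
        }
        pool.put(position, it);
        return it;
    }

    int openIterators() {
        return pool.size();
    }
}
```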
2784
2785 === CtfTmfContext ===
2786 The CtfTmfContext implements the ITmfContext type. It is the CTF equivalent of TmfContext. It has a CtfLocation and points to an iterator in the CtfTmfTrace iterator pool as well as the parent trace. It is made to be cloned easily and not affect system resources much. Contexts behave much like C file pointers (FILE*) but they can be copied until one runs out of RAM.
2787
2788 === CtfTmfTimestamp ===
2789 The CtfTmfTimestamp takes a CTF time (normally a long int) and formats it as a TmfTimestamp, allowing it to be compared to other timestamps. The time is stored with the UTC offset already applied. It also features a simple toString() function that allows it to output the time in more human-readable ways: "yyyy/mm/dd/hh:mm:ss.nnnnnnnnn ns" for example. An additional feature is the getDelta() function that allows two timestamps to be subtracted, showing the time difference between A and B.
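A minimal sketch of the delta and sub-second formatting on plain nanosecond longs; ''NanoTimestamp'' is a made-up stand-in, not the real CtfTmfTimestamp:

```java
// Sketch of nanosecond-timestamp helpers in the spirit of CtfTmfTimestamp:
// a delta between two nanosecond values and a readable rendering of the
// seconds/nanoseconds split. Plain longs stand in for the real TMF types.
class NanoTimestamp {
    final long nanos;

    NanoTimestamp(long nanos) {
        this.nanos = nanos;
    }

    /** Difference between this timestamp and another (this - other). */
    long getDelta(NanoTimestamp other) {
        return this.nanos - other.nanos;
    }

    /** "ss.nnnnnnnnn" rendering of the seconds and nanoseconds parts. */
    String secondsString() {
        return String.format("%d.%09d",
                nanos / 1_000_000_000L, nanos % 1_000_000_000L);
    }
}
```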
2790
2791 === CtfTmfEvent ===
2792 The CtfTmfEvent is an ITmfEvent that is used to wrap event declarations and event definitions from the CTF side into easier to read and parse chunks of information. It is a final class with final fields made to be instantiated very often without incurring performance costs. Most of the information is already available. It should be noted that one type of event can appear, called a "lost event": these are synthetic events that do not exist in the trace. They will not appear in other trace readers such as babeltrace.
2793
2794 === Other ===
2795 There are other helper files that format given events for views; they are simpler and the architecture does not depend on them.
2796
2797 === Limitations ===
2798 For the moment, live trace reading is not supported, as there are no sources of traces to test on.
2799
2800 = Event matching and trace synchronization =
2801
2802 Event matching consists in taking an event from a trace and linking it to another event in a possibly different trace. The example that comes to mind is matching network packets sent from one traced machine to another traced machine. These matches can be used to synchronize traces.
2803
2804 Trace synchronization consists in taking traces, taken on different machines, with a different time reference, and finding the formula to transform the timestamps of some of the traces, so that they all have the same time reference.
2805
2806 == Event matching interfaces ==
2807
2808 Here's a description of the major parts involved in event matching. These classes are all in the ''org.eclipse.tracecompass.tmf.core.event.matching'' package:
2809
2810 * '''ITmfEventMatching''': Controls the event matching process
2811 * '''ITmfMatchEventDefinition''': Describes how events are matched
2812 * '''IMatchProcessingUnit''': Processes the matched events
2813
2814 == Implementation details and how to extend it ==
2815
2816 === ITmfEventMatching interface and derived classes ===
2817
2818 This interface and its default abstract implementation '''TmfEventMatching''' control the event matching itself. Their only public method is ''matchEvents''. The class needs to manage how to setup the traces, and any initialization or finalization procedures.
2819
2820 The abstract class generates an event request for each trace from which events are matched and waits for the request to complete before calling the one from another trace. The ''handleData'' method from the request calls the ''matchEvent'' method that needs to be implemented in children classes.
2821
2822 Class '''TmfNetworkEventMatching''' is a concrete implementation of this interface. It applies to all use cases where an ''in'' event can be matched with an ''out'' event (''in'' and ''out'' can be the same event, with different data). It creates a '''TmfEventDependency''' between the source and destination events. The dependency is added to the processing unit.
2823
2824 To match events requiring other mechanisms (for instance, a series of events can be matched with another series of events), one would need to implement another class either extending '''TmfEventMatching''' or implementing '''ITmfEventMatching'''. It would most probably also require a new '''ITmfMatchEventDefinition''' implementation.
2825
2826 === ITmfMatchEventDefinition interface and its derived classes ===
2827
2828 These are the classes that describe how to actually match specific events together.
2829
2830 The '''canMatchTrace''' method will tell if a definition is compatible with a given trace.
2831
2832 The '''getEventKey''' method will return a key for an event that uniquely identifies this event and will match the key from another event.
2833
2834 Typically, there would be a match definition abstract class/interface per event matching type.
2835
2836 The interface '''ITmfNetworkMatchDefinition''' adds the ''getDirection'' method to indicate whether this event is an ''in'' or ''out'' event to be matched with one from the opposite direction.
2837
2838 As examples, two concrete network match definitions have been implemented in the ''org.eclipse.tracecompass.internal.lttng2.kernel.core.event.matching'' package for two compatible methods of matching TCP packets (See the Trace Compass User Guide on ''trace synchronization'' for information on those matching methods). Each one tells which events need to be present in the metadata of a CTF trace for this matching method to be applicable. It also returns the field values from each event that will uniquely match 2 events together.
2839
2840 === IMatchProcessingUnit interface and derived classes ===
2841
2842 While matching events is an exercise in itself, it's what to do with the match that really makes this functionality interesting. This is the job of the '''IMatchProcessingUnit''' interface.
2843
2844 '''TmfEventMatches''' provides a default implementation that only stores the matches to count them. When a new match is obtained, the ''addMatch'' method is called with the match, and the processing unit can do whatever needs to be done with it.
2845
2846 A match processing unit can be an analysis in itself. For example, trace synchronization is done through such a processing unit. One just needs to set the processing unit in the TmfEventMatching constructor.
2847
2848 == Code examples ==
2849
2850 === Using network packets matching in an analysis ===
2851
2852 This example shows how one can create a processing unit inline to create a link between two events. In this example, the code already uses an event request, so there is no need here to call the ''matchEvents'' method, which would only create another request.
2853
2854 <pre>
2855 class MyAnalysis extends TmfAbstractAnalysisModule {
2856
2857 private TmfNetworkEventMatching tcpMatching;
2858
2859 ...
2860
2861 protected void executeAnalysis() {
2862
2863 IMatchProcessingUnit matchProcessing = new IMatchProcessingUnit() {
2864 @Override
2865 public void matchingEnded() {
2866 }
2867
2868 @Override
2869 public void init(ITmfTrace[] fTraces) {
2870 }
2871
2872 @Override
2873 public int countMatches() {
2874 return 0;
2875 }
2876
2877 @Override
2878 public void addMatch(TmfEventDependency match) {
2879 log.debug("we got a tcp match! " + match.getSourceEvent().getContent() + " " + match.getDestinationEvent().getContent());
2880 TmfEvent source = match.getSourceEvent();
2881 TmfEvent destination = match.getDestinationEvent();
2882 /* Create a link between the two events */
2883 }
2884 };
2885
2886 ITmfTrace[] traces = { getTrace() };
2887 tcpMatching = new TmfNetworkEventMatching(traces, matchProcessing);
2888 tcpMatching.initMatching();
2889
2890         MyEventRequest request = new MyEventRequest(this, 0);
2891 getTrace().sendRequest(request);
2892 }
2893
2894 public void analyzeEvent(TmfEvent event) {
2895 ...
2896 tcpMatching.matchEvent(event, 0);
2897 ...
2898 }
2899
2900 ...
2901
2902 }
2903
2904 class MyEventRequest extends TmfEventRequest {
2905
2906 private final MyAnalysis analysis;
2907
2908 MyEventRequest(MyAnalysis analysis, int traceno) {
2909 super(CtfTmfEvent.class,
2910 TmfTimeRange.ETERNITY,
2911 0,
2912 TmfDataRequest.ALL_DATA,
2913 ITmfDataRequest.ExecutionType.FOREGROUND);
2914 this.analysis = analysis;
2915 }
2916
2917 @Override
2918 public void handleData(final ITmfEvent event) {
2919 super.handleData(event);
2920 if (event != null) {
2921 analysis.analyzeEvent(event);
2922 }
2923 }
2924 }
2925 </pre>
2926
2927 === Match network events from UST traces ===
2928
2929 Suppose a client-server application is instrumented using LTTng-UST. Traces are collected on the server and on some clients on different machines. The traces can be synchronized using network event matching.
2930
2931 The following metadata describes the events:
2932
2933 <pre>
2934 event {
2935 name = "myapp:send";
2936 id = 0;
2937 stream_id = 0;
2938 loglevel = 13;
2939 fields := struct {
2940 integer { size = 32; align = 8; signed = 1; encoding = none; base = 10; } _sendto;
2941 integer { size = 64; align = 8; signed = 1; encoding = none; base = 10; } _messageid;
2942 integer { size = 64; align = 8; signed = 1; encoding = none; base = 10; } _data;
2943 };
2944 };
2945
2946 event {
2947 name = "myapp:receive";
2948 id = 1;
2949 stream_id = 0;
2950 loglevel = 13;
2951 fields := struct {
2952 integer { size = 32; align = 8; signed = 1; encoding = none; base = 10; } _from;
2953 integer { size = 64; align = 8; signed = 1; encoding = none; base = 10; } _messageid;
2954 integer { size = 64; align = 8; signed = 1; encoding = none; base = 10; } _data;
2955 };
2956 };
2957 </pre>
2958
2959 One would need to write an event match definition for those 2 events as follows:
2960
2961 <pre>
2962 public class MyAppUstEventMatching implements ITmfNetworkMatchDefinition {
2963
2964 @Override
2965 public Direction getDirection(ITmfEvent event) {
2966 String evname = event.getType().getName();
2967 if (evname.equals("myapp:receive")) {
2968 return Direction.IN;
2969 } else if (evname.equals("myapp:send")) {
2970 return Direction.OUT;
2971 }
2972 return null;
2973 }
2974
2975 @Override
2976 public IEventMatchingKey getEventKey(ITmfEvent event) {
2977 IEventMatchingKey key;
2978         String evname = event.getType().getName();
2979 if (evname.equals("myapp:receive")) {
2980 key = new MyEventMatchingKey(event.getContent().getField("from").getValue(),
2981 event.getContent().getField("messageid").getValue());
2982 } else {
2983 key = new MyEventMatchingKey(event.getContent().getField("sendto").getValue(),
2984 event.getContent().getField("messageid").getValue());
2985 }
2986
2987 return key;
2988 }
2989
2990 @Override
2991 public boolean canMatchTrace(ITmfTrace trace) {
2992 if (!(trace instanceof CtfTmfTrace)) {
2993 return false;
2994 }
2995 CtfTmfTrace ktrace = (CtfTmfTrace) trace;
2996 String[] events = { "myapp:receive", "myapp:send" };
2997 return ktrace.hasAtLeastOneOfEvents(events);
2998 }
2999
3000 @Override
3001 public MatchingType[] getApplicableMatchingTypes() {
3002 MatchingType[] types = { MatchingType.NETWORK };
3003 return types;
3004 }
3005
3006 }
3007 </pre>
3008
3009 Somewhere in code that will be executed at the start of the plugin (like in the Activator), the following code will have to be run:
3010
3011 <pre>
3012 TmfEventMatching.registerMatchObject(new MyAppUstEventMatching());
3013 </pre>
3014
3015 Now, simply adding the traces to an experiment and clicking the '''Synchronize traces''' menu item will synchronize the traces using the new definition for event matching.
3016
3017 == Trace synchronization ==
3018
3019 Trace synchronization classes and interfaces are located in the ''org.eclipse.tracecompass.tmf.core.synchronization'' package.
3020
3021 === Synchronization algorithm ===
3022
3023 Synchronization algorithms are used to synchronize traces from events matched between traces. After synchronization, traces taken on different machines with different time references see their timestamps modified such that they all use the same time reference (typically, the time of at least one of the traces). With traces from different machines, it is impossible to have perfect synchronization, so the result is a best approximation that takes network latency into account.
3024
3025 The abstract class '''SynchronizationAlgorithm''' is a processing unit for matches. New synchronization algorithms must extend this one; it already contains the functions to get the timestamp transforms for different traces.
3026
3027 The ''fully incremental convex hull'' synchronization algorithm is the default synchronization algorithm.
3028
3029 While the synchronization system provisions for more synchronization algorithms, there is not yet a way to select one; the experiment's trace synchronization uses the default algorithm. To test a new synchronization algorithm, the synchronization should be called directly like this:
3030
3031 <pre>
3032 SynchronizationAlgorithm syncAlgo = new MyNewSynchronizationAlgorithm();
3033 syncAlgo = SynchronizationManager.synchronizeTraces(syncFile, traces, syncAlgo, true);
3034 </pre>
3035
3036 === Timestamp transforms ===
3037
3038 Timestamp transforms are the formulae used to transform the timestamps from a trace into the reference time. The '''ITmfTimestampTransform''' is the interface to implement to add a new transform.
3039
3040 The following classes implement this interface:
3041
3042 * '''TmfTimestampTransform''': the default transform. It cannot be instantiated; it has a single static object, TmfTimestampTransform.IDENTITY, which returns the original timestamp.
3043 * '''TmfTimestampTransformLinear''': transforms the timestamp using a linear formula: ''f(t) = at + b'', where ''a'' and ''b'' are computed by the synchronization algorithm.
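The linear formula ''f(t) = at + b'' and the way two such transforms compose can be sketched as follows; ''LinearTransform'' is a simplified stand-in (the real TmfTimestampTransformLinear needs more care with precision over nanosecond ranges than plain doubles provide):

```java
// Sketch of a linear timestamp transform, f(t) = a*t + b, where a and b
// would be computed by the synchronization algorithm. The identity
// transform is the special case a = 1, b = 0.
class LinearTransform {
    final double a;
    final double b;

    LinearTransform(double a, double b) {
        this.a = a;
        this.b = b;
    }

    long transform(long timestamp) {
        return Math.round(a * timestamp + b);
    }

    /** Composition: apply 'other' first, then this one. */
    LinearTransform composeWith(LinearTransform other) {
        return new LinearTransform(a * other.a, a * other.b + b);
    }

    static final LinearTransform IDENTITY = new LinearTransform(1.0, 0.0);
}
```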
3044
3045 One could extend the interface for other timestamp transforms, for instance to have a transform where the formula would change over the course of the trace.
3046
3047 == Todo ==
3048
3049 Here's a list of features not yet implemented that would enhance trace synchronization and event matching:
3050
3051 * Ability to select a synchronization algorithm
3052 * Implement a better way to select the reference trace instead of arbitrarily taking the first in alphabetical order (for instance, the minimum spanning tree algorithm by Masoume Jabbarifar (article on the subject not published yet))
3053 * Ability to join traces from the same host so that even if one of the traces is not synchronized with the reference trace, it will take the same timestamp transform as the one on the same machine.
3054 * Instead of having the timestamp transforms per trace, have the timestamp transform as part of an experiment context, so that the trace's specific analyses, like the state system, remain in the original trace's time base, but are transformed only when needed for an experiment analysis.
3055 * Add more views to display the synchronization information (only textual statistics are available for now)
3056
3057 = Analysis Framework =
3058
3059 Analysis modules are useful to tell the user exactly what can be done with a trace. The analysis framework provides an easy way to access and execute the modules and open the various outputs available.
3060
3061 Analyses can have parameters they can use in their code. They also have outputs registered to them to display the results from their execution.
3062
3063 == Creating a new module ==
3064
3065 All analysis modules must implement the '''IAnalysisModule''' interface from the o.e.l.tmf.core project. An abstract class, '''TmfAbstractAnalysisModule''', provides a good base implementation. It is strongly suggested to use it as a superclass of any new analysis.
3066
3067 === Example ===
3068
3069 This example shows how to add a simple analysis module for an LTTng kernel trace with two parameters. It also specifies two mandatory events by overriding '''getAnalysisRequirements'''. The analysis requirements are further explained in the section [[#Providing requirements to analyses]].
3070
3071 <pre>
3072 public class MyLttngKernelAnalysis extends TmfAbstractAnalysisModule {
3073
3074 public static final String PARAM1 = "myparam";
3075 public static final String PARAM2 = "myotherparam";
3076
3077 @Override
3078 public Iterable<TmfAnalysisRequirement> getAnalysisRequirements() {
3079
3080 // initialize the requirement: domain and events
3081 TmfAnalysisRequirement domainReq = new TmfAnalysisRequirement(SessionConfigStrings.CONFIG_ELEMENT_DOMAIN);
3082 domainReq.addValue(SessionConfigStrings.CONFIG_DOMAIN_TYPE_KERNEL, ValuePriorityLevel.MANDATORY);
3083
3084 List<String> requiredEvents = ImmutableList.of("sched_switch", "sched_wakeup");
3085 TmfAnalysisRequirement eventReq = new TmfAnalysisRequirement(SessionConfigStrings.CONFIG_ELEMENT_EVENT,
3086 requiredEvents, ValuePriorityLevel.MANDATORY);
3087
3088 return ImmutableList.of(domainReq, eventReq);
3089 }
3090
3091 @Override
3092 protected void canceling() {
3093 /* The job I am running in is being cancelled, let's clean up */
3094 }
3095
3096 @Override
3097 protected boolean executeAnalysis(final IProgressMonitor monitor) {
3098 /*
3099 * I am running in an Eclipse job, and I already know I can execute
3100 * on a given trace.
3101 *
3102 * In the end, I will return true if I was successfully completed or
3103 * false if I was either interrupted or something wrong occurred.
3104 */
3105 Object param1 = getParameter(PARAM1);
3106         int param2 = (Integer) getParameter(PARAM2);
3106         return true;
3107     }
3108
3109 @Override
3110 public Object getParameter(String name) {
3111 Object value = super.getParameter(name);
3112 /* Make sure the value of param2 is of the right type. For sake of
3113 simplicity, the full parameter format validation is not presented
3114 here */
3115 if ((value != null) && name.equals(PARAM2) && (value instanceof String)) {
3116 return Integer.parseInt((String) value);
3117 }
3118 return value;
3119 }
3120
3121 }
3122 </pre>
3123
3124 === Available base analysis classes and interfaces ===
3125
3126 The following are available as base classes for analysis modules. They also extend the abstract '''TmfAbstractAnalysisModule'''.
3127
3128 * '''TmfStateSystemAnalysisModule''': A base analysis module that builds one state system. A module extending this class only needs to provide a state provider and the type of state system backend to use. All state systems should now use this base class as it also contains all the methods to actually create the state system with a given backend.
3129
3130 The following interfaces can optionally be implemented by analysis modules if they use their functionalities. For instance, some utility views, like the State System Explorer, may have access to the module's data through these interfaces.
3131
3132 * '''ITmfAnalysisModuleWithStateSystems''': Modules implementing this have one or more state systems included in them. For example, a module may "hide" 2 state system modules for its internal workings. By implementing this interface, it tells that it has state systems and can return them if required.
3133
3134 === How it works ===
3135
3136 Analyses are managed through the '''TmfAnalysisManager'''. The analysis manager is a singleton in the application and keeps track of all available analysis modules, with the help of '''IAnalysisModuleHelper'''. It can be queried to get the available analysis modules, either all of them or only those for a given tracetype. The helpers contain the non-trace specific information on an analysis module: its id, its name, the tracetypes it applies to, etc.
3137
3138 When a trace is opened, the helpers for the applicable analyses create new instances of the analysis modules. The analyses are then kept in a field of the trace and can be executed automatically or on demand.
3139
3140 The analysis is executed by calling the '''IAnalysisModule#schedule()''' method. This method makes sure the analysis is executed only once and, if it is already running, it won't start again. The analysis itself is run inside an Eclipse job that can be cancelled by the user or the application. The developer must consider the progress monitor that comes as a parameter of the '''executeAnalysis()''' method, to handle the proper cancellation of the processing. The '''IAnalysisModule#waitForCompletion()''' method will block the calling thread until the analysis is completed. The method will return whether the analysis was successfully completed or if it was cancelled.
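The schedule-once contract described above can be sketched with plain Java concurrency primitives; ''OnceAnalysis'' below is a stand-in that uses a thread instead of an Eclipse job, and omits progress-monitor handling:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of the schedule()/waitForCompletion() contract: schedule() starts
// the work at most once even if called repeatedly, and waitForCompletion()
// blocks the caller until the work is done.
class OnceAnalysis {
    private final AtomicBoolean scheduled = new AtomicBoolean(false);
    private final CountDownLatch done = new CountDownLatch(1);
    volatile int executions = 0;

    void schedule() {
        if (!scheduled.compareAndSet(false, true)) {
            return; // already running or completed: do not start again
        }
        new Thread(() -> {
            executions++;       // the actual analysis work would go here
            done.countDown();
        }).start();
    }

    boolean waitForCompletion() throws InterruptedException {
        done.await();
        return true;            // a real module would report cancellation
    }
}
```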
3141
3142 A running analysis can be cancelled by calling the '''IAnalysisModule#cancel()''' method. This will set the analysis as done, so it cannot start again unless it is explicitly reset. This is done by calling the protected method '''resetAnalysis'''.
3143
3144 == Telling TMF about the analysis module ==
3145
3146 Now that the analysis module class exists, it is time to hook it to the rest of TMF so that it appears under the traces in the project explorer. The way to do so is to add an extension of type ''org.eclipse.linuxtools.tmf.core.analysis'' to a plugin, either through the ''Extensions'' tab of the Plug-in Manifest Editor or by editing directly the plugin.xml file.
3147
3148 The following code shows what the resulting plugin.xml file should look like.
3149
3150 <pre>
3151 <extension
3152 point="org.eclipse.linuxtools.tmf.core.analysis">
3153 <module
3154 id="my.lttng.kernel.analysis.id"
3155 name="My LTTng Kernel Analysis"
3156 analysis_module="my.plugin.package.MyLttngKernelAnalysis"
3157 automatic="true">
3158 <parameter
3159 name="myparam">
3160 </parameter>
3161 <parameter
3162 default_value="3"
3163                name="myotherparam">
3163          </parameter>
3164          <tracetype
3165 class="org.eclipse.tracecompass.lttng2.kernel.core.trace.LttngKernelTrace">
3166 </tracetype>
3167 </module>
3168 </extension>
3169 </pre>
3170
3171 This defines an analysis module where the ''analysis_module'' attribute corresponds to the module class and must implement IAnalysisModule. This module has 2 parameters: ''myparam'' and ''myotherparam'', which has a default value of 3. The ''tracetype'' element tells which tracetypes this analysis applies to. There can be many tracetypes. Also, the ''automatic'' attribute of the module indicates whether this analysis should be run when the trace is opened, or wait for the user's explicit request.
3172
3173 Note that with these extension points, it is possible to use the same module class for more than one analysis (with different ids and names). That is a desirable behavior. For instance, a third party plugin may add a new tracetype different from the one the module is meant for, but on which the analysis can run. Also, different analyses could provide different results with the same module class but with different default values of parameters.
3174
3175 == Attaching outputs and views to the analysis module ==
3176
3177 Analyses will typically produce outputs the user can examine. Outputs can be a text dump, a .dot file, an XML file, a view, etc. All output types must implement the '''IAnalysisOutput''' interface.
3178
3179 An output can be registered to an analysis module at any moment by calling the '''IAnalysisModule#registerOutput()''' method. Analyses themselves may know what outputs are available and may register them in the analysis constructor or after analysis completion.
3180
3181 The various concrete output types are:
3182
3183 * '''TmfAnalysisViewOutput''': It takes a view ID as parameter and, when selected, opens the view.
3184
3185 === Using the extension point to add outputs ===
3186
3187 Analysis outputs can also be hooked to an analysis using the same extension point ''org.eclipse.linuxtools.tmf.core.analysis'' in the plugin.xml file. Outputs can be matched either to a specific analysis identified by an ID, or to all analysis modules extending or implementing a given class or interface.
3188
3189 The following code shows how to add a view output to the analysis defined above directly in the plugin.xml file. This extension does not have to be in the same plugin as the extension defining the analysis. Typically, an analysis module can be defined in a core plugin, along with some outputs that do not require UI elements. Other outputs, like views, which need UI elements, will be defined in a ui plugin.
3190
3191 <pre>
3192 <extension
3193 point="org.eclipse.linuxtools.tmf.core.analysis">
3194 <output
3195 class="org.eclipse.tracecompass.tmf.ui.analysis.TmfAnalysisViewOutput"
3196 id="my.plugin.package.ui.views.myView">
3197 <analysisId
3198 id="my.lttng.kernel.analysis.id">
3199 </analysisId>
3200 </output>
3201 <output
3202 class="org.eclipse.tracecompass.tmf.ui.analysis.TmfAnalysisViewOutput"
3203 id="my.plugin.package.ui.views.myMoreGenericView">
3204 <analysisModuleClass
3205 class="my.plugin.package.core.MyAnalysisModuleClass">
3206 </analysisModuleClass>
3207 </output>
3208 </extension>
3209 </pre>
3210
3211 == Providing help for the module ==
3212
3213 For now, the only way to provide a meaningful help message to the user is by overriding the '''IAnalysisModule#getHelpText()''' method and returning a string that will be displayed in a message box.
3214
3215 What still needs to be implemented is a way to add full user/developer documentation as a mediawiki text file for each module and automatically add it to the Eclipse Help. Clicking on the Help menu item of an analysis module would then open the corresponding page in the help.
3216
3217 == Using analysis parameter providers ==
3218
3219 An analysis may have parameters that can be used during its execution. Default values can be set when describing the analysis module in the plugin.xml file, or they can use the '''IAnalysisParameterProvider''' interface to provide values for parameters. '''TmfAbstractAnalysisParamProvider''' provides an abstract implementation of this interface, that automatically notifies the module of a parameter change.
3220
3221 === Example parameter provider ===
3222
3223 The following example shows how to have a parameter provider listen to a selection in the LTTng kernel Control Flow view and send the thread id to the analysis.
3224
3225 <pre>
3226 public class MyLttngKernelParameterProvider extends TmfAbstractAnalysisParamProvider {
3227
3228 private ControlFlowEntry fCurrentEntry = null;
3229
3230 private static final String NAME = "My Lttng kernel parameter provider"; //$NON-NLS-1$
3231
3232 private ISelectionListener selListener = new ISelectionListener() {
3233 @Override
3234 public void selectionChanged(IWorkbenchPart part, ISelection selection) {
3235 if (selection instanceof IStructuredSelection) {
3236 Object element = ((IStructuredSelection) selection).getFirstElement();
3237 if (element instanceof ControlFlowEntry) {
3238 ControlFlowEntry entry = (ControlFlowEntry) element;
3239 setCurrentThreadEntry(entry);
3240 }
3241 }
3242 }
3243 };
3244
3245 /*
3246 * Constructor
3247 */
3248 public MyLttngKernelParameterProvider() {
3249 super();
3250 registerListener();
3251 }
3252
3253 @Override
3254 public String getName() {
3255 return NAME;
3256 }
3257
3258 @Override
3259 public Object getParameter(String name) {
3260 if (fCurrentEntry == null) {
3261 return null;
3262 }
3263 if (name.equals(MyLttngKernelAnalysis.PARAM1)) {
3264 return fCurrentEntry.getThreadId();
3265 }
3266 return null;
3267 }
3268
3269 @Override
3270 public boolean appliesToTrace(ITmfTrace trace) {
3271 return (trace instanceof LttngKernelTrace);
3272 }
3273
3274 private void setCurrentThreadEntry(ControlFlowEntry entry) {
3275 if (!entry.equals(fCurrentEntry)) {
3276 fCurrentEntry = entry;
3277 this.notifyParameterChanged(MyLttngKernelAnalysis.PARAM1);
3278 }
3279 }
3280
3281 private void registerListener() {
3282 final IWorkbench wb = PlatformUI.getWorkbench();
3283
3284 final IWorkbenchPage activePage = wb.getActiveWorkbenchWindow().getActivePage();
3285
3286 /* Add the listener to the control flow view */
3287         IViewPart view = activePage.findView(ControlFlowView.ID);
3288         if (view != null) {
3289             view.getSite().getWorkbenchWindow().getSelectionService().addPostSelectionListener(selListener);
3291 }
3292 }
3293
3294 }
3295 </pre>
3296
3297 === Register the parameter provider to the analysis ===
3298
3299 To have the parameter provider class register to analysis modules, it must first register through the analysis manager. It can be done in a plugin's activator as follows:
3300
3301 <pre>
3302 @Override
3303 public void start(BundleContext context) throws Exception {
3304 /* ... */
3305     TmfAnalysisManager.registerParameterProvider("my.lttng.kernel.analysis.id", MyLttngKernelParameterProvider.class);
3306 }
3307 </pre>
3308
3309 where '''MyLttngKernelParameterProvider''' will be registered to analysis ''"my.lttng.kernel.analysis.id"''. When the analysis module is created, the new module will register automatically to the singleton parameter provider instance. Only one module is registered to a parameter provider at a given time, the one corresponding to the currently selected trace.
3310
3311 == Providing requirements to analyses ==
3312
3313 === Analysis requirement provider API ===
3314
3315 A requirement defines the needs of an analysis. For example, an analysis could need an event named ''"sched_switch"'' in order to be properly executed. The requirements are represented by the class '''TmfAnalysisRequirement'''. Since '''IAnalysisModule''' extends the '''IAnalysisRequirementProvider''' interface, all analysis modules must provide their requirements. If the analysis module extends '''TmfAbstractAnalysisModule''', it has the choice between overriding the requirements getter ('''IAnalysisRequirementProvider#getAnalysisRequirements()''') or not, since the abstract class returns an empty collection by default (no requirements).
3316
3317 === Requirement values ===
3318
3319 When instantiating a requirement, the developer needs to specify a type to which all the values added to the requirement will be linked. In the earlier example, there would be an ''"event"'' or ''"eventName"'' type. The type is represented by a string, like all values added to the requirement object. With an 'event' type requirement, a trace generator like the LTTng Control could automatically enable the required events. This is possible by calling the '''TmfAnalysisRequirementHelper''' class. Another point we have to take into consideration is the priority level of each value added to the requirement object. The enum '''TmfAnalysisRequirement#ValuePriorityLevel''' gives the choice between '''ValuePriorityLevel#MANDATORY''' and '''ValuePriorityLevel#OPTIONAL'''. That way, we can tell if an analysis can run without a value or not. To add values, one must call '''TmfAnalysisRequirement#addValue()'''.
3320
3321 Moreover, information can be added to requirements. That way, the developer can explicitly give help details at the requirement level instead of at the analysis level (which would just be a general help text). To add information to a requirement, the method '''TmfAnalysisRequirement#addInformation()''' must be called. Adding information is not mandatory.
3322
3323 === Example of providing requirements ===
3324
In this example, we will implement a method that initializes requirement objects and returns them in the '''IAnalysisRequirementProvider#getAnalysisRequirements()''' getter. The example method will return a set with two requirements: the first one indicates the events needed by a specific analysis and the second one indicates the domain to which the analysis applies. In the event type requirement, we indicate that the analysis needs one mandatory event and one optional one.
3326
3327 <pre>
3328 @Override
3329 public Iterable<TmfAnalysisRequirement> getAnalysisRequirements() {
3330 Set<TmfAnalysisRequirement> requirements = new HashSet<>();
3331
3332 /* Create requirements of type 'event' and 'domain' */
3333 TmfAnalysisRequirement eventRequirement = new TmfAnalysisRequirement("event");
3334 TmfAnalysisRequirement domainRequirement = new TmfAnalysisRequirement("domain");
3335
3336 /* Add the values */
3337 domainRequirement.addValue("kernel", TmfAnalysisRequirement.ValuePriorityLevel.MANDATORY);
3338 eventRequirement.addValue("sched_switch", TmfAnalysisRequirement.ValuePriorityLevel.MANDATORY);
3339 eventRequirement.addValue("sched_wakeup", TmfAnalysisRequirement.ValuePriorityLevel.OPTIONAL);
3340
    /* Add information about the events */
3342 eventRequirement.addInformation("The event sched_wakeup is optional because it's not properly handled by this analysis yet.");
3343
3344 /* Add them to the set */
3345 requirements.add(domainRequirement);
3346 requirements.add(eventRequirement);
3347
3348 return requirements;
3349 }
3350 </pre>
3351
3352
3353 == TODO ==
3354
3355 Here's a list of features not yet implemented that would improve the analysis module user experience:
3356
* Implement help using the Eclipse Help facility (without forgetting a possible command-line request)
* The abstract class '''TmfAbstractAnalysisModule''' executes an analysis as a job, but nothing compels a developer to do so for an analysis implementing the '''IAnalysisModule''' interface. We should force the execution of the analysis as a job, either from the trace itself, using the TmfAnalysisManager, or by some other means.
* Views and outputs are often registered by the analyses themselves (often forcing them to be in the .ui packages because of the views), because there is no other easy way to do so. We should extend the analysis extension point so that .ui plug-ins or other third-party plug-ins can add outputs to a given analysis that resides in the core.
3360 * Improve the user experience with the analysis:
3361 ** Allow the user to select which analyses should be available, per trace or per project.
** Allow the user to view all available analyses even if no traces have been imported.
** Allow the user to generate traces for a given analysis, or generate a template for such a trace that can be sent as a parameter to the tracer.
3364 ** Give the user a visual status of the analysis: not executed, in progress, completed, error.
3365 ** Give a small screenshot of the output as icon for it.
** Allow specifying parameter values from the GUI.
3367 * Add the possibility for an analysis requirement to be composed of another requirement.
3368 * Generate a trace session from analysis requirements.
3369
3370
3371 = Performance Tests =
3372
Performance testing computes metrics (CPU time, memory usage, etc.) for some part of the code during its execution. These metrics can then be used as is for information on the system's execution, or they can be compared either with other execution scenarios, or with previous runs of the same scenario, for instance after some optimization has been done on the code.
3374
3375 For automatic performance metric computation, we use the ''org.eclipse.test.performance'' plugin, provided by the Eclipse Test Feature.
3376
3377 == Add performance tests ==
3378
3379 === Where ===
3380
Performance tests are unit tests, and they are added to the corresponding unit test plug-in. To separate performance tests from unit tests, a separate source folder, typically named ''perf'', is added to the plug-in.
3382
Tests are added to a package under the ''perf'' directory; the package name typically matches the name of the package it is testing. For each package, a class named '''AllPerfTests''' lists all the performance test classes inside this package. And, as for unit tests, a plug-in-level class named '''AllPerfTests''' lists all the packages' '''AllPerfTests''' classes.
3384
When adding performance tests for the first time in a plug-in, the plug-in's '''AllPerfTests''' class should be added to the global list of performance tests, found in package ''org.eclipse.tracecompass.alltests'', in class '''RunAllPerfTests'''. This will ensure that performance tests for the plug-in are run along with the other performance tests.
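A per-package or per-plug-in '''AllPerfTests''' class is typically a plain JUnit test suite. A minimal sketch could look like the following, where the package and benchmark class names are illustrative only:

<pre>
package org.eclipse.tracecompass.examples.tests.perf;

import org.junit.runner.RunWith;
import org.junit.runners.Suite;

/**
 * Lists the performance test classes of this package.
 * (The package and class names here are examples only.)
 */
@RunWith(Suite.class)
@Suite.SuiteClasses({
    AnalysisBenchmark.class
})
public class AllPerfTests {
}
</pre>

The plug-in-level '''AllPerfTests''' follows the same pattern, listing the packages' suite classes instead of individual test classes.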
3386
3387 === How ===
3388
TMF uses the org.eclipse.test.performance framework for performance tests. With it, performance metrics are taken automatically and, when a test is run multiple times, the average and standard deviation are computed. Results can optionally be stored to a database for later use.
3390
3391 Here is an example of how to use the test framework in a performance test:
3392
3393 <pre>
3394 public class AnalysisBenchmark {
3395
3396 private static final String TEST_ID = "org.eclipse.linuxtools#LTTng kernel analysis";
3397 private static final CtfTmfTestTrace testTrace = CtfTmfTestTrace.TRACE2;
3398 private static final int LOOP_COUNT = 10;
3399
3400 /**
3401 * Performance test
3402 */
3403 @Test
3404 public void testTrace() {
3405 assumeTrue(testTrace.exists());
3406
3407 /** Create a new performance meter for this scenario */
3408 Performance perf = Performance.getDefault();
3409 PerformanceMeter pm = perf.createPerformanceMeter(TEST_ID);
3410
3411 /** Optionally, tag this test for summary or global summary on a given dimension */
3412 perf.tagAsSummary(pm, "LTTng Kernel Analysis", Dimension.CPU_TIME);
3413 perf.tagAsGlobalSummary(pm, "LTTng Kernel Analysis", Dimension.CPU_TIME);
3414
3415 /** The test will be run LOOP_COUNT times */
3416 for (int i = 0; i < LOOP_COUNT; i++) {
3417
3418 /** Start each run of the test with new objects to avoid different code paths */
3419 try (IAnalysisModule module = new LttngKernelAnalysisModule();
3420 LttngKernelTrace trace = new LttngKernelTrace()) {
3421 module.setId("test");
3422 trace.initTrace(null, testTrace.getPath(), CtfTmfEvent.class);
3423 module.setTrace(trace);
3424
3425 /** The analysis execution is being tested, so performance metrics
3426 * are taken before and after the execution */
3427 pm.start();
3428 TmfTestHelper.executeAnalysis(module);
3429 pm.stop();
3430
3431 /*
3432 * Delete the supplementary files, so next iteration rebuilds
3433 * the state system.
3434 */
3435 File suppDir = new File(TmfTraceManager.getSupplementaryFileDir(trace));
3436 for (File file : suppDir.listFiles()) {
3437 file.delete();
3438 }
3439
3440 } catch (TmfAnalysisException | TmfTraceException e) {
3441 fail(e.getMessage());
3442 }
3443 }
3444
3445 /** Once the test has been run many times, committing the results will
3446 * calculate average, standard deviation, and, if configured, save the
3447 * data to a database */
3448 pm.commit();
3449 }
3450 }
3451
3452 </pre>
3453
3454 For more information, see [http://wiki.eclipse.org/Performance/Automated_Tests The Eclipse Performance Test How-to]
3455
3456 Some rules to help write performance tests are explained in section [[#ABC of performance testing | ABC of performance testing]].
3457
3458 === Run a performance test ===
3459
Performance tests are unit tests, so, just like unit tests, they can be run by right-clicking on a performance test class and selecting ''Run As'' -> ''JUnit Plug-in Test''.
3461
3462 By default, if no database has been configured, results will be displayed in the Console at the end of the test.
3463
3464 Here is the sample output from the test described in the previous section. It shows all the metrics that have been calculated during the test.
3465
3466 <pre>
3467 Scenario 'org.eclipse.linuxtools#LTTng kernel analysis' (average over 10 samples):
3468 System Time: 3.04s (95% in [2.77s, 3.3s]) Measurable effect: 464ms (1.3 SDs) (required sample size for an effect of 5% of mean: 94)
3469 Used Java Heap: -1.43M (95% in [-33.67M, 30.81M]) Measurable effect: 57.01M (1.3 SDs) (required sample size for an effect of 5% of stdev: 6401)
3470 Working Set: 14.43M (95% in [-966.01K, 29.81M]) Measurable effect: 27.19M (1.3 SDs) (required sample size for an effect of 5% of stdev: 6400)
3471 Elapsed Process: 3.04s (95% in [2.77s, 3.3s]) Measurable effect: 464ms (1.3 SDs) (required sample size for an effect of 5% of mean: 94)
3472 Kernel time: 621ms (95% in [586ms, 655ms]) Measurable effect: 60ms (1.3 SDs) (required sample size for an effect of 5% of mean: 39)
3473 CPU Time: 6.06s (95% in [5.02s, 7.09s]) Measurable effect: 1.83s (1.3 SDs) (required sample size for an effect of 5% of mean: 365)
3474 Hard Page Faults: 0 (95% in [0, 0]) Measurable effect: 0 (1.3 SDs) (required sample size for an effect of 5% of stdev: 6400)
3475 Soft Page Faults: 9.27K (95% in [3.28K, 15.27K]) Measurable effect: 10.6K (1.3 SDs) (required sample size for an effect of 5% of mean: 5224)
3476 Text Size: 0 (95% in [0, 0])
3477 Data Size: 0 (95% in [0, 0])
3478 Library Size: 32.5M (95% in [-12.69M, 77.69M]) Measurable effect: 79.91M (1.3 SDs) (required sample size for an effect of 5% of stdev: 6401)
3479 </pre>
3480
Results from performance tests can be saved automatically to a Derby database. Derby can be run either in embedded mode, locally on a machine, or on a server. More information on setting up Derby for performance tests can be found here: [http://wiki.eclipse.org/Performance/Automated_Tests The Eclipse Performance Test How-to]. The following documentation will show how to configure an Eclipse run configuration to store results on a Derby database located on a server.
3482
3483 Note that to store results in a derby database, the ''org.apache.derby'' plug-in must be available within your Eclipse. Since it is an optional dependency, it is not included in the target definition. It can be installed via the '''Orbit''' repository, in ''Help'' -> ''Install new software...''. If the '''Orbit''' repository is not listed, click on the latest one from [http://download.eclipse.org/tools/orbit/downloads/] and copy the link under ''Orbit Build Repository''.
3484
To store the data to a database, it needs to be configured in the run configuration. In ''Run'' -> ''Run configurations...'', under ''JUnit Plug-in Test'', find the run configuration that corresponds to the test you wish to run, or create one if it is not present yet.
3486
3487 In the ''Arguments'' tab, in the box under ''VM Arguments'', add on separate lines the following information
3488
3489 <pre>
3490 -Declipse.perf.dbloc=//javaderby.dorsal.polymtl.ca
3491 -Declipse.perf.config=build=mybuild;host=myhost;config=linux;jvm=1.7
3492 </pre>
3493
The ''eclipse.perf.dbloc'' parameter is the URL (or filename) of the Derby database. The database is by default named ''perfDB'', with username and password ''guest''/''guest''. If the database does not exist, it will be created, initialized and populated.
3495
The ''eclipse.perf.config'' parameter identifies a '''variation''': it typically identifies the build on which it is run (commitId and/or build date, etc.), the machine (host) on which it is run, the configuration of the system (for example Linux or Windows), the JVM, etc. That parameter is a list of ';'-separated key-value pairs. To be backward-compatible with the Eclipse Performance Tests Framework, the 4 keys mentioned above are mandatory, but any key-value pairs can be used.
3497
3498 == ABC of performance testing ==
3499
3500 Here follow some rules to help design good and meaningful performance tests.
3501
3502 === Determine what to test ===
3503
For tests to be significant, it is important to choose exactly what is to be tested and make sure it is reproducible in every run. To limit the amount of noise caused by the TMF framework, the performance test code should be tweaked so that only the method under test is run. For instance, a trace should not be "opened" (by calling the ''traceOpened()'' method) to test an analysis, since the ''traceOpened'' method will also trigger the indexing and the execution of all applicable automatic analyses.
3505
3506 For each code path to test, multiple scenarios can be defined. For instance, an analysis could be run on different traces, with different sizes. The results will show how the system scales and/or varies depending on the objects it is executed on.
3507
3508 The number of '''samples''' used to compute the results is also important. The code to test will typically be inside a '''for''' loop that runs exactly the same code each time for a given number of times. All objects used for the test must start in the same state at each iteration of the loop. For instance, any trace used during an execution should be disposed of at the end of the loop, and any supplementary file that may have been generated in the run should be deleted.
3509
3510 Before submitting a performance test to the code review, you should run it a few times (with results in the Console) and see if the standard deviation is not too large and if the results are reproducible.
3511
3512 === Metrics descriptions and considerations ===
3513
CPU time: CPU time represents the total time spent on the CPU by the current process during the test execution. It is the sum of the time spent by all threads. On one hand, it is more significant than the elapsed time, since it should be the same no matter how many CPU cores the computer has. On the other hand, since it includes the time of every thread, one has to make sure that only threads related to what is being tested are executed during that time, or else the results will include the times of those other threads. For an application like TMF, it is hard to control all the threads, and empirically, CPU time is found to vary a lot more than the system time from one run to the next.
3515
3516 System time (Elapsed time): The time between the start and the end of the execution. It will vary depending on the parallelization of the threads and the load of the machine.
3517
Kernel time: Time spent in kernel mode.
3519
3520 Used Java Heap: It is the difference between the memory used at the beginning of the execution and at the end. This metric may be useful to calculate the overall size occupied by the data generated by the test run, by forcing a garbage collection before taking the metrics at the beginning and at the end of the execution. But it will not show the memory used throughout the execution. There can be a large standard deviation. The reason for this is that when benchmarking methods that trigger tasks in different threads, like signals and/or analysis, these other threads might be in various states at each run of the test, which will impact the memory usage calculated. When using this metric, either make sure the method to test does not trigger external threads or make sure you wait for them to finish.
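The garbage-collection caveat above can be illustrated with plain JDK calls. This is only a sketch of the concept, not the Eclipse framework's own implementation: it samples the used heap after requesting a collection, so that collectable garbage does not skew the difference.

<pre>
public class HeapMeasureSketch {

    /** Sample the used heap, requesting a GC first to reduce noise. */
    static long usedHeap() {
        Runtime rt = Runtime.getRuntime();
        rt.gc(); // only a request; the JVM may not collect everything
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        long before = usedHeap();
        int[] data = new int[1_000_000]; // the "workload" under measurement
        data[0] = 1;
        long after = usedHeap();
        // The delta approximates the live data allocated by the workload
        System.out.println("Heap delta: " + (after - before) + " bytes");
    }
}
</pre>

If the workload spawns threads, their allocations land between the two samples as well, which is exactly why the metric varies when external threads are not controlled.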
3521
3522 = Network Tracing =
3523
3524 == Adding a protocol ==
3525
Supporting a new network protocol in TMF requires minimal effort. In this tutorial, the UDP protocol will be added to the list of supported protocols.
3527
3528 === Architecture ===
3529
3530 All the TMF pcap-related code is divided in three projects (not considering the tests plugins):
* '''org.eclipse.tracecompass.pcap.core''', which contains the parser that reads pcap files and constructs the different packets from a ByteBuffer. It also contains the means to build packet streams, which are conversations (lists of packets) between two endpoints. To add a protocol, almost all of the work will be in this project.
* '''org.eclipse.tracecompass.tmf.pcap.core''', which contains TMF-specific concepts and acts as a wrapper between TMF and the pcap parsing library. It only depends on org.eclipse.tracecompass.tmf.core and org.eclipse.tracecompass.pcap.core. To add a protocol, one file must be edited in this project.
3533 * '''org.eclipse.tracecompass.tmf.pcap.ui''', which contains all TMF pcap UI-specific concepts, such as the views and perspectives. No work is needed in that project.
3534
3535 === UDP Packet Structure ===
3536
UDP is a transport-layer protocol that guarantees neither message delivery nor in-order message reception. A UDP packet (datagram) has the following [http://en.wikipedia.org/wiki/User_Datagram_Protocol#Packet_structure structure]:
3538
3539 {| class="wikitable" style="margin: 0 auto; text-align: center;"
3540 |-
3541 ! style="border-bottom:none; border-right:none;"| ''Offsets''
3542 ! style="border-left:none;"| Octet
3543 ! colspan="8" | 0
3544 ! colspan="8" | 1
3545 ! colspan="8" | 2
3546 ! colspan="8" | 3
3547 |-
3548 ! style="border-top: none" | Octet
3549 ! <tt>Bit</tt>!!<tt>&nbsp;0</tt>!!<tt>&nbsp;1</tt>!!<tt>&nbsp;2</tt>!!<tt>&nbsp;3</tt>!!<tt>&nbsp;4</tt>!!<tt>&nbsp;5</tt>!!<tt>&nbsp;6</tt>!!<tt>&nbsp;7</tt>!!<tt>&nbsp;8</tt>!!<tt>&nbsp;9</tt>!!<tt>10</tt>!!<tt>11</tt>!!<tt>12</tt>!!<tt>13</tt>!!<tt>14</tt>!!<tt>15</tt>!!<tt>16</tt>!!<tt>17</tt>!!<tt>18</tt>!!<tt>19</tt>!!<tt>20</tt>!!<tt>21</tt>!!<tt>22</tt>!!<tt>23</tt>!!<tt>24</tt>!!<tt>25</tt>!!<tt>26</tt>!!<tt>27</tt>!!<tt>28</tt>!!<tt>29</tt>!!<tt>30</tt>!!<tt>31</tt>
3550 |-
3551 ! 0
3552 !<tt> 0</tt>
3553 | colspan="16" style="background:#fdd;"| Source port || colspan="16"| Destination port
3554 |-
3555 ! 4
3556 !<tt>32</tt>
3557 | colspan="16"| Length || colspan="16" style="background:#fdd;"| Checksum
3558 |}
3559
Knowing that, we can define a UDPPacket class that contains those fields.
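The table above maps directly to eight bytes in big-endian order. The following self-contained sketch, with made-up field values, shows how the four 16-bit fields would be decoded from a ByteBuffer (masking with 0xFFFF reads them as unsigned):

<pre>
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class UdpHeaderLayout {

    public static void main(String[] args) {
        // Build a hypothetical 8-byte UDP header: ports 53 -> 12345,
        // total length 8 (header only, no payload), checksum 0xABCD.
        ByteBuffer header = ByteBuffer.allocate(8).order(ByteOrder.BIG_ENDIAN);
        header.putShort((short) 53);     // bytes 0-1: source port
        header.putShort((short) 12345);  // bytes 2-3: destination port
        header.putShort((short) 8);      // bytes 4-5: length
        header.putShort((short) 0xABCD); // bytes 6-7: checksum
        header.flip();

        // Decode in the same order, as unsigned 16-bit values
        int sourcePort = header.getShort() & 0xFFFF;
        int destinationPort = header.getShort() & 0xFFFF;
        int totalLength = header.getShort() & 0xFFFF;
        int checksum = header.getShort() & 0xFFFF;

        System.out.println(sourcePort + " -> " + destinationPort
                + ", length " + totalLength
                + ", checksum 0x" + Integer.toHexString(checksum));
    }
}
</pre>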
3561
3562 === Creating the UDPPacket ===
3563
First, in org.eclipse.tracecompass.pcap.core, create a new package named '''org.eclipse.tracecompass.pcap.core.protocol.name''', where ''name'' is the name of the new protocol. In our case, the name is udp, so we create the package '''org.eclipse.tracecompass.pcap.core.protocol.udp'''. All our work goes in this package.
3565
In this package, we create a new class named UDPPacket that extends Packet. All new protocols must define a packet type that extends the abstract class Packet. We also add the following fields:
3567 * ''Packet'' '''fChildPacket''', which is the packet encapsulated by this UDP packet, if it exists. This field will be initialized by findChildPacket().
3568 * ''ByteBuffer'' '''fPayload''', which is the payload of this packet. Basically, it is the UDP packet without its header.
* ''int'' '''fSourcePort''', which is an unsigned 16-bit field that contains the source port of the packet (see packet structure).
* ''int'' '''fDestinationPort''', which is an unsigned 16-bit field that contains the destination port of the packet (see packet structure).
* ''int'' '''fTotalLength''', which is an unsigned 16-bit field that contains the total length (header + payload) of the packet.
* ''int'' '''fChecksum''', which is an unsigned 16-bit field that contains a checksum to verify the integrity of the data.
3573 * ''UDPEndpoint'' '''fSourceEndpoint''', which contains the source endpoint of the UDPPacket. The UDPEndpoint class will be created later in this tutorial.
3574 * ''UDPEndpoint'' '''fDestinationEndpoint''', which contains the destination endpoint of the UDPPacket.
* ''ImmutableMap<String, String>'' '''fFields''', which is a map that associates each packet field name (see the packet structure) with its value. These values will be displayed in the UI.
3576
3577 We also create the UDPPacket(PcapFile file, @Nullable Packet parent, ByteBuffer packet) constructor. The parameters are:
3578 * ''PcapFile'' '''file''', which is the pcap file to which this packet belongs.
* ''Packet'' '''parent''', which is the packet encapsulating this UDPPacket.
3580 * ''ByteBuffer'' '''packet''', which is a ByteBuffer that contains all the data necessary to initialize the fields of this UDPPacket. We will retrieve bytes from it during object construction.
3581
3582 The following class is obtained:
3583
3584 <pre>
3585 package org.eclipse.tracecompass.pcap.core.protocol.udp;
3586
3587 import java.nio.ByteBuffer;
import java.util.Map;

import org.eclipse.jdt.annotation.Nullable;
import org.eclipse.tracecompass.internal.pcap.core.endpoint.ProtocolEndpoint;
import org.eclipse.tracecompass.internal.pcap.core.packet.BadPacketException;
import org.eclipse.tracecompass.internal.pcap.core.packet.Packet;

import com.google.common.collect.ImmutableMap;
3593
3594 public class UDPPacket extends Packet {
3595
3596 private final @Nullable Packet fChildPacket;
3597 private final @Nullable ByteBuffer fPayload;
3598
3599 private final int fSourcePort;
3600 private final int fDestinationPort;
3601 private final int fTotalLength;
3602 private final int fChecksum;
3603
3604 private @Nullable UDPEndpoint fSourceEndpoint;
3605 private @Nullable UDPEndpoint fDestinationEndpoint;
3606
3607 private @Nullable ImmutableMap<String, String> fFields;
3608
3609 /**
3610 * Constructor of the UDP Packet class.
3611 *
3612 * @param file
3613 * The file that contains this packet.
3614 * @param parent
3615 * The parent packet of this packet (the encapsulating packet).
3616 * @param packet
3617 * The entire packet (header and payload).
3618 * @throws BadPacketException
3619 * Thrown when the packet is erroneous.
3620 */
3621 public UDPPacket(PcapFile file, @Nullable Packet parent, ByteBuffer packet) throws BadPacketException {
3622 super(file, parent, PcapProtocol.UDP);
3623 // TODO Auto-generated constructor stub
3624 }
3625
3626
3627 @Override
3628 public Packet getChildPacket() {
3629 // TODO Auto-generated method stub
3630 return null;
3631 }
3632
3633 @Override
3634 public ByteBuffer getPayload() {
3635 // TODO Auto-generated method stub
3636 return null;
3637 }
3638
3639 @Override
3640 public boolean validate() {
3641 // TODO Auto-generated method stub
3642 return false;
3643 }
3644
3645 @Override
3646 protected Packet findChildPacket() throws BadPacketException {
3647 // TODO Auto-generated method stub
3648 return null;
3649 }
3650
3651 @Override
3652 public ProtocolEndpoint getSourceEndpoint() {
3653 // TODO Auto-generated method stub
3654 return null;
3655 }
3656
3657 @Override
3658 public ProtocolEndpoint getDestinationEndpoint() {
3659 // TODO Auto-generated method stub
3660 return null;
3661 }
3662
3663 @Override
3664 public Map<String, String> getFields() {
3665 // TODO Auto-generated method stub
3666 return null;
3667 }
3668
3669 @Override
3670 public String getLocalSummaryString() {
3671 // TODO Auto-generated method stub
3672 return null;
3673 }
3674
3675 @Override
3676 protected String getSignificationString() {
3677 // TODO Auto-generated method stub
3678 return null;
3679 }
3680
3681 @Override
3682 public boolean equals(Object obj) {
3683 // TODO Auto-generated method stub
3684 return false;
3685 }
3686
3687 @Override
3688 public int hashCode() {
3689 // TODO Auto-generated method stub
3690 return 0;
3691 }
3692
3693 }
3694 </pre>
3695
3696 Now, we implement the constructor. It is done in four steps:
3697 * We initialize fSourceEndpoint, fDestinationEndpoint and fFields to null, since those are lazy-loaded. This allows faster construction of the packet and thus faster parsing.
* We initialize fSourcePort, fDestinationPort, fTotalLength and fChecksum using the ByteBuffer packet. Thanks to the packet data structure, we can simply call packet.getShort() to get each value. Since Java has no unsigned types, special care is taken to avoid negative numbers: we use the utility method ConversionHelper.unsignedShortToInt() to convert the value to an integer, and initialize the fields.
3699 * Now that the header is parsed, we take the rest of the ByteBuffer packet to initialize the payload, if there is one. To do this, we simply generate a new ByteBuffer starting from the current position.
* We initialize the field fChildPacket using the method findChildPacket().
3701
3702 The following constructor is obtained:
3703 <pre>
3704 public UDPPacket(PcapFile file, @Nullable Packet parent, ByteBuffer packet) throws BadPacketException {
    super(file, parent, PcapProtocol.UDP);
3706
3707 // The endpoints and fFields are lazy loaded. They are defined in the get*Endpoint()
3708 // methods.
3709 fSourceEndpoint = null;
3710 fDestinationEndpoint = null;
3711 fFields = null;
3712
3713 // Initialize the fields from the ByteBuffer
3714 packet.order(ByteOrder.BIG_ENDIAN);
3715 packet.position(0);
3716
3717 fSourcePort = ConversionHelper.unsignedShortToInt(packet.getShort());
3718 fDestinationPort = ConversionHelper.unsignedShortToInt(packet.getShort());
3719 fTotalLength = ConversionHelper.unsignedShortToInt(packet.getShort());
3720 fChecksum = ConversionHelper.unsignedShortToInt(packet.getShort());
3721
3722 // Initialize the payload
3723 if (packet.array().length - packet.position() > 0) {
3724 byte[] array = new byte[packet.array().length - packet.position()];
3725 packet.get(array);
3726
3727 ByteBuffer payload = ByteBuffer.wrap(array);
3728 payload.order(ByteOrder.BIG_ENDIAN);
3729 payload.position(0);
3730 fPayload = payload;
3731 } else {
3732 fPayload = null;
3733 }
3734
3735 // Find child
3736 fChildPacket = findChildPacket();
3737
3738 }
3739 </pre>
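The ConversionHelper.unsignedShortToInt() utility used above is not shown in this guide; its presumed behavior is the standard Java mask idiom, sketched below:

<pre>
public class UnsignedShortSketch {

    /**
     * Presumed equivalent of ConversionHelper.unsignedShortToInt():
     * masking keeps the low 16 bits and undoes Java's sign extension.
     */
    static int unsignedShortToInt(short value) {
        return value & 0xFFFF;
    }

    public static void main(String[] args) {
        // (short) 0xFFFF is -1 in Java, but 65535 on the wire
        System.out.println(unsignedShortToInt((short) 0xFFFF)); // 65535
        System.out.println(unsignedShortToInt((short) 80));     // 80
    }
}
</pre>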
3740
3741 Then, we implement the following methods:
3742 * ''public Packet'' '''getChildPacket()''': simple getter of fChildPacket
3743 * ''public ByteBuffer'' '''getPayload()''': simple getter of fPayload
3744 * ''public boolean'' '''validate()''': method that checks if the packet is valid. In our case, the packet is valid if the retrieved checksum fChecksum and the real checksum (that we can compute using the fields and payload of UDPPacket) are the same.
* ''protected Packet'' '''findChildPacket()''': method that creates a new packet if an encapsulated protocol is found. For instance, based on the fDestinationPort, it could determine what the encapsulated protocol is and create a new packet object.
3746 * ''public ProtocolEndpoint'' '''getSourceEndpoint()''': method that initializes and returns the source endpoint.
3747 * ''public ProtocolEndpoint'' '''getDestinationEndpoint()''': method that initializes and returns the destination endpoint.
3748 * ''public Map<String, String>'' '''getFields()''': method that initializes and returns the map containing the fields matched to their value.
* ''public String'' '''getLocalSummaryString()''': method that returns a string summarizing the most important fields of the packet. There is no need to list all the fields, just the most important ones. This will be displayed in the UI.
3750 * ''protected String'' '''getSignificationString()''': method that returns a string describing the meaning of the packet. If there is no particular meaning, it is possible to return getLocalSummaryString().
* ''public boolean'' '''equals(Object obj)''': Object's equals method.
* ''public int'' '''hashCode()''': Object's hashCode method.
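For '''validate()''', the checksum comparison relies on the Internet checksum (RFC 1071): the one's-complement of the one's-complement sum of the 16-bit words. The sketch below shows only this core algorithm; a real UDP implementation must also cover an IP pseudo-header, which is omitted here:

<pre>
public class ChecksumSketch {

    /**
     * Generic Internet (RFC 1071) checksum: one's-complement of the
     * one's-complement sum of 16-bit words. NOTE: a real UDP checksum
     * also covers an IP pseudo-header; this sketch omits it.
     */
    static int checksum(byte[] data) {
        long sum = 0;
        for (int i = 0; i < data.length; i += 2) {
            int high = (data[i] & 0xFF) << 8;
            int low = (i + 1 < data.length) ? (data[i + 1] & 0xFF) : 0;
            sum += high | low;
        }
        // Fold the carries back into the low 16 bits
        while ((sum >> 16) != 0) {
            sum = (sum & 0xFFFF) + (sum >> 16);
        }
        return (int) (~sum & 0xFFFF);
    }

    public static void main(String[] args) {
        // A correct packet's words plus its checksum sum to 0xFFFF,
        // so checksumming them together yields 0.
        byte[] words = { 0x00, 0x35, 0x30, 0x39 };
        int cks = checksum(words);
        byte[] withCks = { 0x00, 0x35, 0x30, 0x39,
                (byte) (cks >> 8), (byte) cks };
        System.out.println(checksum(withCks)); // 0
    }
}
</pre>

The zero-sum property shown in main() is the usual way to validate a received packet: recompute the checksum over the data including the stored checksum field and check that the result is 0.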
3753
3754 We get the following code:
3755 <pre>
3756 @Override
3757 public @Nullable Packet getChildPacket() {
3758 return fChildPacket;
3759 }
3760
3761 @Override
3762 public @Nullable ByteBuffer getPayload() {
3763 return fPayload;
3764 }
3765
3766 /**
3767 * Getter method that returns the UDP Source Port.
3768 *
3769 * @return The source Port.
3770 */
3771 public int getSourcePort() {
3772 return fSourcePort;
3773 }
3774
3775 /**
3776 * Getter method that returns the UDP Destination Port.
3777 *
3778 * @return The destination Port.
3779 */
3780 public int getDestinationPort() {
3781 return fDestinationPort;
3782 }
3783
3784 /**
3785 * {@inheritDoc}
3786 *
3787 * See http://www.iana.org/assignments/service-names-port-numbers/service-
3788 * names-port-numbers.xhtml or
3789 * http://en.wikipedia.org/wiki/List_of_TCP_and_UDP_port_numbers
3790 */
3791 @Override
3792 protected @Nullable Packet findChildPacket() throws BadPacketException {
3793 // When more protocols are implemented, we can simply do a switch on the fDestinationPort field to find the child packet.
3794 // For instance, if the destination port is 80, then chances are the HTTP protocol is encapsulated. We can create a new HTTP
3795 // packet (after some verification that it is indeed the HTTP protocol).
3796 ByteBuffer payload = fPayload;
3797 if (payload == null) {
3798 return null;
3799 }
3800
3801 return new UnknownPacket(getPcapFile(), this, payload);
3802 }
3803
3804 @Override
3805 public boolean validate() {
3806 // Not yet implemented. ATM, we consider that all packets are valid.
3807 // TODO Implement it. We can compute the real checksum and compare it to fChecksum.
3808 return true;
3809 }
3810
3811 @Override
3812 public UDPEndpoint getSourceEndpoint() {
3813 @Nullable
3814 UDPEndpoint endpoint = fSourceEndpoint;
3815 if (endpoint == null) {
3816 endpoint = new UDPEndpoint(this, true);
3817 }
3818 fSourceEndpoint = endpoint;
3819 return fSourceEndpoint;
3820 }
3821
3822 @Override
3823 public UDPEndpoint getDestinationEndpoint() {
3824 @Nullable UDPEndpoint endpoint = fDestinationEndpoint;
3825 if (endpoint == null) {
3826 endpoint = new UDPEndpoint(this, false);
3827 }
3828 fDestinationEndpoint = endpoint;
3829 return fDestinationEndpoint;
3830 }
3831
3832 @Override
3833 public Map<String, String> getFields() {
        ImmutableMap<String, String> map = fFields;
        if (map == null) {
            @SuppressWarnings("null")
            @NonNull ImmutableMap<String, String> newMap = ImmutableMap.<String, String> builder()
                    .put("Source Port", String.valueOf(fSourcePort)) //$NON-NLS-1$
                    .put("Destination Port", String.valueOf(fDestinationPort)) //$NON-NLS-1$
                    .put("Length", String.valueOf(fTotalLength) + " bytes") //$NON-NLS-1$ //$NON-NLS-2$
                    .put("Checksum", String.format("%s%04x", "0x", fChecksum)) //$NON-NLS-1$ //$NON-NLS-2$ //$NON-NLS-3$
                    .build();
            fFields = newMap;
            return newMap;
        }
        return map;
    }

    @Override
    public String getLocalSummaryString() {
        return "Src Port: " + fSourcePort + ", Dst Port: " + fDestinationPort; //$NON-NLS-1$ //$NON-NLS-2$
    }

    @Override
    protected String getSignificationString() {
        return "Source Port: " + fSourcePort + ", Destination Port: " + fDestinationPort; //$NON-NLS-1$ //$NON-NLS-2$
    }

    @Override
    public int hashCode() {
        final int prime = 31;
        int result = 1;
        result = prime * result + fChecksum;
        final Packet child = fChildPacket;
        if (child != null) {
            result = prime * result + child.hashCode();
        } else {
            result = prime * result;
        }
        result = prime * result + fDestinationPort;
        final ByteBuffer payload = fPayload;
        if (payload != null) {
            result = prime * result + payload.hashCode();
        } else {
            result = prime * result;
        }
        result = prime * result + fSourcePort;
        result = prime * result + fTotalLength;
        return result;
    }

    @Override
    public boolean equals(@Nullable Object obj) {
        if (this == obj) {
            return true;
        }
        if (obj == null) {
            return false;
        }
        if (getClass() != obj.getClass()) {
            return false;
        }
        UDPPacket other = (UDPPacket) obj;
        if (fChecksum != other.fChecksum) {
            return false;
        }
        final Packet child = fChildPacket;
        if (child != null) {
            if (!child.equals(other.fChildPacket)) {
                return false;
            }
        } else {
            if (other.fChildPacket != null) {
                return false;
            }
        }
        if (fDestinationPort != other.fDestinationPort) {
            return false;
        }
        final ByteBuffer payload = fPayload;
        if (payload != null) {
            if (!payload.equals(other.fPayload)) {
                return false;
            }
        } else {
            if (other.fPayload != null) {
                return false;
            }
        }
        if (fSourcePort != other.fSourcePort) {
            return false;
        }
        if (fTotalLength != other.fTotalLength) {
            return false;
        }
        return true;
    }
</pre>

The UDPPacket class is now implemented. We now have to define the UDPEndpoint.

=== Creating the UDPEndpoint ===

For the UDP protocol, an endpoint is its source or destination port, depending on whether it is the source endpoint or the destination endpoint. Knowing that, we can create our UDPEndpoint class.

We create in our package a new class named UDPEndpoint that extends ProtocolEndpoint. We also add a field: fPort, which contains the source or destination port. We finally add a constructor public UDPEndpoint(Packet packet, boolean isSourceEndpoint):
* ''Packet'' '''packet''': the packet to build the endpoint from.
* ''boolean'' '''isSourceEndpoint''': whether the endpoint is the source endpoint or the destination endpoint.

We obtain the following unimplemented class:

<pre>
package org.eclipse.tracecompass.pcap.core.protocol.udp;

import org.eclipse.tracecompass.internal.pcap.core.endpoint.ProtocolEndpoint;
import org.eclipse.tracecompass.internal.pcap.core.packet.Packet;

public class UDPEndpoint extends ProtocolEndpoint {

    private final int fPort;

    public UDPEndpoint(Packet packet, boolean isSourceEndpoint) {
        super(packet, isSourceEndpoint);
        // TODO Auto-generated constructor stub
    }

    @Override
    public int hashCode() {
        // TODO Auto-generated method stub
        return 0;
    }

    @Override
    public boolean equals(Object obj) {
        // TODO Auto-generated method stub
        return false;
    }

    @Override
    public String toString() {
        // TODO Auto-generated method stub
        return null;
    }

}
</pre>

For the constructor, we simply initialize fPort. If isSourceEndpoint is true, then we take packet.getSourcePort(), else we take packet.getDestinationPort().

<pre>
    /**
     * Constructor of the {@link UDPEndpoint} class. It takes a packet to get
     * its endpoint. Since every packet has two endpoints (source and
     * destination), the isSourceEndpoint parameter is used to specify which
     * endpoint to take.
     *
     * @param packet
     *            The packet that contains the endpoints.
     * @param isSourceEndpoint
     *            Whether to take the source or the destination endpoint of the
     *            packet.
     */
    public UDPEndpoint(UDPPacket packet, boolean isSourceEndpoint) {
        super(packet, isSourceEndpoint);
        fPort = isSourceEndpoint ? packet.getSourcePort() : packet.getDestinationPort();
    }
</pre>

Then we implement the methods:
* ''public int'' '''hashCode()''': method that returns an integer based on the field values. In our case, it will return an integer depending on fPort and on the parent endpoint, which we can retrieve with getParentEndpoint().
* ''public boolean'' '''equals(Object obj)''': method that returns true if two objects are equal. In our case, two UDPEndpoints are equal if they both have the same fPort and the same parent endpoint, which we can retrieve with getParentEndpoint().
* ''public String'' '''toString()''': method that returns a description of the UDPEndpoint as a string. In our case, it will be a concatenation of the string of the parent endpoint and fPort as a string.

<pre>
    @Override
    public int hashCode() {
        final int prime = 31;
        int result = 1;
        ProtocolEndpoint endpoint = getParentEndpoint();
        if (endpoint == null) {
            result = 0;
        } else {
            result = endpoint.hashCode();
        }
        result = prime * result + fPort;
        return result;
    }

    @Override
    public boolean equals(@Nullable Object obj) {
        if (this == obj) {
            return true;
        }
        if (!(obj instanceof UDPEndpoint)) {
            return false;
        }

        UDPEndpoint other = (UDPEndpoint) obj;

        // Check on layer
        boolean localEquals = (fPort == other.fPort);
        if (!localEquals) {
            return false;
        }

        // Check above layers.
        ProtocolEndpoint endpoint = getParentEndpoint();
        if (endpoint != null) {
            return endpoint.equals(other.getParentEndpoint());
        }
        return true;
    }

    @Override
    public String toString() {
        ProtocolEndpoint endpoint = getParentEndpoint();
        if (endpoint == null) {
            @SuppressWarnings("null")
            @NonNull String ret = String.valueOf(fPort);
            return ret;
        }
        return endpoint.toString() + '/' + fPort;
    }
</pre>

=== Registering the UDP protocol ===

The last step is to register the new protocol. There are three places where the protocol has to be registered. First, the parser has to know that a new protocol has been added. This is defined in the enum org.eclipse.tracecompass.internal.pcap.core.protocol.PcapProtocol. Simply add the protocol name here, along with a few arguments:
* ''String'' '''longName''', which is the long version of the protocol's name. In our case, it is "User Datagram Protocol".
* ''String'' '''shortName''', which is the shortened name of the protocol. In our case, it is "udp".
* ''Layer'' '''layer''', which is the layer to which the protocol belongs in the OSI model. In our case, this is Layer 4.
* ''boolean'' '''supportsStream''', which defines whether or not the protocol supports packet streams. In our case, this is set to true.

Thus, the following line is added in the PcapProtocol enum:
<pre>
UDP("User Datagram Protocol", "udp", Layer.LAYER_4, true),
</pre>

Also, TMF has to know about the new protocol. This is defined in org.eclipse.tracecompass.internal.tmf.pcap.core.protocol.TmfPcapProtocol. We simply add it, with a reference to the corresponding protocol in PcapProtocol. Thus, the following line is added in the TmfPcapProtocol enum:
<pre>
UDP(PcapProtocol.UDP),
</pre>

You will also have to update the ''ProtocolConversion'' class to register the protocol in the switch statements. Thus, for UDP, we add:
<pre>
case UDP:
    return TmfPcapProtocol.UDP;
</pre>
and
<pre>
case UDP:
    return PcapProtocol.UDP;
</pre>

Finally, all the protocols that could be the parent of the new protocol (in our case, IPv4 and IPv6) have to be notified of the new protocol. This is done by modifying the findChildPacket() method of the packet class of those protocols. For instance, in IPv4Packet, we add a case to the switch statement of findChildPacket() that matches UDP's IP protocol number (17):
<pre>
    @Override
    protected @Nullable Packet findChildPacket() throws BadPacketException {
        ByteBuffer payload = fPayload;
        if (payload == null) {
            return null;
        }

        switch (fIpDatagramProtocol) {
        case IPProtocolNumberHelper.PROTOCOL_NUMBER_TCP:
            return new TCPPacket(getPcapFile(), this, payload);
        case IPProtocolNumberHelper.PROTOCOL_NUMBER_UDP:
            return new UDPPacket(getPcapFile(), this, payload);
        default:
            return new UnknownPacket(getPcapFile(), this, payload);
        }
    }
</pre>

The new protocol has been added. Running TMF should work just fine, and the new protocol is now recognized.

== Adding stream-based views ==

To add a stream-based view, simply monitor the TmfPacketStreamSelectedSignal in your view. It contains the new stream, which you can retrieve with signal.getStream(). You must then make an event request to the current trace to get the events, and use the stream to filter the events of interest. Therefore, you must also monitor TmfTraceOpenedSignal, TmfTraceClosedSignal and TmfTraceSelectedSignal. Examples of stream-based views include a view that represents the packets as a sequence diagram, or one that shows the TCP connection state based on the packets' SYN/ACK/FIN/RST flags. A very early draft of such a view can be found at https://git.eclipse.org/r/#/c/31054/.
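The pattern described above can be sketched in plain Java. This is a minimal, self-contained illustration of the signal-handling flow only: the PacketStream and PacketStreamSelectedSignal types below are hypothetical stand-ins for the real TMF classes (in a real view, the handler method would be annotated with @TmfSignalHandler and the types would come from the TMF plug-ins), so the sketch compiles and runs on its own but is not a working Eclipse view.

```java
import java.util.ArrayList;
import java.util.List;

public class StreamViewSketch {

    /** Hypothetical stand-in for a TMF packet stream. */
    static class PacketStream {
        final String fId;
        PacketStream(String id) {
            fId = id;
        }
    }

    /** Hypothetical stand-in for TmfPacketStreamSelectedSignal. */
    static class PacketStreamSelectedSignal {
        private final PacketStream fStream;
        PacketStreamSelectedSignal(PacketStream stream) {
            fStream = stream;
        }
        PacketStream getStream() {
            return fStream;
        }
    }

    /** The view remembers the selected stream and refilters its events. */
    static class StreamView {
        private PacketStream fCurrentStream;
        final List<String> fLog = new ArrayList<>();

        // In TMF, this method would carry the @TmfSignalHandler annotation.
        void streamSelected(PacketStreamSelectedSignal signal) {
            fCurrentStream = signal.getStream();
            fLog.add("new stream: " + fCurrentStream.fId);
            // A real view would now send an event request to the active trace
            // and keep only the events that belong to fCurrentStream.
        }
    }

    public static void main(String[] args) {
        StreamView view = new StreamView();
        view.streamSelected(new PacketStreamSelectedSignal(new PacketStream("udp/1234-5678")));
        System.out.println(view.fLog.get(0)); // prints "new stream: udp/1234-5678"
    }
}
```

A real implementation would also clear fCurrentStream when it receives TmfTraceClosedSignal, and re-issue its event request on TmfTraceSelectedSignal, as noted in the paragraph above.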

== TODO ==

* Add more protocols. At the moment, only four protocols are supported. The following protocols would need to be implemented: ARP, SLL, WLAN, USB, IPv6, ICMP, ICMPv6, IGMP, IGMPv6, SCTP, DNS, FTP, HTTP, RTP, SIP, SSH and Telnet. Other VoIP protocols would be nice.
* Add a network graph view. It would be useful to produce graphs that are meaningful to network engineers, and that they could use (for presentation purposes, for instance). We could use the XML-based analysis to do that!
* Add a Stream Diagram view. This view would represent a stream as a sequence diagram. It would be updated when a TmfNewPacketStreamSignal is thrown. It would make it easy to see the packet exchange and the time delta between each packet. Also, when a packet is selected in the Stream Diagram, it should be selected in the event table and its content should be shown in the Properties view. See https://git.eclipse.org/r/#/c/31054/ for a draft of such a view.
* Make adding protocols more "plugin-ish", via extension points for instance. This would make it easier to support new protocols, without modifying the source code.
* Control dumpcap directly from Eclipse, similar to how LTTng is controlled in the Control View.
* Support pcapng. See http://www.winpcap.org/ntar/draft/PCAP-DumpFileFormat.html for the file format.
* Add SWTBot tests to org.eclipse.tracecompass.tmf.pcap.ui.
* Add a Raw Viewer, similar to Wireshark's. We could use the "Show Raw" option in the event editor to do that.
* Externalize strings in org.eclipse.tracecompass.pcap.core. At the moment, all the strings are hardcoded. It would be good to externalize them all.