In lttng_userspace_probe_location_function_serialize: a pointer was
compared against NULL using a relational operator such as < or >=.
`binary_fd` is now an fd_handle instance rather than a "raw" fd. All
instances of `binary_fd` are renamed to `binary_fd_handle` to prevent
such errors in the future.
Fix: evaluation: dereference before NULL check in create_from_payload
An evaluation payload view is created from the view passed to
lttng_evaluation_create_from_payload. Since a view contains a const
copy of the _fds array, it must be initialized at the declaration site.
However, src_view is checked for NULL after that initialization.
Coverity rightfully warns that the pointer is dereferenced before the
NULL check.
Fix: extraneous empty/inactive flush on rotation out of a trace chunk
Observed issue
==============
A test (tests/regression/tools/tracefile-limits/test_tracefile_count)
occasionally fails on ppc64. The trace validation step fails in the
case where the trace file count limit is set to 1.
Examining the resulting trace shows that the last packet of data
produced by the test application appears to be missing.
The test case enables a channel in "overwrite" mode. Normally, this
would guarantee that the last data produced will always be available in
the resulting trace.
Cause
=====
An empty/inactive flush is performed when rotating "out" of a non-null
trace chunk to ensure that the trace chunk contained at least one
packet.
Looking at the test's resulting trace and by following the consumerd
logs, we see that the test application runs on one CPU for most of its
lifetime. The stream file is repeatedly replaced to make room for the
latest data.
Eventually, the application is migrated to another CPU. A number of
packets are written to this new stream. The session is then stopped
which causes an active flush to occur to close the current packet of
all streams (see `ust_app_stop_trace_all`).
Then, when the session is destroyed, an empty/inactive flush is
performed to ensure that at least one packet was produced in the current
trace chunk [1].
At the moment of writing this empty packet, the consumer daemon sees
that there is not enough space left in the stream file to honour the
trace file size restriction. It thus overwrites the file resulting in
the loss of the last events to replace them with the empty "end of
chunk" packet that occupies a single page.
While the problem is not specific to PowerPC 64, it is much more
likely to occur there as pages are typically configured to be 64 kB
long. Due to current implementation limitations, empty packets have
a size of one page.
In other words, with 4 kB pages the empty packet typically fits in the
space left in the file, causing the problem to not be easily
reproducible on x64.
Note that while the file size limit is specified as "3 * PAGE_SIZE" in
the test, it is rounded up to 512 kB to accommodate at least one
sub-buffer.
Solution
========
[1] This empty/inactive flush is no longer necessary since f96af312b
as an "open packet" (which performs an empty/inactive flush) is
performed when a stream enters a non-null trace chunk. There is no
concern that a trace chunk will be left empty unless this initial flush
fails (see patch comments and work-around).
The empty flush that was performed for data streams is converted
into an active flush under most circumstances; the packet is simply
closed.
Fix: relayd: double unlock on viewer stream creation error
viewer_stream_create must be called with the relay stream's
lock held since 9edaf114d. A pthread_mutex_unlock call that should
have been removed from the error path of viewer_stream_create resulted
in a double unlock in some error scenarios.
Fix: relayd: live connection fails to open file during clear
Issue observed
==============
A `session clear` test occasionally fails on the CI (very rarely, and
more often on PowerPC executors for some reason) with babeltrace
reporting that the live connection was closed by the remote side:
PASS: tools/clear/test_ust 276 - Waiting for live viewers on url: net://localhost
07-21 16:17:07.058 23855 23855 E PLUGIN/SRC.CTF.LTTNG-LIVE/VIEWER lttng_live_recv@viewer-connection.c:198 [lttng-live] Remote side has closed connection
07-21 16:17:07.058 23855 23855 E PLUGIN/SRC.CTF.LTTNG-LIVE/VIEWER lttng_live_session_get_new_streams@viewer-connection.c:1701 [lttng-live] Error receiving get new streams reply
07-21 16:17:07.058 23855 23855 E PLUGIN/SRC.CTF.LTTNG-LIVE lttng_live_msg_iter_next@lttng-live.c:1665 [lttng-live] Error preparing the next batch of messages: live-iter-status=LTTNG_LIVE_ITERATOR_STATUS_ERROR
07-21 16:17:07.058 23855 23855 W LIB/MSG-ITER bt_message_iterator_next@iterator.c:864 Component input port message iterator's "next" method failed: iter-addr=0x1014d6a8, iter-upstream-comp-name="lttng-live", iter-upstream-comp-log-level=WARNING, iter-upstream-comp-class-type=SOURCE, iter-upstream-comp-class-name="lttng-live", iter-upstream-comp-class-partial-descr="Connect to an LTTng relay daemon", iter-upstream-port-type=OUTPUT, iter-upstream-port-name="out", status=ERROR
07-21 16:17:07.059 23855 23855 E PLUGIN/FLT.UTILS.MUXER muxer_upstream_msg_iter_next@muxer.c:454 [muxer] Upstream iterator's next method returned an error: status=ERROR
07-21 16:17:07.059 23855 23855 E PLUGIN/FLT.UTILS.MUXER validate_muxer_upstream_msg_iters@muxer.c:991 [muxer] Cannot validate muxer's upstream message iterator wrapper: muxer-msg-iter-addr=0x1014d668, muxer-upstream-msg-iter-wrap-addr=0x1014e210
07-21 16:17:07.059 23855 23855 E PLUGIN/FLT.UTILS.MUXER muxer_msg_iter_next@muxer.c:1415 [muxer] Cannot get next message: comp-addr=0x1014cca8, muxer-comp-addr=0x1015afd8, muxer-msg-iter-addr=0x1014d668, msg-iter-addr=0x1014d598, status=ERROR
07-21 16:17:07.059 23855 23855 W LIB/MSG-ITER bt_message_iterator_next@iterator.c:864 Component input port message iterator's "next" method failed: iter-addr=0x1014d598, iter-upstream-comp-name="muxer", iter-upstream-comp-log-level=WARNING, iter-upstream-comp-class-type=FILTER, iter-upstream-comp-class-name="muxer", iter-upstream-comp-class-partial-descr="Sort messages from multiple inpu", iter-upstream-port-type=OUTPUT, iter-upstream-port-name="out", status=ERROR
07-21 16:17:07.059 23855 23855 W LIB/GRAPH consume_graph_sink@graph.c:473 Component's "consume" method failed: status=ERROR, comp-addr=0x1014d128, comp-name="pretty", comp-log-level=WARNING, comp-class-type=SINK, comp-class-name="pretty", comp-class-partial-descr="Pretty-print messages (`text` fo", comp-class-is-frozen=1, comp-class-so-handle-addr=0x10159dd8, comp-class-so-handle-path="/home/jenkins/workspace/lttng-tools_stable-2.12_portbuild/arch/powerpc/babeltrace_version/stable-2.0/build/std/conf/agents/liburcu_version/stable-0.12/test_type/base/deps/build/lib/babeltrace2/plugins/babeltrace-plugin-text.so", comp-input-port-count=1, comp-output-port-count=0
07-21 16:17:07.059 23855 23855 E CLI cmd_run@babeltrace2.c:2548 Graph failed to complete successfully
ERROR: [Babeltrace CLI] (babeltrace2.c:2548)
Graph failed to complete successfully
CAUSED BY [libbabeltrace2] (graph.c:473)
Component's "consume" method failed: status=ERROR, comp-addr=0x1014d128,
comp-name="pretty", comp-log-level=WARNING, comp-class-type=SINK,
comp-class-name="pretty", comp-class-partial-descr="Pretty-print messages
(`text` fo", comp-class-is-frozen=1, comp-class-so-handle-addr=0x10159dd8,
comp-class-so-handle-path="/home/jenkins/workspace/lttng-tools_stable-2.12_portbuild/arch/powerpc/babeltrace_version/stable-2.0/build/std/conf/agents/liburcu_version/stable-0.12/test_type/base/deps/build/lib/babeltrace2/plugins/babeltrace-plugin-text.so",
comp-input-port-count=1, comp-output-port-count=0
CAUSED BY [libbabeltrace2] (iterator.c:864)
Component input port message iterator's "next" method failed:
iter-addr=0x1014d598, iter-upstream-comp-name="muxer",
iter-upstream-comp-log-level=WARNING, iter-upstream-comp-class-type=FILTER,
iter-upstream-comp-class-name="muxer",
iter-upstream-comp-class-partial-descr="Sort messages from multiple inpu",
iter-upstream-port-type=OUTPUT, iter-upstream-port-name="out", status=ERROR
CAUSED BY [muxer: 'filter.utils.muxer'] (muxer.c:991)
Cannot validate muxer's upstream message iterator wrapper:
muxer-msg-iter-addr=0x1014d668, muxer-upstream-msg-iter-wrap-addr=0x1014e210
CAUSED BY [muxer: 'filter.utils.muxer'] (muxer.c:454)
Upstream iterator's next method returned an error: status=ERROR
CAUSED BY [libbabeltrace2] (iterator.c:864)
Component input port message iterator's "next" method failed:
iter-addr=0x1014d6a8, iter-upstream-comp-name="lttng-live",
iter-upstream-comp-log-level=WARNING, iter-upstream-comp-class-type=SOURCE,
iter-upstream-comp-class-name="lttng-live",
iter-upstream-comp-class-partial-descr="Connect to an LTTng relay daemon",
iter-upstream-port-type=OUTPUT, iter-upstream-port-name="out", status=ERROR
CAUSED BY [lttng-live: 'source.ctf.lttng-live'] (lttng-live.c:1665)
Error preparing the next batch of messages:
live-iter-status=LTTNG_LIVE_ITERATOR_STATUS_ERROR
CAUSED BY [lttng-live: 'source.ctf.lttng-live'] (viewer-connection.c:1701)
Error receiving get new streams reply
CAUSED BY [lttng-live: 'source.ctf.lttng-live'] (viewer-connection.c:198)
Remote side has closed connection
ok 277 - Clear session J7WXjh7fmMleTCfE
PASS: tools/clear/test_ust 277 - Clear session J7WXjh7fmMleTCfE
ok 278 - Clear session J7WXjh7fmMleTCfE
PASS: tools/clear/test_ust 278 - Clear session J7WXjh7fmMleTCfE
ok 279 - Stop lttng tracing for session J7WXjh7fmMleTCfE
PASS: tools/clear/test_ust 279 - Stop lttng tracing for session J7WXjh7fmMleTCfE
ok 280 - Destroy session J7WXjh7fmMleTCfE
PASS: tools/clear/test_ust 280 - Destroy session J7WXjh7fmMleTCfE
# Wait for viewer to exit
not ok 281 - Babeltrace succeeds
Cause
=====
Looking at the relay daemon logs, it appears that the live client
requests an enumeration of the available streams while a rotation is
ongoing (clear).
Ultimately, this results in the relay daemon attempting to open a
non-existing file:
PERROR - 16:33:59.242388809 [734380/734387]: Failed to open fs handle to ust/uid/1000/64-bit/chan_0, open() returned: No such file or directory (in fd_tracker_open_fs_handle() at fd-tracker.c:550)
The logs indicate that this file existed at some point. However, it
was unlinked and its newest instance was created in a trace chunk
named "20200727T163359-0400-1".
This chunk name is a temporary name used until the original trace
chunk can be unlinked (cleared) and the newest can be moved in its
place.
The file is being opened as part of the creation of a viewer stream;
this is where make_viewer_stream() fails to find it. This implies that,
somehow, an outdated trace chunk is being used to open the viewer
stream's file.
The reason is that make_viewer_stream() is called with the viewer
session's current trace chunk. During a rotation, the use of the
viewer session's current trace chunk is delicate due to the way the
switch-over to a new chunk is handled.
How viewer session/stream trace chunks are rotated
--------------------------------------------------
The viewer polls the relay daemon for new data/metadata to consume using
the `GET_NEXT_INDEX` and `GET_METADATA` commands. Both commands will
indicate to the viewer that it should try again later if a rotation is
ongoing on the relay session's side.
When a rotation is not ongoing, the relay compares the `id` of the
target viewer stream's trace chunk with the relay session's current
trace chunk. If those `id`s don't match, the viewer session's current
trace chunk is then updated to a copy of the relay session's trace
chunk. The viewer stream's files are then closed and re-opened in the
context of the viewer session's now-updated trace chunk.
While the live protocol allows `GET_NEXT_INDEX` and `GET_METADATA` to
communicate to the viewer that it should "retry" later, no such
provision is made for the paths that lead to the creation of the viewer
streams. This means viewer streams must be created even if a rotation is
ongoing.
Solution
========
If a rotation is ongoing at the moment of the creation of the viewer
streams, we wish to use a copy of the trace chunk being used by the
relay stream. This way, we can be sure that the streams' files exist.
It is okay for viewer streams to hold references to different copies of
the trace chunks since no user-visible actions are performed when the
reference to those chunks is released. This is different from relay
streams where the detection of the completion of a rotation is done when
the last relay stream releases its reference to a specific trace chunk
instance.
Known drawbacks
===============
None beyond a slight increase (temporary until the next rotation) in the
number of FDs used when a client connects during a session rotation.
Note
====
Since make_viewer_streams now acts on relay and viewer counterparts of
the session and stream objects, the various variables are prefixed with
`relay` and `viewer` to make the code easier to understand.
The locking period of the relay stream is extended to most of the
iteration in make_viewer_stream() rather than only in
viewer_stream_create(). As a result, callers of viewer_stream_create()
must hold the relay stream's lock.
Jonathan Rajotte [Tue, 28 Jul 2020 14:26:02 +0000 (10:26 -0400)]
Fix: sessiond: unchecked return value
From Coverity:
CID 1431048 (#1 of 1): Unchecked return value (CHECKED_RETURN)
1. check_return: Calling lttng_dynamic_buffer_set_size without
checking return value (as is done elsewhere 32 out of 40 times).
Signed-off-by: Jonathan Rajotte <jonathan.rajotte-julien@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: Id15632e3932052cf7dce5d57a08ac6efc84fc92f
Jonathan Rajotte [Tue, 28 Jul 2020 14:22:28 +0000 (10:22 -0400)]
Fix: common: unchecked return value
From Coverity:
CID 1431050 (#1 of 1): Unchecked return value (CHECKED_RETURN)
1. check_return: Calling lttng_dynamic_buffer_set_size without
checking return value (as is done elsewhere 32 out of 40 times).
Solution
========
Since we are inside the clear operation for the payload, ignore the
return value.
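A minimal sketch of this kind of fix, assuming the payload's buffer
field name; the explicit cast documents the deliberate choice:
```
/* Sketch only: on the payload's clear path, a failure to shrink the
 * buffer to a size of 0 has no consequence, so the return value is
 * deliberately ignored. */
(void) lttng_dynamic_buffer_set_size(&payload->buffer, 0);
```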
Signed-off-by: Jonathan Rajotte <jonathan.rajotte-julien@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: I45c1054de7252aab4b77f330102ff4f489f20a6d
Jonathan Rajotte [Tue, 28 Jul 2020 14:19:30 +0000 (10:19 -0400)]
Fix: common: improper use of negative return
From Coverity:
CID 1431053 (#1 of 2): Improper use of negative value (NEGATIVE_RETURNS)
5. negative_returns: fd_count is passed to a parameter that cannot be
negative.
Solution
========
Check return value for fd_count and goto error if negative.
Signed-off-by: Jonathan Rajotte <jonathan.rajotte-julien@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: Ibdb2f065f0ebe51efae9125630a17730386395ac
Jonathan Rajotte [Tue, 28 Jul 2020 14:15:04 +0000 (10:15 -0400)]
Fix: sessiond: unchecked return value
From coverity:
CID 1431054 (#1 of 1): Unchecked return value (CHECKED_RETURN)
1. check_return: Calling lttng_dynamic_buffer_set_size without
checking return value (as is done elsewhere 32 out of 40 times).
Signed-off-by: Jonathan Rajotte <jonathan.rajotte-julien@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: I6b5b18d90f194c8081b91b3fbb80dccd29ba19ca
payload: use fd_handle instead of raw file descriptors
Using raw file descriptors in lttng_payloads introduces scary file
descriptor corner-cases when mixing lttng_payloads with asynchronous
communication.
Since an lttng_payload doesn't own its file descriptors, it is easy to
fall into a situation where a file descriptor is referenced by an
lttng_payload while its owner is destroyed.
For instance, a userspace probe could be destroyed while its description
is waiting to be sent to a client.
The various use sites of the payload/payload_view APIs are adjusted.
Utilities to send/recv fds through unix sockets using the payload and
payload view interfaces are added as part of this commit as they use
the payload's fd_handles.
An fd_handle allows the reference counting of file descriptors which may
be shared by multiple objects. There is no synchronization imposed (or
provided) for the use of the underlying file descriptors as this utility
is meant to be used on file descriptors where this would not make
sense (eventfd, dir fd, etc.).
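A rough sketch of the interface this implies; the exact names and
signatures are assumptions based on this description:
```
/* Reference-counted wrapper around a file descriptor. */
struct fd_handle;

/* Create a handle that takes ownership of `fd`; refcount starts at 1. */
struct fd_handle *fd_handle_create(int fd);

/* Acquire and release references; the underlying file descriptor is
 * closed when the last reference is released. */
void fd_handle_get(struct fd_handle *handle);
void fd_handle_put(struct fd_handle *handle);

/* Access the wrapped fd; its use is not synchronized in any way. */
int fd_handle_get_fd(struct fd_handle *handle);
```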
uprobe: transmit binary file descriptor through lttng_payload
Adapt the userspace probe objects to use the lttng_payload interface.
This streamlines the acquisition of the file descriptors when those
objects are serialized.
File descriptors are transmitted in both directions between liblttng-ctl
and the session daemon making it possible (and safe) to compare
userspace probe instances.
Currently the event listing API does not allow us to express userspace
probe locations that contain a file descriptor. This is an unfortunate
consequence of returning a "flat" array to list events.
Indeed, we can't store a file descriptor in the userspace probe
locations returned to the user in this API since the destructors of the
probe locations are never called. The user simply free()'s the returned
array, which would leak the file descriptors.
The consequence of this is that we can't allow the creation of event
rules using a probe location returned by an lttng_list_events() call.
This is not unsolvable, but I'm not sure if there really is a use-case
for this.
Fix: payload view: payload view always refers to parent's position
A payload view's fd iterator must point to the root view's fd iterator
and not necessarily its parent's. Pointing to the parent's iterator
would cause the iterator to be reset when views were nested more than
two levels deep.
common: add lttng_payload_view fd count accessor and buffer init
Allow the initialization of a payload view from a subset of a dynamic
buffer (echoing the lttng_buffer_view API) and add an accessor for the
fd count property.
The payload utils are moved from libsessiond-comm to libcommon since
they would otherwise present a circular dependency: a payload uses the
dynamic buffer utilities while the actions, triggers, and conditions
make use of the lttng_payload utilities.
sessiond: prepare client replies through an lttng_payload
Modify the command context structure to contain an lttng_payload. This
allows commands to return a payload which contains a file descriptor
without accessing the socket directly.
An interesting side-benefit is that, in practice, this eliminates all
dynamic allocations from the client communications beyond the first
command served. The command context is re-used and the reply buffer is
allocated once and not released (only its size is reset to 0).
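A hedged sketch of the reuse pattern this describes; the command
context field names are assumptions:
```
/* Between commands, reset the reply buffer's size to 0. The underlying
 * allocation is retained, so the next command reuses it without
 * allocating. */
ret = lttng_dynamic_buffer_set_size(&cmd_ctx->reply_payload.buffer, 0);
if (ret) {
	goto error;
}
```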
Fix: sessiond: don't negate error code on list error
Listing errors are already negative. Negating them in the error path
causes error codes to be interpreted as a number of events and
causes a communication error further on.
Jonathan Rajotte [Fri, 10 Jul 2020 12:46:04 +0000 (08:46 -0400)]
unix: add non block send and receive flavors for fd passing
These will be used by the notification subsystem.
It is important to note that, based on our current knowledge, the
sending/receiving of fds is an all-or-nothing scenario. The fds are
actually part of the control message instead of the `payload`. On the
receiving side, a reception of N fds will only yield a "read" count of
1 byte.
Although we don't have to account for partial sends/receives, we have
to manage the EAGAIN/EWOULDBLOCK scenario of a non-blocking socket.
The caller of these functions must handle the following cases:
ret < 0  -> error
ret == 0 -> nothing received/sent
ret > 0  -> success
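A sketch of the caller-side handling this implies; the function and
variable names are assumptions:
```
ret = lttcomm_send_fds_unix_sock_non_block(sock, fds, nb_fd);
if (ret < 0) {
	/* Fatal socket error. */
	goto error;
} else if (ret == 0) {
	/*
	 * EAGAIN/EWOULDBLOCK: nothing was sent. Keep the fds queued and
	 * retry when the socket becomes writable again (e.g. on the next
	 * poll() wake-up).
	 */
} else {
	/* Success: all fds were sent in a single control message. */
}
```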
Signed-off-by: Jonathan Rajotte <jonathan.rajotte-julien@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: Iabf8de876991c9f1c5b46ea609f9af961c4a7ab9
Michael Jeanson [Mon, 13 Jul 2020 19:41:01 +0000 (15:41 -0400)]
tests: return the proper TAP exit code
The C TAP library provides the 'exit_status()' function that will return
the proper exit code according to the number of tests that succeeded or
failed.
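For illustration, a minimal test using this pattern could look like the
following sketch (assuming the usual C TAP library interface):
```
#include <tap.h>

int main(void)
{
	plan_tests(1);
	ok(1 + 1 == 2, "arithmetic works");
	/*
	 * exit_status() returns 0 only if the planned number of tests ran
	 * and all of them passed; otherwise a non-zero code is returned.
	 */
	return exit_status();
}
```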
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: I0de2349609eb34b1c5e58f09012c1db0126923c0
Tests: live/test_{lttng_,}kernel: use lttng_test_filter_event instead of sched_switch
Background
==========
These tests currently rely on system load (the `sched_switch` event) to
generate trace data.
Issue
=====
This can be problematic for the `test_kernel`
test case because it has a fixed-size buffer to store the trace:
#define mmap_size 524288
This caused this test failure to randomly happen on my machine:
ok 7 - Get one index per stream
# mmap_size not big enough
not ok 8 - Get one data packet for stream 0, offset 0, len 4096
# Failed test (live_test.c:main() at line 709)
[error] Error detaching viewer session
not ok 9 - Detach viewer session
# Failed test (live_test.c:main() at line 715)
Solution
========
Use the `lttng_test_filter_event` event to control the size and
number of the events expected in the trace rather than depending on how
many Electron apps are currently fighting for my CPUs.
Signed-off-by: Francis Deslauriers <francis.deslauriers@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: I15d500d5becf9c5e526ae11ff0b2a2f4f6b753ac
Fix: consumer: Move sanity check within `consumer_subbuffer` functions
The sanity check on the number of bytes written by the
`consumer_subbuffer` callback was not correct for channels configured
with the splice output type in a live session.
To simplify this, move the checks into the callbacks themselves so they
can be specialized.
Signed-off-by: Francis Deslauriers <francis.deslauriers@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: I4e47a305860684c461ba7ffffd5e3bb3a21990b0
Jonathan Rajotte [Tue, 21 Jul 2020 15:00:40 +0000 (11:00 -0400)]
Fix: use sys/types.h for ssize_t on Cygwin
Observed issue
==============
On cygwin worker:
In file included from snapshot.c:10:
../../src/common/snapshot.h:33:1: error: unknown type name `ssize_t`; did you mean `_ssize_t`?
33 | ssize_t lttng_snapshot_output_create_from_buffer(
| ^~~~~~~
| _ssize_t
snapshot.c:128:9: error: conflicting types for `lttng_snapshot_output_create_from_buffer`
128 | ssize_t lttng_snapshot_output_create_from_buffer(
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from snapshot.c:10:
../../src/common/snapshot.h:33:9: note: previous declaration of `lttng_snapshot_output_create_from_buffer` was here
33 | ssize_t lttng_snapshot_output_create_from_buffer(
|
Solution
========
Include sys/types.h.
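In other words, the header presumably gains something along these
lines (a sketch, not the exact diff):
```
/* In src/common/snapshot.h: sys/types.h defines ssize_t portably. */
#include <sys/types.h>

ssize_t lttng_snapshot_output_create_from_buffer(/* unchanged */);
```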
Signed-off-by: Jonathan Rajotte <jonathan.rajotte-julien@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: I1df58ffb6df02d6957e1e4eac6ebfb297f3e3bb0
Fix: consumerd: packet sent before channel rotation
Issue observed
==============
A clear test occasionally fails with the following output:
# Test ust streaming rotate-clear
# Parameters: tracing_active=0, clear_twice=1, rotate_before=0, rotate_after=0, buffer_type=uid
ok 605 - Create session S0BXcJKWrmAwNSzm with uri:net://localhost and opts:
PASS: tools/clear/test_ust 605 - Create session S0BXcJKWrmAwNSzm with uri:net://localhost and opts:
ok 606 - Enable channel chan for session S0BXcJKWrmAwNSzm
PASS: tools/clear/test_ust 606 - Enable channel chan for session S0BXcJKWrmAwNSzm
ok 607 - Enable ust event tp:tptest for session S0BXcJKWrmAwNSzm
PASS: tools/clear/test_ust 607 - Enable ust event tp:tptest for session S0BXcJKWrmAwNSzm
ok 608 - Start tracing for session S0BXcJKWrmAwNSzm
PASS: tools/clear/test_ust 608 - Start tracing for session S0BXcJKWrmAwNSzm
ok 609 - Rotate session S0BXcJKWrmAwNSzm
PASS: tools/clear/test_ust 609 - Rotate session S0BXcJKWrmAwNSzm
ok 610 - Stop lttng tracing for session S0BXcJKWrmAwNSzm
PASS: tools/clear/test_ust 610 - Stop lttng tracing for session S0BXcJKWrmAwNSzm
ok 611 - Clear session S0BXcJKWrmAwNSzm
PASS: tools/clear/test_ust 611 - Clear session S0BXcJKWrmAwNSzm
ok 612 - Clear session S0BXcJKWrmAwNSzm
PASS: tools/clear/test_ust 612 - Clear session S0BXcJKWrmAwNSzm
Error: Relayd rotate streams replied error 97
Error: Relayd rotate stream failed. Cleaning up relayd 33
Error: Relayd send index failed. Cleaning up relayd 33.
Error: Rotate channel failed
Error: Stream 76 relayd ID 33 unknown. Can't write index.
Error: Stream 74 relayd ID 33 unknown. Can't write index.
Error: Stream 72 relayd ID 33 unknown. Can't write index.
ok 613 - Start tracing for session S0BXcJKWrmAwNSzm
PASS: tools/clear/test_ust 613 - Start tracing for session S0BXcJKWrmAwNSzm
not ok 614 - Stop lttng tracing for session S0BXcJKWrmAwNSzm
FAIL: tools/clear/test_ust 614 - Stop lttng tracing for session S0BXcJKWrmAwNSzm
# Failed test 'Stop lttng tracing for session S0BXcJKWrmAwNSzm'
# in ./tools/clear//../../../utils/utils.sh:stop_lttng_tracing_opt() at line 1311.
ok 615 - Validate trace for event tp:tptest, 1 events
PASS: tools/clear/test_ust 615 - Validate trace for event tp:tptest, 1 events
not ok 616 - Read a total of 1 events, expected 4
FAIL: tools/clear/test_ust 616 - Read a total of 1 events, expected 4
# Failed test 'Read a total of 1 events, expected 4'
# in ./tools/clear//../../../utils/utils.sh:validate_trace_count() at line 1764.
Error: Failed to perform an implicit rotation as part of the destruction of session "S0BXcJKWrmAwNSzm": Unknown error code
not ok 617 - Destroy session S0BXcJKWrmAwNSzm
FAIL: tools/clear/test_ust 617 - Destroy session S0BXcJKWrmAwNSzm
# Failed test 'Destroy session S0BXcJKWrmAwNSzm'
# in ./tools/clear//../../../utils/utils.sh:destroy_lttng_session() at line 1347.
# Test ust streaming clear-rotate
Looking at the relay daemon log when the problem is reproduced,
we see:
Error: Protocol error: received a packet for a stream that doesn't have a current trace chunk: stream_id = 1, channel_name = chan_0
Cause
=====
The "rotate channel" consumer command iterates over a channel's streams
to perform a rotation and open a new packet when necessary
(see comments).
In the case where a channel is associated with a relay daemon, the
rotation positions are accumulated to send a single "rotate channel
streams" command to the relay daemon. This is done to reduce the time
needed to complete a rotation when tracing to a relay daemon through an
high-latency network connection.
Unfortunately, this causes packets to be opened before the rotation
command was sent to the relay daemon as the "open packet" command
is performed during the iteration on the streams.
Solution
========
Streams for which a packet should be opened are accumulated into an
array of stream pointers. The "open packet" operation is then performed
as a second pass, after a successful rotation of the streams.
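An illustrative sketch of the two-pass structure; none of these names
are the actual consumerd symbols:
```
/* Hypothetical helper demonstrating the two passes described above. */
static int rotate_channel_streams(struct stream *streams[], size_t count)
{
	struct stream *to_open[count];
	size_t open_count = 0, i;

	/* First pass: rotate all streams, deferring "open packet". */
	for (i = 0; i < count; i++) {
		if (rotate_stream(streams[i])) {
			return -1;
		}
		if (stream_requires_packet_open(streams[i])) {
			to_open[open_count++] = streams[i];
		}
	}

	/* The single "rotate streams" relayd command is sent here... */
	if (send_relayd_rotate_streams(streams, count)) {
		return -1;
	}

	/* ...so the second pass cannot leak a packet that precedes the
	 * rotation on the relay daemon's side. */
	for (i = 0; i < open_count; i++) {
		open_packet(to_open[i]);
	}
	return 0;
}
```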
Fix: relayd: wrong specifier used in DBG format string
`len` is of type uint64_t while the format string specifies a size of
`%zd`. This results in a warning on most 32-bit architectures.
In file included from ../../../src/common/common.h:12:0,
from live.c:33:
live.c: In function `viewer_get_metadata`:
../../../src/common/error.h:161:35: warning: format `%zd` expects
argument of type `signed size_t`, but argument 6 has type `uint64_t
{aka long long unsigned int}` [-Wformat=]
#define DBG(fmt, args...) _ERRMSG("DEBUG1", PRINT_DBG, fmt, ## args)
^
../../../src/common/error.h:136:51: note: in definition of macro `__lttng_print`
fprintf((type) == PRINT_MSG ? stdout : stderr, fmt, ## args); \
^~~
../../../src/common/error.h:161:27: note: in expansion of macro `_ERRMSG`
#define DBG(fmt, args...) _ERRMSG("DEBUG1", PRINT_DBG, fmt, ## args)
^~~~~~~
live.c:2051:4: note: in expansion of macro `DBG`
DBG("Failed to read metadata: requested = %zd, got = %zd",
^~~
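The usual fix for this class of warning is the fixed-width format
macro from <inttypes.h>; presumably something along these lines
(variable names assumed):
```
#include <inttypes.h>

DBG("Failed to read metadata: requested = %" PRIu64 ", got = %" PRIu64,
		len, read_len);
```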
Add a test that validates that the relay daemon performs a
metadata stream rotation after a clear.
The test enables a single event and launches a test application to
trigger it 10 times. Once the 10 event occurrences have been seen
by the live client, the session is cleared.
Then, two new events (statedump start and end) are enabled. This causes
new event descriptions to be appended to the metadata stream.
After a clear, the relay daemon should rotate the metadata stream
and send the new contents to the client. The client will then
be able to decode the statedump start/end events.
Fix: relayd: send_viewer_streams sends stack data in padding
A single stack-allocated instance of `struct lttng_viewer_stream` is
used to send the various streams to the live viewer. This structure
contains a path and channel name which remain uninitialized beyond the
null terminator.
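A minimal sketch of the implied fix: zero the whole structure before
filling it so the bytes past the string terminators never leak stack
contents (field names assumed):
```
#include <string.h>

struct lttng_viewer_stream send_stream;

memset(&send_stream, 0, sizeof(send_stream));
/* ... then fill id, path_name, channel_name, etc., before sending. */
```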
Clean-up: consumer: move open packet to post_consume
Move the "open packet" step of read_subbuffer to a post-consume callback
as this only needs to be done for data streams; it does not belong in
the core of the read_subbuffer template method.
Fix: stream intersection fails on snapshot of cleared session
Observed issue
==============
In the following scenario:
lttng create --snapshot
lttng enable-event -u -a
lttng start
taskset -c 0 <tracepoint producing app>
lttng clear
taskset -c 0 <tracepoint producing app>
lttng snapshot record
lttng destroy
When using the stream-intersection mode, babeltrace complains that the
time range for the intersection is invalid since the begin timestamp is
after the end timestamp.
This is caused by the presence of "inactive" streams for which no events
are recorded between the clear action and the recording of the snapshot.
These streams have a begin timestamp roughly equal to the moment when
the snapshot was taken (i.e. the end timestamp). Babeltrace, in
stream-intersection mode, attempts to use the latest beginning timestamp
of all streams as the start of the intersection and the earliest end
timestamp as the end boundary.
The packet beginning timestamps of the buffers are initialized on
creation (on the first "start" of a tracing session). When a "clear" is
performed on a session, all open packets are closed and the existing
contents are purged.
If a stream is inactive, it is possible for no packet to be "opened"
until a snapshot of the tracing session is recorded.
Solution
========
A new consumer command, "open channel packets" is added. This command
performs a "flush empty" operation on all streams of a channel.
This command is invoked after a clear (after the tracing is re-started)
and on start. This ensures that streams are opened as soon as possible
after a clear, a rotation, or a session start.
Known drawbacks
===============
In the case of an inactive stream, this results in an extra empty
packet (typically 4 kB) at the beginning of the inactive streams in the
snapshots.
In the case of an active stream, this change will cause the first packet
to be empty or contain few events. If the stream is active enough to
wrap-around, that empty packet will simply be overwritten.
Fix: relayd: viewer metadata is not rotated after a session clear
Issue observed
==============
Following a session clear, babeltrace sometimes doesn't receive
the content of the metadata that is announced in the get_metadata
reply header.
This causes babeltrace2 to assert (a fix has been submitted) since
the protocol state becomes de-synchronized, causing babeltrace to
interpret follow-up replies as garbage.
This was occasionally observed on the CI when running the "clear" tests.
Cause
=====
There are no provisions made for rotating a viewer metadata stream when
a clear is performed on a live session. This means that a client can
request metadata that is only present in a newer chunk.
In the case of the crashes observed on the CI, the relay daemon attempts
to service a get_new_metadata request of size `4096`. It then fails to
read the data (as it was never present in the original trace chunk).
The relay daemon does not interpret the `0` returned by lttng_read() as
an error and sends a reply announcing `4096` bytes of metadata and no
payload.
Solution
========
Two fixes are rolled into this patch.
First, the return of lttng_read is checked for `0` and that situation is
handled as an error. However, this still leaves the problem of the
metadata stream not being rotated.
Secondly, the metadata relay_stream is checked for a rotation on every
`get_metadata` command. If a rotation has been detected, a viewer
rotation is performed on the metadata stream (very similar to the data
stream).
This solves the problem, but it leaves a case which the protocol does
not account for.
Essentially, the following can take place:
- relayd sets the "NEW_METADATA" flag as part of a `get_next_index`
query reply
- A rotation of the metadata stream occurs, no data is sent.
- client requests metadata
- metadata sent > received (was reset to 0 as part of the rotation)
In this scenario, the current implementation returned NO_NEW_METADATA,
but this is erroneous. Returning this guarantees to the viewer that it
will be able to decode all data packets that follow (until new metadata
is signalled, if ever).
Ideally, we would return a `RETRY` code, as is done by the data stream
handler when it detects that a rotation is taking place. Unfortunately,
such a code doesn't exist for the `get_metadata` command.
We return `OK` with a length of 0, which is technically correct since
viewers are supposed to fetch metadata until the relay daemon returns
the `NO_NEW_METADATA` status code. However, supporting this has required
changes to babeltrace2's lttng-live source component.
I'm anticipating that most implementations don't handle the 0-length
case any better.
Known drawbacks
===============
Older viewers may not handle `OK` replies with a length of 0 gracefully.
Sending `NO_NEW_METADATA` is not an option as it breaks the guarantee
that all necessary metadata will be received before `NO_NEW_METADATA`.
Fix: post-clear trace chunk has a late beginning packet
Observed issue
==============
In the following scenario:
- create a regular session
- enable an event that occurs on only one core
- start tracing
- trace for a while
- stop tracing
- clear the session
- start
- trace for a while (referred to as the "active period" later on)
- stop
The resulting trace will contain a very short stream intersection as the
active stream will contain packets spanning the entire active period.
However, all other streams will contain a single packet at the end of
the active period with a duration of 0 ns.
This presents two problems:
1) This makes the stream intersection mode of viewers unusable,
2) This misleads the user into thinking that tracing was not active
for some buffers.
Cause
=====
The packet beginning timestamps of the buffers are initialized on
creation (on the first "start" of a tracing session). When a "clear" is
performed on a session, all open packets are closed and the existing
contents are purged.
If a stream is inactive, it is possible for no packet to be "opened"
until the "stop" of the tracing session.
On stop, a "flush_empty" is performed. Such a flush opens a packet
(if it was not already the case), closes it, and marks the packet as
being ready for consumption.
Solution
========
Attempt to flush an empty packet as close to the rotation point as
possible. In the event where a stream remains inactive after the
rotation point, this ensures that the new trace chunk has a beginning
timestamp set at the beginning of the trace chunk instead of only
creating an empty packet when the trace chunk is stopped.
This indicates to the viewers that the stream was being recorded, but
more importantly it allows viewers to determine a usable trace
intersection.
This presents a problem in the case where the ring-buffer is completely
full.
Consider the following scenario:
- The consumption of data is slow (slow network, for instance),
- The ring buffer is full,
- A rotation is initiated,
- The flush below does nothing (no space left to open a new packet),
- The other streams rotate very soon, and new data is produced in the
new chunk,
- This stream completes its rotation long after the rotation was
initiated
- The session is stopped before any event can be produced in this
stream's buffers.
The resulting trace chunk will have a single packet, located temporally
at the end of the trace chunk for this stream, making the stream
intersection narrower than it should be.
To work around this, an empty flush is performed after the first
consumption of a packet during a rotation if the initial flush failed.
The idea is that consuming a packet frees enough space to switch packets
in this scenario and allows the tracer to "stamp" the beginning of the
new trace chunk at the earliest possible point.
Note that metadata streams are always skipped when opening a packet.
This is done for two reasons:
1) Timestamps are not relevant to the metadata stream
2) Inserting an empty packet in the metadata stream after a rotation
breaks the use of "clear" in live mode.
The contents of the metadata streams of successive chunks must be
strict superset of one another as live clients only receive the
information appended to a metadata stream (i.e. the parts it
already has received can't change).
If a flush_empty was performed after a clear/rotation, it would
result in an empty packet being inserted at the beginning of the
metadata stream that wasn't present in the first chunk.
This would cause the live client and relay daemon to have
mismatching copies of the metadata stream.
Known drawbacks
===============
In the case of an inactive stream, this results in the completed trace
chunk archive containing an extra empty packet at the beginning of the
stream (typically 4kB).
In the case of an active stream, this change will cause the first packet
to be empty or contain few events.
Those are all efficiency losses that are inevitable (AFAIK) given the
current buffer control APIs. It will be possible to recoup those losses
if an API allowing the consumer daemon to open a new packet is
introduced.
As noted in the comments, this patch is not final. The flush after the
rotation should not be open-coded in lttng_consumer_read_subbuffer. It
should be a data-stream specific "post-consume" step.
Fix: kconsumer: missing wait for metadata thread in do_sync_metadata
The `do_sync_metadata` function is invoked every time a data sub-buffer
is consumed in live mode.
In the user space case, lttng_ustconsumer_sync_metadata() returns
EAGAIN (positive) when there is new metadata to consume. This causes the
"metadata rendez-vous" synchronization to take place. However, the
kernel variant of this function returns 0 when there is new data to
consume, causing the "rendez-vous" to be skipped.
I have not observed an issue caused by this first-hand, but the check
appears bogus and skips over an essential synchronization step.
This check has been in place since at least 2013, although the callees
and their return values may have changed at some point in the past.
Solution
--------
The user space and kernel code paths mix various return code conventions
(negative errno, positive errno, 0/-1) which makes it difficult to
understand the final return codes and most likely lead to this confusion
in the first place.
Moreover, returning EAGAIN to indicate that data is ready to be consumed
is not appropriate in view of the existing conventions in the code base.
An explicit `enum sync_metadata_status` is returned by the domains'
sync_metadata operations, which allows the common code to handle the
various conditions in a straightforward manner and for the
"rendez-vous" to take place in the kernel case.
Reported-by: Francis Deslauriers <francis.deslauriers@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: Ib022eee97054c0b376853dd05593e3b94bc9a8ca
Clean-up: relayd: unused tcp keep alive config return value
gcc 10.1.0 reports:
tcp_keep_alive.c: In function 'tcp_keep_alive_init':
tcp_keep_alive.c:519:9: warning: 'removed_return.58' is used uninitialized in this function [-Wuninitialized]
519 | return tcp_keep_alive_init_config(&support, &config);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
tcp_keep_alive_init is a constructor and, as such, should not return a
value. GCC warns that its return value is unused (although with a
somewhat strange warning message).
The return value of tcp_keep_alive_init_config is ignored as all
error paths log a suitable error and not much more can be done.
Fix: tests: interrupting get_next_notification causes test to fail
Attaching a debugger to the `notification` test application causes
tests to fail when the application is waiting for a notification
as the function will return an `INTERRUPTED` status code.
Jonathan Rajotte [Thu, 25 Jun 2020 20:52:33 +0000 (16:52 -0400)]
Fix: consumer: dangling chunk on buffer allocation failure
Observed issue
==============
Using Docker, the /dev/shm mount size is 64 MB by default. On systems
with many threads (256), combined with the default sub-buffer count and
sub-buffer size, allocation failure of the sub-buffers is inevitable
since the /dev/shm mountpoint is too small.
When a user tries to destroy the problematic session, the destroy
command hangs.
# Force the size of /dev/shm, make sure to remount it to its original
# size when done testing.
sudo mount -o remount,size=1G /dev/shm
lttng create
lttng enable-channel --subbuf-size 500M -u test
lttng enable-event -u -a -c test
lttng start
# Run an app;
../lttng-ust/doc/examples/easy-ust/sample
lttng-sessiond should output:
Error: ask_channel_creation consumer command failed
Error: Error creating UST channel "test" on the consumer daemon
The hang is caused by the check for an ongoing rotation, which never
completes.
The consumer reports that the trace chunk still exists. This is caused
by an imbalance of the reference count for the trace chunk on close.
This is caused by a missing lttng_trace_chunk_put in destroy_channel
which is called on the error path for the consumer channel creation.
Solution
========
Call lttng_trace_chunk_put if the channel->trace_chunk is assigned
inside the channel_destroy function.
The error handling in ust_app_channel_create is also reworked since the
return value of ust_app_channel_send would be squashed for scenarios
where contexts are present.
Known drawbacks
===============
None
Signed-off-by: Jonathan Rajotte <jonathan.rajotte-julien@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: Idf19b1d306cf347619ccfda1621d91d8303f3c7c
Philippe Proulx [Tue, 5 May 2020 19:37:30 +0000 (15:37 -0400)]
Convert `README.md` to `README.adoc`
Asciidoctor offers more features than Markdown which this README can
benefit from (attributes, table of contents, description lists, and
non-HTML internal anchors to name a few).
On GitHub, the table of contents is placed right after the preamble,
before the first section's heading. Otherwise, it's placed on the left
for increased readability (not supported by GitHub).
Signed-off-by: Philippe Proulx <eeppeliteloop@gmail.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: Id4965abaa7151cb0fc3caca290fcf5d3669f0b80
Jonathan Rajotte [Wed, 17 Jun 2020 19:55:36 +0000 (15:55 -0400)]
Fix: invalid discarded events on start/stop without event production
Observed issue
==============
On consecutive start/stop command sequences, the reported discarded
event count is N * CPU, where N is the number of start/stop pairs
executed. Note that no event generation occurred between each
start/stop pair.
lttng start
lttng stop
Tracing stopped for session auto-20200616-094338
lttng start
lttng stop
Waiting for data availability
Warning: 4 events were discarded, please refer to the documentation on channel configuration.
Tracing stopped for session auto-20200616-094338
lttng start
lttng stop
Waiting for data availability
Warning: 8 events were discarded, please refer to the documentation on channel configuration.
Tracing stopped for session auto-20200616-094338
The issue was bisected down to:
commit 6f9449c22eef59294cf1e1dc3610a5cbf14baec0 (HEAD)
Author: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Date: Sun May 10 18:00:26 2020 -0400
consumerd: refactor: split read_subbuf into sub-operations
[...]
Cause
=====
The discarded event local variable, in `consumer_stream_update_stats()`,
is initialized with the subbuffer sequence count instead of the
subbuffer discarded event count.
Solution
========
Use the subbuffer discarded event count to initialize the variable.
Known drawbacks
===============
None
Signed-off-by: Jonathan Rajotte <jonathan.rajotte-julien@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: I5ff213d0464cdb591b550f6e610bf15085b18888
Fix: consumerd: user space metadata not regenerated
Observed Issue
==============
The LTTng-IVC tests fail on the `regenerate metadata` tests which
essentially:
- Setups a user space session
- Enables events
- Traces an application
- Stops tracing
- Validates the trace
- Truncates the metadata file (empties it)
- Starts tracing
- Regenerates the metadata
- Stops the session
- Validates the trace
The last trace validation step fails on an empty file (locally) or
a garbled file (remote).
The in-tree tests did not catch any of this since they essentially
don't test much. They verify that the command works (returns 0) but do
not validate any of its effects.
The issue was bisected down to:
commit 6f9449c22eef59294cf1e1dc3610a5cbf14baec0 (HEAD)
Author: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Date: Sun May 10 18:00:26 2020 -0400
consumerd: refactor: split read_subbuf into sub-operations
[...]
Cause
=====
The commit that introduced the issue refactored the sub-buffer
consumption loop to eliminate code duplications between the user space
and kernel consumer daemons.
In doing so, it eliminated a metadata version check from the
consumption path.
The consumption of a metadata sub-buffer follows those relevant
high-level steps:
- `get` the sub-buffer
- /!\ user space specific /!\
- if the `get` fails, attempt to flush the metadata cache's
contents to the ring-buffer
- populate `stream_subbuffer` properties (size, version, etc.)
- check the sub-buffer's version against the last known metadata
version (pre-consume step)
- if they don't match, a metadata regeneration occurred: reset the
metadata consumed position
- consume (actual write/send)
- `put` sub-buffer
[...]
As shown above, the user space consumer must manage the flushing of the
metadata cache explicitly as opposed to the kernel domain for which the
tracer performs the flushing implicitly through the `get` operation.
When the user space consumer encounters a `get` failure, it checks
if all the metadata cache was flushed (consumed position != cache size),
and flushes any remaining contents.
However, the metadata version could have changed and yielded an
identical cache size: a regeneration without any new metadata will
yield the same cache size.
Since 6f9449c22, the metadata version check is only performed after
a successful `get`. This means that after a regeneration, `get`
never succeeds (there is seemingly nothing to consume), and the
metadata version check is never performed.
Therefore, the metadata stream is never put in the `reset` mode,
effectively not regenerating the data.
Note that producing new metadata (e.g. a newly registering app
announcing new events) would work around the problem here.
Solution
========
Add a metadata version check when failing to `get` a metadata
sub-buffer. This is done in `commit_one_metadata_packet()` when the
cache size is seen to be equal to the consumed position.
When this occurs, `consumer_stream_metadata_set_version()`, a new
consumer stream method, is invoked which sets the new metadata version,
sets the `reset` flag, and discards any previously bucketized metadata.
The metadata cache's consumed position is also reset, allowing the
cache flush to take place.
`metadata_stream_reset_cache()` is renamed to
`metadata_stream_reset_cache_consumed_position()` since its name is
misleading and since it is used as part of the fix.
Known drawbacks
===============
None.
Change-Id: I3b933c8293f409f860074bd49bebd8d1248b6341
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Reported-by: Jonathan Rajotte <jonathan.rajotte-julien@efficios.com>
Ovidiu Panait [Mon, 18 May 2020 13:39:26 +0000 (16:39 +0300)]
tests: gen-ust-events-ns/tp.h: Fix build with musl libc
Fix the following build error with musl libc:
In file included from ../../../../../lttng-tools-2.12.0/tests/utils/testapp/gen-ust-events-ns/tp.h:14,
from ../../../../../lttng-tools-2.12.0/tests/utils/testapp/gen-ust-events-ns/tp.c:10:
../../../../../lttng-tools-2.12.0/tests/utils/testapp/gen-ust-events-ns/tp.h:17:10: error: unknown type name 'ino_t'; did you mean 'int8_t'?
17 | TP_ARGS(ino_t, ns_ino),
| ^~~~~
../../../../../lttng-tools-2.12.0/tests/utils/testapp/gen-ust-events-ns/tp.h:17:10: error: unknown type name 'ino_t'; did you mean 'int8_t'?
17 | TP_ARGS(ino_t, ns_ino),
| ^~~~~
../../../../../lttng-tools-2.12.0/tests/utils/testapp/gen-ust-events-ns/./tp.h:17:2: error: unknown type name 'ino_t'; did you mean 'int8_t'?
17 | TP_ARGS(ino_t, ns_ino),
| ^~~~~~~
../../../../../lttng-tools-2.12.0/tests/utils/testapp/gen-ust-events-ns/./tp.h:17:2: error: unknown type name 'ino_t'; did you mean 'int8_t'?
17 | TP_ARGS(ino_t, ns_ino),
| ^~~~~~~
Simon Marchi [Mon, 2 Dec 2019 21:41:52 +0000 (16:41 -0500)]
actions: introduce action group
This patch introduces action groups as a new kind of action.
When creating a trigger, it is only possible to attach a single action.
Action groups allow users to attach more than one action.
A group is created using lttng_action_group_create. Actions are added
to it using lttng_action_group_add_action. The group can then be used
in a trigger, like any other action.
The operations required to be implemented by actions (serialize,
create_from_buffer, validate) are implemented by executing the operation
on all elements.
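For illustration, creating and populating a group could look like this
sketch (the notify action and status enumerators are assumptions; error
handling elided):
```
struct lttng_action *group = lttng_action_group_create();
struct lttng_action *notify = lttng_action_notify_create();

/* Add as many actions as needed; the group executes them all. */
if (lttng_action_group_add_action(group, notify) !=
		LTTNG_ACTION_STATUS_OK) {
	/* Handle the error. */
}

/* `group` can now be used in a trigger, like any other action. */
```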
Current limitations are:
- To avoid any cycle, it is not possible to add a group inside a group.
Signed-off-by: Jonathan Rajotte <jonathan.rajotte-julien@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Signed-off-by: Simon Marchi <simon.marchi@efficios.com>
Change-Id: I2ae6aed21d9a6b45510d37a435773b1bd7d76528
lttng_action objects will be shared with the action executor subsystem
which executes them asynchronously. This introduces an ambiguous
ownership since triggers could be unregistered while an action is
executing (or pending execution).
This also eases the ownership management of the group action's
sub-actions, allowing clients to add the same action to an action
group multiple times without any problem.
The currently user-visible 'destroy' method simply calls the 'put'
method which preserves the expected behaviour for current users (both
internal and public) of the API.
Simon Marchi [Mon, 2 Dec 2019 20:20:38 +0000 (15:20 -0500)]
actions: introduce snapshot session action
This patch introduces the API for the "snapshot session" action.
A snapshot session action is created using the
lttng_action_snapshot_session_create function. It is mandatory to set a
session name using lttng_action_snapshot_session_set_session_name before
using the action in a trigger.
It is possible, but optional, to provide a snapshot name with
lttng_action_snapshot_session_set_snapshot_name.
The patch adds the code for serializing the action and deserializing it
on the sessiond side, but not the code for executing it.
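A usage sketch based on the functions named above (return values
checked in real code, elided here):
```
struct lttng_action *action = lttng_action_snapshot_session_create();

/* Mandatory: the session to snapshot when the trigger fires. */
lttng_action_snapshot_session_set_session_name(action, "my-session");

/* Optional: the name under which the snapshots are recorded. */
lttng_action_snapshot_session_set_snapshot_name(action, "on-trigger");
```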
Change-Id: I2b76680d44bf69eb705f2a238fffef2519b82534
Signed-off-by: Simon Marchi <simon.marchi@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
liblttng-ctl: use lttng_payload for serialize/create_from_buffer
Some objects used in the sessiond <-> liblttng-ctl communication (e.g.
a userspace probe event rule) contain file descriptors that
must be carried across process boundaries (fd passing).
Since those objects are often nested within a higher-level object
hierarchy, it makes sense to adapt the existing
serialize/create_from_buffer interface to use an lttng_payload.
An lttng_payload contains a dynamic buffer and an array of file
descriptors. Objects are expected to push their file descriptors in the
payload in the same way they currently push the bytes of their binary
representation in a dynamic buffer.
Conversely, an lttng_payload_view interface is added and contains a
buffer_view and an iterator which allows objects to `pop` a file
descriptor when appropriate.
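Roughly, the two types described here have the following shape; the
field names and types are assumptions, not the actual definitions:
```
struct lttng_payload {
	struct lttng_dynamic_buffer buffer;
	/* Array of fd_handle pointers pushed during serialization. */
	struct lttng_dynamic_pointer_array _fd_handles;
};

struct lttng_payload_view {
	struct lttng_buffer_view buffer;
	/* fd position, shared with the root view so that nested views
	 * pop file descriptors in order. */
	size_t *_fd_iterator_position;
};
```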
Tests are added to validate the FD consumption behaviour depending
on the origin of payload views (root view or descendant view).
These APIs are used by the upcoming "snapshot session" action
used to trigger a snapshot on a given condition.
The internal API is added to transmit a snapshot output as part of an
action, while the public API is added to clean up the current
snapshot_output API, which will be used by the client to create the
snapshot_session actions.
For instance, with set_local_path, it is no longer necessary to create a
local snapshot output by calling lttng_snapshot_output_set_ctrl_url
with a "file://" protocol.
Signed-off-by: Simon Marchi <simon.marchi@efficios.com>
Signed-off-by: Jonathan Rajotte <jonathan.rajotte-julien@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: I00f9521faf9f66890ad6ea9a05ad7f6468f805f8
Simon Marchi [Mon, 2 Dec 2019 20:01:50 +0000 (15:01 -0500)]
actions: introduce rotate session action
This patch introduces the API for the "rotate session" action.
A rotate session action is created using the
lttng_action_rotate_session_create function. It is mandatory to set a
session name using lttng_action_rotate_session_set_session_name before
using the action in a trigger.
The patch adds the code for serializing the action and deserializing it
on the sessiond side, but not the code for executing it.
Change-Id: I8ed1a71a00deaa6abafaff703a8980c2c7598bda
Signed-off-by: Simon Marchi <simon.marchi@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Simon Marchi [Mon, 2 Dec 2019 19:53:11 +0000 (14:53 -0500)]
actions: introduce stop session action
This patch introduces the API for the "stop session" action.
A stop session action is created using the
lttng_action_stop_session_create function. It is mandatory to set a
session name using lttng_action_stop_session_set_session_name before
using the action in a trigger.
The patch adds the code for serializing the action and deserializing it
on the sessiond side, but not the code for executing it.
Change-Id: Ie00d744340f85f15a333680cf86d3882bd612d1a
Signed-off-by: Simon Marchi <simon.marchi@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Simon Marchi [Fri, 29 Nov 2019 21:48:45 +0000 (16:48 -0500)]
actions: introduce start session action
This patch introduces the API for the "start session" action.
A start session action is created using the
lttng_action_start_session_create function. It is mandatory to set a
session name using lttng_action_start_session_set_session_name before
using the action in a trigger.
The patch adds the code for serializing the action and deserializing it
on the sessiond side, but not the code for executing it.
Change-Id: I90598e25a461ccabcf7dc327aaa73b3d35e203af
Signed-off-by: Simon Marchi <simon.marchi@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Fix: consumerd: live client receives incomplete metadata
Observed issue
==============
Babeltrace 1.5.x and Babeltrace 2.x can both report errors (albeit
differently) when using the "lttng-live" protocol that imply that the
metadata they received is incomplete.
For instance, babeltrace 1.5.3 reports the following error:
```
[error] Error creating AST
[error] [Context] Cannot open_mmap_trace of format ctf.
[error] Error adding trace
[warning] [Context] Cannot open_trace of format lttng-live at path net://localhost:xxxx/host/session/live_session.
[warning] [Context] cannot open trace "net://localhost:xxxx/host/session/live_session" for reading.
[error] opening trace "net://localhost:xxxx/host/session/live_session" for reading.
[error] none of the specified trace paths could be opened.
```
While debugging both viewers, I noticed that both were attempting to
receive the available metadata before consuming the "data" streams'
content.
Typically, the following exchange between the relay daemon and the
lttng-live client occurs when the problem is observed:
When the lttng-live client receives the LTTNG_VIEWER_NO_NEW_METADATA
status code, it attempts to parse all the metadata it has received
since the last LTTNG_VIEWER_NO_NEW_METADATA reply. In effect, it is
expected that this forms a logical unit of metadata that is parseable
on its own.
If this is the first time metadata is received for that trace, the
metadata is expected to contain a trace declaration, packet header
declaration, etc.
If metadata was already received, it is expected that the newly parsed
declarations can be "appended" to the existing trace schema.
It appears that the relay daemon sends the
LTTNG_VIEWER_NO_NEW_METADATA while the metadata it has sent up to that
point is not parseable on its own.
The live protocol description neither requires nor implies that a
viewer should attempt to parse metadata packets until it hopefully
succeeds at some point. In any case:
1) This would make it impossible for a live viewer to correctly
handle a corrupted metadata stream beyond retrying forever,
2) This behaviour is not implemented by the two reference
implementations of the protocol.
Cause
=====
The relay daemon provides a guarantee that it will send any available
metadata before allowing a data stream packet to be served to the
client.
In other words, a client requesting a data packet will receive the
LTTNG_VIEWER_FLAG_NEW_METADATA status code (and no data) if it
attempts to get a data stream packet while the relay daemon has
metadata already available.
This guarantee is properly enforced as far as I can tell. However,
looking at the consumer daemon implementation, it appears that
metadata packets are sent as soon as they are available.
A metadata packet is not guaranteed to be parseable on its own. For
instance, it can end in the middle of an event declaration.
Hence, this hints at a race involving the tracer, the consumer daemon,
the relay daemon, and the lttng-live client.
Consider the following scenario:
- Metadata packets (sub-buffers) are configured to be 4kB in size,
- a large number of kernel events are enabled (e.g. --kernel --all),
- the network connection between the consumer and relay daemons is
slow.
1) The kernel tracer will produce enough TSDL metadata to fill the
first sub-buffer of the "metadata" ring-buffer and signal the
consumer daemon that a buffer is ready. The tracer then starts
writing the remaining data in the following available sub-buffers.
2) The consumer daemon metadata thread is woken up and consumes the
first metadata sub-buffer and sends it to the relay daemon.
3) A live client establishes an lttng-live connection to the relay
daemon and attempts to consume the available metadata. It receives
the first packet and, since the relay daemon doesn't know about any
follow-up metadata, receives LTTNG_VIEWER_NO_NEW_METADATA on the
next attempt.
4) Having received LTTNG_VIEWER_NO_NEW_METADATA, the lttng-live client
attempts to parse the metadata it has received and fails.
This scenario is easy to reproduce by inserting a "sleep(1)" at
src/bin/lttng-relayd/main.c:1978 (as of this revision). This simulates
a relay daemon that would be slow to receive/process metadata packets
from the consumer daemon.
This problem similarly applies to the user space tracer.
Solution
========
Having no knowledge of TSDL, the relay daemon can't "bundle" the
metadata packets received from the consumer daemon until they form a
"parseable unit" that can be served to its live clients.
To provide the parseability guarantee expected by the viewers, and by
the relay daemon implicitly, we need to ensure that the consumer
daemons only send "parseable units" of metadata to the relay daemon.
Unfortunately, the consumer daemons do not know how to parse TSDL
either. In fact, only the metadata producers are able to provide the
boundaries of the "parseable units" of metadata.
The general idea of the fix is to accumulate metadata up to a point
where a "parseable unit" boundary has been identified and send that
content in one request to the relay daemon. Note that the solution
described here only concerns the live mode. In other cases, the
mechanisms described are simply bypassed.
A "metadata bucket" is added to lttng_consumer_stream when it is
created from a live channel. This bucket is filled until the
consumption position reaches the "parseable unit" end position.
A refresher about the handling of metadata in live mode
-------------------------------------------------------
Three "events" are of interest here and can cause metadata to be
consumed more or less indirectly:
1) A metadata packet is closed, causing the metadata thread to wake
up
2) The live timer expires
3) A data sub-buffer is closed, causing the data thread to wake up
1) The first case is simple and happens whether or not the tracing
session is in live mode. Metadata is always consumed by the metadata
thread in the same way. However, this scenario can be "caused" by (2)
and (3). See [1]. A sub-buffer is "acquired" from the metadata
ring-buffer and sent to the relay daemon as the payload of a
"RELAYD_SEND_METADATA" command.
2) When the live timer expires [2], the 'check_stream' function is
called on all data streams of the session. As its name clearly
implies, this function is responsible for flushing all streams or
sending a "live beacon" (called an "empty index" in the code) if
there is no data to flush. Any flushed data will result in (3).
3) When a data sub-buffer is ready to be consumed, [1] is invoked
by the data thread. This function acquires a sub-buffer and sends
it to the relay daemon through the data connection.
Then, an important synchronization step takes place. The index of
the newly-sent packet will be sent through the control
connection. The relay daemon waits for both the data packet and its
matching index before making the new packet visible to live
viewers.
Since a data packet could contain data that requires "newer"
metadata to be decoded, the data thread flushes the metadata stream
and enters a "waiting" phase to pause until all metadata present in
the metadata ring buffer has been consumed [3].
At the end of this waiting phase, the data thread sends the data
packet's index to the relay daemon, allowing the relayd to make it
visible to its live clients.
How to identify a "parseable unit" boundary?
--------------------------------------------
In the case of the kernel domain, the kernel tracer produces the
actual TSDL descriptions directly. The TSDL metadata is serialized to
a metadata cache and is flushed "just in time" to the metadata
ring-buffer when a "get next" operation is performed.
There is no way, from user space, to query whether or not the metadata
cache of the kernel tracer is empty. Hence, a new
RING_BUFFER_GET_NEXT_SUBBUF_METADATA_CHECK command was added to
query whether or not the kernel tracer's metadata cache is empty when
acquiring a sub-buffer.
This allows the consumer daemon to identify a "coherent" position in
the metadata stream that is safe to use as a "parseable unit"
boundary.
As for the user space domain, since the session daemon is responsible
for generating the TSDL representation of the metadata, there is no
need to change LTTng-ust APIs.
The session daemon generates coherent units of metadata and adds them
to its "registry" at once (protected by the registry's lock). It then
flushes the contents to the consumer daemon and waits for that data to
be consumed before proceeding further.
On the consumer daemon side, the metadata cache is filled with the
newly-produced contents. This is done atomically with respect to
accesses to the metadata cache as all accesses happen through a
dedicated metadata cache lock.
When the consumer's metadata polling thread is woken up, it will
attempt to acquire (`get_next`) a sub-buffer from the metadata stream
ring-buffer. If it fails, it will flush a sub-buffer's worth of
metadata to the ring-buffer and attempt to acquire a sub-buffer again.
At this point, it is possible to determine if that sub-buffer is the
last one of a parseable metadata unit: the cache must be empty and the
ring-buffer must be empty following the consumption of this
sub-buffer. When those conditions are met, the resulting metadata
`stream_subbuffer` is tagged as being `coherent`.
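Expressed as code, the tagging rule amounts to the following model
(illustrative names only, not actual lttng-tools symbols):
```
#include <stdbool.h>

/* Illustrative model of the coherence rule described above. */
struct metadata_subbuffer {
	bool coherent; /* true: ends a "parseable unit" */
};

static void tag_metadata_subbuffer(struct metadata_subbuffer *subbuf,
		bool cache_empty, bool ring_buffer_empty_after_consume)
{
	/*
	 * No metadata may remain behind this sub-buffer: the producer's
	 * cache must be fully flushed to the ring-buffer, and the
	 * ring-buffer must be fully consumed once this sub-buffer is read.
	 */
	subbuf->coherent = cache_empty && ring_buffer_empty_after_consume;
}
```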
Metadata bucket
---------------
A helper interface, metadata_bucket, is introduced as part of this
fix. A metadata_bucket is `fill`ed with `stream_subbuffer`s, and is
eventually `flushed` when it is filled by a `coherent` sub-buffer.
As older versions of LTTng-modules must remain supported, this new
helper is not used when the
RING_BUFFER_GET_NEXT_SUBBUF_METADATA_CHECK operation is not
available. When the operation is available, the metadata stream's
bucketization is enabled, causing a bucket to be created and the
`consume` callback to be swapped.
The `consume` callback of the metadata streams is replaced by a new
implementation when the metadata bucketization is activated on the
stream. This implementation returns the padded size of the consumed
sub-buffer when it could be added to the bucket. When the bucket is
flushed, the regular `mmap`-based consumption function is called with
the bucket's contents.
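Roughly, the bucket's fill/flush behaviour can be sketched as follows
(an illustrative reimplementation, not the actual metadata_bucket
interface):
```
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative sketch of a metadata bucket. */
struct metadata_bucket {
	char *buf;       /* accumulated metadata */
	size_t size;     /* bytes currently buffered */
	size_t capacity; /* allocated size of buf */
};

/* Append one sub-buffer's payload; flush only on a coherent boundary. */
static int metadata_bucket_fill(struct metadata_bucket *bucket,
		const char *payload, size_t len, bool coherent,
		int (*flush)(const char *buf, size_t len))
{
	if (bucket->size + len > bucket->capacity) {
		const size_t new_capacity = (bucket->size + len) * 2;
		char *const new_buf = realloc(bucket->buf, new_capacity);

		if (!new_buf) {
			return -1;
		}
		bucket->buf = new_buf;
		bucket->capacity = new_capacity;
	}
	memcpy(bucket->buf + bucket->size, payload, len);
	bucket->size += len;

	if (!coherent) {
		/* Keep accumulating until the parseable unit is complete. */
		return 0;
	}

	/* Send the whole parseable unit in one request, then reset. */
	if (flush(bucket->buf, bucket->size)) {
		return -1;
	}
	bucket->size = 0;
	return 0;
}
```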
Known drawbacks
===============
This implementation causes the consumer daemon to buffer the whole
initial unit of metadata before sending it. In practice, this is not
expected to be a problem since the largest metadata files we have seen
in real use are a couple of megabytes in size.
Beyond the (temporary) memory use, this causes the metadata thread to
block while this potentially large chunk of metadata is sent (rather
than blocking while sending 4kb at a time).
The second point is just a consequence of existing shortcomings of the
consumerd; slow IO should not affect other unrelated streams. The
fundamental problem is that blocking IO is used and we should switch
to non-blocking communication if this is a problem (as is done in the
relay daemon).
The first point is more problematic given the existing tracer APIs.
If the tracer could provide the boundary of a "parseable unit" of
metadata, we could send the header of the RELAYD_SEND_METADATA command
with that size and send the various metadata packets as they are made
available. This would make no difference to the relay daemon as it is
not blocking on that socket and will not make the metadata size change
visible to the "live server" until it has all been received.
This size can't be determined right now since it could exceed the
total size of the "metadata" ring buffer. In other words, we can't wait
for the production of metadata to complete before starting to consume.
Finally, while implementing this fix, I also realized that the
computation of the rotation position of the metadata streams is
erroneous. The rotation code makes use of the ring-buffer's positions
to determine the rotation position. However, since both user space and
kernel domains make use of a "cache" behind the ring-buffer, that
cached content must be taken into account when computing the metadata
stream's rotation position.
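Conceptually, the corrected computation has to add the
cached-but-unflushed metadata to the position reported by the
ring-buffer; a sketch with hypothetical names:
```
#include <stdint.h>

/*
 * Hypothetical illustration: metadata generated into the cache but not
 * yet flushed to the ring-buffer still belongs to the chunk being
 * rotated out.
 */
static uint64_t metadata_rotation_position(uint64_t rb_produced_pos,
		uint64_t cache_total_size, uint64_t cache_flushed_size)
{
	/* Bytes sitting in the cache that the ring-buffer knows nothing about. */
	const uint64_t unflushed = cache_total_size - cache_flushed_size;

	return rb_produced_pos + unflushed;
}
```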
consumerd: refactor: split read_subbuf into sub-operations
The read_subbuf code paths intertwine domain-specific operations and
metadata/data-specific logic, which makes modifications error-prone and
introduces a fair amount of code duplication.
lttng_consumer_read_subbuffer is effectively turned into a template
method invoking overridable callbacks, making most of the consumption
logic domain- and data/metadata-agnostic.
The goal is not to extensively clean up that code path. A follow-up
fix introduces metadata buffering logic which would not reasonably fit
in the current scheme. This clean-up makes it easier to safely
introduce those changes.
No changes in behaviour are intended by this change.
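The resulting structure can be pictured with the following sketch
(callback names and signatures are illustrative, not the exact ones
introduced by this refactor):
```
#include <sys/types.h>

struct consumer_stream;

/* Overridable steps; each domain (kernel/UST) and stream type
 * (data/metadata) provides its own implementations. */
struct read_subbuffer_ops {
	int (*lock)(struct consumer_stream *stream);
	int (*get_next_subbuffer)(struct consumer_stream *stream);
	int (*pre_consume)(struct consumer_stream *stream);
	ssize_t (*consume)(struct consumer_stream *stream);
	int (*post_consume)(struct consumer_stream *stream);
	int (*put_subbuffer)(struct consumer_stream *stream);
	void (*unlock)(struct consumer_stream *stream);
};

/* Template method: the consumption skeleton is written once and stays
 * domain- and data/metadata-agnostic. */
static ssize_t read_subbuffer(struct consumer_stream *stream,
		const struct read_subbuffer_ops *ops)
{
	ssize_t consumed = -1;

	if (ops->lock(stream)) {
		goto end;
	}
	if (ops->get_next_subbuffer(stream)) {
		goto unlock;
	}
	if (ops->pre_consume && ops->pre_consume(stream)) {
		goto put;
	}
	consumed = ops->consume(stream);
	if (consumed >= 0 && ops->post_consume) {
		(void) ops->post_consume(stream);
	}
put:
	(void) ops->put_subbuffer(stream);
unlock:
	ops->unlock(stream);
end:
	return consumed;
}
```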
consumerd: move rotation logic to domain-agnostic read path
The "rotation ready" logic is duplicated in both user space and kernel
specializations of the read subbuffer functions.
It is moved to the domain-agnostic caller where it is needed only
once. This makes it easier to implement a follow-up fix and reduces
code duplication.
sessiond: enforce mmap output type for kernel metadata channel
A follow-up fix causes the consumer daemon to accumulate metadata
packets into a complete "unit" that can be parsed before sending it to
the relay daemon.
The consumer daemon will also need to extract the contents of the
metadata cache when computing a rotation position (follow-up fix too).
Hence, it is not possible to rely on the splice back-end as the
consumer daemon may need to accumulate more content than can be backed
by the ring buffer's underlying pages.
In both cases, the splice output mode could still be used when
combined with a memfd, but I see no tangible benefit. Moreover, it
would require a 3.17 kernel.
Curiously, the kernel metadata channel configuration appears to be
hard-coded twice: once in the ltt_kernel_session's
ltt_kernel_metadata, and once again in
kernel_consumer_add_metadata(). kernel_consumer_add_metadata is
modified to use the kernel session's metadata configuration.
consumerd: tag metadata channel as being part of a live session
Metadata channels that are part of a live session must be handled
differently than those of non-live sessions since complete "metadata
units" must be accumulated before they are forwarded to a relay daemon.
This tag allows a follow-up fix to identify live metadata channels;
this cannot be inferred from a channel's live_timer_interval, which is
always 0 for metadata channels.
consumerd: pass channel instance to stream creation function
Both callsites of consumer_allocate_stream() set the stream's "chan"
pointer after the creation. Pass the channel directly to the stream
creation function so it can initialize the stream according to the
channel's settings.
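For illustration, the shape of the change is roughly the following
(simplified stand-in types; not the exact consumer_allocate_stream
signature):
```
#include <stdint.h>
#include <stdlib.h>

/* Simplified stand-ins for the consumer's channel and stream types. */
struct lttng_consumer_channel {
	uint64_t session_id;
	unsigned int is_live; /* settings the stream should inherit */
};

struct lttng_consumer_stream {
	struct lttng_consumer_channel *chan;
	uint64_t key;
	uint64_t session_id;
};

/*
 * With the channel passed up front, the stream is initialized from the
 * channel's settings instead of every call site patching stream->chan
 * after creation.
 */
static struct lttng_consumer_stream *consumer_allocate_stream(
		struct lttng_consumer_channel *channel, uint64_t key)
{
	struct lttng_consumer_stream *stream = calloc(1, sizeof(*stream));

	if (!stream) {
		return NULL;
	}
	stream->chan = channel;
	stream->key = key;
	stream->session_id = channel->session_id;
	return stream;
}
```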
consumerd: move address computation from on_read_subbuffer_mmap
The computation of the subbuffer's address is moved outside of
lttng_consumer_on_read_subbuffer_mmap to make it usable with a regular
buffer. This facilitates an upcoming change.
Moreover, this has the benefit of isolating domain-specific logic from
this function which is supposed to be domain-agnostic.
Add a wrapper for RING_BUFFER_GET_NEXT_SUBBUF_METADATA_CHECK
which gets the next metadata subbuffer and returns a boolean flag
indicating whether the metadata is guaranteed to be in a consistent
state at the end of this sub-buffer (can be parsed).
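Its use by the metadata consumption path presumably looks like this
sketch (the wrapper name and signature are assumptions based on the
description above):
```
#include <stdbool.h>

/* Assumed wrapper prototype modeled on the description above. */
int kernctl_get_next_subbuf_metadata_check(int fd, bool *coherent);

static int consume_one_metadata_subbuffer(int stream_fd)
{
	bool coherent;
	int ret = kernctl_get_next_subbuf_metadata_check(stream_fd, &coherent);

	if (ret) {
		return ret;
	}
	/*
	 * 'coherent' is true when the kernel tracer's metadata cache is
	 * empty, i.e. this sub-buffer ends a unit that is parseable on its
	 * own; the metadata bucket can then safely be flushed.
	 */
	/* ... read the sub-buffer and fill/flush the metadata bucket ... */
	return coherent ? 1 : 0;
}
```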
Jonathan Rajotte [Thu, 21 May 2020 00:53:45 +0000 (20:53 -0400)]
Fix: common: fs_handle_seek returns negative value on success
Observed issue
==============
Both Babeltrace 1 and Babeltrace 2 fail to fetch data from a live
session.
Error:
```
PLUGIN/SRC.CTF.LTTNG-LIVE/VIEWER lttng_live_get_stream_bytes@viewer-connection.c:1593 [lttng-live] Received get_data_packet response: error
PLUGIN/CTF/MSG-ITER request_medium_bytes@msg-iter.c:546 [lttng-live] User function failed: status=ERROR
PLUGIN/CTF/MSG-ITER ctf_msg_iter_get_next_message@msg-iter.c:2881 [lttng-live] Cannot handle state: msg-it-addr=0x562d87521a40, state=DSCOPE_TRACE_PACKET_HEADER_BEGIN
PLUGIN/SRC.CTF.LTTNG-LIVE lttng_live_iterator_next_handle_one_active_data_stream@lttng-live.c:821 [lttng-live] CTF message iterator failed to get next message: msg-iter=0x562d87521a40, msg-iter-status=ERROR
PLUGIN/SRC.CTF.LTTNG-LIVE lttng_live_msg_iter_next@lttng-live.c:1499 [lttng-live] Error preparing the next batch of messages: live-iter-status=LTTNG_LIVE_ITERATOR_STATUS_ERROR
LIB/MSG-ITER bt_message_iterator_next@iterator.c:865 Component input port message iterator's "next" method failed: iter-addr=0x562d8751ab70, iter-upstream-comp-name="lttng-live", iter-upstream-comp-log-level=WARNING, iter-upstream-comp-class-type=SOURCE, iter-upstream-comp-class-name="lttng-live", iter-upstream-comp-class-partial-descr="Connect to an LTTng relay daemon", iter-upstream-port-type=OUTPUT, iter-upstream-port-name="out", status=ERROR
PLUGIN/FLT.UTILS.MUXER muxer_upstream_msg_iter_next@muxer.c:446 [muxer] Upstream iterator's next method returned an error: status=ERROR
PLUGIN/FLT.UTILS.MUXER validate_muxer_upstream_msg_iters@muxer.c:989 [muxer] Cannot validate muxer's upstream message iterator wrapper: muxer-msg-iter-addr=0x562d87515280, muxer-upstream-msg-iter-wrap-addr=0x562d8751ca90
PLUGIN/FLT.UTILS.MUXER muxer_msg_iter_next@muxer.c:1417 [muxer] Cannot get next message: comp-addr=0x562d8751a260, muxer-comp-addr=0x562d8751a2e0, muxer-msg-iter-addr=0x562d87515280, msg-iter-addr=0x562d8751aa90, status=ERROR
LIB/MSG-ITER bt_message_iterator_next@iterator.c:865 Component input port message iterator's "next" method failed: iter-addr=0x562d8751aa90, iter-upstream-comp-name="muxer", iter-upstream-comp-log-level=WARNING, iter-upstream-comp-class-type=FILTER, iter-upstream-comp-class-name="muxer", iter-upstream-comp-class-partial-descr="Sort messages from multiple inpu", iter-upstream-port-type=OUTPUT, iter-upstream-port-name="out", status=ERROR
LIB/GRAPH consume_graph_sink@graph.c:462 Component's "consume" method failed: status=ERROR, comp-addr=0x562d8751a3d0, comp-name="pretty", comp-log-level=WARNING, comp-class-type=SINK, comp-class-name="pretty", comp-class-partial-descr="Pretty-print messages (`text` fo", comp-class-is-frozen=0, comp-class-so-handle-addr=0x562d8751a110, comp-class-so-handle-path="/usr/local/lib/babeltrace2/plugins/babeltrace-plugin-text.so", comp-input-port-count=1, comp-output-port-count=0
CLI cmd_run@babeltrace2.c:2529 Graph failed to complete successfully
PLUGIN/SRC.CTF.LTTNG-LIVE/VIEWER lttng_live_session_detach@viewer-connection.c:1211 [lttng-live] Unknown detach return code 0
```
The relevant relayd log:
```
DEBUG2 - Relay get data packet (in viewer_get_packet() at live.c:1770)
PERROR - Failed to seek file system handle of viewer stream 4 to offset 2244861952: Success (in viewer_get_packet() at live.c:1810)
DEBUG1 - Sent 262156 bytes for stream 4 (in viewer_get_packet() at live.c:1852)
```
Cause
=====
The fs_handle_seek function calls lseek on a stream file of ~2.5 GB. The
return value of the lseek call is downcast from off_t (signed 64-bit
on my system) to int. The resulting value is negative and forces an
error at the call sites.
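The truncation is easy to demonstrate with the offset reported in the
relayd log above:
```
#include <stdio.h>
#include <sys/types.h>

int main(void)
{
	/* Offset taken from the PERROR in the relayd log above. */
	const off_t offset = 2244861952;
	/* Storing the 64-bit result in an int truncates it... */
	const int truncated = (int) offset;

	/* Prints: off_t: 2244861952, int: -2050105344 */
	printf("off_t: %lld, int: %d\n", (long long) offset, truncated);
	/* ... and the negative value is mistaken for an lseek failure. */
	return 0;
}
```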
Solution
========
Use off_t as the return type.
Note that current call sites of fs_handle_seek already expect an off_t
return value.
On 32-bit systems without _FILE_OFFSET_BITS=64 defined at compile time
(-D_FILE_OFFSET_BITS=64), the lseek operation can fail with EOVERFLOW
for stream files bigger than 2,147,483,647 bytes. Anybody working on a
32-bit system should be aware of this limitation.
Signed-off-by: Jonathan Rajotte <jonathan.rajotte-julien@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: Ib6c7bb3c9c402bdfbe9b447b1f8f298de6058caa