Tracing points in /sys/kernel/debug/tracing/events/nvme - debugging

What do the events in /sys/kernel/debug/tracing/events/nvme mean? There are four events in this directory:
nvme_async_event
nvme_complete_rq
nvme_setup_cmd
nvme_sq
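These are the NVMe driver's ftrace tracepoints; like any other event under /sys/kernel/debug/tracing/events, each can be enabled by writing 1 to its enable file and observed via trace_pipe. A minimal Java sketch, assuming debugfs is mounted at the usual path and the process runs as root (nvme_setup_cmd is just the example picked here):
    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class NvmeTraceTail {
        public static void main(String[] args) throws Exception {
            String base = "/sys/kernel/debug/tracing";
            // Enable one of the four tracepoints (requires root and a mounted debugfs)
            Files.write(Paths.get(base + "/events/nvme/nvme_setup_cmd/enable"),
                        "1".getBytes());
            // Stream trace records as NVMe commands are set up
            try (BufferedReader r = new BufferedReader(
                    new FileReader(base + "/trace_pipe"))) {
                String line;
                while ((line = r.readLine()) != null) {
                    System.out.println(line);
                }
            }
        }
    }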

Related

AnyLogic population cannot enter process -> location of agents missing

I created a model in which transporters should arrive at the same times as in a dataset I collected in real life. But when I run my model, the following error occurs:
Exception during discrete event execution:
root.<population>[0]:
This agent isn't located in any space
My steps until now:
Create a population of agents of my agent type and fill them with the values from my data.
Create an event which checks every minute, for every object of my population, whether its arrival time equals the model's time.
Write a function that enters them into my process and puts them on the road to my factory.
Therefore every transporter already exists before being entered into my process, so that the event can check the condition.
My problem:
When the condition is true and the object should enter my process, an error occurs:
Exception during discrete event execution:
root.<population>[0]:
This agent isn't located in any space
Other times when I run the model, this error occurs:
root.mplkws[-1]:
This agent is already defined as agent living in space 'Discrete 2D' and can't have behaviour for space 'Continuous'
I don't understand why they don't have their initial space already. Everything was created in Main, and I don't know how and where to change the location of the population's agents.
I tried to set the space on the Enter block with agent.setSpace(getSpace()), but nothing changed.
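For illustration, a hypothetical sketch of what the cyclic event's action could look like in AnyLogic's Java. The names transporters, arrivalTime, injected, factoryEntrance and enterProcess are all assumptions, not names from the model:
    // Action of a cyclic event in Main (recurrence: 1 minute).
    // All identifiers here are hypothetical placeholders.
    for (Transporter t : transporters) {
        if (!t.injected && t.arrivalTime <= time()) {
            t.injected = true;              // guard against injecting the same agent twice
            t.setLocation(factoryEntrance); // place the agent in Main's space first
            enterProcess.take(t);           // hand the agent over to the flowchart
        }
    }
Giving the agent a location before calling take() is one candidate fix for the "isn't located in any space" error; the Discrete 2D vs. Continuous error suggests the agent type's space settings may also need to match the space configured in Main.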

How to detect sender and destination of a notification in dbus-monitor?

My goal is to filter notifications coming from different applications (mainly from different browser windows).
I found that with the help of dbus-monitor I can write a small script that filters the notification messages I am interested in.
The filter script is working well, but I have a small problem:
I am starting with the
dbus-monitor "interface='org.freedesktop.Notifications', destination=':1.40'"
command. I had to add "destination=':1.40'" because on Ubuntu 20.04 I always got the same notification twice.
The following output of
dbus-monitor --profile "interface='org.freedesktop.Notifications'"
demonstrates the reason:
type timestamp serial sender destination path interface member
# in_reply_to
mc 1612194356.476927 7 :1.227 :1.56 /org/freedesktop/Notifications org.freedesktop.Notifications Notify
mc 1612194356.483161 188 :1.56 :1.40 /org/freedesktop/Notifications org.freedesktop.Notifications Notify
As you can see, the sender :1.227 sends to :1.56 first, then :1.56 becomes the sender to the destination :1.40. (A simple notify-send hello test message was sent.)
My script works that way, but every time the system boots up I have to check the destination number and modify my script accordingly to get it working.
I have two questions:
1. How can I discover the destination string automatically? (:1.40 in the above example)
2. How can I prevent the system from sending the same message twice? (If this question were answered, question 1 would become moot.)
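Regarding question 1, one building block that might help: the bus daemon itself can resolve a well-known name to the unique name that currently owns it, via org.freedesktop.DBus.GetNameOwner. In the trace above that would return the first-hop destination (:1.56), so it may or may not be the exact value your filter needs, but it at least removes one hard-coded unique name. A Java sketch that shells out to dbus-send; the output parsing is an assumption about --print-reply's format:
    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    public class NotificationOwner {
        public static void main(String[] args) throws Exception {
            // Ask the session bus daemon which unique name currently owns
            // org.freedesktop.Notifications
            Process p = new ProcessBuilder(
                    "dbus-send", "--session", "--print-reply",
                    "--dest=org.freedesktop.DBus",
                    "/org/freedesktop/DBus",
                    "org.freedesktop.DBus.GetNameOwner",
                    "string:org.freedesktop.Notifications").start();
            try (BufferedReader r = new BufferedReader(
                    new InputStreamReader(p.getInputStream()))) {
                String line;
                while ((line = r.readLine()) != null) {
                    line = line.trim();
                    // --print-reply renders the result as: string ":1.40"
                    if (line.startsWith("string \"")) {
                        System.out.println(line.substring(8, line.length() - 1));
                    }
                }
            }
        }
    }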

Consul: When do events get removed from the event list?

In Consul's documentation, it states:
This endpoint returns the most recent events known by the agent.
What exactly does "most recent" mean? The 100 most recent events? 1000? Events fired in the last 7 days?
Is there a way for me to configure this?
My concern is that this event list could grow infinitely large if older events are not removed within a reasonable amount of time (which can vary across different applications).
After some digging into the source code of Consul, I found out it is at most 256:
https://github.com/hashicorp/consul/blob/94835a2715892f48ffa9f81a9a32808d544b1ca5/agent/agent.go#L221
eventBuf: make([]*UserEvent, 256),
Below you can see the rotation:
https://github.com/hashicorp/consul/blob/94835a2715892f48ffa9f81a9a32808d544b1ca5/agent/user_event.go#L229
a.eventBuf[idx] = msg
a.eventIndex = (idx + 1) % len(a.eventBuf)
The code below shows that the data is pulled from that same buffer:
https://github.com/hashicorp/consul/blob/94835a2715892f48ffa9f81a9a32808d544b1ca5/agent/user_event.go#L235
func (a *Agent) UserEvents() []*UserEvent {
So you can safely assume this will be at most 256 events.
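Not Consul's actual code, but a minimal Java sketch of the same fixed-size ring buffer, to illustrate why at most 256 events survive and older ones get overwritten:
    public class EventRing {
        private final String[] buf = new String[256]; // same fixed capacity as Consul's eventBuf
        private int index;                            // next slot to overwrite

        // Mirrors: a.eventBuf[idx] = msg; a.eventIndex = (idx + 1) % len(a.eventBuf)
        public void add(String event) {
            buf[index] = event;
            index = (index + 1) % buf.length; // wraps around, overwriting the oldest entry
        }

        public static void main(String[] args) {
            EventRing ring = new EventRing();
            for (int i = 0; i < 300; i++) {
                ring.add("event-" + i);
            }
            // events 0..43 have been overwritten; only the latest 256 remain
            System.out.println(ring.buf[ring.index]); // oldest surviving event: event-44
        }
    }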

Get current event number from prooph event store

I am trying to update a projection from the event store. The following line will load all events:
$events = $this->eventStore->load(new StreamName('mystream'));
Currently I am trying to load only the unhandled events by passing the fromNumber parameter:
$events = $this->eventStore->load(new StreamName('mystream'), 10);
This will load all events, e.g. from 15 to 40. But I found no way to figure out the current/highest "no" of the results, which I need so that next time I can load only from that entry onwards.
If the database is truncated (with restarted sequences), this is not a real problem, because I know the events will start at 1. But if the primary key starts with a number higher than 1, I cannot figure out which event has which number in the event store.
When you are using pdo-event-store, you have a key _position in the event metadata after loading, so your read model can track which position was the last one you were working on. Other than that, if you are working with prooph's event-store projections, you don't need to take care of that at all: the projector tracks the current event position for all needed streams internally, and you just need to provide callbacks for each event where you need to do something.
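prooph itself is PHP, but the position-tracking pattern the answer describes is language-agnostic. A hypothetical Java sketch of a read model that checkpoints the last handled position; the _position key mirrors the metadata key mentioned above, everything else is made up for illustration:
    import java.util.List;
    import java.util.Map;

    public class PositionTrackingReadModel {
        private long lastPosition; // checkpoint; persist it with the read model in practice

        // Apply a freshly loaded batch of events and advance the checkpoint.
        public void project(List<Map<String, Object>> events) {
            for (Map<String, Object> event : events) {
                handle(event);
                // "_position" mirrors the metadata key pdo-event-store adds on load
                lastPosition = (Long) event.get("_position");
            }
        }

        private void handle(Map<String, Object> event) {
            System.out.println("handled event at position " + event.get("_position"));
        }

        // Value to pass as the fromNumber parameter on the next load.
        public long nextFromNumber() {
            return lastPosition + 1;
        }
    }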

Kafka Streams: Handle Aging of events in a stream on window expiry

I'm currently using Kafka Streams to collate related events within a window. If not all the related events arrive within a window, is there a way in Kafka Streams to get a handle on the expired events? This would help in handling/notifying the downstream application that not all the related events arrived for collation. I appreciate your response.
Below are two examples.
Example-1:
- GroupID: g1
- Events arrival: E1 at 10:00 am, E2 at 10:01 am and E3 at 10:02 am
- Window: session window with an inactivity duration of 5 minutes.
- Result: All the events are collated successfully.
Example-2:
- Events arrival: E1 at 10:00 am, E2 at 10:01 am and E3 never arrives
- Window: session window with an inactivity duration of 5 minutes.
- Result: Trigger an action OR get notified via a listener, for the partial collation of E1 and E2, upon window expiry at 10:06 am.
Windows in Kafka Streams "don't expire": they are kept open to allow the handling of late-arriving data.
Compare: How to send final kafka-streams aggregation result of a time windowed KTable?
It's not possible to register any callback:
- not for the case that "stream time" advances past the "window end time"
- not for the case that a window is finally dropped (i.e., after the retention period has passed)
I have not tried it, but it seems like window final results might do it:
https://kafka.apache.org/24/documentation/streams/developer-guide/dsl-api.html#window-final-results
The idea is to check whether all events have arrived when the window closes and trigger some action if this is not the case.
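To make that concrete, a hypothetical sketch of the window-final-results approach against the Kafka 2.4 API linked above. The topic name, serdes and the toy string collation are assumptions; the point is that suppress(untilWindowCloses(...)) emits each session exactly once, after it has closed, which is the place to detect a partial collation:
    import java.time.Duration;
    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.Consumed;
    import org.apache.kafka.streams.kstream.Materialized;
    import org.apache.kafka.streams.kstream.SessionWindows;
    import org.apache.kafka.streams.kstream.Suppressed;

    public class CollationFinalResults {
        public static void main(String[] args) {
            StreamsBuilder builder = new StreamsBuilder();
            builder.stream("events", Consumed.with(Serdes.String(), Serdes.String()))
                .groupByKey() // key = group id, e.g. "g1"
                // 5-minute inactivity gap; grace(0) closes the window as soon as
                // stream time passes its end
                .windowedBy(SessionWindows.with(Duration.ofMinutes(5)).grace(Duration.ZERO))
                // toy collation: concatenate the events of a session
                .aggregate(
                    () -> "",
                    (groupId, event, agg) -> agg.isEmpty() ? event : agg + "," + event,
                    (groupId, agg1, agg2) -> agg1 + "," + agg2,
                    Materialized.with(Serdes.String(), Serdes.String()))
                // emit each session exactly once, when it closes
                .suppress(Suppressed.untilWindowCloses(Suppressed.BufferConfig.unbounded()))
                .toStream()
                .foreach((windowedGroupId, collated) -> {
                    // the window is closed here: check whether the collation is
                    // complete and notify downstream if it is only partial
                    System.out.println(windowedGroupId.key() + " -> " + collated);
                });

            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "collation-example");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            new KafkaStreams(builder.build(), props).start();
        }
    }
In Example-2 the session would close once stream time passes 10:06 am, so the foreach sees only E1 and E2 and can raise the notification. One caveat: suppression is driven by stream time, so some later record must arrive on the topic to advance it before the final result is emitted.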
