I am writing an application that processes Windows ETW events.
My application creates a logman session and configures the events to be written to .etl log files.
My application reads these .etl files, processes them, and then deletes them.
Problem: It is possible for this processing step to fail (see the code below).
I need a mechanism to write the failed ETW events back to an .etl file, to be processed later.
How can I write a Microsoft.Diagnostics.Tracing.TraceEvent object (obtained via a DynamicTraceEventParser callback delegate) back to disk, ideally to an .etl file?
The solutions that come to mind:
Write the failed events back to an .etl file -- this is the desired solution
Simply retain the entire event file whenever any individual event fails, and let the successful records be processed twice. -- Unfortunately our processing operation is not idempotent, so this will not work (we can't allow records to be processed twice).
Write the events back to the ETW event store -- I am open to this but would prefer option 1; a sketch of this follows below.
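For option 3, this is roughly what I have in mind (a sketch only; the RetryEventSource provider is hypothetical, and a separate logman/TraceEvent session subscribed to it would have to capture the re-emitted events into a new .etl file):

using System.Collections.Generic;
using System.Diagnostics.Tracing;

// Hypothetical "retry" provider: carries the failed event's identity and payload.
[EventSource(Name = "MyCompany-Retry")]
sealed class RetryEventSource : EventSource
{
    public static readonly RetryEventSource Log = new RetryEventSource();

    [Event(1)]
    public void FailedEvent(string providerName, string eventName, string payloadJson)
        => WriteEvent(1, providerName, eventName, payloadJson);
}

// Inside the parser callback, in place of the TODO below:
var payload = new Dictionary<string, object>();
for (int i = 0; i < traceEvent.PayloadNames.Length; i++)
{
    payload[traceEvent.PayloadNames[i]] = traceEvent.PayloadValue(i);
}
RetryEventSource.Log.FailedEvent(
    traceEvent.ProviderName,
    traceEvent.EventName,
    System.Text.Json.JsonSerializer.Serialize(payload));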
The (simplified) code:
// For each .etl file
foreach (string fullFileName in eventFileNames)
{
    // Load the .etl file
    using (var source = new ETWTraceEventSource(fullFileName))
    {
        var parser = new DynamicTraceEventParser(source);
        parser.All += delegate (TraceEvent traceEvent)
        {
            bool processingSucceeded = ProcessEvent(traceEvent); // Processing code that can fail
            if (!processingSucceeded)
            {
                // TODO: figure out a way to write this event back to disk (.etl file) to be processed later
            }
        };
        source.Process();
    }
}
I am adding jobs to a batch in a loop that is fed from a (3rd-party) API. Sometimes it loads all the data, and sometimes fetching the data takes time and the batch closes before all of it is fetched.
do {
    $page++;
    $response = $api->getProducts(['page' => $page])->json();
    foreach ($response['products'] as $product) {
        $batch->add(new \App\Jobs\ProcessJob($product));
    }
    $total_pages = $response['params']['total_pages'];
} while ($page < $total_pages);
It seems that in some cases \App\Jobs\ProcessJob runs faster than the feeding class, and the batch closes before the feeder adds all the needed data. How can I keep the batch open until the feeder finishes feeding the last page?
After some sifting around, I found out the batch actually does not close itself. The retry_after setting in my queue config was too short, and the job running the batch was timed out by that setting. Now it does not time out and runs to the end.
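For anyone hitting the same thing, the knob lives in config/queue.php on the connection in use (a sketch; the values are illustrative, and the key point is that retry_after must comfortably exceed the feeder job's total runtime):

// config/queue.php (excerpt; 'redis' connection shown as an example)
'redis' => [
    'driver' => 'redis',
    'connection' => 'default',
    'queue' => env('REDIS_QUEUE', 'default'),
    'retry_after' => 3600, // raise well above the batch feeder's runtime
    'block_for' => null,
],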
This is a design-related question. My application receives events from 2 different sources it is registered with, and it should handle events from these 2 sources in parallel. It already handles events from one source using a buffered channel, where events are queued up and processed one after another. Now it needs to handle events from a second source, and I cannot reuse the same channel, because the application may have to handle events from both sources in parallel. I am thinking of adding another buffered channel for the second source, but I am concerned about the same resource being used to process two events in parallel: even with channels, we still need to synchronize while processing these events.
Could you please suggest a better way, any patterns I can use, or a design to handle this situation?
This is the code I have now to handle events from one source:
for event := range thisObj.channel {
    log.Printf("Found a new event '%s' to process at the state %s", event, thisObj.currentState)
    stateins := thisObj.statesMap[thisObj.currentState]
    // This runs in a separate goroutine, hence acquire the lock before calling a state to process the event.
    thisObj.mu.Lock()
    stateins.ProcessState(event.EventType, event.EventData)
    thisObj.mu.Unlock()
}
Here thisObj.channel is created at startup and events are added to it in a separate method. Currently this method reads events from the channel and processes them.
You can use the for-select pattern to read events from one channel per source and interleave their handling in a single goroutine (handleA and handleB are placeholders). A default branch would make the loop spin when no event is ready, so it is best omitted:
var eventsA, eventsB chan Event // one buffered channel per source

for {
    select {
    case ev := <-eventsA:
        handleA(ev) // event from source A
    case ev := <-eventsB:
        handleB(ev) // event from source B
    }
}
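If the two sources must make progress truly in parallel rather than interleaved, a minimal runnable sketch (all names hypothetical) is one goroutine per source channel, with the shared state guarded by a mutex, matching the locking already done in the question's loop:

package main

import (
    "log"
    "sync"
)

type Event struct {
    EventType string
    EventData string
}

// processor owns the shared state; the mutex serializes access to it while
// still letting the two source goroutines run concurrently.
type processor struct {
    mu sync.Mutex
}

func (p *processor) handle(source string, ev Event) {
    p.mu.Lock()
    defer p.mu.Unlock()
    log.Printf("[source %s] processing %s", source, ev.EventType)
    // ... state-machine transition guarded by the lock ...
}

func main() {
    sourceA := make(chan Event, 16)
    sourceB := make(chan Event, 16)
    p := &processor{}

    var wg sync.WaitGroup
    for name, ch := range map[string]chan Event{"A": sourceA, "B": sourceB} {
        wg.Add(1)
        go func(name string, ch chan Event) {
            defer wg.Done()
            for ev := range ch {
                p.handle(name, ev)
            }
        }(name, ch)
    }

    sourceA <- Event{"start", "payload"}
    sourceB <- Event{"stop", "payload"}
    close(sourceA)
    close(sourceB)
    wg.Wait()
}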
I am new to CAPL and trying to read DTCs periodically using a CAPL script and log them to a .blf file so they can be analyzed later.
After some research I decided to store all read and identified DTCs in a system variable (which I defined as an integer array dtcArr with a fixed size of 500) as a way to output the read DTCs, since system variables are also logged when logging is started and can be viewed from the logs later. Simply using the write command to output them to a file doesn't help much, since that can't be shown in CANalyzer/CANoe during later analysis, if I got that right. Basically I'm filling up dtcArr with all read DTC values in the order I read them.
It seems that using associative arrays for system variables is not possible (e.g. using the DTC name text as a key). Is there a better way to do this?
Easy to achieve. You need a script in the output loop which makes DTC read requests to the target module. It must also handle any continuation frames that need to be sent:
variables
{
  msTimer can1_ms_timer;
  long ms_interval = 1000; /* Request rate */
  message CAN1.* can1_dtc_req_frame = {id=???, dir=tx, byte(0)=0x??, byte(1)=0x??, byte(2)=0x??, etc.};
}

on key ctrlF1
{
  write("Starting Read DTC Information from module ???");
  setTimer(can1_ms_timer, ms_interval);
}

on timer can1_ms_timer
{
  output(can1_dtc_req_frame);
  setTimer(can1_ms_timer, ms_interval);
}

on message CAN1.ResponseModule???
{
  /* Handle sending continuation frame */
  output(this);
}
Now the module's responses will be stored in the logging file, and you can process them any way you choose. For my setup, I have a second CAPL script which converts trouble codes to my own custom CAN signals so I can plot their status values in the graph view:
on message CAN1.ResponseModule???
{
/* Process trouble code response */
}
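To also get the decoded codes into the dtcArr system variable from the question, something along these lines should work (a sketch; the DTC namespace is an assumption, and on older CANoe versions without element access on system-variable arrays you would fill a local buffer and write it with sysSetVariableIntArray instead):

variables
{
  int dtcCount = 0;
}

/* Append one decoded DTC to the logged system-variable array. */
void storeDtc(long dtc)
{
  if (dtcCount < 500)
  {
    @sysvar::DTC::dtcArr[dtcCount] = dtc;
    dtcCount++;
  }
}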
If an ETL file is being written to by an active ETW session, is it safe to simultaneously consume events from it via OpenTrace/ProcessTrace?
In the absence of documentation I could find, I had assumed that ETL files were not updated atomically, and that it was first necessary to stop a session before calling OpenTrace to read events from it.
However, OpenTrace does appear to succeed even if the session is still active -- I see from Process Monitor's handle view that the ETL files in use by active ETW sessions are opened with a sharing mode of READ|DELETE. Can we infer from this that OpenTrace/ProcessTrace will always return sensible results even for an ETL file used by an active ETW session? Does Windows use locking or some other mechanism to ensure consumers always get a consistent view of the file?
You can't read events live from a .etl file.
But you can read live events from a named session, if you specify that you are in fact doing REALTIME reading:
// Initialize an EVENT_TRACE_LOGFILE to indicate the name of the session we want to read from
EVENT_TRACE_LOGFILE trace;
ZeroMemory(&trace, sizeof(trace));
trace.ProcessTraceMode = PROCESS_TRACE_MODE_REAL_TIME; // we're reading a real-time session
trace.LoggerName = KERNEL_LOGGER_NAME; // i.e. "NT Kernel Logger"
trace.EventCallback = RealtimeEventCallback;

// Open the tracing session
TRACEHANDLE th = OpenTrace(&trace);
if (th == INVALID_PROCESSTRACE_HANDLE)
    ThrowLastWin32Error();

// Begin processing events; blocks until the session stops
DWORD res = ProcessTrace(&th, 1, NULL, NULL);
if (res != ERROR_SUCCESS)
    ThrowLastWin32Error();

CloseTrace(th);
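For reference, the RealtimeEventCallback referenced above would have the classic EventCallback shape (a sketch; with only PROCESS_TRACE_MODE_REAL_TIME set, ProcessTrace delivers EVENT_TRACE structures):

VOID WINAPI RealtimeEventCallback(PEVENT_TRACE pEvent)
{
    // pEvent->Header identifies the event (Guid, Class, timestamp);
    // MofData/MofLength carry the payload. Real handlers usually filter
    // on the provider GUID before touching the data.
}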
There are a few of these well-known logger-name constants defined in EvntProv.h:
KERNEL_LOGGER_NAME = "NT Kernel Logger";
GLOBAL_LOGGER_NAME = "GlobalLogger";
EVENT_LOGGER_NAME = "EventLog";
DIAG_LOGGER_NAME = "DiagLog";
The other way you can start a "named" logging session is with:
xperf -start fooLoggerName -on 55F22359-9BEC-45EC-A742-311A71EEC91D
This starts a session named "fooLoggerName" for provider guid 55F22359-9BEC-45EC-A742-311A71EEC91D.
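When you are done, the session is stopped by name with the matching command (sketch from memory):

xperf -stop fooLoggerName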
Does anyone know how to detect when the user changes the current input source in OSX?
I can call TISCopyCurrentKeyboardInputSource() to find out which input source ID is being used like this:
TISInputSourceRef isource = TISCopyCurrentKeyboardInputSource();
if ( isource == NULL )
{
    cerr << "Couldn't get the current input source.\n";
    return -1;
}
CFStringRef id = (CFStringRef)TISGetInputSourceProperty(
    isource,
    kTISPropertyInputSourceID);
CFRelease(isource);
If my input source is "German", then id ends up being "com.apple.keylayout.German", which is mostly what I want. Except:
The result of TISCopyCurrentKeyboardInputSource() doesn't change once my process starts. In particular, I can call TISCopyCurrentKeyboardInputSource() in a loop and switch my input source, but it keeps returning the input source that my process started with.
I'd really like to be notified when the input source changes. Is there any way of doing this? To get a notification or an event of some kind telling me that the input source has been changed?
You can observe the NSTextInputContextKeyboardSelectionDidChangeNotification notification posted by NSTextInputContext to the default Cocoa notification center. Alternatively, you can observe the kTISNotifySelectedKeyboardInputSourceChanged notification delivered via the Core Foundation distributed notification center.
However, any such change starts in a system process external to your app. The system then notifies the frameworks in each app process. The frameworks can only receive such notifications when they are allowed to run the event loop. Likewise, if you're observing the distributed notification yourself, that can only happen when the event loop (or at least the main thread's run loop) is allowed to run.
So that explains why running a loop which repeatedly checks the result of TISCopyCurrentKeyboardInputSource() doesn't work: you're not allowing the frameworks to monitor the channel over which they would be informed of the change. If, rather than a loop, you were to use a repeating timer with a low enough frequency that other stuff has a chance to run, and you returned control to the app's event loop, you would see the result of TISCopyCurrentKeyboardInputSource() changing.
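A minimal sketch of the distributed-notification approach (assuming a plain C command-line tool linking the Carbon framework; in a Cocoa app the NSNotification route is usually more convenient):

#include <Carbon/Carbon.h>

// Called on the main run loop whenever the selected input source changes.
static void InputSourceChanged(CFNotificationCenterRef center, void *observer,
                               CFNotificationName name, const void *object,
                               CFDictionaryRef userInfo)
{
    TISInputSourceRef isource = TISCopyCurrentKeyboardInputSource();
    if (isource != NULL) {
        CFStringRef sourceID = (CFStringRef)TISGetInputSourceProperty(
            isource, kTISPropertyInputSourceID);
        CFShow(sourceID); // e.g. "com.apple.keylayout.German"
        CFRelease(isource);
    }
}

int main(void)
{
    CFNotificationCenterAddObserver(
        CFNotificationCenterGetDistributedCenter(),
        NULL, InputSourceChanged,
        kTISNotifySelectedKeyboardInputSourceChanged,
        NULL, CFNotificationSuspensionBehaviorDeliverImmediately);

    CFRunLoopRun(); // the notification only arrives while the run loop runs
    return 0;
}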