Logman flush buffers to file periodically - ndis

I am troubleshooting an NDIS miniport filter driver that randomly causes a BSOD. I have enabled Driver Verifier for my driver. I am also trying to collect my driver's trace logs with logman, using the following command:
logman create trace myndis -p {MY_GUID} -ct system -f bincirc -max 5000 -o C:\DriverTrace.etl
But the problem is that logman does not write to the file unless the trace is explicitly stopped. When the BSOD occurs, I think the OS kills the tracing session instead of stopping it cleanly, which leaves my output ETL file empty.
I tried -ft ::1 so that the buffers would be flushed to the file every second, but that does not seem to work. I have also tried the -rt flag, but it isn't helpful. How can I get logman to write to the ETL file continuously?
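For reference, the full command I would expect to flush periodically is roughly the following; this is only a sketch with the same placeholder GUID and path as above, and -ft takes [[hh:]mm:]ss, so 0:0:1 should mean a one-second flush timer:
logman create trace myndis -p {MY_GUID} -ct system -f bincirc -max 5000 -ft 0:0:1 -o C:\DriverTrace.etl
logman start myndis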

Related

How can I collect performance data similar to pidstat using LTTng?

By performance data I mean things like CPU, memory, and disk usage for each process. I'm using Ubuntu 14.04 LTS and I installed the PPA version, but following the instructions on how to use lttngtop I couldn't get it to run: it always returns an error like "can't open trace file". Is there another way to collect this data, or could anyone tell me where I went wrong? Thanks.
Which commands did you use to set up your LTTng session? Did you enable the required event contexts? It's not immediately obvious, but lttngtop needs some event contexts that are not enabled by default. From the man page:
LTTngTop requires that the pid, procname, tid and ppid context
information are enabled during tracing.
This means you should issue a command like this before lttng start:
lttng add-context -k -t pid -t procname -t tid -t ppid
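For completeness, a full kernel-tracing session for lttngtop might look roughly like this; the session name is arbitrary and the trace path passed to lttngtop at the end may differ on your system:
lttng create lttngtop-session
lttng enable-event -k -a                               # all kernel events; narrow this down if the trace gets too big
lttng add-context -k -t pid -t procname -t tid -t ppid
lttng start
# ... let the workload you want to observe run for a while ...
lttng stop
lttng destroy
lttngtop ~/lttng-traces/lttngtop-session-*/kernel      # adjust to wherever the session wrote its trace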

Spring transaction hangs for iptables command

As part of error handling for our processes, we tried to disable communication between the process and the database machine's listener port using the following iptables command:
iptables -A INPUT -p tcp --destination-port <database-listener-port> -s <database-host-ip> -j DROP
However, this causes the process to get stuck, with the following log coming from AbstractPlatformTransactionManager::getTransaction:
DEBUG: Creating new transaction with name [<Transaction-Name>]: PROPAGATION_REQUIRED,ISOLATION_DEFAULT; ''
Re-enabling the communication later with 'iptables -F' makes the transaction 'come back to life': the connection is retrieved and it ends successfully.
What concerns us most is that none of the connection timeout configurations seem to have been activated, which is why we see such hangs. None of our connection pool defaults (see below) implies an infinite timeout (we also tried a small value for abandonedConnectionTimeout, but it didn't help, so we went back to what we believe the true default should be in production), and we expected some kind of cancel operation to be performed.
abandonedConnectionTimeout=0
acquireIncrement=5
acquireRetryAttempts=3
checkoutTimeout=5000
idleConnectionTestPeriod=60
inactivityTimeout=1800
inactivityTimeoutforNonUsedConnection=1800
validateConnection=true
Thanks for any assistance on this matter.
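One observation, offered as a note rather than a definitive answer: -j DROP silently discards the packets, so the client's socket read simply blocks without any error from the OS, and the pool settings above govern how connections are handed out rather than how long an in-flight read may block; only a driver-level socket/read timeout (the exact property depends on the JDBC driver) would bound that. If the goal is to simulate a hard failure the driver notices immediately, one option is to reject with a TCP reset instead, for example:
iptables -A INPUT -p tcp --destination-port <database-listener-port> -s <database-host-ip> -j REJECT --reject-with tcp-reset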

How to wait in script until device is connected

I have a Sky wireless sensor node and a script which prints the output from the node.
sudo ./serialdump-linux -b115200 /dev/tmotesky1
If I start this script before my PC detects the node, I get the following error:
/dev/tmotesky1: No such file or directory
But if I wait, for example, 20 seconds, I miss the initial prints (which are important).
Is there a way to detect whether /dev/tmotesky1 exists?
Something like
while [ ! -e /dev/tmotesky1 ] ; do sleep 1; echo 'Waiting...'; done
Thanks in advance!
Your code indicates that you are using Linux, where you can use the hotplugging mechanism.
On generic systems you can write a udev rule (see with udevadm monitor -e what happens when you attach the device) which, for example, starts a program or writes something into a pipe. When systemd is used, you can start a service (see man systemd.device).
On small/embedded systems it is possible to write a custom /sbin/hotplug program (set in /proc/sys/kernel/hotplug) instead of using udev.
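A minimal udev rule along those lines might look like the sketch below; the match keys and the helper script are only examples (the script is hypothetical), so adjust them to what udevadm monitor -e or udevadm info reports for your node:
# /etc/udev/rules.d/99-tmotesky.rules -- example only
# run a hypothetical helper script when the serial device appears
ACTION=="add", SUBSYSTEM=="tty", KERNEL=="ttyUSB*", RUN+="/usr/local/bin/start-serialdump.sh"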

How can the strace logs of an ever-running binary started from rcS be logged?

I am trying to do some profiling on my embedded Linux box, which runs a single piece of software.
I want to profile my software using strace.
The application is the main program and keeps running forever.
How can I run strace and log the output to a file?
In my rcS script I run the application like this:
./my_app
Now, with strace:
strace ./my_app -> I want to log this output to a file, and I should be able to access the file without killing the application. Remember, this application never terminates.
Please help!
Instead of a command to run, use the -p option to strace to specify the process ID of an already-running process you wish to attach to.
Chris is actually right. strace takes the -p option, which enables you to attach to a running process just by specifying the process's PID.
Let's say your 'my_app' process runs with PID 2301 (you can see the PID by logging into your device and using 'ps'). Try 'strace -p 2301' and you will see all system calls for that PID. Note that strace writes its output to stderr, so to capture it in a file use the -o option (or redirect stderr): 'strace -p 2301 -o /tmp/my_app-strace'.
Hope this helps.
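Putting the two answers together, one way to attach and log in a single line might be the following; it assumes pidof is available (e.g. via busybox) and that only one my_app instance is running:
strace -f -o /tmp/my_app-strace -p "$(pidof my_app)"    # -f also follows any threads/children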

JMeter load test thread dump

I am using JMeter for load testing and some of my threads just hang. I want to take a thread dump, but none of the following works from my Linux machine.
First I got the JMeter process id using
jps -l
Then I did
sudo -u <username> jstack <pid>
and got the following message:
15141: Unable to open socket file: target process not responding or HotSpot VM not loaded
The -F option can be used when the target process is not responding
even
kill -3 15141
comes up with nothing
After a lot of googling and trial and error I found the solution.
To take thread dumps, start JMeter from the command line.
Open terminal (A)
$ cd /media/9260C06E60C05A9D/Downloads/jakarta-jmeter-2.4/bin
$ ./jmeter > temp
In another terminal (B)
Get the process id of JMeter
$ jps -l
$ kill -QUIT 21735
Now check the temp file for the thread dump.
In order to use jstack, make sure the target Java process is running as the same user (and group) as the user running jstack.
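For reference, the same steps can be scripted along these lines; the jps match pattern is only an example, so check what jps -l actually prints for your JMeter process:
# assumes JMeter was started from bin/ with "./jmeter > temp"
pid=$(jps -l | awk '/ApacheJMeter/ {print $1}')    # example pattern only
kill -QUIT "$pid"                                  # the JVM appends the thread dump to JMeter's stdout, i.e. the temp file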
