I'm debugging a script in WinDbg with .childdbg 1. (The script runs various test cases of some software in an infinite loop; this way I catch rare crashes.)
I need to avoid attaching to certain child processes (for performance reasons, and because they are third-party and crash often).
If I could specify them by process name, that would solve my problem. If you can propose another debugger that can do what I need, I will be grateful.
NOTE: Configuring the debugger to attach to specific processes via GFlags is not a solution in this specific case.
If you have activated .childdbg 1, you can make use of sxe cpr. With the -c switch, you can execute a command. Something like .if (yourcondition) {.detach} .else {g} could help.
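Put together, that would look something along these lines (yourcondition is kept as a placeholder for whatever check identifies the processes you do not want to debug):
sxe -c ".if (yourcondition) {.detach} .else {g}" cpr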
Perhaps the cpr:ProcessName option is really helpful for you. It supports wildcard filters. I had never used it before; see "Controlling Exceptions and Events" in the WinDbg help.
I used the following .NET program to perform a test:
using System;
using System.Diagnostics;

class Program
{
    static void Main()
    {
        Console.WriteLine("Attach and press Enter");
        Console.ReadLine();
        Process.Start("Notepad.exe");
        Process.Start("calc.exe");
        Process.Start("Notepad.exe");
        Process.Start("calc.exe");
        Console.WriteLine("Started 4 processes");
        Console.ReadLine();
    }
}
I started the program under the debugger and did the following:
0:004> .childdbg 1
Processes created by the current process will be debugged
0:004> sxe -c ".detach;g" cpr:calc
0:004> g
...
77da12fb cc int 3
1:009> |
0 id: 1fe0 create name: DebugChildProcesses.exe
. 1 id: f60 child name: notepad.exe
1:009> g
...
77da12fb cc int 3
2:011> |
0 id: 1fe0 create name: DebugChildProcesses.exe
1 id: f60 child name: notepad.exe
. 2 id: 1d68 child name: notepad.exe
2:011> g
As you can see, the debugger is attached to Notepad only.
Unfortunately, you cannot use multiple sxe cpr:process commands. Whenever you use it again, it will overwrite the previous settings.
In that case, you need to use a generic CPR handler and do the rest inside the command. In my tests, !peb did not work reliably at that point, so I couldn't use it. However, WinDbg has already switched to the new process at that time, so |. gives us the process name along with some other information. To extract the process name only, .foreach /pS 6 (token {|.}) { .echo ${token}} worked for me.
With this, you can build trickier commands like
.foreach /pS 6 (token {|.}) {
.if (0==$scmp("${token}","notepad.exe")) {.echo "It's Notepad!"}
.elsif (0==$scmp("${token}","calc.exe")) {.echo "Do some math!"}
}
(formatted for readability, remove the line breaks)
When you try to combine this with sxe, you run into some nasty string escaping problems. Replace all quotes inside your command by \" to make it work:
sxe -c"
.foreach /pS 6 (token {|.}) {
.if (0==$scmp(\"${token}\",\"notepad.exe\")) {.echo \"It's Notepad!\"}
.elsif (0==$scmp(\"${token}\",\"calc.exe\")) {.echo \"Do some math!\"}
}
" cpr
(formatted for readability, remove the line breaks)
Now you can do whatever you like, e.g. .detach or .kill; just replace the .echo commands in the above example. You can execute several commands by separating them with semicolons (;) as usual.
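For example, a variant that detaches from every calc.exe child as soon as it is created (untested, written on one line with the \" escaping applied) could look like this:
sxe -c ".foreach /pS 6 (token {|.}) {.if (0==$scmp(\"${token}\",\"calc.exe\")) {.detach}};g" cpr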
BTW: if you use sxe cpr, you may also want to turn off the initial process breakpoint with sxd ibp.
Here is the case:
There is this app called "termux" on Android which gives me a terminal on Android, and one of its add-ons exposes Android APIs like sensors, TTS engines, etc.
I wanted to make a script in Ruby using this app, specifically this API, but there is a catch:
The script:
require('json')
JSON.parse(%x'termux-sensor -s "BMI160 Gyro" -n 1')
-s = name (or part of the name) of the sensor
-n = number of times the command will run
returns me:
{
"BMI160 Gyroscope" => {
"values" => [
-0.03...,
0.00...,
1.54...
]
}
}
I didn't copy and paste the exact values, but that's not the point. The point is that this command takes almost a full second to load, but there is a way to "make it faster".
If I use the "-d" argument and drop "-n", I can specify the delay in milliseconds between readings being written to STDOUT. It still takes a full second to load, but once it has loaded, the delay works like a charm.
And since I didn't specify an "-n" count, it never stops, and there is the problem.
How can I retrieve the data continuously in Ruby?
I thought about using another thread so it won't block my program, but how can I tell Ruby to return the last X lines of STDOUT from a command that hasn't finished and never will, given that %x'command' in Ruby waits for the command to return?
If I understood correctly, you need to connect to the stdout of a long-running process.
See if this works for your scenario, using IO.popen:
# by running this program
# and open another terminal
# and start writing some data into data.txt
# you will see it appearing in this program output
# $ date >> data.txt
io_obj = IO.popen('tail -f ./data.txt')
while !io_obj.eof?
puts io_obj.readline
end
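To adapt this to the question, the tail -f command line passed to IO.popen would be replaced with the termux-sensor ... -d invocation from the question; readline should then block only until the next reading arrives.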
I found a built-in module called PTY that saved me. Its spawn method, plus some thread management, helped me keep a variable updated with the command's values each time the command output new bytes.
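For reference, a minimal sketch of that approach (assuming the termux-sensor invocation with -d from the question; the exact shape of the streamed output is not shown above, so treating each line as one reading is an assumption):
require 'pty'

latest = nil  # always holds the most recent line the sensor produced

reader = Thread.new do
  # -d 100 asks termux-sensor to emit a reading about every 100 ms and never exit
  PTY.spawn('termux-sensor -s "BMI160 Gyro" -d 100') do |stdout, stdin, pid|
    stdout.each_line { |line| latest = line }
  end
end

# The main program keeps running and can look at `latest` whenever it needs
# the newest data, for example:
sleep 2
puts latest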
I have a minimal reproducible case where a worker thread is not yielding to the main thread, or IOW, the scheduler is not triggering a context switch to the main thread, or IOOW, the task in the worker thread is blocking the main thread.
I'm on Windows 10.0.18362 and using ActiveState Perl 5.28.
foo.pl:
use strict;
use File::Basename;
use threads;
use Data::Dumper;
# sub yield {
#     sleep(1);
# }

sub wait_for_notepad {
    my @notepads = ();
    while (!scalar(@notepads)) {
        @notepads = grep /notepad\.exe/, qx(tasklist);
        print(Dumper(\@notepads));
        # yield();
    }
}

sub main {
    my $thr = threads->create(sub {
        # yield();
        system("cmd /c \"notepad.exe\"");
    });
    my $script_dir = dirname($0);
    my $output = qx(perl $script_dir\\bar.pl);
    print("$output");
    wait_for_notepad();
    system("taskkill /im notepad.exe /f /t");
    $thr->join();
}

main();
bar.pl (place in the same directory as foo.pl)
use strict;
print("hello from bar.pl\n");
Then run with perl path/to/foo.pl.
The docs mention threads->yield(), but they also mention that it's a no-op on most systems, which seems to be how it behaves for me. Another option I found was to use a sleep() right before the worker thread calls system() (the commented-out parts of the code). Are there any other alternatives that would help me tell the worker thread to yield to the main thread?
Edit:
Sorry for being unclear. The exact behavior I'm seeing is: when the script is run, Notepad opens. I'm expecting bar.pl to print "hello from bar.pl" to the console concurrently, but that does not happen. The main thread sits there blocked for several seconds, with nothing printed to stdout, until I interactively close Notepad. Is this expected behavior? Perhaps there's a better way to achieve parallel/multi-processing in Perl, or I'm just doing this wrong.
Edit:
Also, if I edit the line where I have my $output = qx(perl $script_dir\\bar.pl); to system("perl $script_dir\\bar.pl"), the problem disappears and no blocking occurs.
Edit:
Title may be misleading. Sorry about that; I'm bad at making titles when it's late in the day =P. By "yield", I mean the worker thread getting interrupted to give some time to the main thread. But that's weird too, because if each thread runs on its own core you get parallel processing anyway, so "yielding" makes no sense in this context. Anyway, never mind all that; the title should have been something more like "process deadlock when using the Perl threads API with no synchronization primitives". That's probably more accurate here.
I am trying to debug a C++ program using GDB. I have set 15 breakpoints. Most of the breakpoints are in different files. After the first 5 breakpoints, it became difficult to remember what line of code any given breakpoint refers to.
I struggle quite a bit simply trying to recall what a given breakpoint refers to. I find this quite distracting. I was wondering if there is a way to tell gdb to display code around a certain breakpoint.
Something like this: $(gdb) code 3 shows 30 lines of code around breakpoint 3. Is this possible today? Could you please show me how?
I run gdb in tui mode, and I also keep emacs open to edit my source files.
You can use gdb within emacs.
In emacs, type M-x gdb, after entering the name of the executable, type M-x gdb-many-windows. It brings up an IDE-like interface, with access to debugger, locals, source, input/output, stack frame and breakpoints.
You can find a reference and snapshot here.
I don't think you can do it exactly like this in gdb as such, but it can be scripted with gdb's Python support.
This crude script should help:
import gdb

class Listbreak (gdb.Command):
    """ listbreak n: lists code around breakpoint n """

    def __init__ (self):
        super(Listbreak, self).__init__ ("listbreak", gdb.COMMAND_DATA)

    def invoke (self, arg, from_tty):
        # arg is the raw argument string, e.g. "2"
        number = int(arg)
        printed = 0
        for bp in gdb.breakpoints():
            if bp.number == number:
                printed = 1
                print ("Code around breakpoint " + str(number) + " (" + bp.location + "):")
                gdb.execute("list " + bp.location)
        if printed == 0:
            print ("No such breakpoint")

Listbreak()
Copy this to listbreak.py, source it in gdb (source listbreak.py), then use it like this:
listbreak 2
This example routine generates two Throw::nocatch warning messages in the kernel window. Can they be handled somehow?
The example consists of this code in a file "test.m" created in C:\Temp:
Needs["JLink`"];
$FrontEndLaunchCommand = "Mathematica.exe";
UseFrontEnd[NotebookWrite[CreateDocument[], "Testing"]];
Then these commands pasted and run at the Windows Command Prompt:
PATH = C:\Program Files\Wolfram Research\Mathematica\8.0\;%PATH%
start MathKernel -noprompt -initfile "C:\Temp\test.m"
Addendum
The reason for using UseFrontEnd as opposed to UsingFrontEnd is that an interactive front end may be required to preserve output and messages from notebooks that are usually run interactively. For example, with C:\Temp\test.m modified like so:
Needs["JLink`"];
$FrontEndLaunchCommand="Mathematica.exe";
UseFrontEnd[
nb = NotebookOpen["C:\\Temp\\run.nb"];
SelectionMove[nb, Next, Cell];
SelectionEvaluate[nb];
];
Pause[10];
CloseFrontEnd[];
and a notebook C:\Temp\run.nb created with a single cell containing:
x1 = 0; While[x1 < 1000000,
If[Mod[x1, 100000] == 0,
Print["x1=" <> ToString[x1]]]; x1++];
NotebookSave[EvaluationNotebook[]];
NotebookClose[EvaluationNotebook[]];
this code, launched from a Windows Command Prompt, will run interactively and save its output. This is not possible to achieve using UsingFrontEnd or MathKernel -script "C:\Temp\test.m".
During initialization, the kernel is in a mode which prevents aborts.
Throw/Catch are implemented with Abort, so they do not work during initialization.
A simple example that shows the problem is to put this in your test.m file:
Catch[Throw[test]];
Similarly, functions like TimeConstrained, MemoryConstrained, Break, the Trace family, Abort and those that depend upon it (like certain data paclets) will have problems like this during initialization.
A possible solution to your problem might be to consider the -script option:
math.exe -script test.m
Also, note that in version 8 there is a documented function called UsingFrontEnd, which does what UseFrontEnd did, but is auto-configured, so this:
Needs["JLink`"];
UsingFrontEnd[NotebookWrite[CreateDocument[], "Testing"]];
should be all you need in your test.m file.
See also: Mathematica Scripts
Addendum
One possible way to combine the -script option with UsingFrontEnd is the run.m script
included below. This does require setting up a 'Test' kernel in the kernel configuration options (basically a clone of the 'Local' kernel settings).
The script includes two utility functions, NotebookEvaluatingQ and NotebookPauseForEvaluation, which help the script to wait for the client notebook to finish evaluating before saving it. The upside of this approach is that all the evaluation control code is in the 'run.m' script, so the client notebook does not need to have a NotebookSave[EvaluationNotebook[]] statement at the end.
NotebookPauseForEvaluation[nb_] := Module[{},While[NotebookEvaluatingQ[nb],Pause[.25]]]
NotebookEvaluatingQ[nb_] := Module[{},
    SelectionMove[nb, All, Notebook];
    Or @@ Map["Evaluating" /. # &, Developer`CellInformation[nb]]
]
UsingFrontEnd[
nb = NotebookOpen["c:\\users\\arnoudb\\run.nb"];
SetOptions[nb,Evaluator->"Test"];
SelectionMove[nb,All,Notebook];
SelectionEvaluate[nb];
NotebookPauseForEvaluation[nb];
NotebookSave[nb];
]
I hope this is useful in some way to you. It could use a few more improvements, like resetting the notebook's kernel to its original setting and closing the notebook after saving it, but this code should work for this particular purpose.
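For what it's worth, those two improvements might look roughly like the following (untested; it assumes the notebook's original evaluator was the default "Local" kernel), placed at the end of the UsingFrontEnd block after NotebookSave[nb]:
SetOptions[nb, Evaluator -> "Local"];
NotebookClose[nb];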
On a side note, I tried one other approach, using this:
UsingFrontEnd[ NotebookEvaluate[ "c:\\users\\arnoudb\\run.nb", InsertResults->True ] ]
But this is kicking the kernel terminal session into a dialog mode, which seems like a bug
to me (I'll check into this and get this reported if this is a valid issue).
I have a Windows executable (whoami) which is crashing every so often. It's called from another process to get details about the current user and domain. I'd like to know what parameters are passed when it fails.
Does anyone know of an appropriate way to wrap the process and write its command-line arguments to a log while still calling the process?
Say the command is used like this:
'whoami.exe /all'
I'd like a script to exist in place of whoami.exe (with the same filename) which will write this invocation to a log and then pass the call on to the actual process.
From a batch file:
echo Parameters: %* >> logfile.txt
whoami.exe %*
With the caveat that you can have problems if the parameters contain spaces (and you passed them escaped with "), because the command-line parser basically de-escapes them, and they would need to be re-escaped before being passed on to another executable.
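A slightly fuller sketch (untested; it assumes the original executable has been renamed to whoami_orig.exe so the wrapper batch file can take its place, and that the caller also cares about the exit code):
@echo off
echo Parameters: %* >> logfile.txt
whoami_orig.exe %*
exit /b %ERRORLEVEL%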
You didn't note which programming language. It is not doable from a .bat file if that's what you wanted, but you can do it in any programming language. Example in C:
#include <stdio.h>

int main(int argc, char **argv)
{
    // dump contents of argv to some log file
    int i = 0;
    for (i = 0; i < argc; i++)
        printf("Argument #%d: %s\n", i, argv[i]);
    // run the 'real' program, giving it the rest of the argv vector (1+)
    // for example the spawn, exec or system() functions can do it
    return 0; // or you can do a blocking call and pick up the return value from the program
}
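Filling in the forwarding step that the comments above hint at, a complete wrapper might look roughly like this (untested; whoami_orig.exe and logfile.txt are assumed names for the renamed original executable and the log file, and _spawnvp is the MSVC-specific spawn function):
#include <stdio.h>
#include <process.h>

int main(int argc, char **argv)
{
    /* Log the arguments this wrapper received. */
    FILE *log = fopen("logfile.txt", "a");
    int i;
    if (log != NULL) {
        for (i = 1; i < argc; i++)
            fprintf(log, "Argument #%d: %s\n", i, argv[i]);
        fclose(log);
    }

    /* argv is NULL-terminated, so it can be forwarded as-is after
       swapping in the real program's name. */
    argv[0] = "whoami_orig.exe";
    return (int)_spawnvp(_P_WAIT, "whoami_orig.exe", (const char * const *)argv);
}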
I don't think using a "script" will work, since the intermediate should have a .exe extension for your ploy to work.
I would write a very small command line program to do this; something like the following (written in Delphi/Virtual Pascal so it will result in a Win32 executable, but any compiled language should do):
program PassThrough;

uses
  Dos; // Imports the Exec routine

const
  PassTo = 'Original.exe'; // The program you really want to call

var
  CommandLine: String;
  i: Integer;
  f: Text;

begin
  CommandLine := '';
  for i := 1 to ParamCount do
    CommandLine := CommandLine + ParamStr(i) + ' ';

  Assign(f, 'Passthrough.log');
  Append(f);
  Writeln(f, CommandLine); // Write a line in the log
  Close(f);

  Exec(PassTo, CommandLine); // Run the intended program
end.
Can't you just change the calling program to log the parameters it used to call the process, and the exit code?
This would be way easier than trying to dig into whoami.exe
Look for whoami.exe, BACK IT UP, replace it with your own executable, and do whatever you like with its parameters (maybe save them in a text file).
If you can reproduce the crash, use Process Explorer before crashed process is terminated to see its command line.
http://technet.microsoft.com/en-us/sysinternals/bb896653.aspx