Pywin32: win32api.SendMessage to a DOS box program not possible? - winapi

Is it possible to use win32api.SendMessage to send characters to a program which seems to be running in some sort of DOS box?
In my Windows Task Manager I see a process called ntvdm.exe (apparently the "Virtual DOS Machine"). It looks like wowexec.exe (= "Windows on Windows") and my target.exe are both "inside" that ntvdm.exe, since they have no PID of their own in the Task Manager. Instead they are shown with an indent below ntvdm.exe.
I have tried to target all possible window handles for my target.exe (from parent = 0 down to every child) via win32api.SendMessage(<mywindowhandle>, win32con.WM_CHAR, 0x41, 0) but the 'A' never arrives in the program. SendMessage works in other programs, such as notepad and notepad++. Only the DOS program is causing me headaches.
Using shell = win32com.client.Dispatch("WScript.Shell") in combination with shell.AppActivate (using the PID of ntvdm.exe) and shell.SendKeys, however, works! Doesn't that send WM_CHAR messages in the background as well?

In order to support a myriad of different application types, Windows NT has a fairly complex architecture. You're apparently assuming WM_CHAR messages are keystrokes. This is very much a Win16-way of thinking. The WM stands for Window Message; it's a keystroke event for applications with a window and a message pump.
Console programs, on the other hand, do not use window message pumps; they have Unix-style standard in and standard out. shell.SendKeys understands the difference.
This also means a console program does not have a window handle. A PID is a process identifier, not a window handle. A process can have 0, 1 or more window handles, so for every window handle there's a (generally non-unique) PID, but not vice versa.
SendKeys works because the shell knows all this.
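To make the point concrete, here is a minimal Win32 C sketch (an illustration, not the poster's Python setup; cmd.exe /k merely stands in for the console-mode target, and whether a particular 16-bit program inside ntvdm.exe consumes its input this way is a separate question). It delivers characters to a console program by writing to its redirected standard input, which is the channel such a program actually reads, instead of posting WM_CHAR to a window.

#include <windows.h>

int main(void)
{
    SECURITY_ATTRIBUTES sa = { sizeof(sa), NULL, TRUE };   /* inheritable handles */
    HANDLE hRead, hWrite;
    if (!CreatePipe(&hRead, &hWrite, &sa, 0))
        return 1;
    SetHandleInformation(hWrite, HANDLE_FLAG_INHERIT, 0);  /* keep our end private */

    STARTUPINFOA si = { sizeof(si) };
    si.dwFlags = STARTF_USESTDHANDLES;
    si.hStdInput = hRead;
    si.hStdOutput = GetStdHandle(STD_OUTPUT_HANDLE);
    si.hStdError = GetStdHandle(STD_ERROR_HANDLE);

    PROCESS_INFORMATION pi;
    char cmd[] = "cmd.exe /k";                             /* stand-in for target.exe */
    if (!CreateProcessA(NULL, cmd, NULL, NULL, TRUE, 0, NULL, NULL, &si, &pi))
        return 1;

    DWORD written;
    WriteFile(hWrite, "A\r\n", 3, &written, NULL);         /* "type" an A plus Enter */
    CloseHandle(hWrite);                                   /* EOF for the child */

    WaitForSingleObject(pi.hProcess, INFINITE);
    CloseHandle(pi.hProcess);
    CloseHandle(pi.hThread);
    CloseHandle(hRead);
    return 0;
}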

Related

kill child process - exec.Command

How do you kill child processes?
I have a long running application starting a new process with "exec.Command":
// ...I am a long running application in the background
// Now I am starting a child process that should be killed together with the parent application.
cmd := exec.Command("sh", "-c", execThis)
// create a new process group
// cmd.SysProcAttr = &syscall.SysProcAttr{Setpgid: true}
Now if I kill <pid of long running application in the background>, it does not kill the child process. Do you know how?
There are quite a few things to be teased apart here.
First, there's what the OS itself does. Then, once we know what the OS is and what it does, there's what the program does.
What the OS does is, obviously, OS-dependent. POSIX-flavored OSes have two kinds of kill though: plain kill and process-group-based kill, or killpg. The killpg variety of this function is the only one that sends a signal to an entire process group; plain kill just sends a signal to a single process.
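In C terms, the distinction is just this (a sketch; SIGTERM stands in for whichever signal you mean to send):

#include <signal.h>
#include <sys/types.h>

void stop_one(pid_t pid)    { kill(pid, SIGTERM); }      /* one process */
void stop_group(pid_t pgid) { killpg(pgid, SIGTERM); }   /* every process in the group */
/* kill(-pgid, SIGTERM) is the traditional spelling of killpg(pgid, SIGTERM). */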
When a program is run from a controlling terminal, keyboard signals (^C, ^Z, etc) get sent to the foreground process group of that control terminal (see the linked page for a reasonably good description of these and note that BSD/macOS has ^T and SIGINFO as well). But if the signals are being sent from some other program, rather than from a controlling terminal, it is up to that program whether to call killpg or kill, and what signal(s) to send.
Some signals cannot be caught. This is the case for SIGKILL and SIGSTOP. These signals should not be sent willy-nilly; they should be reserved as a last resort. Instead, programs that want another program to stop should generally send one of SIGINT, SIGTERM, SIGHUP, or (rarely) SIGQUIT. Go tends to tie SIGQUIT to debugging (the runtime on POSIX systems makes ^\ dump the stacks of the running goroutines), so that one is not a good choice. However, it's not up to the Go program you write here, which can only try to catch the signal; the choice of what to send is up to the sender.
The "Go way" to catch the signal is to use a goroutine and a channel. The signal.Notify function turns an OS-level signal into an event on the channel. What you do not (and cannot) know is whether the signal reached your process through kill or killpg (though if it came from a controlling terminal interaction, the POSIX-y kernel sent it via the equivalent of killpg). If you want to propagate that signal on your own, simply use the notification event to invoke code that makes an OS-level kill call. When using the os/exec package, use cmd.Process.Signal: note that this invokes the POSIX kill, not its killpg, but you would not want to use killpg here since we're assuming a non-process-group signal in the first place (a pgroup-based signal presumably needs no propagation).
There is no fully portable way to send a signal to a POSIX process group (which is not surprising, since this isn't portable to non-POSIX systems). Sadly, it seems there is no direct Unix- or POSIX-specific way to do that in Go either.
On non-POSIX systems, everything is quite different. See the discussion near the front of the os/signal package.

How do debuggers bypass Image File Execution Options when launching their debugee?

I'm doing some poking around in Windows internals for my general edification, and I'm trying to understand the mechanism behind Image File Execution Options. Specifically, I've set a Debugger entry for calc.exe, with "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe" -NoLogo -NoProfile -NoExit -Command "& { start-process -filepath $args[0] -argumentlist $args[1..($args.Length - 1)] -nonewwindow -wait}" as the payload. This results in recursion, with many powershell instances being launched, which makes sense given that I'm intercepting their calls to calc.exe.
That raises the question, though: how do normal debuggers launch a program under test without causing this sort of recursive behavior?
It is in any case a good question about Windows internals, but the reason it has my interest right now is that it has become a practical question for me. Somewhere I do paid work there are three computers, each with a different Windows version and even a different debugger, on which using this IFEO trick results in the debugger debugging itself, apparently trapped in the very same circularity that troubles the OP.
How do debuggers usually avoid this circularity? Well, they themselves don't. Windows avoids it for them.
But let's look first at the circularity. Simple demonstrations are hardly ever helped by PowerShell concoctions, and calc.exe is not what it used to be. Let's instead set the Debugger value for notepad.exe to c:\windows\system32\cmd.exe /k. Windows will interpret this as meaning that attempting to run notepad.exe should ordinarily run c:\windows\system32\cmd.exe /k notepad.exe instead. CMD will interpret this as meaning to run notepad.exe and hang about. But this execution of notepad.exe, too, will be turned into c:\windows\system32\cmd.exe /k notepad.exe, and so on. The Task Manager will soon show you many hundreds of cmd.exe instances. (The good news is that they're all on the one console and can be killed together.)
The OP's question is then why CMD with its /k (or /c) switch for running a child goes circular in a Debugger value while WinDbg, for instance, does not.
In one sense, the answer is down to one bit in an undocumented structure, the PS_CREATE_INFO, that's exchanged between user and kernel modes for the NtCreateUserProcess function. This structure has become fairly well known in some circles, not that they ever seem to say how. I think the structure dates from Windows Vista, but it's not known from Microsoft's public symbol files until Windows 8 and even then not from the kernel but from such things as the Internet Explorer component URLMON.DLL.
Anyway, in the modern form of the PS_CREATE_INFO structure, the 0x04 bit at offset 0x08 (32-bit) or 0x10 (64-bit) controls whether the kernel checks for the Debugger value. Symbol files tell us this bit is known to Microsoft as IFEOSkipDebugger. If this bit is clear and there's a Debugger value, then NtCreateUserProcess fails. Other feedback through the PS_CREATE_INFO structure tells KERNELBASE, for its handling of CreateProcessInternalW, to have its own look at the Debugger value and call NtCreateUserProcess again but for (presumably) some other executable and command line.
When instead the bit is set, the kernel doesn't care about the Debugger value and NtCreateUserProcess can succeed. How the bit ordinarily gets set is by KERNELBASE because the caller is asking not just to create a process but is specifically asking to be the debugger of the new process, i.e., has set either DEBUG_PROCESS or DEBUG_ONLY_THIS_PROCESS in the process creation flags. This is what I mean by saying that debuggers don't do anything themselves to avoid the circularity. Windows does it for them just for their wanting to debug the executable.
One way to look at the Debugger value as an Image File Execution Option for an executable X is that the value's presence means X cannot execute except under a debugger and the value's content may tell how to do that. As hackers have long noticed, and the kernel's programmers will have noticed well before, the content need not specify a debugger and the value can be adapted so that attempts to run X instead run Y. Less noticed is that Y won't be able to run X unless Y debugs X (or disables the Debugger value). Also less noticed is that not all attempts to run X will instead run Y: a debugger's attempt to run X as a debuggee will not be diverted.
TLDR of Geoff's great answer - use DEBUG_PROCESS or DEBUG_ONLY_THIS_PROCESS to bypass the Debugger / Image File Execution Options (IFEO) global flag and avoid the recursion.
In C#, using the excellent Vanara.PInvoke.Kernel32 NuGet:
var startupInfo = new STARTUPINFO();
var creationFlags = Kernel32.CREATE_PROCESS.DEBUG_ONLY_THIS_PROCESS;
CreateProcess(path, null, null, null, false, creationFlags, null, null, startupInfo, out var pi);
DebugActiveProcessStop(pi.dwProcessId);
Note that DebugActiveProcessStop was key for me (I couldn't see a window when opening notepad.exe otherwise), and it makes sense anyway if your program is not really a debugger and you just want the bypass.
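For anyone outside .NET, the same few calls in plain Win32 C look roughly like this (a sketch, with error handling trimmed; asking to debug the child is what sets the IFEOSkipDebugger bit, and detaching right away leaves the child running normally):

#include <windows.h>

BOOL run_bypassing_ifeo(wchar_t *cmdline)       /* cmdline must be writable */
{
    STARTUPINFOW si = { sizeof(si) };
    PROCESS_INFORMATION pi;
    if (!CreateProcessW(NULL, cmdline, NULL, NULL, FALSE,
                        DEBUG_ONLY_THIS_PROCESS, NULL, NULL, &si, &pi))
        return FALSE;
    DebugActiveProcessStop(pi.dwProcessId);     /* detach; the child keeps running */
    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    return TRUE;
}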

fork()/exec() in XWindow application

How do I execute xterm from an X Window program, embed it into my window, and continue execution both while xterm is active and after it is closed?
In my X Windows (Xlib over XCB) application I want to execute xterm -into <handle>, so that my window contains the xterm window. Unfortunately, something is going wrong.
pseudo code:
if (fork() == 0) {
pipe = popen('xterm -Into ' + handle);
while (feof(pipe)) gets(pipe);
exit(0);
}
I tried system() and execvp() as well. Everything is fine until I exit the bash that runs in xterm; then my program exits. I guess the connection to the X server is lost because it is shared between parent and child.
UPDATE: here is what is shown on terminal after program exits (or rather crashes).
XIO: fatal IO error 11 (Resource temporarily unavailable) on X server ":0.0"
after 59 requests (59 known processed) with 1 events remaining.
[xcb] Unknown sequence number while processing queue
[xcb] Most likely this is a multi-threaded client and XInitThreads has not been called
[xcb] Aborting, sorry about that.
y: ../../src/xcb_io.c:274: poll_for_event: Assertion `!xcb_xlib_threads_sequence_lost' failed.
Aborted
One possibility is that you are terminating due to the SIGCHLD signal not being ignored, causing your program to abort:
signal(SIGCHLD, SIG_IGN);
Another is, as you suspect, that something is actively closing the X session. Just closing the socket itself should not matter, but if you are using a library that registers an atexit call, that could cause an issue.
Since, from your snippet, it looks like you don't actually care about the stdout of the xterm, a better way would be to actually close fds 0, 1, and 2. Also, since it looks like you don't need to do anything in the child process after xterm terminates, you can use exec rather than popen to fully replace the child process with the xterm, including any cleanup handlers that were left around. Though I am not sure how pruned your snippet is from what you really want to do, as the call to gets is obviously not what you want.
To make sure the X connection is closed, you can set its close-on-exec flag with the following. (This will work on POSIX systems, where the X connection number is the fd of the server socket.)
fcntl(XConnectionNumber(display), F_SETFD, fcntl(XConnectionNumber(display), F_GETFD) | FD_CLOEXEC);
Also note that popen itself forks in addition to your fork. I think you probably want to do an execvp there, then use waitpid(..., WNOHANG) to check for the child's termination in your main X11 loop if you care to know when it exited, as in the sketch below.
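A sketch of that combination (function names are illustrative; the fd 0-2 and close-on-exec handling discussed above is omitted for brevity):

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

pid_t spawn_xterm(unsigned long window_handle)
{
    char arg[32];
    snprintf(arg, sizeof(arg), "%lu", window_handle);
    pid_t pid = fork();
    if (pid == 0) {
        execlp("xterm", "xterm", "-into", arg, (char *)NULL);
        _exit(127);                     /* only reached if exec failed */
    }
    return pid;                         /* -1 if fork failed */
}

/* Call once per iteration of the main X11 event loop: */
int xterm_has_exited(pid_t pid)
{
    int status;
    return waitpid(pid, &status, WNOHANG) == pid;
}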

Interrupt MATLAB programmatically on Windows

When using MATLAB through the GUI, I can interrupt a computation by pressing Ctrl-C.
Is there a way to do the same programmatically when using MATLAB through the MATLAB Engine C API?
On Unix systems there is a solution: send a SIGINT signal. This will not kill MATLAB. It'll only interrupt the computation. I am looking for a solution that works on Windows.
Clarifications (seeing that the only answerer misunderstood):
I am looking for a way to interrupt any MATLAB calculation, without having control over the MATLAB code that is being run. I'm looking for the programmatic equivalent of pressing Ctrl-C at the MATLAB command window on Windows systems. This is for a Mathematica-MATLAB interface: I need to forward interrupts from Mathematica to MATLAB. As mentioned above, I already have a working implementation on Unix; this question is about how to do it on Windows.
One way would be to make the MATLAB Engine session visible, prior to executing long computations. That way if you want to interrupt execution, you just bring the visible command window into focus and hit Ctrl-C.
This can be done using the engSetVisible function.
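A minimal sketch with the MATLAB Engine C API (assuming you compile and link against the engine library, e.g. libeng; pause(60) stands in for the long computation):

#include "engine.h"     /* MATLAB Engine C API */

int main(void)
{
    Engine *ep = engOpen(NULL);         /* start or connect to a local session */
    if (ep == NULL)
        return 1;
    engSetVisible(ep, 1);               /* show the command window */
    engEvalString(ep, "pause(60)");     /* interrupt with Ctrl-C in that window */
    engClose(ep);
    return 0;
}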
Here is a quick example I tried using MATLAB COM Automation. The process should be similar since MATLAB Engine is implemented using COM on Windows (pipes are used on Unix instead).
The scripting is done in PowerShell:
# create MATLAB automation server
$m = New-Object -ComObject matlab.application
$m | Get-Member
# make the command window visible
$m.Visible = $true
# execute some long computation: pause(10)
$m.Feval('disp', 0,[ref]$null, 'Press Ctrl-C to interrupt...')
$m.Feval('pause', 0,[ref]$null, 10)
# close and cleanup
$m.Quit()
$m = $null
Remove-Variable m
During the pause, you can break it by hitting Ctrl-C in the command window.
There isn't a direct way: all of those routines have to be unwound and their workspaces cleaned up, which might invoke exit handlers, and so on. The closest I can think of is to have your main routine use a try/catch, and then, when you wish to abort, error() with the particular string that the catch is keyed to; when you detect it, bail out cleanly from your main routine.

How can a C/C++ program put itself into background?

What's the best way for a running C or C++ program that's been launched from the command line to put itself into the background, equivalent to if the user had launched from the unix shell with '&' at the end of the command? (But the user didn't.) It's a GUI app and doesn't need any shell I/O, so there's no reason to tie up the shell after launch. But I want a shell command launch to be auto-backgrounded without the '&' (or on Windows).
Ideally, I want a solution that would work on any of Linux, OS X, and Windows. (Or separate solutions that I can select with #ifdef.) It's ok to assume that this should be done right at the beginning of execution, as opposed to somewhere in the middle.
One solution is to have the main program be a script that launches the real binary, carefully putting it into the background. But it seems unsatisfying to need these coupled shell/binary pairs.
Another solution is to immediately launch another executed version (with 'system' or CreateProcess), with the same command line arguments, but putting the child in the background and then having the parent exit. But this seems clunky compared to the process putting itself into background.
Edited after a few answers: Yes, a fork() (or system(), or CreateProcess on Windows) is one way to sort of do this, that I hinted at in my original question. But all of these solutions make a SECOND process that is backgrounded, and then terminate the original process. I was wondering if there was a way to put the EXISTING process into the background. One difference is that if the app was launched from a script that recorded its process id (perhaps for later killing or other purpose), the newly forked or created process will have a different id and so will not be controllable by any launching script, if you see what I'm getting at.
Edit #2:
fork() isn't a good solution for OS X, where the man page for 'fork' says that it's unsafe if certain frameworks or libraries are being used. I tried it, and my app complains loudly at runtime: "The process has forked and you cannot use this CoreFoundation functionality safely. You MUST exec()."
I was intrigued by daemon(), but when I tried it on OS X, it gave the same error message, so I assume that it's just a fancy wrapper for fork() and has the same restrictions.
Excuse the OS X centrism, it just happens to be the system in front of me at the moment. But I am indeed looking for a solution to all three platforms.
My advice: don't do this, at least not under Linux/UNIX.
GUI programs under Linux/UNIX traditionally do not auto-background themselves. While this may occasionally be annoying to newbies, it has a number of advantages:
Makes it easy to capture standard error in case of core dumps / other problems that need debugging.
Makes it easy for a shell script to run the program and wait until it's completed.
Makes it easy for a shell script to run the program in the background and get its process id:
gui-program &
pid=$!
# do something with $pid later, such as check if the program is still running
If your program forks itself, this behavior will break.
"Scriptability" is useful in so many unexpected circumstances, even with GUI programs, that I would hesitate to explicitly break these behaviors.
Windows is another story. AFAIK, Windows programs automatically run in the background--even when invoked from a command shell--unless they explicitly request access to the command window.
On Linux, daemon() is what you're looking for, if I understand you correctly.
The way it's typically done on Unix-like OSes is to fork() at the beginning and exit from the parent. This won't work on Windows, but where fork() exists it is much more elegant than launching another process.
Three things need doing:
fork
setsid
redirect STDIN, STDOUT and STDERR to /dev/null
This applies to POSIX systems (all the ones you mention claim to be POSIX (but Windows stops at the claiming bit))
On UNIX, you need to fork twice in a row and let the parent die.
A process cannot put itself into the background, because it isn't the one in charge of background vs. foreground. That would be the shell, which is waiting for process exit. If you launch a process with an ampersand "&" at the end, then the shell does not wait for process exit.
But the only way the process can escape the shell is to fork off another child and then let its original self exit back to the waiting shell.
From the shell, you can background a process with Control-Z, then type "bg".
Backgrounding a process is a shell function, not an OS function.
If you want an app to start in the background, the typical trick is to write a shell script to launch it that launches it in the background.
#! /bin/sh
/path/to/myGuiApplication &
To follow up on your edited question:
I was wondering if there was a way to put the EXISTING process into the background.
In a Unix-like OS, there really is not a way to do this that I know of. The shell is blocked because it is executing one of the variants of a wait() call, waiting for the child process to exit. There is not a way for the child process to remain running but somehow cause the shell's wait() to return with a "please stop watching me" status. The reason you have the child fork and exit the original is so the shell will return from wait().
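In miniature, this is what the launching shell is doing (a sketch; there is no status the child can hand back from this wait while continuing to run):

#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

void run_in_foreground(char *const argv[])
{
    pid_t pid = fork();
    if (pid == 0) {                     /* child: become the program */
        execvp(argv[0], argv);
        _exit(127);                     /* exec failed */
    }
    int status;
    waitpid(pid, &status, 0);           /* blocks until that very PID exits */
}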
Here is some pseudocode for Linux/UNIX:
initialization_code();
if (failure) exit(1);
if (fork() > 0) exit(0);         /* parent exits; the shell's wait() returns */
setsid();                        /* new session, no controlling TTY */
setup_signal_handlers();
for (fd = 0; fd < NOFILE; fd++) close(fd);
open("/dev/null", O_RDONLY);     /* fd 0: stdin */
open("/dev/null", O_WRONLY);     /* fd 1: stdout */
open("/dev/null", O_WRONLY);     /* fd 2: stderr */
chdir("/");
And congratulations, your program continues as an independent "daemonized" process without a controlling TTY and without any standard input or output.
Now, in Windows you simply build your program as a Win32 application with WinMain() instead of main(), and it runs without a console automatically. If you want to run as a service, you'll have to look that up because I've never written one and I don't really know how they work.
You edited your question, but you may still be missing the point that your question is a syntax error of sorts -- if the process wasn't put in the background to begin with and you want the PID to stay the same, you can't ignore the fact that the program which started the process is waiting on that PID and that is pretty much the definition of being in the foreground.
I think you need to think about why you want to both put something in the background and keep the PID the same. I suggest you probably don't need both of those constraints.
As others mentioned, fork() is how to do it on *nix. You can get fork() on Windows by using the Cygwin libraries, but that will require you to switch to using GCC as your compiler.
In pure Windows world, you'd use CreateProcess (or one of its derivatives CreateProcessAsUser, CreateProcessWithLogonW).
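A hedged sketch of the pure-Windows route (the --no-respawn flag is hypothetical, only there to stop the relaunched copy from relaunching again; real code would rebuild the command line from argv, and a GUI-subsystem binary usually doesn't need any of this):

#include <windows.h>

void relaunch_in_background(void)
{
    STARTUPINFOW si = { sizeof(si) };
    PROCESS_INFORMATION pi;
    wchar_t cmdline[] = L"myapp.exe --no-respawn";
    if (CreateProcessW(NULL, cmdline, NULL, NULL, FALSE,
                       DETACHED_PROCESS, NULL, NULL, &si, &pi)) {
        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
        ExitProcess(0);                 /* original exits; shell gets its prompt back */
    }
}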
The simplest form of backgrounding is:
if (fork() != 0) exit(0);
In Unix, if you want to background and disassociate from the tty completely, you would do the following.
Close all descriptors which may access a tty (usually 0, 1, and 2), then:
if (fork() != 0) exit(0);
setpgid(0, 0); /* Might be necessary to prevent a SIGHUP on shell exit. */
signal(SIGHUP, SIG_IGN); /* Just in case; same as launching the program with nohup. */
fd = open("/dev/tty", O_RDWR);
ioctl(fd, TIOCNOTTY, 0); /* Disassociates from the terminal. */
close(fd);
if (fork() != 0) exit(0); /* Just for good measure. */
That should fully daemonize your program.
The most common way of doing this under Linux is via forking. The same should work on Mac, as for Windows I'm not 100% sure but I believe they have something similar.
Basically what happens is the process splits itself into two processes, and then the original one exits (returning control to the shell or whatever), and the second process continues to run in the background.
I'm not sure about Windows, but on UNIX-like systems, you can fork() then setsid() the forked process to move it into a new process group that is not connected to a terminal.
Under Windows, the closest thing you're going to get to fork() is loading your program as a Windows service, I think.
Here is a link to an intro article on Windows services...
CodeProject: Simple Windows Service Sample
So, as you say, just fork()ing will not do the trick. What you must do is fork() and then re-exec(), as this code sample does:
#include <stdio.h>
#include <unistd.h>
#include <string.h>
#include <CoreFoundation/CoreFoundation.h>

int main(int argc, char **argv)
{
    int i, j;
    for (i = 1; i < argc; i++)
        if (strcmp(argv[i], "--daemon") == 0)
        {
            for (j = i + 1; j < argc; j++)
                argv[j - 1] = argv[j];
            argv[argc - 1] = NULL;
            if (fork()) return 0;   /* parent exits; the child re-execs below */
            execv(argv[0], argv);
            return 0;
        }
    sleep(1);
    CFRunLoopRun();
    CFStringRef hello = CFSTR("Hello, world!");
    printf("str: %s\n", CFStringGetCStringPtr(hello, CFStringGetFastestEncoding(hello)));
    return 0;
}
The loop is to check for a --daemon argument, and if it is present, remove it before re-execing so an infinite loop is avoided.
I don't think this will work if the binary is found via the PATH, because argv[0] is not necessarily a full path, so it will need to be modified.
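One way around that caveat on OS X is to ask the OS for the executable's real path rather than trusting argv[0] (a sketch; on Linux, reading /proc/self/exe serves the same purpose):

#include <mach-o/dyld.h>
#include <stdint.h>

/* Returns 0 on success, -1 if the buffer is too small. */
int get_exe_path(char *buf, uint32_t size)
{
    return _NSGetExecutablePath(buf, &size);
}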
/** Daemonize */
pid_t pid;
pid = fork();   /* the parent makes a little daemon (its child) */
if (pid > 0)
    exit(0);    /* the parent dies */
while (1) {
    printf("Hello, I'm your little daemon %d\n", getpid()); /* the child daemon goes on */
    sleep(1);
}
/** Try 'nohup' on Linux (usage: nohup <command> &). */
In Unix, I have learned to do that using fork().
If you want to put a running process into the background, fork it twice.
I tried the solutions above; only one fork is needed from the parent process. The most important point is that, after the fork, the parent process must die by calling _exit(0), NOT by calling exit(0). When _exit(0) is used, the command prompt immediately returns on the shell. That is the trick.
If you need a script to have the PID of the program, you can still get it after a fork.
When you fork, save the PID of the child in the parent process. When you exit the parent process, print the PID to stdout or stderr so a calling script can capture it (an exit status is truncated to eight bits, so returning the PID from main() will not survive). A calling script can then get the PID of the program, although it requires a certain knowledge of how the program works.
