How can I read the stderr of a GUI process that is launched by another process on Windows?

I want to see the text a program prints to stderr.
I can't see it because the program can only be launched by its launcher, so nothing shows up when I run it from the command prompt; the launcher starts the program and the prompt immediately returns for input instead of blocking and displaying the output.
Can you think of any technique, even an ugly hack, that will let me read what's being printed to stderr?
I have a few ideas but I don't know how to carry them out easily:
Read the memory of the process at the right location
Modify the memory of the process to change the stderr handle to one I can read
Hook the API calls that write to stderr
Hook the launcher's CreateProcess call and change the output handles (see the sketch after this list)
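For the last idea, here is a minimal sketch of what the redirected CreateProcess call could look like, assuming you can intercept or replace the launcher's call. target.exe and stderr.log are placeholders, and error handling is minimal:

```
#include <windows.h>

int main()
{
    // Make the log-file handle inheritable so the child can write to it.
    SECURITY_ATTRIBUTES sa = { sizeof(sa) };
    sa.bInheritHandle = TRUE;

    HANDLE hErr = CreateFileW(L"stderr.log", GENERIC_WRITE, FILE_SHARE_READ,
                              &sa, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (hErr == INVALID_HANDLE_VALUE)
        return 1;

    // Hand the child our chosen stderr; stdout and stdin stay as they are.
    STARTUPINFOW si = { sizeof(si) };
    si.dwFlags    = STARTF_USESTDHANDLES;
    si.hStdError  = hErr;
    si.hStdOutput = GetStdHandle(STD_OUTPUT_HANDLE);
    si.hStdInput  = GetStdHandle(STD_INPUT_HANDLE);

    PROCESS_INFORMATION pi = {};
    wchar_t cmd[] = L"target.exe";          // placeholder for the real program
    if (CreateProcessW(NULL, cmd, NULL, NULL,
                       TRUE,                // inherit handles
                       0, NULL, NULL, &si, &pi))
    {
        WaitForSingleObject(pi.hProcess, INFINITE);
        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
    }
    CloseHandle(hErr);
    return 0;
}
```

If you hook the launcher's CreateProcess, the same handle setup goes into the STARTUPINFO that the launcher passes, which is the essence of idea 4.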

Related

Redirect windows console output from another process to file

I have a DLL which acts as a plugin for the TS3 client.
The problem is that the plugin provokes a crash whose cause I cannot find directly in my code.
So my idea is to redirect the output of the background console window (the -console option) to a file, because when the program crashes there is no way to read the console output, as the console disappears immediately.
Is there a way to capture the output of a crashing console application in a file?
So far, when using stdout redirection (ts3client_win64 -console > output.txt), nothing is written to the file. (I assume the data is lost because the file handle is never flushed or closed when the crash occurs?) But I want to keep the console output when the crash happens.
It also has to be said that I cannot just run it from a batch file with a pause statement, because on startup the application opens its own console window (and that is the one whose output I want).
It is the type of crash one would get from failed (safe) string operations in C like strtok_s or strcpy_s.

How to limit the buffer size of a pipe (windows)?

I am trying to control and read the output of a 3rd party console application whose source code I cannot change.
I want to use QProcess for this, but this should not matter, as the issue is the same when just using cmd:
The 3rd party app seems to never call flush().
Therefore, running it directly in cmd.exe works fine (output appears in the cmd window), but when calling e.g.
3rdPartyApp.exe > Output.txt
Output.txt stays empty until 3rdPartyApp.exe terminates or quits.
After 3rdPartyApp.exe quits or is terminated, all of its stdout can be found in Output.txt.
Question:
What can I do to create an environment where the buffering of the pipe is limited, like when running the app directly in cmd.exe, which seems to limit the buffering to one line?
What you can do is create your own console-type application that runs the 3rd party process and "handles" its buffering for it.
Then, instead of redirecting the output of the 3rd party application, you simply redirect the output of the proxy console app.
How to do it? You can read about it here: https://www.codeproject.com/Articles/16163/Real-Time-Console-Output-Redirection
The author provides an explanation and a ready-to-use console app called RTConsole.exe.
I had a hard time getting unbuffered output from a 3rd party Python executable spawned from my .NET app, and RTConsole.exe saved me.
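To give an idea of how such a proxy can work, here is a rough sketch of the screen-buffer-polling approach. It is not the actual RTConsole source: 3rdPartyApp.exe is a placeholder, the child is assumed to pick up the proxy's console for its standard output, polling starts from row 0, and scrolling of the screen buffer is not handled.

```
#include <windows.h>
#include <stdio.h>
#include <string>

int main()
{
    // Handle to this process's console screen buffer; this works even when
    // the proxy's own stdout has been redirected to a file or pipe.
    HANDLE hCon = CreateFileW(L"CONOUT$", GENERIC_READ | GENERIC_WRITE,
                              FILE_SHARE_READ | FILE_SHARE_WRITE,
                              NULL, OPEN_EXISTING, 0, NULL);
    if (hCon == INVALID_HANDLE_VALUE)
        return 1;

    // Spawn the child sharing our console, so its stdout is a real console
    // and the CRT does not switch to full buffering.
    STARTUPINFOW si = { sizeof(si) };
    PROCESS_INFORMATION pi = {};
    wchar_t cmd[] = L"3rdPartyApp.exe";     // placeholder child command line
    if (!CreateProcessW(NULL, cmd, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi))
        return 1;

    SHORT nextRow = 0;
    for (;;)
    {
        DWORD waited = WaitForSingleObject(pi.hProcess, 200);

        CONSOLE_SCREEN_BUFFER_INFO csbi;
        if (GetConsoleScreenBufferInfo(hCon, &csbi))
        {
            // Copy every row the child has finished writing since last poll.
            for (; nextRow < csbi.dwCursorPosition.Y; ++nextRow)
            {
                std::wstring row(csbi.dwSize.X, L' ');
                COORD pos = { 0, nextRow };
                DWORD nRead = 0;
                ReadConsoleOutputCharacterW(hCon, &row[0],
                                            (DWORD)csbi.dwSize.X, pos, &nRead);
                while (!row.empty() && row.back() == L' ')
                    row.pop_back();          // trim the padded tail of the row
                fwprintf(stdout, L"%ls\n", row.c_str());
                fflush(stdout);              // forward immediately
            }
        }

        if (waited == WAIT_OBJECT_0)
            break;                           // child has exited
    }
    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    CloseHandle(hCon);
    return 0;
}
```

You would then run something like Proxy.exe > Output.txt: the child still sees a console, so it keeps printing line by line, while the proxy copies each finished row into the redirected file as it appears.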

Windows: Can the return value of a process running C++ code be accessed if you don't run from command line?

I know that you can access the return value of the process using the command line or by having one process create and run the other. However, if I just make an *.exe and double click it, does the return value go anywhere that I can access? If so, where? Could I change any settings so that, if my process returns EXIT_FAILURE, Windows will handle things differently than if it returns EXIT_SUCCESS?
No, I don't think anything retains the exit value of a process started in that way. When you double click on a shortcut or executable, Explorer creates the process and then immediately closes the handles because it no longer cares what happens.
You could write a program that calls OpenProcess on the process of interest while it's running. (It would have to have a way to discover the process ID before the process exits.) OpenProcess will give you a handle to the process. The program could then wait on that handle. When the process exits, the program could use the handle to retrieve the status code and do whatever it is you want it to do.
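A minimal sketch of such a watcher, assuming you already know the process ID and pass it on the command line; the access rights shown are the ones needed to wait on the process and read its exit code:

```
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char** argv)
{
    if (argc < 2)
    {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }
    DWORD pid = (DWORD)strtoul(argv[1], NULL, 10);

    // SYNCHRONIZE lets us wait on the process handle;
    // PROCESS_QUERY_LIMITED_INFORMATION lets us read its exit code.
    HANDLE hProc = OpenProcess(SYNCHRONIZE | PROCESS_QUERY_LIMITED_INFORMATION,
                               FALSE, pid);
    if (!hProc)
    {
        fprintf(stderr, "OpenProcess failed: %lu\n", GetLastError());
        return 1;
    }

    WaitForSingleObject(hProc, INFINITE);    // block until the process exits

    DWORD exitCode = 0;
    GetExitCodeProcess(hProc, &exitCode);
    CloseHandle(hProc);

    printf("process %lu exited with code %lu\n", pid, exitCode);
    // React here however you like, e.g. differently for EXIT_FAILURE
    // than for EXIT_SUCCESS.
    return (exitCode == 0) ? 0 : 2;
}
```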

Run script in background?

Simple question: Is there a way to run a script in the background without a terminal running?
More detail and background: I had an app that read another app's .log file and pulled information from it, then provided information and statistics based on what was in the log.
An update to that app changed the way the .log file is written; it now deletes information and duplicates the log in a manner that I have been unable to predict.
The app that was designed to interface with the log was not coded to check for such changes, so when it attempts to gather information after the log change, it stops working.
A "hack" has been devised: run a tail -f, then hex-edit the app to point at the new file.
(The "hack" works.)
I would like to run the tail in the background so that the user doesn't interrupt it and break it.
Sorry for the (possibly) longer-than-needed description, but I figured a more detailed question would get me a more precise answer.
Thanks in advance!
~¥oseph
The answer depends on whether you need to be able to reconnect to the process after exiting the shell. If the process is non-interactive and can simply be left alone, then "nohup program &" should do the trick. But that won't let you continue to interact with the program after you've closed the shell.
If it's an interactive program, then your best bet is to use screen or one of the other terminal multiplexers. You start "screen", which gives you a new shell; in this shell you start whatever program you want in the usual way, say "nano myfile.txt".
When you want to close the shell but leave the program running, you press C-a d ('detach') to detach from screen. The program keeps running in the background, and will keep running even if you log out.
When you later want to reconnect to screen, you open a new shell and type "screen -r" (reconnect); this leaves you right where you were.
Screen also lets you run several different shells in a single terminal window and is a neat tool overall. Check it out.

Can I Force MATLAB to quit after user presses Control-C?

I'm running MATLAB (command line version) from a shell script, and I'd like it to preserve shell behavior where if you press Ctrl-C it exits. But instead it wants to keep control of the terminal and I (or my poor users after me) have to type quit(1) to make it quit and tell the shell it failed.
You can't intercept Ctrl-C with a try/catch block... any other ideas? Anything I could do from the shell side to intercept the keystrokes before they get to MATLAB?
onCleanup seems like an option, but then I'd have to make the whole script thing into a function (it's already a dynamically generated try/catch block thing in a Makefile). But if that's the only thing that will work, then I'll do it...
Use onCleanup:
I wanted to do the same thing, and after reading this thread I used onCleanup successfully. My problem was that I had a server in MATLAB that, after pressing Ctrl-C, would keep listening on the port it was started on, so on the second run I would get a bind error.
You can try:
stty quit ^C
but I have no MATLAB at hand to test it.
