I have a C program which spawns a pthread to act as an interactive terminal: reading lines from stdin and acting upon them. The program acts as a kind of shell, forking off processes; each process so created has its stdin redirected using a call to freopen() before exec is used to load the new executable.
Before the interactive thread is started, all works fine. Once it has started (or, more specifically, whenever it is waiting for input), calls to freopen() hang. Is there some way to avoid this problem?
The solution that works for me can be found in R.'s answer to "Is close/fclose on stdin guaranteed to be correct?"
Basically, the idea is to open the file you want stdin redirected to, dup2() it onto stdin, and then close the file descriptor just opened.
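In C terms the sequence looks roughly like this, done in the forked child before exec (a sketch only: the file name, the redirect_stdin helper and the use of cat are made up, and real code needs proper error reporting). Because no stdio stream is involved, it also avoids the hang described in the question, which presumably comes from freopen() having to lock stdin while the interactive thread holds that lock in a blocking read:

#include <fcntl.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Redirect stdin to the named file with open/dup2/close instead of
 * freopen(), so the stdio FILE for stdin is never touched. */
static int redirect_stdin(const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;
    if (dup2(fd, STDIN_FILENO) < 0) {   /* fd 0 now refers to the file */
        close(fd);
        return -1;
    }
    close(fd);                          /* drop the extra descriptor */
    return 0;
}

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {                     /* child: redirect, then exec */
        if (redirect_stdin("input.txt") == 0)
            execlp("cat", "cat", (char *)NULL);
        _exit(127);
    }
    if (pid > 0)
        waitpid(pid, NULL, 0);          /* parent just waits here */
    return 0;
}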
I'm trying to redirect panic/standard error back to bash terminal with the following go code:
if err := syscall.Dup2(-1, int(os.Stderr.Fd())); err != nil {
    log.Fatalf("Failed to redirect stderr to bash: %v", err)
}
But err gives me a "bad file descriptor" error, probably because of the -1 I used as the first argument. I chose -1 because I found that int(file.(*os.File).Fd()) returns -1 once file.Close() has been called.
As for what I'm trying to do: elsewhere in my program I had called syscall.Dup2(int(file.Fd()), int(os.Stderr.Fd())), which logs stderr to an external file. But I want stderr to point back at the bash terminal on occasion.
https://golang.org/pkg/syscall/ doesn't give a verbose explanation of syscall.Dup2. I started poking around at what file.Fd() returns and how it is used, but didn't fully understand that either.
Can anyone tell me how to redirect stderr back to bash terminal?
You have to save the original stderr somewhere before overwriting it with your first Dup2. (In general, it's a bad idea to use the dup2 feature of closing the target, since dup2 can't report errors from that implicit close.) Then you can Dup2 the saved descriptor back onto int(os.Stderr.Fd()) (a.k.a. 2).
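At the C level (which is what Go's syscall.Dup and syscall.Dup2 wrap), the save-and-restore dance looks roughly like this; app.log is a made-up file name:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int saved_stderr = dup(2);          /* save the original stderr */
    int logfd = open("app.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (saved_stderr < 0 || logfd < 0)
        return 1;

    dup2(logfd, 2);                     /* stderr now goes to app.log */
    close(logfd);
    fprintf(stderr, "this line lands in app.log\n");

    dup2(saved_stderr, 2);              /* restore the original stderr */
    close(saved_stderr);
    fprintf(stderr, "this line is back on the original stderr\n");
    return 0;
}

In Go the same pattern would be saved, err := syscall.Dup(2) before the first syscall.Dup2, and syscall.Dup2(saved, 2) later to point stderr back.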
Can anyone tell me how to redirect stderr back to bash terminal?
In general, that is impossible, because a Unix program can be started (or run) without any terminal. For example, it could be started by a crontab(5) job, or through some at or ssh command, etc. Think also of your program being run with redirections or in a pipeline, or being run on a server (e.g. inside a data center); then it is likely not to have any terminal.
The common practice is for the user of your program to redirect stderr if needed (and probably not to a terminal, but more likely to some file). Your user would use his or her shell for that purpose (e.g. run yourprogram 2> /tmp/errorfile; read the bash documentation about redirections).
Terminals are quite complex stuff. You could read the TTY demystified page. See also pty(7) and termios(3). The usual way to handle terminals (on Unix) is by using the ncurses library (which has been wrapped as goncurses in Go).
Elsewhere in my program, I had called a syscall.Dup2(int(file.Fd()), int(os.Stderr.Fd())), which logs stderr to an external file.
That is really a bad idea. Your user expects his/her stderr to stay the same (and would have redirected stderr in his/her shell if needed). Conventionally, you should not mess with the standard streams in your program (and should leave them as they are).
A file descriptor is a small non-negative integer index (into the file descriptor table of your process). System calls like dup2(2) expect valid file descriptors, and Go's syscall.Dup2 is just a wrapper around dup2(2).
On Linux, you can query the file descriptor table of a process with pid 1234 by looking into the /proc/1234/fd/ directory. See proc(5) for more.
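The same table can be inspected from inside a program; here is a small Linux-only C sketch that reads /proc/self/fd (note that the opendir() call itself adds one descriptor to the listing):

#include <dirent.h>
#include <stdio.h>

int main(void)
{
    DIR *d = opendir("/proc/self/fd");  /* this process's own fd table */
    if (!d)
        return 1;
    struct dirent *e;
    while ((e = readdir(d)) != NULL)
        printf("fd entry: %s\n", e->d_name);  /* ".", "..", "0", "1", "2", ... */
    closedir(d);
    return 0;
}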
If you are absolutely certain that your program is running in a terminal, you might open /dev/tty to get it. See tty(4) for more. However, I don't recommend doing that (because your program is better designed to be runnable outside of any terminal).
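If you do take that route, a minimal C sketch looks like this (in Go you would open /dev/tty and Dup2 it onto descriptor 2 in the same way):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int tty = open("/dev/tty", O_WRONLY);  /* fails if there is no controlling terminal */
    if (tty < 0) {
        perror("open /dev/tty");           /* e.g. started from cron */
        return 1;
    }
    dup2(tty, STDERR_FILENO);              /* point stderr back at the terminal */
    close(tty);
    fprintf(stderr, "stderr is talking to the controlling terminal again\n");
    return 0;
}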
You may want to read some Linux programming book, such as ALP.
For logging purposes, Go provides its log package. See also syslog(3) and the log/syslog package of Go.
PS. I don't know Windows, but I believe it can also start programs without any terminal, e.g. as a background process. So even on Windows I would try to avoid doing that (redirecting stderr to a terminal).
I don't know why, but syscall.Dup2(0, int(os.Stderr.Fd())) sends panics and standard error back to the terminal.
My understanding of the Linux operating system is weak, so I don't understand the significance of the 0 in this context, nor the Linux documentation.
Also, I haven't attempted this approach on a Windows machine, so I'm not sure what will happen there. I hope other people give better answers.
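A plausible explanation, offered here only as an assumption: descriptor 0 is stdin, and when a program is started from an interactive shell, stdin still refers to the terminal, so duplicating descriptor 0 onto descriptor 2 points stderr back at that terminal. In C terms, with an isatty() check first:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    if (!isatty(STDIN_FILENO))              /* fd 0 not a terminal (pipe/file)? */
        return 1;                           /* then this trick cannot work      */
    dup2(STDIN_FILENO, STDERR_FILENO);      /* stderr now shares stdin's terminal */
    fprintf(stderr, "stderr points back at the terminal\n");
    return 0;
}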
I've got a Windows CLI EXE that prints to the console when I run it.
This is not a program I can modify.
I wrapped this in a gradle Exec task, and it clearly is running, but nothing is getting printed to the screen. I had not configured anything special with the output.
I ran the program directly again but used 1> and 2> to redirect stdout and stderr to files.
Because this program takes 3 hours to run, I hit Ctrl-C after a while and opened the redirected files.
None of the usual output was in the files.
Could it be using backspace or some other mechanism to prevent output from capture? The output does not clear on the actual console. Any ideas would be helpful.
I found another program by the same author that does not take as long, so I was able to let it finish.
The program does in fact write to stdout, but it doesn't flush until the very end, which would be 3 hours for the program in the question! I would have thought the lack of flushing would affect the console as well as a redirected stream, but it appears to only affect redirects. This makes sense if you want to have an animated progress spinner or something like that.
Since I can't update the program code, it looks like I'm just stuck with no progress updates.
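For what it's worth, this matches the default behaviour of C stdio (whether this particular EXE uses C stdio is only an assumption): stdout is unbuffered or line buffered when attached to a console, but fully buffered when redirected to a file or pipe, so redirected output only appears when the buffer fills, the program calls fflush, or it exits. A tiny C demonstration:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Run directly: each line appears once per second on the console.
     * Run with "demo > out.txt": nothing reaches out.txt until exit,
     * because stdout is now fully buffered. */
    for (int i = 0; i < 5; i++) {
        printf("tick %d\n", i);
        /* fflush(stdout);   uncomment to make the redirected output "live" */
        sleep(1);
    }
    return 0;
}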
I have to write a process launcher which starts another process and reads its standard error up to a certain status flag, but exits afterwards. The application that is started must keep running. I can successfully redirect stderr to a pipe and read it from the launcher. My concern is what happens after the launcher terminates: the read end of the pipe is then closed, and the application tries writing to a broken pipe. How does one undo the redirection after the child process is started?
What makes the problem even more challenging is that the launcher is cross platform and must be implemented in both POSIX and WinAPI.
Any suggestions on any of the platforms are much appreciated.
In case the parent process (the launcher) exits, you will end up with (1) an orphan process and (2) a broken pipe for stderr. Both of these sound bad...
Some ideas:
Redirect stderr to a file (I imagine you would want to keep all the error messages anyway). The launcher, or any other monitoring process, can read from the file as much as needed, at any time, without getting I/O errors; see the sketch after these ideas.
Detach the program from the launcher. Usually this is done using fork() followed by an execve() in the forked child process.
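A rough POSIX-only sketch of the first idea (the program name app, the file name app.err and the READY flag are all made up, and a real launcher would retry with a delay instead of stopping at the first end-of-file):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    int logfd = open("app.err", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (logfd < 0)
        return 1;

    pid_t pid = fork();
    if (pid == 0) {                     /* child: becomes the application */
        dup2(logfd, STDERR_FILENO);     /* its stderr goes to app.err */
        close(logfd);
        execlp("app", "app", (char *)NULL);
        _exit(127);
    }
    close(logfd);

    /* Launcher: read the file until the status flag shows up, then exit.
     * The application keeps running and keeps writing to app.err, so no
     * pipe is ever broken. */
    FILE *f = fopen("app.err", "r");
    char line[512];
    while (f && fgets(line, sizeof line, f) != NULL) {
        if (strstr(line, "READY"))
            break;
    }
    if (f)
        fclose(f);
    return 0;
}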
I am trying to make a simple Ruby script. However, when I run it, the command-line window opens and closes almost immediately. I had the same problem with a Visual Basic console application, so I'm not sure if this is a problem with Command Prompt.
I am running Windows 8 with Ruby 1.9.3. Any help is appreciated.
This is a common symptom when developing command line applications on Windows, especially when using IDEs.
The correct way to solve the problem is to open Command Prompt or PowerShell manually, navigate to the directory where the program is located, and execute it manually via the command line:
ruby your_program.rb
This is how command line programs were designed to be executed from the start. When you run your code from an IDE, it opens a terminal and tells it to execute your program. However, once your program has finished executing, the terminal has nothing to do anymore and thus closes.
However, if you open the terminal yourself, then you are the one telling it what to do, not the IDE, and so the terminal expects more input from you even after the program has finished. It doesn't close because you haven't told it to close.
You can also use this workaround at the end of your Ruby script:
gets
This will read a line from standard input and discard it. It prevents your program, and thus the terminal, from finishing until you've pressed return.
Similar workarounds exist in other languages such as C and C++, but I don't think they solve the actual problem.
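For instance, the C equivalent of the gets trick is just a getchar() at the end of main:

#include <stdio.h>

int main(void)
{
    /* ... the program's real work goes here ... */
    printf("Press Enter to exit...\n");
    getchar();      /* keep the console window open until Enter is pressed */
    return 0;
}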
However, don't let this discourage you! Feel free to use gets while you are learning; it's a really convenient workaround.
Just be aware that these kinds of hacks aren't supposed to show up in production code.
Are you running it from the command line or as an executable? Try placing a busy loop or a wait for keyboard input at the end so you can see the output. If you run it outside of a command-line window, the window exits as soon as the script completes.
You can use the lsof command to get the file descriptors of all running processes, but what I would like to do is close some of those descriptors without being inside that process. This can be done on Windows, so you can easily unblock some application.
Is there any command or function for that?
I don't know why you are trying to do this, but you should be able to attach to the process using gdb and then call close() on the fd. Example:
In one shell: cat
In another shell:
$ pidof cat
7213
$ gdb -p 7213
...
lots of output
...
(gdb)
Now you tell gdb to execute close(0):
(gdb) p close(0)
$1 = 0
(gdb) c
Continuing.
Program exited with code 01.
(gdb)
In the first shell I get this output:
cat: -: Bad file descriptor
cat: closing standard input: Bad file descriptor
I don't think so, but lsof gives you the PID of the process that has the file open, so what you can do is kill the process entirely, or at least send it a signal to make it exit.
On Windows you can do it with a program, because someone wrote a tool that inserts a device driver into the running kernel for that purpose. By the way, it can be dangerous: after you close a handle that a broken application was using, the application doesn't know that the handle was closed, and when it later opens some other unrelated object it doesn't know that the same handle value might now refer to that unrelated object. You really want to kill the broken application as soon as possible.
On Linux you could surely use the same kind of technique: write a program that inserts a module into the running kernel, then communicate with the module and tell it which descriptors to close. It would be equally dangerous.
I doubt it. File descriptors are process-local: stdout is 1 in every process, yet each still refers to its own stream, of course.
Perhaps more detail about the blocking problem you're trying to solve would be useful.
There is much less need to do this on Unix than on Windows.
On Windows, most programs tend to "lock" (actually deny sharing) the files they open, so they cannot be read/written/deleted by another program.
On Unix, most of the time this does not happen. File locking on Unix is mostly advisory, and will only block other locking attempts, not normal read/write/delete operations. You can even remove the current directory of a process.
About the only situation this comes up in normal usage in Unix is when trying to umount a filesystem (any reference at all to the mounted filesystem can block the umount).