Bash Input and Output Redirects

I have a Java program that I'm running using a Bash script on Mac OS X. I have two files - a FIFO that allows me to pipe commands into the program, and an output log file.
The Bash script consists of the following code:
#!/bin/bash
java -jar file.jar <./run/command-fifo >>./run/server.log 2>&1 &
echo $! >| ./run/server.pid
I honestly can't remember why I used >| in the third line (I just know that it works). In the java line, the < redirects standard input to read from the FIFO, the >> should append standard output to the log file, and 2>&1 should send standard error there as well. The process then runs in the background.
The problem is that nothing is ever written to the server.log file. The command-fifo file is read, but the log is not written. The program IS writing to standard output (if I run it on its own it works fine).
I also tried script as suggested in this question but it didn't work either:
script -q /dev/null java -Xmx4G -Xms4G -jar current.jar --nogui <./run/command-fifo >>./run/server.log 2>&1 &
Anyone have ideas to get this to write to the log properly?
FOLLOWUP: I should explain a bit more of how the software works for this explanation to make sense. There are three parts at work here:
A plist that launchd uses to start the program at boot by calling the launcher script
A launcher script that handles kill signals and waits for the pid of the java process
A start script, called by the launcher script, that launches the program and saves its pid
The script given above is the start script. This launches the java process, echoes its pid to a file, then returns. The launcher script (not given here) then waits for the pid to exit before terminating. If it terminates, launchd automatically relaunches the launcher script.
Launchd has a feature that can set the standard output path for the program it launches. Essentially, it redirects stdout of the launcher script to the given file.
I did this, and lo and behold, it works. By changing the start script line to the following:
java -jar file.jar <>./run/command-fifo &
it allows standard output to be captured by launchd and written to the file. It's a somewhat different solution, but it does work. It's strange, because the launcher script technically has nothing to output since the java process is in the background, but however it manages it, it does in fact work.
Of course, I'd prefer to explicitly redirect the process's standard output into a file (in other scripts there may be cases where there is more than one process and I need to keep their output separate). I'm still going to experiment and try to find a solution.

I think @torek's comment about buffering is probably right on the money. You can force your java process to line-buffer its output using the stdbuf utility:
#!/bin/bash
stdbuf -oL java -jar file.jar <./run/command-fifo >>./run/server.log 2>&1 &
echo $! >| ./run/server.pid
Regarding the >| operator, @torek is also correct: it overrides the noclobber shell option, as described in the Redirection section of the bash manual.
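For illustration, >| only matters when noclobber is set (a quick sketch; the pid file path is just the one from your script):
set -o noclobber                 # make a plain > refuse to overwrite existing files
echo 1234 > ./run/server.pid     # fails with "cannot overwrite existing file" if the file exists
echo 1234 >| ./run/server.pid    # >| bypasses noclobber and overwrites anyway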

Related

Redirect entire bash session to log file

I am familiar with the ability to pipe and redirect the IO of individual processes when running them in bash. However, is there a way to redirect stdio for an entire bash session?
Ideally, I would like to transparently pipe all stdout and stderr of all processes spawned by bash into tee to duplicate into a file the printed output displayed to the user. No matter what processes are run within that bash session, I could then go back later and look over the output.
Even more ideally, this should be the case for simple interactive programs that take options from stdin, but not for heavily interactive programs like vim.
The best I've found so far is: whenever the user opens a new terminal, run the command:
bash --login -i > >(tee ~/bash_$$.log) 2>&1
This will immediately start an interactive child shell in that new shell, and tee all stdout and stderr to a logfile named with the new parent shell's PID (to avoid overwriting).
This works, but vim fails to start with Vim: Warning: Output is not to a terminal. Are there any known solutions, up to and including patching the shell, to do this?
Background: vim is failing because isatty() is returning false when given the file descriptor for stdout; this is a safeguard to prevent uses such as vim >file that generally don't make sense. (Also, there are operating system calls available for interacting with PTYs that are useful to graphical, cursor-oriented programs that aren't available with a simple FIFO; this is why tools like ssh go to the trouble to provide a pseudoterminal during interactive use).
What's important for your purposes is that vim is directly inspecting the file descriptor it's passed as stdout. The shell is not a party to this -- it's literally vim running a standard-C-library call that gets some details about an open file descriptor -- so it's nothing that patching or reconfiguring the shell can fix.
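You can observe the same check from the shell, since test -t asks isatty() about a file descriptor (a quick illustration, not part of your original setup):
[ -t 1 ] && echo "fd 1 is a terminal" || echo "fd 1 is not a terminal"
# run normally in a terminal, this prints "fd 1 is a terminal";
# run with stdout redirected, e.g. `... > >(tee shell.log)` or `... > file`,
# it prints "fd 1 is not a terminal" -- exactly the condition vim is reacting to.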
To avoid this, then, you need to find a different way to redirect your output for logging such that stdout and stderr are still pointed at PTYs.
That said, for your real goal (logging all activity, vs redirecting stdout in-place), what you want is probably script:
if [ -z "$redirection_done" ]; then
    redirection_done=1 exec script shell.log bash --login -i
fi
Using logging support from another tool which simulates a TTY, such as screen or tmux, will likewise suffice. (unbuffer, from the expect toolkit, can be used with similar effect).
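Rough sketches of what those look like (exact flags vary by version, and the command names and log paths here are placeholders to adapt):
screen -L                                        # GNU screen: log the session to ./screenlog.0
tmux pipe-pane -o 'cat >> ~/tmux.log'            # inside tmux: copy the current pane's output to a file
unbuffer long_running_command | tee shell.log    # expect's unbuffer gives the command a pty, so it still sees a terminal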
Back to your literal question... (since while it may not be what you want to know, it is what you asked):
In all POSIX shells, including bash,
exec >wherever
...will immediately redirect stdout for the current shell to wherever. This can be a process substitution in bash, as anywhere else; thus, in an already-running shell, you can execute
exec > >(tee shell.log) 2>&1
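If you want that set up automatically, the same kind of guard used above works near the end of ~/.bashrc (the stdout_logged variable and log path are just illustrative):
# guard so re-sourcing .bashrc doesn't stack tee processes
if [ -z "$stdout_logged" ]; then
    stdout_logged=1
    exec > >(tee -a ~/shell_$$.log) 2>&1
fi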

make nohup ignore ">" and "<"

So, I'm running the moses machine translation system on my server computer. I access the terminal over ssh, and I came across an interesting problem.
The script I'm running uses > to specify an output file, and it looks like this:
~/mosesdecoder/bin/moses -f /home/tin/working/filtered/moses.ini -i /home/tin/working/filtered/input.29242 > final
Now, since it will take some time for the translation to finish (around 10 hours), I want to run it with nohup, but when I do that, even if I put & at the end, I end up with a file named "final" filled with stdout output.
Any idea how to avoid this?
If you're running the commands inside an actual script file, you could get rid of the > inside the script and run nohup ./scriptname.sh.
The script's output would normally go to the terminal, but nohup will redirect it to "nohup.out" in the current directory.
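Roughly (the script name here is just a placeholder):
#!/bin/sh
# scriptname.sh -- same moses invocation, with the "> final" removed:
~/mosesdecoder/bin/moses -f /home/tin/working/filtered/moses.ini -i /home/tin/working/filtered/input.29242

# then start it detached; stdout ends up in ./nohup.out:
nohup ./scriptname.sh &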
Source:
According to the nohup manpage: "If the standard output is a terminal, the standard output is appended to the file nohup.out in the current directory."
Give it a shot :)

log all stderr to file and console

There are plenty of threads here discussing how to do this for scripts or for the cmdline (mostly involving pipes, redirections, tee).
What I didn't find is a solution which can be set up once and then just works globally, without manipulating single scripts or adding something to every command line.
What I want to achieve is something like described in the top answer of
How do I write stderr to a file while using "tee" with a pipe?
Isn't it possible to configure the bash session so that all stderr output is logged to a file, while still writing it to console? Something I could add to .bashrc and thus automatically set up every time I login?
Software: Bash 4.2.24(1)-release (x86_64-pc-linux-gnu), xterm, Ubuntu 12.04
Try this variation on @0xC0000022L's previous solution (put it in your .bash_profile):
exec 2> >( tee log.file > /dev/tty )
A couple of caveats:
The prompt and anything you type at the command line are printed to stderr, and so will be logged in your file.
There could be an issue with the newline that terminates a command not being displayed in your terminal; I observe it on my Linux host, but not on my Mac OS X laptop. Perhaps someone else can explain and/or fix the issue. For example, if I type "echo stdout", I see the following:
$ echo stdoutstdout
$

Terminal emulator implementation - problems with repeated input

I am trying to implement a terminal emulator in Java. It is supposed to be able to host both cmd.exe on Windows and bash on Unix-like systems (I would like to support at least Linux and Mac OS X). The problem I have is that both cmd.exe and bash repeat on their standard output whatever I send to their standard input.
For example, in bash, I type "ls", hit enter, at which point the terminal emulator sends the input line to bash's stdin and flushes the stream. The process then outputs the input line again "ls\n" and then the output of the ls command.
This is a problem, because other programs apart from bash and cmd.exe don't do that. If I run, inside either bash or cmd.exe, the command "python -i", the python interactive shell does not repeat the input in the way bash and cmd.exe do. This means a workaround would have to know which process the output actually came from. I doubt that's what real terminal emulators do.
Running "bash -i" doesn't change this behaviour. As far as I know, cmd.exe doesn't have distinct "interactive" and "noninteractive" modes.
EDIT: I am creating the host process using the ProcessBuilder class. I am reading the stdout and stderr and writing to the stdin of the process using a technique similar to the stream gobbler. I don't set any environment variables before I start the host process. The exact commands I use to start the processes are bash -i for bash and cmd for cmd.exe. I'll try to post a minimal code example as soon as I manage to create one.
On Unix, run stty -echo to disable "local echo" (i.e. the shell repeating everything that you type). This is usually enabled so a user can edit what she types.
In your case, BASH must somehow allocate a pseudo TTY; otherwise, it would not echo every command. set -x would have a similar effect, but then you'd see + ls instead of ls in the output.
With cmd.exe the command @ECHO OFF should achieve the same effect.
Just execute those after the process has been created and it should work.
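For reference, the effect is easy to see in an ordinary terminal (a quick local demo, separate from the Java side; stty echo turns it back on):
stty -echo      # the terminal stops echoing what you type
read line       # type something: nothing appears on screen, but it is still read
stty echo       # restore echoing
echo "$line"    # shows the input was received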

Forking and saving output of a lisp program

I have a lisp program that needs to run for a long, long time. I wanted to make a bash script so that I could just do $./script.sh& on my school's computer and then check the output periodically without having to be personally running the process. All I want to do is call the program "clisp" and have it execute these commands:
(load "ll.l")
(make)
and save all output to a file. How do I make this script?
Look at the nohup command:
From Wikipedia:
"nohup is most often used to run commands in the background as daemons. Output that would normally go to the terminal goes to a file called nohup.out if it has not already been redirected. This command is very helpful when there is a need to run numerous batch jobs which are inter-dependent."
You can launch the script with nohup, and when you log back in, check the progress in the nohup.out file.
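For example (using the script name from your question):
nohup ./script.sh &     # output goes to ./nohup.out unless the script redirects it itself
tail -f nohup.out       # check on progress later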
You just want something like this:
#!/bin/sh
clisp > OUTPUTFILE 2>&1 << EOF
(load "11.1")
(make)
EOF
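Then run it in the background as you described and check OUTPUTFILE whenever you like, e.g.:
chmod +x script.sh
./script.sh &           # or: nohup ./script.sh & to survive logging out
tail -f OUTPUTFILE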
