Run an executable and read logs at the same time - bash

I have a scenario where I run an executable (as the entrypoint) in a Docker container.
The problem is that the executable doesn't write its logs to stdout, but to a file.
I need a way to run that executable in the foreground (so that if it crashes, it crashes the container as well), but pipe logs from a file to stdout at the same time.
Any suggestion on how to do that?

The Linux environment provides a couple of special files that actually relay to other file descriptors. If you set the log file to /dev/stdout or /dev/fd/1, it will actually appear on the main process's stdout.
The Docker Hub nginx image has a neat variation on this. If you look at its Dockerfile it specifies:
RUN ln -sf /dev/stdout /var/log/nginx/access.log
The Nginx application configuration specifies its log file as /var/log/nginx/access.log. If you do nothing, that is a symlink to /dev/stdout, and so access logs appear in the docker logs output. But if you'd prefer to have the logs in files, you can bind-mount a host directory on /var/log/nginx and you'll get access.log on the host as a file instead.
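As a sketch of how you might copy that pattern in your own image - the binary path /usr/local/bin/myapp and the log path /var/log/myapp/app.log below are placeholders, not anything from your question:
FROM debian:bookworm-slim
COPY myapp /usr/local/bin/myapp
# make the app's log file an alias for the container's stdout
RUN mkdir -p /var/log/myapp && ln -sf /dev/stdout /var/log/myapp/app.log
# run the executable in the foreground as PID 1, so a crash stops the container
ENTRYPOINT ["/usr/local/bin/myapp"]
Note this only works if the application is happy to write to a pipe-like file; a program that tries to seek in or rotate its log file may not cope with /dev/stdout.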

G'day areller!
Well, your question is quite generic/abstract (as David Maze mentioned, you didn't say what the exact commands are or how you are running them), but I think I got it.
You will do the following:
# if the command logs to stdout, do:
command &> /var/tmp/command.log &
tail -f /var/tmp/command.log
# if the command logs to a specific file in, say, /var/adm, do:
tail -f /specific/file/directory/command.log
Rather than explain tail -f myself, I'll quote the tail(1p) manual page:
-f  If the input file is a regular file or if the file operand specifies a FIFO, do not terminate after the last line of the input file has been copied, but read and copy further bytes from the input file when they become available. If no file operand is specified and standard input is a pipe, the -f option shall be ignored. If the input file is not a FIFO, pipe, or regular file, it is unspecified whether or not the -f option shall be ignored.
I hope I've helped you.
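For the Docker case in the question, the two pieces can be combined in a small entrypoint wrapper. This is only a sketch - /app/server and /var/log/app.log stand in for your real executable and log path:
#!/bin/bash
# entrypoint.sh (sketch): stream the log file to stdout while the real
# executable stays in the foreground as the container's main process
touch /var/log/app.log        # make sure the file exists before tail starts
tail -F /var/log/app.log &    # follow the log (surviving rotation) in the background
exec /app/server              # replace the shell, so if the app crashes the container exits
The exec is what keeps the executable in the foreground and lets its exit take the container down; the backgrounded tail simply relays the file to docker logs.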

Related

How to pipe output of a command that expects a file argument

I know you can pipe the output of one command to another - for example
ls -la | less
to see the output of ls -la inside less instead of in the terminal.
But if you use a command with a parameter that saves the output to a file
command --save-to-file file.txt
then how do you pipe that to another command?
This will not work:
command --save-to-file | less
because command will complain that you used --save-to-file without an argument (a filename).
If I remember correctly, there was something like a buffer or a temp file in RAM that you could use instead of file.txt, so you could do something like:
command --save-to-file ram-buffer.txt && cat ram-buffer.txt
without even creating a file on disk - is that right?
Why do I need that?
Some commands only print basic output to the terminal; their more useful output can't be printed, only saved to a file. The thing is, I'm not interested in saving that more useful output to any file at all - I just want to print it in the terminal, or pipe it to a chain of other commands that do the filtering etc. and eventually print the processed output.
I would not like to be responsible for creating a tmp file and then deleting it, etc. Ideally I would just like to use a kind of magic file (or redirection) in place of file.txt that I could pipe to another command.
It is important to me not to write any of the output to disk if possible - just print it in the terminal or pipe it to other command(s).
At the moment I'm trying to capture the output of PHPUnit:
phpunit --log-junit log.xml
which is not a shell command but a PHP script that uses:
#!/usr/bin/env php
But I remember I once had an example with a Linux command whose output I wanted, where the format I needed was only available via a parameter like --save-to-file outputfile.txt.
Perhaps that's because piping/redirecting output that was designed to be saved to a file is not binary safe, and therefore such output can be corrupted when piped/redirected - can that be the case?
Some programs have special handling for -. For example, you can tell tar to write to stdout so it can be used in a pipeline. This would create a tarball locally and untar it remotely without the tarball ever being written to disk:
tar -cf - *.txt | ssh user@host tar -C /dir/ -xf -
You can use /dev/stdout with nearly all programs, as long as they don't need a seekable file.
command --save-to-file /dev/stdout
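For the PHPUnit case from the question, that would look something like the line below - assuming phpunit is happy to write the JUnit XML to whatever path --log-junit names; note that its normal progress output also goes to stdout and will be mixed in, which is what the next answer deals with:
phpunit --log-junit /dev/stdout | less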
As @Benjamin W. pointed out in the comments, you can save it to /dev/stdout, which is the standard output, and then pipe that to whatever you want (e.g. less):
command --save-to-file /dev/stdout | less
Take care, because there may be additional output on stdout. In that case you can have the command save its report to stderr, point stderr at the pipe (the 2>&1 has to come before the stdout redirection), and throw the normal stdout away:
command --save-to-file /dev/stderr 2>&1 >/dev/null | less
If both stderr and stdout are used, you may have to write your own driver for this or manipulate /proc/<pid>/mem or something like that.
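Short of anything that exotic, a named pipe (FIFO) is another way to keep the report separate from the normal output without the data ever being written to disk. A sketch, with command and --save-to-file standing in for the real program and option:
pipe=$(mktemp -u)                  # pick a unique path; -u only prints a name, it creates nothing
mkfifo "$pipe"                     # a FIFO node: data passed through it never touches the disk
command --save-to-file "$pipe" &   # the writer blocks until a reader opens the pipe
less "$pipe"                       # read the report as it is produced
rm -f "$pipe"                      # remove the FIFO node afterwards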

bash commands to remote hosts - errors with writing local output files

I'm trying to run several sets of commands in parallel on a few remote hosts.
I've created a script that constructs these commands and then writes the output to a local file, something along the lines of:
ssh <me>@<ip1> "command" 2> ./path/to/file/newFile1.txt &
ssh <me>@<ip2> "command" 2> ./path/to/file/newFile2.txt &
ssh <me>@<ip2> "command" 2> ./path/to/file/newFile3.txt;
...(the same repeats itself, with new commands and new file names)...
My issue is that, when my script runs these commands, I am getting the following errors:
bash: ./path/to/file/newFile1.txt: No such file or directory
bash: ./path/to/file/newFile2.txt: No such file or directory
bash: ./path/to/file/newFile3.txt: No such file or directory
...
These files do NOT exist but will be written. That being said, the directory paths are valid.
The strange thing is that if I copy and paste the whole big command, it works without any issue. I'd rather have it automated, though ;).
Any ideas?
Edit - more information:
My filesystem is the following:
- home
  - User
    - Desktop
      - Servers
        - Outputs
        - ...
I am running the bash script from home/User/Desktop/Servers.
The script creates the commands that need to be run on the remote servers. First things first, the script creates the directories where the files will be stored.
outputFolder="./Outputs"
...
mkdir -p ${outputFolder}/f${fileNumb}
...
The script then continues to create the commands that will be called on the remote hosts, and their respective outputs will be placed in the created directories.
The directories are there. Running the commands from the script gives me the errors; however, printing the commands and then copy-pasting them in the same location works for some reason. I have also tried giving the full path to the directory - still the same issue.
Hope I've been a bit clearer.
If this is the exact error message you get:
bash:  ./path/to/file/newFile1.txt: No such file or directory
Then you'll note that there's an extra space between the colon and the dot, so it's actually trying to open a file called " ./path/to/file/newFile1.txt" (without the quotes).
However, to accomplish that, you'd need to use quotes around the filename in the redirection, as in
something ... 2> " ./path/to/file/newFile1.txt"
Or the first character would have to be something other than a regular space - a non-breaking space perhaps, possibly something an editor might produce if you hit Alt-Space or such.
I don't believe you've shown enough to correctly answer the question.
This doesn't look like a problem with ssh, but the way you are calling the (ssh) commands.
You say that you are writing the commands into a file... presumably you are then running that file as a script. Could you show the code you use to do that? I believe that's where your problem is.
I suspect you have made a false assumption about the way the working directory changes when you run a script. It doesn't. You are listing relative paths, so it's important to know what they are relative to. That is the most likely reason for it working when you copy and paste it... you are executing from a different working directory.
I am new to bash scripting and was building my script based on another one I had seen. I was "running" the command by simply calling the variable where the command was stored:
$cmd
Solved by using:
eval $cmd
instead. My bad, should have given the full script from the start.
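For anyone who hits the same thing: when a command string is expanded from a plain variable, redirection characters such as 2> are not treated as operators - they are passed along as literal arguments (here to ssh, which then ran them on the remote side, where the ./path/... directory doesn't exist). A tiny illustration with a made-up err.txt:
cmd='ls /nonexistent 2> err.txt'
$cmd            # ls receives "2>" and "err.txt" as extra arguments - no redirection happens
eval "$cmd"     # the shell re-parses the string, so stderr really goes into err.txt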

How to get error output and store it in a variable or file

I'm having a little trouble figuring out how to get error output and store it in a variable or file in ksh. So in my script I have cp -p source.file destination inside a while loop.
When I get the below error
cp: source.file: The file access permissions do not allow the specified action.
I want to grab it and store it in a variable or file.
Thanks
You can redirect the error output of the command like so:
cp -p source.file destination 2>> my_log.txt
It will append the error message to the my_log.txt file.
In case you want a variable you can redirect stderr to stdout and assign the command output to a variable:
my_error_var=$(cp -p source.file destination 2>&1)
In ksh (as per the question), as in bash and other sh derivatives, you can get all of, or just, the stderr output from cp using redirection, then grab it in a variable (using $(), which is better than backticks if you're on a vaguely recent version):
output=$(cp -p source.file destination 2>&1)
cp doesn't normally output anything on stdout, though this would capture both stdout and stderr; to capture just stderr this way, add 1>/dev/null as well. The other solutions that redirect to a file could use cat or various other commands to output/process the logfile.
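Spelled out, the just-stderr variant mentioned above looks like this - the order matters, 2>&1 has to come before 1>/dev/null:
output=$(cp -p source.file destination 2>&1 1>/dev/null)   # captures stderr only; stdout is discarded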
The reason why I don't suggest outputting to temporary files:
Redirecting to a file and then reading it back in (via the read command, or more inefficiently via $(cat file)), particularly for just a single line, is less efficient and slower; though it's not so bad if you want to append to it over multiple operations before displaying the errors. You'll also leave the temporary file around unless you ALWAYS clean it up - and don't forget the cases where someone interrupts (i.e. Ctrl-C) or kills the script.
Using temporary files could also be a problem if the script is run multiple times at once (e.g. via cron if filesystem or other delays cause massive overruns, or just from multiple users), unless the temporary filename is unique.
Generating temporary files is also a security risk unless done very carefully, especially if the file data is processed again or the contents could be rewritten before display by something else to confuse/phish the user or break the script. Don't get into the habit of doing it too casually; read up on temporary files (e.g. mktemp) first via other questions here/Google.
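If you do end up going the temporary-file route despite all of that, a reasonably safe pattern (a generic sketch, not specific to the question's loop) is mktemp plus a trap so the file is removed even when the script is interrupted:
tmpfile=$(mktemp) || exit 1    # unique, securely created temporary file
cleanup() { rm -f "$tmpfile"; }
trap cleanup EXIT              # remove the file whenever the script exits
trap 'exit 1' INT TERM         # make Ctrl-C / kill exit cleanly so the EXIT trap still fires
cp -p source.file destination 2> "$tmpfile"
cat "$tmpfile"                 # ...or process the captured errors further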
You can redirect STDERR to a file by doing:
command 2> /path/to/file.txt

Redirection Doesn't Work

I want to put my program's output into a file. I keyed in the following:
./prog > log 2>&1
But there is nothing in the file "log". I am using Ubuntu 11.10 and the default shell is bash.
Does anybody know the cause of this AND how I can debug it?
There are many possible causes:
- The program reads its input from the log file while you try to redirect into it with truncation (see Why doesn't "sort file1 > file1" work?).
- The output is buffered, so you don't see data in the file until the output buffer is flushed. You can manually call fflush, or output std::flush if using C++ I/O streams, etc.
- The program is smart enough to disable output if the output stream is not a terminal.
- You are looking at the wrong file (i.e. in another directory).
- You are trying to dump the file's contents incorrectly.
- Your program outputs '\0' as the first character, so the output appears to be empty even though there is some data.
- Name your own.
Your best bet is to run this application under a debugger (like gdb) or use strace or ptrace (or both) and see what the program is doing. I mean, really, output redirection has worked for the last 40-odd years, so the problem must be somewhere else.
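A concrete way to follow that advice without the tracer's own output landing in log (strace prints its trace to stderr by default, so send it to a separate file with -o):
strace -f -o prog.trace ./prog > log 2>&1   # trace the program (and any children) into prog.trace
grep write prog.trace | head                # check whether, and where, it actually writes anything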

Why does bash sometimes not flush output of a python program to a file

I have a crontab job calling a python script and outputting to a file:
python run.py &> current_date.log
Now, sometimes when I do
tail -f current_date.log
I see the file filling up with the output, but other times the log file exists but stays empty for a long time. I am sure that the python script prints output right after it starts running, and the log file is created. Any ideas why it stays empty some of the time?
Python buffers output when it detects that it is not writing to a tty, and so your log file may not receive any output right away. You can configure your script to flush output or you can invoke python with the -u argument to get unbuffered output.
$ python -h
...
-u : unbuffered binary stdout and stderr (also PYTHONUNBUFFERED=x)
see man page for details on internal buffering relating to '-u'
...
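Applied to the crontab line from the question, either of these should make the log fill up as the script runs (both options come straight from the help text above):
python -u run.py &> current_date.log
# or, equivalently, via the environment:
PYTHONUNBUFFERED=x python run.py &> current_date.log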
The problem is actually Python (not bash) and is by design. Python buffers output by default. Run python with -u to prevent buffering.
Another suggestion is to create a class (or special function) which calls flush() right after the write to the log file.
