writing sysdig output to a file - bash

I want to monitor my users' commands on my server for a period of time, so I run the sysdig command with nohup. I want to write the output to a file like so:
# nohup sysdig -c spy_users 1>>/path/to/true 2>>/path/to/false &
But the result is not written to the file in real time. Any ideas?

The output is probably being buffered. Try setting the buffering mode to line-buffered, or deactivate it completely:
nohup stdbuf -oL -eL sysdig -c spy_users 1>>/path/to/true 2>>/path/to/false &
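To see the difference stdbuf makes, here is a minimal sketch with awk standing in for sysdig (the log path and the loop are illustrative, not part of the original commands):

```shell
# Minimal sketch: awk stands in for sysdig here. Without stdbuf, awk's
# stdout is block-buffered when redirected to a file; -oL makes it
# line-buffered, so each line reaches the log as soon as it is printed.
log=/tmp/spy_demo.log
nohup stdbuf -oL awk 'BEGIN { for (i = 1; i <= 3; i++) { print "event", i; system("sleep 1") } }' >> "$log" 2>/dev/null &
tail -f "$log"   # lines now show up one per second, not all at the end
```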

Related

make nohup write other than nohup.out

I've been using the command below so that nohup writes to nohup.out while tail also prints the output on the terminal.
nohup train.py & tail -f nohup.out
However, I need nohup to use different file names.
When I try
nohup python train.py & tail -F vanila_v1.out
I'm getting following error message.
tail: cannot open 'vanila_v1.out' for reading: No such file or directory
nohup: ignoring input and appending output to 'nohup.out'
I also tried
nohup python train.py & tail -F nohup.out > vanila_v1.txt
Then it doesn't write any output to stdout.
How do I make nohup write to a file other than nohup.out? I don't mind writing two different files simultaneously, but to keep track of different processes, I need the names to be different.
Thanks.
You need to redirect STDOUT and STDERR for the nohup command, like:
$ nohup python train.py > vanila_v1.out 2>&1 & tail -F vanila_v1.out
At this point, the process will go into the background and you can use tail -f vanila_v1.out. That's one way to do it.
A little more information on STDOUT and STDERR redirection is available here. Here is another question that uses the tee command rather than > to achieve the same in one go.
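The tee variant could look like the sketch below (the script name is taken from the question; substitute your own command):

```shell
# Sketch of the tee variant: the log file and the terminal are both
# written in one go, so no separate tail process is needed.
# "train.py" is the asker's script; any command works in its place.
nohup python train.py 2>&1 | tee -a vanila_v1.out &
```

Note that nohup only redirects stdout to nohup.out when stdout is a terminal; since stdout is a pipe here, nohup leaves it alone and tee receives the output.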

Telling nohup to write output in real-time

When using nohup the output of a script is buffered and only gets dumped to the log file (nohup.out) after the script has finished executing. It would be really useful to see the script output in something close to real-time, to know how it is progressing.
Is there a way to make nohup write the output whenever it is produced by the script? Or, since such frequent file access operations are slow, to dump the output periodically during execution?
There's a special program for this: unbuffer! See http://linux.die.net/man/1/unbuffer
The idea is that your program's output routines recognize that stdout is not a terminal (isatty(stdout) == false), so they buffer output up to some maximum size. Using the unbuffer program as a wrapper for your program will "trick" it into writing the output one line at a time, as it would do if you ran the program in an interactive terminal directly.
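You can reproduce that isatty check yourself in plain shell, no unbuffer needed for the check itself (the function name is just for illustration):

```shell
# The test built into most programs' stdio, reproduced in shell:
# [ -t 1 ] asks whether file descriptor 1 (stdout) is a terminal.
check() { if [ -t 1 ]; then echo "terminal"; else echo "not a terminal"; fi; }
check              # at an interactive prompt: prints "terminal"
check | cat        # stdout is a pipe: prints "not a terminal"
```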
What's the command you are executing? You could create a bash script that redirects the output to a file after each command (echo "blabh" >> your-output.txt), so you can check that file during the nohup execution (you would then run: nohup script.bash &).
Cheers
Please use stdbuf. It has been available in GNU coreutils since 7.5.
stdbuf -i0 -o0 -e0 cmd
Combined with nohup it looks like:
nohup stdbuf -i0 -o0 -e0 ./server >> log_server.log 2>&1

Unzip file to named pipe and file

I'm trying to unzip a file and redirect the output to a named pipe and another file. Therefore I'm using the command tee.
gunzip -c $FILE | tee -a $UNZIPPED_FILE > $PIPE
My question is: is there any other option to achieve the same thing, but with a command that writes the file asynchronously? I want the output redirected to the pipe immediately, with the writing to the file running in the background by teeing the output through some sort of buffer.
Thanks in advance
What you need is a named pipe (FIFO). First create one:
mkfifo fifo
Now we need a process reading from the named pipe. There's an old Unix utility called buffer, originally intended for asynchronous writing to tape devices. Start a process reading from the pipe in the background:
buffer -i fifo -o async_file -u 100000 -t &
-i is the input file and -o the output file. The -u flag adds a small pause after every write (here 100000 microseconds, i.e. 1/10 second); it's only there so you can see that the writing really is asynchronous. And -t prints a summary when finished.
Now start the gunzip process:
gunzip -c archive.gz | tee -a fifo > $OTHER_PIPE
You will see the gunzip process finish very quickly. In the folder you will see async_file growing slowly; that's the background buffer process writing to it. When it finishes (which can take a long time with a huge file) you will see the summary. The other file is written directly.
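buffer is not packaged everywhere; if it is unavailable, a plain background reader on the FIFO gives the same decoupling, minus buffer's tunable pause and summary. A minimal sketch with only standard tools (file names are examples):

```shell
# Same structure: a background cat drains the FIFO into the file
# while tee feeds it; the downstream consumer is served immediately.
mkfifo fifo
cat fifo >> async_file &
gunzip -c archive.gz | tee -a fifo > "$OTHER_PIPE"
wait   # wait for the background reader to finish draining the FIFO
```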

Reading realtime output from airodump-ng

When I execute the command airodump-ng mon0 >> output.txt, output.txt is empty. I need to be able to run airodump-ng mon0, stop the command after about 5 seconds, and then have access to its output. Any thoughts on where I should begin to look? I'm using bash.
Start the command as a background process, sleep 5 seconds, then kill the background process. You may need to redirect a different stream than STDOUT for capturing the output in a file. This thread mentions STDERR (which would be FD 2). I can't verify this here, but you can check the descriptor number with strace. The command should show something like this:
$ strace airodump-ng mon0 2>&1 | grep ^write
...
write(2, "...
The number in the write statement is the file descriptor airodump-ng writes to.
The script might look somewhat like this (assuming that STDERR needs to be redirected):
#!/bin/bash
{ airodump-ng mon0 2>> output.txt; } &
PID=$!
sleep 5
kill -TERM $PID
cat output.txt
You can write the output to a file using the following:
airodump-ng [INTERFACE] -w [OUTPUT-PREFIX] --write-interval 30 -o csv
This will give you a csv file whose name would be prefixed by [OUTPUT-PREFIX]. This file will be updated after every 30 seconds. If you give a prefix like /var/log/test then the file will go in /var/log/ and would look like test-XX.csv
You should then be able to access the output file(s) by any other tool while airodump is running.
As of airodump-ng 1.2 rc4 you should use the following command:
timeout 5 airodump-ng -w my --output-format csv --write-interval 1 wlan1mon
After this command has completed you can access its output by viewing my-01.csv. Please note that the output file is in CSV format.
Your command doesn't work because airodump-ng writes its output to stderr instead of stdout. So the following command is a corrected version of yours:
airodump-ng mon0 &> output.txt
The first method is better for parsing the output with other programs/applications.

Send Output errors of nohup to syslog

I'm attempting to write a bash script that uses nohup and passes errors to rsyslog.
I've tried this command with different variations of the log variable (see below) but can't get the output passed to anything but a standard text file. I can't get it to pipe.
nohup imageprocessor.sh > "$LOG" &
Is it possible to pipe nohup output or do I need a different command.
A couple of variations of log that I have tried
LOG="|/usr/bin/logger -t workspaceworker -p LOCAL5.info &2"
or
LOG="|logtosyslog.sh"
or
LOG="logtosyslog.sh"
A way in bash to redirect output to syslog is:
exec > >(logger -t myscript)
stdout is then sent to the logger command
exec 2> >(logger -t myscript)
for stderr
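Put together in a script, with the tag from the question, it could look like the sketch below (process substitution requires bash; the priorities are illustrative, so adjust them to your rsyslog setup):

```shell
#!/bin/bash
# Sketch: route this script's stdout and stderr to syslog via logger.
# Tag "workspaceworker" and facility LOCAL5 mirror the question.
exec 1> >(logger -t workspaceworker -p local5.info)
exec 2> >(logger -t workspaceworker -p local5.err)

echo "image processing started"    # goes to syslog at local5.info
echo "something went wrong" >&2    # goes to syslog at local5.err
```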
Not directly. nohup will detach the child process, so piping the output of the nohup command isn't helpful. This is what you want:
nohup sh -c 'imageprocessor.sh | logger'
