Unzip file to named pipe and file - bash

I'm trying to unzip a file and redirect the output to a named pipe and another file. Therefore I'm using the command tee.
gunzip -c $FILE | tee -a $UNZIPPED_FILE > $PIPE
My question is: is there any other way to achieve the same thing, but with a command that writes the file asynchronously? I want the output redirected to the pipe immediately, and the write to the file to run in the background by teeing the output through some sort of buffer.
Thanks in advance

What you need is a named pipe (FIFO). First create one:
mkfifo fifo
Now we need a process reading from the named pipe. There's an old Unix utility called buffer that was originally intended for asynchronous writes to tape devices. Start a process reading from the pipe in the background:
buffer -i fifo -o async_file -u 100000 -t &
-i names the input file and -o the output file. The -u flag makes buffer pause after every write (here 100000 microseconds, i.e. 1/10 second); it's only there so you can see that the writing really is asynchronous. And -t prints a summary when finished.
Now start the gunzip process:
gunzip -c archive.gz | tee -a fifo > $OTHER_PIPE
You'll see the gunzip process finish very quickly, while in the folder the file async_file grows slowly; that's the buffer process writing it out in the background. When it finishes (which can take a long time with a huge file) you'll see the summary. The output to the other pipe is written directly.
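A minimal, self-contained sketch of the same pipeline, using a plain cat as a stand-in for the buffer consumer (all file names here are illustrative):

```shell
# tee duplicates the unzipped stream: one copy goes to a regular file,
# the other into the FIFO, where a background reader picks it up.
set -e
tmp=$(mktemp -d)
printf 'hello world\n' | gzip > "$tmp/archive.gz"   # make a tiny test archive
mkfifo "$tmp/fifo"
cat "$tmp/fifo" > "$tmp/from_pipe" &                # background FIFO consumer
gunzip -c "$tmp/archive.gz" | tee -a "$tmp/copy" > "$tmp/fifo"
wait                                                # let the reader finish
```

Both $tmp/copy and $tmp/from_pipe end up with the unzipped data; replacing cat with buffer gives you the asynchronous, buffered write described above.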


make nohup write other than nohup.out

I've been using the command below so that nohup writes to nohup.out while tail also prints the output on the terminal.
nohup python train.py & tail -f nohup.out
However, I need nohup to use different file names.
When I try
nohup python train.py & tail -F vanila_v1.out
I'm getting following error message.
nohup: ignoring input and appending output to 'nohup.out'
tail: cannot open 'vanila_v1.out' for reading: No such file or directory
I also tried
nohup python train.py & tail -F nohup.out > vanila_v1.txt
Then it doesn't write an output on stdout.
How do I make nohup write somewhere other than nohup.out? I don't mind it simultaneously writing two different files. But to keep track of different processes, I need the names to be different.
Thanks.
You need to redirect STDOUT and STDERR for the nohup command, like:
$ nohup python train.py > vanila_v1.out 2>&1 & tail -F vanila_v1.out
At this point the process goes into the background and you can follow it with tail -F vanila_v1.out. That's one way to do it.
There is more information available about STDOUT and STDERR redirection; another question uses the tee command rather than > to achieve the same in one go.
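The tee-based variant can be sketched as follows, with a short stand-in command in place of python train.py (the file name is illustrative):

```shell
# tee writes the named log file and echoes the output to the terminal
# at the same time; sh -c '...' is a stand-in for `python train.py`.
nohup sh -c 'echo "epoch 1 done"; echo "epoch 2 done"' 2>&1 \
    | tee vanila_v1.out
```

In real use you would append & after tee to put the whole pipeline in the background.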

Read and write to file before task is complete using cmd

Consider, I am using the command
C:\>ping www.google.com 1>a.txt 2>&1 | type a.txt
It works well: by default Windows sends 4 packets, so the task ends and then the file content is displayed.
But when I use
C:\>ping www.google.com -t 1>a.txt 2>&1 | type a.txt
Here the task never completes because of the -t switch. How can I display the file's contents while they are still being written to the file?
I don't want to use tee from GnuWin32 CoreUtils
You don't want to use tee from the GnuWin32 CoreUtils?
Why don't you try the PowerShell version of the tee command, the Tee-Object cmdlet?
If you insist on using only CMD, I think it would be difficult, as there is (as far as I know) no way to immediately flush the log buffer to disk.

Telling nohup to write output in real-time

When using nohup the output of a script is buffered and only gets dumped to the log file (nohup.out) after the script has finished executing. It would be really useful to see the script output in something close to real-time, to know how it is progressing.
Is there a way to make nohup write the output whenever it is produced by the script? Or, since such frequent file access operations are slow, to dump the output periodically during execution?
There's a special program for this: unbuffer! See http://linux.die.net/man/1/unbuffer
The idea is that your program's output routines recognize that stdout is not a terminal (isatty(stdout) == false), so they buffer output up to some maximum size. Using the unbuffer program as a wrapper for your program will "trick" it into writing the output one line at a time, as it would do if you ran the program in an interactive terminal directly.
What's the command you are executing? You could create a shell script that redirects its output to a file after each command (echo "blah" >> your-output.txt), so you can check that file during the nohup execution (run it with nohup script.bash &).
Cheers
Use stdbuf. It was added to GNU coreutils in 7.5.
stdbuf -i0 -o0 -e0 cmd
Command with nohup like:
nohup stdbuf -i0 -o0 -e0 ./server >> log_server.log 2>&1
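As a self-contained illustration of the same invocation, with a small loop standing in for ./server (the log file name is made up):

```shell
# stdbuf disables stdin/stdout/stderr buffering, so every line reaches
# the log as soon as it is printed; the sh loop stands in for ./server.
nohup stdbuf -i0 -o0 -e0 sh -c 'for i in 1 2 3; do echo "tick $i"; done' \
    >> log_server.log 2>&1 &
wait $!        # for this demo only; normally you would tail -f the log
```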

bash sequence: wait for output, then start next program

In my case I have to run openvpn before ssh'ing into a server, and the openvpn command echos out "Initialization Sequence Completed".
So, I want my script to setup the openvpn and then ssh in.
My question is: How do you execute a command in bash in the background and await it to echo "completed" before running another program?
My current way of doing this is having 2 terminal panes open, one running:
sudo openvpn --config FILE
and in the other I run:
ssh SERVER
once the first terminal pane has shown me the "Initialization Sequence Completed" text.
It seems like you want to run openvpn as a process in the background while processing its stdout in the foreground.
exec 3< <(sudo openvpn --config FILE)
sed '/Initialization Sequence Completed$/q' <&3 ; cat <&3 &
# VPN initialization is now complete and running in the background
ssh SERVER
Explanation
Let's break it into pieces:
echo <(sudo openvpn --config FILE) will print out something like /dev/fd/63
the <(..) runs openvpn in the background, and...
attaches its stdout to a file descriptor, which is printed out by echo
exec 3< /dev/fd/63
(where /dev/fd/63 is the file descriptor printed in step 1)
this tells the shell to open that file descriptor (/dev/fd/63) for reading, and...
make it available at the file descriptor 3
sed '/Initialization Sequence Completed$/q' <&3
now we run sed in the foreground, but make it read from the file descriptor 3 we just opened
as soon as sed sees that the current line ends with "Initialization Sequence Completed", it quits (the /q part)
cat <&3 &
openvpn will keep writing to file descriptor 3 and eventually block if nothing reads from it
to prevent that, we run cat in the background to read the rest of the output
The basic idea is to run openvpn in the background, but capture its output somewhere so that we can run a command in the foreground that will block until it reads the magic words, "Initialization Sequence Completed". The above code tries to do it without creating messy temporary files, but a simpler way might be just to use a temporary file.
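That simpler temporary-file approach could look like the following sketch, with a short-lived stand-in process in place of sudo openvpn --config FILE (all names are illustrative):

```shell
# Send the VPN's output to a log file, then poll the log for the magic line.
log=$(mktemp)
sh -c 'echo "starting"; sleep 1; echo "Initialization Sequence Completed"' \
    > "$log" 2>&1 &                 # stand-in for: sudo openvpn --config FILE
until grep -q "Initialization Sequence Completed" "$log"; do
    sleep 1
done
echo "VPN is up"                    # at this point you would run: ssh SERVER
```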
Use -m 1 together with --line-buffered in grep to terminate grep after the first match in a continuous stream. This should work:
sudo openvpn --config FILE | grep -m 1 "Initialization Sequence Completed" --line-buffered && ssh SERVER

Reading realtime output from airodump-ng

When I execute the command airodump-ng mon0 >> output.txt, output.txt is empty. I need to be able to run airodump-ng mon0, stop the command after about 5 seconds, and then have access to its output. Any thoughts on where I should begin to look? I'm using bash.
Start the command as a background process, sleep 5 seconds, then kill the background process. You may need to redirect a different stream than STDOUT for capturing the output in a file. This thread mentions STDERR (which would be FD 2). I can't verify this here, but you can check the descriptor number with strace. The command should show something like this:
$ strace airodump-ng mon0 2>&1 | grep ^write
...
write(2, "...
The number in the write statement is the file descriptor airodump-ng writes to.
The script might look somewhat like this (assuming that STDERR needs to be redirected):
#!/bin/bash
{ airodump-ng mon0 2>> output.txt; } &
PID=$!
sleep 5
kill -TERM $PID
cat output.txt
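You can try the same pattern with a stand-in command that writes to STDERR the way airodump-ng does (the file name is illustrative):

```shell
# A stand-in loop printing to stderr, mimicking airodump-ng's behaviour;
# 2>> captures that stream in the file while the job runs in the background.
sh -c 'while :; do echo "scan line" >&2; sleep 0.2; done' 2>> output.txt &
PID=$!
sleep 1
kill -TERM $PID
cat output.txt       # several "scan line" entries captured before the kill
```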
You can write the output to a file using the following:
airodump-ng [INTERFACE] -w [OUTPUT-PREFIX] --write-interval 30 -o csv
This will give you a csv file whose name would be prefixed by [OUTPUT-PREFIX]. This file will be updated after every 30 seconds. If you give a prefix like /var/log/test then the file will go in /var/log/ and would look like test-XX.csv
You should then be able to access the output file(s) by any other tool while airodump is running.
As of airodump-ng 1.2 rc4 you should use the following command:
timeout 5 airodump-ng -w my --output-format csv --write-interval 1 wlan1mon
After this command has completed you can access its output by viewing my-01.csv. Please note that the output file is in CSV format.
Your command doesn't work because airodump-ng writes its output to stderr instead of stdout! So the following command is a corrected version of yours:
airodump-ng mon0 &> output.txt
The first method is better for parsing the output with other programs/applications.
