Read and write to file before task is complete using cmd - windows

Consider, I am using the command
C:\>ping www.google.com 1>a.txt 2>&1 | type a.txt
It works well because Windows sends 4 packets by default, so the task ends and then the file contents are displayed.
But when I use
C:\>ping www.google.com -t 1>a.txt 2>&1 | type a.txt
Here the task never completes because I have used the -t switch. How can I display the file contents as they are being written to the file?
I don't want to use tee from GnuWin32 CoreUtils

You don't want to use tee from the GnuWin32 CoreUtils?
Why don't you try the PowerShell version of the tee command?
Here is some reading on it.
If you insist on using only CMD, I think it would be difficult, as there is no way (AFAIK) to immediately flush the log buffer to disk.
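If PowerShell is an option, a minimal sketch with its built-in Tee-Object might look like this (a.txt is just a placeholder file name):
ping www.google.com -t | Tee-Object -FilePath a.txt
This should show each reply on the console and write it to a.txt as it arrives, so the file can be inspected while the ping is still running.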

How to check if a specific executable has a live process

I want to write a script that checks periodically if a specific executable has a live process, something like this:
psping [-c ###] [-t ###] [-u user-name] exe-name
-c - limit the number of pings; default is infinite
-t - define an alternative timeout in seconds; default is 1 sec
-u - define the user to check processes for; default is ANY user.
For example, psping java will list all processes that are currently invoked by the java command.
The main goal is to count and echo the number of live processes for a user whose executable file is exe-name (java in the above example).
I wrote a function:
perform_ping(){
    ps aux | grep "${EXE_NAME}" | awk '{print $2}' | while read PID
    do
        echo $PID # -> This will echo the correct PID
        # How to find if this PID was executed by ${EXE_NAME}
    done
    sleep 1
}
I'm having a hard time figuring out how to check if a specific executable file has a live process.
To list all processes that have a file open, we can use the lsof command. Because an executable must be opened in order to be run, we can use lsof for this purpose.
The next problem is that when we run a java file, we simply type java some_file, and if we issue lsof java it will coldly say lsof: status error on java: No such file or directory, because java is actually /usr/bin/java.
To convert from java to /usr/bin/java we can use which java, so the command would be:
lsof $(which $EXE_FILE)
The output may look like this:
lsof: WARNING: can't stat() tracefs file system /sys/kernel/debug/tracing
Output information may be incomplete.
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
python3 26969 user txt REG 8,1 4526456 15409 /usr/bin/python3.6
In this case I searched for python3 with lsof $(which python3). It reports the PID in the second field. But when another user invokes python3 too, lsof will issue a warning on stderr like the first two lines, because it cannot read other users' info. Therefore, we modify the command to:
lsof $(which python3) 2> /dev/null
to suppress the warning. Then we're almost there:
lsof $(which python3) 2> /dev/null | awk 'NR > 1 { print $2 }'
Then you can use read to catch the PID.
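Putting it together, a minimal sketch of the question's perform_ping using this approach (assuming EXE_NAME holds the command name, e.g. java, and that it resolves via which; sort -u deduplicates in case a process shows more than one open-file entry):
perform_ping(){
    lsof "$(which "$EXE_NAME")" 2> /dev/null | awk 'NR > 1 { print $2 }' | sort -u | while read PID
    do
        echo "$PID"    # PID of a live process running $EXE_NAME
    done
    sleep 1
}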
Edit: how to list all processes for all users?
By default lsof doesn't select processes by command name, but after further reading of man lsof I found that there are options that meet your needs.
-a causes list selection options to be ANDed.
-c c selects the listing of files for processes executing the command that begins with the characters of c. Multiple commands may be specified, using multiple -c options.
-u s selects the listing of files for the user whose login names or user ID numbers are in the comma-separated set s.
Therefore, you can use
lsof -c java
to list the files of all processes whose command begins with java. And to restrict the listing to a specific user, add the -u option:
lsof -a -c java -u user
-a is needed for the AND operation. If you run this command you will see multiple entries per process; to deduplicate them, run
lsof -c java 2> /dev/null | sed 1d | sort -uk2,2
Also please notice that users may run their own java from their own path, so you have to decide which one to monitor: any java command or specifically /usr/bin/java.
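To get the count the question ultimately asks for, a sketch along these lines should work (EXE_NAME and USER_NAME are placeholders; drop -a and -u to count processes of any user):
lsof -a -c "$EXE_NAME" -u "$USER_NAME" 2> /dev/null | sed 1d | sort -uk2,2 | wc -l
This prints the number of unique live processes whose command name begins with $EXE_NAME and that belong to $USER_NAME.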

How to save command output to a file in windows and show output on terminal also?

I want to save the output of a command to a file and also display the output on the terminal.
For saving the output to a file I am using the following command:
ps -ef > D:\temp.txt
But this saves the output to the file and does not display it on the terminal. Which command should I use to meet both requirements?
Use MinGW, and then use the following command:
ps -ef 2>&1 | tee D:\temp.txt
This will work.

Unzip file to named pipe and file

I'm trying to unzip a file and redirect the output to a named pipe and another file. Therefore I'm using the command tee.
gunzip -c $FILE | tee -a $UNZIPPED_FILE > $PIPE
My question is: is there any other option to achieve the same thing, but with a command that writes the file asynchronously? I want the output redirected to the pipe immediately, with the write to the file running in the background by teeing the output into some sort of buffer.
Thanks in advance
What you need is a named pipe (FIFO). First create one:
mkfifo fifo
Now we need a process reading from the named pipe. There's an old Unix utility called buffer that was originally used for asynchronous writing to tape devices. Start a process reading from the pipe in the background:
buffer -i fifo -o async_file -u 100000 -t &
-i is the input file and -o the output file. The -u flag is only there so you can see that it really is asynchronous: it pauses for 1/10 of a second after every write. And -t prints a summary when finished.
Now start the gunzip process:
gunzip -c archive.gz | tee -a fifo > $OTHER_PIPE
You will see the gunzip process end very quickly. In the directory you will see the file async_file growing slowly; that is the background buffer process writing to it. When it finishes (which can take a long time with a huge file) you will see the summary. The other pipe ($OTHER_PIPE) is written to directly.
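Put together as a single script, a minimal sketch might look like this (archive.gz, async_file and $OTHER_PIPE are placeholders; buffer's flags are used as described above):
#!/bin/bash
mkfifo fifo                                   # FIFO that tee will feed
buffer -i fifo -o async_file -u 100000 -t &   # background writer draining the FIFO into the file
BUFFER_PID=$!
gunzip -c archive.gz | tee -a fifo > "$OTHER_PIPE"   # fast path: the consumer pipe gets data immediately
wait "$BUFFER_PID"                            # wait for the slow file write to finish
rm fifo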

Reading realtime output from airodump-ng

When I execute the command airodump-ng mon0 >> output.txt, output.txt is empty. I need to be able to run airodump-ng mon0, stop the command after about 5 seconds, and then have access to its output. Any thoughts on where I should begin to look? I am using bash.
Start the command as a background process, sleep 5 seconds, then kill the background process. You may need to redirect a different stream than STDOUT for capturing the output in a file. This thread mentions STDERR (which would be FD 2). I can't verify this here, but you can check the descriptor number with strace. The command should show something like this:
$ strace airodump-ng mon0 2>&1 | grep ^write
...
write(2, "...
The number in the write statement is the file descriptor airodump-ng writes to.
The script might look somewhat like this (assuming that STDERR needs to be redirected):
#!/bin/bash
{ airodump-ng mon0 2>> output.txt; } &   # run in the background, appending STDERR to the file
PID=$!                                    # PID of the background job
sleep 5
kill -TERM $PID                           # stop it after 5 seconds
cat output.txt                            # show what was captured
You can write the output to a file using the following:
airodump-ng [INTERFACE] -w [OUTPUT-PREFIX] --write-interval 30 -o csv
This will give you a csv file whose name is prefixed by [OUTPUT-PREFIX]. The file is updated every 30 seconds. If you give a prefix like /var/log/test, the file will go in /var/log/ and will be named something like test-XX.csv.
You should then be able to access the output file(s) with any other tool while airodump is running.
With airodump-ng 1.2 rc4 you should use the following command:
timeout 5 airodump-ng -w my --output-format csv --write-interval 1 wlan1mon
After this command has completed you can access its output by viewing my-01.csv. Please note that the output file is in CSV format.
Your command doesn't work because airodump-ng writes its output to stderr instead of stdout. So the following command is a corrected version of yours:
airodump-ng mon0 &> output.txt
The first method is better if you want to parse the output with other programs/applications.

Bash Redirect to a file

I am trying to redirect the output of a command to a file. The command I am using (zypper) downloads packages from the internet. The command is
zypper -x -n in geany >> log.txt
The command gradually prints output to the console. The problem I am facing is that the above command writes the command output all at once after the command finishes executing. How do I redirect the output to the file as it is produced on the terminal, rather than having all of it written at the end?
Not with bash itself, but via the tee command:
zypper -x -n in geany | tee log.txt
&>>FILE COMMAND
will append the output of COMMAND to FILE
In your case
&>>log.txt zypper -x -n in geany
If you want to pipe a command through a filter, you must ensure that the command writes to standard output (file descriptor 1) -- if it writes to standard error (file descriptor 2), you have to redirect 2 to 1 before the pipe. Take into account that only stdout passes through a pipe.
So you have to do:
2>&1 COMMAND | FILTER
If you want to grep the output and at the same time keep it in a log file, you have to duplicate it with tee and use a filter like ... | tee log-file | grep options
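Putting those pieces together for the zypper example, a sketch might look like this (the grep pattern error is just a hypothetical filter):
zypper -x -n in geany 2>&1 | tee log.txt | grep -i error
Everything zypper prints on stdout and stderr is written to log.txt as it is produced, while only the lines matching the filter reach the terminal.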
