Printed filesizes of files being written not updating? - bash

I'm working on a shell script which runs multiple tcpdumps in the background and then waits for the user to terminate them. While waiting, I'd like the script to print out the size of the folder where the .pcap files are being written. So far I keep getting the same size printed even though the files are growing as the tcpdump commands keep running.
while true; do
    echo ""
    du -hcs <path>/<folder_name>
    echo "Press enter to stop all traces: "
    if read -rsn1 -t 5; then
        break
    fi
    echo ""
done

Try using watch:
watch -n1 "du -hcs <path>/<folder_name>"
This will print the size every second.

Turns out the problem was with the tcpdump command: adding -U before -w solved it. With -U, tcpdump writes each packet to the output file as soon as it is captured instead of waiting for its output buffer to fill, so the reported folder size grows in near real time.
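For reference, a minimal sketch of the corrected capture command (the interface name and output path are placeholders, not values from the original script):

tcpdump -i eth0 -U -w <path>/<folder_name>/trace.pcap &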

Related

On what occasion does 'tee' delete the file it was writing to?

bash 4.2, CentOS
The script:
#!/bin/bash
LOG_FILE=$homedir/logs/result.log
exec 3>&1
exec > >(tee -a ${LOG_FILE}) 2>&1
echo
end_shell_number=10
for script in `seq -f "%02g_*.sh" 0 $end_shell_number`; do
    if ! bash $homedir/$script; then
        printf 'Script "%s" failed, terminating...\n' "$script" >&2
        exit 1
    fi
done
It basically runs through sub-scripts numbered 00 to 10 and logs everything to LOG_FILE while also displaying it on stdout.
I was watching the log grow with tail -F ./logs/result.log,
and it was working nicely until the log file suddenly got removed.
The sub-scripts do nothing related to file descriptors or the log file; they remotely restart Tomcats via ssh commands.
Question:
tee was writing to the log file successfully until the file got erased, and logging stopped from then on.
Is there a file-size limit or timeout in tee? Is there any known behavior of tee that makes it delete a file?
On what occasion does 'tee' delete the file it was writing to?
tee does not delete or truncate the file once it has started writing.
Is there a file-size limit or timeout in tee?
No.
Is there any known behavior of tee that makes it delete a file?
No.
Note that the file can be removed by another process while tee keeps writing to its open file descriptor; the data still goes to the unlinked inode, but it is no longer reachable by its pathname (see man 3 unlink).
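You can see this behavior with a small experiment at an interactive shell (a minimal sketch; the file name is just an example):

( while true; do date; sleep 1; done ) | tee -a demo.log &
demo_pid=$!
sleep 3
rm demo.log     # tee keeps writing to the now-unlinked inode without complaint
sleep 3
ls demo.log     # "No such file or directory" -- the data went away with the inode
kill $demo_pid

So in the scenario above, something else on the machine (a cleanup or log-rotation job, or one of the restarted services) most likely removed logs/result.log; tee simply kept writing to an inode that was no longer visible.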

Running commands in bash script in parallel with loop

I have a script where I start a packet capture with tshark and then check whether the user has submitted an input text file.
If there is a file present, I need to run a command for every item in the file in a loop (while tshark keeps running); otherwise, just continue running tshark.
I would also like some way to stop tshark with user input such as a letter.
Code snippet:
echo "Starting tshark..."
sleep 2
tshark -i ${iface} &>/dev/null
tshark_pid=$!
# if devices aren't provided (such as in case of new devices, start capturing directly)
if [ -z "$targets" ]; then
echo "No target list provided."
else
for i in $targets; do
echo "Attempting to deauthenticate $i..."
sudo aireplay-ng -0 $number -a $ap -c $i $iface$mon
done
fi
What happens here is that tshark starts, and only when I quit it with Ctrl+C does the script move on to the if statement and the loop.
Adding & at the end of a command runs it in the background as a separate process. Mind that you won't be able to stop it with Ctrl+C.
For example:
firefox
will block the shell, while
firefox &
will not block the shell.
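Applied to the snippet in the question, a minimal sketch might look like this (variable names are taken from the question; the single-key prompt at the end is just one possible way to satisfy the "stop with user input" requirement):

tshark -i "${iface}" &>/dev/null &
tshark_pid=$!

for i in $targets; do
    echo "Attempting to deauthenticate $i..."
    sudo aireplay-ng -0 $number -a $ap -c $i $iface$mon
done

read -rsn1 -p "Press any key to stop the capture"
kill "$tshark_pid"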

Unable to exit line in bash script

I am writing a script to start an application, grep the log for the words "Server startup", then exit and execute the next command. But it does not exit and run the next command after the condition is met. Any help?
#!/bin/bash
application start; tail -f /application/log/file/name | \
while read line ; do
    echo "$line" | grep "Server startup"
    if [ $? = 0 ]
    then
        echo "application started...!"
    fi
done
Don't Use Tail's Follow Flag
Tail's follow flag (-f) never exits on its own; it keeps following the file until it receives a signal or hits an error condition. You will need a different approach to tracking data at the end of your file, such as watch, logwatch, or periodic log rotation using logrotate. The best tool depends a lot on the format and frequency of your log data.
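For completeness, if the tail -f approach is kept, the loop also needs an explicit break so the pipeline can end once the message appears. A sketch based on the question's code (note that the leftover tail process only terminates the next time it writes to the now-broken pipe):

application start
tail -f /application/log/file/name | while read line; do
    if echo "$line" | grep -q "Server startup"; then
        echo "application started...!"
        break
    fi
done
# next command runs here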

bash: redirected file seen by script as 'does not exist'

I want to check whether the last command produced any errors, so I redirect stderr to a file and check the file for the string "error". (There is only one possible error in this case.)
My script looks like below:
#acquire lock
rm -f /some/path/err.out
MyProgramme 2>/some/path/err.out &
if grep -i "error" /some/path/err.out ; then
    echo "ERROR while running MyProgramme, check /some/path/err.out for error(s)"
    #release lock
    exit 1
fi
The 'if' condition gives the error 'No such file or directory' for err.out, even though I can see the file exists.
Did I miss anything? Any help is appreciated. Thanks!
PS: I can't check the exit code with $? because the program runs in the background.
In addition to the file possibly not existing yet when you call grep, grep is only called once and only sees whatever data is in the file at that moment; it will not keep reading when it reaches the end and wait for MyProgramme to finish. Instead, I would recommend using a named pipe as the input to grep. That way grep keeps reading from the pipe until MyProgramme actually completes.
#acquire lock
rm -f /some/path/err.out
p=/some/path/err.out
mkfifo "$p"
MyProgramme 2> "$p" &
if grep -i "error" "$p" ; then
    echo "ERROR while running MyProgramme, check /some/path/err.out for error(s)"
    #release lock
    exit 1
fi
When you start MyProgramme in the background, it is possible that grep runs before MyProgramme's redirection has created the file /some/path/err.out. That is why grep reports the file as missing even though it exists by the time you check for it yourself.
You can wait until the background job completes using wait before inspecting the file with grep.
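A minimal sketch of that wait-based variant (paths and messages follow the question):

#acquire lock
rm -f /some/path/err.out
MyProgramme 2>/some/path/err.out &
wait $!    # block until the background MyProgramme finishes
if grep -i "error" /some/path/err.out ; then
    echo "ERROR while running MyProgramme, check /some/path/err.out for error(s)"
    #release lock
    exit 1
fi

Immediately after wait, $? also holds MyProgramme's exit status, which addresses the PS in the question. The trade-off is that wait gives up the concurrency of running MyProgramme in the background, so the named-pipe approach above is preferable if other work has to happen in parallel.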

Docker kill an infinite process in a container after X amount of time

I am using the code found in this Docker issue to start a container, run a process inside it with a 20-second limit, and kill the container regardless of whether the process completes, fails to execute, or times out.
The code I am currently using is this:
#!/bin/bash
set -e
to=$1
shift
cont=$(docker run -d "$@")
code=$(timeout "$to" docker wait "$cont" || true)
docker kill $cont &> /dev/null
echo -n 'status: '
if [ -z "$code" ]; then
    echo timeout
else
    echo exited: $code
fi
echo output:
# pipe to sed for nicer indentation
docker logs $cont | sed 's/^/\t/'
docker rm $cont &> /dev/null
This is almost perfect. However, if you run an infinite process (for example this Python infinite loop):
while True:
    print "infinite loop"
the whole system jams up and the app crashes. After reading around a bit I think it has something to do with the stdout buffer, but I have absolutely no idea what that means.
The problem you have is a process that writes massive amounts of data to stdout.
These messages get logged into a file that grows without bound.
Have a look at (the path depends on where your system keeps Docker's log files):
sudo find /var/lib/docker/containers/ -name '*.log' -ls
You can remove old log files if they are of no interest.
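Depending on your Docker version, you can also cap the size of that per-container log when you start it, using the json-file logging driver's options (the sizes below are only illustrative):

docker run -d --log-opt max-size=10m --log-opt max-file=3 <image>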
One possibility is to start your docker run -d daemon
under a ulimit restriction on the maximum size a file can be.
Add to the start of your script, for example:
ulimit -f 20000 -c 0
This limits file sizes to 20000*1024 bytes and disables core dumps, which you can expect
from infinite loops whose writes are forced to fail.
Add & at the end of the line:
cont=$(docker run -d "$@") &
It will run the command in the background.
I don't know Docker well, but if it still fails to stop, you could also add the following right after that line:
mypid=$!
sleep 20 && kill $mypid
