Redirect stderr/-out of running application to my own bash script - bash

I am running an application called "hd-idle". It spins down disks after a specified period of inactivity.
The output looks like this:
user#linux:~$ sudo /usr/sbin/hd-idle -i 10800
symlinkPolicy=0, defaultIdle=10800, defaultCommand=scsi, defaultPowerCondition=0, debug=false, logFile=, devices=
sda spindown
sdd spindown
sde spindown
sda spinup
sdd spinup
sdd spindown
[...]
I want to save this output to a logfile (while the application is running), add timestamps, and change sd[a-z] to the corresponding model/serial of the hard drive.
I wrote a small bash script that does what I want:
user#linux:~$ cat hd_idle_logger.sh
#!/bin/bash
DATUM=$(date '+%Y-%m-%d %H:%M:%S')
INPUT=$(cat)
REGEX='(sd[a-z])\s(spin(down|up))'
[[ $INPUT =~ $REGEX ]]
if [ -n ${BASH_REMATCH[1]} ]
then
    MODEL=$(lsblk /dev/${BASH_REMATCH[1]} -n -o MODEL)
    SERIAL=$(lsblk /dev/${BASH_REMATCH[1]} -n -o SERIAL)
fi
echo -e "$DATUM\t${MODEL}_$SERIAL (${BASH_REMATCH[1]})\t${BASH_REMATCH[2]}" >> /home/linux/hd_idle_logger.log
I can verify that it works:
user#linux:~$ echo "sdd spindown" |& ./hd_idle_logger.sh
user#linux:~$ cat hd_idle_logger.log
2023-02-12 12:14:54 WDC_WD120EMAZ-10BLFA6_1PAEL2ES (sdd) spindown
But running the application and piping its output to my script doesn't work: the logfile gets no content and I don't see the output on the console anymore:
user#linux:~$ sudo /usr/sbin/hd-idle -i 10800 |& /home/user/hd_idle_logger.sh
So what am I doing wrong?

As long as hd-idle is running, your script will be stuck at INPUT=$(cat). Because $(cat) has to capture ALL output, it can only terminate once hd-idle itself has terminated.
You need a script/program that processes hd-idle's output on the fly; e.g. line by line, while hd-idle is still running. You could do this with a while read loop:
#! /bin/bash
regex='(sd[a-z])\s(spin(down|up))'
while IFS= read -r line; do
    [[ $line =~ $regex ]] || continue
    model=$(lsblk /dev/"${BASH_REMATCH[1]}" -n -o MODEL)
    serial=$(lsblk /dev/"${BASH_REMATCH[1]}" -n -o SERIAL)
    printf '%(%Y-%m-%d %H:%M:%S)T\t%s_%s (%s)\t%s\n' \
        "$model" "$serial" "${BASH_REMATCH[1]}" "${BASH_REMATCH[2]}"
done >> /home/linux/hd_idle_logger.log
However, it would be more efficient to switch to tools like sed or awk, and to pre-compute the list of serial numbers or look up the required information in the /sys/block file system, so that you don't have to execute lsblk for each line.
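That pre-computation could look something like this - a sketch, assuming lsblk's NAME/MODEL/SERIAL columns and whitespace-free model strings (models containing spaces would need lsblk -P and extra parsing):

```shell
#!/bin/bash
# Build a device -> "MODEL_SERIAL" table once at startup, so lsblk is
# not executed again for every line hd-idle prints.
declare -A label

build_labels() {
    local name model serial
    while read -r name model serial; do
        label[$name]="${model}_${serial}"
    done < <(lsblk -d -n -o NAME,MODEL,SERIAL 2>/dev/null)
}

# Read hd-idle's output line by line and log matching events.
log_spin_events() {
    local line regex='(sd[a-z])[[:space:]](spin(down|up))'
    while IFS= read -r line; do
        [[ $line =~ $regex ]] || continue
        printf '%(%Y-%m-%d %H:%M:%S)T\t%s (%s)\t%s\n' -1 \
            "${label[${BASH_REMATCH[1]}]:-unknown}" \
            "${BASH_REMATCH[1]}" "${BASH_REMATCH[2]}"
    done
}

# Usage:
#   build_labels
#   sudo /usr/sbin/hd-idle -i 10800 |& log_spin_events >> hd_idle_logger.log
```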

Related

watch dmesg, exit after first occurrence

I have a script which watches dmesg and kills a process after a specific log message appears:
#!/bin/bash
while sleep 1;
do
    # dmesg -w | grep --max-count=1 -q 'protocol'
    dmesg -w | sed '/protocol/Q'
    mkdir -p /home/user/dmesg/
    eval "dmesg -T > /home/user/dmesg/dmesg-`date +%d_%m_%Y-%H:%M`.log";
    eval "dmesg -c";
    pkill -x -9 programm
done
The problem is that both sed and grep only trigger after two messages, so the script does not continue after a single message.
Is there anything I am missing?
You have a script that periodically executes dmesg. Instead, write a script that watches the output of dmesg.
dmesg -w | while IFS= read -r line; do
    case "$line" in
        *protocol*)
            echo "do something when line has protocol"
            ;;
    esac
done
Consider reading https://mywiki.wooledge.org/BashFAQ/001 .
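Applied to the original goal (act on the first occurrence, then stop), the loop body can break after the match. A sketch, with a hypothetical watch_for helper and echo standing in for the real cleanup commands (saving dmesg, pkill, ...):

```shell
#!/bin/bash
# Read lines from stdin and act on the first line containing the pattern.
watch_for() {
    local pattern=$1 line
    while IFS= read -r line; do
        case "$line" in
            *"$pattern"*)
                echo "matched: $line"   # real script: save dmesg, pkill, ...
                break                   # stop after the first occurrence
                ;;
        esac
    done
}

# Usage: dmesg -w | watch_for protocol
```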

how to break from a while loop that reads a file in bash

I was trying to break out of a loop that reads a file by pressing
Ctrl+C. Instead, it only stopped the current iteration, not
the whole loop.
How can I stop it completely instead of having to press Ctrl+C for every iteration?
The whole script can be found here
An example file looks like this:
echo -e "SRR7637893\nSRR7637894\nSRR7637895\nSRR7637896" > filenames.txt
The specific code chunk that probably causes the issue is the while loop here (set -xv; was added afterwards, as suggested by markp-fuso in the comments):
set -xv;
while read -r line; do
    echo "Now downloading "${line}"\n"
    docker run --rm -v "$OUTPUT_DIR":/data -w /data inutano/sra-toolkit:v2.9.2 fasterq-dump "${line}" -t /data/shm -e $PROCESSORS
    if [[ -s $OUTPUT_DIR/${line}.fastq ]]; then
        echo "Using pigz on ${line}.fastq"
        pigz --best $OUTPUT_DIR/"${line}*.fastq"
    else
        echo "$OUTPUT_DIR/${line}.fastq not found"
    fi
done < "$INPUT_txt"; set +xv
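No answer is shown above, but a common approach (an assumption, not from the thread) is to stop the loop when the per-line command is killed by a signal - a process interrupted by Ctrl+C exits with status 128+SIGINT, i.e. 130. A sketch with a generic command in place of the docker run:

```shell
#!/bin/bash
# Run "$@" once per input line; abort the whole loop if the command
# is killed by a signal (exit status >= 128, e.g. 130 for Ctrl+C).
process_all() {
    local line status
    while IFS= read -r line; do
        "$@" "$line" && status=0 || status=$?
        if (( status >= 128 )); then
            echo "command interrupted (status $status) - stopping loop"
            return "$status"
        fi
    done
}

# Usage: process_all download_one < filenames.txt
# where download_one is a function wrapping the docker run ... fasterq-dump call.
```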

Need to print current CPU usage and Memory usage in file continuously

I have prepared the script below, but it's not adding any data to the output file.
My intention is to get the current CPU usage and memory usage and print them to a log file.
What is wrong with my script? I will run it on a CentOS machine.
#!/usr/bin/bash
HOSTNAME=$(hostname)
mkdir -p /root/scripts
LOGFILE=/root/scripts/xcpuusagehistory.log
touch $LOGFILE
a=0;
b=1;
while [ "$a" -lt "$b" ]
do
    CPULOAD=`top -d10 | grep "Cpu(s)"`
    echo "$CPULOAD on Host $HOSTNAME" >> $LOGFILE
done
log_file=/root/scripts/xcpuusagehistory.log

while true
do
    cpu_load="$(top -b -n1 -d10 | grep "Cpu(s)")"
    echo "$cpu_load on Host $HOSTNAME" >> "$log_file"
    sleep 1
done
See top batch mode (-b) in the man page.
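A sketch extending that idea to also cover memory, since the question asks for both (the free/awk parsing and the snapshot helper are assumptions, not from the answer):

```shell
#!/bin/bash
log_file=/root/scripts/xcpuusagehistory.log   # same path as in the question

# One log line with a timestamp, the CPU summary from top's batch mode,
# and used/total memory from free(1).
snapshot() {
    local cpu mem
    cpu=$(top -b -n1 | grep "Cpu(s)" || true)
    mem=$(free -m | awk '/^Mem:/ {printf "mem used: %s MiB of %s MiB", $3, $2}')
    printf '%s | %s | %s on Host %s\n' "$(date '+%F %T')" "$cpu" "$mem" "$HOSTNAME"
}

# Usage: while true; do snapshot >> "$log_file"; sleep 1; done
```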

Bash script? Manipulating & graphing collected data from a csv file

I'm trying to write a script that can detect my presence at home. So far I've written a script that outputs data from hcitool lescan into a csv file in the following format:
TIMESTAMP MAC_ADDRESS_1 MAC_ADDRESS_2 AD_INFINITUM
2018-09-22.11:48:34 FF:FF:FF:FF:FF:FF FF:FF:FF:FF:FF:FF FF:FF:FF:FF:FF:FF
I'm trying to figure out how to write a script to convert the data into a graphable format - is gnuplot the program to be used for this? I guess this would require a bash(?) script that imports the csv file keeping all timestamps, adds a new column for each unique MAC address, and populates the entries with a 1 or 0 depending on whether the MAC address is detected on each line. Are there any built-in commands that can do/help with this, or would I have to script it myself?
The code I used to generate the .csv is here. Sorry, it's probably not the prettiest, as I've only just started with bash scripting.
cd /home/pi/projects/bluetooth_control;
while true
do
    echo 'reset hci0';
    sudo hciconfig hci0 down;
    sudo hciconfig hci0 up;
    echo 'timestamp';
    echo `date +%Y-%m-%d.%H:%M:%S` &> test1.csv;
    echo 'running scan';
    (sudo timeout 20 stdbuf -oL hcitool lescan | grep -Eo '(([A-Z]|[0-9]){2}:){5}([A-Z]|[0-9]){2}') &> test.csv;
    echo 'removing duplicates to test.csv';
    (sort test.csv | uniq) >> test1.csv;
    (paste -s test1.csv) >> data.csv;
    echo 'sleep for 60s';
    sleep 60;
done
I've had time to play around, and in the interest of completing the answer, here is the solution I came up with. I'm not sure how efficient it is to run it in Bash vs. Python, but here goes:
#!/bin/bash
cd /home/pi/projects/bluetooth_control;
while true
do
    echo 'reset hci0';
    sudo hciconfig hci0 down;
    sudo hciconfig hci0 up;
    echo 'timestamp';
    # Create necessary temp files
    echo "temp" &> test1.csv;
    echo `date +%Y-%m-%d.%H:%M:%S` &> test2.csv;
    echo 'running scan';
    # Filter out only MAC addresses
    (sudo timeout 20 stdbuf -oL hcitool lescan | grep -Eo '(([A-Z]|[0-9]){2}:){5}([A-Z]|[0-9]){2}') &> /home/pi/projects/bluetooth_control/test.csv;
    echo 'removing duplicates to test.csv';
    # Append each unique value to test1.csv
    (sort test.csv | uniq) >> test1.csv;
    # For each line in test1.csv, add it to mac_database if it isn't there yet
    while read -r line
    do
        grep -q -F "$line" mac_database || echo "$line" >> mac_database
    done <test1.csv
    # For each line in mac_database, check whether it was seen in this scan
    while read -r line
    do
        # If $line from mac_database exists in test1.csv, then
        if grep -Fxq "$line" test1.csv
        then
            echo '1' >> test2.csv
        else
            echo '0' >> test2.csv
        fi
    done <mac_database
    # Convert file to csv format, and append to data.csv
    (paste -s test2.csv) >> data.csv;
    echo 'sleep for 60s';
    sleep 60;
done
Hopefully this helps whoever might choose to do this in the future.

Why does bash script stop working

The script monitors incoming HTTP messages and forwards them to a monitoring application called Zabbix. It works fine; however, after about 1-2 days it stops working. Here's what I know so far:
Using pgrep I see the script is still running
The logfile gets updated properly (first command of the script)
The FIFO pipe seems to be working
The problem must be somewhere in the WHILE loop or the tail command.
I'm new to scripting, so maybe someone can spot the problem right away?
#!/bin/bash
tcpflow -p -c -i enp2s0 port 80 | grep --line-buffered -oE 'boo.php.* HTTP/1.[01]' >> /usr/local/bin/logfile &
pipe=/tmp/fifopipe
trap "rm -f $pipe" EXIT
if [[ ! -p $pipe ]]; then
    mkfifo $pipe
fi
tail -n0 -F /usr/local/bin/logfile > /tmp/fifopipe &
while true
do
    if read line <$pipe; then
        unset sn
        for ((c=1; c<=3; c++)) # c is no of max parameters x 2 + 1
        do
            URL="$(echo $line | awk -F'[ =&?]' '{print $'$c'}')"
            if [[ "$URL" == 'sn' ]]; then
                ((c++))
                sn="$(echo $line | awk -F'[ =&?]' '{print $'$c'}')"
            fi
        done
        if [[ "$sn" ]]; then
            hosttype="US2G_"
            host=$hosttype$sn
            zabbix_sender -z nuc -s $host -k serial -o $sn -vv
        fi
    fi
done
You're inputting from the fifo incorrectly. By writing:
while true; do read line < $pipe ....; done
you are closing and reopening the fifo on each iteration of the loop. The first time you close it, the producer to the pipe (the tail -f) gets a SIGPIPE and dies. Change the structure to:
while true; do read line; ...; done < $pipe
Note that every process inside the loop now has the potential to inadvertently read from the pipe, so you'll probably want to explicitly close stdin for each.
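A sketch of the answer's corrected structure (process_line is a hypothetical placeholder for the awk/zabbix_sender body; the </dev/null redirect is the "explicitly close stdin" part):

```shell
#!/bin/bash
pipe=/tmp/fifopipe

run_monitor() {
    local line
    [[ -p $pipe ]] || mkfifo "$pipe"
    trap 'rm -f "$pipe"' EXIT
    tail -n0 -F /usr/local/bin/logfile > "$pipe" &
    # The fifo is opened once, for the whole loop, so the tail writer
    # never sees it closed between reads (no SIGPIPE).
    while IFS= read -r line; do
        process_line "$line" </dev/null   # don't let the body drain the fifo
    done < "$pipe"
}
```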
