Script won't work in the background - bash

I'm taking my first steps in Linux and shell scripting. I wrote a small script that should alert me when my laptop's battery is running low.
It works in the foreground but, for some reason, not in the background. It prints:
do_connect: could not connect to socket
connect: No such file or directory
Failed to open LIRC support. You will not be able to use your remote control
The script code is the following:
#!/bin/bash
perc=`upower -i $(upower -e | grep BAT) | grep percentage | cut -c26- | cut -c -2`
state=`upower -i $(upower -e | grep BAT) | grep state | cut -c26-`
while true; do
    while [[ $perc -gt 20 ]]; do
        sleep 300
    done
    while [[ $state = 'discharging' ]]; do
        mplayer /root/scripts/sad.ogg
        sleep 120
    done
    while [[ $perc -le 20 ]]; do
        sleep 300
    done
done
Will greatly appreciate any advice!

I think your problem is with mplayer: the do_connect error says it's trying to open a socket to LIRC. It should work if you add the following to your
$HOME/.mplayer/config
lirc=no
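Separately, note that the script samples perc and state only once, before the loop, so the loop conditions can never change while it runs. A sketch of a version that re-reads them on every pass (the awk parsing is an assumption about upower's output format; verify it against your machine):

```shell
#!/bin/bash
# Sketch: re-read the battery values on every iteration instead of
# caching them once before the loop.
battery_info() {
    # Prints "percentage state", e.g. "37 discharging"
    upower -i "$(upower -e | grep BAT)" |
        awk '/percentage/ {gsub("%","",$2); p=$2} /state/ {s=$2} END {print p, s}'
}

monitor_battery() {
    while true; do
        read -r perc state < <(battery_info)
        if [[ $state == discharging && $perc -le 20 ]]; then
            mplayer /root/scripts/sad.ogg
        fi
        sleep 300
    done
}
# Run in the background with: monitor_battery &
```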

Related

Why does bash script stop working

The script monitors incoming HTTP messages and forwards them to a monitoring application called Zabbix. It works fine; however, after about 1-2 days it stops working. Here's what I know so far:
Using pgrep I can see the script is still running
The logfile gets updated properly (first command of the script)
The FIFO pipe seems to be working
So the problem must be somewhere in the while loop or the tail command.
I'm new at scripting, so maybe someone can spot the problem right away?
#!/bin/bash
tcpflow -p -c -i enp2s0 port 80 | grep --line-buffered -oE 'boo.php.* HTTP/1.[01]' >> /usr/local/bin/logfile &

pipe=/tmp/fifopipe
trap "rm -f $pipe" EXIT

if [[ ! -p $pipe ]]; then
    mkfifo $pipe
fi

tail -n0 -F /usr/local/bin/logfile > /tmp/fifopipe &

while true
do
    if read line <$pipe; then
        unset sn
        for ((c=1; c<=3; c++))   # c is no of max parameters x 2 + 1
        do
            URL="$(echo $line | awk -F'[ =&?]' '{print $'$c'}')"
            if [[ "$URL" == 'sn' ]]; then
                ((c++))
                sn="$(echo $line | awk -F'[ =&?]' '{print $'$c'}')"
            fi
        done
        if [[ "$sn" ]]; then
            hosttype="US2G_"
            host=$hosttype$sn
            zabbix_sender -z nuc -s $host -k serial -o $sn -vv
        fi
    fi
done
You're reading from the fifo incorrectly. By writing:
while true; do read line < $pipe ...; done
you close and reopen the fifo on each iteration of the loop. The first time you close it, the producer writing to the pipe (the tail -F) gets a SIGPIPE and dies. Change the structure to:
while true; do read line; ...; done < $pipe
Note that every command inside the loop now has the potential to inadvertently read from the pipe, so you'll probably want to explicitly redirect stdin (e.g. from /dev/null) for each.
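A minimal, self-contained sketch of the corrected structure (the printf stands in for the tail -F producer; the paths are examples):

```shell
#!/bin/bash
# Open the FIFO once for the whole loop, not once per iteration,
# so the producer never sees its reader disappear.
pipe=/tmp/fifopipe.$$
trap 'rm -f "$pipe"' EXIT
mkfifo "$pipe"

printf 'one\ntwo\n' > "$pipe" &   # stand-in producer

while read -r line; do
    echo "got: $line"             # real commands should redirect stdin: cmd </dev/null
done < "$pipe"
```

This prints "got: one" and "got: two", then exits when the stand-in producer closes the pipe; with tail -F the producer never closes it, so the loop runs indefinitely.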

Looking into bash script to log SSH activity

I'm seeing some suspicious SSH activity, apparently originating from my computer (OSX Sierra)... for this reason I'm trying to determine why, and more specifically from where, this is happening.
I'm basically looking for something to track ssh calls. The following seems to work to reveal which process (PID) makes the call; I chose to check every 15 seconds (perhaps this should be even lower):
lsof -r 15 -i -a -c ssh
For each PID found I would then like to run ps -fp <PID> for information about the program that is making these ssh requests.
I'd like to automate this (run ps -fp for any ssh activity found) and log the resulting information.
I have no real experience writing scripts, so if anyone could help me make this possible it would be greatly appreciated.
Hmm, not sure if this will work on a Mac, but this may get you started:
while [[ 1 ]] ; do echo "## $(date) ##" ; S_PIDS=$(lsof -i -a -c ssh | awk '/ssh/ {print $2}') ; ps -fp ${S_PIDS} ; sleep 15 ; done
Or, to log the info:
while [[ 1 ]] ; do echo "## $(date) ##" ; S_PIDS=$(lsof -i -a -c ssh | awk '/ssh/ {print $2}') ; ps -fp ${S_PIDS} ; sleep 15 ; done | tee /tmp/ssh.log
:)
Dale
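The same loop as a small script, should that be easier to keep around; a sketch assuming the stock macOS lsof and ps (lsof -t prints bare PIDs, which saves the awk step):

```shell
#!/bin/sh
# Sketch: log the details of any process named ssh with an open network socket.
log_ssh_activity() {
    printf '## %s ##\n' "$(date)"
    pids=$(lsof -t -i -a -c ssh 2>/dev/null)   # -t: terse output, PIDs only
    [ -n "$pids" ] && ps -fp $pids             # $pids unquoted on purpose: one arg per PID
    return 0
}

# Poll every 15 seconds, appending to an example log path:
# while sleep 15; do log_ssh_activity; done >> /tmp/ssh.log
```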

How to get the correct number of background jobs running, f.ex. $(jobs | wc -l | xargs) returns 1 instead of 0

I have a Jenkins build job that starts processes in the background. I need to write a function that checks whether there are still background processes running. To test it I came up with this:
#!/bin/bash -e

function waitForUploadFinish() {
    runningJobs=$(jobs | wc -l | xargs)
    echo "Waiting for ${runningJobs} background upload jobs to finish"
    while [ "$(jobs | wc -l | xargs)" -ne 0 ]; do
        echo "$(jobs | wc -l | xargs)"
        echo -n "." # no trailing newline
        sleep 1
    done
    echo ""
}

for i in {1..3}
do
    sleep $i &
done

waitForUploadFinish
The problem is that the count never comes down to 0. Even when the last sleep is done, one job still appears to be running:
mles-MBP:ionic mles$ ./jobs.sh
Waiting for 3 background upload jobs to finish
3
.2
.1
.1
.1
.1
.1
.1
Why I don't want to use wait here
In the Jenkins build job script this snippet is for, I'm starting background upload processes for huge files. They don't run for 3 seconds like the sleep example here; they can take up to 30 minutes. If I used wait, the user would see something like this in the log:
upload huge_file1.ipa &
upload huge_file2.ipa &
upload huge_file3.ipa &
wait
They would wonder why nothing is going on.
Instead I want to implement something like this:
upload huge_file1.ipa &
upload huge_file2.ipa &
upload huge_file3.ipa &
Waiting for 3 background upload jobs to finish
............
Waiting for 2 background upload jobs to finish
.................
Waiting for 1 background upload jobs to finish
.........
Upload done
That's why I need the loop with the current running background jobs.
This fixes it:
function waitForUploadFinish() {
    runningJobs=$(jobs | wc -l | xargs)
    echo "Waiting for ${runningJobs} background upload jobs to finish"
    while [ "$(jobs -r | wc -l | tr -d ' ')" != 0 ]; do
        jobs -r | wc -l | tr -d ' '
        echo -n "." # no trailing newline
        sleep 1
    done
    echo ""
}
Note: this only counts background processes started by this bash script; you will not see background processes from the invoking shell.
As gniourf_gniourf commented: if you only need to wait and don't need the output, then a simple wait after the sleeps is much simpler.
for i in {1..3}; do
    sleep $i &
done
wait
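If the progress output is the only objection to wait, note that bash 4.3+ also has wait -n, which blocks until any single job exits; a sketch combining it with the count (not from either answer, so treat it as untested against the real Jenkins job):

```shell
#!/bin/bash
# Sketch: wait -n (bash 4.3+) returns as each background job finishes,
# so the remaining count can be reported without polling every second.
for i in 1 2 3; do
    sleep "$i" &
done

remaining=$(jobs -rp | wc -l | tr -d ' ')
while (( remaining > 0 )); do
    echo "Waiting for ${remaining} background upload jobs to finish"
    wait -n                                 # block until any one job exits
    remaining=$(jobs -rp | wc -l | tr -d ' ')
done
echo "Upload done"
```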
Please consider the comments made by gniourf_gniourf, as your design is not good to start with.
However, despite a much simpler and more efficient solution being possible, there is still the question of why you are seeing what you are seeing.
I modified the first line of your loop, like so:
while [ "$(jobs | tee >(cat >&2) | wc -l | xargs)" -ne 0 ];do
The tee command takes its input and sends it both to standard output and to the file passed as an argument. >(cat >&2) is process-substitution syntax that, put simply, hands tee a file name; that file is really a named FIFO, and anything written to it is forwarded to standard error. This bypasses the pipeline, letting you see what jobs is emitting while the rest of the pipeline operates normally.
If you do that, you will notice that the "job" that jobs keeps on returning is not a job, but a message stating that some other job has finished. I do not know why it keeps repeating that, but this is the cause of the problem.
You could replace:
while [ "$(jobs | wc -l | xargs)" -ne 0 ]; do
with:
while [ "$(jobs -p | grep "^[0-9]" | wc -l | xargs)" -ne 0 ]; do
This will cause jobs to echo PIDs, and filter out any line that does not begin with a number, so messages will not be counted.
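The difference is easy to reproduce; a minimal sketch (the exact "Done" text varies by bash version):

```shell
#!/bin/bash
# Sketch: after a background job exits, plain `jobs` still emits a
# "Done" status line (which `wc -l` happily counts), while `jobs -r`
# lists running jobs only.
sleep 0.2 &
sleep 0.5                  # let the background job finish

jobs                       # typically still prints a "Done  sleep 0.2" line
echo "running: $(jobs -r | wc -l | tr -d ' ')"
```

Here the final line reports 0 running jobs even though the unfiltered jobs output is non-empty.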

netcat inside a while read loop returning immediately

I am making a menu for myself, because sometimes I need to search for (or nmap) a particular port.
I want the menu entry to behave the same as running the command directly on the command line.
Here is a piece of my code:
nmap $1 | grep open | while read line; do
    serviceport=$(echo $line | cut -d' ' -f1 | cut -d'/' -f1)
    if [ $i -eq $choice ]; then
        echo "Running command: netcat $1 $serviceport"
        netcat $1 $serviceport
    fi
    i=$(($i+1))
done
It closes immediately after nmap has finished scanning.
Don't use FD 0 (stdin) for both your read loop and netcat. If you don't distinguish these streams, netcat can consume content emitted by the nmap | grep pipeline rather than leaving that content to be read by read.
This has a few undesirable effects: Further parts of the while/read loop don't get executed, and netcat sees a closed stdin stream and exits when the pipeline's contents are consumed (so you don't get interactive control of netcat, if that's what you're trying to accomplish). An easy way to work around this issue is to feed the output of your nmap pipeline in on a non-default file descriptor; below, I'm using FD 3.
There's a lot wrong with this code beyond the scope of the question, so please don't consider the parts I've copied-and-pasted an endorsement, but:
while read -r -u 3 line; do
serviceport=${line%% *}; serviceport=${serviceport%%/*}
if [ "$i" -eq "$choice" ]; then
echo "Running command: netcat $1 $serviceport"
netcat "$1" "$serviceport"
fi
done 3< <(nmap "$1" | grep open)
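The FD 3 pattern in isolation, as a self-contained sketch:

```shell
#!/bin/bash
# Sketch: the loop reads its lines from FD 3, leaving FD 0 (stdin) free
# for whatever runs in the loop body (netcat, in the question's case).
while read -r -u 3 line; do
    echo "got: $line"      # an interactive command here can still use stdin
done 3< <(printf '80/tcp\n443/tcp\n')
```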

bash script inside here document not behaving as expected

Here is a minimal test case which fails
#!/bin/tcsh
# here is some code in tcsh I did not write which spawns many processes.
# let us pretend that it spawns 100 instances of stupid_test, which the user
# kills manually after an indeterminate period
/bin/bash <<EOF
#!/bin/bash
while true
do
    if [[ `ps -e | grep stupid_test | wc -l` -gt 0 ]]
    then
        echo 'test program is still running'
        echo `ps -e | grep stupid_test | wc -l`
        sleep 10
    else
        break
    fi
done
EOF
echo 'test program finished'
The stupid_test program consists of:
#!/bin/bash
while true; do sleep 10; done
The intended behavior is to run until stupid_test is killed (in this case manually by the user) and then terminate within the next ten seconds. The observed behavior is that the script does not terminate: ps -e | grep stupid_test | wc -l keeps evaluating to 1 even after the program has been killed (and no longer shows up under ps).
If the bash script is run directly, rather than through a here document, the intended behavior is recovered.
I feel like I am doing something very stupid; I am not the most experienced shell hacker. Why is it doing this?
Usually when you try to grep the name of a process, you get an extra matching line for grep itself, for example:
$ ps xa | grep something
57386 s002 S+ 0:00.01 grep something
So even when there is no matching process, you will get one matching line. You can fix that by adding a grep -v grep in the pipeline:
ps -e | grep stupid_test | grep -v grep | wc -l
As tripleee suggested, an even better fix is to write the grep like this:
ps -e | grep '[s]tupid_test'
The pattern matches exactly the same process names, but this way it no longer matches grep itself, because grep's own command line contains the literal string [s]tupid_test, which does not match the regular expression [s]tupid_test.
By the way, I would rewrite your script like this, which is cleaner:
/bin/bash <<'EOF'
while :; do
    s=$(ps -e | grep '[s]tupid_test')
    test "$s" || break
    echo test program is still running
    echo "$s"
    sleep 10
done
EOF
Or a lazier but perhaps sufficient variant (hinted by bryn):
/bin/bash <<'EOF'
while ps -e | grep '[s]tupid_test'
do
    echo test program is still running
    sleep 10
done
EOF
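As a further alternative (not part of the answer above): pgrep never appears in its own results, so it sidesteps the self-match problem entirely; a sketch, assuming pgrep is available and that -x matches the exact process name:

```shell
#!/bin/bash
# Sketch: pgrep -x matches processes named exactly stupid_test and,
# unlike ps | grep, never matches itself.
while pgrep -x stupid_test >/dev/null; do
    echo 'test program is still running'
    sleep 10
done
echo 'test program finished'
```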
