Difference in behavior between shell and script - bash

I have a set of commands that I am attempting to run in a script. To be exact, the lines are
rm tmp_pipe
mkfifo tmp_pipe
python listen_pipe.py &
while [ true ]; do nc -l -w30 7036 >>tmp_pipe; done &
listen_pipe.py is simply
if __name__ == "__main__":
    f = open("tmp_pipe")
    vals = " "
    while "END" not in vals:
        vals = f.readline()
        if len(vals) > 0:
            print(vals)
        else:
            f = open("tmp_pipe")
If I run the commands in the order shown, I get my desired output, which is a connection to an ESP device that streams motion data. The connection resets after 30 seconds if the ESP device leaves the network range or if the device is turned off. The Python script continues to read from the pipe and does not terminate when the TCP connection is reset. However, if I run this code inside a script file, nc fails to connect and the device remains in an unconnected state indefinitely. The script is just
#!/bin/bash
rm tmp_pipe
mkfifo tmp_pipe
python listen_pipe.py &
while [ true ]; do nc -l -w30 7036 >>tmp_pipe; done &
This is being run on Ubuntu 16.04. Any suggestions are greatly welcomed; I have been fighting with this code all day. Thanks,
Ian
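For reference, a slightly more defensive version of the same launcher (a sketch, using the same filenames and port as above; it does not change the behaviour, it only fails loudly if the pipe cannot be created):
#!/bin/bash
# Recreate the FIFO, aborting if that is not possible.
rm -f tmp_pipe
mkfifo tmp_pipe || { echo "could not create tmp_pipe" >&2; exit 1; }
# Start the reader first, then keep re-listening on port 7036.
python listen_pipe.py &
while true; do
    nc -l -w30 7036 >> tmp_pipe
done &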

Related

How do I combine a bash hookscript with a Perl hookscript for use with Proxmox?

I am trying to use Proxmox VE as the hypervisor for running VMs.
In one of my VMs, I have a hookscript that is written for the bash shell:
#!/bin/bash
if [ $2 == "pre-start" ]
then
    echo "gpu-hookscript: Resetting GPU for Virtual Machine $1"
    echo 1 > /sys/bus/pci/devices/0000\:01\:00.0/remove
    echo 1 > /sys/bus/pci/rescan
fi
which is to help with enabling GPU passthrough.
And then I have another hookscript that is written in Perl, which enables virtio-fs:
#!/usr/bin/perl
# Example hook script for PVE guests (hookscript config option)
# You can set this via pct/qm with
# pct set <vmid> -hookscript <volume-id>
# qm set <vmid> -hookscript <volume-id>
# where <volume-id> has to be an executable file in the snippets folder
# of any storage with directories e.g.:
# qm set 100 -hookscript local:snippets/hookscript.pl

use strict;
use warnings;

print "GUEST HOOK: " . join(' ', @ARGV) . "\n";

# First argument is the vmid
my $vmid = shift;

# Second argument is the phase
my $phase = shift;

if ($phase eq 'pre-start') {
    # First phase 'pre-start' will be executed before the guest
    # is started. Exiting with a code != 0 will abort the start
    print "$vmid is starting, doing preparations.\n";
    system('/var/lib/vz/snippets/launch-virtio-daemon.sh');
    # print "preparations failed, aborting."
    # exit(1);
} elsif ($phase eq 'post-start') {
    # Second phase 'post-start' will be executed after the guest
    # successfully started.
    print "$vmid started successfully.\n";
} elsif ($phase eq 'pre-stop') {
    # Third phase 'pre-stop' will be executed before stopping the guest
    # via the API. Will not be executed if the guest is stopped from
    # within e.g., with a 'poweroff'
    print "$vmid will be stopped.\n";
} elsif ($phase eq 'post-stop') {
    # Last phase 'post-stop' will be executed after the guest stopped.
    # This should even be executed in case the guest crashes or stopped
    # unexpectedly.
    print "$vmid stopped. Doing cleanup.\n";
} else {
    die "got unknown phase '$phase'\n";
}

exit(0);
What would be the best way for me to combine these two files into a single format, so that I can use it as a hookscript in Proxmox?
I tried reading the thread here about how to convert a bash shell script to Perl, and not being a programmer, admittedly, I didn't understand what I was reading.
I appreciate the team's help in educating a non-programmer.
Thank you.
Before the existing line
system('/var/lib/vz/snippets/launch-virtio-daemon.sh');
in the Perl hookscript, insert:
system('echo 1 > /sys/bus/pci/devices/0000\:01\:00.0/remove');
system('echo 1 > /sys/bus/pci/rescan');
If your original code above had evaluated the return codes of these commands when calling them from Perl (it does not):
echo 1 > /sys/bus/pci/devices/0000\:01\:00.0/remove
echo 1 > /sys/bus/pci/rescan
you could apply the solutions from:
Getting Perl to return the correct exit code
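If the failure handling were instead wanted on the bash side, the original bash hookscript could be extended with explicit exit-on-failure (a sketch; only the || exit 1 checks are additions):
#!/bin/bash
if [ "$2" == "pre-start" ]; then
    echo "gpu-hookscript: Resetting GPU for Virtual Machine $1"
    # Abort the guest start if either write fails.
    echo 1 > /sys/bus/pci/devices/0000\:01\:00.0/remove || exit 1
    echo 1 > /sys/bus/pci/rescan || exit 1
fi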

Make Bash Script Recognize It Is Already Existent

Question
How can I instruct my bash script not to attempt to re-connect to my rsync daemon if the process lock file already exists (to prevent the script from attempting to create new connections indefinitely after the first connection has already been made)?
This is an example of one of my rsync-daemon wrapper scripts:
#!/bin/sh
#
#
while [ 1 ]
do
    cputool --load-limit 7.5 -- nice -n -15 rsync -avxP --no-i-r --rsync-path="rsync" --log-file=/var/log/rsync-home.log --exclude 'snap' --exclude 'lost+found' --exclude=".*" --exclude=".*/" 127.0.0.1::home /media/username/external/home-files-only && sync && echo 3 > /proc/sys/vm/drop_caches
    if [ "$?" = "0" ] ; then
        echo "rsync completed normally"
        exit
    else
        echo "Rsync failure. Backing off and retrying..."
        sleep 10
    fi
done
#end of shell script
This is my /etc/rsyncd.conf:
[home]
path = /home/username
list = yes
use chroot = false
strict modes = false
uid = root
gid = root
read only = yes
# Data source information
max connections = 1
lock file = /var/run/rsyncd-home.lock
[prod-bkup]
path = /media/username/external/Server-Backups/Prod/today
list = yes
use chroot = false
strict modes = false
uid = root
gid = root
# Don't allow to modify the source files
read only = yes
max connections = 1
lock file = /var/run/rsyncd-prod-bkup.lock
[test-bkup]
path = /media/username/external/Server-Backups/Test/today
list = yes
use chroot = false
strict modes = false
uid = root
gid = root
# Don't allow to modify the source files
read only = yes
max connections = 1
lock file = /var/run/rsyncd-test-bkup.lock
[VminRoot2]
path = /root/VDI-Files
list = yes
use chroot = false
strict modes = false
uid = root
gid = root
# Don't allow to modify the source files
read only = yes
max connections = 1
lock file = /var/run/rsyncd-VminRoot2.lock
Thanks to @james-brown, I now have multiple ways to ensure my script runs once, correctly...
Solution 1 (quick & dirty):
flock -n <lock file> <script>
Or in my case, using this command to execute my cron job:
flock -n /var/run/rsyncd-home.lock /path/to/my_script.sh
Caveat: this leaves your script vulnerable to stale lock files that may prevent execution on the next time interval.
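A related pattern (a sketch, assuming the same lock file path) takes the lock inside the script itself through a file descriptor, so the cron entry can call the script directly without the flock wrapper:
#!/bin/bash
# Open the lock file on file descriptor 200 and try to take an
# exclusive, non-blocking lock; exit if another instance holds it.
exec 200>/var/run/rsyncd-home.lock
flock -n 200 || { echo "Another instance is already running, exiting"; exit 1; }
# ... the rsync while-loop from the wrapper script above goes here ...
The lock is tied to the open file descriptor, so it is released automatically when the script exits.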
Solution 2:
So, I used a bullet-proof method (so I think... I invite folks to correct my understanding, if need be)...
First, I ran apt install procmail (which provides the lockfile command), then commented out the two lines below in my /etc/rsyncd.conf and ran systemctl restart rsync:
#max connections = 1
#lock file = /var/run/rsyncd-home.lock
From there I edited /usr/local/bin/backupscript.sh as follows:
#!/bin/bash
#
LOCK=/var/run/rsyncd-home.lock
remove_lock()
{
    rm -f "$LOCK"
}

another_instance()
{
    echo "There is another instance running, exiting"
    exit 1
}

lockfile -r 0 -l 3600 "$LOCK" || another_instance
trap remove_lock EXIT

#new using rsyncd & perpetual restart
while [ 1 ]
do
    cputool --load-limit 7.5 -- nice -n -15 rsync -avxP --no-i-r --rsync-path="rsync" --log-file=/var/log/rsync-home.log --exclude 'snap' --exclude="Variety Images" --exclude="Downloads/WebDev/Vmin-Vbox" --exclude 'Downloads/WebDev/Win10-Vbox' --exclude="Videos/other" --exclude 'lost+found' --exclude=".*" --exclude=".*/" 127.0.0.1::home /media/username/external/home-files-only && sync && echo 3 > /proc/sys/vm/drop_caches
    if [ "$?" = "0" ] ; then
        echo "rsync completed normally"
        exit
    else
        echo "Rsync failure. Backing off and retrying..."
        sleep 10
    fi
done
#end of shell script
PRESTO:
The script will only connect to the rsync daemon once; it will re-connect on dropped connections thanks to the while loop, and there is no danger of stale lock files interrupting my backup process at future intervals (i.e. problem solved).
Very useful reference:
https://www.baeldung.com/linux/bash-ensure-instance-running

Change back DPI settings in a bash script

I would like to run a program that does not properly support my desired resolution+DPI settings.
Also I want to change my default GTK theme to a lighter one.
What I currently have:
#!/bin/bash
xfconf-query -c xsettings -p /Xft/DPI -s 0
GTK_THEME=/usr/share/themes/Adwaita/gtk-2.0/gtkrc /home/unknown/scripts/ch_resolution.py --output DP-0 --resolution 2560x1440 beersmith3
This sets my DPI settings to 0, changes the gtk-theme, runs a python script that changes my resolution and runs the program, and on program exit changes it back. This is working properly.
Now I want to change back my DPI settings to 136 on program exit
xfconf-query -c xsettings -p /Xft/DPI -s 136
My guess is that I need to use a while loop, but I have no idea how to do it.
ch_resolution.py
#!/usr/bin/env python3
import argparse
import re
import subprocess
import sys

parser = argparse.ArgumentParser()
parser.add_argument('--output', required=True)
parser.add_argument('--resolution', required=True)
parser.add_argument('APP')
args = parser.parse_args()

device_context = ''    # track what device's modes we are looking at
modes = []             # keep track of all the devices and modes discovered
current_modes = []     # remember the user's current settings

# Run xrandr and ask it what devices and modes are supported
xrandrinfo = subprocess.Popen('xrandr -q', shell=True, stdout=subprocess.PIPE)
output = xrandrinfo.communicate()[0].decode().split('\n')

for line in output:
    # luckily the various data from xrandr are separated by whitespace...
    foo = line.split()

    # Check to see if the second word in the line indicates a new context
    # -- if so, keep track of the context of the device we're seeing
    if len(foo) >= 2:  # throw out any weirdly formatted lines
        if foo[1] == 'disconnected':
            # we have a new context, but it should be ignored
            device_context = ''

        if foo[1] == 'connected':
            # we have a new context that we want to test
            device_context = foo[0]

        elif device_context != '':  # we've previously seen a 'connected' dev
            # mode names seem to always be of the format [horiz]x[vert]
            # (there can be non-mode information inside of a device context!)
            if foo[0].find('x') != -1:
                modes.append((device_context, foo[0]))

            # we also want to remember what the current mode is, which xrandr
            # marks with a '*' character, so we can set things back the way
            # we found them at the end:
            if line.find('*') != -1:
                current_modes.append((device_context, foo[0]))

for mode in modes:
    if args.output == mode[0] and args.resolution == mode[1]:
        cmd = 'xrandr --output ' + mode[0] + ' --mode ' + mode[1]
        subprocess.call(cmd, shell=True)
        break
else:
    print('Unable to set mode ' + args.resolution + ' for output ' + args.output)
    sys.exit(1)

subprocess.call(args.APP, shell=True)

# Put things back the way we found them
for mode in current_modes:
    cmd = 'xrandr --output ' + mode[0] + ' --mode ' + mode[1]
    subprocess.call(cmd, shell=True)
Edit:
Thanks @AndreLDM for pointing out that I do not need a separate Python script to change the resolution; I don't know why I didn't think of that.
I changed it so I don't need the Python script, and it is working properly now. If I can improve this script, please tell me!
#!/bin/bash
xrandr --output DP-0 --mode 2560x1440
xfconf-query -c xsettings -p /Xft/DPI -s 0
GTK_THEME=/usr/share/themes/Adwaita/gtk-2.0/gtkrc beersmith3
if [ $? == 0 ]
then
    xrandr --output DP-0 --mode 3840x2160
    xfconf-query -c xsettings -p /Xft/DPI -s 136
    exit 0
else
    xrandr --output DP-0 --mode 3840x2160
    xfconf-query -c xsettings -p /Xft/DPI -s 136
    exit 1
fi
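One possible refinement (a sketch built only from the commands already in the script above) is to move the two restore commands into an EXIT trap, so they are written once and still run if the script is interrupted:
#!/bin/bash
restore() {
    # Put the resolution and DPI back whenever the script exits.
    xrandr --output DP-0 --mode 3840x2160
    xfconf-query -c xsettings -p /Xft/DPI -s 136
}
trap restore EXIT

xrandr --output DP-0 --mode 2560x1440
xfconf-query -c xsettings -p /Xft/DPI -s 0
GTK_THEME=/usr/share/themes/Adwaita/gtk-2.0/gtkrc beersmith3
If the distinct exit codes matter, the original if/else can of course be kept.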

which loop in bash script

I am quite new to bash, but I need to create a simple script which will do the steps below:
Wait 1 minute
A) bash script will use CM to generate result file
B) check row 8 in result file (to know if Administrator is running any jobs or not)
if NO jobs:
    C) bash script will use CM to start cube refresh
    D) wait 1 minute
    D1) remove result file
    E) generate result file
    E1) read row 8
    no jobs:
        F) remove result file
        G) EXIT
    yes:
        I) go to D)
if YES:
    E) wait 1 minute
    F) remove result file
    go to A)
As bash doesn't have goto (or it should not be used), I tried a few loops, but I am not sure which I should choose.
I know how to:
- start the cube refresh (step C)
- generate the result file (steps A & E)
- check line 8:
sed '8!d' /abc_uat/cmlogs/adm_jobs_u1.log
The condition for the loops will probably be similar to this: !='Owner = Administrator'
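A working form of that check could combine sed with grep, for example (a sketch, assuming the log layout described above):
if sed '8!d' /abc_uat/cmlogs/adm_jobs_u1.log | grep -q 'Owner = Administrator'; then
    echo "Administrator is running a job"
else
    echo "no admin jobs"
fi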
But how do I avoid goto?
I tried a while loop, but I am not sure what I should add in case of a false condition; I added an else, but I am not sure about it:
sleep 60
Generate result file with admin jobs (which admin runs inside of 3rd party tool)
while [ sed '8!d' admin_jobs_result_file.log !="Owner = Administrator" ];
do
    --NO Admin jobs
    START CUBE REFRESH (it will start admin job)
    sleep 60
    REMOVE RESULT FILE (OLD)
    GENERATE RESULT FILE
    while [ sed '8!d' admin_jobs_result_file.log = "Owner = Administrator" ];
    --Admin is still running cube refresh
    do
        sleep 60
        REMOVE RESULT FILE (OLD)
        GENERATE RESULT FILE
        -- it should continue checking every 1 minute if admin is still running cube refresh job, so I hope it will go back to while condition
    else
    done
else
    -- Admin is running something
    sleep 60
    REMOVE RESULT FILE (OLD)
    GENERATE RESULT FILE
    -- it should check result file again but I think it will finish loop
done
You can replace goto with a loop, for example a while loop.
Syntax:
while <condition>
do
    action
done
Check out cron jobs. Delegate the "wait for a minute" task to cron if possible; cron should worry about running your script in a timely fashion.
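For example, a crontab entry along these lines (the script path is hypothetical) runs the check every minute, so the script itself never has to sleep:
* * * * * /path/to/check_admin_jobs.sh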
You may consider writing two scripts instead of one.
Do you really need to create a result file? Do you know about piping? (No offense, just mentioning it because you said you were fairly new to bash.)
Hopefully this is self-explanatory.
result_file=admin_jobs_result_file.log

function generate {
    logmsg sleeping
    sleep 60
    rm -f "$result_file"
    logmsg generating
    # use CM to generate result file
}

function owner_is_administrator {
    # if line 8 contains "Owner = Administrator", exit success
    # else exit failure
    sed -n '8 {/Owner = Administrator/ q 0; q 1}' "$result_file"
}

function logmsg { date "+%Y-%m-%d %T -- $*"; }

##############
generate

while owner_is_administrator; do
    generate
done

# at this point, line 8 does NOT contain "Owner = Administrator"
logmsg start cube refresh
# use CM to start cube refresh

generate
while owner_is_administrator; do
    generate
done

logmsg Done
Looks like AIX's sed can't exit with a specified status. Try this instead:
function owner_is_administrator {
    # if line 8 contains "Owner = Administrator", exit success
    # else exit failure
    awk 'NR == 8 {if (/Owner = Administrator/) {exit 0} else {exit 1}}' "$result_file"
}

Not able to connect to socket using socat

I am trying to parse rsyslog logs. For this I am sending all my logs to socat, which then sends them to a Unix domain socket. That socket is created by a Perl script which listens on it to parse the logs.
My bash script, to which rsyslog sends all logs, is:
if [ ! `pidof -x log_parser.pl` ]
then
    ./log_parser.pl & 1>&1
fi
if [ -S /tmp/sock ]
then
    /usr/bin/socat -t0 -T0 - UNIX-CONNECT:/tmp/sock 2>> /var/log/socat.log
fi
/tmp/sock is created by the Perl script log_parser.pl, which is:
use IO::Socket::UNIX;

sub socket_create {
    $socket_path = '/tmp/sock';
    unlink($socket_path);
    $listner = IO::Socket::UNIX->new(
        Type     => SOCK_STREAM,
        Local    => $socket_path,
        Listen   => SOMAXCONN,
        Blocking => 0,
    ) or die("Can't create server socket: $!\n");
    $socket = $listner->accept()
        or die("Can't accept connection: $!\n");
}

socket_create();

while (1) {
    chomp($line = <$socket>);
    print "$line\n";
}
This is the error I am getting from socat:
2015/02/24 11:58:01 socat[4608] E connect(3, AF=1 "/tmp/sock", 11): Connection refused
I am no champion with sockets, so I am not able to understand what this is. Please help. Thanks in advance.
The main issue is that when I kill my Perl script, the bash script is supposed to call it again and restart it.
What actually happens is that the script is started, but socat is not; instead it gives this error and never starts.
I can duplicate your error if I don't run your perl program before trying to use socat. Here is what works for me:
1) my_prog.pl:
use strict;
use warnings;
use 5.016;
use Data::Dumper;
use IO::Socket::UNIX;

my $socket_path = '/tmp/sock';
unlink $socket_path;

my $socket = IO::Socket::UNIX->new(
    Local  => $socket_path,
    Type   => SOCK_STREAM,
    Listen => SOMAXCONN,
) or die "Couldn't create socket: $!";

say "Connected to $socket_path...";

my $CONN = $socket->accept()
    or die "Whoops! Failed to open a connection: $!";

{
    local $/ = undef;  # local -> restore the previous value when the enclosing scope, delimited by the braces, is exited.
                       # Setting $/ to undef puts file reads in 'slurp mode' => the whole stream is considered one line.
    my $file = <$CONN>;  # Read everything sent over the connection.
    print $file;
}
2) $ perl my_prog.pl
3) socat -u -v GOPEN:./data.txt UNIX-CONNECT:/tmp/sock
The -u and -v options aren't necessary:
-u   Uses unidirectional mode. The first address is only used for
     reading, and the second address is only used for writing (example).
-v   Writes the transferred data not only to their target streams,
     but also to stderr. The output format is text with some conversions
     for readability, and prefixed with "> " or "< " indicating
     flow directions.
4) You can also do it like this:
cat data.txt | socat STDIN UNIX-CONNECT:/tmp/sock
Pipe stdout of cat command to socat, then list STDIN as one of socat's files.
Response to comment:
This bash script works for me:
#!/usr/bin/env bash
echo 'bash script'
../pperl_programs/my_prog.pl &
sleep 1s
socat GOPEN:./data.txt UNIX-CONNECT:/tmp/sock
It looks like the Perl script doesn't have enough time to set up the socket before socat tries to transfer data.
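One way to avoid guessing a sleep duration is to wait until the socket file actually exists before starting socat (a sketch, using the same paths as the script above):
#!/usr/bin/env bash
echo 'bash script'
../pperl_programs/my_prog.pl &

# Wait up to ~5 seconds for the Perl listener to create /tmp/sock.
for _ in $(seq 1 50); do
    [ -S /tmp/sock ] && break
    sleep 0.1
done

socat GOPEN:./data.txt UNIX-CONNECT:/tmp/sock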
