Why does passing a variable from one bash script to another cause it to fail?

I have been trying to figure this one out for a while. I am trying to automate a few things, and I only have rights to edit the scripts I write. My script calls another script that I cannot edit; let's call it script.sh.
I have tried:
if [[ -n $PASS ]]; then
    su -c 'echo "$PASS" | ./script.sh' &
    wait $!
else
    ./script.sh &
    wait $!
fi

if [[ -n $PASS ]]; then
    echo "$PASS" | ./script.sh &
    wait $!
else
    ./script.sh &
    wait $!
fi

if [[ -n $PASS ]]; then
    ./script.sh <<< $PASS &
    wait $!
else
    ./script.sh &
    wait $!
fi
This calls a script I cannot edit:
#!/bin/bash
echo "foo: "
read PASSWORD
echo
echo "foo"
...
if [ ! -f ./config.ini ]; then
    ./script2.sh ./config.ini
fi
My issue is that script.sh then calls another script, say script2.sh, which cats out a config.ini file to be used later in the process. script2.sh fails to create config.ini correctly. Specifically, the command user=$(/usr/bin/who am i | cut -d ' ' -f1) fails to set the variable.
So, three scripts deep, one command fails. Yet it works if I run everything manually, or if I don't echo $PASS and instead type the password in by hand. Any ideas would be greatly appreciated.
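For what it's worth, this failure pattern matches how who am i (equivalent to who -m) behaves: it reports the user associated with the terminal on standard input, so it prints nothing once stdin is a pipe or here-string instead of a tty, leaving the variable empty. A quick demonstration sketch, run from an interactive shell (the expected behavior is mine, not taken from the thread):
# stdin is still the terminal here, so this prints your login name
who am i | cut -d ' ' -f1
# stdin is a pipe here, so who -m finds no terminal and prints nothing,
# which leaves user=$(...) empty
echo secret | who am i | cut -d ' ' -f1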

Related

Using while read, do Loop in bash script, to parse command line output

I am trying to create a script that waits for a certain string in the output of another script that it starts. I am running into a problem where my script will not move past this line of code:
$(source path/to/script/LOOPER >> /tmp/looplogger.txt)
I have tried almost every variation I can think of for this line, e.g.
(./LOOPER& >> /tmp/looplogger.txt)
bash /path/to/script/LOOPER 2>1& /tmp/looplogger.txt
and so on. For some reason I cannot get it to run in a subshell and have the rest of the script go about its day.
I am trying to run a script from another script, access its output, and parse it line by line until a certain string is found. Once that string is found, my script should kill said script (I am aware that if it is sourced, killing it would take the parent script down as well).
The script that starts LOOPER and then tries to kill it:
#!/bin/bash
# deleting contents of .txt
echo "" > /tmp/looplogger.txt
# Code cannot get past this command
$(source "/usr/bin/gcti/LOOPER" >> /tmp/ifstester.txt)
while [[ $(tail -1 /tmp/looplogger.txt) != "Kill me" ]]; do
    sleep 1
    echo ' in loop ' >> /tmp/looplogger.txt
done >> /tmp/looplogger.txt
echo 'Out of loop' >> looplogger.txt
# This kill command works as intended
kill -9 $(ps -ef | grep LOOPER | grep -v grep | awk '{print $2}')
echo "Looper was killed" > /tmp/looplogger.txt
I have also tried using while IFS= read -r for the above script, but I find its syntax a little confusing.
The LOOPER script (./LOOPER):
#!/bin/bash
# Script to test with scripts that kill & start processes
let i=0
# Infinite while loop
while :
do
    i=$((i+1))
    until [ $i -gt 10 ]
    do
        echo "I am looping :)"
        sleep 1
        ((i=i+1))
    done
    echo "Kill me"
    sleep 1
done
Sorry for my very wordy question.
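For what it's worth, here is a sketch of one way to structure this without source (assuming the paths used above): start LOOPER in the background, follow its log with tail -f, and kill it once the marker line shows up. The tail process may linger until its next write, which is usually acceptable in a test script.
#!/bin/bash
: > /tmp/looplogger.txt
# Run LOOPER in the background instead of sourcing it
/usr/bin/gcti/LOOPER >> /tmp/looplogger.txt 2>&1 &
looper_pid=$!
# Follow the log line by line as LOOPER writes it
while IFS= read -r line; do
    if [[ $line == "Kill me" ]]; then
        kill "$looper_pid"
        break
    fi
done < <(tail -n 1 -f /tmp/looplogger.txt)
echo "Looper was killed" > /tmp/looplogger.txt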

How to avoid the same bash script from running more than once when its called from another script?

I have a script called "upcall" which calls 4 different scripts. In upcall I call them in the way shown below. The upload script works when I run it directly (bash upload_cloud1), but not when it's called from the script below. I'm sure there is a way to fix this, but I'm just not sure what it is. I currently have it set up in crontab to run every 15 minutes to check for used space.
#!/bin/bash
if [[ "`pidof -x $(basename $0) -o %PPID`" ]]; then
echo "This script is already running with PID `pidof -x $(basename $0) -o %PPID`"
exit; fi
count=$(</opt/rclone/scripts/upcount)
size=$(df -k /dev/sda2 | tail -1 | awk '{print $3}')
if [ "$size" -gt "234003200" ]; then
bash /opt/rclone/scripts/upload_cloud${count}
else
echo "Not full yet"
fi
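One robust alternative (a sketch, assuming util-linux's flock is available; the lock file path is illustrative): take an exclusive, non-blocking lock on a dedicated file descriptor at the top of the script, so a second instance exits immediately no matter how it was started.
#!/bin/bash
# Single-instance guard using flock on fd 9
exec 9>/var/lock/upcall.lock
if ! flock -n 9; then
    echo "This script is already running"
    exit 1
fi
count=$(</opt/rclone/scripts/upcount)
size=$(df -k /dev/sda2 | tail -1 | awk '{print $3}')
if [ "$size" -gt "234003200" ]; then
    bash /opt/rclone/scripts/upload_cloud${count}
else
    echo "Not full yet"
fi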

How to properly iterate through a list using sshpass with a single ssh-login

Situation: we're feeding a list of filenames to sshpass, which iterates correctly through a remote folder to check whether files with the given names actually exist, then builds an updated list containing only the files that do exist, which is reused later in the bash script.
Problem: the list sometimes comprises tens of thousands of files, which means tens of thousands of ssh logins; this harms performance and sometimes gets us blocked by our own security policies.
Intended solution: instead of starting the for-loop and calling sshpass on each iteration, do it the other way around and hand the whole loop to a single sshpass call.
I tried to pass the list into the sshpass instruction in the test example below:
#!/bin/bash
all_paths=(`/bin/cat /home/user/filenames_to_be_tested.list`)
existing_paths=()
sshpass -p PASSWORD ssh -n USER@HOST bash -c "'
for (( i=0; i<${#all_paths[@]}; i++ ))
do
    echo ${all_paths[i]}
    echo \"-->\"$i
    if [[ -f ${all_paths[i]} ]]
    then
        echo ${all_paths[i]}
        existing_paths=(${all_paths[i]})
    fi
done
'"
printf '%s\n' "${existing_paths[@]}"
The issue here is that it appears to loop (you see a series of echoed lines), but it is not really incrementing i: it always checks/prints the same line.
Can someone help spot the bug? Thanks!
The problem is that bash parses the string and substitutes the variables first; that happens before the command is sent to the server. If you want to stop bash from doing that, you must escape every variable that should be expanded on the server.
#! /bin/bash
all_paths=(rootfs.tar derp a)
read -sp "pass? " PASS
echo
sshpass -p "$PASS" ssh -n "$USER@$SERVER" "
files=(${all_paths[@]})
existing_paths=()
for ((i=0; i<\${#files[@]}; i++)); do
    echo -n \"\${files[\$i]} --> \$i \"
    if [[ -f \${files[\$i]} ]]; then
        echo \${files[\$i]}
        existing_paths+=(\${files[\$i]})
    else
        echo 'Not found'
    fi
done
printf '%s\n' \"\${existing_paths[@]}\"
"
This becomes hard to read very fast. However, there's an option I personally like: define functions locally and ship them to the server to be executed there, which avoids most of the escaping.
#! /bin/bash
all_paths=(rootfs.tar derp a)

function files_exist {
    local files=("$@")
    local found=()
    for file in "${files[@]}"; do
        echo -n "$file --> "
        if [[ -f $file ]]; then
            echo "exist"
            found+=("$file")
        else
            echo "missing"
        fi
    done
    printf '%s\n' "${found[@]}"
}

read -sp "pass? " PASS
echo

sshpass -p "$PASS" ssh -n "$USER@$SERVER" "
$(typeset -f files_exist)
files_exist ${all_paths[@]}
"

Continuing a BASH script after error

This script works well in finding what I need, but there are occasions where a 404 error just kills everything.
#!/bin/sh
set +e
exec 7<foo.txt
exec 8<bar.tmp
echo "Retrieving data"
while read line1 <&7 && read line2 <&8
do
    echo "beginning... retrieving files from d list"
    echo "this WILL take a while"
    echo $line1
    echo $line2
    wget -e robots=off -t1 -r -p -Q20k --wait=30 --random-wait --limit-rate=200k -np -U "$line1" http://$line2/page.html
    cp /home/user/testing/*.html /home/user/production
    echo "done"
done
exec 7<&-
exec 8<&-
I want the script to continue because even though this one site, $line2, returns a 404, the others don't. I have added "set +e" and even run the script with "|| true", and it still stops after the error. Because of the 404 there are no files to copy, and the script then fails to move on to the next site.
Any suggestions?
What I found works is this:
if [ ! -d "/home/user/production" ]; then
continue #continue the loop.
fi
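A more direct fix is to test wget's exit status inside the loop and skip to the next site on failure; wget normally exits non-zero when a request fails (with -r the status reflects the failures seen, so treat this as a sketch). Note also that set +e merely turns set -e off; it does not make a script skip failed commands. Against the loop above, it might look like this:
while read line1 <&7 && read line2 <&8
do
    echo $line1
    echo $line2
    if ! wget -e robots=off -t1 -r -p -Q20k --wait=30 --random-wait \
            --limit-rate=200k -np -U "$line1" http://$line2/page.html; then
        echo "download failed for $line2, skipping"
        continue
    fi
    cp /home/user/testing/*.html /home/user/production
    echo "done"
done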

Self-daemonizing bash script

I want to make a script self-daemonizing, i.e., there should be no need to invoke nohup $SCRIPT &>/dev/null & manually at the shell prompt.
My plan is to create a section of code like the following:
#!/bin/bash
SCRIPTNAME="$0"
...
# Preps are done above
if [[ "$1" != "--daemonize" ]]; then
nohup "$SCRIPTNAME" --daemonize "${PARAMS[#]}" &>/dev/null &
exit $?
fi
# Rest of the code are the actual procedures of the daemon
Is this wise? Do you have better alternatives?
Here are things I see.
if [[ $1 != "--daemonize" ]]; then
Shouldn't that be == --daemonize?
nohup "$SCRIPTNAME" --daemonize "${PARAMS[@]}" &>/dev/null &
Instead of calling your script again, you could just summon a subshell that's placed in a background:
(
    # code that runs in daemon mode goes here
    :
) </dev/null >/dev/null 2>&1 &
disown
Or
function daemon_mode {
    # code that runs in daemon mode goes here
    :
}
daemon_mode </dev/null >/dev/null 2>&1 &
disown
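Putting the second suggestion together, a minimal self-contained sketch (the body of daemon_mode is a placeholder, not a real workload):
#!/bin/bash
daemon_mode() {
    # placeholder for the real daemon procedure
    while :; do
        sleep 60
    done
}
# Detach from the terminal, background the function, and disown it so
# the interactive shell will not HUP it on exit
daemon_mode </dev/null >/dev/null 2>&1 &
disown
echo "daemonized with PID $!"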
