Syntax error after "read" only when script invoked with "bash -s <script" - bash

I have two scripts on different servers, and I use them to automate a small process.
script1 starts script2 using the command
ssh -i /pathToKeyFile/keyfile.pem user@server2 'bash -s < /pathToScriptFile/script2.sh'
In script2.sh I have a "case" question:
#!/bin/bash
# Ask to start up JBOSS
read -p "DB restore completed. Start JBOSS and FACADE (y/n)" startJBOSS
case "$startJBOSS" in
y|Y ) echo "Starting JBOSS and FACADE";;
n|N ) echo "Stopping here"
exit;;
* ) echo "Invalid option..."
exit;;
esac
echo "More commands here"
exit
So when I execute script1.sh it works fine and starts script2 on the remote server.
But script2 fails with the error
bash: line 5: syntax error near unexpected token `)'
bash: line 5: ` y|Y ) echo "Starting JBOSS and FACADE";;'
If I execute script2.sh directly on the remote server it works as expected.
I also tried putting both script files on one server. Of course in this case the command to start script2.sh is different, but then both work again as expected.
I cannot figure out why script2.sh fails when it is started from another script located on another server. I assume that script2.sh's code is correct, as it works when run separately.

The problem is that read reads from stdin -- the same place your code is coming from.
Thus, instead of reading a line from the user, it reads a line from your source file, consuming the case line and leaving the rest of the script syntactically invalid.
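A quick way to reproduce this locally (illustrative only, no ssh involved) is to feed a tiny script to bash on stdin:
printf '%s\n' 'read ans' 'case "$ans" in' 'y) echo yes;;' 'esac' | bash -s
Here read swallows the case "$ans" in line as its input data, so the shell next tries to execute y) echo yes;; on its own and reports the same syntax error near ')'.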
Simple Answer: Don't Do That.
bash -s <filename makes sense when the <filename is coming from somewhere not accessible to the copy of bash (like the other side of the SSH connection, or a file that can only be read by a different user), but that's not the case for your example. Thus, you can just stop using the -s argument and the redirection:
ssh -i /pathToKeyFile/keyfile.pem user@server2 'bash /pathToScriptFile/script2.sh'
...or make the prompt conditional...
Another approach is to make the read conditional on there actually being a user listening at the TTY:
if [[ -t 0 ]]; then # test whether FD 0, stdin, is a TTY
    read -p "DB restore completed. Start JBOSS and FACADE (y/n)" startJBOSS
else
    startJBOSS=y # no TTY, so assume yes
fi
...or make the prompt read from /dev/tty, and make sure SSH passes it through.
An alternate approach is to read from /dev/tty explicitly, and then to arrange for that to be valid in the context of your script by passing appropriate arguments to ssh:
if read -p "DB restore completed. Start JBOSS and FACADE (y/n)" startJBOSS </dev/tty; then
    : "read $startJBOSS from user successfully" # <- will be logged if run with set -x
else
    echo "Unable to read result from user; is this being run with a TTY?" >&2
    exit 1
fi
...and then, on the other side, using the -t argument to SSH, to force there to be a TTY (if one is available to SSH itself; if not, it won't have a means to read from the user out-of-band either):
ssh -t -i /pathToKeyFile/keyfile.pem user@server2 'bash -s < /pathToScriptFile/script2.sh'

Related

Automate the cert creation for OpenVPN

I do not know why I am getting an error when I run my script over SSH, but when I run it directly on my CA server everything works fine.
I installed my VPN server based on this article https://www.digitalocean.com/community/tutorials/how-to-set-up-an-openvpn-server-on-ubuntu-18-04
I wrote a bash script for the VPN creation, but at some point it needs to SSH to the other server. When the script runs that SSH step I get an error message:
./easyrsa: 341: set: Illegal option -o echo
My script contains this and runs from my VPN server:
sshpass -p $PASSWORD ssh username@"CA server IP" "/home/username/makevpn.sh $NAME $PASSWORD"
And makevpn.sh contains this:
./easyrsa sign-req client $NAME
After this runs it seems okay, but it gives the error above.
I tried to read up on this error and found nothing. :( I hope someone can help, because I am hopeless after 4 days of troubleshooting.
Code of VPN script
#!/bin/sh
clear
read -p "Please enter the name of the new certificate : " NAME
read -p "Please enter the Password : " PASSWORD
cd /home/username/EasyRSA-3.0.7/
./easyrsa gen-req $NAME nopass
echo "gen-req done"
cp /home/username/EasyRSA-3.0.7/pki/private/$NAME.key /home/username/client-configs/keys/
echo "cp done"
sshpass -p $PASSWORD scp /home/username/EasyRSA-3.0.7/pki/reqs/$NAME.req username@192.168.1.105:/tmp
echo "scp done"
sshpass -p $PASSWORD ssh username@192.168.1.105 "/home/username/makevpn.sh $NAME $PASSWORD"
echo "ssh done"
cp /tmp/$NAME.crt /home/username/client-configs/keys/
echo "last CP done"
sudo /home/username/client-configs/make_config.sh $NAME
echo "All Done"
Code on CA server
#!/bin/sh
NAME=$1
PASSWORD=$2
cd /home/username/EasyRSA-3.0.7/
echo "CD Done"
./easyrsa import-req /tmp/$NAME.req $NAME
echo "Import-req done"
./easyrsa sign-req client $NAME
echo "Sign-req done"
sshpass -p $PASSWORD scp /home/username/EasyRSA-3.0.7/pki/issued/$NAME.crt username@192.168.1.103:/tmp
echo "Scp done"
I was just browsing the code of that easyrsa script here. This one is likely different from yours, given that the line for the error is 341. On the GitHub page it is line 352, and it is part of a function called cleanup. It appears that this function is only attached as a trap (line 2744). Traps are used to catch signals like SIGINT (interrupt), which is normally sent on the terminal with ctrl+c (and may display a character like ^C). The reason the error only displays in your script is that it likely causes a signal to be emitted that you would not normally receive if you ran the script manually over ssh.
The error itself is really not an issue.
Code from Github:
Line 352:
(stty echo 2>/dev/null) || { (set -o echo 2>/dev/null) && set -o echo; }
Line 2744:
trap "cleanup" EXIT
It appears that line is just trying to turn terminal output of your typed characters back on (via stty echo). Sometimes programs will disable terminal output somewhere, and then re-enable it when the program finishes. However, if you were to kill the program midway through (e.g. with ctrl+c), your program would terminate with the terminal output still disabled. This would make the terminal appear to be frozen. It would still work, but would not display the characters you type with your keyboard. The point of the trap is to ensure that terminal output is re-enabled no matter how the program exits.
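A minimal sketch of the same trap pattern (not easyrsa's actual code, just the idea):
#!/bin/sh
# illustration only: restore terminal echo on exit, the way easyrsa's cleanup trap does
restore_echo() { stty echo 2>/dev/null; }
trap restore_echo EXIT
stty -echo                 # hide typed characters while reading a password
printf 'Password: '
read PASSWORD
restore_echo
printf '\n'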
More info...
At line 567 there is a function that disables echo. Looks like the point is to not show a password to the screen. If you were to kill the program during password reading, echo would remain disabled on the terminal. Likely the reason for the error has more to do with the way you are running the script. For whatever reason it causes stty echo to fail. Line 352 is assuming that the failure is due to stty echo not being a valid command. So on failure ( || ), it tries a different method (set -o echo) of enabling echo. If I try to run that on my terminal, I also get an error (bash 4.2):
-bash: set: echo: invalid option name
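If the message bothers you, one possible workaround (untested here) is to ask ssh to allocate a pseudo-terminal, so stty has a real terminal to talk to:
sshpass -p $PASSWORD ssh -tt username@192.168.1.105 "/home/username/makevpn.sh $NAME $PASSWORD"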

Redirect named pipe input to file

I would like to create a file to which I can write as described in the Datadog Datagram docs:
echo -n 'a' >/dev/udp/localhost/8125
echo -n 'b' >/dev/udp/localhost/8125
echo -n 'c' >/dev/udp/localhost/8125
Everything that is written to that file should be – instead of being handled by Datadog and sent to them via the agent – written to a log file. After executing the three lines above the log file should contain the following:
a
b
c
I thought that a named pipe and a background process that handles that would be perfect. However, it does not work as expected and the background process never writes anything, even though writing seems to work.
I created the following script:
#!/usr/bin/env bash
set -Eeuo pipefail
log=/var/log/datadog-agent.log
touch $log
# https://docs.datadoghq.com/developers/dogstatsd/datagram_shell/
pipe=/dev/udp/localhost/8125
if [[ ! -p $pipe ]]; then
    rm -f $pipe
    mkdir -p "$(dirname $pipe)"
    mkfifo -m 0666 $pipe
fi
trap 'rm -f $pipe' EXIT
while :; do
    read -r line <$pipe
    echo "$line" >>$log
done
And the following systemd service:
[Unit]
Description=Fake Datadog Agent
[Service]
ExecStart=/usr/local/bin/datadog-agent
Type=exec
[Install]
WantedBy=multi-user.target
The service is started correctly after executing systemctl enable --now datadog-agent, however, as I said, nothing is ever being written to the log file.
This is very strange to me because opening two shell instances where I write the following in the first shell:
mkfifo pipe
while :; do read -r line <pipe; echo "$line"; done
And then start sending data in the second shell prints the lines correctly.
The answer to the question is found in the comments to it. Hence, this question should not go unanswered.
The code from the question works as expected; however, the path where the named pipe resides is special to the shell, which is why the data that is sent to it never reaches the script. The corresponding special-casing in Bash, for instance, can be found in redir.c.
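You can see the special handling without Datadog at all; nothing below touches the filesystem (illustrative):
ls /dev/udp                              # No such file or directory on a typical Linux box
echo -n 'a' >/dev/udp/localhost/8125     # bash opens a UDP socket to localhost:8125 instead
ls /dev/udp                              # still nothing: the path only exists inside bash's redirection code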
The solution to the problem is to use a real UDP server on that port:
socat -u -v -x udp-listen:8125,fork /dev/null &>/var/log/datadog-agent.log
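A quick sanity check (the metric name here is made up): send one datagram and look for it in the log, since socat's -v output goes to stderr, which the redirection above captures:
echo -n 'custom.metric:1|c' >/dev/udp/localhost/8125
grep -a 'custom.metric' /var/log/datadog-agent.log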

Script stuck during read line when script is executed remotely

I want to have one script which starts a service on another server.
I have tested that the script works as expected on the server where the service is going to run.
This is the code which starts the service and monitors the log until the startup has finished:
pkill -f "$1"
nohup java -jar -Dspring.profiles.active=$PROFILE $1 &
tail -n 0 -f nohup.out | while read LOGLINE
do
    echo $LOGLINE
    [[ "${LOGLINE}" == *"$L_LOG_STRING"* ]] && pkill -P $$ tail
done
This works fine as long as I execute that from that machine.
Now I want to call that script from another server:
#!/usr/bin/env bash
DESTINATION_SERVER=$1
ssh root@$DESTINATION_SERVER /bin/bash << EOF
echo "Restarting first service..."
/usr/local/starter.sh -s parameter
echo "Restarting second service..."
/usr/local/starter.sh -s parameter2
EOF
Well, every time I try that, the script on the remote server gets stuck in the while read loop. But as I said, when I execute it locally on the server it works fine, and in my "not simplified script" I'm not using any system variable or similar.
Update: I just tried to simplify the code even more with the following lines in the first scenario:
pkill -f "$1"
nohup java -jar -Dspring.profiles.active=$PROFILE $1 &
tail -n 0 -f nohup.out | sed "/$L_LOG_STRING/ q"
I'd say the problem is somehow in the "|" through ssh, but I still can't find why.
It seems that the problem comes from not having an interactive terminal when you execute the ssh command, so the nohup command behaves differently: nohup only redirects standard output to nohup.out when standard output is a terminal, so over a TTY-less ssh session the log lines never land in nohup.out and the tail -f waits forever.
I could solve it in two ways: redirecting the output to the file explicitly:
"nohup java -jar -Dspring.profiles.active=test &1 >> nohup.out &"
instead of:
"nohup java -jar -Dspring.profiles.active=test &1&"
Or changing the way I access the server via ssh, adding the -tt option (a single -t did not work):
ssh -tt root@$DESTINATION_SERVER /bin/bash << EOF
But this last solution could lead to other problems with some characters, so unless someone suggests another solution, that is the patch which makes it work.
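For illustration, putting both changes together, the remote block might look something like this (app.jar and the Started Application log string are placeholders, not the real values):
ssh -tt root@$DESTINATION_SERVER /bin/bash << 'EOF'
pkill -f "app.jar"
nohup java -jar -Dspring.profiles.active=test app.jar >> nohup.out 2>&1 &
tail -n 0 -f nohup.out | sed "/Started Application/ q"
EOF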

How to catch/write - success/failure logs for sFTP - PUT command

In my shell script, after the sFTP PUT process completes, I need to check whether the PUT succeeded or failed.
Don't use expect at all.
#!/usr/bin/env bash
batchfile=$(mktemp -t sftp-batchfile.XXXXXX) || exit
trap 'rm -f "$batchfile"' EXIT
cat >"$batchfile" <EOF
put test_file.txt $dest_location
bye
EOF
if sftp -b "$batchfile" "user@hostname"; then
    echo "The put succeeded"
else
    echo "The put failed"
fi
As given in the SFTP man page, with emphasis added:
The final usage format allows for automated sessions using the -b option. In such cases, it is necessary to configure non-interactive authentication to obviate the need to enter a password at connection time (see sshd(8) and ssh-keygen(1) for details).
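This is also why the if works: with -b, sftp aborts the batch as soon as a command such as put fails and exits with a nonzero status. If a particular command should be allowed to fail without aborting, the man page notes it can be prefixed with a - character; an expanded batch file might look like this (old_file.txt is just an example name):
-rm $dest_location/old_file.txt
put test_file.txt $dest_location
bye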

Temporarily remove the ssh private key password in a shell script

I am required to deploy some files from server A to server B. I connect to server A via SSH and from there, connect via ssh to server B, using a private key stored on server A, the public key of which resides in server B's authorized_keys file. The connection from A to B happens within a Bash shell script that resides on server A.
This all works fine, nice and simple, until a security-conscious admin pointed out that my SSH private key stored on server A is not passphrase protected, so that anyone who might conceivably hack into my account on server A would also have access to server B, as well as C, D, E, F, and G. He has a point, I guess.
He suggests a complicated scenario under which I would add a passphrase, then modify my shell script to add a line at the beginning in which I would call
ssh-keygen -p -f {private key file}
answering the prompt for my old passphrase with the passphrase and the (two) prompts for my new passphrase with just return, which gets rid of the passphrase; and then at the end, after my scp command,
calling
ssh-keygen -p -f {private key file}
again, to put the passphrase back.
To which I say "Yecch!".
Well I can improve that a little by first reading the passphrase ONCE in the script with
read -s PASS_PHRASE
then supplying it as needed using the -N and -P parameters of ssh-keygen.
It's almost usable, but I hate interactive prompts in shell scripts. I'd like to get this down to one interactive prompt, but the part that's killing me is where I have to press enter twice to get rid of the passphrase.
This works from the command line:
ssh-keygen -p -f {private key file} -P {pass phrase} -N ''
but not from the shell script. There, it seems I must remove the -N parameter and accept the need to type two returns.
That is the best I am able to do. Can anyone improve this? Or is there a better way to handle this? I can't believe there isn't.
Best would be some way of handling this securely without ever having to type in the passphrase but that may be asking too much. I would settle for once per script invocation.
Here is a simplified version of the whole script in skeleton form:
#! /bin/sh
KEYFILE=$HOME/.ssh/id_dsa
PASSPHRASE=''
unset_passphrase() {
    # params
    # oldpassword keyfile
    echo "unset_key_password()"
    cmd="ssh-keygen -p -P $1 -N '' -f $2"
    echo "$cmd"
    $cmd
    echo
}
reset_passphrase() {
    # params
    # oldpassword keyfile
    echo "reset_key_password()"
    cmd="ssh-keygen -p -N '$1' -f $2"
    echo "$cmd"
    $cmd
    echo
}
echo "Enter passphrase:"
read -s PASSPHRASE
unset_passphrase $PASSPHRASE $KEYFILE
# do something with ssh
reset_passphrase $PASSPHRASE $KEYFILE
Check out ssh-agent. It caches the passphrase so you can use the keyfile during a certain period regardless of how many sessions you have.
Here are more details about ssh-agent.
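A typical session looks something like this (the key path and destination are just examples):
eval "$(ssh-agent -s)"        # start the agent for this shell
ssh-add "$HOME/.ssh/id_dsa"   # prompts for the passphrase once
scp some_file user@serverB:/some/path   # no further passphrase prompts
ssh-agent -k                  # stop the agent when finished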
OpenSSH supports what's called a "control master" mode, where you can connect once, leave it running in the background, and then have other ssh instances (including scp, rsync, git, etc.) reuse that existing connection. This makes it possible to only type the password once (when setting up the control master) but execute multiple ssh commands to the same destination.
Search for ControlMaster in man ssh_config for details.
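For reference, the same behaviour can also be turned on declaratively in ~/.ssh/config instead of passing -M/-S on every command (host name and timeout are just examples):
Host server
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m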
Advantages over ssh-agent:
You don't have to remember to run ssh-agent
You don't have to generate an ssh public/private key pair, which is important if the script will be run by many users (most people don't understand ssh keys, so getting a large group of people to generate them is a tiring exercise)
Depending on how it is configured, ssh-agent might time out your keys part-way through the script; this won't
Only one TCP session is started, so it is much faster if you're connecting over and over again (e.g., copying many small files one at a time)
Example usage (forgive Stack Overflow's broken syntax highlighting):
REMOTE_HOST=server
log() { printf '%s\n' "$*"; }
error() { log "ERROR: $*" >&2; }
fatal() { error "$*"; exit 1; }
try() { "$#" || fatal "'$#' failed"; }
controlmaster_start() {
CONTROLPATH=/tmp/$(basename "$0").$$.%l_%h_%p_%r
# same as CONTROLPATH but with special characters (quotes,
# spaces) escaped in a way that rsync understands
CONTROLPATH_E=$(
printf '%s\n' "${CONTROLPATH}" |
sed -e 's/'\''/"'\''"/g' -e 's/"/'\''"'\''/g' -e 's/ /" "/g'
)
log "Starting ssh control master..."
ssh -f -M -N -S "${CONTROLPATH}" "${REMOTE_HOST}" \
|| fatal "couldn't start ssh control master"
# automatically close the control master at exit, even if
# killed or interrupted with ctrl-c
trap 'controlmaster_stop' 0
trap 'exit 1' HUP INT QUIT TERM
}
controlmaster_stop() {
log "Closing ssh control master..."
ssh -O exit -S "${CONTROLPATH}" "${REMOTE_HOST}" >/dev/null \
|| fatal "couldn't close ssh control master"
}
controlmaster_start
try ssh -S "${CONTROLPATH}" "${REMOTE_HOST}" some_command
try scp -o ControlPath="${CONTROLPATH}" \
some_file "${REMOTE_HOST}":some_path
try rsync -e "ssh -S ${CONTROLPATH_E}" -avz \
some_dir "${REMOTE_HOST}":some_path
# the control master will automatically close once the script exits
I could point out an alternative solution for this. Instead of having the key stored on server A, I would keep the key locally. Then I would create a local port forward to server B on port 4000:
ssh -L 4000:B:22 username@A
And then, in a new terminal, connect through the tunnel to server B:
ssh -p 4000 -i key_copied_from_a user_on_b@localhost
I don't know how feasible this is to you though.
Building up commands as a string is tricky, as you've discovered. Much more robust to use arrays:
cmd=( ssh-keygen -p -P "$1" -N "" -f "$2" )
echo "${cmd[@]}"
"${cmd[@]}"
Or even use the positional parameters
passphrase="$1"
keyfile="$2"
set -- ssh-keygen -p -P "$passphrase" -N "" -f "$keyfile"
echo "$#"
"$#"
The empty argument won't be echoed surrounded by quotes, but it's there
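A tiny demonstration of why the string version misbehaves (illustrative): quotes that are part of a variable's value are never re-parsed when the variable is expanded, so the -N '' in the string no longer means "empty passphrase":
cmd="printf [%s] ''"
$cmd             # prints [''] -- the two quote characters travel as a literal argument
printf [%s] ''   # prints []   -- here the shell removes the quotes and passes an empty string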
