In my shell script, after the sftp put process completes, I need to check whether the put succeeded or failed.
Don't use expect at all.
#!/usr/bin/env bash
batchfile=$(mktemp -t sftp-batchfile.XXXXXX) || exit
trap 'rm -f "$batchfile"' EXIT
cat >"$batchfile" <EOF
put test_file.txt $dest_location
bye
EOF
if sftp -b "$batchfile" "user@hostname"; then
echo "The put succeeded"
else
echo "The put failed"
fi
As given in the sftp man page:
The final usage format allows for automated sessions using the -b option. In such cases, it is necessary to configure non-interactive authentication to obviate the need to enter a password at connection time (see sshd(8) and ssh-keygen(1) for details).
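For completeness, a minimal sketch of that setup, assuming an ed25519 key with no passphrase is acceptable and reusing the user@hostname from above:
# one-time setup for key-based (non-interactive) authentication
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -N ''
ssh-copy-id -i ~/.ssh/id_ed25519.pub user@hostname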
I do not know why I get an error when I run my script over SSH, while running the same script directly on my CA server works fine.
I installed my VPN server based on this article: https://www.digitalocean.com/community/tutorials/how-to-set-up-an-openvpn-server-on-ubuntu-18-04
I wrote a bash script for the VPN creation, but at some point it needs to SSH to the other server. When the script reaches that SSH step, I get this error message:
./easyrsa: 341: set: Illegal option -o echo
My bash script contains this and runs from my VPN server:
sshpass -p $PASSWORD ssh username@"CA server IP" "/home/username/makevpn.sh $NAME $PASSWORD"
And makevpn.sh contain this:
./easyrsa sign-req client $NAME
After this runs, it seems okay, but it gives the error above.
I tried to research this error and found nothing. :( I hope someone can help, because I am hopeless after 4 days of troubleshooting.
Code of VPN script
#!/bin/sh
clear
read -p "Please enter the name of the new certificate : " NAME
read -p "Please enter the Password : " PASSWORD
cd /home/username/EasyRSA-3.0.7/
./easyrsa gen-req $NAME nopass
echo "gen-req done"
cp /home/username/EasyRSA-3.0.7/pki/private/$NAME.key /home/username/client-configs/keys/
echo "cp done"
sshpass -p $PASSWORD scp /home/username/EasyRSA-3.0.7/pki/reqs/$NAME.req username@192.168.1.105:/tmp
echo "scp done"
sshpass -p $PASSWORD ssh username@192.168.1.105 "/home/username/makevpn.sh $NAME $PASSWORD"
echo "ssh done"
cp /tmp/$NAME.crt /home/username/client-configs/keys/
echo "last CP done"
sudo /home/username/client-configs/make_config.sh $NAME
echo "All Done"
Code on CA server
#!/bin/sh
NAME=$1
PASSWORD=$2
cd /home/username/EasyRSA-3.0.7/
echo "CD Done"
./easyrsa import-req /tmp/$NAME.req $NAME
echo "Import-req done"
./easyrsa sign-req client $NAME
echo "Sign-req done"
sshpass -p $PASSWORD scp /home/username/EasyRSA-3.0.7/pki/issued/$NAME.crt username@192.168.1.103:/tmp
echo "Scp done"
I was just browsing the code of that easyrsa script here. This one is likely different from yours, given that the line for the error is 341; on the GitHub page it is line 352, and it is part of a function called cleanup. It appears that this function is only attached as a trap (line 2744).

Traps are used to catch signals like SIGINT (interrupt), which is normally sent on the terminal with Ctrl+C (and may display a character like ^C). The reason the error only shows up in your script is that it likely causes a signal to be emitted that you would not normally receive if you ran the command manually over SSH.
The error itself is really not an issue.
Code from GitHub:
Line 352:
(stty echo 2>/dev/null) || { (set -o echo 2>/dev/null) && set -o echo; }
Line 2744:
trap "cleanup" EXIT
It appears that line is just trying to turn terminal echo of your typed characters back on (via stty echo). Sometimes programs will disable terminal echo somewhere, and then re-enable it when the program finishes. However, if you were to kill the program midway through (e.g., with Ctrl+C), your terminal would be left with echo still disabled. This would make the terminal appear to be frozen: it would still work, but it would not display the characters you type. The point of the trap is to ensure that echo is re-enabled no matter how the program exits.
More info...
At line 567 there is a function that disables echo; it looks like the point is to avoid showing a password on the screen. If you were to kill the program during password reading, echo would remain disabled on the terminal. The reason for the error likely has more to do with the way you are running the script: for whatever reason, it causes stty echo to fail. Line 352 assumes the failure is due to stty echo not being a valid command, so on failure (||) it tries a different method (set -o echo) of enabling echo. If I try to run that in my terminal, I also get an error (bash 4.2):
-bash: set: echo: invalid option name
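Since the underlying cause is that there is no TTY for stty to operate on when the command runs over ssh, one workaround (my assumption, not something the easyrsa docs prescribe) is to force pseudo-terminal allocation with ssh -tt:
# force a pseudo-terminal even though ssh's own stdin is not a terminal,
# so that stty echo inside easyrsa has a TTY to act on
sshpass -p "$PASSWORD" ssh -tt username@192.168.1.105 "/home/username/makevpn.sh $NAME $PASSWORD"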
I have two scripts on different servers, which I use to automate a small process.
script1 starts script2 using this command:
ssh -i /pathToKeyFile/keyfile.pem user@server2 'bash -s < /pathToScriptFile/script2.sh'
In script2.sh I have a "case" question:
#!/bin/bash
# Ask to start up JBOSS
read -p "DB restore completed. Start JBOSS and FACADE (y/n)" startJBOSS
case "$startJBOSS" in
y|Y ) echo "Starting JBOSS and FACADE";;
n|N ) echo "Stopping here"
exit;;
* ) echo "Invalid option..."
exit;;
esac
echo "More commands here"
exit
So when I execute script1.sh, it works fine and starts script2 on the remote server.
But script2 fails with this error:
bash: line 5: syntax error near unexpected token `)'
bash: line 5: ` y|Y ) echo "Starting JBOSS and FACADE";;'
If I execute script2.sh directly on the remote server, it works as expected.
I also tried putting both script files on one server. Of course the command to start script2.sh is different in that case, but again both work as expected.
I cannot figure out why script2.sh fails when it is started from another script located on another server. I assume the script2.sh code itself is correct, as it works when run separately.
The problem is that read reads from stdin -- the same place your code is coming from.
Thus, instead of reading a line from the user, it reads a line from the source file itself, consuming the case command and leaving the rest of the source syntactically invalid.
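You can see the effect with a minimal repro (a hypothetical one-liner, not taken from your scripts):
# "echo ok" never runs: read consumes it as its input line
printf 'read x\necho ok\necho "read grabbed: $x"\n' | bash -s
# output: read grabbed: echo ok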
Simple Answer: Don't Do That.
bash -s <filename makes sense when the <filename is coming from somewhere not accessible to the copy of bash (like the other side of the SSH connection, or a file that can only be read by a different user), but that's not the case for your example. Thus, you can just stop using the -s argument and the redirection:
ssh -i /pathToKeyFile/keyfile.pem user@server2 'bash /pathToScriptFile/script2.sh'
...or make the prompt conditional...
Another approach is to make the read conditional on there actually being a user listening at the TTY:
if [[ -t 0 ]]; then # test whether FD 0, stdin, is a TTY
read -p "DB restore completed. Start JBOSS and FACADE (y/n)" startJBOSS
else
startJBOSS=y # no TTY, so assume yes
fi
...or make the prompt read from /dev/tty, and make sure SSH passes it through.
An alternate approach is to read from /dev/tty explicitly, and then to arrange for that to be valid in the context of your script by passing appropriate arguments to ssh:
if read -p "DB restore completed. Start JBOSS and FACADE (y/n)" startJBOSS </dev/tty; then
: "read $startJBOSS from user successfully" # <- will be logged if run with set -x
else
echo "Unable to read result from user; is this being run with a TTY?" >&2
exit 1
fi
...and then, on the other side, using the -t argument to SSH, to force there to be a TTY (if one is available to SSH itself; if not, it won't have a means to read from the user out-of-band either):
ssh -t -i /pathToKeyFile/keyfile.pem user@server2 'bash -s < /pathToScriptFile/script2.sh'
First, I don't know whether I'm talking about STDIN or STDOUT, but this is what I want to achieve:
There's a program that exports a database from a distant server and sends the output as gzipped content.
I want to unzip the content and parse it.
If it's OK, then import it; otherwise print an error message. I don't want to write any temporary file to disk, so I want to handle everything directly from the standard streams:
someExportCommand > /dev/stdin #seems not work
#I want to write a message here
echo "Export database done"
cat /dev/stdin > gunzip > /dev/stdin
echo "Unzip done"
if [[ "$mycontentReadFromSTDIN" =* "password error" ]]; then
echo "error"
exit 1
fi
#I want to echo that we begin impor"
echo "Import begin"
cat /dev/stdin | mysql -u root db
#I want to echo that import finished
echo "Import finish"
The challenge here is not to write to a physical file. It would be easier if I could, but I want to do it the hard way. Is it possible, and how?
A literal implementation of what you're asking for (not a good idea, but doing exactly what you asked) might look like the following:
This is a bad idea for several reasons:
If a database is large enough to be bothering with, trying to fit it in memory, and especially in a shell variable, is a bad idea.
In order to fit binary data into a shell variable, it needs to be encoded (as with base64, uuencode, or other tools). This makes it even larger than it was before, and also adds performance overhead.
...however, the bad-idea code, as requested:
#!/usr/bin/env bash
set -o pipefail # if any stage of a pipeline fails, count the whole pipeline as failed
if exportOutputB64=$(someExportCommand | base64); then
echo "Export database done" >&2
else
echo "Export database reports failure" >&2
exit 1
fi
if exportOutputDecompressedB64=$(base64 --decode <<<"$exportOutputB64" | gunzip -c | base64); then
echo "Decompress of export done" >&2
else
echo "Decompress of export failed" >&2
exit 1
fi
unset exportOutputB64
if grep -q 'password error' < <(base64 --decode <<<"$exportOutputDecompressedB64"); then
echo "Export contains password error; considering it a failure" >&2
exit 1
fi
echo "Import begin"
mysql -u root db < <(base64 --decode <<<"$exportOutputDecompressedB64")
If I were writing this myself, I'd just set up a pipeline that processes the whole thing in-place, and uses pipefail to ensure that errors in early stages are detected:
set -o pipefail
someExportCommand | gunzip -c | mysql -u root db
The important thing about a pipeline is that all parts of it run at the same time. Thus, someExportCommand is still running when mysql -u root db starts. Consequently, there's no need for a large buffer anywhere (in memory, or on disk) to store your database contents.
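If you also need to know which stage failed, bash records each stage's exit status in the PIPESTATUS array; a sketch, still assuming the hypothetical someExportCommand:
set -o pipefail
someExportCommand | gunzip -c | mysql -u root db
status=("${PIPESTATUS[@]}") # one exit status per pipeline stage
if (( status[0] != 0 )); then
echo "export failed" >&2
elif (( status[1] != 0 )); then
echo "decompression failed" >&2
elif (( status[2] != 0 )); then
echo "import failed" >&2
fi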
The requirement to not use a temporary file seems extremely misdirected; but you can avoid it by reading into a shell variable, or perhaps an array.
Any error message is likely to be on stderr, not stdout. But you should examine the program's exit status instead of looking for whether it prints an error message.
#!/bin/bash
result=$(someExportCommand) || exit
At this point, the script will have exited if there was a failure; and otherwise, result contains its output.
Now, similarly to error messages, status messages, too, should be printed to standard error, not standard output. A common convention is also to include the name of the script in the message.
echo "$0: Import begin" >&2
Now, pass the variable to mysql.
mysql -u root db <<<"$result"
Notice that the <<<"here string" syntax is a Bash feature; you can't use it with /bin/sh. If you need the script to be portable to sh, the standard solution is still to use a pipe:
printf '%s\n' "$result" | mysql -u root db
Finally, print the status message to stderr again.
echo "$0: Import finished" >&2
Using a shell variable for a long string is not particularly efficient or elegant; capturing the output into a temporary file is definitely the recommended approach.
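For comparison, the temporary-file version is short; a sketch, again using the hypothetical someExportCommand and assuming the export really is gzip-compressed:
#!/bin/sh
tmpfile=$(mktemp) || exit
trap 'rm -f "$tmpfile"' EXIT
someExportCommand > "$tmpfile" || exit # capture the export, bail out on failure
gunzip -c "$tmpfile" | mysql -u root db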
I'm a sysadmin, and I frequently have a situation where I have a script or command that generates a lot of output, which I would only like to have emailed to me if the command fails. It's pretty easy to write a script that runs the command, collects the output, and emails it if the command fails, but I was thinking I should be able to write a command that:
1) accepts log info on stdin
2) waits for the inputting process to exit and sees what its exit status was
3a) if the inputting process exited cleanly, append the logging input to a normal log file
3b) if the inputting process failed, append the logging input to the normal log and also send me an email.
It would look something like this on the command line:
something_important | mailonfail.sh me@example.com /var/log/normal_log
That would make it really easy to use in crontabs.
I'm having trouble figuring out how to make my script wait for the writing process and evaluate how that process exited.
Just to be extra clear, here's how I can do it with a wrapper:
#! /bin/bash
something_important > output
ERR=$?
if [ "$ERR" -ne "0" ] ; then
cat output | mail -s "something_important failed" me@example.com
fi
cat output >> /var/log/normal_log
Again, that's not what I want: I want to write a script and pipe commands into it.
Does that make sense? How would I do that? Am I missing something?
Thanks Everyone!
-Dylan
Yes, it does make sense, and you are close.
Here is some advice:
#!/bin/sh
TEMPFILE=$(mktemp)
trap "rm -f $TEMPFILE" EXIT
if [ ! something_important > $TEMPFILE ]; then
mail -s 'something goes oops' -a $TEMPFILE you#example.net
fi
cat $TEMPFILE >> /var/log/normal.log
I won't use bashisms, so /bin/sh is fine
create a temporary file to avoid conflicts, using mktemp(1)
use trap to remove the file when the script exits, normally or not
if the command fails, attach the file, which may or may not be preferable to embedding it in the message body
if it's a big file you could even gzip it, but the attachment method will change:
# using mailx
gzip -c9 "$TEMPFILE" | uuencode fail.log.gz | mailx -s subject ...
# using mutt
gzip "$TEMPFILE"
mutt -a "$TEMPFILE.gz" -s ...
gzip -d "$TEMPFILE.gz"
etc.
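As for the pipe-fed mailonfail.sh the question asks for: the reading end of a pipe cannot see the writer's exit status at all, so the status has to be passed in-band. A sketch of one such convention (entirely my own, not part of the answer above; it reuses that answer's mail -a attachment style):
#!/bin/sh
# hypothetical mailonfail.sh ADDRESS LOGFILE -- invoke as:
#   { something_important; echo "EXIT:$?"; } | mailonfail.sh me@example.com /var/log/normal_log
# (collides if real log lines start with EXIT:)
addr=$1
logfile=$2
TEMPFILE=$(mktemp)
trap 'rm -f "$TEMPFILE"' EXIT
status=1
while IFS= read -r line; do
case $line in
EXIT:*) status=${line#EXIT:} ;; # trailing status line appended by the caller
*) printf '%s\n' "$line" >> "$TEMPFILE" ;;
esac
done
cat "$TEMPFILE" >> "$logfile"
if [ "$status" -ne 0 ]; then
mail -s 'something_important failed' -a "$TEMPFILE" "$addr"
fi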
I have a bash+expect script which has to connect as a normal user. I want to read a specific file that lives under the root user and store its contents in a variable to be used afterwards. How can I get the value?
My script is:
#!/bin/bash
set prompt ">>> "
set command ls /root/test1
expect << EOF
spawn su root
expect "password:"
send "rootroot\r"
expect "$prompt\r"
send "$command\r"
expect "$prompt\r"
expect -re "(.*)\r\n$prompt\r\n"
EOF
echo "$command"
if [ ! -f "$command" ]; then
echo "file is not exist"
else
echo "file is exist"
fi
Whenever I execute my shell script, it shows the following output:
ls: /root/: Permission denied
file is not exist
Basically test1 is there, but it is showing "file is not exist".
This question is very old, but I hope someone gets help from this answer.
--> You should use #!/usr/bin/expect or #!/bin/expect to use expect properly; expect << EOF might work, but that's not the conventional way to write the script.
--> Your heredoc should end with an EOF statement. Ex.
expect << EOF
<some stuff you want to do>
EOF
--> A basic thing about spawn: whatever you write in spawn will execute, but it will not affect the rest of the script; it's not like setting environment variables.
In short, spawn starts a new process, and your command does not run under that spawned process.
Ex.
#!/usr/bin/expect
spawn bash -c "su root '<your-cmd-as-root>'"
<some-expect-send-stuff-etc>
Now, in your script, $command should be written inside spawn, as shown in the example above.
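Putting this together for the original question, a minimal sketch (my own; it uses the rootroot password from the question and tests existence by grepping the output captured from the spawned su, since the outer script cannot read /root itself):
#!/usr/bin/env bash
# run the root-only ls inside the spawned process and capture everything
# expect prints, including the spawned command's output
output=$(expect <<'EOF'
spawn bash -c "su root -c 'ls /root/test1'"
expect "assword:"
send "rootroot\r"
expect eof
EOF
)
if printf '%s\n' "$output" | grep -q "No such file"; then
echo "file is not exist"
else
echo "file is exist"
fi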