Send shell script eval results by email

This question follows on from my own answer to one of my previous questions. The script I have given there works just fine. To round things off I would like to grab the output of the eval'd rsync command and email it off. I haven't got the foggiest idea how that should be done.
Update
In response to the comments I am providing more details in this question. The script in question copied over from my own answer to my other question:
#! /bin/bash
# Backup to EVBackup Server
local="/var/lib/automysqlbackup/daily/dbname"
#replace dbname as appropriate
svr="$(uname -n)"
remote="${svr/.example.com/}"
#strip out the .example.com bit to get the destination folder on the remote server
remote+="/"
evb="user@user.evbackup.com:"
#user will have to be replaced with your username
evb+=$remote
cmd='rsync -avz -e "ssh -i /backup/ssh_key" '
#your ssh key location may well be different
cmd+="$local "
cmd+=$evb
#at this point cmd will be something like
#rsync -avz -e "ssh -i /backup/ssh_key" /home bob@bob.evbackup.com:home-data
eval "$cmd"
When you run this script from the terminal you see the output from the rsync command. What I would like to be able to do is grab that output into a shell variable and then send it off by email, so I can keep a human aware of any issues that might have been encountered while doing the rsync.
I guess I will be able to figure out how to send emails from a shell script. What has me stumped is how I capture the payload for that email - i.e. the rsync output.
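A minimal sketch of one way to do it, capturing both stdout and stderr with command substitution and piping the result to the mail command (this assumes a working MTA is configured and a mail/mailx utility is installed; admin@example.com is a placeholder address):
output="$(eval "$cmd" 2>&1)"
status=$?
# put the exit code in the subject so failures stand out in the inbox
echo "$output" | mail -s "EVBackup rsync report from $svr (exit $status)" admin@example.com
The same captured variable can just as easily be piped to mailx or sendmail if that is what your system provides.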

Related

How can I redirect unwanted output on bash login over ssh?

I've got a script that uses ssh to log in to another machine and run a script there. My local script redirects all the output to a file. It works fine in most cases, but on certain remote machines I am capturing output that I don't want, and it looks like it's coming from stderr. Maybe it is because of the way bash processes entries in its start-up files, but this is just speculation.
Here is an example of some unwanted lines that end up in my file.
which: no node in (/usr/local/bin:/bin:/usr/bin)
stty: standard input: Invalid argument
My current method is to just strip out the predictable output that I don't want, but it feels like bad practice.
How can I capture output from only my script?
Here's the line that runs the remote script.
ssh -p 22 -tq user@centos-server "/path/to/script.sh" > capture
The ssh uses authorized_keys.
Edit: In the meantime, I'm going to work on directing the output from my script on machine B to a file, then copying it to A via scp and deleting it on B. But I would really like to be able to suppress the output completely, because when I run the script on machine A, it makes the output difficult to read.
To build on your comment on Raman's answer: have you tried suppressing .bashrc and .bash_profile as shown below?
ssh -p 22 -tq user@centos-server "bash --norc --noprofile /path/to/script.sh" > capture
If rc files are the problem on some servers, you should try to fix the broken rc files rather than your script/invocation, since they affect all (non-interactive) logins.
Try running ssh user@host 'grep -ls "which node" .*' on all your servers to find out whether any dotfiles contain "which node", as indicated by your error message.
Another thing to look out for is your shebang. You tagged this as bash and mention CentOS, but on a Debian/Ubuntu server #!/bin/sh gives you dash instead of (sh-compatible) bash.
You can redirect stderr (file descriptor 2) to /dev/null and append stdout to the capture file as follows:
ssh -p 22 -tq user@centos-server bash -c "/path/to/script.sh" 2>/dev/null >> capture
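If you would rather keep the noise for later inspection than discard it, and your script doesn't need a tty, dropping -t keeps the remote stderr as a separate stream (errors is an assumed filename here):
ssh -p 22 -q user@centos-server "/path/to/script.sh" > capture 2> errors
With -t a pseudo-terminal is allocated and the remote stdout and stderr arrive merged, which is worth knowing when you wonder where the stray lines in capture came from.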

SMB Client Commands Through Shell Script

I have a shell script, which I am using to access the SMB Client:
#!/bin/bash
cd /home/username
smbclient //link/to/server$ password -W domain -U username
recurse
prompt
mput backupfiles
exit
Right now, the script runs, accesses the server, and then asks for a manual input of the commands.
Can someone show me how to get the recurse, prompt, mput backupfiles and exit commands to be run by the shell script, please?
I worked out a solution to this, and am sharing it for future reference.
#!/bin/bash
cd /home/username
smbclient //link/to/server$ password -W domain -U username << SMBCLIENTCOMMANDS
recurse
prompt
mput backupfiles
exit
SMBCLIENTCOMMANDS
This will feed the commands between the two SMBCLIENTCOMMANDS markers to the smbclient prompt.
smbclient accepts the -c flag for this purpose.
-c|--command command string
command string is a semicolon-separated list of commands to be executed instead of
prompting from stdin.
-N is implied by -c.
This is particularly useful in scripts and for printing stdin to the server, e.g.
-c 'print -'.
For instance, you might run
$ smbclient -N \\\\Remote\\archive -c 'put /results/test-20170504.xz test-20170504.xz'
smbclient disconnects when it is finished executing the commands.
smbclient //link/to/server$ password -W domain -U username -c "recurse;prompt;mput backupfiles"
I would have commented on Calchas's answer, which is the correct approach but did not directly answer the OP's question; however, I am new and don't have the reputation to comment.
Note that the -c flag listed above takes a semicolon-separated list of commands (as documented in other answers); adding recurse and prompt enables the mput to copy without prompting.
You may also consider using the -A flag to read credentials from a file (or from a command that decrypts a file to pass to -A) to fully automate this script:
smbclient //link/to/server$ password -A ~/.smbcred -c "recurse;prompt;mput backupfiles"
Where the file format is:
username = <username>
password = <password>
domain = <domain>
workgroup = <workgroup>
workgroup is optional, as is domain, but they are usually needed if you are not using a domain\username formatted username.
I suspect this post is WAY too late to be useful to this particular need, but it may be useful to other searchers, since this thread led me to the more elegant answer through -c and semicolons.
I would take a different approach, using autofs with SMB. Then you can eliminate the smbclient/FTP-like approach and refactor your shell script to use other tools like rsync to move your files around. This way your credentials aren't stored in the script itself, either. You can bury them somewhere on your filesystem and make them readable only by root and no one else.
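As a rough sketch of that approach (the mount point, map file and share names below are assumptions, not taken from the question), describe the share to autofs and let the script copy to the mounted path:
# /etc/auto.master: hand the /mnt/smb tree to a map file
/mnt/smb  /etc/auto.smb  --timeout=60
# /etc/auto.smb: mount //server/share on demand as /mnt/smb/backup
backup  -fstype=cifs,credentials=/root/.smbcred  ://server/share
Note that mount.cifs expects its credentials file as username=value, password=value, domain=value lines, a slightly different format from smbclient's -A file. The backup then becomes an ordinary local copy, e.g. rsync -a /home/username/backupfiles/ /mnt/smb/backup/.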

bash script not capturing stdout, just stderr

I have the following script (let's call it move_site.sh) that copies a website directory structure to another server
#!/bin/bash
scp -r /usr/local/apache2/htdocs/$1 http@$2:/local/htdocs 1>$1$2.out 2>&1
So, calling it from the command line, I pass it the website directory name and the destination server, as such:
nohup ./move_site.sh site1 server1 &
However, the resulting file, named site1server1.out, contains only stderr messages, if any.
Can someone tell me how I can get the file and directory names that are copied, included in the output file, so that I have some kind of record?
Thanks.
A quick try:
Maybe it is because scp doesn't print anything to stdout when everything goes fine.
Have a try: run your scp command outside the script; most probably you won't get anything on stdout. (Redirecting nothing to $1$2.out is still nothing :))
I don't think it is possible with scp, but with rsync you can track what has been transferred on stdout. So changing scp -r to rsync -r -v -e ssh should do the trick (at least if you can go for rsync instead of scp).
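A sketch of the script with that substitution (same arguments as before; -e ssh tells rsync to use ssh as its transport, and -v makes it list each transferred file on stdout):
#!/bin/bash
rsync -r -v -e ssh /usr/local/apache2/htdocs/$1 http@$2:/local/htdocs 1>$1$2.out 2>&1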

Running commands on a remote Unix server

I am working on a shell script. It is used to log in to another remote server and check for the existence of a file; if the file doesn't exist it is created, and if it does, nothing happens.
I am trying to do this by,
ssh -i private_key server_name 'script to create new file'
After spending a long time wondering why it was not working, it struck me that a variable set on the machine from which I am doing the ssh will not be available on the server it connects to. So I have to pass the variable along, but I couldn't manage to do so. The login part, however, works smoothly.
Please help.
If it's OK to update modification time of the file:
ssh -i keyfile server "touch '$newfile'"
otherwise
ssh -i keyfile server "[ -f '$newfile' ] || touch '$newfile'"
Those should work assuming your login shell is a Bourne-type shell.
Use double quotes, or switch up the quotes if you need single quotes for some parts.
"foo $bar baz"
'foo '"$bar"' baz'
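Putting it together, a minimal sketch (the filename is hypothetical): the outer double quotes let your local shell expand $newfile before ssh sends the command, and the single quotes inside protect the expanded value on the remote side:
newfile="/tmp/marker.txt"
ssh -i private_key server_name "[ -f '$newfile' ] || touch '$newfile'"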

How to automate password entry?

I want to install a software library (SWIG) on a list of computers (Jenkins nodes). I'm using the following script to automate this somewhat:
NODES="10.8.255.70 10.8.255.85 10.8.255.88 10.8.255.86 10.8.255.65 10.8.255.64 10.8.255.97 10.8.255.69"
for node in $NODES; do
scp InstallSWIG.sh root@$node:/root/InstallSWIG.sh
ssh root@$node sh InstallSWIG.sh
done
This way it's automated, except for the password requests that occur for both the scp and ssh commands.
Is there a way to enter the passwords programmatically?
Security is not an issue. I’m looking for solutions that don’t involve SSH keys.
Here’s an expect example that sshs in to Stripe’s Capture The Flag server and enters the password automatically.
expect <<< 'spawn ssh level01#ctf.stri.pe; expect "password:"; send "e9gx26YEb2\r";'
With SSH the right way to do it is to use keys instead.
# ssh-keygen
and then copy the ~/.ssh/id_rsa.pub file to the remote machine (root@$node), appending it to the remote user's .ssh/authorized_keys file.
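On most systems the ssh-copy-id helper that ships with OpenSSH automates exactly that step:
ssh-copy-id root@$node
It appends the key to the remote authorized_keys file and fixes the permissions for you.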
You can perform the task using empty, a small utility from SourceForge. It's similar to expect but probably more convenient in this case. Once you have installed it, your first scp will be accomplished by the following two commands:
./empty -f scp InstallSWIG.sh root@$node:/root/InstallSWIG.sh
echo YOUR_SECRET_PASSWORD | ./empty -s -c
The first one starts your command in the background, tricking it into thinking it's running in interactive mode on a terminal. The other one sends it data on stdin. Of course, putting your password anywhere on the command line is risky, due to shell history being preserved, other users being able to see it in ps results, etc. Not secure either, but a bit better, would be to store the password in a file and redirect the second command's input from that file instead of using echo and a pipe.
After copying to the server, you can run the script in a similar manner:
./empty -f ssh root@$node sh InstallSWIG.sh
echo YOUR_SECRET_PASSWORD | ./empty -s -c
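Following the file-based variant suggested above, a sketch with the password stored in /root/.swig_pass (an assumed path, readable by root only; the file must end with a newline, just as echo supplies one):
./empty -f ssh root@$node sh InstallSWIG.sh
./empty -s -c < /root/.swig_pass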
You could look into setting up passwordless ssh keys for that. Establishing Batch Mode Connections between OpenSSH and SSH2 is a starting point, you'll find lots of information on this topic on the web.
Wes's answer is the correct one, but if you're keen on something dirty and slow, you can use expect to automate this.
