shell script get ssh remote error - shell

I'm trying to make a remote mysqldump and afterwards download it with rsync, which all works fine, but I also want to log the remote errors that I currently only see in the terminal output.
I mean errors like this: mysqldump: Got error: 1044: Access denied for user 'root'@'localhost' to database 'information_schema' when using LOCK TABLES
This is the important part of my code:
MYSQL_CMD="mysqldump -u ${MYSQL_USER} -p${MYSQL_PASS} $db -r /root/mysql_${db}.sql"
$SSH -p ${SSH_PORT} ${SSH_USER}@${SSH_HOST} "${MYSQL_CMD}" >> "${LOGFILE}"
In my research I only found solutions for getting the exit code and return values.
I hope someone can give me a hint, thanks in advance.

These error messages are being written to stderr. You can redirect stderr to a file using 2> or 2>>, just like you do for stdout with > and >>. E.g.:
ssh ... 2>/tmp/logerrors
Note there is no space between 2 and >. You can merge stderr into the same file as stdout by replacing your >> "${LOGFILE}" with
ssh ... &>> "${LOGFILE}"
Again, no space in &>, which can also be written >&.
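Applied to the mysqldump-over-ssh line from the question, a minimal sketch could look like this (ERRLOG is a hypothetical path for a separate error log, and &>> requires bash 4 or later):
ERRLOG="/root/mysql_errors.log"
# keep normal output in LOGFILE and remote errors (like the LOCK TABLES message) in ERRLOG
$SSH -p ${SSH_PORT} ${SSH_USER}@${SSH_HOST} "${MYSQL_CMD}" >> "${LOGFILE}" 2>> "${ERRLOG}"
# or merge stdout and stderr into the same log file
$SSH -p ${SSH_PORT} ${SSH_USER}@${SSH_HOST} "${MYSQL_CMD}" &>> "${LOGFILE}"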

Related

How can I redirect unwanted output on bash login over ssh?

I've got a script that uses ssh to log in to another machine and run a script there. My local script redirects all the output to a file. It works fine in most cases, but on certain remote machines I am capturing output that I don't want, and it looks like it's coming from stderr. Maybe it is because of the way bash processes entries in its start-up files, but this is just speculation.
Here is an example of some unwanted lines that end up in my file.
which: no node in (/usr/local/bin:/bin:/usr/bin)
stty: standard input: Invalid argument
My current method is to just strip out the predictable output that I don't want, but it feels like bad practice.
How can I capture output from only my script?
Here's the line that runs the remote script.
ssh -p 22 -tq user@centos-server "/path/to/script.sh" > capture
The ssh uses authorized_keys.
Edit: In the meantime, I'm going to work on directing the output from my script on machine B to a file, then copying it to A via scp and deleting it on B. But I would really like to be able to suppress the output completely, because when I run the script on machine A, it makes the output difficult to read.
To build on your comment on Raman's answer: have you tried suppressing .bashrc and .bash_profile as shown below?
ssh -p 22 -tq user@centos-server "bash --norc --noprofile /path/to/script.sh" > capture
If rc files are the problem on some servers, you should try to fix the broken rc files rather than your script/invocation, since they affect all (non-interactive) logins.
Try running ssh user@host 'grep -ls "which node" .*' on all your servers to find out which dotfiles contain "which node", as indicated by your error message.
Another thing to look out for is your shebang. You tag this as bash and write CentOS but on a Debian/Ubuntu server #!/bin/sh gives you dash instead of (sh-compatible) bash.
You can redirect stderr (2) to /dev/null and send the rest to the log file as follows:
ssh -p 22 -tq user@centos-server bash -c "/path/to/script.sh" 2>/dev/null >> capture
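If you would rather keep that output for inspection instead of discarding it, a variant sketch is to split stderr into its own file. This assumes the forced tty isn't strictly needed: with -t the remote stderr is merged into the terminal stream, so it can only be separated when no tty is allocated (startup-noise.log is a hypothetical filename):
# without -t, remote stderr arrives on the local stderr stream and can be split off
ssh -p 22 -q user@centos-server "/path/to/script.sh" > capture 2> startup-noise.log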

BASH - Assign SSH output to variable

I've read all the threads that I could find regarding this issue that I am having, however I have not yet found a solution to my problem.
First let me say that I am currently attempting to do this work in a very restrictive environment so I do not have the option of messing with configurations or any other admin type functions.
code:
ssh -t username@host "sudo -u user hadoop fs -ls /"
Running this returns the output that I am looking for; however, the next line hangs and does not assign the output to the variable:
output=$(ssh -t username@host "sudo -u user hadoop fs -ls")
I have attempted any and all variations of this line that I could find. If I echo the variable, it just spits out a blank line. The reason for the -t option is because without it I was getting an error along the lines of:
sudo: no tty present and no askpass program specified
sudo: pam_authenticate: Conversation error
I really don't have any contingency plans if I can't get this single line to work, so if you can help, it would be greatly appreciated.
Please give this a shot. I was able to do it at least 10 times in a row:
output=$(sshpass -f <(printf '%s\n' $password) ssh user@host "sudo ls");
echo $output;
This command is using sshpass to pass the password non interactively to the ssh command.
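If sshpass is not an option, a sketch of the variable assignment without the forced tty (which is what usually interferes with command substitution) might look like the following. It assumes key-based ssh auth and that sudo on the remote host is configured without a password prompt and without requiretty; /tmp/ssh_err.log is a hypothetical file:
# -T disables pseudo-tty allocation; stderr goes to a separate file so it
# does not end up mixed into the captured value
output=$(ssh -T username@host "sudo -u user hadoop fs -ls /" 2>/tmp/ssh_err.log)
echo "$output"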

Direct mongo output to a file

I have a script that automate the rebuilding of mongo db's for our servers:
#!/bin/sh
mongo local:host 127.0.0.1 mongodb-create-ourdatabase.js > validate.txt
mongoimport --host 127.0.0.1 --db ourdatabase --collection ourUser --file create-ourUser.js > validate.txt
The output of the first line, where the database is created, is written to the file, but the output of the second line, where the collection ourUser is created, goes to the screen.
What am I missing?
First, both calls create a new, empty validate.txt file, so the second call clobbers the first call's result. I doubt that this is what you want, so you should change the second > to >> to append to your log file.
Second, programs write output through two channels: standard output (aka stdout, used for normal output and results) and standard error (aka stderr, used for warnings and errors). You cannot tell which stream a message went to just by looking at the screen.
To merge both streams and capture all process output, you have to redirect stderr to stdout, using 2&>1 (dup & close pipe 2=stderr to 1=stdout):
mongo local:host 127.0.0.1 mongodb-create-ourdatabase.js 2&>1 > validate.txt
mongoimport --host 127.0.0.1 --db ourdatabase --collection ourUser --file create-ourUser.js 2&>1 >> validate.txt
Thanks for the response, Jean-Francois. Unfortunately that did not work, but it was close. What worked was:
#!/bin/sh
mongo localhost:27017 mongodb-create-our-database.js 2>&1 > validate.txt
mongoimport --host 127.0.0.1 --db ourdatabase --collection ourUser --file create-ourUser.js >> validate.txt 2>&1
Using 2&>1 had the script looking for a file named 2, and I found an excellent explanation (scroll down to the 1st answer there), which worked for me.
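For reference, redirections are processed left to right, so cmd 2>&1 > file points stderr at wherever stdout was pointing at that moment (the terminal) and then moves only stdout to the file, while cmd > file 2>&1 sends both streams to the file. A quick way to see the difference:
# the error message still appears on the terminal; only the listing of . lands in out.txt
ls /nonexistent . 2>&1 > out.txt
# both the error message and the listing land in out.txt
ls /nonexistent . > out.txt 2>&1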

How can I capture the error returned from mysqldump and use it to take another action in a shell script?

I'm using mysqldump in a shell script to dump several schemas from a production environment to a local one.
schemas=(one two three)
read -p "Enter Username: " un
read -s -p "Enter Password: " pw
for schema in "${schemas[@]}"
do
:
mysqldump -h SERV -u $un --password=$pw $schema > /dev/null 2>&1 | mysql -uroot LOCAL
done
I'm redirecting stdout and stderr to /dev/null to suppress warnings and error messages, but I want to be able to catch the error and take another action based on the output (e.g. Access Denied, Not Found).
How can I capture the error returned from mysqldump and use it to take another action in a shell script?
For what it's worth, the $? variable always seems to be 0 after mysqldump completes, even if the stderr output is an access-denied error.
I did a little more research and found the answer here:
http://scratching.psybermonkey.net/2011/01/bash-how-to-check-exit-status-of-pipe.html
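The gist of that article is bash's PIPESTATUS array, which holds the exit status of each command in the last pipeline (which is why $? alone only reflects mysql, the last command). A sketch adapted to the loop above; dump_err.log is a hypothetical file used to hold mysqldump's stderr instead of sending it to /dev/null:
mysqldump -h SERV -u $un --password=$pw $schema 2>dump_err.log | mysql -uroot LOCAL
# PIPESTATUS must be read immediately after the pipeline finishes
if [ "${PIPESTATUS[0]}" -ne 0 ]; then
    echo "mysqldump failed for schema $schema:" >&2
    cat dump_err.log >&2
    # take another action here, e.g. skip this schema or abort
fi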

bash script not capturing stdout, just stderr

I have the following script (let's call it move_site.sh) that copies a website directory structure to another server
#!/bin/bash
scp -r /usr/local/apache2/htdocs/$1 http@$2:/local/htdocs 1>$1$2.out 2>&1
So, calling it from the command line, I pass it the website directory name and the destination server, like so:
nohup ./move_site.sh site1 server1 &
However, in the resulting file, which is named site1server1.out, there are only stderr messages, if any.
Can someone tell me how I can get the file and directory names that are copied, included in the output file, so that I have some kind of record?
Thanks.
A quick try:
Maybe it is because scp doesn't print anything to stdout when everything goes fine.
Give it a try: run your scp command outside the script; most probably you won't see anything on stdout. (Redirecting nothing to $1$2.out still gives you nothing.)
I don't think it is possible with scp, but with rsync you can track what has been transferred on stdout. So replacing scp -r with rsync -rv -e ssh should do the trick (at least if you can go with rsync instead of scp).
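A sketch of the rsync variant of the script above (the -e ssh transport and keeping the original redirections are assumptions; with -v each transferred file name is written to stdout and therefore ends up in the output file):
#!/bin/bash
# rsync lists every copied file on stdout with -v, giving the record the question asks for
rsync -rv -e ssh /usr/local/apache2/htdocs/$1 http@$2:/local/htdocs 1>$1$2.out 2>&1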
