Automate SSH and running commands - bash

I am trying to automate SSH into a machine and then run some commands. However, I am getting stuck at the SSH portion:
for h in ${hosts[*]}; do
ssh -i foo.pem bc2-user@$h
echo "here"
sudo bash
cd /data/kafka/tmp/kafka-logs/
exit
exit
done
As soon as I run this script, I am able to SSH in but the automation stops at this message:
********************************************************************************
This is a private computer system containing information that is proprietary
and confidential to the owner of the system. Only individuals or entities
authorized by the owner of the system are allowed to access or use the system.
Any unauthorized access or use of the system or information is strictly
prohibited.
All violators will be prosecuted to the fullest extent permitted by law.
********************************************************************************
Last login: Thu Dec 10 10:19:23 2015 from 10.81.120.55
-bash: ulimit: open files: cannot modify limit: Operation not permitted
When I press control + d, my echo "here" command executes and then my script exits, without performing the rest of the commands.
I read around and tried this, but I am getting a syntax error:
./kafka_prefill_count.sh: line 38: warning: here-document at line 26 delimited by end-of-file (wanted `EOF')
./kafka_prefill_count.sh: line 39: syntax error: unexpected end of file
script:
for h in ${hosts[*]}; do
ssh -i foo.pem bc2-user@$h << EOF
echo "here"
sudo bash
cd /data/kafka/tmp/kafka-logs/
ls | grep "$dir_name"
exit
exit
bash -l
EOF
done

Your current script isn't really how you would execute commands remotely using ssh.
It should look more like this:
ssh -i foo.pem user@host 'echo "here" ; hostname'
(The hostname command is just an example to prove it's running on the other machine.)
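Applied to your loop, that idea looks something like this (a sketch reusing the hosts array, key file, and path from your script; sudo -n makes sudo fail immediately instead of hanging on a password prompt if one would be required):
for h in "${hosts[@]}"; do
    # the whole remote command line is passed to ssh as a single argument
    ssh -i foo.pem "bc2-user@$h" 'echo "here"; sudo -n ls /data/kafka/tmp/kafka-logs/'
done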
Just saw your edit:
EOF needs to be all the way to the left.
ssh -i foo.pem bc2-user@$h << EOF
echo "here"
sudo bash
cd /data/kafka/tmp/kafka-logs/
ls | grep "$dir_name"
exit
exit
bash -l
EOF
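(Side note: if you would rather keep the here-document body indented, bash also supports the <<-EOF form, which strips leading tabs, and only tabs, from each body line and from the closing delimiter:)
ssh -i foo.pem bc2-user@$h <<-EOF
	echo "here"
	EOF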

Try this with two here documents, one for ssh and one for bash:
ssh -i foo.pem bc2-user@$h << EOF1
echo "remote"
sudo bash << EOF2
cd /data/kafka/tmp/kafka-logs/
ls | grep "$dir_name"
EOF2
EOF1
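One caveat with the unquoted EOF1: the local shell expands everything in both here-documents before ssh runs, which is why $dir_name above is filled in locally. Anything meant to be expanded on the remote host instead needs an escaped dollar sign, for example:
ssh -i foo.pem bc2-user@$h << EOF1
echo "expanded locally: $dir_name"
echo "expanded remotely: \$(hostname)"
EOF1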

Related

Create and write systemd service from Shell script Failed [duplicate]

This question already has answers here:
How do I use sudo to redirect output to a location I don't have permission to write to?
sudo cat << EOF > File doesn't work, sudo su does
I am trying to automate the addition of a repository source in my Arch's pacman.conf file by using the echo command in my shell script. However, it fails like this:
sudo echo "[archlinuxfr]" >> /etc/pacman.conf
sudo echo "Server = http://repo.archlinux.fr/\$arch" >> /etc/pacman.conf
sudo echo " " >> /etc/pacman.conf
-bash: /etc/pacman.conf: Permission denied
If I make changes to /etc/pacman.conf manually using vim, by doing
sudo vim /etc/pacman.conf
and quitting vim with :wq, everything works fine and my pacman.conf has been updated without "Permission denied" complaints.
Why is this so? And how do I get sudo echo to work? (btw, I tried using sudo cat too but that failed with Permission denied as well)
As @geekosaur explained, the shell does the redirection before running the command. When you type this:
sudo foo >/some/file
Your current shell process makes a copy of itself that first tries to open /some/file for writing, then if that succeeds it makes that file descriptor its standard output, and only if that succeeds does it execute sudo. This is failing at the first step.
If you're allowed (sudoer configs often preclude running shells), you can do something like this:
sudo bash -c 'foo >/some/file'
But I find a good solution in general is to use | sudo tee instead of > and | sudo tee -a instead of >>. That's especially useful if the redirection is the only reason I need sudo in the first place; after all, needlessly running processes as root is precisely what sudo was created to avoid. And running echo as root is just silly.
echo '[archlinuxfr]' | sudo tee -a /etc/pacman.conf >/dev/null
echo 'Server = http://repo.archlinux.fr/$arch' | sudo tee -a /etc/pacman.conf >/dev/null
echo ' ' | sudo tee -a /etc/pacman.conf >/dev/null
I added > /dev/null on the end because tee sends its output to both the named file and its own standard output, and I don't need to see it on my terminal. (The tee command acts like a "T" connector in a physical pipeline, which is where it gets its name.) And I switched to single quotes ('...') instead of doubles ("...") so that everything is literal and I didn't have to put a backslash in front of the $ in $arch. (Without the quotes or backslash, $arch would get replaced by the value of the shell parameter arch, which probably doesn't exist, in which case the $arch is replaced by nothing and just vanishes.)
So that takes care of writing to files as root using sudo. Now for a lengthy digression on ways to output newline-containing text in a shell script. :)
To BLUF it, as they say, my preferred solution would be to just feed a here-document into the above sudo tee command; then there is no need for cat or echo or printf or any other commands at all. The single quotation marks have moved to the sentinel introduction <<'EOF', but they have the same effect there: the body is treated as literal text, so $arch is left alone:
sudo tee -a /etc/pacman.conf >/dev/null <<'EOF'
[archlinuxfr]
Server = http://repo.archlinux.fr/$arch
EOF
But while that's how I'd do it, there are alternatives. Here are a few:
You can stick with one echo per line, but group all of them together in a subshell, so you only have to append to the file once:
(echo '[archlinuxfr]'
echo 'Server = http://repo.archlinux.fr/$arch'
echo ' ') | sudo tee -a /etc/pacman.conf >/dev/null
If you add -e to the echo (and you're using a shell that supports that non-POSIX extension), you can embed newlines directly into the string using \n:
# NON-POSIX - NOT RECOMMENDED
echo -e '[archlinuxfr]\nServer = http://repo.archlinux.fr/$arch\n ' |
sudo tee -a /etc/pacman.conf >/dev/null
But as it says above, that's not POSIX-specified behavior; your shell might just echo a literal -e followed by a string with a bunch of literal \ns instead. The POSIX way of doing that is to use printf instead of echo; it automatically treats its argument like echo -e does, but doesn't automatically append a newline at the end, so you have to stick an extra \n there, too:
printf '[archlinuxfr]\nServer = http://repo.archlinux.fr/$arch\n \n' |
sudo tee -a /etc/pacman.conf >/dev/null
With either of those solutions, what the command gets as an argument string contains the two-character sequence \n, and it's up to the command program itself (the code inside printf or echo) to translate that into a newline. In many modern shells, you have the option of using ANSI quotes $'...', which will translate sequences like \n into literal newlines before the command program ever sees the string. That means such strings work with any command whatsoever, including plain old -e-less echo:
echo $'[archlinuxfr]\nServer = http://repo.archlinux.fr/$arch\n ' |
sudo tee -a /etc/pacman.conf >/dev/null
But, while more portable than echo -e, ANSI quotes are still a non-POSIX extension.
And again, while those are all options, I prefer the straight tee <<EOF solution above.
The problem is that the redirection is being processed by your original shell, not by sudo. Shells are not capable of reading minds and do not know that that particular >> is meant for the sudo and not for it.
You need to:
quote the redirection (so it is passed on to sudo)
and use sudo -s (so that sudo uses a shell to process the quoted redirection).
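In other words, something like this (a sketch; note that some sudo versions escape shell metacharacters in arguments given with -s, in which case the sudo bash -c form from the answer above is the more dependable spelling):
sudo -s 'echo "[archlinuxfr]" >> /etc/pacman.conf'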
http://www.innovationsts.com/blog/?p=2758
As the instructions above are not that clear, here are the instructions from that blog post, with examples so it is easier to see what you need to do.
$ sudo cat /root/example.txt | gzip > /root/example.gz
-bash: /root/example.gz: Permission denied
Notice that it’s the second command (the gzip command) in the pipeline that causes the error. That’s where our technique of using bash with the -c option comes in.
$ sudo bash -c 'cat /root/example.txt | gzip > /root/example.gz'
$ sudo ls /root/example.gz
/root/example.gz
We can see from the ls command's output that the compressed file creation succeeded.
The second method is similar to the first in that we’re passing a command string to bash, but we’re doing it in a pipeline via sudo.
$ sudo rm /root/example.gz
$ echo "cat /root/example.txt | gzip > /root/example.gz" | sudo bash
$ sudo ls /root/example.gz
/root/example.gz
sudo bash -c 'echo "[archlinuxfr]" >> /etc/pacman.conf'
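A sketch of the same idea handling all three lines from the question at once; the backslash keeps $arch literal inside the double quotes that the inner bash sees:
sudo bash -c 'printf "%s\n" "[archlinuxfr]" "Server = http://repo.archlinux.fr/\$arch" " " >> /etc/pacman.conf'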
STEP 1 create a function in a bash file (write_pacman.sh)
#!/bin/bash
function write_pacman {
tee -a /etc/pacman.conf > /dev/null << 'EOF'
[archlinuxfr]
Server = http://repo.archlinux.fr/$arch
EOF
}
The quoted 'EOF' prevents the $arch variable from being interpreted.
STEP 2 source the bash file
$ source write_pacman.sh
STEP 3 execute function
$ write_pacman
append a file (sudo cat):
cat <origin-file> | sudo tee -a <target-file>
append echo output to a file (sudo echo):
echo <origin> | sudo tee -a <target-file>
(EXTRA) disregard the output:
echo <origin> | sudo tee -a <target-file> >/dev/null

Why does my script return a segmentation fault when executing a command via ssh?

I'm writing a bash script that is supposed to run applications with arguments on a remote computer. On the computer, root just needs to write the command:
# ./MyApp -t '{"name":"john"}'
So on my own computer I wrote a script (simplified here from what I actually run, to show the problem):
#!/bin/bash
VAR="{\"name\":\"john\"}"
echo "VAR: $VAR"
COMMAND="./MyApp -t '$VAR'"
echo "COMMAND: $COMMAND"
ssh root@192.168.100.1 "$COMMAND"
Output:
$ ./my_script
VAR: {"name":"john"}
COMMAND: ./MyApp -t '{"name":"john"}'
root@192.168.100.1's password:
Segmentation fault
Why do I get a segmentation fault?

Passing Bash Command Through SSH - Executing Variable Capture

I am passing the following command straight through SSH:
ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i /key/path server@111.111.111.111 'bash -s' << EOF
FPM_EXISTS=`ps aux | grep php-fpm`
if [ ! -z "$FPM_EXISTS" ]
then
echo "" | sudo -S service php5-fpm reload
fi
EOF
I get the following error:
[2015-02-25 22:45:23] local.INFO: bash: line 1: syntax error near unexpected token `('
bash: line 1: ` FPM_EXISTS=root 2378 0.0 0.9 342792 18692 ? Ss 17:41 0:04 php-fpm: master process (/etc/php5/fpm/php-fpm.conf)
It's like it is trying to execute the output of ps aux | grep php-fpm instead of just capturing it in the variable. So, if I change the command to try to capture ls, it acts like it tries to execute that as well, of course returning "command not found" for each directory.
If I just paste the contents of the Bash script into a file and run it it works fine; however, I can't seem to figure out how to pass it through SSH.
Any ideas?
You need to wrap the starting EOF in single quotes. Otherwise ps aux | grep php-fpm gets interpreted by the local shell.
The command should look like this:
ssh ... server@111.111.111.111 'bash -s' << 'EOF'
FPM_EXISTS=$(ps aux | grep php-fpm)
if [ ! -z "$FPM_EXISTS" ]
then
echo "" | sudo -S service php5-fpm reload
fi
EOF
Check this: http://tldp.org/LDP/abs/html/here-docs.html (Section 19.7)
Btw, I would encourage you to consistently use $() instead of backticks for command substitution, because of the ability to nest them. You will have more fun, believe me. Check this for example: What is the benefit of using $() instead of backticks in shell scripts?
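To see the nesting point concretely, compare (a contrived example):
# with backticks, the inner substitution has to be escaped:
dir=`dirname \`command -v bash\``
# with $(), substitutions nest naturally:
dir=$(dirname $(command -v bash))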
You should wrap the EOF in single quotes.
ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i /key/path server@111.111.111.111 'bash -s' << 'EOF'
FPM_EXISTS=`ps aux | grep php-fpm`
if [ ! -z "$FPM_EXISTS" ]
then
echo "" | sudo -S service php5-fpm reload
fi
EOF

Function runs on CLI but not from bash script

I made a bash script to generate an ssh command that I use all the time, with a couple of options. At the end it echoes a string for whatever it is going to execute, and then executes it. Currently I'm trying to add the functionality to run a bash command once the ssh has completed, but it's giving an error like so:
bash: /bin/echo 'hello'; bash -l: No such file or directory
Yet if I copy the command it runs, and run it from outside the executable, it runs perfectly. Is there any reason I would be getting this error from inside the executable, and not from the CLI?
An example command it generates is:
pair -c "/bin/echo 'hello'"
Running: ssh ****@#.#.#.# -p443 -t "/bin/echo 'hello'; bash -l"
This is most likely caused by too much quoting. This error line
bash: /bin/echo 'hello'; bash -l: No such file or directory
shows that bash does not try to execute the command /bin/echo with the argument 'hello' followed by the command bash -l. Instead bash is trying to execute the command /bin/echo 'hello'; bash -l.
Compare:
$ ssh localhost -t "/bin/echo 'foo'; bash -l"
foo
$ logout # this is the new shell
Connection to localhost closed.
and:
$ ssh localhost -t '"/bin/echo 'foo'; bash -l"'
bash: /bin/echo foo; bash -l: No such file or directory
Connection to localhost closed.
The solution to this problem usually involves eval, but I cannot tell for sure unless I see more code from you.
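For instance, if the generator builds the entire command, inner quotes included, into one string, eval makes the local shell re-parse that string instead of handing the quotes to ssh literally. A hypothetical sketch of that approach:
# hypothetical reconstruction of the generator's last step
cmd="ssh user@host -p443 -t \"/bin/echo 'hello'; bash -l\""
echo "Running: $cmd"
eval "$cmd"    # re-parsed here, so ssh receives one properly quoted argument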

Execute a command on remote hosts via ssh from inside a bash script

I wrote a bash script which is supposed to read usernames and IP addresses from a file and execute a command on them via ssh.
This is hosts.txt :
user1 192.168.56.232
user2 192.168.56.233
This is myScript.sh :
cmd="ls -l"
while read line
do
set $line
echo "HOST:" $1#$2
ssh $1#$2 $cmd
exitStatus=$?
echo "Exit Status: " $exitStatus
done < hosts.txt
The problem is that execution seems to stop after the first host is done. This is the output:
$ ./myScript.sh
HOST: user1@192.168.56.232
total 2748
drwxr-xr-x 2 user1 user1 4096 2011-11-15 20:01 Desktop
drwxr-xr-x 2 user1 user1 4096 2011-11-10 20:37 Documents
...
drwxr-xr-x 2 user1 user1 4096 2011-11-10 20:37 Videos
Exit Status: 0
$
Why does it behave like this, and how can I fix it?
In your script, the ssh job gets the same stdin as the read command, and in your case it happens to eat up all the lines on the first invocation, so read line only gets to see the very first line of the input.
Solution: close stdin for ssh, or better, redirect it from /dev/null. (Some programs don't like having stdin closed.)
while read line
do
ssh server somecommand </dev/null # Redirect stdin from /dev/null
# for ssh command
# (Does not affect the other commands)
printf '%s\n' "$line"
done < hosts.txt
If you don't want to redirect from /dev/null for every single job inside the loop, you can also try one of these:
while read line
do
{
commands...
} </dev/null # Redirect stdin from /dev/null for all
# commands inside the braces
done < hosts.txt
# In the following, let's not override the original stdin. Open hosts.txt on fd3
# instead
while read line <&3 # execute read command with fd0 (stdin) backed up from fd3
do
commands... # inside, you still have the original stdin
# (maybe the terminal) from outside, which can be practical.
done 3< hosts.txt # make hosts.txt available as fd3 for all commands in the
# loop (so fd0 (stdin) will be unaffected)
# totally safe way: close fd3 for all inner commands at once
while read line <&3
do
{
commands...
} 3<&-
done 3< hosts.txt
The problem that you are having is that the SSH process consumes all of the stdin, so read doesn't see any of the input after the first ssh command has run. You can use the -n flag for SSH to prevent this from happening, or you can redirect /dev/null to the stdin of the ssh command.
See the following for more information:
http://mywiki.wooledge.org/BashFAQ/089
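Applied to the script from the question, only the ssh line needs to change; a sketch:
cmd="ls -l"
while read -r user ip; do
    echo "HOST: $user@$ip"
    ssh -n "$user@$ip" "$cmd"    # -n: ssh reads stdin from /dev/null,
                                 # so it cannot swallow the rest of hosts.txt
    echo "Exit Status: $?"
done < hosts.txt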
Make sure the ssh command does not read from hosts.txt, by using ssh -n.
I have a feeling your question is unnecessarily verbose.
Essentially you should be able to reproduce the problem with:
while read line
do
echo $line
done < hosts.txt
Which should work just fine. Do you edit the right file? Are there special characters in it? Check it with a proper editor (e.g. vim).
