Execute a script available on a remote machine - bash

I am trying to execute a script available on a remote machine using ssh. The output differs depending on whether it is run from the client via ssh or run after ssh'ing into the server.
The script trims a file:
tail -n 100 users.txt > temp.txt
rm users.txt
mv temp.txt users.txt
echo $(wc -l users.txt)
echo Done
Running from the client side:
client@client_mac $ ssh user@server_mac '~/path_to_script/demo_script.sh'
Output:
0 users.txt
Done
while after ssh'ing into the server:
client@client_mac $ ssh user@server_mac
user@server_mac $ cd ~/path_to_script/
user@server_mac $ ./demo_script.sh
Output:
100 users.txt
Done
How do we execute a script that is available on the remote machine? Is the syntax different?

Your script always looks for users.txt in the current working directory.
In the first example, the current working directory is your home directory; that's why you have to run the script with ~/path_to_script/demo_script.sh rather than ./demo_script.sh. As such, you are getting the line count of ~/users.txt in your output.
In the second example, you change the working directory from ~ to ~/path_to_script before executing the script, so the output contains the line count of ~/path_to_script/users.txt.
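If you want the one-shot ssh invocation to behave like the interactive session, change into the script's directory as part of the remote command; a minimal sketch using the paths from the question:

ssh user@server_mac 'cd ~/path_to_script && ./demo_script.sh'

Alternatively, make the script independent of the caller's working directory by having it cd to its own location before touching users.txt:

cd "$(dirname "$0")"   # at the top of demo_script.sh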

Related

bash scripting; copy and chmod and untar files in multiple remote servers

I am a newbie to bash scripting. I am trying to copy a gz file to remote servers (all CentOS machines), then change its permissions and untar it.
#!/bin/bash
pwd=/home/sujatha/downloads
cd $pwd
logfile=$pwd/log/`echo $0|cut -f1 -d'.'`.log
rm $logfile
touch $logfile
server="10.1.0.22"
for a in $server
do
scp /home/user/downloads/prometheus-2.0.0.linux-amd64.tar.gz sujatha@$a:/home/sujatha/downloads/titantest/
ssh -f sujatha@$a "tar -xvzf /home/sujatha/downloads/titantest/prometheus-2.0.0.linux-amd64.tar.gz"
sleep 2
echo
done
exit
The scp part is successful, but I am not able to do the remaining actions. After untarring, I also want to add more actions, like appending a variable to the config files, all through the script. Any advice would be helpful.
Run a bash session in your ssh connection:
ssh 192.168.2.9 'bash -c "ls; sleep 2; echo bye"'
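Applied to the question's script, the remaining actions can be grouped into a single remote session so they run in order. A sketch, where the config file name and the appended line are hypothetical placeholders:

# Copy the tarball, then untar and edit the config in one remote session.
for a in $server
do
  scp /home/user/downloads/prometheus-2.0.0.linux-amd64.tar.gz "sujatha@$a:/home/sujatha/downloads/titantest/"
  ssh "sujatha@$a" 'cd /home/sujatha/downloads/titantest &&
    tar -xvzf prometheus-2.0.0.linux-amd64.tar.gz &&
    echo "some_setting=some_value" >> prometheus-2.0.0.linux-amd64/example.conf'
done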

how to ssh a loop over several commands

I am new to ssh, so forgive me if my questions are trivial. I need to make a remote computer execute a set of commands several times, so I was thinking about making a loop using ssh. The problem is I don't know whether I should save those commands in a file and loop over that file, or whether I can somehow save them on the ssh side and just call them. I am really troubled. Also, if I make a loop like this:
i= 10
while i!= 0
execute command.text file ???
i--
How do I tell it to execute the file?
First, just try running the command you want in a shell on the remote machine.
You will find plenty of info on the internet about loops in shell/bash/csh/whatever shell you use:
For instance, assuming bash runs on the remote host (from http://www.bashoneliners.com/):
$ for ((i=1; i<=10; ++i)); do echo $i; done
Once you have that working, simply pass that statement to the ssh command from the machine on which you want to trigger the action:
$ ssh user@remotehost 'for ((i=1; i<=10; ++i)); do echo $i; done'
You can write a simple script that executes the needed commands and pass it to ssh.
For example:
script.sh, which will iterate over your commands 10 times:
for i in $(seq 10)
do
command1
command2
command3
done
and pass it to the remote server for execution:
$ ssh $SERVERNAME < script.sh
If you have a command.text file in which you have written all the commands one per line (you can edit it with vi or vim), you don't even need a loop; you can simply do:
cat command.text | awk '{print "ssh user@remotehost "$0" "}' | sh -x
For example if command.text contains:
ls -lart
cd /tmp
uname -a
This will run every command written in command.text via ssh user@remotehost. Note that each line runs in its own ssh session, so a command like cd /tmp will not affect the commands that follow it.

Escape whole file content to send it to a command over ssh

I'm trying to use a ssh command like :
ssh user@host command -m MYFILE
MYFILE is the content of a file in my local directory.
I'm using Bash. I've tried using printf "%q", but it's not working. MYFILE contains spaces, newlines, and single and double quotes...
Is there a way for my command to get the file content? I can't actually run anything other than command on the remote host.
How about first transferring the file to the remote machine:
scp MYFILE user@host:myfile &&
ssh user@host 'command -m "$(< myfile)" && rm myfile'
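If copying the file over is not an option, printf %q can still work, provided the remote account's shell is bash; a sketch under that assumption (%q may emit $'...' quoting, which plain POSIX sh does not understand):

# Quote the local file's content so it survives parsing by the remote shell.
# Note: command substitution strips trailing newlines from MYFILE.
quoted=$(printf '%q' "$(< MYFILE)")
ssh user@host "command -m $quoted"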

bash script parallel ssh remote command

I have a script that fires remote commands on several different machines through an ssh connection. The script goes something like:
for server in list; do
echo "output from $server"
ssh to server execute some command
done
The problem with this is evidently the time: for each server it needs to establish an ssh connection, fire the command, wait for the answer, and print it. What I would like is a script that tries to establish all the connections at once and returns echo "output from $server" and the command's output as soon as it gets them, so not necessarily in list order.
I've been googling this for a while but didn't find an answer. I cannot cancel the ssh session after the command runs, as one thread suggested, because I need the output, and I cannot use GNU parallel as suggested in other threads. I also cannot use any other tool; I cannot bring/install anything onto this machine, so the only usable tool is GNU bash, version 4.1.2(1)-release.
Another question is how ssh sessions like this are limited. If I simply paste 5+ or so lines of "ssh connect, do some command", it actually doesn't do anything, or executes only the first one from the list (it works if I paste 3-4 lines). Thank you.
Have you tried this?
for server in list; do
ssh user@server "command" &
done
wait
echo finished
Update: Start subshells:
for server in list; do
(echo "output from $server"; ssh user#server "command"; echo End $server) &
done
wait
echo All subshells finished
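As for the second question: if only the first few pasted connections go through, the server may be throttling concurrent connection attempts (sshd's MaxStartups setting limits simultaneous unauthenticated connections, for example). A hedged workaround that needs nothing beyond bash is to launch the sessions in batches:

# Start at most 4 background ssh sessions at a time (the batch size is arbitrary).
n=0
for server in $servers; do
  ssh user@"$server" "command" &
  n=$((n+1))
  if [ "$n" -ge 4 ]; then
    wait   # let the current batch finish before opening more connections
    n=0
  fi
done
wait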
There are several parallel SSH tools that can handle that for you:
http://code.google.com/p/pdsh/
http://sourceforge.net/projects/clusterssh/
http://code.google.com/p/sshpt/
http://code.google.com/p/parallel-ssh/
Also, you could be interested in configuration deployment solutions such as Chef, Puppet, Ansible, Fabric, etc. (see this summary).
A third option is to use a terminal broadcast such as pconsole.
If you can only use GNU tools, you can write your script like this:
for server in $servers ; do
( { echo "output from $server" ; ssh user@$server "command" ; } | \
sed -e "s/^/$server:/" ) &
done
wait
and then sort the output to reconcile the lines.
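For instance, capturing everything and then grouping the interleaved lines by server afterwards; a sketch where results.txt is an arbitrary name:

{
  for server in $servers ; do
    ( { echo "output from $server" ; ssh user@$server "command" ; } |
      sed -e "s/^/$server:/" ) &
  done
  wait
} > results.txt
# Group lines by server; -s keeps each server's lines in arrival order (GNU sort).
sort -t: -k1,1 -s results.txt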
I started with the shell hacks mentioned in this thread, then proceeded to something somewhat more robust: https://github.com/bearstech/pussh
It's my daily workhorse, and I basically run anything against 250 servers in 20 seconds (it's actually rate limited, otherwise the connection rate kills my ssh-agent). I've been using this for years.
See for yourself from the man page (clone it and run 'man ./pussh.1'): https://github.com/bearstech/pussh/blob/master/pussh.1
Examples
Show all servers' rootfs usage in descending order:
pussh -f servers df -h / |grep /dev |sort -rn -k5
Count the number of processors in a cluster:
pussh -f servers grep ^processor /proc/cpuinfo |wc -l
Show the processor models, sorted by occurrence:
pussh -f servers sed -ne "s/^model name.*: //p" /proc/cpuinfo |sort |uniq -c
Fetch a list of installed packages in one file per host:
pussh -f servers -o packages-for-%h dpkg --get-selections
Mass copy a file tree (broadcast):
tar czf files.tar.gz ... && pussh -f servers -i files.tar.gz tar -xzC /to/dest
Mass copy several remote file trees (gather):
pussh -f servers -o '|(mkdir -p %h && tar -xzC %h)' tar -czC /src/path .
Note that the pussh -u feature (upload and execute) was the main reason I wrote it; no other tool seemed able to do this. I still wonder whether that's the case today.
You may like the parallel-ssh project with the pssh command:
pssh -h servers.txt -l user command
It will output one line per server when the command is successfully executed. With the -P option you can also see the output of the command.

A script to ssh into a remote folder and check all files?

I have a public/private key pair set up so I can ssh to a remote server without having to log in. I'm trying to write a shell script that will list all the folders in a particular directory on the remote server. My question is: how do I specify the remote location? Here's what I've got:
#!/bin/bash
for file in myname@example.com:dir/*
do
if [ -d "$file" ]
then
echo $file;
fi
done
Try this:
for file in `ssh myname@example.com 'ls -d dir/*/'`
do
echo $file;
done
Or simply:
ssh myname@example.com 'ls -d dir/*/'
Explanation:
The ssh command accepts an optional command after the hostname; if a command is provided, it is executed on login instead of starting a login shell, and ssh passes the command's stdout through as its own stdout. Here we are simply passing the ls command.
ls -d dir/*/ is a trick to make ls skip regular files and list out only the directories.
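If you would rather keep the directory test on the remote side, closer to the original script's shape, a variant sketch:

# Run the loop remotely so [ -d ] tests the paths where they actually exist.
ssh myname@example.com 'for f in dir/*; do [ -d "$f" ] && echo "$f"; done'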
