I am writing a shell script on Solaris to check whether files on a remote host are done being written before transferring them to the local host. I have a skeleton, but there are certain parts I am not sure how to do. From a little reading, the command to check a file's size is stat -c %s LogFiles.txt, but I am not sure how to run that check on the remote host.
# Get File Size on Remote Host
INITIALSIZE=
sleep 5
# Get File Size on Remote Host Again
LATESTSIZE=
# Loop 5 times
for i in {1..5}
do
    if [ "$INITIALSIZE" -ne "$LATESTSIZE" ]
    then
        sleep 5
        # Get File Size on Remote Host
        LATESTSIZE=
    else
        scp -P 22 $id@$ip:$srcpath/\*.txt $destpath
        break
    fi
done
Assuming that your definition of 'done' is "file size constant for 5 seconds", you can simply use ssh as follows:
ssh user@remote.machine "command to execute"
The output can be piped or captured into a variable on the local machine, e.g. in your case:
latestsize=$( ssh user@remote.machine "<sizedeterminer> <file>" )
Passwordless (key-based) login would of course avoid the password prompt. See point 3.3 in this manual or an example here.
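Putting that together with your skeleton, a rough sketch could look like this (key-based login is assumed, and stat -c %s is a GNU-style call that may need replacing on your remote system, e.g. with wc -c < file):
#!/bin/bash
# Sketch only: assumes $id, $ip, $srcpath and $destpath are already set.
initialsize=$(ssh "$id@$ip" "stat -c %s $srcpath/LogFiles.txt")
for i in 1 2 3 4 5
do
    sleep 5
    latestsize=$(ssh "$id@$ip" "stat -c %s $srcpath/LogFiles.txt")
    if [ "$initialsize" -eq "$latestsize" ]
    then
        # size stopped changing, so transfer the files
        scp -P 22 "$id@$ip:$srcpath/*.txt" "$destpath"
        break
    fi
    initialsize=$latestsize
done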
I need to pass values from an array to a script on a remote host. The remote script creates files based on each array value.
I can do it like this:
for i in ${LIST[@]}
do ssh root@${servers} bash "/home/test.sh" "$i"
done
but this is rather slow, since it opens a new ssh session for every array element. If I instead pass the whole array in a single call:
ssh root@${servers} bash "/home/test.sh" "${LIST[@]}"
I get the error:
bash: line 1338: command not found
How can I do this?
Use the connection-sharing feature of ssh so that you only have a single, preauthenticated connection that is used by each ssh process in your loop.
# This is the socket all of the following ssh processes will use
# to establish a connection to the remote host.
socket=~/.ssh/ssh_mux
# This starts a background process that does nothing except keep the
# authenticated connection open on the specified socket file.
ssh -N -M -o ControlPath="$socket" root@${servers} &
# Each ssh process should start much more quickly, as it doesn't have to
# go through the authentication protocol each time.
for i in "${LIST[#]}"; do
# This uses the existing connection to avoid having to authenticate again
ssh -o ControlPath="$socket" root@${servers} "bash /home/test.sh '$i'"
# The above command is still a little fragile, as it assumes that $i
# doesn't have a single quote in its name.
done
# This closes the connection master
ssh -o ControlPath="$socket" -O exit root@${servers}
The alternative is to try to move your loop into the remote command, but this is fragile: the array isn't defined on the remote host, and there is no good way to transfer each element in a way that protects it. If you weren't concerned about word-splitting, you could use
ssh root@${servers} "for i in ${LIST[*]}; do bash /home/test.sh \$i; done"
but then you probably wouldn't be using an array in the first place.
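If you do need each element to survive intact, one possibility (just a sketch, assuming bash on both ends) is to quote every element with printf %q before building the remote loop:
# Quote each array element so the remote shell sees it as a single word.
quoted=$(printf '%q ' "${LIST[@]}")
ssh root@${servers} "for i in $quoted; do bash /home/test.sh \"\$i\"; done"
This keeps a single ssh session while still passing every element exactly as it was on the local side.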
My goal is to do the following:
1) Check how much free memory each GPU has on a specific server. I accomplish this with nvidia-smi --query-gpu=memory.free --format=csv.
2) Find the GPU with the maximum free memory. I accomplish this with my_cmd(). It works on the server I am currently logged into.
3) If the maximum free memory on the server I'm logged into is less than 1000 MiB, SSH into each of the other GPU servers in the cluster to find the maximum free memory available there. These servers are listed in tocheck.
My current issue:
The code below works when scriptuse is given the cd command, etc.
The code below fails when scriptuse is given my_cmd. It gives me the error:
bash: my_cmd: command not found.
Now, I think there's more than one problem here. First, I think I'm not providing my_cmd properly to the ssh command. Second, when I use my_cmd, I don't think I'm successfully sshing into the other servers.
Can anyone point out what is wrong and how to fix it?
The complete bash script is below.
#!/bin/bash
#https://stackoverflow.com/questions/45313313/nvidia-smi-command-in-bash-vs-in-terminal-for-maximum-of-an-array/45313404#45313404
my_cmd()
{
max_idx=0
max_mem=0
idx=0
{
read _; # discard first line (header)
while read -r mem _; do # for each subsequent line, read first word into mem
if (( mem > max_mem )); then # compare against maximum mem value seen
max_mem=$mem # ...if greater, then update both that max value
max_idx=$idx # ...and our stored index value.
fi
((++idx))
done
} < <(nvidia-smi --query-gpu=memory.free --format=csv)
echo "Maximum memory seen is $max_mem, at processor $idx"
}
tocheck=('4' '5' '6' '7' '8') #The GPUs to check
it1=1
#scriptuse="my_cmd"
scriptuse= "cd ~/spatial; pwd; echo $gpuval"
while [ $it1 -lt ${#tocheck[@]} ] ; do #While we still don't have enough free memory
echo $it1
gpuval=${tocheck[$it1]}
ssh gpu${gpuval} "${scriptuse}"
it1=$[it1+1]
done
EDIT
Thank you very much for the help, but my problem is not yet solved. I have done this:
1) Remove my_cmd from my bash script. It now looks like this:
#!/bin/bash
#https://stackoverflow.com/questions/45313313/nvidia-smi-command-in-bash-vs-in-terminal-for-maximum-of-an-array/45313404#45313404
tocheck=('4' '5' '6' '7' '8') #The GPUs to check
it1=1
scriptuse= "cd ~/spatial; echo $gpuval"
while [ $it1 -lt ${#tocheck[@]} ] ; do #While we still don't have enough free memory
echo $it1
gpuval=${tocheck[$it1]}
ssh gpu${gpuval} "${scriptuse}" /my_script.sh
it1=$[it1+1]
done
2) Create a separate bash script called my_script.sh that contains my_cmd:
#!/bin/bash
#https://stackoverflow.com/questions/45313313/nvidia-smi-command-in-bash-vs-in-terminal-for-maximum-of-an-array/45313404#45313404
max_idx=0
max_mem=0
idx=0
{
read _; # discard first line (header)
while read -r mem _; do # for each subsequent line, read first word into mem
if (( mem > max_mem )); then # compare against maximum mem value seen
max_mem=$mem # ...if greater, then update both that max value
max_idx=$idx # ...and our stored index value.
fi
((++idx))
done
} < <(nvidia-smi --query-gpu=memory.free --format=csv)
echo "Maximum memory seen is $max_mem, at processor $idx"
3) Ran chmod to ensure both files can be run.
4) Ensured both files exist on all GPUs in the cluster (they have a common storage).
5) Ran ./test_run.sh, which is the bash script from step 1.
I get the error:
./test_run.sh: line 8: cd ~/spatial; echo : No such file or directory
1
bash: /my_script.sh: No such file or directory
2
bash: /my_script.sh: No such file or directory
3
bash: /my_script.sh: No such file or directory
4
bash: /my_script.sh: No such file or directory
EDIT: The final solution
Thanks to the accepted answer below and the discussion in the comments, here's what ended up working:
1) Leave my_script as it is in the previous edit.
2) The file test_run should look like this:
#!/bin/bash
tocheck=('4' '5' '6' '7' '8') #The GPUs to check
it1=1
while [ $it1 -lt ${#tocheck[@]} ] ; do #While we still don't have enough free memory
echo $it1
gpuval=${tocheck[$it1]}
ssh gpu${gpuval} ~/spatial/my_script.sh
it1=$[it1+1]
done
I think the reason this works is that all of the GPUs on the cluster have a common storage, so they all have access to /user/spatial.
The environment your script is running in (your shell) is totally unrelated to the environment the remote host is running in (the remote shell). If you define a function my_cmd in your shell it will not be transmitted across the wire to the remote host's shell.
Try a simpler example:
$ foo() { echo foo; }
$ foo
foo
$ ssh remote-host foo
bash: foo: command not found
This simply isn't how SSH, Bash, and Linux/POSIX are designed. Now, ssh does update some parts of the remote environment (as detailed in man ssh), but this is limited to certain environment variables, not functions.
Notably, the remote shell might not even be the same type of shell as yours (e.g. yours might be Bash, but the remote shell might be Zsh), so it's generally not possible to transmit shell functions across ssh.
A much simpler and more reliable option is to create a shell script (rather than a function) that you intend to be run on the remote shell, and ensure that script exists on the remote machine. For example:
# Copy the script to the remote host's /tmp directory
scp my_cmd.sh remote-host:/tmp
# Invoke the script on the remote host
ssh remote-host /tmp/my_cmd.sh
Edit:
./test_run.sh: line 8: cd ~/spatial; echo : No such file or directory
Are you sure ~/spatial exists on the remote host?
bash: /my_script.sh: No such file or directory
Are you sure /my_script.sh exists on the remote host?
Again, your remote host is a wholly different environment. Just because a file or directory exists on your local machine doesn't mean it exists on the remote host unless you put it there.
Try ssh [remote-host] 'ls ~' and ssh [remote-host] 'ls /' - I bet you'll see the directory and file don't exist.
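If copying the script to every machine is a hassle, a related trick (just a sketch; it assumes bash is installed on the remote hosts) is to feed your local script to a remote shell over standard input with bash -s:
# Runs the local my_script.sh on each GPU host without copying it there first.
for gpuval in 4 5 6 7 8
do
    ssh "gpu${gpuval}" 'bash -s' < ./my_script.sh
done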
WARNING: newbie with bash shell scripting.
I've created a script to connect to multiple remote machines, one by one, and check whether a certain file already contains certain text. If it does, the script should move on to the next machine and make the same check; if not, it should append the text to the file and then move on to the next machine.
Currently, the script connects to the first remote machine and then does nothing. If I type exit to close the remote machine's connection, the script continues running, which does me no good because I'm no longer connected to the remote machine.
On a side note, I'm not even sure whether the rest of the code is correct, so please let me know if there are any glaring mistakes. This is actually my first attempt at writing a shell script from scratch.
#!/bin/bash
REMOTE_IDS=( root@CENSOREDIPADDRESS1
root@CENSOREDIPADDRESS2
root@CENSOREDIPADDRESS3
)
for REMOTE in "${REMOTE_IDS[@]}"
do
ssh -oStrictHostKeyChecking=no $REMOTE_IDS
if grep LogDNAFormat "/etc/syslog-ng/syslog-ng.conf"
then
echo $REMOTE
echo "syslog-ng already modified. Skipping."
exit
echo -
else
echo $REMOTE
echo "Modifiying..."
echo "\n" >> syslog-ng.conf
echo "### START syslog-ng LogDNA Logging Directives ###" >> syslog-ng.conf
echo "template LogDNAFormat { template(\"<key:CENSOREDKEY> <${PRI}>1 ${ISODATE} ${HOST} ${PROGRAM} ${PID} ${MSGID} - $MSG\n\");" >> syslog-ng.conf
echo "template_escape(no);" >> syslog-ng.conf
echo "};" >> syslog-ng.conf
echo "destination d_logdna {" >> syslog-ng.conf
echo "udp(\"syslog-a.logdna.com\" port(CENSOREDPORT)" >> syslog-ng.conf
echo "template(LogDNAFormat));" >> syslog-ng.conf
echo "};" >> syslog-ng.conf
echo "log {" >> syslog-ng.conf
echo "source(s_src);" >> syslog-ng.conf
echo "destination(d_logdna);" >> syslog-ng.conf
echo "};" >> syslog-ng.conf
echo "### END syslog-ng LogDNA logging directives ###" >> syslog-ng.conf
killall -s 9 syslog-ng
sleep 5
/etc/init.d/syslog start
echo -
fi
done
Great question: Automating procedures via ssh is a laudable goal.
Let's start off with the first error in your code:
ssh -oStrictHostKeyChecking=no $REMOTE_IDS
should be:
ssh -oStrictHostKeyChecking=no $REMOTE
But that won't do everything either. If you want ssh to run a set of commands, you can, but you'll need to pass those commands as a string argument to ssh.
ssh -oStrictHostKeyChecking=no $REMOTE 'Lots of code goes here - newlines ok'
For that to work, you'll need to have passwordless ssh configured (or you'll be prompted for credentials). This is covered in steps 1) and 2) in Alexei Grochev's post. One option for passwordless logins is to put public keys on the hosts you want to manage and, if necessary, change the IdentityFile in your local ~/.ssh/config (you may not need to do this if you are using a default public/private key pair).
You've also got to be careful about ssh stealing your stdin (I don't think you'll have a problem in your case). If you suspect that the ssh command is reading all of your stdin input, supply the -n parameter to ssh (again, I don't think your code suffers from this problem, but I didn't look too carefully).
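For illustration, a rough sketch of that shape (untested; it assumes key-based login is already in place, keeps your grep check, and reduces the append step to a comment):
for REMOTE in "${REMOTE_IDS[@]}"
do
    ssh -n -oStrictHostKeyChecking=no "$REMOTE" '
        if grep -q LogDNAFormat /etc/syslog-ng/syslog-ng.conf; then
            echo "syslog-ng already modified. Skipping."
        else
            echo "Modifying..."
            # append the LogDNA directives to syslog-ng.conf here,
            # then restart syslog-ng
        fi
    '
done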
I agree with tadman's comment that this is a good application for Ansible. However, I wouldn't learn Ansible for this task alone. If you intend on doing a lot of remote automation, Ansible would be well worth your time learning and applying to this problem.
What I would suggest is pssh and pscp. These tools are awesome and take care of the "for" loop for you. They also perform the ssh calls in parallel and collect the results.
Here are the steps I would recommend:
1) Install pssh (pscp comes along for the ride).
2) Write your bash program as a separate file. It's so much easier to debug, update, etc. when your program isn't buried in a bunch of echo statements. Those hurt. Even my original suggestion of ssh user@host 'long string of commands' is difficult to debug. Just create a program file that runs on the remote hosts and debug it on a remote host (as far as you can).
3) Now go back to your control host (with that bash program). Push it to all of the hosts under management with pscp. The syntax is as follows:
# Your bash program is at <local-file-path>
chmod +x <local-file-path>
pscp -h<hosts-file> -l root <local-file-path> <remote-file-path>
The -h option specifies a list of hosts, one per line. So the hosts file would look like this:
CENSOREDIPADDRESS1
CENSOREDIPADDRESS2
CENSOREDIPADDRESS3
Incidentally, if you did not set up your public/private keys, you can specify the -A parameter and pscp and pssh will ask you for the root user's password. This isn't great for automation, but if you are doing a one-time task it is a lot easier than setting up your public/private keys.
4) Now execute that program on the remote hosts:
pssh -h<hosts-file> -i <remote-file-path>
The -i parameter tells pssh to wait for the program to finish on all hosts and return the stdout and stderr results inline.
In summary, pssh/pscp are GREAT for small tasks like this. For larger tasks, consider Ansible (it basically works by sending Python scripts over ssh and executing them remotely). Puppet/Chef are way overkill for this, but they are fantastic tools for keeping your data center in the state that you want it in.
You can do this with Puppet/Chef.
But it can also be done with bash if you have patience. I don't want to give actual code because I think it's best to understand the logic first.
However, since you asked, here is the flow you should follow (see the sketch after this list):
make sure you have keys set up for all machines
create a config file listing all the servers
put all the servers into an array
create a loop to call each box and run your script (beforehand you will have to scp the script to the home dir on each box, so make sure it is good to run)
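A rough illustration of that flow (the names servers.conf and job.sh are made up for the example, and key-based login is assumed):
# Read one hostname per line from the config file into an array.
mapfile -t SERVERS < servers.conf
for host in "${SERVERS[@]}"
do
    # Push the script to the box's home dir, then run it there.
    scp job.sh "root@${host}:"
    ssh -n "root@${host}" 'bash ./job.sh'
done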
You can also do what you want in a better way, imho, and that's how I've done it before:
1) Make a script that reads your file, and put it on cron to run every minute or whatever interval is best, echoing the size of the file out to a log file.
2) All servers will have that script running, so now you just run your script to fetch the data across all servers (iterate through the array of servers in your config file).
That last part can also be done with PHP, where you have a web server instance reading the file. You could even create a web server with bash... since it's only for one task, it's not terribly insane.
Have fun.
This is what I want to do:
#!/bin/bash
# start the tunnel
ssh tunnel@hostA -L 6000:hostB:22 -N
# this is the problem: I need to move on to the next step after the tunnel is up
# the main process that I must run through the tunnel
sftp -oPort=6000 user@localhost:/home/you <<GETME
lcd /home/me/temp
get *.tar
bye
GETME
echo " Job Done"
# I hope this can be added:
# kill the tunnel after the sftp is done
Right now I use two PuTTY sessions to run sftp to hostB; I think maybe it can be done in one single script.
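One way to sketch this as a single script (untested; the 5-second wait for the tunnel to come up is an arbitrary assumption):
#!/bin/bash
# start the tunnel in the background and remember its PID
ssh -N -L 6000:hostB:22 tunnel@hostA &
tunnel_pid=$!
sleep 5   # crude wait for the tunnel to be ready
# main process that must run through the tunnel
sftp -oPort=6000 user@localhost:/home/you <<GETME
lcd /home/me/temp
get *.tar
bye
GETME
echo " Job Done"
# kill the tunnel now that the sftp is done
kill "$tunnel_pid"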
I have a text file with a list of servers. I'm trying to read the servers one by one from the file, SSH into each server, and execute ls to see its directory contents. My loop runs only once with the SSH command, whereas with scp it runs for every server in the text file and then exits. I want the loop to run to the end of the text file for SSH as well. Following is my bash script; how can I make it run for all the servers in the text file when using SSH?
#!/bin/bash
while read line
do
name=$line
ssh abc_def@$line "hostname; ls;"
# scp /home/zahaib/nodes/fpl_* abc_def@$line:/home/abc_def/
done < $1
I run the script as $ ./script.sh hostnames.txt
The problem with this code is that ssh starts reading data from stdin, which you intended for read line. You can tell ssh to read from something else instead, like /dev/null, to avoid eating all the other hostnames.
#!/bin/bash
while read line
do
ssh abc_def@"$line" "hostname; ls;" < /dev/null
done < "$1"
A little more direct is to use the -n flag, which tells ssh not to read from standard input.
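For example, the same ssh call with -n (nothing else in the loop changes):
ssh -n abc_def@"$line" "hostname; ls;"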
Change your loop to a for loop:
for server in $(cat hostnames.txt); do
# do your stuff here
done
It's not parallel ssh but it works.
I open-sourced a command line tool called Overcast to make this sort of thing easier.
First you import your servers:
overcast import server.01 --ip=1.1.1.1 --ssh-key=/path/to/key
overcast import server.02 --ip=1.1.1.2 --ssh-key=/path/to/key
Once that's done you can run commands across them using wildcards, like so:
overcast run server.* hostname "ls -Al" ./scriptfile
overcast push server.* /home/zahaib/nodes/fpl_* /home/abc_def/