Run Command on Multiple Macs from Server - macos

I am trying to force shutdown multiple Mac computers every night; they are all connected to a server. I am unsure whether the best way to do this is to run a sudo shutdown command through a for loop over their IP addresses, to ssh into each machine, or to use some other method. Any advice would be appreciated!

I don't know of a better method than ssh.
Generate an ssh key and install it in the root account of each of those Macs, in the file /var/root/.ssh/authorized_keys2 (or authorized_keys) on each machine.
Ensure each of your Macs has the line "PermitRootLogin yes" uncommented in the file /etc/ssh/sshd_config; if not, change it and restart sshd.
And finally use ssh to run the shutdown command.
Here is the command line in a bash shell:
for host in host01 host02 host03; do ssh root@$host "shutdown -h now"; done
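If you end up scheduling this from the server, a slightly more defensive version of that loop might look like the sketch below; the hosts.txt file, the timeout value, and the BatchMode option are my own assumptions, not part of the answer above.
#!/bin/bash
# nightly_shutdown.sh - shut down every Mac listed in hosts.txt (one hostname or IP per line).
# Assumes root's ssh key is already installed on each target, as described above.
while read -r host; do
    # BatchMode fails fast instead of hanging on a password prompt if key auth is broken
    if ssh -o BatchMode=yes -o ConnectTimeout=5 "root@$host" "shutdown -h now"; then
        echo "$host: shutdown issued"
    else
        echo "$host: could not shut down" >&2
    fi
done < hosts.txt
Note that the connection may be dropped abruptly as the remote machine powers off, so the exit status is only a rough success indicator.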

Related

Bash script to make ssh easier

I'm managing hundreds of network devices, and when I want to ssh to a device I have to type "ssh -l $user $ip/dnsname" in the terminal on my jump server.
My idea is to write a bash script and run it as a service. When I want to ssh to any device, I just type the IP address or DNS name of the device in the terminal and hit enter, and it executes the ssh command automatically.
But I'm new to bash, could you guys please give me some instructions?
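Rather than a service, a small executable wrapper on the jump server's PATH is usually enough. Here is a minimal sketch; the script name sshto and the netadmin login user are made up for illustration:
#!/bin/bash
# sshto - run "sshto <ip-or-dnsname>" to open an ssh session as a fixed login user.
# Install it somewhere on $PATH (e.g. /usr/local/bin/sshto) and mark it executable.
user="netadmin"   # assumed login account; change to your own
if [ -z "$1" ]; then
    echo "usage: $(basename "$0") <ip-or-dnsname>" >&2
    exit 1
fi
exec ssh -l "$user" "$1"
After chmod +x, typing sshto core-switch-01 (or sshto 10.0.0.5) expands to the full ssh command for you.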

How to return to the script once sudo is done

I am on a server and running a script file that has the following code:
ssh username@servername
sudo su - root
cd /abc/xyz
mkdir asdfg
I am able to ssh, but then the next command is not working: the script is not sudo-ing. Any idea?
Edit: I was able to create a mech id and then do what I needed, though I am still looking for the answer to the question above :|
First of all, your script will get "stuck" on the first line because ssh goes into interactive mode: it will ask the user for a password (unless an ssh key is being used), and once it is logged into the remote server it will wait for commands on standard input.
Secondly, the lines following the ssh command are executed only after that first process has exited. This is why your script is not "sudoing": it is waiting for the ssh session to end.
So if your point is to run a command on a remote server, put the command as an argument on the same line as the ssh connection. In your case:
ssh user@server sudo su - root
But this will not satisfy you on its own. I suggest you put what you want to execute on the remote server into a script and then execute that script:
ssh user@server scriptName
The sudo part here is tricky, because the script can again get stuck in interactive mode waiting for a password to be entered, so I suggest you rethink the script with that in mind.
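As a rough sketch of the inline approach, using the names from the question; forcing a TTY with -t is my own addition so that sudo can still prompt for a password:
# Run the elevated steps as a single remote command.
# -t allocates a pseudo-terminal, so sudo can ask for a password interactively if it needs to.
ssh -t username@servername 'sudo mkdir -p /abc/xyz/asdfg'
If passwordless sudo is configured for that account, the -t flag can be dropped.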
mb47!
You want to run the script on the remote computer, correct?
On the remote machine, create a file containing the commands you would like to execute.
Then, on the other machine, run ssh user@machine /path/to/script/you/created/earlier
I hope this helps!
ALinuxLover
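A minimal sketch of that remote-script approach; the script path and its contents are guesses based on the question above:
# On the remote machine, create e.g. /home/username/setup.sh containing:
#   #!/bin/bash
#   sudo mkdir -p /abc/xyz/asdfg
# and mark it executable: chmod +x /home/username/setup.sh
# Then, from the local machine, run it in one shot (-t lets sudo prompt if needed):
ssh -t username@servername /home/username/setup.sh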

Continuing shell script execution after SSHing into guest machine?

I have an Ubuntu guest box set up on my Windows host using Vagrant and VirtualBox. I'm trying to write a shell script that will...
vagrant up
vagrant ssh once vagrant up is complete
cd into a specific project directory in the guest machine once successfully SSHed into the guest machine
Right now my vagrant_shell_script.sh file contains the following:
vagrant up && vagrant ssh && echo 'cd vagrant/rails_tutorial/sample_app'
Everything works fine when I execute it in Git Bash, up to and including connecting to the guest machine via SSH; however, after it successfully connects, the script seems to stop and never executes the final cd command. I presume this is because it is no longer able to communicate directly with my host machine through that particular Bash instance (please correct me if I'm wrong).
Is there any way to have it navigate directly to the target directory once the SSH connection is successful?
Please forgive me if this is a dumb question; I'm relatively new to bash scripting.
This solved it. It's kind of hacky, but running
vagrant up && vagrant ssh -- -t 'cd /vagrant/rails_tutorial/sample_app; /bin/bash' gets you in. For some reason Vagrant keeps kicking you out if you don't launch the shell.
vagrant ssh -- lets you pass arguments through to the SSH client; this is Vagrant's own syntax. The next flag, -t, is an SSH flag that forces a terminal to be allocated, so the command you pass runs interactively before control is handed back to you. Put your command after the -t flag, but make sure to end it with <last command>; /bin/bash so that it launches a shell for you and you don't get kicked out.
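Put together, the script file from the question might end up as nothing more than this one-liner; only the project path comes from the question, the rest is the command from this answer:
#!/bin/bash
# vagrant_shell_script.sh
# Boot the box, then open an interactive shell already inside the project directory.
# The trailing /bin/bash keeps the session open instead of exiting right after the cd.
vagrant up && vagrant ssh -- -t 'cd /vagrant/rails_tutorial/sample_app; /bin/bash'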
You can also use a heredoc to run the commands after you ssh, by putting something similar to this in your script:
# Use a heredoc to send the commands over ssh
ssh_cmd="ssh user@host"   # however you normally invoke ssh
$ssh_cmd << 'END_DOC'
cd <path>
# ... your commands here ...
exit
END_DOC

calling an interactive bash script over ssh

I'm writing a "tool" - a couple of bash scripts - that automate the installation and configuration on each server in a cluster.
The "tool" runs from a primary server. It tars and distributes it's self (via SCP) to every other server and untars the copies via "batch" SSH.
During set-up the tool issues remote commands such as the following from the primary server: echo './run_audit.sh' | ssh host4 'bash -s'. The approach works in many cases, except when there's interactive behavior since standard input is already in use.
Is there a way to run remote bash scripts interactively over SSH?
As a starting point, consider the following case: echo 'read -p "enter name:" name; echo "your name is $name"' | ssh host4 'bash -s'
In the case above the prompt never happens; how do I work around that?
Thanks in advance.
Run the command directly, like so:
ssh -t host4 bash ./run_audit.sh
For an encore, modify the shell script so it reads options from the command line or a configuration file instead of from stdin (or in preference to stdin).
I second Dennis Williamson's suggestion to look into puppet/etc instead.
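Applied to the read example from the question, the -t approach above might look like this (just a sketch; host4 and the prompt text are taken from the question):
# Forcing a pseudo-terminal with -t keeps the session interactive, so the prompt appears.
ssh -t host4 'read -p "enter name:" name; echo "your name is $name"'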
Sounds like you might want to look into expect.
Do not pipe commands to ssh via stdin; instead, copy the shell script to the remote machine:
scp ./run_audit.sh host4:
and then:
ssh host4 ./run_audit.sh
For cluster deployments I'm using Fabric... it runs on top of the SSH protocol, no daemons needed. It's as easy as writing a fabfile.py:
from fabric.api import run

def host_type():
    run('uname -s')
and then:
$ fab -H localhost,linuxbox host_type
[localhost] run: uname -s
[localhost] out: Darwin
[linuxbox] run: uname -s
[linuxbox] out: Linux
Done.
Disconnecting from localhost... done.
Disconnecting from linuxbox... done.
Of course it can do more... including interactive commands, and it relies on the files in your ~/.ssh directory for SSH. More at fabfile.org. For sure you will forget bash for such tasks. ;-)

single line telnet commands using terminal

I need to pull off something along the lines of "telnet root@192.168.2.99: irinject BACK",
however this refuses to work. There is no password required.
What is the correct syntax to perform this task using the terminal on Ubuntu 11.10?
If you absolutely must do it this way, use echo or the like to pipe commands to the telnet session, and be ready to reinstall machines as they get hacked.
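A rough sketch of that (discouraged) echo-pipe approach; the one-second delays are guesses, and a real device may need different pacing:
# Discouraged: drive an unauthenticated telnet session from a pipe.
# The sleeps give the remote end time to present its prompt before the command is sent.
{ sleep 1; echo "irinject BACK"; sleep 1; } | telnet 192.168.2.99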
Strongly preferred is to use ssh with key access; you can even include the command that way.
ssh -i path/to/root-key root@host command
