How to configure containerd to use a registry mirror? - containerd

If a system (e.g., a Kubernetes node) is using containerd, how do I configure it to pull container images from a registry mirror instead of going directly to docker.io?

The answer seems to depend on version, but for 1.6+:
First, ensure that /etc/containerd/config.toml sets:
plugins."io.containerd.grpc.v1.cri".registry.config_path = "/etc/containerd/certs.d"
Second, create /etc/containerd/certs.d/docker.io/hosts.toml (and intervening directories as necessary) with content:
server = "https://registry-1.docker.io" # fallback, used after the hosts below are tried
host."https://my-local-mirror".capabilities = ["pull", "resolve"]
(You may need to restart containerd after modifying the first file: sudo systemctl restart containerd. Changes to the hosts.toml files under /etc/containerd/certs.d/ should be picked up without a restart.)
Note that the earlier version 1.4 (e.g., in amazon-eks-ami until a couple of months ago) used quite a different method to configure the mirror.
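For reference, the legacy approach looked roughly like the sketch below. This is a hypothetical example for 1.4.x-era configs (the mirror URL is a placeholder), and it should not be combined with the certs.d method above:
# append a legacy mirrors table to config.toml and restart containerd
sudo tee -a /etc/containerd/config.toml >/dev/null <<'EOF'
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
  endpoint = ["https://my-local-mirror", "https://registry-1.docker.io"]
EOF
sudo systemctl restart containerd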
If the 1.6+ changes above are being automated, such as in a launch template user data script, the commands could be as follows. Note the escaping of quotation marks, and which side of the pipe needs extra privileges.
sudo mkdir -p /etc/containerd/certs.d/docker.io
echo 'plugins."io.containerd.grpc.v1.cri".registry.config_path = "/etc/containerd/certs.d"' | sudo tee -a /etc/containerd/config.toml
printf 'server = "https://registry-1.docker.io"\nhost."http://my-local-mirror".capabilities = ["pull", "resolve"]\n' | sudo tee /etc/containerd/certs.d/docker.io/hosts.toml
sudo systemctl restart containerd
For more recent installations you may not need to modify config.toml at all (i.e. if the default config_path is already set appropriately). You also may not need sudo, depending on where these commands are run from (a launch template's user data on AWS EC2, for example, already runs as root).
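One way to sanity-check the result (assuming crictl is installed on the node) is to pull an image through the CRI plugin and watch the mirror's access logs, for example:
sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock pull docker.io/library/busybox:latest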

Related

Open Distro for Elasticsearch: reset default admin password

I'm new to Open Distro for Elasticsearch and am trying to run it on a Kubernetes cluster. After deploying the cluster, I need to change the password for the admin user.
I went through this post - default-password-reset
From that, I learned that to change the password I need to do the following steps:
exec into one of the master nodes
generate a hash for the new password using /usr/share/elasticsearch/plugins/opendistro_security/tools/hash.sh script
update /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/internal_users.yml with the new hash
run /usr/share/elasticsearch/plugins/opendistro_security/tools/securityadmin.sh with parameters
Questions:
Is there any way to set those (via env or elasticsearch.yml) during bootstrapping the cluster?
I had to recreate the internal_users.yml file with the updated password hashes and mount it at /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/internal_users.yml in the database pods.
So when the Elasticsearch nodes bootstrapped, they came up with the updated passwords for the default users (i.e. admin).
I used the Go bcrypt package to generate the password hashes.
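As an aside, if you'd rather not rely on the hash.sh script or a Go program, any tool that produces bcrypt hashes should work; for example, assuming htpasswd (from apache2-utils / httpd-tools) is available:
# print a bcrypt hash of the new password (strip the leading colon and newline)
htpasswd -bnBC 12 "" 'MyNewSecretPassword' | tr -d ':\n'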
docker exec -ti ELASTIC_MASTER bash
/usr/share/elasticsearch/plugins/opendistro_security/tools/hash.sh
# enter the new password when prompted
yum install nano
# replace the old hash for the admin user with the newly generated one
nano /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/internal_users.yml
# run securityadmin.sh so the change takes effect
sh /usr/share/elasticsearch/plugins/opendistro_security/tools/securityadmin.sh -cd /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/ -icl -nhnv -cacert config/root-ca.pem -cert config/admin.pem -key config/admin-key.pem
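On a Kubernetes cluster the same steps apply, except you exec into a master pod instead of a Docker container; something like the following, where the namespace and pod name are placeholders for your deployment:
kubectl exec -ti -n elasticsearch opendistro-es-master-0 -- bash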
You can also run the commands below to obtain the username and password values from your Kubernetes cluster:
kubectl get secret -n wazuh elastic-cred -o go-template='{{.data.username | base64decode}}'
kubectl get secret -n wazuh elastic-cred -o go-template='{{.data.password | base64decode}}'
Note: '-n wazuh' indicates the namespace; use whichever applies to you
Ref: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-elasticsearch.html

How to safely run arbitrary code on a remote machine?

I would like to receive a shell command from a user, and run it as a Linux user with no real privileges.
Today I'm doing this: sudo -u {username} 'sh' '-c' $'{user_command}'
Is this safe?
Manually escaping the ' sounds fragile. I would put the command into a file and execute that file as a script. This avoids command injection by design.
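A minimal sketch of that approach, assuming an unprivileged account named limiteduser (the account name and path are placeholders):
# write the command into a file verbatim, then run that file as the unprivileged user
printf '%s\n' "$user_command" > /tmp/user_command.sh
chmod 644 /tmp/user_command.sh
sudo -u limiteduser sh /tmp/user_command.sh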
Further note that even an unprivileged account will have read access to many files on the host system, like /etc/passwd or information from /proc. If they run ps, for example, they can see the command lines of other users' processes.
Therefore I would recommend to run the command in a container. Install docker and run:
# let's say you stored the command in "user.sh" ...
docker run -v "${PWD}:/scripts" -it image_name bash /scripts/user.sh
Another thing relevant for security is that people could try to (a) DoS the host machine or (b) DoS or otherwise attack other machines. For (a), make sure you put fairly strict resource constraints on the container (memory, CPU, number of processes, etc.). For (b), disallow network access for the container.
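A hedged sketch of such a locked-down run (the image name and the limits are only examples):
docker run --rm --network none --memory 256m --cpus 0.5 --pids-limit 64 -v "${PWD}:/scripts:ro" image_name bash /scripts/user.sh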

Is it secure to store EC2 User-Data shell scripts in a private S3 bucket?

I have an EC2 ASG on AWS and I'm interested in storing the shell script that's used to instantiate any given instance in an S3 bucket and having it downloaded and run upon instantiation, but it all feels a little rickety even though I'm using an IAM Instance Role, transferring via HTTPS, and encrypting the script itself while at rest in the S3 bucket using S3 server-side encryption (the KMS method was throwing an 'Unknown' error).
The Setup
Created an IAM Instance Role that gets assigned to any instance in my ASG upon instantiation, resulting in my AWS creds being baked into the instance as ENV vars
Uploaded and encrypted my Instance-Init.sh script to S3, resulting in a private endpoint like so: https://s3.amazonaws.com/super-secret-bucket/Instance-Init.sh
In The User-Data Field
I input the following into the User Data field when creating the Launch Configuration I want my ASG to use:
#!/bin/bash
apt-get update
apt-get -y install python-pip
apt-get -y install awscli
cd /home/ubuntu
aws s3 cp s3://super-secret-bucket/Instance-Init.sh . --region us-east-1
chmod +x Instance-Init.sh
. Instance-Init.sh
shred -u -z -n 27 Instance-Init.sh
The above does the following:
Updates package lists
Installs Python (required to run aws-cli)
Installs aws-cli
Changes to the /home/ubuntu user directory
Uses the aws-cli to download the Instance-Init.sh file from S3. Due to the IAM Role assigned to my instance, my AWS creds are automagically discovered by aws-cli. The IAM Role also grants my instance the permissions necessary to decrypt the file.
Makes it executable
Runs the script
Deletes the script after it's completed.
The Instance-Init.sh Script
The script itself will do stuff like setting env vars and docker run the containers that I need deployed on my instance. Kinda like so:
#!/bin/bash
export MONGO_USER='MyMongoUserName'
export MONGO_PASS='Top-Secret-Dont-Tell-Anyone'
docker login -u <username> -p <password> -e <email>
docker run -e MONGO_USER=${MONGO_USER} -e MONGO_PASS=${MONGO_PASS} --name MyContainerName quay.io/myQuayNameSpace/MyAppName:latest
Very Handy
This creates a very handy way to update User-Data scripts without the need to create a new Launch Config every time you need to make a minor change. And it does a great job of getting env vars out of your codebase and into a narrow, controllable space (the Instance-Init.sh script itself).
But it all feels a little insecure. The idea of putting my master DB creds into a file on S3 is unsettling to say the least.
The Questions
Is this a common practice or am I dreaming up a bad idea here?
Does the fact that the file is downloaded and stored (albeit briefly) on the fresh instance constitute a vulnerability at all?
Is there a better method for deleting the file in a more secure way?
Does it even matter whether the file is deleted after it's run? Considering the secrets are being transferred to env vars it almost seems redundant to delete the Instance-Init.sh file.
Is there something that I'm missing in my nascent days of ops?
Thanks for any help in advance.
What you are describing is almost exactly what we are using to instantiate Docker containers from our registry (we now use v2 self-hosted/private, s3-backed docker-registry instead of Quay) into production. FWIW, I had the same "this feels rickety" feeling that you describe when first treading this path, but after almost a year now of doing it -- and compared to the alternative of storing this sensitive configuration data in a repo or baked into the image -- I'm confident it's one of the better ways of handling this data. Now, that being said, we are currently looking at using Hashicorp's new Vault software for deploying configuration secrets to replace this "shared" encrypted secret shell script container (say that five times fast). We are thinking that Vault will be the equivalent of outsourcing crypto to the open source community (where it belongs), but for configuration storage.
In fewer words, we haven't run across many problems with a very similar situation we've been using for about a year, but we are now looking at using an external open source project (Hashicorp's Vault) to replace our homegrown method. Good luck!
An alternative to Vault is to use credstash, which leverages AWS KMS and DynamoDB to achieve a similar goal.
I actually use credstash to dynamically import sensitive configuration data at container startup via a simple entrypoint script - this way the sensitive data is not exposed via docker inspect or in docker logs etc.
Here's a sample entrypoint script (for a Python application) - the beauty here is you can still pass in credentials via environment variables for non-AWS/dev environments.
#!/bin/bash
set -e
# Activate virtual environment
. /app/venv/bin/activate
# Pull sensitive credentials from AWS credstash if CREDENTIAL_STORE is set with a little help from jq
# AWS_DEFAULT_REGION must also be set
# Note values are Base64 encoded in this example
if [[ -n $CREDENTIAL_STORE ]]; then
items=$(credstash -t $CREDENTIAL_STORE getall -f json | jq 'to_entries | .[]' -r)
keys=$(echo $items | jq .key -r)
for key in $keys
do
export $key=$(echo $items | jq 'select(.key=="'$key'") | .value' -r | base64 --decode)
done
fi
exec "$@"
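For completeness, a sketch of how the other side of this might look (the table name, key, and image are placeholders; the value is Base64-encoded because the entrypoint above decodes it):
# store a secret in the credstash table, Base64-encoded to match the entrypoint
credstash -t my-cred-table put MONGO_PASS "$(echo -n 'Top-Secret-Dont-Tell-Anyone' | base64)"
# run the container, telling the entrypoint which table to read
docker run -e CREDENTIAL_STORE=my-cred-table -e AWS_DEFAULT_REGION=us-east-1 my-app-image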

Setting up Redis on Webfaction

What are the steps required to set up Redis database on Webfaction shared hosting account?
Introduction
Because of the special environment restrictions of Webfaction servers, the installation instructions are not as straightforward as they otherwise would be. Nevertheless, at the end you will have a fully functioning Redis server that stays up even after a reboot. I personally installed Redis by the following procedure about half a year ago and it has been running flawlessly since. A little word of warning though: half a year is not a long time, especially because the server has not been under heavy use.
The instructions consist of five parts: Installation, Testing, Starting the Server, Managing the Server and Keeping the Server Running.
Installation
Login to your Webfaction shell
ssh foouser@foouser.webfactional.com
Download the latest Redis from the Redis download site.
> mkdir -p ~/src/
> cd ~/src/
> wget http://download.redis.io/releases/redis-2.6.16.tar.gz
> tar -xzf redis-2.6.16.tar.gz
> cd redis-2.6.16/
Before running make, check whether your server's Linux is 32- or 64-bit. The installation script does not handle 32-bit environments well, at least on Webfaction's CentOS 5 machines. The command to check is uname -m. If the Linux is 32-bit the result will be i686; if 64-bit, then x86_64. See this answer for details.
> uname -m
i686
If your server is 64-bit (x86_64), then simply run make.
> make
But if your server is 32-bit (i686), then you must do a little extra work. There is a command make 32bit, but it produces an error. Edit a line in the installation script to make make 32bit work.
> nano ~/src/redis-2.6.16/src/Makefile
Change line 214 from this
$(MAKE) CFLAGS="-m32" LDFLAGS="-m32"
to this
$(MAKE) CFLAGS="-m32 -march=i686" LDFLAGS="-m32 -march=i686"
and save. Then run the make with 32bit flag.
> cd ~/src/redis-2.6.16/ ## Note the dir, no trailing src/
> make 32bit
The executables were created in the directory ~/src/redis-2.6.16/src/. They include redis-cli, redis-server, redis-benchmark and redis-sentinel.
Testing (optional)
As the output of the installation suggests, it would be nice to ensure that everything works as expected by running tests.
Hint: To run 'make test' is a good idea ;)
Unfortunately the testing requires Tcl 8.6.0 to be installed, which is not there by default, at least on the machine web223. So you must install it first, from source. See the Tcl/Tk installation notes and compiling notes.
> cd ~/src/
> wget http://prdownloads.sourceforge.net/tcl/tcl8.6.0-src.tar.gz
> tar -xzf tcl8.6.0-src.tar.gz
> cd tcl8.6.0-src/unix/
> ./configure --prefix=$HOME
> make
> make test # Optional, see notes below
> make install
Testing Tcl with make test will take time and will also fail due to WebFaction's environment restrictions. I suggest you skip this.
Now that we have Tcl installed we can run the Redis tests. The tests will take a long time and also temporarily use quite a large amount of memory.
> cd ~/src/redis-2.6.16/
> make test
After the tests you are ready to continue.
Starting the Server
First, create a custom application via Webfaction Control Panel (Custom app (listening on port)). Name it for example fooredis. Note that you do not have to create a domain or website for the app if Redis is used only locally i.e. from the same host.
Second, make a note of the socket port number that was given to the app. Let the example be 23015.
Copy the previously compiled executables to the app's directory. You may choose to copy all of them or only the ones you need.
> cd ~/webapps/fooredis/
> cp ~/src/redis-2.6.16/src/redis-server .
> cp ~/src/redis-2.6.16/src/redis-cli .
Copy also the sample configuration file. You will soon modify that.
> cp ~/src/redis-2.6.16/redis.conf .
Now Redis is already runnable. There are a couple of problems though. First, the default Redis port 6379 might already be in use. Second, even if the port were free, you could start the server but it would stop running the moment you exit the shell. For the first, redis.conf must be edited, and for the second, you need a daemon, which is also solved by editing redis.conf.
Redis is able to run itself in daemon mode. For that you need to set up a place where the daemon stores its process id, the pidfile. Usually pidfiles are stored in /var/run/, but because of the environment restrictions you must select a place for it in your home directory. For a reason explained later in the part Managing the Server, a good choice is to put the pidfile under the same directory as the executables. You do not have to create the file yourself; Redis creates it for you automatically.
Now open the redis.conf for editing.
> cd ~/webapps/fooredis/
> nano redis.conf
Change the configurations in the following manner.
daemonize no -> daemonize yes
pidfile /var/run/redis.pid -> pidfile /home/foouser/webapps/fooredis/redis.pid
port 6379 -> port 23015
Now, finally, start the Redis server. Specify the conf file so Redis listens on the right port and runs as a daemon.
> cd ~/webapps/fooredis/
> ./redis-server redis.conf
>
See it running.
> cd ~/webapps/fooredis/
> ./redis-cli -p 23015
redis 127.0.0.1:23015> SET myfeeling Phew.
OK
redis 127.0.0.1:23015> GET myfeeling
"Phew."
redis 127.0.0.1:23015> (ctrl-d)
>
Stop the server if you want to.
> ps -u $USER -o pid,command | grep redis
718 grep redis
10735 ./redis-server redis.conf
> kill 10735
or
> cat redis.pid | xargs kill
Managing the Server
For ease of use, and as preparatory work for the next part, make a script that helps to open the client and to start, restart and stop the server. An easy solution is to write a makefile. When writing a makefile, remember to use tabs instead of spaces.
> cd ~/webapps/fooredis/
> nano Makefile
# Redis Makefile
client cli:
	./redis-cli -p 23015
start restart:
	./redis-server redis.conf
stop:
	cat redis.pid | xargs kill
The rules are quite self-explanatory. The special thing about the second rule is that while in daemon mode, calling ./redis-server does not create a new process if there is one running already.
The third rule has some quiet wisdom in it. If redis.pid were not stored under the fooredis directory but, for example, at /var/run/redis.pid, it would not be so easy to stop the server. This is especially true if you run multiple Redis instances concurrently.
To execute a rule:
> make start
Keeping the Server Running
You now have an instance of Redis running in daemon mode which allows you to quit the shell without stopping it. This is still not enough. What if the process crashes? What if the server machine is rebooted? To cover these you have to create two cronjobs.
> export EDITOR=nano
> crontab -e
Add the following two lines and save.
*/5 * * * * make -C ~/webapps/fooredis/ -f ~/webapps/fooredis/Makefile start
@reboot make -C ~/webapps/fooredis/ -f ~/webapps/fooredis/Makefile start
The first one checks every five minutes that fooredis is running. As said above, this does not start a new process if one is already running. The second one ensures that fooredis is started immediately after the server machine reboots, long before the first rule kicks in.
More sophisticated methods could be used for this, for example forever. See also this Webfaction Community thread for more about the topic.
Conclusion
Now you have it. Lots of things done, but maybe more will come. Things you may like to do in the future, which were not covered here, include the following.
Setting a password, preventing other users from flushing your databases (see redis.conf and the short example after this list)
Limiting the memory usage (See redis.conf)
Logging the usage and errors (See redis.conf)
Backing up the data once in a while.
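For example, the password, memory limit and log file come down to a few extra lines in redis.conf (the values below are only placeholders):
requirepass some-long-random-password
maxmemory 64mb
logfile /home/foouser/webapps/fooredis/redis.log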
Any ideas, comments or corrections?
To summarize Akseli's excellent answer:
assume your user is named "magic_r_user"
cd ~
wget "http://download.redis.io/releases/redis-3.0.0.tar.gz"
tar -xzf redis-3.0.0.tar.gz
mv redis-3.0.0 redis
cd redis
make
make test
create a custom app "listening on port" through the Webfaction management website
assume we named it magic_r_app
assume it was assigned port 18932
cp ~/redis/redis.conf ~/webapps/magic_r_app/
vi ~/webapps/magic_r_app/redis.conf
daemonize yes
pidfile /home/magic_r_user/webapps/magic_r_app/redis.pid
port 18932
test it
~/redis/src/redis-server ~/webapps/magic_r_app/redis.conf
~/redis/src/redis-cli -p 18932
ctrl-d
cat ~/webapps/magic_r_app/redis.pid | xargs kill
crontab -e
*/1 * * * * /home/magic_r_user/redis/src/redis-server /home/magic_r_user/webapps/magic_r_app/redis.conf &>> /home/magic_r_user/logs/user/cron.log
don't forget to set a password!
FYI, if you are installing Redis 2.8.8+ you may get an error, undefined reference to __sync_add_and_fetch_4, when compiling. See http://www.eschrade.com/page/undefined-reference-to-__sync_add_and_fetch_4/ for information.
I've pasted the relevant portion from that page below in case the page ever goes offline. Essentially you need to export the CFLAGS variable and restart the build process.
[root@devvm1 redis-2.6.7]# export CFLAGS=-march=i686
[root@devvm1 redis-2.6.7]# make distclean
[root@devvm1 redis-2.6.7]# make

How can I automate running commands remotely over SSH to multiple servers in parallel?

I've searched around a bit for similar questions, but most of them only cover running one command, or perhaps a few commands, with something like:
ssh user@host -t sudo su -
However, what if I essentially need to run a script on (let's say) 15 servers at once. Is this doable in bash? In a perfect world I need to avoid installing applications if at all possible to pull this off. For argument's sake, let's just say that I need to do the following across 10 hosts:
Deploy a new Tomcat container
Deploy an application in the container, and configure it
Configure an Apache vhost
Reload Apache
I have a script that does all of that, but it relies on me logging into all the servers, pulling a script down from a repo, and then running it. If this isn't doable in bash, what alternatives do you suggest? Do I need a bigger hammer, such as Perl (Python might be preferred, since I can guarantee Python is on all boxes in a RHEL environment thanks to yum/up2date)? If anyone can point me to any useful information it'd be greatly appreciated, especially if it's doable in bash. I'll settle for Perl or Python, but I just don't know those as well (working on that). Thanks!
You can run a local script as shown by che and Yang, and/or you can use a Here document:
ssh root@server /bin/sh <<\EOF
wget http://server/warfile # Could use NFS here
cp app.war /location
command 1
command 2
/etc/init.d/httpd restart
EOF
Often, I'll just use the original Tcl version of Expect. You only need to have that on the local machine. If I'm inside a program using Perl, I do this with Net::SSH::Expect. Other languages have similar "expect" tools.
The issue of how to run commands on many servers at once came up on a Perl mailing list the other day and I'll give the same recommendation I gave there, which is to use gsh:
http://outflux.net/unix/software/gsh
gsh is similar to the "for box in box1_name box2_name box3_name" solution already given, but I find gsh to be more convenient. You set up an /etc/ghosts file containing your servers in groups such as web, db, RHEL4, x86_64, or whatever (man ghosts), then you use that group when you call gsh.
[pdurbin@beamish ~]$ gsh web "cat /etc/redhat-release; uname -r"
www-2.foo.com: Red Hat Enterprise Linux AS release 4 (Nahant Update 7)
www-2.foo.com: 2.6.9-78.0.1.ELsmp
www-3.foo.com: Red Hat Enterprise Linux AS release 4 (Nahant Update 7)
www-3.foo.com: 2.6.9-78.0.1.ELsmp
www-4.foo.com: Red Hat Enterprise Linux Server release 5.2 (Tikanga)
www-4.foo.com: 2.6.18-92.1.13.el5
www-5.foo.com: Red Hat Enterprise Linux Server release 5.2 (Tikanga)
www-5.foo.com: 2.6.18-92.1.13.el5
[pdurbin@beamish ~]$
You can also combine or split ghost groups, using web+db or web-RHEL4, for example.
I'll also mention that while I have never used shmux, its website contains a list of software (including gsh) that lets you run commands on many servers at once. Capistrano has already been mentioned and (from what I understand) could be on that list as well.
Take a look at Expect (man expect)
I've accomplished similar tasks in the past using Expect.
You can pipe the local script to the remote server and execute it with one command:
ssh -t user@host 'sh' < path_to_script
This can be further automated by using public key authentication and wrapping with scripts to perform parallel execution.
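A rough sketch of that wrapper idea, assuming key-based authentication is already in place (the host names and script path are placeholders):
# run the local script on several hosts at once, then wait for all of them to finish
for host in web1 web2 web3; do
  ssh "user@$host" sh < ./deploy.sh &
done
wait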
You can try paramiko. It's a pure-python ssh client. You can program your ssh sessions. Nothing to install on remote machines.
See this great article on how to use it.
To give you the structure, without actual code.
Use scp to copy your install/setup script to the target box.
Use ssh to invoke your script on the remote box.
pssh may be interesting since, unlike most solutions mentioned here, the commands are run in parallel.
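A hypothetical pssh invocation (sometimes installed as parallel-ssh) might look like this, assuming a hosts.txt file with one user@host per line:
pssh -h hosts.txt -t 60 -i 'uptime'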
(For my own use, I wrote a simpler small script very similar to GavinCattell's; it is documented here, in French.)
Have you looked at things like Puppet or Cfengine? They can do what you want and probably much more.
For those that stumble across this question, I'll include an answer that uses Fabric, which solves exactly the problem described above: Running arbitrary commands on multiple hosts over ssh.
Once fabric is installed, you'd create a fabfile.py, and implement tasks that can be run on your remote hosts. For example, a task to Reload Apache might look like this:
from fabric.api import env, run
env.hosts = ['host1@example.com', 'host2@example.com']
def reload():
    """ Reload Apache """
    run("sudo /etc/init.d/apache2 reload")
Then, on your local machine, run fab reload and the sudo /etc/init.d/apache2 reload command would get run on all the hosts specified in env.hosts.
You can do it the same way you did before, just script it instead of doing it manually. The following code connects to the machine named 'loca' and runs two commands there. What you need to do is simply insert the commands you want to run there.
che@ovecka ~ $ ssh loca 'uname -a; echo something_else'
Linux loca 2.6.25.9 #1 (blahblahblah)
something_else
Then, to iterate through all the machines, do something like:
for box in box1_name box2_name box3_name
do
ssh $box 'commands_to_run_everywhere'
done
In order to make this ssh thing work without entering passwords all the time, you'll need to set up key authentication. You can read about it at IBM developerworks.
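In short, that usually amounts to something like the following (the user and host are placeholders):
# generate a key pair once, then copy the public key to each box
ssh-keygen -t ed25519
ssh-copy-id user@box1_name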
You can run the same command on several servers at once with a tool like cluster ssh. The link is to a discussion of cluster ssh on the Debian package of the day blog.
Well, for steps 1 and 2, isn't there a Tomcat manager web interface? You could script that with curl, or with zsh and the libwww plugin.
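For instance, something along these lines might work against the Tomcat manager's text interface (the URL, credentials, and context path are placeholders, and the exact manager endpoint varies by Tomcat version):
curl -u tomcatadmin:secret -T the_package.war 'http://server:8080/manager/text/deploy?path=/myapp&update=true'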
For SSH you're looking to:
1) not get prompted for a password (use keys)
2) pass the command(s) on SSH's commandline, this is similar to rsh in a trusted network.
Other posts have shown you what to do, and I'd probably use sh too, but I'd be tempted to use perl, like ssh tomcatuser@server perl -e 'do-everything-on-one-line;' or you could do this:
either scp the_package.tbz tomcatuser@server:the_place/.
ssh tomcatuser@server /bin/sh <<\EOF
define stuff like TOMCAT_WEBAPPS=/usr/local/share/tomcat/webapps
tar xj the_package.tbz or rsync rsync://repository/the_package_place
mv $TOMCAT_WEBAPPS/old_war $TOMCAT_WEBAPPS/old_war.old
mv $THE_PLACE/new_war $TOMCAT_WEBAPPS/new_war
touch $TOMCAT_WEBAPPS/new_war [you don't normally have to restart tomcat]
mv $THE_PLACE/vhost_file $APACHE_VHOST_DIR/vhost_file
$APACHECTL restart [might need to login as apache user to move that file and restart]
EOF
You want DSH or distributed shell, which is used in clusters a lot. Here is the link: dsh
You basically have node groups (a file with lists of nodes in them), and you specify which node group you wish to run commands on; then you use dsh much like you would use ssh to run commands on them.
dsh -a /path/to/some/command/or/script
It will run the command on all the machines at the same time and return the output prefixed with the hostname. The command or script has to be present on the system, so a shared NFS directory can be useful for these sorts of things.
This snippet creates, for every machine you have accessed (i.e. every host in known_hosts), a small ssh wrapper command named after that hostname.
by Quierati
http://pastebin.com/pddEQWq2
#Use in .bashrc
#Use "HashKnownHosts no" in ~/.ssh/config or /etc/ssh/ssh_config
# If known_hosts is hashed, delete known_hosts and reconnect to the hosts so the names are stored in plain text
[ ! -d ~/bin ] && mkdir ~/bin
for host in `cut -d, -f1 ~/.ssh/known_hosts|cut -f1 -d " "`;
do
[ ! -s ~/bin/$host ] && echo ssh $host '$*' > ~/bin/$host
done
[ -d ~/bin ] && chmod -R 700 ~/bin
export PATH=$PATH:~/bin
Example of execution:
$ for i in hostname{1..10}; do $i who; done
There is a tool called FLATT (FLexible Automation and Troubleshooting Tool) that allows you to execute scripts on multiple Unix/Linux hosts with the click of a button. It is a desktop GUI app that runs on Mac and Windows, but there is also a command-line Java client.
You can create batch jobs and reuse on multiple hosts.
Requires Java 1.6 or higher.
Although it's a complex topic, I can highly recommend Capistrano.
I'm not sure if this method will work for everything that you want, but you can try something like this:
$ cat your_script.sh | ssh your_host bash
Which will run the script (which resides locally) on the remote server.
I just read a blog post about using setsid, which requires no further installation or configuration beyond a mainstream kernel. Tested/verified under Ubuntu 14.04.
The author has a very clear explanation and sample code as well; here's the magic part for a quick glance:
#----------------------------------------------------------------------
# Create a temp script to echo the SSH password, used by SSH_ASKPASS
#----------------------------------------------------------------------
SSH_ASKPASS_SCRIPT=/tmp/ssh-askpass-script
cat > ${SSH_ASKPASS_SCRIPT} <<EOL
#!/bin/bash
echo "${PASS}"
EOL
chmod u+x ${SSH_ASKPASS_SCRIPT}
# Tell SSH to read in the output of the provided script as the password.
# We still have to use setsid to eliminate access to a terminal and thus avoid
# it ignoring this and asking for a password.
export SSH_ASKPASS=${SSH_ASKPASS_SCRIPT}
......
......
# Log in to the remote server and run the above command.
# The use of setsid is a part of the machinations to stop ssh
# prompting for a password.
setsid ssh ${SSH_OPTIONS} ${USER}@${SERVER} "ls -rlt"
The easiest way I found, without installing or configuring much software, is plain old tmux. Say you have 9 Linux servers. Pick a box as your main one. Start a tmux session:
tmux
Then create 9 split tmux panes by doing this 8 times:
ctrl-b + %
Now SSH into each box in each pane. You'll need to know some tmux shortcuts. To navigate, press:
ctrl+b <arrow-keys>
Once you're logged in to all your boxes, one per pane, turn on pane synchronization, which lets you type the same thing into each box:
ctrl+b :setw synchronize-panes on
Now whatever you type shows up in every pane. To turn it off, run the same command with off instead of on. To cycle through the preset pane layouts (resizing the panes), press ctrl+b <space-bar>.
This works a lot better for me since I need to see each terminal's output, as sometimes servers crash or hang for whatever reason when downloading or upgrading software. If there are any issues, you can just isolate and resolve them individually.
