running a server in tmux with ansible

Trying to set up a staging server for an API which I'm building using Django - so far I was cutting corners, starting the thing with python manage.py runserver. But now that the setup grew a bit more complex, I decided to build an ansible playbook. Everything worked fine until I got to launching gunicorn - because I want it to run inside a tmux session. The manual process doesn't seem to translate trivially to ansible. I've been manually creating a tmux session:
tmux new-session -A -s api
and then running gunicorn inside this new "environment" (subshell?)
The thing is (as is probably obvious to ansible veterans), when I get to running the first step, my playbook just hangs and never gets to the next step, which is where gunicorn is to be started. I suppose this is because I'm starting a new shell with tmux, and ansible is lost, not hearing back (because, my guess is, it's still waiting for a response on the original shell, which will never come). Is there a right way to execute the "tmux" step, letting ansible use it as a context/environment for the next step, or should I just be content with ansible doing the setup and do the tmux thing manually? I had a similar problem when dealing with the fact that gunicorn is inside a virtualenv, but the workaround there is to use the full path, which includes the virtualenv guts. Not sure if there's a similar workaround with tmux...
thanks y'all

tmux immediately attaches to the new session, and doesn't exit until you detach from the session or the last process in the session ends. Until tmux exits, the rest of your script hangs.
You can use the -d option to prevent attaching to the session, whether or not it needs to be created.
tmux new-session -Ad -s api
The rest of your script can now proceed.
tmux new-session -Ad -s api is a shortcut for
tmux has-session -t api || tmux new-session -d -s api
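For the playbook itself, a minimal sketch of what the shell/command task would run (the virtualenv path and WSGI module below are placeholders, not from the question):
# Create (or reuse) the detached session and start gunicorn inside it;
# /srv/api/venv and api.wsgi are assumptions for illustration only.
tmux new-session -Ad -s api '/srv/api/venv/bin/gunicorn --bind 0.0.0.0:8000 api.wsgi'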

Related

How to know what initial commands are being executed right after an SSH login?

I was provided a tool to do an SSH to a remote host. The remote host is a new docker container to be created. I was trying to understand if there are commands being executed right after the SSH login (i.e. probably using ssh -t <some commands>).
It seems like the .bash_history does not include those commands. In that case, what else can I do to figure out what commands are being executed right after my login? Thank you.
To find out the actual commands that are executed, you could add "set -v" or "set -x" to the shell initialization file(s) on the system you are ssh-ing to.
See man bash (the "INVOCATION" section) to find out which files will be executed, so that you can figure out which file to add the "set" command to.
You will probably want to do that temporarily ... because the output is verbose.
Another approach would be to configure sshd to set the logging level to DEBUG and see what commands are requested. However, note that sshd DEBUG logging is a user privacy violation.
If you are trying to do this kind of thing to find out what is happening on the first "boot" of a docker instance, try putting the (temporary) config changes into the docker image that you are starting.
The bash history only contains command lines that are submitted to the shell via a shell command prompt.
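As a concrete sketch of the first suggestion (which file applies depends on how the shell is invoked; see the INVOCATION section mentioned above, and remove the line again once you're done, since the output is verbose):
# Trace every command executed by login shells from this point in the init file on.
# ~/.bash_profile is an assumption; on some systems it is ~/.profile or ~/.bashrc instead.
echo 'set -x' >> ~/.bash_profile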

multiple commands for an alias in bash, when the first is ssh

This question has a good answer for how to put multiple command in an alias for bash.
But how would you do it in the case where you first need to ssh into a server, then do something like change a directory and then launch jupyter notebook?
I tried something like:
alias shortcut='ssh user@server -p 1234 -L 5678:localhost:91011; cd ~/somedir; jupyter notebook --ip=127.0.0.1'
Maybe it's because my ssh requires me to type in a password, but either way the last 2 commands aren't being executed.
There are some possible improvements for further convenience, if allowed by the system configuration.
If your needs include executing a series of commands on the remote host, and you need to repeat this often, it's reasonable to put the commands in their own shell script and place it on the remote host.
For example in this case the script could be just
#!/bin/sh
cd ~/somedir && jupyter notebook --ip=127.0.0.1
Save it in a file, add the execute bit, and you can start the session with ssh user@server -p 1234 -L 5678:localhost:91011 path/to/script.sh
This is touched on in this question, but my preferred way is the low-scored answer about putting the script on the remote host -- I'd like each resource to reside where it belongs.
There's also the question of what you want to do after starting the session. It seems the command is meant to start a server process that runs the Jupyter web service. If you just want to stay in the SSH session while monitoring the server, then the simple command should suffice. But if you want to keep the server in the background, log its output, and likely leave the SSH session for now, it's possible to run the server with nohup and redirect its output, by putting something like this in the script:
nohup jupyter notebook --ip="127.0.0.1" >> stdout.log 2>> stderr.log &
echo "$!" > jupyter-notebook.pid
The second command saves the PID in the file so it'll be easier to check or terminate it later without manually searching for the background process.
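Putting the pieces together, the whole remote script might look like the following sketch (the file name start-notebook.sh and the directory are placeholders):
#!/bin/sh
# start-notebook.sh: run on the remote host, e.g. via
#   ssh user@server -p 1234 -L 5678:localhost:91011 path/to/start-notebook.sh
cd ~/somedir || exit 1
nohup jupyter notebook --ip="127.0.0.1" >> stdout.log 2>> stderr.log &
echo "$!" > jupyter-notebook.pid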

Tmux restore creates an empty session

I'm using the tmux-continuum and tmux-resurrect plugins. If I kill tmux (restarting my machine for example) and then run the following commands:
$ tmux ls
failed to connect to server: No such file or directory
Then when I start tmux it automatically restores my saved sessions plus an unnamed session (usually 0)
$ tmux
$ tmux ls
0: 1 windows (created...)
saved_session_1: 1 windows (created...)
saved_session_2: 1 windows (created...)
...
My current workflow goes like this:
Start tmux
Detach from tmux
Attach to the unnamed session
Kill unnamed session
Attach to one of my saved sessions
I don't want to have to repeat this every time I restart tmux. How can I restore my saved tmux sessions without creating the unnamed session?
If I understand your question correctly, this is a common issue with tmux-resurrect. The solution given here (currently the last comment in the Github discussion) has worked for me.
Add the following to your .tmux.conf and then do source ~/.tmux.conf (if that is the path of your conf file):
set -g @resurrect-hook-pre-restore-pane-processes 'tmux switch-client -n && tmux kill-session -t=0'
This is a hook for tmux-resurrect which tells it to kill session 0 before restoring the panes.
Note: since the name of the session (-t=0) is hardcoded, this will only work for that session, hence only if you restore when the tmux server is started for the first time; if you restore from a session other than 0, nothing will happen (which is nice, as it avoids killing sessions accidentally).
Run the following command (preferably make an alias for it):
$ tmux new-session -A -s [session-name]
Flag meaning:
-s refers to the session name.
-A makes the command act like attach-session instead of new-session in case session-name already exists.
Refer to the man page for official documentation:
$ man tmux
new-session [-AdDEP] [-c start-directory] [-F format] [-n window-name] [-s session-name] [-t target-session] [-x width] [-y height] [shell-command]
(alias: new)
Create a new session with name session-name.
The new session is attached to the current terminal unless -d is given. window-name and
shell-command are the name of and shell command to execute in the initial window. If -d
is used, -x and -y specify the size of the initial window (80 by 24 if not given).
If run from a terminal, any termios(4) special characters are saved and used for new windows in the new session.
The -A flag makes new-session behave like attach-session if session-name already exists;
in this case, -D behaves like -d to attach-session.
If -t is given, the new session is grouped with target-session. This means they share the same set of windows - all windows from target-session are linked to the new session, any new windows are linked to both sessions and any windows closed removed from both sessions. The current and previous window and any session options remain independent and either session may be killed without affecting the other. -n and shell-command are invalid if -t is used.
The -P option prints information about the new session after it has been created. By
default, it uses the format ‘#{session_name}:’ but a different format may be specified
with -F.
If -E is used, the update-environment option will not be applied.
You could simply launch tmux with tmux a (i.e. attach to "existing" sessions). This would trigger tmux-continuum to first restore all sessions, and then you'll get attached to one of them.
Works fine for me. I'm running tmux 3.0a with tmux-resurrect and tmux-continuum plugins.
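On a related note, and as an assumption drawn from the tmux-continuum README rather than from the answers above, the plugin can also restore automatically when the tmux server starts, which removes the manual restore step entirely:
# Enable automatic restore on tmux server start (option name from the tmux-continuum README):
echo "set -g @continuum-restore 'on'" >> ~/.tmux.conf
tmux source-file ~/.tmux.conf   # or restart tmux; restore applies from the next server start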

Run ssh and immediately execute command [duplicate]

This question already has answers here:
Can I ssh somewhere, run some commands, and then leave myself a prompt?
I'm trying to find UNIX or bash command to run a command after connecting to an ssh server. For example:
ssh name@ip "tmux list-sessions"
The above code works, it lists the sessions, but it then immediately disconnects. Putting it in the sshrc on the server side works, but I need to be able to type it in on the client side. I want to be able to run a command so that it logs in, opens up the window, and then runs the command I've set. I've tried
[command] | ssh name@ip
ssh name@ip [command]
ssh name@ip "[command]"
ssh -t name@ip [command]
ssh -t 'command; bash -l'
will execute the command and then start up a login shell when it completes. For example:
ssh -t user@domain.example 'cd /some/path; bash -l'
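Applied to the example from the question, that would look something like:
# Run the command, then stay on the server in a normal login shell:
ssh -t name@ip 'tmux list-sessions; bash -l'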
This isn't quite what you're looking for, but I've found it useful in similar circumstances.
I recently added the following to my $HOME/.bashrc (something similar should be possible with shells other than bash):
if [ -f "$HOME/.add-screen-to-history" ] ; then
    history -s 'screen -dr'
fi
I keep a screen session running on one particular machine, and I've had problems with ssh connections to that machine being dropped, requiring me to re-run screen -dr every time I reconnect.
With that addition, and after creating that (empty) file in my home directory, I automatically have the screen -dr command in my history when my shell starts. After reconnecting, I can just type Control-P Enter and I'm back in my screen session -- or I can ignore it. It's flexible, but not quite automatic, and in your case it's easier than typing tmux list-sessions.
You might want to make the history -s command unconditional.
This does require updating your $HOME/.bashrc on each of the target systems, which might or might not make it unsuitable for your purposes.
You can use the LocalCommand command-line option if the PermitLocalCommand option is enabled:
ssh username@hostname -o LocalCommand="tmux list-sessions"
For more details about the available options, see the ssh_config man page.
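Note that, per the ssh_config man page, LocalCommand runs on the local machine after the connection is established, and PermitLocalCommand is disabled by default, so it may need to be enabled on the same command line:
# Enable local-command execution for this invocation only:
ssh -o PermitLocalCommand=yes -o LocalCommand="tmux list-sessions" username@hostname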

How can I automate running commands remotely over SSH to multiple servers in parallel?

I've searched around a bit for similar questions, but haven't found much beyond running one command, or perhaps a few commands, with something such as:
ssh user@host -t sudo su -
However, what if I essentially need to run a script on (let's say) 15 servers at once? Is this doable in bash? In a perfect world I need to avoid installing applications if at all possible to pull this off. For argument's sake, let's just say that I need to do the following across 10 hosts:
Deploy a new Tomcat container
Deploy an application in the container, and configure it
Configure an Apache vhost
Reload Apache
I have a script that does all of that, but it relies on me logging into all the servers, pulling a script down from a repo, and then running it. If this isn't doable in bash, what alternatives do you suggest? Do I need a bigger hammer, such as Perl (Python might be preferred since I can guarantee Python is on all boxes in a RHEL environment thanks to yum/up2date)? If anyone can point to me to any useful information it'd be greatly appreciated, especially if it's doable in bash. I'll settle for Perl or Python, but I just don't know those as well (working on that). Thanks!
You can run a local script as shown by che and Yang, and/or you can use a Here document:
ssh root@server /bin/sh <<\EOF
wget http://server/warfile # Could use NFS here
cp app.war /location
command 1
command 2
/etc/init.d/httpd restart
EOF
Often, I'll just use the original Tcl version of Expect. You only need to have that on the local machine. If I'm inside a program using Perl, I do this with Net::SSH::Expect. Other languages have similar "expect" tools.
The issue of how to run commands on many servers at once came up on a Perl mailing list the other day and I'll give the same recommendation I gave there, which is to use gsh:
http://outflux.net/unix/software/gsh
gsh is similar to the "for box in box1_name box2_name box3_name" solution already given but I find gsh to be more convenient. You set up a /etc/ghosts file containing your servers in groups such as web, db, RHEL4, x86_64, or whatever (man ghosts) then you use that group when you call gsh.
[pdurbin@beamish ~]$ gsh web "cat /etc/redhat-release; uname -r"
www-2.foo.com: Red Hat Enterprise Linux AS release 4 (Nahant Update 7)
www-2.foo.com: 2.6.9-78.0.1.ELsmp
www-3.foo.com: Red Hat Enterprise Linux AS release 4 (Nahant Update 7)
www-3.foo.com: 2.6.9-78.0.1.ELsmp
www-4.foo.com: Red Hat Enterprise Linux Server release 5.2 (Tikanga)
www-4.foo.com: 2.6.18-92.1.13.el5
www-5.foo.com: Red Hat Enterprise Linux Server release 5.2 (Tikanga)
www-5.foo.com: 2.6.18-92.1.13.el5
[pdurbin@beamish ~]$
You can also combine or split ghost groups, using web+db or web-RHEL4, for example.
I'll also mention that while I have never used shmux, its website contains a list of software (including gsh) that lets you run commands on many servers at once. Capistrano has already been mentioned and (from what I understand) could be on that list as well.
Take a look at Expect (man expect)
I've accomplished similar tasks in the past using Expect.
You can pipe the local script to the remote server and execute it with one command:
ssh -t user@host 'sh' < path_to_script
This can be further automated by using public key authentication and wrapping with scripts to perform parallel execution.
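A minimal sketch of such a wrapper, assuming key authentication is already set up (the host names are placeholders):
#!/bin/sh
# Run the same local script on several hosts concurrently, then wait for all of them.
for host in host1 host2 host3; do
    ssh "user@$host" 'sh' < path_to_script &
done
wait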
You can try paramiko. It's a pure-python ssh client. You can program your ssh sessions. Nothing to install on remote machines.
See this great article on how to use it.
To give you the structure, without actual code.
Use scp to copy your install/setup script to the target box.
Use ssh to invoke your script on the remote box.
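In shell terms that structure is simply (file and host names are placeholders):
scp setup.sh user@targetbox:/tmp/setup.sh   # copy the install/setup script over
ssh user@targetbox 'sh /tmp/setup.sh'       # invoke it on the remote box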
pssh may be interesting since, unlike most solutions mentioned here, the commands are run in parallel.
(For my own use, I wrote a smaller, simpler script very similar to GavinCattell's; it is documented here - in French.)
Have you looked at things like Puppet or Cfengine? They can do what you want and probably much more.
For those that stumble across this question, I'll include an answer that uses Fabric, which solves exactly the problem described above: Running arbitrary commands on multiple hosts over ssh.
Once fabric is installed, you'd create a fabfile.py, and implement tasks that can be run on your remote hosts. For example, a task to Reload Apache might look like this:
from fabric.api import env, run

env.hosts = ['host1@example.com', 'host2@example.com']

def reload():
    """ Reload Apache """
    run("sudo /etc/init.d/apache2 reload")
Then, on your local machine, run fab reload and the sudo /etc/init.d/apache2 reload command would get run on all the hosts specified in env.hosts.
You can do it the same way you did before, just script it instead of doing it manually. The following code remotes to machine named 'loca' and runs two commands there. What you need to do is simply insert commands you want to run there.
che@ovecka ~ $ ssh loca 'uname -a; echo something_else'
Linux loca 2.6.25.9 #1 (blahblahblah)
something_else
Then, to iterate through all the machines, do something like:
for box in box1_name box2_name box3_name
do
    ssh "$box" 'commands_to_run_everywhere'
done
In order to make this ssh thing work without entering passwords all the time, you'll need to set up key authentication. You can read about it at IBM developerworks.
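Setting that up usually amounts to the following (assuming OpenSSH's ssh-copy-id is available on your machine):
ssh-keygen -t rsa                          # generate a key pair once, locally
for box in box1_name box2_name box3_name; do
    ssh-copy-id "$box"                     # push the public key to each host
done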
You can run the same command on several servers at once with a tool like cluster ssh. The link is to a discussion of cluster ssh on the Debian package of the day blog.
Well, for steps 1 and 2, isn't there a Tomcat manager web interface? You could script that with curl, or zsh with the libwww plugin.
For SSH you're looking to:
1) not get prompted for a password (use keys)
2) pass the command(s) on SSH's command line; this is similar to rsh in a trusted network.
Other posts have shown you what to do, and I'd probably use sh too, but I'd be tempted to use perl, like ssh tomcatuser@server perl -e 'do-everything-on-one-line;' - or you could do this:
either scp the_package.tbz tomcatuser@server:the_place/.
ssh tomcatuser@server /bin/sh <<\EOF
define stuff like TOMCAT_WEBAPPS=/usr/local/share/tomcat/webapps
tar xj the_package.tbz or rsync rsync://repository/the_package_place
mv $TOMCAT_WEBAPPS/old_war $TOMCAT_WEBAPPS/old_war.old
mv $THE_PLACE/new_war $TOMCAT_WEBAPPS/new_war
touch $TOMCAT_WEBAPPS/new_war [you don't normally have to restart tomcat]
mv $THE_PLACE/vhost_file $APACHE_VHOST_DIR/vhost_file
$APACHECTL restart [might need to login as apache user to move that file and restart]
EOF
You want DSH or distributed shell, which is used in clusters a lot. Here is the link: dsh
You basically have node groups (a file with lists of nodes in them), and you specify which node group you wish to run commands on; then you use dsh much like you would use ssh to run commands on them.
dsh -a /path/to/some/command/or/script
It will run the command on all the machines at the same time and return the output prefixed with the hostname. The command or script has to be present on the system, so a shared NFS directory can be useful for these sorts of things.
This creates an ssh wrapper command, named after each host, for all machines you have accessed (by Quierati):
http://pastebin.com/pddEQWq2
#Use in .bashrc
#Use "HashKnownHosts no" in ~/.ssh/config or /etc/ssh/ssh_config
# If known_hosts is hashed ("encrypted"), delete known_hosts so it is rebuilt in plain text
[ ! -d ~/bin ] && mkdir ~/bin
for host in `cut -d, -f1 ~/.ssh/known_hosts|cut -f1 -d " "`;
do
[ ! -s ~/bin/$host ] && echo ssh $host '$*' > ~/bin/$host
done
[ -d ~/bin ] && chmod -R 700 ~/bin
export PATH=$PATH:~/bin
Example execution:
$ for i in hostname{1..10}; do $i who; done
There is a tool called FLATT (FLexible Automation and Troubleshooting Tool) that allows you to execute scripts on multiple Unix/Linux hosts with a click of a button. It is a desktop GUI app that runs on Mac and Windows but there is also a command line java client.
You can create batch jobs and reuse on multiple hosts.
Requires Java 1.6 or higher.
Although it's a complex topic, I can highly recommend Capistrano.
I'm not sure if this method will work for everything that you want, but you can try something like this:
$ cat your_script.sh | ssh your_host bash
This will run the script (which resides locally) on the remote server.
I just read a new blog post about using setsid, without any further installation/configuration besides the mainstream kernel. Tested/verified under Ubuntu 14.04.
The author has a very clear explanation and sample code as well; here's the magic part for a quick glance:
#----------------------------------------------------------------------
# Create a temp script to echo the SSH password, used by SSH_ASKPASS
#----------------------------------------------------------------------
SSH_ASKPASS_SCRIPT=/tmp/ssh-askpass-script
cat > ${SSH_ASKPASS_SCRIPT} <<EOL
#!/bin/bash
echo "${PASS}"
EOL
chmod u+x ${SSH_ASKPASS_SCRIPT}
# Tell SSH to read in the output of the provided script as the password.
# We still have to use setsid to eliminate access to a terminal and thus avoid
# it ignoring this and asking for a password.
export SSH_ASKPASS=${SSH_ASKPASS_SCRIPT}
......
......
# Log in to the remote server and run the above command.
# The use of setsid is a part of the machinations to stop ssh
# prompting for a password.
setsid ssh ${SSH_OPTIONS} ${USER}@${SERVER} "ls -rlt"
The easiest way I found, without installing or configuring much software, is plain old tmux. Say you have 9 Linux servers. Pick one box as your main. Start a tmux session:
tmux
Then create 9 split tmux panes by doing this 8 times:
ctrl-b + %
Now SSH into each box in each pane. You'll need to know some tmux shortcuts. To navigate, press:
ctrl+b <arrow-keys>
Once you're logged in to all your boxes in each pane, turn on pane synchronization, which lets you type the same thing into each box:
ctrl+b :setw synchronize-panes on
Now when you press any keys, they will show up in every pane. To turn it off, just change on to off. To cycle through pane layouts (resizing the panes), press ctrl+b <space-bar>.
This works a lot better for me since I need to see each terminal's output, as sometimes servers crash or hang for whatever reason when downloading or upgrading software. Any issues, you can just isolate and resolve individually.
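The same layout can also be scripted instead of built by hand; a rough sketch, with the session name and host list as placeholders:
#!/bin/sh
# One pane per host, tiled layout, with synchronized input across all panes.
tmux new-session -d -s multi 'ssh user@host1'
for host in host2 host3 host4 host5 host6 host7 host8 host9; do
    tmux split-window -t multi "ssh user@$host"
    tmux select-layout -t multi tiled
done
tmux set-window-option -t multi synchronize-panes on
tmux attach -t multi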
