Ansible to answer the prompts with pre-answers - ansible

I have to uninstall a piece of software, and while uninstalling it, it prompts for a yes or no answer, as in the sample below.
# /opt/altiris/notification/nsagent/bin/aex-uninstall
This will remove the Symantec Management Agent for UNIX, Linux and Mac software from your system.
Are you sure you want to continue [Yy/Nn]?
Now, as I have multiple Linux systems to do this on, I'm looking at Ansible to do the job for me. So I have just tested it the ad-hoc way as follows, and it works with the shell module...
# ansible all -m shell -a 'echo "y" | /opt/altiris/notification/nsagent/bin/aex-uninstall'
dev-karn | SUCCESS | rc=0 >>
This will remove the Symantec Management Agent for UNIX, Linux and Mac software from your system.
Are you sure you want to continue [Yy/Nn]?
Uninstalling dependant solutions...
Uninstalling dependant solutions finished.
Removing Symantec Management Agent for UNIX, Linux and Mac package from the system...
Removing wrapper scripts and links for applications...
Sending uninstall events to NS
Stopping Symantec Management Agent for UNIX, Linux and Mac: [ OK ]
Remove non packaged files.
Symantec Management Agent for UNIX, Linux and Mac Configuration utility.
Removing aex-* links in /usr/bin
Removing RC init links and scripts
Cleaning up after final package removal.
Removal finished.
Is there a better way to do it?
Any ideas will be much appreciated.

You should use the Ansible expect module.
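A minimal sketch of what that could look like as an ad-hoc call (assuming the pexpect Python library is present on the targets, and that your Ansible version accepts a JSON string for -a; the responses keys are regular expressions matched against the prompt text):
# Sketch only - adjust the prompt pattern to match your actual output
ansible all -m expect -a '{"command": "/opt/altiris/notification/nsagent/bin/aex-uninstall", "responses": {"Are you sure you want to continue": "y"}}'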

The yes utility of the Linux shell repeatedly outputs y, which satisfies every prompt a program presents that expects a pre-supplied answer. Since the program I'm running only needs a yes to proceed, I used it as below with the Ansible shell module...
$ ansible all -m shell -a '/bin/yes | /opt/altiris/notification/nsagent/bin/aex-uninstall'
So, even echo "y" is not needed.
Thanks - Karn

Related

How to make ssh receive the password from stdin ON WINDOWS

Having read this question and my answer there, I would like to do a similar thing on Windows.
My Linux solution is this:
#!/bin/bash
[[ $1 =~ password: ]] && cat || SSH_ASKPASS="$0" DISPLAY=nothing:0 exec setsid "$@"
How can I do a similar thing on Windows, something I can use like this from a Windows Command Prompt or batch file:
C:> echo password | pass ssh user@host ...
Points to note:
ssh here was installed using the free edition of cwRsync. It uses Cygwin DLLs but does not require a Cygwin install.
the solution should not require further dependencies: it must work from a typical Windows Command Prompt or batch file.
I'm looking for an answer to the above, even if the answer is "it can't be done". I know I can use keys (and their relative merits), or other tools such as Python/Paramiko, PuTTY plink, and so on. I know I can do it in a Cygwin environment. I don't want to do those things... I need to do it from a plain old Windows command prompt or batch file without incurring additional dependencies because, if this is possible, it will reduce existing dependencies.
Here is what I have so far:
@echo off
echo.%1 | findstr /C:"password">nul
if errorlevel 1 (
set SSH_ASKPASS="%0"
set DISPLAY="nothing:0"
%*
) else (
findstr "^"
)
The idea is to save that as, say pass.bat and use it like this:
C:> echo password | pass.bat ssh user@host ...
What happens is that the SSH session is launched but ssh still interactively prompts for the password. I think that, in theory, the script is ok because the below works:
C:> echo mypassword | pass.bat pass.bat "password"
mypassword
As far as I understand, the underlying Cygwin DLLs should see the Windows environment so the setting of SSH_ASKPASS should propagate into ssh.
I think the problem is that ssh is connected to the terminal. According to man ssh, If ssh needs a passphrase, it will read the passphrase from the current terminal if it was run from a terminal. This is why I use setsid in the Linux example. I think a way to detach the process from the terminal in Windows is required but I am not sure there is one (I did try start /B).
So I'm stuck - I don't know enough about scripting windows to know what should work. Any solution that uses native windows techniques (i.e. batch or perhaps powershell) and does not require anything not available on a vanilla Windows would be welcome.
The solution will be used by a cross-platform application that I am working on that needs to use SSH to interact with an external service. The current prototype version is Python and is already wired up to launch ssh as a subprocess. The Linux version already uses the above method, so I would like a Windows solution that does not require reworking of the application.
SSH will never read the password from stdin. I would give the sshpass utility a shot; it is quite standard for this task. The other common solution is using an expect script (which should work the same way on Cygwin as on Linux).
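For reference, a typical sshpass invocation looks something like this (a sketch only; whether an sshpass binary works alongside the cwRsync-provided ssh on Windows is an open question):
# With no password option, sshpass reads the password from stdin, matching the
# "echo password | ..." pattern from the question; -p 'secret' puts it on the command line instead.
echo mypassword | sshpass ssh user@host "uname -a"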

How to clear terminal after SSH login?

I am trying to connect to a server via ssh. Once connected, terminal should be cleared.
Due to generated keys, I can connect to the server via ssh usr@svr without being prompted for a password. This works.
In order to get rid of
The programs included with the Debian GNU/Linux system are free
software; the exact distribution terms for each program are described
in the individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
I would usually just type clear. However, I would prefer not to type this every time but automate the procedure instead.
ssh usr@svr "clear" --> "TERM environment variable not set.". I googled several solutions for unset environment variables, but without success.
So instead, I tried ssh -t usr@svr "clear"; this successfully clears the terminal, but also closes the connection right away ("Connection to IP closed."). The computer connects to the server, clears the screen, and closes the connection.
Next attempt was to create a bash script on the server to be run after connecting to it.
#!/bin/bash
clear
## cl.sh, chmod +x
ssh usr@svr ./cl.sh --> "TERM environment variable not set.".
Another attempt was to create a bash script connecting to the server and clearing the terminal via ENDSSH.
#!/bin/bash
ssh usr@svr <<'ENDSSH'
clear
ENDSSH
## sc.sh, chmod +x
Running this results in:
> ./sc.sh
Pseudo-terminal will not be allocated because stdin is not a terminal.
Linux raspberrypi 3.18.7-v7+ #755 SMP PREEMPT Thu Feb 12 17:20:48 GMT 2015 armv7l
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
TERM environment variable not set.
I am a beginner at this, so please be patient if I have made a very obvious mistake. I tried to be as detailed as possible and researched this before posting, but could not find an answer to my question. For example, commands other than "clear" work (ssh usr@svr ls), but that does not help me.
I have found another easy solution
ssh -t usr@svr 'clear;bash'
The text your question refers to is part of the message-of-the-day (MOTD).
If you can become root on the server, you can just modify that message in /etc/motd. Note that depending on the server's distribution, this file will usually be generated somehow (overwriting any changes), e.g. on Debian it is generated from /etc/motd.tail at boot, so you might have to change that file instead.
See manpage motd(5).
To prevent that message from being printed you can create a file named .hushlogin in your home directory (on the server). SSH to the server and run the command touch ~/.hushlogin. If that file exists then the login shell will no longer print the motd (message of the day) which is what you are seeing.
All startup messages are defined in motd file (/etc/motd).
However, if you'd like to have your console cleared to increase privacy, add the following code:
[ -x /usr/bin/clear_console ] && /usr/bin/clear_console -q
to either your .bash_profile or your .bash_logout (run on logout), or use the clear command.
Clearing the screen on logout is the default behaviour on the Debian Linux distribution.

Bash script to ssh into computers run a command and direct output to append to a .txt on server

How would I go about creating a Bash script that will ssh into a list of computers, run a command, and have the output of that command appended to a file on the server?
Posting this as an answer since I do not have the required reputation for making comments...
You should use the following shell syntax :
for ip in $(<filename.txt); do ssh "$ip" 'yourcommand >> yourfile'; done;
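If the file should instead live on the machine running the loop, a variant like this (a sketch, assuming filename.txt holds one host per line) keeps the redirection local:
while read -r ip; do
    # -n stops ssh from swallowing the rest of filename.txt via stdin
    ssh -n "$ip" 'yourcommand' >> results.txt
done < filename.txt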
Pro tip: If you foresee doing this a lot -- you have a bunch of servers on which you must routinely issue commands, capture output, whatever -- it would pay to set up and use Ansible or any of the commonly available infrastructure orchestration tools like Chef/Puppet etc. The reason I recommend Ansible is that it requires minimal setup, and that too only on the master machine. It also supports ad-hoc commands pretty well.
PS: I do not have experience with Chef/Puppet; I've just used Ansible.

Automatically copying files from a Linux machine to a Windows machine

I need to automatically copy files from a linux machine to a windows one every day.
I'm looking for something simple and secure like scp, rsync, sftp. Unfortunately, I'm at a loss of how to set this up on the Windows machine.
Does anyone know how to do this?
You can try mounting the Windows drive as a mount point on the Linux machine, using smbfs; you would then be able to use normal Linux scripting and copying tools such as cron and scp/rsync to do the copying.
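A rough sketch of that approach (modern kernels use the cifs filesystem rather than smbfs; the share name, mount point and credentials below are placeholders):
# Mount the Windows share, then copy with ordinary Linux tools
mount -t cifs //WINHOST/backup /mnt/winbackup -o username=winuser,password=secret
cp -r /path/to/files /mnt/winbackup/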
You can find rsync for windows in cygwin, with that you can setup a rsync server on the windows box and run a cron job on your linux machine rsync'ing all the files to the windows machine. We used to do that and it worked fine.
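On the Linux side that boils down to a crontab entry along these lines (a sketch; "backup" is a hypothetical rsync daemon module configured on the Windows box):
# Push /data to the Windows rsync daemon every night at 02:00
0 2 * * * rsync -az /data/ backupuser@winhost::backup/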
"I'm at a loss of how to set this up on the Windows machine." Windows is the client or the server? At a loss means what, specifically? What can't you do?
"linux machine to a windows" can be done two ways.
Linux is client. Windows runs an FTP or SCP or SSH server. Linux has a client and pushes the file to Windows. Look at FileZilla for a free Windows FTP server. Also, Windows often has an FTP service that's turned off. Turn it on.
Windows is client. Windows periodically pulls the file from the Linux server. This is easier, since Linux already has all the necessary servers available. You do, however, need to start them on Linux.
There are scores of sftp and scp clients for Windows. Windows comes with an ftp client. Google for "sftp client". You'll find WinSCP, PuTTY, FileZilla, and a long list of free sftp clients.
I haven't used it in years now, but you could try Unison from http://www.cis.upenn.edu/~bcpierce/unison/
It could be done with 'smbclient', which acts much like an FTP client to a Windows share. Check out the manpage: man smbclient and look for ways to script it with the -c option, or man expect to drive it.
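A hedged example of driving smbclient non-interactively with -c (the share, credentials and file names are placeholders):
# Copy one local file to the Windows share without an interactive session
smbclient //WINDOWSHOSTNAME/sharename -U winuser%secret -c 'put /path/to/localfile remotefile.txt'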
Here's how I'd probably do it though:
1) Pick which user you're going to be when you sync the files. Log in as this user and type 'id' to get the numeric ID. You will use this ID in step 4.
2) Become 'root'.
3) mkdir /mnt/sharename
4) Edit your /etc/fstab file and add an entry something like the line below. Replace the user ID of 500 with your user ID. Replace sharename with your Windows share name. Replace WINDOWSHOSTNAME with your host name or IP address. If you don't know the shares, run smbclient -L WINDOWSHOSTNAME.
//WINDOWSHOSTNAME/sharename /mnt/sharename cifs credentials=/root/smblogin,uid=500,noauto,user 0 0
5) Edit /root/smblogin and put the following two lines in it:
username=YOUR_WINDOWS_USERNAME
password=YOUR_WINDOWS_PASSWORD
6) Log in as the user from step 1.
7) Try mounting the share: mount /mnt/sharename
If that succeeds, then write a script to do it automatically. Let's call it 'backup.sh':
#!/bin/sh
# Mount the share if it is not already mounted, then copy the files
df | grep -q /mnt/sharename
if test $? -ne 0 ; then
    mount /mnt/sharename
fi
cp -r /path/to/dir /mnt/sharename/destination/
Use cron to run the script.
Type crontab -e
Put the following in the file:
PATH=/bin:/usr/bin
# Backup at 2:15 A.M. every day. Run 'man 5 crontab' for help on the time format
15 2 * * * /path/to/backup.sh
You may try WinSCP and its scripting support. And Windows supports some kind of cron-like operation in its management tools (the Task Scheduler), doesn't it?

How can I automate running commands remotely over SSH to multiple servers in parallel?

I've searched around a bit for similar questions, but most of what I found covers running one command, or perhaps a few commands, with something like:
ssh user@host -t sudo su -
However, what if I essentially need to run a script on (let's say) 15 servers at once? Is this doable in bash? In a perfect world I need to avoid installing applications if at all possible to pull this off. For argument's sake, let's just say that I need to do the following across 10 hosts:
Deploy a new Tomcat container
Deploy an application in the container, and configure it
Configure an Apache vhost
Reload Apache
I have a script that does all of that, but it relies on me logging into all the servers, pulling a script down from a repo, and then running it. If this isn't doable in bash, what alternatives do you suggest? Do I need a bigger hammer, such as Perl (Python might be preferred since I can guarantee Python is on all boxes in a RHEL environment thanks to yum/up2date)? If anyone can point me to any useful information it'd be greatly appreciated, especially if it's doable in bash. I'll settle for Perl or Python, but I just don't know those as well (working on that). Thanks!
You can run a local script as shown by che and Yang, and/or you can use a Here document:
ssh root@server /bin/sh <<\EOF
wget http://server/warfile # Could use NFS here
cp app.war /location
command 1
command 2
/etc/init.d/httpd restart
EOF
Often, I'll just use the original Tcl version of Expect. You only need to have that on the local machine. If I'm inside a program using Perl, I do this with Net::SSH::Expect. Other languages have similar "expect" tools.
The issue of how to run commands on many servers at once came up on a Perl mailing list the other day and I'll give the same recommendation I gave there, which is to use gsh:
http://outflux.net/unix/software/gsh
gsh is similar to the "for box in box1_name box2_name box3_name" solution already given but I find gsh to be more convenient. You set up a /etc/ghosts file containing your servers in groups such as web, db, RHEL4, x86_64, or whatever (man ghosts) then you use that group when you call gsh.
[pdurbin@beamish ~]$ gsh web "cat /etc/redhat-release; uname -r"
www-2.foo.com: Red Hat Enterprise Linux AS release 4 (Nahant Update 7)
www-2.foo.com: 2.6.9-78.0.1.ELsmp
www-3.foo.com: Red Hat Enterprise Linux AS release 4 (Nahant Update 7)
www-3.foo.com: 2.6.9-78.0.1.ELsmp
www-4.foo.com: Red Hat Enterprise Linux Server release 5.2 (Tikanga)
www-4.foo.com: 2.6.18-92.1.13.el5
www-5.foo.com: Red Hat Enterprise Linux Server release 5.2 (Tikanga)
www-5.foo.com: 2.6.18-92.1.13.el5
[pdurbin@beamish ~]$
You can also combine or split ghost groups, using web+db or web-RHEL4, for example.
I'll also mention that while I have never used shmux, its website contains a list of software (including gsh) that lets you run commands on many servers at once. Capistrano has already been mentioned and (from what I understand) could be on that list as well.
Take a look at Expect (man expect)
I've accomplished similar tasks in the past using Expect.
You can pipe the local script to the remote server and execute it with one command:
ssh -t user@host 'sh' < path_to_script
This can be further automated by using public key authentication and wrapping with scripts to perform parallel execution.
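A minimal sketch of such a wrapper (assuming key-based authentication is already in place, hosts.txt lists one host per line, and deploy.sh is the local script to run):
#!/bin/bash
# Run the local script on every host in parallel, one log file per host
while read -r host; do
    ssh -o BatchMode=yes "$host" 'bash -s' < deploy.sh > "out.$host.log" 2>&1 &
done < hosts.txt
wait    # block until all background ssh sessions have finished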
You can try paramiko. It's a pure-python ssh client. You can program your ssh sessions. Nothing to install on remote machines.
See this great article on how to use it.
To give you the structure, without actual code.
Use scp to copy your install/setup script to the target box.
Use ssh to invoke your script on the remote box.
pssh may be interesting since, unlike most solutions mentioned here, the commands are run in parallel.
(For my own use, I wrote a simpler small script very similar to GavinCattell's one; it is documented here - in French).
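For reference, a typical pssh invocation might look like this (a sketch; it assumes a hosts.txt file with one host per line and key-based authentication):
# Run uptime on every host in parallel and print each host's output inline
pssh -h hosts.txt -l deployuser -i uptime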
Have you looked at things like Puppet or Cfengine? They can do what you want and probably much more.
For those that stumble across this question, I'll include an answer that uses Fabric, which solves exactly the problem described above: Running arbitrary commands on multiple hosts over ssh.
Once fabric is installed, you'd create a fabfile.py, and implement tasks that can be run on your remote hosts. For example, a task to Reload Apache might look like this:
from fabric.api import env, run
env.hosts = ['host1@example.com', 'host2@example.com']
def reload():
    """ Reload Apache """
    run("sudo /etc/init.d/apache2 reload")
Then, on your local machine, run fab reload and the sudo /etc/init.d/apache2 reload command would get run on all the hosts specified in env.hosts.
You can do it the same way you did before, just script it instead of doing it manually. The following code connects to a machine named 'loca' and runs two commands there. What you need to do is simply insert the commands you want to run there.
che@ovecka ~ $ ssh loca 'uname -a; echo something_else'
Linux loca 2.6.25.9 #1 (blahblahblah)
something_else
Then, to iterate through all the machines, do something like:
for box in box1_name box2_name box3_name
do
ssh $box 'commands_to_run_everywhere'
done
In order to make this ssh thing work without entering passwords all the time, you'll need to set up key authentication. You can read about it at IBM developerworks.
You can run the same command on several servers at once with a tool like cluster ssh. The link is to a discussion of cluster ssh on the Debian package of the day blog.
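Typical usage, assuming the clusterssh package is installed, is simply:
# One small terminal per host opens; keystrokes typed in the console window go to all of them
cssh user@host1 user@host2 user@host3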
Well, for steps 1 and 2, isn't there a Tomcat manager web interface? You could script that with curl, or with zsh and the libwww plug-in.
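For example, deploying a WAR through the manager application can be scripted roughly like this (a sketch; the exact URL path and whether the manager is enabled depend on your Tomcat version and configuration):
# HTTP PUT the WAR to the manager's deploy endpoint (older manager API path shown)
curl -u admin:secret -T app.war "http://server:8080/manager/deploy?path=/myapp"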
For SSH you're looking to:
1) not get prompted for a password (use keys)
2) pass the command(s) on SSH's command line; this is similar to rsh in a trusted network.
Other posts have shown you what to do, and I'd probably use sh too, but I'd be tempted to use Perl, like ssh tomcatuser@server perl -e 'do-everything-on-one-line;', or you could do this:
either scp the_package.tbz tomcatuser@server:the_place/.
ssh tomcatuser@server /bin/sh <<\EOF
define stuff like TOMCAT_WEBAPPS=/usr/local/share/tomcat/webapps
tar xj the_package.tbz or rsync rsync://repository/the_package_place
mv $TOMCAT_WEBAPPS/old_war $TOMCAT_WEBAPPS/old_war.old
mv $THE_PLACE/new_war $TOMCAT_WEBAPPS/new_war
touch $TOMCAT_WEBAPPS/new_war [you don't normally have to restart tomcat]
mv $THE_PLACE/vhost_file $APACHE_VHOST_DIR/vhost_file
$APACHECTL restart [might need to login as apache user to move that file and restart]
EOF
You want DSH or distributed shell, which is used in clusters a lot. Here is the link: dsh
You basically have node groups (a file with lists of nodes in them), and you specify which node group you wish to run commands on; then you use dsh much like you would use ssh to run commands on them.
dsh -a /path/to/some/command/or/script
It will run the command on all the machines at the same time and return the output prefixed with the hostname. The command or script has to be present on the system, so a shared NFS directory can be useful for these sorts of things.
This creates, for every host already in your known_hosts file, a small ssh wrapper command named after that host.
by Quierati
http://pastebin.com/pddEQWq2
#Use in .bashrc
#Use "HashKnownHosts no" in ~/.ssh/config or /etc/ssh/ssh_config
# If known_hosts is already hashed, delete known_hosts so new entries are stored unhashed
[ ! -d ~/bin ] && mkdir ~/bin
for host in `cut -d, -f1 ~/.ssh/known_hosts|cut -f1 -d " "`;
do
[ ! -s ~/bin/$host ] && echo ssh $host '$*' > ~/bin/$host
done
[ -d ~/bin ] && chmod -R 700 ~/bin
export PATH=$PATH:~/bin
Example execution:
$ for i in hostname{1..10}; do $i who; done
There is a tool called FLATT (FLexible Automation and Troubleshooting Tool) that allows you to execute scripts on multiple Unix/Linux hosts with a click of a button. It is a desktop GUI app that runs on Mac and Windows but there is also a command line java client.
You can create batch jobs and reuse on multiple hosts.
Requires Java 1.6 or higher.
Although it's a complex topic, I can highly recommend Capistrano.
I'm not sure if this method will work for everything that you want, but you can try something like this:
$ cat your_script.sh | ssh your_host bash
Which will run the script (which resides locally) on the remote server.
Just read a new blog post about using setsid, without any further installation/configuration besides the mainstream kernel. Tested/verified under Ubuntu 14.04.
While the author gives a very clear explanation and sample code as well, here's the magic part for a quick glance:
#----------------------------------------------------------------------
# Create a temp script to echo the SSH password, used by SSH_ASKPASS
#----------------------------------------------------------------------
SSH_ASKPASS_SCRIPT=/tmp/ssh-askpass-script
cat > ${SSH_ASKPASS_SCRIPT} <<EOL
#!/bin/bash
echo "${PASS}"
EOL
chmod u+x ${SSH_ASKPASS_SCRIPT}
# Tell SSH to read in the output of the provided script as the password.
# We still have to use setsid to eliminate access to a terminal and thus avoid
# it ignoring this and asking for a password.
export SSH_ASKPASS=${SSH_ASKPASS_SCRIPT}
......
......
# Log in to the remote server and run the above command.
# The use of setsid is a part of the machinations to stop ssh
# prompting for a password.
setsid ssh ${SSH_OPTIONS} ${USER}@${SERVER} "ls -rlt"
The easiest way I found, without installing or configuring much software, is using plain old tmux. Say you have 9 Linux servers. Pick a box as your main one. Start a tmux session:
tmux
Then create 9 split tmux panes by doing this 8 times:
ctrl-b + %
Now SSH into each box in each pane. You'll need to know some tmux shortcuts. To navigate, press:
ctrl+b <arrow-keys>
Once you're logged in to all your boxes in each pane, turn on pane synchronization, which lets you type the same thing into every box:
ctrl+b :setw synchronize-panes on
Now whatever you type shows up in every pane. To turn it off, just change on to off in the same command. To cycle through the preset pane layouts (resizing the panes), press ctrl+b <space-bar>.
This works a lot better for me since I need to see each terminal's output, as sometimes servers crash or hang for whatever reason when downloading or upgrading software. Any issues can be isolated and resolved individually.
