Can anyone help me with a command running on a legacy network device (switch) from an Ansible (v2.5) connection? Because the network device is legacy I cannot use an Ansible module, which is why I have to resort to raw. Also, it is worth pointing out that I have no control over the legacy device - I cannot replace it with a more up-to-date model, or alter the configuration in any but the most trivial ways.
The Ansible command I want to run is quite basic. Something like this (not literally this; in reality the command is 'nsshow', but I don't want to focus on that, as I am sure other commands will have the same problem):
- name: "Device Command"
  raw: "command1"
  register: command
That does work, but not in the way required. The problem is that the command runs in the wrong context on the target device, so even though the command runs successfully the output it produces is not useful.
I have done some investigation with SSH commands to discover the nature of the problem. Initially I tried using SSH to connect to the device and entering the command manually. That worked, and I could see from the command prompt on the device that the command was running in the correct context. The next thing I tried was running the command as a pipeline. I found that the first of these commands didn't work (in the same way as Ansible), but that the second worked correctly:
echo -e "command1" | ssh myaccount@mydevice
echo -e "command1" | ssh -tt myaccount@mydevice
So, it seems that the problem relates to pseudo-terminals. I realised I needed the -tt option when the first command gave the warning 'Pseudo-terminal will not be allocated because stdin is not a terminal'. Going back to the Ansible command, I can see from running Ansible with verbose output that -tt is already used on the SSH command line. So why doesn't the Ansible command work? I then discovered that this command also hits the same warning when run from the command line:
ssh -tt myaccount@mydevice command1
I think that is closer to what Ansible is doing than the pipeline examples above, and that this explains why Ansible is not working.
Of course within Ansible I can run the command like this, which does work, but I'd rather avoid it.
- name: "Device Command"
  local_action:
    module: shell echo -e "command1" | ssh -tt myaccount@{{ inventory_hostname }}
  register: command
So, the actual question is 'how can I run an Ansible play that runs a raw command on the target device while avoiding the pseudo-terminal issue?'
Thanks in advance for any help or suggestions.
You can always add ansible_ssh_args to your play:
- hosts: legacy_device
  vars:
    ansible_ssh_args: -tt
  tasks:
    - name: "Device Command"
      raw: "command1"
      register: command
or, better yet, to the inventory, host_vars, or group_vars, as sketched below.
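A minimal sketch of the inventory variant, reusing the legacy_device group name from the play above:
[legacy_device:vars]
ansible_ssh_args=-tt
The group_vars equivalent would be a file such as group_vars/legacy_device.yml containing the single line ansible_ssh_args: -tt.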
Related
I'm coding a distributed application and I need to spawn some processes on multiple machines using ssh. The problem is that the command blocks the terminal, so I need it to run as a background process, so that I can continue giving commands and moving on to other machines.
For example, the command is something like:
make run
This command blocks my terminal, but I want to ssh to another machine and run this command there, plus other commands.
For now I'm with this script that is not working:
#!/bin/bash
HOSTS=(ex1 ex2)
COMPILE_SCRIPT="make"
RUN_CHEFS="make run &"
CLIENT_SCRIPT="make client"
# Compile the project
ssh "${HOSTS[0]}" "${COMPILE_SCRIPT}"
# Run the command on the hosts
for HOSTNAME in ${HOSTS[*]} ; do
    ssh "${HOSTNAME}" "${RUN_CHEFS}"
done
ssh "${HOSTS[0]}" "${CLIENT_SCRIPT}"
One option you have is to use a utility such as mussh, which lets you do things like:
mussh -l username -m -h host1 host2 host3 -c "make run"
The -m flag makes the commands run on the hosts concurrently.
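If you would rather stay with plain ssh, the usual workaround (a sketch, not part of the mussh answer) is to detach the remote job from the session, so ssh is not left waiting for the job's output streams to close:
for HOSTNAME in "${HOSTS[@]}" ; do
    # nohup plus the redirections detach the job; a bare "make run &" still ties
    # the job to the ssh session via stdout/stderr, so ssh blocks
    ssh "${HOSTNAME}" 'nohup make run > make_run.log 2>&1 < /dev/null &'
done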
I'm writing a script to ssh into multiple machines and run multiple commands. However, once the ssh connection succeeds I get the following from each host without a single command executing. The for loop works fine and I am able to ssh successfully into all the machines.
Connection closed by host port 22
If I exclude the -tt flag in ssh I also get,
Pseudo-terminal will not be allocated because stdin is not a terminal.
How do I get the following script to execute successfully on the machines?
Below is the script I am using
for vm_ip in "${vm_ip_array[@]}"
do
    ssh -tt -i {$key_pair} {$username}@${vm_ip} << HERE
[do multiple stuff here like update packages and other maintenance stuff (sudo commands)]
exit
HERE
done
Additional Info: I run a few export statements as well; could that be causing an issue?
Would it be recommended to have the multiple commands as a script on the individual machines? Updating the scripts this way is a nightmare though.
Maybe it's an escaping issue. The HERE document is subject to parameter expansion if the word is not quoted.
So try:
ssh -tt -i {$key_pair} {$username}@${vm_ip} << 'HERE'
#                                              ^    ^
Try using shellcheck for debugging.
For instance, it would tell you:
{$keypair}: This { is literal. Check expression (missing ;/\n?) or quote it.
In other words, I would use ${key_pair}, not {$key_pair}:
#!/bin/bash
for vm_ip in "${vm_ip_array[@]}"
do
    ssh -tt -i "${key_pair}" "${username}@${vm_ip}" << HERE
[do multiple stuff here like update packages and other maintenance stuff (sudo commands)]
exit
HERE
done
I have a user foo that can SSH without a password to A (itself) and B. The playbook requires sudo access on the remote hosts, which I'm handling with become, and the command below works fine.
ansible-playbook -i ../inventory.ini --user=foo --become --become-user=root echo_playbook.yml
But the above command is part of a shell script that foo doesn't have permission to run. When I use sudo to trigger that shell script, ansible says the host is unreachable. I then tried the ansible command itself with sudo, as shown below, and got the same result: host unreachable.
sudo ansible-playbook -i ../inventory.ini --user=foo --become --become-user=root echo_playbook.yml
I understand that sudo escalates ansible-playbook to root, but I'm also providing --user to tell ansible that the foo user should be used for SSH.
Basically to access the playbook I need sudo. To connect to other servers I need foo user. To execute the actions inside the playbook (commands in playbook) I need sudo again (which I am using become for).
Am I doing anything wrong? Can anybody tell me the exact command for the ansible-playbook for the above scenario where ansible-playbook needs to run as sudo ansible-playbook?
I'm not entirely clear on exactly where you're stuck, and I don't think you're confused between the remote user and the local user. If the playbook works as foo, then from what you describe I can only guess that ~foo/.ssh/id_rsa or another automatically provided key authenticates foo. When the playbook runs under sudo, SSH looks for keys in root's ~/.ssh rather than foo's, so that key is no longer offered. But you can generate a key for any user and allow it access to the remote foo if you'd prefer. Or you can run the playbook as another user. It's up to you. The only thing that won't work is relying on the environment or configuration of particular users and then not providing it.
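A sketch of that first option, run as root on the control machine (the target host name is hypothetical):
# give root its own key and authorize it for the remote foo account
ssh-keygen -t ed25519 -f /root/.ssh/id_ed25519 -N ''
ssh-copy-id -i /root/.ssh/id_ed25519.pub foo@managed-host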
the above command is part of a shell script which doesn't have permission for foo.
What I'm hearing is that:
a user foo can successfully run the ansible job
a script (running as root?) cannot run the ansible job
If you're happy with how ansible works for the foo user, you can switch to the foo user to run it:
sudo -u foo ansible-playbook ...
If the script runs as root, sudo will always succeed. Otherwise, you can configure sudo to allow one user to run one or more commands as another user.
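For the unprivileged case, a sudoers sketch might look like this (the user name "deploy" and the drop-in path are assumptions for illustration):
# /etc/sudoers.d/ansible -- let user "deploy" run only ansible-playbook as foo
deploy ALL=(foo) NOPASSWD: /usr/bin/ansible-playbook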
I would like to know how I can pass multiple show commands to the ios_command module in ad-hoc mode.
Sample with just one command:
ansible all -m ios_command -a "commands='show version'"
Now here I would like to send another command, say show run or any other.
Any suggestions on this would be appreciated.
You need to pass a list, and you can do that with a JSON string:
ansible all -m ios_command -a "commands='[ \"show version\", \"show run\" ]'"
If you leave out the spaces, you can shorten it to 'commands=["show version","show run"]'
I use the following:
ansible ios-device -m ios_command -a commands="{{ lookup('file', 'commands.txt') }}" -u username -k
where commands.txt contains
show version
You can add more commands on each line of the 'commands.txt' file.
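For example, to send both commands from the question, commands.txt would contain:
show version
show run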
I'm writing a "tool" - a couple of bash scripts - that automates the installation and configuration of each server in a cluster.
The "tool" runs from a primary server. It tars and distributes itself (via SCP) to every other server and untars the copies via "batch" SSH.
During set-up the tool issues remote commands such as the following from the primary server: echo './run_audit.sh' | ssh host4 'bash -s'. The approach works in many cases, except when the remote script is interactive, since standard input is already taken by the pipe.
Is there a way to run remote bash scripts interactively over SSH?
As a starting point, consider the following case: echo 'read -p "enter name:" name; echo "your name is $name"' | ssh host4 'bash -s'
In the case above the prompt never appears; how do I work around that?
Thanks in advance.
Run the command directly, like so:
ssh -t host4 bash ./run_audit.sh
For an encore, modify the shell script so it reads options from the command line or a configuration file instead of from stdin (or in preference to stdin).
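For the starting example in the question, that change might look like this (a sketch, not the actual run_audit.sh):
#!/bin/bash
# take the name from the first argument when given; prompt only as a fallback
if [ -n "$1" ]; then
    name="$1"
else
    read -p "enter name:" name
fi
echo "your name is $name"
Then ssh host4 bash ./run_audit.sh Alice (argument value hypothetical) needs no interactive input at all.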
I second Dennis Williamson's suggestion to look into puppet/etc instead.
Sounds like you might want to look into expect.
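A minimal expect sketch, assuming the prompt from the question's example (the answer "auditor" is made up):
#!/usr/bin/expect -f
# drive the remote script's prompt automatically
spawn ssh -t host4 bash ./run_audit.sh
expect "enter name:"
send "auditor\r"
expect eof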
Do not pipe commands to ssh via stdin; instead, copy the shell script to the remote machine:
scp ./run_audit.sh host4:
and then:
ssh host4 ./run_audit.sh
For cluster deployments I use Fabric... it runs on top of the SSH protocol, no daemons needed. It's as easy as writing a fabfile.py:
from fabric.api import run

def host_type():
    run('uname -s')
and then:
$ fab -H localhost,linuxbox host_type
[localhost] run: uname -s
[localhost] out: Darwin
[linuxbox] run: uname -s
[linuxbox] out: Linux
Done.
Disconnecting from localhost... done.
Disconnecting from linuxbox... done.
Of course it can do more... including interactive commands, and it relies on the files in your ~/.ssh directory for SSH configuration. More at fabfile.org. For sure you will forget bash for such tasks. ;-)