I have a couple of immutable files on my server, for example /etc/passwd. This means that every time a user is added, edited or removed on the system (manually, by dpkg or any other way), we need to run chattr -i /etc/passwd before the command in question and chattr +i /etc/passwd after it. The problem is that we don't always know up front that this will happen.
Is there a way in ansible to plan the following in a playbook or role:
do a task
check if there's an error like operation not permitted: <filename>
chattr -i <filename>
redo the task
chattr +i <filename> (even if the task gave another error this time round)
In more general terms: is it possible to wrap an ansible task in two other tasks?
Or, if I put the chattr +i in a handler, is there a way to have the chattr -i defined somewhere and have it run when necessary?
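To make the intent concrete, here is a rough sketch of the kind of wrapping I have in mind, using block/rescue/always (the user task and file path are only examples, and ideally the rescue would also check that the failure really was 'operation not permitted'):
- name: Task that may touch the immutable file
  block:
    - name: Add a user (can fail while /etc/passwd is immutable)
      user:
        name: exampleuser
  rescue:
    # ideally only when the error was 'operation not permitted: /etc/passwd'
    - name: Drop the immutable flag
      command: chattr -i /etc/passwd
    - name: Redo the task
      user:
        name: exampleuser
  always:
    # runs whether or not the retry succeeded
    - name: Restore the immutable flag
      command: chattr +i /etc/passwd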
I am trying to create a directory using the command below:
ansible app -m file -a "path=/home/user/test mode = 777 state = directory" -b
I am getting the error message below. Could anyone advise me on what I am doing wrong here?
ERROR! this task 'file' has extra params, which is only allowed in the
following modules: ansible.builtin.raw, ansible.legacy.add_host,
ansible.builtin.meta, ansible.legacy.include,
ansible.legacy.import_role, script, ansible.legacy.raw, group_by,
ansible.builtin.shell, ansible.legacy.win_command, include, shell,
include_vars, ansible.builtin.import_tasks, add_host,
ansible.builtin.include_vars, ansible.legacy.include_role,
ansible.builtin.include_role, ansible.legacy.include_vars,
ansible.legacy.win_shell, ansible.legacy.group_by, import_tasks,
ansible.builtin.set_fact, ansible.builtin.command,
ansible.builtin.include_tasks, include_tasks, ansible.builtin.script,
ansible.builtin.include, raw, meta, ansible.legacy.set_fact,
ansible.builtin.add_host, ansible.legacy.script,
ansible.legacy.import_tasks, win_command, ansible.builtin.win_shell,
include_role, win_shell, set_fact, ansible.legacy.shell,
ansible.legacy.command, import_role, ansible.legacy.meta,
ansible.builtin.import_role, ansible.legacy.include_tasks,
ansible.builtin.group_by, ansible.builtin.win_command, command
The simplest example of an ad hoc command to create a directory using Ansible is
ansible nodes -a "mkdir /BYANSIBLE"
Try this:
ansible app -m file -a "dest=/home/user/test state=directory"
or
ansible localhost -m file -a "dest=/home/user/test state=directory"
In general, for an Ansible ad-hoc command, all module arguments are supplied to the -a option as one single string, and Ansible splits the individual arguments on spaces.
You therefore need to remove the extra spaces on either side of the equals signs for the ad-hoc command to work.
Hence, the following would work:
ansible app -m file -a "path=/home/user/test mode=755 state=directory" -b
Also, check out some of the related examples in the Ansible docs.
Side note: I've modified the permissions to 755 (read/write/execute for the owner + read/execute for group and others). It's almost never a good idea to give 777 permissions (full read/write/execute for everyone) on directories.
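For reference, the same directory creation written as a playbook task might look roughly like this (the path and mode are taken from the corrected ad-hoc command above):
- name: Create the test directory
  file:
    path: /home/user/test
    state: directory
    mode: '0755'
  become: true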
I want to use Ansible to manage a multi-service deployment, where each service has many roles. Now I want to package one service as a tgz or zip file (similar to Helm charts) and use a single file to override variables at execution time (like helm install xx.tgz --values values.yml).
I am new to Ansible and have checked ansible-playbook -h, but could not find anything related.
Usage: ansible-playbook [options] playbook.yml [playbook2 ...]
Runs Ansible playbooks, executing the defined tasks on the targeted hosts.
Options:
--ask-vault-pass ask for vault password
-C, --check don't make any changes; instead, try to predict some
of the changes that may occur
-D, --diff when changing (small) files and templates, show the
differences in those files; works great with --check
-e EXTRA_VARS, --extra-vars=EXTRA_VARS
set additional variables as key=value or YAML/JSON, if
filename prepend with #
--flush-cache clear the fact cache for every host in inventory
--force-handlers run handlers even if a task fails
-f FORKS, --forks=FORKS
specify number of parallel processes to use
(default=5)
-h, --help show this help message and exit
-i INVENTORY, --inventory=INVENTORY, --inventory-file=INVENTORY
specify inventory host path or comma separated host
list. --inventory-file is deprecated
-l SUBSET, --limit=SUBSET
further limit selected hosts to an additional pattern
--list-hosts outputs a list of matching hosts; does not execute
anything else
--list-tags list all available tags
--list-tasks list all tasks that would be executed
-M MODULE_PATH, --module-path=MODULE_PATH
prepend colon-separated path(s) to module library (def
ault=~/.ansible/plugins/modules:/usr/share/ansible/plu
gins/modules)
--skip-tags=SKIP_TAGS
only run plays and tasks whose tags do not match these
values
--start-at-task=START_AT_TASK
start the playbook at the task matching this name
--step one-step-at-a-time: confirm each task before running
--syntax-check perform a syntax check on the playbook, but do not
execute it
-t TAGS, --tags=TAGS only run plays and tasks tagged with these values
--vault-id=VAULT_IDS the vault identity to use
--vault-password-file=VAULT_PASSWORD_FILES
vault password file
-v, --verbose verbose mode (-vvv for more, -vvvv to enable
connection debugging)
--version show program's version number, config file location,
configured module search path, module location,
executable location and exit
Connection Options:
control as whom and how to connect to hosts
-k, --ask-pass ask for connection password
--private-key=PRIVATE_KEY_FILE, --key-file=PRIVATE_KEY_FILE
use this file to authenticate the connection
-u REMOTE_USER, --user=REMOTE_USER
connect as this user (default=None)
-c CONNECTION, --connection=CONNECTION
connection type to use (default=smart)
-T TIMEOUT, --timeout=TIMEOUT
override the connection timeout in seconds
(default=10)
--ssh-common-args=SSH_COMMON_ARGS
specify common arguments to pass to sftp/scp/ssh (e.g.
ProxyCommand)
--sftp-extra-args=SFTP_EXTRA_ARGS
specify extra arguments to pass to sftp only (e.g. -f,
-l)
--scp-extra-args=SCP_EXTRA_ARGS
specify extra arguments to pass to scp only (e.g. -l)
--ssh-extra-args=SSH_EXTRA_ARGS
specify extra arguments to pass to ssh only (e.g. -R)
Privilege Escalation Options:
control how and which user you become as on target hosts
-b, --become run operations with become (does not imply password
prompting)
--become-method=BECOME_METHOD
privilege escalation method to use (default=sudo), use
`ansible-doc -t become -l` to list valid choices.
--become-user=BECOME_USER
run operations as this user (default=root)
-K, --ask-become-pass
ask for privilege escalation password
Version: ansible 2.8.0
You can't package playbooks as zip or tgz files. One option would be to use ansible-pull. It checks a repository out and runs a specified playbook: https://docs.ansible.com/ansible/latest/cli/ansible-pull.html
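As a rough illustration (the repository URL and file names are placeholders), an ansible-pull run that also overrides variables from a file could look something like this:
ansible-pull -U https://example.com/ansible/myservice.git -e @values.yml playbook.yml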
I am trying to use an Ansible dynamic inventory.
I get results when I run:
$ ./ec2.py --list
...
...
"ec2": [
"xx.xx.xx.xx"
]
}
But when I try to run it with the ansible command, it does not run successfully.
$ ansible -i ec2.py -e "ansible_ssh_port=3003" -m ping
Usage: ansible <host-pattern> [options]
Define and run a single task 'playbook' against a set of hosts
Options:
-a MODULE_ARGS, --args=MODULE_ARGS
module arguments
--ask-vault-pass ask for vault password
-B SECONDS, --background=SECONDS
run asynchronously, failing after X seconds
(default=N/A)
-C, --check don't make any changes; instead, try to predict some
of the changes that may occur
-D, --diff when changing (small) files and templates, show the
differences in those files; works great with --check
-e EXTRA_VARS, --extra-vars=EXTRA_VARS
set additional variables as key=value or YAML/JSON, if
filename prepend with #
-f FORKS, --forks=FORKS
specify number of parallel processes to use
(default=5)
-h, --help show this help message and exit
-i INVENTORY, --inventory=INVENTORY, --inventory-file=INVENTORY
specify inventory host path or comma separated host
list. --inventory-file is deprecated
-l SUBSET, --limit=SUBSET
further limit selected hosts to an additional pattern
--list-hosts outputs a list of matching hosts; does not execute
anything else
-m MODULE_NAME, --module-name=MODULE_NAME
module name to execute (default=command)
-M MODULE_PATH, --module-path=MODULE_PATH
prepend colon-separated path(s) to module library (def
ault=['/Users/luvpreetsingh/.ansible/plugins/modules',
'/usr/share/ansible/plugins/modules'])
-o, --one-line condense output
--playbook-dir=BASEDIR
Since this tool does not use playbooks, use this as a
subsitute playbook directory.This sets the relative
path for many features including roles/ group_vars/
etc.
-P POLL_INTERVAL, --poll=POLL_INTERVAL
set the poll interval if using -B (default=15)
--syntax-check perform a syntax check on the playbook, but do not
execute it
-t TREE, --tree=TREE log output to this directory
--vault-id=VAULT_IDS the vault identity to use
--vault-password-file=VAULT_PASSWORD_FILES
vault password file
-v, --verbose verbose mode (-vvv for more, -vvvv to enable
connection debugging)
--version show program's version number and exit
Connection Options:
control as whom and how to connect to hosts
-k, --ask-pass ask for connection password
--private-key=PRIVATE_KEY_FILE, --key-file=PRIVATE_KEY_FILE
use this file to authenticate the connection
-u REMOTE_USER, --user=REMOTE_USER
connect as this user (default=None)
-c CONNECTION, --connection=CONNECTION
connection type to use (default=smart)
-T TIMEOUT, --timeout=TIMEOUT
override the connection timeout in seconds
(default=10)
--ssh-common-args=SSH_COMMON_ARGS
specify common arguments to pass to sftp/scp/ssh (e.g.
ProxyCommand)
--sftp-extra-args=SFTP_EXTRA_ARGS
specify extra arguments to pass to sftp only (e.g. -f,
-l)
--scp-extra-args=SCP_EXTRA_ARGS
specify extra arguments to pass to scp only (e.g. -l)
--ssh-extra-args=SSH_EXTRA_ARGS
specify extra arguments to pass to ssh only (e.g. -R)
Privilege Escalation Options:
control how and which user you become as on target hosts
-s, --sudo run operations with sudo (nopasswd) (deprecated, use
become)
-U SUDO_USER, --sudo-user=SUDO_USER
desired sudo user (default=root) (deprecated, use
become)
-S, --su run operations with su (deprecated, use become)
-R SU_USER, --su-user=SU_USER
run operations with su as this user (default=None)
(deprecated, use become)
-b, --become run operations with become (does not imply password
prompting)
--become-method=BECOME_METHOD
privilege escalation method to use (default=sudo),
valid choices: [ sudo | su | pbrun | pfexec | doas |
dzdo | ksu | runas | pmrun | enable ]
--become-user=BECOME_USER
run operations as this user (default=root)
--ask-sudo-pass ask for sudo password (deprecated, use become)
--ask-su-pass ask for su password (deprecated, use become)
-K, --ask-become-pass
ask for privilege escalation password
Some modules do not make sense in Ad-Hoc (include, meta, etc)
ERROR! Missing target hosts
I get this long usage message together with the 'Missing target hosts' error.
Then, when I run it specifying the region, it does not give any error, but it does not return any instances either.
$ ansible -i ec2.py us-west-2 -e "ansible_ssh_port=3003" -m ping
[WARNING]: Could not match supplied host pattern, ignoring: us-west-2
[WARNING]: No hosts matched, nothing to do
Firstly, why is it not running? What am I doing wrong?
Secondly, why does the error change when I specify the region? Is specifying a region mandatory? Shouldn't it pick the region up from ec2.ini?
You do not have any hosts listed. Try adding all at the end of your ansible command:
ansible -i ec2.py -e "ansible_ssh_port=3003" -m ping all
The above command is partly right. The inventory has been specified with the -i argument but the target host group has not been specified. The command should be
ansible all -i ec2.py -e "ansible_ssh_port=3003" -m ping
# The command syntax could be written as below
ansible <target_host_group> -i <inventory> -e "extra_vars" -m "module_name" -a "module_arguments"
Please refer to Working with patterns and Ad-Hoc Commands in the Ansible documentation.
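If it is unclear which groups and host patterns the dynamic inventory actually exposes, they can be listed first; for example, either of the following standard commands shows what ec2.py generates:
ansible all -i ec2.py --list-hosts
ansible-inventory -i ec2.py --graph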
Can anyone help me with running a command on a legacy network device (a switch) from Ansible (v2.5)? Because the network device is legacy I cannot use an Ansible module, which is why I have to resort to raw. Also, it is worth pointing out that I have no control over the legacy device: I cannot replace it with a more up-to-date model, or alter its configuration in any but the most trivial ways.
The Ansible command I want to run is quite basic. Something like this (obviously not this, in reality the command is 'nsshow', but I don't want to focus on that as I am sure there will be other commands with the same problem):
- name: "Device Command"
raw: "command1"
register: command
That does work, but not in the way required. The problem is that the command runs in the wrong context on the target device, so even though the command runs successfully the output it produces is not useful.
I have done some investigation with SSH commands to discover the nature of the problem. Initially I tried using SSH to connect to the device and entering the command manually. That worked, and I could see from the command prompt on the device that the command was running in the correct context. The next thing I tried was running the command as a pipeline. I found that the first of these commands didn't work (in the same way as Ansible), but that the second worked correctly:
echo -e "command1" | ssh myaccount@mydevice
echo -e "command1" | ssh -tt myaccount@mydevice
So, it seems that the problem relates to pseudo-terminals. I realised I needed the -tt option when the first command gave the warning 'Pseudo-terminal will not be allocated because stdin is not a terminal'. Going back to the Ansible command, I can see from verbose output that -tt is used on the SSH command line. So why doesn't the Ansible command work? I then discovered that the following command also hits the same problem when run from the command line:
ssh -tt myaccount@mydevice command1
I think that is more like what Ansible is doing than the pipeline examples I used above and that this explains why Ansible is not working.
Of course within Ansible I can run the command like this, which does work, but I'd rather avoid it.
- name: "Device Command"
local_action:
module: shell echo -e "command1" | ssh -tt myaccount#{{ inventory_hostname }}
register: command
So, the actual question is: how can I run an Ansible play that executes a raw command on the target device while avoiding the pseudo-terminal issue?
Thanks in advance for any help or suggestions.
You can always add ansible_ssh_args to your play:
- hosts: legacy_device
  vars:
    ansible_ssh_args: -tt
  tasks:
    - name: "Device Command"
      raw: "command1"
      register: command
or, better yet, to the inventory, host_vars, or group_vars.
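For example, a minimal group_vars file could carry the same setting (the group name legacy_device is simply taken from the play above):
# group_vars/legacy_device.yml
ansible_ssh_args: -tt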
I'm currently using the ad-hoc command ansible ubuntu -a "ls -l /var/run/reboot-required" to get a list of servers that require a reboot. However, the end result is a list of all servers, each with either the info about the indicated file or an error that the file does not exist.
I'm familiar enough with playbooks to create one that actually does the reboot, but I don't want that. I just want a nice (and relatively neat) list of servers that still require a reboot.
A more generic solution of getting a list of servers that meet some criteria (e.g. have a variable set) would also be quite helpful.
This is not easy, because the proper way is to check for the existence of the file with the stat module, register the result in a variable, and build the list from the hosts where var.stat.exists is true.
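As a sketch, that approach in a small playbook could look like this (only the ubuntu group and the path come from the question, the rest is illustrative):
- hosts: ubuntu
  gather_facts: false
  tasks:
    - name: Check whether the reboot marker exists
      stat:
        path: /var/run/reboot-required
      register: reboot_required

    - name: Report hosts that still need a reboot
      debug:
        msg: "{{ inventory_hostname }} requires a reboot"
      when: reboot_required.stat.exists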
If you want to do it in one line and you don't mind using bash scripting, do:
ansible ubuntu -m stat -a "path=/var/run/reboot-required" -o | grep -v '{"exists": false}' | awk -F\| '{ print $1 }'
Hope it helps