how to execute a multiline command in ansible 2.9.10 in fedora 32

I want to execute a command on a remote machine using ansible 2.9.10. First I tried like this:
ansible kubernetes-root -m command -a "cat > /etc/docker/daemon.json <<EOF
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2",
"registry-mirrors":[
"https://kfwkfulq.mirror.aliyuncs.com",
"https://2lqq34jg.mirror.aliyuncs.com",
"https://pee6w651.mirror.aliyuncs.com",
"http://hub-mirror.c.163.com",
"https://docker.mirrors.ustc.edu.cn",
"https://registry.docker-cn.com"
]
}"
Obviously it is not working, so I read this guide and tried like this:
- hosts: kubernetes-root
  remote_user: root
  tasks:
    - name: add docker config
      shell: >
      cat > /etc/docker/daemon.json <<EOF
      {
      "exec-opts": ["native.cgroupdriver=systemd"],
      "log-driver": "json-file",
      "log-opts": {
      "max-size": "100m"
      },
      "storage-driver": "overlay2",
      "registry-mirrors":[
      "https://kfwkfulq.mirror.aliyuncs.com",
      "https://2lqq34jg.mirror.aliyuncs.com",
      "https://pee6w651.mirror.aliyuncs.com",
      "http://hub-mirror.c.163.com",
      "https://docker.mirrors.ustc.edu.cn",
      "https://registry.docker-cn.com"
      ]
      }
and executed it like this:
[dolphin@MiWiFi-R4CM-srv playboook]$ ansible-playbook add-docker-config.yaml
[WARNING]: Invalid characters were found in group names but not replaced, use
-vvvv to see details
ERROR! We were unable to read either as JSON nor YAML, these are the errors we got from each:
JSON: Expecting value: line 1 column 1 (char 0)
Syntax Error while loading YAML.
could not find expected ':'
The error appears to be in '/home/dolphin/source-share/source/dolphin/dolphin-scripts/ansible/playboook/add-docker-config.yaml': line 7, column 7, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
      cat > /etc/docker/daemon.json <<EOF
      {
      ^ here
Is there any way to achieve this? How do I fix it?

Your playbook should work fine; you just have to add some indentation after the shell clause line, and change the > to |.
Here is the updated playbook:
---
- name: play name
  hosts: dell420
  gather_facts: false
  vars:
  tasks:
    - name: run shell task
      shell: |
        cat > /tmp/temp.file << EOF
        {
        "exec-opts": ["native.cgroupdriver=systemd"],
        "log-driver": "json-file",
        "log-opts": {
        "max-size": "100m"
        },
        "storage-driver": "overlay2",
        "registry-mirrors":[
        "https://kfwkfulq.mirror.aliyuncs.com",
        "https://2lqq34jg.mirror.aliyuncs.com",
        "https://pee6w651.mirror.aliyuncs.com",
        "http://hub-mirror.c.163.com",
        "https://docker.mirrors.ustc.edu.cn",
        "https://registry.docker-cn.com"
        ]
        }
        EOF
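Why the > broke things: in YAML, > is a folded block scalar, which turns the newlines inside the block into spaces, so the whole heredoc, including its closing EOF, reaches the remote shell as one long line and the here-document never terminates (bash warns about a here-document delimited by end-of-file). | is a literal block scalar and keeps every newline, so EOF ends up on its own line as the shell expects. A minimal side-by-side sketch (the file name here is just an example):
- name: folded scalar, the heredoc body is folded onto one line and never terminates
  shell: >
    cat > /tmp/demo.json << EOF
    {"demo": true}
    EOF

- name: literal scalar, newlines are preserved and the heredoc terminates normally
  shell: |
    cat > /tmp/demo.json << EOF
    {"demo": true}
    EOF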
Not sure what is wrong with the ad-hoc command; I tried a few things but didn't manage to make it work.
Hope this helps.
EDIT:
As pointed out by Zeitounator, the ad-hoc command will work if you use the shell module instead of command. Example:
ansible -i hosts dell420 -m shell -a 'cat > /tmp/temp.file <<EOF
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2",
"registry-mirrors":[
"https://kfwkfulq.mirror.aliyuncs.com",
"https://2lqq34jg.mirror.aliyuncs.com",
"https://pee6w651.mirror.aliyuncs.com",
"http://hub-mirror.c.163.com",
"https://docker.mirrors.ustc.edu.cn",
"https://registry.docker-cn.com"
]
}
EOF
'
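You can then verify the result with a quick follow-up ad-hoc call (plain command is enough here, since no redirection is involved; host and inventory names are the ones from the example above):
ansible -i hosts dell420 -m command -a 'cat /tmp/temp.file'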

Related

'Permission Denied' while configuring Filebeat and Logstash using Bash Script

I am writing a script that creates a logstash conf file and adds the configuration, and removes the existing filebeat config file and creates a new one.
I am using cat, but when I run the script, I get:
./script.sh: /etc/logstash/conf.d/apache.conf : Permission denied
./script.sh: /etc/filebeat/filebeat.yml: Permission denied
This is the script. I have tried using sudo chown -R.
Am I missing something or is there a better way to configure my file?
#!/bin/bash
sudo rm /etc/filebeat/filebeat.yml
cat > "/etc/filebeat/filebeat.yml" <<EOF
filebeat.inputs:
- type: filestream
  id: my-filestream-id
  enabled: true
  paths:
    - /home/ubuntu/logs/.*log
setup.kibana:
output.logstash:
  hosts: ["169.254.169.254:5044"]
EOF
sudo touch /etc/logstash/conf.d/apache.conf
sudo cat > "/etc/logstash/conf.d/apache.conf " <<EOF
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["169.254.169.254"]
  }
}
EOF
The main problem here is how redirections work.
According to this answer:
All redirections (including >) are applied before executing the actual command. In other words, your shell first tries to open /etc/php5/apache2/php.ini for writing using your account, then runs a completely useless sudo cat.
You can easily solve your problem by using tee (with sudo) instead of cat. Then, your script should be like this:
#!/bin/bash
sudo rm /etc/filebeat/filebeat.yml
sudo tee "/etc/filebeat/filebeat.yml" << EOF
filebeat.inputs:
- type: filestream
  id: my-filestream-id
  enabled: true
  paths:
    - /home/ubuntu/logs/.*log
setup.kibana:
output.logstash:
  hosts: ["169.254.169.254:5044"]
EOF
sudo touch /etc/logstash/conf.d/apache.conf
sudo tee "/etc/logstash/conf.d/apache.conf" << EOF
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["169.254.169.254"]
  }
}
EOF
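One small refinement: tee also copies everything it writes to standard output, so the file contents will be echoed to your terminal when the script runs. If you want to silence that, redirect tee's stdout to /dev/null, for example:
sudo tee "/etc/filebeat/filebeat.yml" > /dev/null <<EOF
... same content as above ...
EOF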

Issue in ansible playbook command?

I am trying to execute a command in Docker on another machine from my machine. When I execute this command:
- name: Add header
  command: docker exec cli bash -l -c "echo '{"payload":{"header":{"channel_header":{"channel_id":"gll", "type":2}},"data":{"config_update":'$(cat jaguar_update.json)'}}}' | jq . > jaguar_update_in_envelope.json"
through an Ansible playbook, I get the error shown below.
fatal: [command-task]: FAILED! => {
    "changed": true,
    "cmd": [ ],
    "delta": "0:00:00.131115",
    "end": "2019-07-11 17:32:44.651504",
    "msg": "non-zero return code",
    "rc": 4,
    "start": "2019-07-11 17:32:44.520389",
    "stderr": "mesg: ttyname failed: Inappropriate ioctl for device\nparse error: Invalid numeric literal at line 1, column 9",
    "stderr_lines": [
        "mesg: ttyname failed: Inappropriate ioctl for device",
        "parse error: Invalid numeric literal at line 1, column 9"
    ],
    "stdout": "",
    "stdout_lines": [ ]
}
But if I manually execute the command in the Docker container, it works fine and I don't get any issue.
EDIT:
As suggested, I tried with the shell module:
shell: docker exec cli -it bash -l -c "echo '{"payload":{"header":{"channel_header":{"channel_id":"gll", "type":2}},"data":{"config_update":'$(cat jaguar_update.json)'}}}' | jq . > jaguar_update_in_envelope.json"
But I get the error below:
fatal: [command-task]: FAILED! => {"changed": true, "cmd": "docker exec cli -it bash -l -c echo '{\"payload\":{\"header\":{\"channel_header\":{\"channel_id\":\"gll\", \"type\":2}},\"data\":{\"config_update\":'$(cat jaguar_update.json)'}}}' | jq . > jaguar_update_in_envelope.json", "delta": "0:00:00.110341", "end": "2019-07-12 10:21:45.204049", "msg": "non-zero return code", "rc": 4, "start": "2019-07-12 10:21:45.093708", "stderr": "cat: jaguar_update.json: No such file or directory\nparse error: Invalid numeric literal at line 1, column 4", "stderr_lines": ["cat: jaguar_update.json: No such file or directory", "parse error: Invalid numeric literal at line 1, column 4"], "stdout": "", "stdout_lines": []}
The file 'jaguar_update.json' is present in the working directory; I have confirmed the working directory.
The above command works if I put it in a shell script file and then execute the shell script from Ansible.
As everyone has mentioned, this does need you to use shell instead of command. Now you want to simplify the command so it first runs in plain bash, which can be done easily using printf:
$ printf "%s%s%s" '{"payload":{"header":{"channel_header":{"channel_id":"gll", "type":2}},"data":{"config_update":' $(<jaguar_update.json) '}}}' | jq . > jaguar_update_in_envelope.json
$ cat jaguar_update_in_envelope.json
{
  "payload": {
    "header": {
      "channel_header": {
        "channel_id": "gll",
        "type": 2
      }
    },
    "data": {
      "config_update": {
        "name": "tarun"
      }
    }
  }
}
So now our command runs without issues. Next is to move it to the bash -l format. Instead of using -c, which requires us to pass the whole command as one parameter, we use a multiline heredoc:
$ bash -l <<EOF
printf "%s%s%s" '{"payload":{"header":{"channel_header":{"channel_id":"gll", "type":2}},"data":{"config_update":' $(<jaguar_update.json) '}}}' | jq . > jaguar_update_in_envelope.json
EOF
But this fails with an error
{"payload":{"header":{"channel_header":{"channel_id":"gll", "type":2}},"data":{"config_update":{bash: line 2: name:: command not found
bash: line 3: syntax error near unexpected token `}'
bash: line 3: `} '}}}' | jq . > jaguar_update_in_envelope.json'
This is because the EOF (heredoc) form treats each new line as a separate command. So we need to strip the newline characters from the expanded file (and escape its double quotes so the inner bash does not consume them):
bash -l <<EOF
printf "%s%s%s" '{"payload":{"header":{"channel_header":{"channel_id":"gll", "type":2}},"data":{"config_update":' $(sed -E 's|"|\\"|g' jaguar_update.json | tr -d '\n') '}}}' | jq . > jaguar_update_in_envelope.json
EOF
And now in ansible
- name: a play that runs entirely on the ansible host
  hosts: 127.0.0.1
  connection: local
  tasks:
    - name: Solve the problem
      shell: |
        bash -l <<EOF
        printf "%s%s%s" '{"payload":{"header":{"channel_header":{"channel_id":"gll", "type":2}},"data":{"config_update":' $(sed -E 's|"|\\"|g' jaguar_update.json | tr -d '\n') '}}}' | jq . > jaguar_update_in_envelope.json
        EOF
And the result
$ ansible-playbook test.yml
PLAY [a play that runs entirely on the ansible host] *********************************************************************************************************************************
TASK [Gathering Facts] ***************************************************************************************************************************************************************
ok: [127.0.0.1]
TASK [Solve the problem] *************************************************************************************************************************************************************
changed: [127.0.0.1]
PLAY RECAP ***************************************************************************************************************************************************************************
127.0.0.1 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
$ cat jaguar_update_in_envelope.json
{
  "payload": {
    "header": {
      "channel_header": {
        "channel_id": "gll",
        "type": 2
      }
    },
    "data": {
      "config_update": {
        "name": "tarun"
      }
    }
  }
}
To avoid any complexity, try, as in this question, to wrap your command in a script and call that script (with command, shell, or raw):
- name: Add header
  raw: /path/to/script/docker-add-header.sh
And in /path/to/script/docker-add-header.sh:
docker exec cli -it bash -l -c "echo '{"payload":{"header":{"channel_header":{"channel_id":"gll", "type":2}},"data":{"config_update":'$(cat jaguar_update.json)'}}}' | jq . > jaguar_update_in_envelope.json"
Try to make the script work first alone (no Ansible).
If it is not working even outside any Ansible call, try escaping the nested double-quotes:
docker exec cli -it bash -l -c "echo '{\"payload\":{\"header\":...
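If you would rather not use raw, the script module is another option: it copies a script from the control node to the remote host and runs it there, so all the tricky quoting stays inside the script file. A minimal sketch (same hypothetical script path as above):
- name: Add header
  script: /path/to/script/docker-add-header.sh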
cf. the docs:
The command(s) will not be processed through the shell, so variables like $HOME and operations like "<", ">", "|", ";" and "&" will not work. Use the shell module if you need these features.
shell pretty literally submits a script to the sh command parser.
Another note - you end the single-quote before the $(cat jaguar_update.json) and restart it after, but don't use any double quoting around it. Your output may handle that, but I wanted to call attention in case it matters.
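For illustration only (outside the docker exec wrapper, to keep the quoting readable), the usual way to keep the file contents intact is to give the command substitution its own double quotes so it is not word-split by the shell:
echo '{"config_update":'"$(cat jaguar_update.json)"'}' | jq .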

How do I pass a dictionary to an ansible ad-hoc command?

If I have an ansible ad-hoc command that wants a dictionary or list valued argument, like the queries argument to postgresql_query, how do I invoke that in ansible ad-hoc commands?
Do I have to write a one-command playbook? I'm looking for a way to minimise the numbers of layers of confusing quoting (shell, yaml/json, etc) involved.
The ansible docs mention accepting structured forms for variables. So I tried the yaml and json syntax for the arguments:
ansible -m postgresql_query -sU postgres -a '{"queries":["SELECT 1", "SELECT 2"]}'
... but got ERROR! this task 'postgresql_query' has extra params, which is only allowed in the following modules: ....
The same is true if I @include a file with yaml or json contents, like
cat > 'query.yml' <<'__END__'
queries:
- "SELECT 1"
- "SELECT 2"
__END__
ansible -m postgresql_query -sU postgres -a @queries.yml
You can define the dictionary as a JSON extra-var and then pass it as a parameter:
ansible -m module_name -e '{"dict": {"key": "value"}}' -a "param={{ dict }}"
(parameter positions are arbitrary)
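Applied to the postgresql_query case from the question, the same pattern would look roughly like this; whether a list value survives the key=value parsing intact can depend on the Ansible version, so treat it as a sketch of the pattern rather than a guaranteed fix:
ansible hostname -m postgresql_query -sU postgres -e '{"my_queries": ["SELECT 1", "SELECT 2"]}' -a 'queries="{{ my_queries }}"'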
I have most of a solution - a way to express something like a shell script or query payload without extra quoting. But it's ugly:
ansible hostname -m postgresql_query -sU postgres -a 'query="{{query}}"' -e @/dev/stdin <<'__END__'
query: |
  SELECT 'no special quotes needed' AS "multiline
  identifier works fine" FROM
  generate_series(1,2)
__END__
Not only is that shamefully awful, but it doesn't seem to work for lists (arrays):
ansible hostname -m postgresql_query -sU postgres -vvv -a 'query="{{query}}"' -e @/dev/stdin <<'__END__'
queries:
  - |
    SELECT 1
  - |
    SELECT 2
__END__
fails with
hostname | FAILED! => {
    "changed": false,
    "err": "syntax error at or near \"<\"\nLINE 1: <bound method Templar._query_lookup of <ansible.template.Tem...\n ^\n",
    "invocation": {
        "module_args": {
            "autocommit": false,
            "conninfo": "",
            "queries": null,
            "query": "<bound method Templar._query_lookup of <ansible.template.Templar object at 0x7f72531c61d0>>"
        }
    },
    "msg": "Database query failed"
}
so it looks like some kind of lazy evaluation is breaking things.
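One detail worth noting: in the failing example the vars file defines queries (plural) while -a still references {{query}} (singular). Since no query variable is defined, Jinja falls back to Ansible's built-in query lookup function, which is exactly the <bound method Templar._query_lookup ...> value in the output. A sketch with the names matched up (whether the list then survives the key=value parsing is a separate question):
ansible hostname -m postgresql_query -sU postgres -a 'queries="{{ queries }}"' -e @/dev/stdin <<'__END__'
queries:
  - SELECT 1
  - SELECT 2
__END__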

Ansible fortios_config backups not working

I use Fortinet for firewall automation, but I get the error "Error reading running config". I already followed this issue: https://github.com/ansible/ansible/issues/33392
But I did not find any solution. Please tell me, what am I doing wrong?
Ansible version: 2.7.0
Python version: 2.7.5
Fortinet: 60E
FortiOS version: 6.0.2
Here is what I am trying:
FortiOS.yml playbook:
---
- name: FortiOS Firewall Settings
hosts: fortiFW
connection: local
vars_files:
- /etc/ansible/vars/FortiOS_Settings_vars.yml
tasks:
- name: Backup current config
fortios_config:
host: 192.168.1.99
username: admin
password: Password#123
backup: yes
backup_path: /etc/ansible/forti_backup
Here is the error I get:
ok: [192.168.1.99]
META: ran handlers
Read vars_file '/etc/ansible/vars/FortiOS_Settings_vars.yml'
TASK [Backup current config] ****************************************************************************************************************************************
task path: /etc/ansible/FortiOS_Settings_test.yml:8
<192.168.1.99> ESTABLISH LOCAL CONNECTION FOR USER: root
<192.168.1.99> EXEC /bin/sh -c 'echo ~root && sleep 0'
<192.168.1.99> EXEC /bin/sh -c '( umask 77 && mkdir -p "echo /root/.ansible/tmp/ansible-tmp-1539674386.05-16470854685226" && echo ansible-tmp-1539674386.05-16470854685226="echo /root/.ansible/tmp/ansible-tmp-1539674386.05-16470854685226" ) && sleep 0'
Using module file /usr/lib/python2.7/site-packages/ansible/modules/network/fortios/fortios_config.py
<192.168.1.99> PUT /root/.ansible/tmp/ansible-local-6154Uq5Dmw/tmpt6JukB TO /root/.ansible/tmp/ansible-tmp-1539674386.05-16470854685226/AnsiballZ_fortios_config.py
<192.168.1.99> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1539674386.05-16470854685226/ /root/.ansible/tmp/ansible-tmp-1539674386.05-16470854685226/AnsiballZ_fortios_config.py && sleep 0'
<192.168.1.99> EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1539674386.05-16470854685226/AnsiballZ_fortios_config.py && sleep 0'
<192.168.1.99> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1539674386.05-16470854685226/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
WARNING: The below traceback may not be related to the actual failure.
  File "/tmp/ansible_fortios_config_payload_b6IQmy/main.py", line 132, in main
    f.load_config(path=module.params['filter'])
  File "/usr/lib/python2.7/site-packages/pyFG/fortios.py", line 212, in load_config
    config_text = self.execute_command(command)
  File "/usr/lib/python2.7/site-packages/pyFG/fortios.py", line 154, in execute_command
    output = output + self._read_wrapper(o)
  File "/usr/lib/python2.7/site-packages/pyFG/fortios.py", line 120, in _read_wrapper
    return py23_compat.text_type(data)
fatal: [192.168.1.99]: FAILED! => {
    "changed": false,
    "invocation": {
        "module_args": {
            "backup": true,
            "backup_filename": null,
            "backup_path": "/etc/ansible/forti_backup",
            "config_file": null,
            "file_mode": false,
            "filter": "",
            "host": "192.168.1.99",
            "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "src": null,
            "timeout": 60,
            "username": "admin",
            "vdom": null
        }
    },
    "msg": "Error reading running config"
}
When working with this module, I had the same issue. I looked into the source code of the module and found that this error occurs when filter is set to "" (an empty string). You can get facts about the device when changing filter to something like "firewall address". But then you will only get back the options from exactly that, as if you had typed "show firewall address" on the CLI of the device.
I'm currently working on a solution to use Ansible for FortiGate automation, but it's not looking good. E.g. FortiGates additionally do not support Netconf, so you can't use Netconf to send commands to the device.
So you're not doing anything wrong; the module is either not optimized, or (my guess) the full configuration is too big to be read by the module, so you have to use the filter option to shrink it.
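For reference, a sketch of the same backup task with a filter set, as described above; the parameter names are taken from the module_args shown in the debug output, and "firewall address" is just an example section:
- name: Backup firewall address section only
  fortios_config:
    host: 192.168.1.99
    username: admin
    password: Password#123
    filter: "firewall address"
    backup: yes
    backup_path: /etc/ansible/forti_backup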

Empty string in Ansible responses

I am developing a RouterOS network module for Ansible 2.5.
The RouterOS shell can print a few messages that should be detected in the on_open_shell() event and either skipped or dismissed automatically. These are "Do you want to see the software license? [Y/n]:" and a few others, all of which are well documented in the MikroTik Wiki.
Here is how I'm doing this:
def on_open_shell(self):
    try:
        prompt = self._get_prompt()
        if not prompt.strip().endswith(b'>'):
            self._exec_cli_command(b' ')
    except AnsibleConnectionFailure:
        raise AnsibleConnectionFailure('unable to bypass license prompt')
It indeed does bypass the license prompt. However, it seems that the \n response from the RouterOS device counts as a reply to the actual commands that follow. So, if I have two tasks in my playbook like this:
---
- hosts: routeros
  gather_facts: no
  connection: network_cli
  tasks:
    - routeros_command:
        commands:
          - /system resource print
          - /system routerboard print
      register: result
    - name: Print result
      debug: var=result.stdout_lines
This is the output I get:
ok: [example] => {
    "result.stdout_lines": [
        [
            ""
        ],
        [
            "uptime: 12h33m29s",
            " version: 6.42.1 (stable)",
            " build-time: Apr/23/2018 10:46:55",
            " free-memory: 231.0MiB",
            " total-memory: 249.5MiB",
            " cpu: Intel(R)",
            " cpu-count: 1",
            " cpu-frequency: 2700MHz",
            " cpu-load: 2%",
            " free-hdd-space: 943.8MiB",
            " total-hdd-space: 984.3MiB",
            " write-sect-since-reboot: 7048",
            " write-sect-total: 7048",
            " architecture-name: x86",
            " board-name: x86",
            " platform: MikroTik"
        ]
    ]
}
As you can see, the output seems to be offset by 1. What should I do to correct this?
It turns out that the problem was in the regular expression that defined the shell prompt. I had it defined like this:
terminal_stdout_re = [
    re.compile(br"\[\w+\@[\w\-\.]+\] ?>"),
    # other cases
]
It did not match the end of the prompt, which caused Ansible to think that there was a newline before the actual command output. Here is the correct regexp:
terminal_stdout_re = [
    re.compile(br"\[\w+\@[\w\-\.]+\] ?> ?$"),
    # other cases
]
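To see what the trailing anchor buys you, you can compare both patterns against a captured prompt in a plain Python shell; the prompt bytes below are a made-up example of a typical RouterOS prompt:
import re

prompt = b"[admin@MikroTik] > "  # note the trailing space RouterOS prints after '>'
old = re.compile(br"\[\w+\@[\w\-\.]+\] ?>")
new = re.compile(br"\[\w+\@[\w\-\.]+\] ?> ?$")
print(old.search(prompt).end(), new.search(prompt).end(), len(prompt))
# -> 18 19 19: only the anchored pattern consumes the prompt right up to the end
# of the buffer, so nothing is left over to be mistaken for command output.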
