Is it possible to execute a local script on a remote host in Ansible without copying it across and then executing it?
The script, shell and command modules all seem like they might be the answer, but I'm not sure which is best.
The script module describes itself as "Runs a local script on a remote node after transferring it" but the examples given don't suggest a copy operation - e.g. no src, dest - so maybe this is the answer?
script module FTW
tasks:
- name: Ensure docker repo is added
  script: "{{ role_path }}/files/add-docker-repo.sh"
  register: dockeraddrepo
  notify: Done dockeraddrepo
  when: ansible_local.dockeraddrepo | d(0) == 0

handlers:
- name: Done dockeraddrepo
  copy:
    content: '{{ dockeraddrepo }}'
    dest: /etc/ansible/facts.d/dockeraddrepo.fact
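If you don't need the registered output for anything else, the script module also accepts a creates argument, which gives you idempotency without the custom-fact dance. A minimal sketch, assuming the script (or a handler) leaves a guard file behind; the path here is only an illustration:

```yaml
- name: Ensure docker repo is added (skipped if the guard file exists)
  script: "{{ role_path }}/files/add-docker-repo.sh"
  args:
    # hypothetical guard file; use any path the script is known to create
    creates: /etc/ansible/facts.d/dockeraddrepo.fact
```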
The file is found if I run the following command from a shell:
ls -l /tmp/`uname -n`---file1
I am trying to run it with fetch:
- name: COPY file from remote machine to local
  fetch:
    src: /tmp/`uname -n`---mem-swap.out
    dest: /tmp
But it gives me the error:
file not found: /tmp/uname -n---mem-swap.out
Is it possible to execute a command inside "src"?
Thank you.
This is not possible. A lookup would help if you were running the playbook against the local system, but unfortunately lookups don't run on remotely managed nodes; they are always evaluated on the controller.
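A quick way to convince yourself of this: the sketch below uses a pipe lookup, which is always evaluated on the controller, so it prints the controller's hostname no matter which remote host the play targets.

```yaml
- name: lookups run on the controller, not the managed node
  debug:
    msg: "{{ lookup('pipe', 'uname -n') }}"  # the controller's hostname, not the remote's
```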
As per the other answer, you can run a task first.
But if you are collecting facts first, why not use an Ansible variable?
- name: COPY file from remote machine to local
  fetch:
    src: /tmp/{{ ansible_nodename }}---mem-swap.out
    dest: /tmp
It's possible to register the result and concatenate it into the src string. For example:

- command: "uname -n"
  register: result

- fetch:
    src: "/tmp/{{ result.stdout }}---mem-swap.out"
    dest: "/tmp"
I wrote a simple playbook to copy some configuration files to a certain machine.
I also need to copy these files to a different host for backup. Is it possible to declare a different host in the same playbook?
I need this because my "backup host" can vary, and I derive it from the hostname I'm working on.
I tried both the copy and raw modules and nothing seems to work.
Here is an example playbook:
- name: find file
  find:
    file_type: directory
    paths: /prd/viv/dat/repository/
    patterns: "{{ inventory_hostname }}"
    recurse: yes
  register: find
  delegate_to: localhost

- name: Copy MASTER
  raw: echo xmonit$(echo {{ find.files[0].path }} | cut -d "/" -f7 )
  delegate_to: localhost
  register: xmonit

- debug:
    msg: "{{ xmonit.stdout }}"

- name: Copy MASTER raw
  raw: sshpass -p "mypass" scp {{ find.files[0].path }}/master.cfg myuser@{{ xmonit.stdout }}:/prd
  delegate_to: localhost

#- name: Copy MASTER
#  copy:
#    src: "{{ find.files[0].path }}/master.cfg"
#    dest: /prd/cnf/dat/{{ inventory_hostname }}/
Edit: if I use the copy module, the destination stays on the main host, while the goal is to copy to a third host. I need to declare a different host for this single task:
- name: Copy MASTER
  copy:
    src: "{{ find.files[0].path }}/master.cfg"
    dest: /prd/cnf/dat/{{ inventory_hostname }}/
As Zeitounator told me in the comments, the copy module is the best way to do this.
This works for me:

- name: Copy MASTER
  copy:
    src: "{{ find.files[0].path }}/master.cfg"
    dest: /prd/cnf/dat/{{ inventory_hostname }}/
  delegate_to: "{{ xmonit.stdout_lines[0] }}"
There are 3 hosts in my play.
[machines]
MachineA
MachineB
MachineC
MongoDB runs on these servers, and one of them is the Mongo master database.
Any of these machines can be the master. This is determined by setting a fact when the machine is the master; in this example only MachineA is targeted:
- name: check if master
  shell: 'shell command to check if master'
  set_fact: MasterHost="machineA"
  when: 'shell command to check if master'.stdout == "true"
This is also done for MachineB and MachineC.
Mission to achieve: run commands only on the master machine, the one that has the fact "MasterHost" set.
I tried the delegate_to directive, but delegate_to still runs the task on the two other machines:

- name: some task
  copy: src=/tmp/test.txt dest=/tmp/test.txt
  delegate_to: "{{ MasterHost }}"
I want to target the master in my playbook and run commands only on it, not via the --limit option on the command line.
Assuming the command run to check whether the host is the master or not is not costly, you can go without setting a specific fact:
- name: check if master
  shell: 'shell command to check if master'
  register: master_check

- name: some task
  copy: src=/tmp/test.txt dest=/tmp/test.txt
  when: master_check.stdout == "true"
Run the play on all hosts and only the one that is the master will run some task.
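Another common pattern, sketched here as a suggestion rather than a drop-in answer, is to put the detected master into a dynamic group with add_host in a first play and then target only that group in a second play (the group name master_group is an assumption):

```yaml
# Play 1: detect the master and register it in a dynamic group
- hosts: machines
  tasks:
    - name: check if master
      shell: 'shell command to check if master'
      register: master_check

    - name: add the master to a dynamic group
      add_host:
        name: "{{ inventory_hostname }}"
        groups: master_group
      when: master_check.stdout == "true"

# Play 2: run tasks only on the master
- hosts: master_group
  tasks:
    - name: some task
      copy: src=/tmp/test.txt dest=/tmp/test.txt
```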
Eventually, this was my answer. Sorry for the first post, I'm still learning how to write a good one.
- name: Check which host is master
  shell: mongo --quiet --eval 'db.isMaster().ismaster'
  register: mongoMaster

- name: Set fact for mongoMaster
  set_fact: MongoMasterHost="{{ item }}"
  with_items: "{{ groups['HOSTS'] }}"
  when: mongoMaster.stdout == "true"

- name: Copy local backup.tgz to master /var/lib/mongodb/backup
  copy: src=/tmp/backup.tgz dest=/var/lib/backup/backup.tgz
  when: mongoMaster.stdout == "true"
Thanks for helping and pointing me toward the right direction.
I'm trying to run a Python script from an Ansible playbook. I would think this would be easy, but I can't figure it out. I've got a project structure like this:
playbook-folder
  roles
    stagecode
      files
        mypythonscript.py
      tasks
        main.yml
  release.yml
I'm trying to run mypythonscript.py within a task in main.yml (which is a role used in release.yml). Here's the task:
- name: run my script!
  command: ./roles/stagecode/files/mypythonscript.py
  args:
    chdir: /dir/to/be/run/in
  delegate_to: 127.0.0.1
  run_once: true
I've also tried ../files/mypythonscript.py. I thought the path for ansible would be relative to the playbook, but I guess not?
I also tried debugging to figure out where I am in the middle of the script, but no luck there either.
- name: figure out where we are
  stat: path=.
  delegate_to: 127.0.0.1
  run_once: true
  register: righthere

- name: print where we are
  debug: msg="{{ righthere.stat.path }}"
  delegate_to: 127.0.0.1
  run_once: true
That just prints out ".". So helpful ...
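Relative paths in command are resolved against the remote process's working directory, not the playbook. If you want a path relative to the playbook itself, one sketch (assuming the directory layout from the question) is to build the path from the playbook_dir magic variable:

```yaml
- name: run my script relative to the playbook
  command: "{{ playbook_dir }}/roles/stagecode/files/mypythonscript.py"
  args:
    chdir: /dir/to/be/run/in
  delegate_to: 127.0.0.1
  run_once: true
```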
Try the script module; it works for me.
My main.yml:

---
- name: execute install script
  script: get-pip.py

The get-pip.py file should be in the files directory of the same role.
If you want to be able to use a relative path to your script rather than an absolute path, then you might be better off using the role_path magic variable to find the path to the role and work from there.
With the structure you are using in the question the following should work:
- name: run my script!
  command: ./mypythonscript.py
  args:
    chdir: "{{ role_path }}/files"
  delegate_to: 127.0.0.1
  run_once: true
An alternative, straightforward solution:
Let's say you have already built your virtual env under ./env1 and used pip3 to install the needed Python modules.
Now write playbook task like:
- name: Run a script using an executable in a system path
  script: ./test.py
  args:
    executable: ./env1/bin/python
  register: python_result

- name: Get stdout or stderr from the output
  debug:
    var: python_result.stdout
If you want to execute an inline script without having a separate script file (for example, in a Molecule test), you can write something like this:
- name: Test database connection
  ansible.builtin.command: |
    python3 -c
    "
    import psycopg2;
    psycopg2.connect(
        host='127.0.0.1',
        dbname='db',
        user='user',
        password='password'
    );
    "
You can even insert Ansible variables in this string.
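For example, a templated variant of the task above; db_host and db_password are assumed to be defined elsewhere in your inventory or vars:

```yaml
- name: Test database connection with templated values
  ansible.builtin.command: |
    python3 -c
    "
    import psycopg2;
    psycopg2.connect(
        host='{{ db_host }}',
        dbname='db',
        user='user',
        password='{{ db_password }}'
    );
    "
```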
I am trying to execute an Ansible playbook using roles. I have some variables defined in main.yaml of the vars section. I am copying this variables file (main.yaml) from another remote machine.
My question: the playbook throws an error the first time it runs, even though it does copy the main.yaml file into my vars section. When I run it a second time, it executes fine. My understanding is that although the file is copied during the first run, it is not read because it was not present when the playbook started. Is there a way to run it successfully, without an error, on the first run?
site.yaml
---
- name: Testing the Mini project
  hosts: all
  roles:
    - test
tasks/main.yaml
---
- name: Converting Mysql to CSV file
  command: mysqldump -u root -padmin -T/tmp charan test --fields-terminated-by=,
  when: inventory_hostname == "ravi"

- name: Converting CSV file to yaml format
  shell: python /tmp/test.py > /tmp/main.yaml
  when: inventory_hostname == "ravi"

- name: Copying yaml file from remote node to vars
  shell: sshpass -p admin scp -r root@192.168.56.101:/tmp/main.yaml /etc/ansible/Test/roles/vars/main.yaml
  when: inventory_hostname == "charan"

- name: Install Application as per the table
  apt: name={{ item.Application }} state=present
  when: inventory_hostname == {{ item.Username }}
  with_items: user_app
/vars/main.yaml (this is the file copied from the remote machine)
---
user_app:
  - { Username: '"ravi"', Application: curl }
  - { Username: '"charan"', Application: git }
Take a look at the include_vars task. It may do what you need. It looks like you need to explicitly include /vars/main.yaml in a task before the apt task where you reference the variables.
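A minimal sketch of that fix, using the path from the question: load the file after it has been copied and before the apt task that references user_app runs.

```yaml
- name: Load the vars file that was copied earlier in the play
  include_vars: /etc/ansible/Test/roles/vars/main.yaml
```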