After running the Ansible YAML file below, the output shows the file is created and the content is changed.
The YAML File
---
- hosts: all
  gather_facts: yes
  connection: local
  tasks:
    - name: Check the date on the server.
      action: command touch /opt/b
    - name: cat the Content
      action: command cat /opt/b
Running the Playbook
root@my-ubuntu:/var/lib/awx/projects/test# ansible-playbook main.yml
PLAY [all] *********************************************************************
TASK [setup] *******************************************************************
ok: [ansible-ubuntu-1604-db]
TASK [Check the date on the server.] *******************************************
changed: [ansible-ubuntu-1604-db]
[WARNING]: Consider using file module with state=touch rather than running touch
TASK [cat the Content] *********************************************************
changed: [ansible-ubuntu-1604-db]
PLAY RECAP *********************************************************************
ansible-ubuntu-1604-db : ok=3 changed=2 unreachable=0 failed=0
The output shows changed=2, yet the tasks did not create any file:
ubuntu@ansible-ubuntu-1604-db:~$ ls -l /opt/
total 0
The Environment
Ansible controller: on my local Mac desktop
Target node: in the cloud
With connection: local in your playbook, you tell Ansible to execute all tasks on your local Ansible controller, so the file is created on your local machine rather than on the target node.
Remove connection: local and try again.
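A minimal sketch of the corrected playbook (same tasks, with connection: local removed and the touch replaced by the file module, as the warning in your output suggests):
---
- hosts: all
  gather_facts: yes
  tasks:
    - name: Create /opt/b on the target node
      file:
        path: /opt/b
        state: touch
    - name: cat the content
      command: cat /opt/b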
Related
I am executing some Ansible playbooks via GitLab CI, and what I can see is:
the Ansible playbook executes successfully through the pipeline, but it doesn't produce the output it is intended to produce;
when I retry the GitLab job, it produces the output I need.
This is one of the many playbooks I am executing through GitLab:
1_ca.yaml
---
- hosts: 127.0.0.1
  connection: local
  tasks:
    - name: Create ca-csr.json
      become: true
      copy:
        dest: ca-csr.json
        content: '{"CN":"Kubernetes","key":{"algo":"rsa","size":2048},"names":[{"C":"US","L":"Portland","O":"Kubernetes","OU":"CA","ST":"Oregon"}]}'
    - name: Create ca-config.json
      become: true
      copy:
        dest: ca-config.json
        content: '{"signing":{"default":{"expiry":"8760h"},"profiles":{"kubernetes":{"usages":["signing","key encipherment","server auth","client auth"],"expiry":"8760h"}}}}'
    - name: Create the ca.pem & ca-key.pem
      # become: true
      shell: |
        cfssl gencert -initca ca-csr.json | cfssljson -bare ca
Basically, what this does is create some certs that I need.
But on the first attempt, even though the pipeline passes, it doesn't generate these certs. When I restart that particular job in GitLab (running the same job a second time), it generates the certs.
Why is this happening?
This is what my .gitlab-ci.yaml looks like:
Create-Certificates:
  stage: ansible-play-books-create-certs
  retry:
    max: 2
    when:
      - always
  script:
    - echo "Executing ansible playbooks for generating certficates"
    - ansible-playbook ./ansible-playbooks/1_ca/1_ca.yaml
    - ansible-playbook ./ansible-playbooks/1_ca/2_admin.yaml
    - ansible-playbook ./ansible-playbooks/1_ca/3_kubelet.yaml
    - ansible-playbook ./ansible-playbooks/1_ca/4_kube-controller.yaml
    - ansible-playbook ./ansible-playbooks/1_ca/5_kube-proxy.yaml
    - ansible-playbook ./ansible-playbooks/1_ca/6_kube-scheduler.yaml
    - ansible-playbook ./ansible-playbooks/1_ca/7_kube-api-server.yaml
    - ansible-playbook ./ansible-playbooks/1_ca/8_service-account.yaml
    - ansible-playbook ./ansible-playbooks/1_ca/9_distribute-client-server-cert.yaml
  # when: delayed
  # start_in: 1 minutes
  tags:
    - banuka-gcp-k8s-hard-way
PS: These Ansible playbooks execute on the Ansible host itself, not on remote servers, so I can log into the Ansible master server and check whether these files were created or not.
Running your playbook without the last shell task produces the following output:
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [127.0.0.1] **************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ********************************************************************************************************************************************************************************************
ok: [127.0.0.1]
TASK [Create ca-csr.json] *****************************************************************************************************************************************************************************************
[WARNING]: File './ca-csr.json' created with default permissions '600'. The previous default was '666'. Specify 'mode' to avoid this warning.
changed: [127.0.0.1]
TASK [Create ca-config.json] **************************************************************************************************************************************************************************************
[WARNING]: File './ca-config.json' created with default permissions '600'. The previous default was '666'. Specify 'mode' to avoid this warning.
changed: [127.0.0.1]
PLAY RECAP ********************************************************************************************************************************************************************************************************
127.0.0.1 : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
and checking that the files exist:
$ ls ca* -al
-rw------- 1 root root 155 Aug 17 02:48 ca-config.json
-rw------- 1 root root 129 Aug 17 02:48 ca-csr.json
So although it's quite a dirty way of writing a playbook, it works.
Why is it dirty?
you're not using any inventory
you should use local_action rather than connection: local for local tasks (see the sketch after this list)
you are misusing Ansible, which is multi-node configuration management, to do the job of a bash script
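As a minimal sketch (the JSON content is abbreviated here; the full string is in the playbook above), a single task can be pinned to the controller with local_action instead of setting connection: local for the whole play:
- hosts: all
  tasks:
    - name: Create ca-csr.json on the controller only
      local_action:
        module: copy
        dest: ca-csr.json
        content: '{"CN":"Kubernetes"}'   # abbreviated for the example
        mode: '0644'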
So, in conclusion, there's nothing wrong with your Ansible playbook (except maybe the file permissions?), and if it does not run, you should look more in the GitLab CI direction.
You would need to provide more details on the GitLab CI setup, but maybe the stage is not correct?
I want to list the Java processes running on the mentioned hosts; however, I am not getting the desired output.
I have created a playbook that runs the shell command:
- name: 'Check the running java processes'
  hosts: all
  tasks:
    - name: 'check running processes'
      shell: ps -ef | grep -i java
Output:
PLAY [Check the running java processes] ****************************************
TASK [setup] *******************************************************************
ok: [target1]
ok: [target2]
TASK [check running processes] *************************************************
changed: [target1]
changed: [target2]
PLAY RECAP *********************************************************************
target1 : ok=2 changed=1 unreachable=0 failed=0
target2 : ok=2 changed=1 unreachable=0 failed=0
However, it's not listing the processes.
You could see the result with your current playbook by running ansible-playbook in verbose mode (-v).
However, the proper way to do this is to register the output of your task and display its content in a debug task (or use it in any other task/loop).
To see what your registered var looks like, you can read the docs about common return values and the shell module's specific return values... or you can simply debug the full var in your playbook and have a look at it.
Here is an example just to get the full stdout of the command:
---
- name: Check the running java processes
  hosts: all
  tasks:
    - name: Check running processes
      shell: ps -ef | grep -i java
      register: ps_cmd

    - name: Show captured processes
      debug:
        var: ps_cmd.stdout
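If you want to reuse the captured lines in another task or loop, a minimal sketch (using stdout_lines, a standard return value of the shell module) could look like this:
    - name: Act on each captured process line
      debug:
        msg: "Found: {{ item }}"
      loop: "{{ ps_cmd.stdout_lines }}"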
I'm doing exactly what all the tutorials show, I don't have typos, and I'm even able to run the main.yml inside /roles/x on its own,
but when I run the play that should call it, nothing really happens.
The parent playbook:
---
- name: Install / Upgrade tagger
  hosts: tagger
  roles:
    - tagger
/roles/tagger/tasks/main.yml
---
- command: echo 1
I need to say I'm running everything on localhost.
I also tried:
ansible-playbook -i "localhost" -c local tagger.yml
[WARNING]: Host file not found: localhost
[WARNING]: provided hosts list is empty, only localhost is available
PLAY [build tagger docker] *****************************************************
TASK [setup] *******************************************************************
ok: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=0
Using the command line you gave:
$ ansible-playbook -i "localhost" -c local tagger.yml
ERROR: Unable to find an inventory file, specify one with -i ?
With the obvious correction (adding a comma):
$ ansible-playbook -i "localhost," -c local tagger.yml
PLAY [Install / Upgrade tagger] ***********************************************
skipping: no hosts matched
PLAY RECAP ********************************************************************
That still doesn't match your output, but it does indicate the problem: localhost is never tagger. Perhaps you are using a hosts.ini file and not telling us about it? Or a version of Ansible that is different from mine? In any case, I changed hosts: tagger to hosts: all as follows:
---
- name: Install / Upgrade tagger
  hosts: all
  roles:
    - tagger
I then reran:
$ ansible-playbook -i "localhost," -c local tagger.yml
PLAY [Install / Upgrade tagger] ***********************************************
GATHERING FACTS ***************************************************************
ok: [localhost]
TASK: [tagger | command echo 1] ***********************************************
changed: [localhost]
PLAY RECAP ********************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0
So those are the two fixes necessary: add the trailing comma to the -i "localhost," inventory list, and make sure the play's hosts pattern actually matches localhost.
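If you'd rather keep hosts: tagger, a minimal sketch of an inventory file (a hypothetical hosts.ini, since we haven't seen yours) that puts localhost into a tagger group could look like this:
[tagger]
localhost ansible_connection=local
and then run it with:
$ ansible-playbook -i hosts.ini tagger.yml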
I'm writing an Ansible script to set up a ZooKeeper cluster. After extracting the ZooKeeper tarball:
unarchive: src={{tmp_dir}}/zookeeper.tar.gz dest=/opt/zookeeper copy=no
I get a zookeeper directory whose name contains the version number:
[root@sc-rdops-vm09-dhcp-2-154 conf]# ls /opt/zookeeper
zookeeper-3.4.8
To proceed, I have to know the name of the extracted sub-directory. For example, I need to copy a config file to /opt/zookeeper/zookeeper-3.4.8/conf.
How do I get zookeeper-3.4.8 with an Ansible statement?
Try with this:
➜ ~ cat become.yml
---
- hosts: localhost
  user: vagrant
  tasks:
    - shell: ls /opt/zookeeper
      register: path

    - debug: var=path.stdout
➜ ~ ansible-playbook -i hosts-slaves become.yml
[WARNING]: provided hosts list is empty, only localhost is available
PLAY ***************************************************************************
TASK [setup] *******************************************************************
ok: [localhost]
TASK [command] *****************************************************************
changed: [localhost]
TASK [debug] *******************************************************************
ok: [localhost] => {
"path.stdout": "zookeeper-3.4.8"
}
PLAY RECAP *********************************************************************
localhost : ok=3 changed=1 unreachable=0 failed=0
Then you could use {{ path.stdout }} to build the path in the next tasks.
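For example, a minimal sketch of a follow-up task (the zoo.cfg source file is just an illustration) that copies a config into the extracted directory:
    - name: Copy the ZooKeeper config into the versioned directory
      copy:
        src: zoo.cfg
        dest: "/opt/zookeeper/{{ path.stdout }}/conf/zoo.cfg"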
Alternatively, before you proceed with the next steps, you could rename this directory:
- name: Rename directory
  shell: mv `ls -d -1 zookeeper*` zookeeper
  args:
    chdir: /opt/zookeeper   # run the rename inside the extraction directory
The above shell command substitution (inside backticks) should work.
I defined an envs.sh script inside the /etc/profile.d/ folder.
When executing an ansible-playbook, I'm trying to get the value of this env var, but it instead throws an error.
Ansible test:
debug: msg="{{ ansible_env.NGN_VAL }} is an environment variable"
Error:
fatal: [xxx.yyy.zzz.kkk] => One or more undefined variables: 'dict object' has no attribute 'NGN_VAL'
FATAL: all hosts have already failed -- aborting
Why doesn't it execute the scripts inside that folder? When I connect through SSH and echo the variable, it displays its value. How do I set remote environment variables and obtain them during Ansible execution?
Thanks
Test with this if you want to catch the environment variable:
[jenkins@scsblnx-828575 jenkins]$ cat mypass
export MYPASSWD=3455637
[jenkins@scsblnx-828575 jenkins]$ cat test.yml
- hosts: all
  user: jenkins
  tasks:
    - name: Test variables.
      shell: source /apps/opt/jenkins/mypass && echo $MYPASSWD
      register: myenvpass

    - debug: var=myenvpass.stdout
[jenkins@scsblnx-828575 jenkins]$ ansible-playbook -i hosts test.yml
PLAY ***************************************************************************
TASK [setup] *******************************************************************
ok: [127.0.0.1]
TASK [Test variables.] *********************************************************
changed: [127.0.0.1]
TASK [debug] *******************************************************************
ok: [127.0.0.1] => {
"myenvpass.stdout": "3455637"
}
PLAY RECAP *********************************************************************
127.0.0.1 : ok=3 changed=1 unreachable=0 failed=0
In my point of view, you can accomplish this using a mix of Ansible Vault and system environment variables.
Create and encrypt a file with ansible vault (this is where you put the content of your remote env variable):
$ ansible-vault create vars_environment.yml
Enter a password to encrypt this file and write the content of your variable, for example:
ngn_val: supersecret
I usually load my permanent environment variables from the /etc/profile.d directory, but there are more possible paths:
/etc/profile.d
/etc/profile
/etc/environment
~/.profile
A good way is to write a template in your playbook, although you can use the copy module instead:
---
# environment.yml
- name: Get enviroment variable server
  hosts: debian.siccamdb.sm
  vars:
    dest_profile_environment: /etc/profile.d/environment.sh
  vars_files:
    - vars_environment.yml
  tasks:
    - name: "Template {{ dest_profile_enviroment }}"
      template:
        src: environment.sh.j2
        dest: "{{ dest_profile_environment }}"
        mode: '0644'

    - name: Load env variable
      shell: ". {{ dest_profile_environment }} && echo $NGN_VAL"
      register: env_variable

    - name: Debug env variable
      debug:
        var: env_variable.stdout_lines[0]
The important thing here is the shell task, because you have to source your environment variable and then use the register keyword, which captures the output of a command in a variable.
This is the template's content, in this example the file environment.sh.j2:
export NGN_VAL={{ ngn_val }}
You will be prompted for the password of the vars_environment.yml vault file when you run the playbook:
ansible-playbook --vault-id @prompt environment.yml
The output is the following:
PLAY [Get enviroment variable server] *************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ****************************************************************************************************************************************************************************************************************
ok: [debian.siccamdb.sm]
TASK [Template {{ dest_profile_enviroment }}] *****************************************************************************************************************************************************************************************
changed: [debian.siccamdb.sm]
TASK [Load env variable] **************************************************************************************************************************************************************************************************************
changed: [debian.siccamdb.sm]
TASK [Debug env variable] *************************************************************************************************************************************************************************************************************
ok: [debian.siccamdb.sm] => {
"env_variable.stdout_lines[0]": "supersecret"
}
PLAY RECAP ****************************************************************************************************************************************************************************************************************************
debian.siccamdb.sm : ok=4 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Another way to accomplish this is to use custom facts on your remote server. This is a good practice if you want to keep the content of your variable on the remote server and use it in playbooks without needing Ansible Vault.
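As a minimal sketch of that approach (the file name ngn.fact and the section/key names are just illustrative), you drop an INI-style .fact file into /etc/ansible/facts.d on the remote host and read it back through ansible_local once facts are gathered:
# /etc/ansible/facts.d/ngn.fact on the remote server
[env]
ngn_val=supersecret

# in a playbook, after fact gathering:
- name: Show the custom fact
  debug:
    var: ansible_local.ngn.env.ngn_val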