I am running Packer with the Ansible provisioner from a Bitbucket pipeline, but Ansible does not become root even though become: true is set. Packer is used to create an Amazon Linux AMI, and the Ansible provisioner runs some server-hardening scripts and configuration.
Output from a simple id command:
When run from the pipeline:
TASK [aws-basic : debug] *****************************************
ok: [default] => {
    "command_output.stdout_lines": [
        "uid=1000(ec2-user) gid=1000(ec2-user) groups=1000(ec2-user),4(adm),10(wheel),190(systemd-journal)"
    ]
}
When running locally:
TASK [aws-basic : debug] *****************************************
ok: [default] => {
    "command_output.stdout_lines": [
        "uid=0(root) gid=0(root) groups=0(root)"
    ]
}
Following is my Ansible playbook with the two roles:
- name: AWS EC2 AMLinux Configuration playbook
  hosts: default
  remote_user: ec2-user
  connection: ssh
  become: true
  vars:
    _date: "{{ ansible_date_time.iso8601 }}"
    reop_path: /usr/tmp/
  roles:
    - role: role-1
    - role: role-2
Packer Ansible provisioner config:
provisioner "ansible" {
playbook_file = "../ansible/aws-ec2-base.yml"
extra_arguments = ["--extra-vars", "api_key=${var.api_key}"]
galaxy_file = "../ansible/requirements.yml"
ansible_ssh_extra_args = ["-oHostKeyAlgorithms=+ssh-rsa -oPubkeyAcceptedKeyTypes=+ssh-rsa"]
}
Even putting become_user: root in the playbook does not work.
Is there any reason this only happens in the Bitbucket pipeline? I am using an Ubuntu Docker image with Ansible and Packer installed.
My gut feeling is that some configuration on each system triggers the different behaviour. I'd run
ansible-config dump --only-changed
on both your local workstation and the CI system and compare the output for any difference that might be causing this.
This issue was caused by the use of an older version of the Packer Ansible plugin.
You can also resolve the issue by using a Bitbucket runner.
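For reference, here is a minimal sketch of pinning the Ansible plugin to a newer release in the Packer template; the exact version constraint is an assumption, so adjust it to whichever release fixes the escalation behaviour for you:
packer {
  required_plugins {
    ansible = {
      # assumed constraint - pick the release that actually contains the fix
      version = ">= 1.1.0"
      source  = "github.com/hashicorp/ansible"
    }
  }
}
Running packer init against the template (locally and in the pipeline) then downloads the pinned plugin version.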
Related
I am using Packer for automated image creation on OpenStack, and I am using Ansible for provisioning.
My Packer machine is different from my Ansible machine; in this case, how can I use the provisioner?
"provisioners": [
{
"type": "ansible",
"playbook_file": "./playbook.yml",
"extra_arguments": "-vvvv"
}
],
playbook.yml is on a different machine. How can I point Packer to my Ansible machine's IP address and to the location of the YAML file?
Short answer: you can't. The Ansible control machine needs to be the same as the Packer host.
Sure, thanks for the reply. However, when I do that, the Ansible playbook throws an error saying "it must be run as root". Where do I need to specify that? I am using the CentOS cloud image, where the username is centos, and my YAML file is as follows:
- hosts: all
  gather_facts: true
  user: centos
  become: yes
  become_method: sudo
In my builder I have mentioned user: centos.
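Not an authoritative fix, but it is worth checking that Packer really connects as centos and that the play escalates. A minimal sketch of the provisioner block, where the user field and the extra --become flag are my own assumptions to verify against:
"provisioners": [
  {
    "type": "ansible",
    "playbook_file": "./playbook.yml",
    "user": "centos",
    "extra_arguments": ["-vvvv", "--become"]
  }
],
With -vvvv the log also shows which remote user Ansible actually uses and whether privilege escalation is attempted.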
I am new to Ansible, and I am trying to use Ansible on some LXC containers.
My problem is that I don't want to install SSH on my containers.
What I tried:
I tried to use this connection plugin, but it seems that it does not work with Ansible 2.
After understanding that the chifflier connection plugin doesn't work, I tried to use the connection plugin from OpenStack.
After some failed attempts I dived into the code, and I understood that the plugin doesn't have the information that the host I am talking to is a container (because the code never reached this point).
My current setup:
{Ansible host}---|ssh|---{vm}---|ansible connection plugin|---{container1}
My ansible.cfg:
[defaults]
connection_plugins = /home/jkarr/ansible-test/connection_plugins/ssh
inventory = inventory
My inventory:
[hosts]
vm ansible_host=192.168.28.12
[containers]
mailserver physical_host=vm container_name=mailserver
My group vars:
ansible_host: "{{ physical_hostname }}"
ansible_ssh_extra_args: "{{ container_name }}"
ansible_user: containeruser
container_name: "{{ inventory_hostname }}"
physical_hostname: "{{ hostvars[physical_host]['ansible_host'] }}"
My testing playbook:
- name: Test Playbook
  hosts: containers
  gather_facts: true
  tasks:
    - name: testfile
      copy:
        content: "Test"
        dest: /tmp/test
The output is:
fatal: [mailserver]: UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname mailserver: No address associated with hostname\r\n",
    "unreachable": true
}
Ansible version is: 2.3.1.0
So what am I doing wrong? Any tips?
Thanks in advance!
Update 1:
Based on Eric's answer I am now using this connection plugin.
I updated my inventory and it now looks like:
[hosts]
vm ansible_host=192.168.28.12
[containers]
mailserver physical_host=vm ansible_connection=lxc
After running my playbook I got:
<192.168.28.12> THIS IS A LOCAL LXC DIR
fatal: [mailserver]: FAILED! => {
    "failed": true,
    "msg": "192.168.28.12 is not running"
}
This is weird, because 192.168.28.12 is the VM and the container is called mailserver. I also verified that the container is running.
Also, why does it say that 192.168.28.12 is a local LXC dir?
Update 2:
I removed my group_vars, my ansible.cfg, and the connection plugin from the playbook, and I got this error:
<mailserver> THIS IS A LOCAL LXC DIR
fatal: [mailserver]: FAILED! => {
    "failed": true,
    "msg": "mailserver is not running"
}
You should take a look at this lxc connection plugin. It might fit your needs.
Edit: the lxc connection plugin is actually part of Ansible.
Just add ansible_connection=lxc in your inventory or group vars.
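For instance, when the control node runs directly on the LXC host, an inventory entry along these lines should be enough (it assumes the inventory hostname matches the LXC container name):
[containers]
mailserver ansible_connection=lxc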
I'm trying something similar.
I want to configure a host over SSH using Ansible and run LXC containers on that host, which are also configured using Ansible:
ansible control node --ssh--> host-a --lxc-attach--> container-a
The issue with the lxc connection plugin is that it only works for local LXC containers. There is no way to get it working through SSH.
At the moment the only options seem to be a direct SSH connection or an SSH connection through the first host:
ansible control node --ssh--> container-a

or

ansible control node --ssh--> host-a --ssh--> container-a
Both require sshd to be installed in the container, but the second way doesn't need port forwarding or multiple IP addresses; see the sketch below.
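A minimal sketch of the second approach (the hostnames and the jump user are placeholders, and it assumes sshd is reachable inside container-a and that your OpenSSH supports ProxyJump):
# group_vars/containers.yml (hypothetical file)
ansible_user: root                                     # user inside the container
ansible_ssh_common_args: '-o ProxyJump=admin@host-a'   # hop through the host first
On older OpenSSH versions without ProxyJump, the same effect can be achieved with a ProxyCommand that uses ssh -W.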
Did you get a working solution?
I have etcd running on the Ansible control machine (locally). I can get and put values as shown below, but Ansible won't fetch the values. Any thoughts?
I can also get the value using curl.
I have this simple playbook:
#!/usr/bin/env ansible-playbook
---
- name: simple ansible playbook ping
  hosts: all
  gather_facts: false

  tasks:
    - name: look up value in etcd
      debug: msg="{{ lookup('etcd', 'weather') }}"
Running this playbook doesn't fetch values from etcd:
TASK: [look up value in etcd] *************************************************
ok: [app1.test.com] => {
    "msg": ""
}
ok: [app2.test.com] => {
    "msg": ""
}
Currently (31.05.2016) the Ansible etcd lookup plugin only supports calls to the v1 API and is not compatible with newer etcd instances that publish the v2 API endpoint.
Here is the issue.
You can use my quickly patched etcd2.py lookup plugin.
Place it into a lookup_plugins subdirectory next to your playbook (or into Ansible's global lookup_plugins path).
Use lookup('etcd2', 'weather') in your playbook.
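For completeness, a minimal sketch of how that might look:
# lookup_plugins/etcd2.py sits next to the playbook
- name: look up value in etcd (v2 API)
  debug: msg="{{ lookup('etcd2', 'weather') }}"
If the patched plugin keeps the stock plugin's behaviour, the endpoint can be set through the ANSIBLE_ETCD_URL environment variable (for example http://127.0.0.1:2379); that URL is an assumption, point it at your own etcd instance.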
Background
I am experimenting with Ansible (1.9.4) roles and I am trying to get the hang of role dependencies.
I have created the following roles:
A role that installs the Oracle JDK (ansible-java8)
A role that installs Tomcat (ansible-tomcat7)
The second role defines the first as a dependency in /ansible-tomcat7/meta/main.yml:
dependencies:
  - { role: java8 }
I also included a requirements.yml file with the following:
- name: java8
  src: 'https://github.com/gregwhitaker/ansible-java8'
I have added the following configuration to my /etc/ansible/ansible.cfg to configure my roles_path to a place in my home directory:
roles_path = ~/ansible/roles
I then installed the ansible-java8 role as java8 using the following command:
ansible-galaxy install -r requirements.yml
Once the command had run, I could see the java8 role in the ~/ansible/roles directory.
However, when I run a playbook that calls the tomcat7 role, only that role is executed. The java8 role is not executed before the tomcat7 role.
PLAY [all] ********************************************************************
GATHERING FACTS ***************************************************************
ok: [default]
TASK: [Install Tomcat7 (Ubuntu)] **********************************************
changed: [default] => (item=tomcat7,libtcnative-1,libapr1)
TASK: [Install Tomcat7 (Debian)] **********************************************
skipping: [default]
TASK: [Install Tomcat7 (Amazon Linux)] ****************************************
skipping: [default]
PLAY RECAP ********************************************************************
default
Questions
Is this the correct way to define dependent roles or have I totally missed something?
Am I correct in thinking that since I marked the tomcat7 role as depending on java8 that the java8 role should have been located from the roles_path and ran first?
What mistake am I making that is causing the java8 role to not run before the tomcat7 role?
This turned out to be a problem with how I was testing the role.
I was telling Vagrant to provision my test box using the following site.yml file:
- hosts: all
  sudo: yes
  tasks:
    - include: tasks/main.yml
This was causing Ansible to run only the Tomcat tasks and not take into account that this was actually a role rather than just a playbook with some tasks in it.
The site.yml playbook I am using for testing is at the root of the repository, so once I changed it to reference the repository as a role, everything started working.
- hosts: all
  sudo: yes
  roles:
    - { role: '../ansible-tomcat7' }
When trying to run Ansible on Cloud9,
some of my tasks have:
sudo_user: emr-user
HOSTS file:
[development]
localhost ansible_connection=local ansible_ssh_user=ubuntu
Running with:
ansible-playbook -i hosts site.yml --limit=development
It keeps failing on this task with:
failed: [localhost] => {"failed": true, "parsed": false}
[sudo via ansible, key=zacflhyhixxhiajrlmtitjxgpxqimnmn] password:
I believe it is related to the fact that Cloud9 runs on a password-less Ubuntu root account.
I was able to bypass it using sudo su and then running:
ansible-playbook -i hosts site.yml --limit=development
but it doesn't feel right. Any other ideas?
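Not a definitive answer, but with ansible_connection=local the tasks run as whoever invokes ansible-playbook, so sudo_user has to escalate from that account. A sketch of what the task could look like with the newer become syntax (Ansible 1.9+; the task name and command are placeholders):
- name: run something as emr-user   # placeholder task
  command: whoami
  become: true
  become_user: emr-user
If the workspace user needs a password for sudo, passing --ask-become-pass (or --ask-sudo-pass on older Ansible versions) to ansible-playbook avoids wrapping the whole run in sudo su.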