Packer with Ansible provisioner - ansible

I am using Packer for automated image creation on OpenStack, and I am using Ansible for provisioning.
My Packer machine is different from my Ansible machine. In this case, how can I use the provisioner:
"provisioners": [
{
"type": "ansible",
"playbook_file": "./playbook.yml",
"extra_arguments": "-vvvv"
}
],
playbook.yml is on a different machine. How can I point Packer to my Ansible machine's IP address and the location of the YAML file?

Short answer: you can't. The Ansible control machine needs to be the same as the Packer host.

Sure, thanks for the reply. However, when I do that, the Ansible playbook throws an error that it "must be run as root". Where do I need to specify that? I am using the CentOS cloud image, where the username is centos, and my YAML file is as follows:
- hosts: all
  gather_facts: true
  user: centos
  become: yes
  become_method: sudo
In my builder I have mentioned user: centos.
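For reference, this is roughly how the pieces usually fit together when the SSH user is centos and the tasks need root; a minimal sketch, assuming the cloud image grants centos passwordless sudo (the stock CentOS cloud images do) and that the builder's ssh_username is set to centos. The yum task is only illustrative:
- hosts: all
  gather_facts: true
  remote_user: centos
  become: yes
  become_method: sudo
  tasks:
    - name: example task that needs root (illustrative only)
      yum:
        name: httpd
        state: present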

Related

Ansible not becoming root when run from Bitbucket Pipeline

I am running Packer + the Ansible provisioner from a Bitbucket pipeline, but Ansible is not becoming root even though become: true is given. Packer is used to create an Amazon Linux AMI, and the Ansible provisioner is used to run some server-hardening scripts and configuration.
output from simple id command:
When run from Pipeline
TASK [aws-basic : debug] *****************************************
ok: [default] => {
"command_output.stdout_lines": [
"uid=1000(ec2-user) gid=1000(ec2-user) groups=1000(ec2-user),4(adm),10(wheel),190(systemd-journal)"
]
}
When running locally
TASK [aws-basic : debug] *****************************************
ok: [default] => {
"command_output.stdout_lines": [
"uid=0(root) gid=0(root) groups=0(root)"
]
}
Following is my Ansible playbook with two roles:
- name: AWS EC2 AMLinux Configuration playbook
  hosts: default
  remote_user: ec2-user
  connection: ssh
  become: true
  vars:
    _date: "{{ansible_date_time.iso8601}}"
    reop_path: /usr/tmp/
  roles:
    - role: role-1
    - role: role-2
Packer Ansible provisioner config:
provisioner "ansible" {
  playbook_file          = "../ansible/aws-ec2-base.yml"
  extra_arguments        = ["--extra-vars", "api_key=${var.api_key}"]
  galaxy_file            = "../ansible/requirements.yml"
  ansible_ssh_extra_args = ["-oHostKeyAlgorithms=+ssh-rsa -oPubkeyAcceptedKeyTypes=+ssh-rsa"]
}
Even putting become_user: root in the playbook is not working.
Any reason this only happens in the Bitbucket pipeline? I am using an Ubuntu Docker image with Ansible and Packer installed.
My gut says there is some configuration on each system that triggers the different behaviour. I'd run
ansible-config dump --only-changed
on both your local workstation and the CI system and look for any difference that might be causing this.
This issue was caused by the use of an older version of the Packer Ansible plugin.
It can also be resolved by using a Bitbucket runner.
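If an outdated plugin turns out to be the cause, the Ansible plugin version can be pinned in the Packer template so the pipeline and the local machine install the same one. A sketch using Packer's required_plugins block (requires Packer 1.7+; the version constraint here is only an example):
packer {
  required_plugins {
    ansible = {
      version = ">= 1.0.0"
      source  = "github.com/hashicorp/ansible"
    }
  }
}
Running packer init then installs the pinned plugin version in both environments.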

SSH-less LXC containers using Ansible

I am new to Ansible, and I am trying to use Ansible on some LXC containers.
My problem is that I don't want to install SSH in my containers.
What I tried:
I tried to use this connection plugin, but it seems that it does not work with Ansible 2.
After understanding that the chifflier connection plugin doesn't work, I tried to use the connection plugin from OpenStack.
After some failed attempts I dived into the code, and I understand that the plugin doesn't have the information that the host I am talking to is a container (the code never reaches this point).
My current setup:
{Ansible host}---|ssh|---{vm}--|ansible connection plugin|---{container1}
My ansible.cfg:
[defaults]
connection_plugins = /home/jkarr/ansible-test/connection_plugins/ssh
inventory = inventory
My inventory:
[hosts]
vm ansible_host=192.168.28.12
[containers]
mailserver physical_host=vm container_name=mailserver
my group vars:
ansible_host: "{{ physical_hostname }}"
ansible_ssh_extra_args: "{{ container_name }}"
ansible_user: containeruser
container_name: "{{ inventory_hostname }}"
physical_hostname: "{{ hostvars[physical_host]['ansible_host'] }}"
My testing playbook:
- name: Test Playbook
  hosts: containers
  gather_facts: true
  tasks:
    - name: testfile
      copy:
        content: "Test"
        dest: /tmp/test
The output is:
fatal: [mailserver]: UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname mailserver: No address associated with hostname\r\n",
"unreachable": true
}
Ansible version is: 2.3.1.0
So what am I doing wrong? Any tips?
Thanks in advance!
Update 1:
Based on Eric's answer I am now using this connection plugin.
I updated my inventory and it now looks like:
[hosts]
vm ansible_host=192.168.28.12
[containers]
mailserver physical_host=vm ansible_connection=lxc
After running my playbook I got:
<192.168.28.12> THIS IS A LOCAL LXC DIR
fatal: [mailserver]: FAILED! => {
"failed": true,
"msg": "192.168.28.12 is not running"
}
Which is weird, because 192.168.28.12 is the VM and the container is called mailserver. I also verified that the container is running.
Also, why does it say that 192.168.28.12 is a local LXC dir?
Update 2:
I removed my group_vars, my ansible.cfg and the connection plugin from the playbook, and I got this error:
<mailserver> THIS IS A LOCAL LXC DIR
fatal: [mailserver]: FAILED! => {
"failed": true,
"msg": "mailserver is not running"
}
You should take a look at this lxc connection plugin. It might fit your needs.
Edit: the lxc connection plugin is actually part of Ansible.
Just add ansible_connection=lxc in your inventory or group vars.
I'm trying something similar.
I want to configure a host over ssh using ansible and run lxc containers on the host, which are also configured using ansible:
ansible control node --ssh--> host-a --lxc-attach--> container-a
The issue with the lxc connection plugin is that it only works for local LXC containers. There is no way to get it working through SSH.
At the moment the only way seems to be a direct SSH connection or an SSH connection through the first host:
ansible control node --ssh--> container-a
or
ansible control node --ssh--> host-a --ssh--> container-a
Both require sshd to be installed in the container, but the second way doesn't need port forwarding or multiple IP addresses.
Did you get a working solution?
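For the second option (SSH to the container through host-a), one common approach is to set ansible_ssh_common_args with an SSH ProxyJump so no port forwarding is needed. A minimal group_vars sketch, assuming sshd runs in the containers, a reasonably recent OpenSSH (7.3+ for ProxyJump; otherwise use the ProxyCommand form), and a placeholder jump user on the VM:
# group_vars/containers.yml (file name and jumpuser are illustrative)
ansible_user: containeruser
ansible_ssh_common_args: '-o ProxyJump=jumpuser@192.168.28.12'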

Ansible differentiate local and remote connection

We have lots of playbooks that we use to set up our remote instances. We would like to use those playbooks when bringing up our local environment for test purposes as well.
Is it possible to differentiate between a playbook running locally and remotely?
I'm looking for something like:
- name: install apache2
  apt: name=apache2 update_cache=yes state=latest
  when: ansible.connection_type == 'local'
Which means I only want to install apache when running ansible against my local environment.
I will then execute it with:
ansible-playbook -i /root/ansible-config/ec2.py -c local myplaybook.yml
Is it possible?
There is an ansible_connection variable for every host.
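For example, with the connection type defined per host in the inventory or group_vars (it is most reliable there, rather than only via -c on the command line), the task can be guarded like this; a minimal sketch:
- name: install apache2
  apt:
    name: apache2
    update_cache: yes
    state: latest
  when: ansible_connection == 'local'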

set ansible-playbook user variable dynamically based on the ec2 distros

I'm creating an Ansible playbook that goes through a group of AWS EC2 hosts and installs some basic packages. Before the playbook can execute any tasks, it needs to log in to each host (two types of distros: Amazon Linux or Ubuntu) with the correct user: {{ userXXX }}. This is the part I'm not too sure about: how to pass in the correct login user, which would be either ec2-user or ubuntu.
- name: setup package agent
  hosts: ec2_distros_hosts
  user: "{{ ansible_user_id }}"
  roles:
    - role: package_agent_install
I was assuming ansible_user_id would work, based on the reserved variable from Ansible, but that is not the case here. I don't want to create two separate playbooks for different distros; is there an elegant solution to dynamically look up the login user and use it as the user:?
Here is the failed command with the unknown user: ansible-playbook -i inventory/ec2.py agent.yml
You have several ways to accomplish your task:
1. Create an ansible user with the same name on every host
If you have one, you can use user: ansible_user in your playbook.
2. Tag every host with a suitable login_name
You can create a tag (e.g. login_name) for every EC2 host and specify the user in it. For Ubuntu hosts – ubuntu, for AWS Linux hosts – ec2-user.
After doing so, you can use user: "{{ec2_tag_login_name}}" in your playbook – this will take the username from the login_name tag of the host.
3. Patch the ec2.py script for your needs
It seems there is no decent way to get the exact platform name from the AMI, but you can use something like this:
image_name = getattr(conn.get_image(image_id=getattr(instance, 'image_id')), 'name')
login_name = 'user'
if 'ubuntu' in image_name:
    login_name = 'ubuntu'
elif 'amzn' in image_name:
    login_name = 'ec2-user'
setattr(instance, 'image_name', image_name)
setattr(instance, 'login_name', login_name)
Paste this code just before self.add_instance(instance, region) in ec2.py, with the same indentation. It fetches the image name and does some guesswork to define login_name. Then you can use user: "{{ec2_login_name}}" in your playbook, as shown in the sketch below.
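A sketch of the play from the question reusing that variable (ec2.py exposes instance attributes as hostvars with an ec2_ prefix, so login_name becomes ec2_login_name):
- name: setup package agent
  hosts: ec2_distros_hosts
  user: "{{ ec2_login_name }}"
  roles:
    - role: package_agent_install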
You can set variables based on EC2 instance tags. If you tag instances with the distro name then you can set Ansible's ssh username for each distro via group_vars files.
Example group_vars file for Ubuntu relative to your playbook: group_vars/tag_Distro_Ubuntu.yml
---
ansible_user: ubuntu
Any instances tagged Distro: Ubuntu will connect with the ubuntu user. Create a separate group_vars file per distro tag to accommodate other distros, for example:
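A second file for instances tagged with an Amazon Linux distro value might look like this (the tag value is only an illustration; the file name must match whatever group name ec2.py generates from your tag):
# group_vars/tag_Distro_AmazonLinux.yml (hypothetical tag value)
---
ansible_user: ec2-user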

Does Vagrant create the /etc/sudoers.d/vagrant file, or does it need to be added manually to use ansible sudo_user?

Some Ubuntu cloud images, such as this one:
http://cloud-images.ubuntu.com/vagrant/precise/20140120/precise-server-cloudimg-amd64-vagrant-disk1.box
have the file /etc/sudoers.d/vagrant with the content vagrant ALL=(ALL) NOPASSWD:ALL.
Other boxes, such as https://opscode-vm-bento.s3.amazonaws.com/vagrant/virtualbox/opscode_ubuntu-13.10_chef-provisionerless.box,
don't have it, and as a result Ansible commands with sudo_user don't work.
I can add the file with :
- name: ensure Vagrant is a sudoer
  copy: content="vagrant ALL=(ALL) NOPASSWD:ALL" dest=/etc/sudoers.d/vagrant owner=root group=root
  sudo: yes
I'm wondering if this is something Vagrant should be doing, because this task will not be applicable when running the Ansible playbook on a real (non-Vagrant) machine.
Vagrant requires privileged access to the VM, either using config.ssh.username = "root" or, more commonly, via sudo. The Bento Ubuntu boxes currently configure it directly in /etc/sudoers.
I don't know what Ansible's sudo_user does or means, but I guess your provisioning is overriding /etc/sudoers. In that case you really need to ensure you don't lose Vagrant's sudo access to the VM, or build your own base box that uses sudoers.d.
As a side note, if you create a /etc/sudoers.d/ file, you should also set its mode to 0440, or at least some older sudo versions will refuse to apply it.
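Putting those notes together, the task from the question could be extended to set the mode and validate the file before installing it; a sketch (the visudo check is an extra safeguard, not part of the original task, and sudo: is the pre-2.0 spelling of become:):
- name: ensure Vagrant is a sudoer
  copy:
    content: "vagrant ALL=(ALL) NOPASSWD:ALL"
    dest: /etc/sudoers.d/vagrant
    owner: root
    group: root
    mode: "0440"
    validate: "visudo -cf %s"
  sudo: yes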
