Run Ansible playbook from Cloud-Init - ansible

I have been learning Cloud-Init for several days to do an automatic deployment. To apply certain configurations as part of it, I am using Ansible playbooks. The problem I have found is that I am not able to make the playbook run directly on the operating system that is being installed.
Here is the user-data file that I am using.
#cloud-config
autoinstall:
  version: 1
  identity:
    hostname: hostname
    password: "$6$cOciYeIErEet80Rv$YX8qt6vizXgcUkgIPSKD1qNZNxe77tSWOY3k/0.i8D8EpApaGNuyucxJvONmZiRj4rVM3L6EE4sLKcnzYVcMj/"
    username: ubuntu
  storage:
    layout:
      name: direct
  locale: es_ES
  timezone: "Europe/Madrid"
  keyboard:
    layout: es
  packages:
    - sshpass
    - ansible
    - git
  late-commands:
    - git clone https://github.com/MarcOrfilaCarreras/dotfiles /target/root/dotfiles
    - ansible-playbook -i inventory-test /root/dotfiles/ansible/playbooks/docker.yml -u ubuntu -e "ansible_password=ubuntu" -e "ansible_become_pass=ubuntu"
PS: I am using Ubuntu Server 22.04; the Ansible command is temporary and only for testing, and I know that I have to change the identity fields.

If you want to configure localhost, it's better to use the local transport (-c local on the command line).
Basically, change the ansible-playbook call to:
ansible-playbook -i inventory-test /root/dotfiles/ansible/playbooks/docker.yml -c local
This bypasses SSH entirely and runs the playbook locally.
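Applied to the user-data above, the late-commands section would then look roughly like this (a sketch only; the repository URL and paths are kept exactly as in the question):

  late-commands:
    - git clone https://github.com/MarcOrfilaCarreras/dotfiles /target/root/dotfiles
    - ansible-playbook -i inventory-test /root/dotfiles/ansible/playbooks/docker.yml -c local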

Related

Ansible having problem authenticating with Google Cloud Platform

We are using Ansible to deploy an image to Google Kubernetes Engine (GKE).
We have setup Ubuntu 20.04 and Python 3.8.5.
playbook.main.yml:
---
- hosts: localhost
  vars:
    k8s_file_path: /home/pesinn/Documents/...
  become: yes
  become_method: sudo
  roles:
    - k8s
main.yml:
- name: First Deployment
  k8s:
    kubeconfig: /home/pesinn/.kube/config
    src: "{{ k8s_file_path }}/deployment.yml"
When trying to deploy the image defined in deployment.yml file, by running the playbook, we get this error:
kubernetes.config.config_exception.ConfigException: cmd-path: process returned 1
Cmd: /home/pesinn/y/google-cloud-sdk/bin/gcloud config config-helper --format=json
Stderr: WARNING: Could not open the configuration file: [/root/.config/gcloud/configurations/config_default].
ERROR: (gcloud.config.config-helper) You do not currently have an active account selected.
Please run:
$ gcloud auth login
What we've already done
Initialized the cloud: gcloud init
Logged in and chosen a project: gcloud auth login
Ran export GOOGLE_APPLICATION_CREDENTIALS="path_to_service_account_key.json"
Ran gcloud container clusters get-credentials {gke_project} --region {region}
Ran the playbook: sudo ansible-playbook playbook.main.yml -vvv
Ran gcloud config config-helper --format=json on the local machine without any problems
What is very strange here is that we're logged in for sure. We can access the GKE cluster through kubectl command on the local machine. However, Ansible complains about us not being logged in. Also, in the error logs, we see that it is trying to open /root/.config/gcloud/configurations/config_default. Our default config file is, on the other hand, located in the home folder.
This error occurs randomly. Sometimes Ansible can detect our login and deploys the image, but sometimes it gives us this error. Both scenarios can happen without any code changes being made.
For some reason, ansible does not use GCP's default environment variables for authentication.
You can set
GCP_AUTH_KIND
GCP_SERVICE_ACCOUNT_EMAIL
GCP_SERVICE_ACCOUNT_FILE
GCP_SCOPES
GCP_SERVICE_ACCOUNT_FILE is the equivalent of GOOGLE_APPLICATION_CREDENTIALS
Reference: https://docs.ansible.com/ansible/latest/scenario_guides/guide_gce.html#providing-credentials-as-environment-variables
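For example, assuming you authenticate with the same service-account key file you already export as GOOGLE_APPLICATION_CREDENTIALS (the email address below is a placeholder), a sketch would be:

export GCP_AUTH_KIND=serviceaccount
export GCP_SERVICE_ACCOUNT_FILE="path_to_service_account_key.json"
export GCP_SERVICE_ACCOUNT_EMAIL="my-sa@my-project.iam.gserviceaccount.com"
export GCP_SCOPES="https://www.googleapis.com/auth/cloud-platform"
# -E asks sudo to preserve these variables; without it the playbook runs with root's
# environment, which would explain the lookups under /root/.config
sudo -E ansible-playbook playbook.main.yml -vvv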

Using ping on localhost in a playbook

I am unable to run ping commands from an Ansible host (using localhost, see below).
I built a simple playbook to run ping using the command module:
---
#
- name: GET INFO
  hosts: localhost
  tasks:
    - name: return motd to registered var
      command: "/usr/bin/ping 10.39.120.129"
      register: mymotd
    - name: debug output
      debug: var=mymotd
However, I get this error: "ping: socket: Operation not permitted"
It seems like a permissions issue. However, looking at the /usr/bin directory, ping appears to be executable:
"-rwxr-xr-x. 1 root root 66176 Aug 4 2017 ping",
I cannot use become or sudo; it seems like Tower is locked down for that, and I don't have the authority to change it either.
Anyone have any suggestions? What brought me to this is that I am trying to run ping in a custom module and getting a similar issue.
Thanks
The ping binary needs to have the setuid bit set to be fully runnable as a normal user, which is not the case on your server.
You need to run as root:
chmod u+s $(which ping)
If you don't have root access and cannot have this done by an admin, I'm afraid you're stuck... unless the server you are trying to ping is a machine you can manage with Ansible.
In that case, there is a ping module you can use. It is not an ICMP ping, as the docs point out. See if it can be used in your situation.
One of the numerous references I could find about ping permissions: https://ubuntuforums.org/showthread.php?t=927709
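If the target is reachable as a managed host, a minimal play using that module would look like this (the inventory host name is a placeholder):

- name: Check connectivity with the Ansible ping module (not ICMP)
  hosts: target_host        # placeholder inventory name
  gather_facts: no
  tasks:
    - name: Verify Ansible can reach and execute on the host
      ping: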

/etc/ansible folder is not available in macOS

I used pip to install Ansible on macOS, but I cannot find the /etc/ansible folder, nor the inventory file.
I want to run my playbook against a minikube environment, but the playbook returns:
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
[WARNING]: Could not match supplied host pattern, ignoring: 192.168.99.105
How to solve this issue?
I looked into this matter and using Ansible for managing minikube is not an easy topic. Let me elaborate on that:
The main issue is cited below:
Most Ansible modules that execute under a POSIX environment require a Python interpreter on the target host. Unless configured otherwise, Ansible will attempt to discover a suitable Python interpreter on each target host the first time a Python module is executed for that host.
-- Ansible Docs
What that means is that most of the modules will be unusable. Even ping.
Steps to reproduce:
Install Ansible
Install Virtualbox
Install minikube
Start minikube
SSH into minikube
Configure Ansible
Test
Install Ansible
As the original poster said, it can be installed through pip.
For example:
$ pip3 install ansible
Install VirtualBox
Please download and install appropriate version for your system.
Install minikube
Please follow this site: Kubernetes.io
Start minikube
You can start minikube by invoking command:
$ minikube start --vm-driver=virtualbox
Parameter --vm-driver=virtualbox is important because it will be useful later for connecting to the minikube.
Please wait for minikube to successfully deploy on the Virtualbox.
SSH into minikube
It is necessary to know the IP address of minikube inside the Virtualbox.
One way of getting this IP is:
Open Virtualbox
Click on the minikube virtual machine for it to show
Enter root as the account name. It should not ask for a password
Execute the command $ ip a | less and find the address of the network interface. It should be in the format 192.168.99.XX
From terminal that was used to start minikube please run below command:
$ minikube ssh
The command above will SSH into the newly created minikube environment and store a private key at:
$HOME/.minikube/machines/minikube/id_rsa
This id_rsa file will be needed to connect to minikube
Try to log in to minikube by invoking the command:
ssh -i PATH_TO/id_rsa docker@IP_ADDRESS
If the login succeeds, there should be no issues with Ansible
Configure Ansible
To use ansible-playbook, 2 files will be needed:
A hosts file with information about the hosts
A playbook file describing what you want Ansible to do
Example hosts file:
[minikube_env]
minikube ansible_host=IP_ADDRESS ansible_ssh_private_key_file=./id_rsa
[minikube_env:vars]
ansible_user=docker
ansible_port=22
The ansible_ssh_private_key_file=./id_rsa setting tells Ansible to use the SSH key that matches this minikube instance.
Note that this declaration requires the id_rsa file to be in the same location as the rest of the files.
Example playbook:
- name: Playbook for checking connection between hosts
  hosts: all
  gather_facts: no
  tasks:
    - name: Task to check the connection
      ping:
You can test the connection by invoking command:
$ ansible-playbook -i hosts_file ping.yaml
The above command should fail because there is no Python interpreter installed:
fatal: [minikube]: FAILED! => {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python"}, "changed": false, "module_stderr": "Shared connection to 192.168.99.101 closed.\r\n", "module_stdout": "/bin/sh: /usr/bin/python: No such file or directory\r\n", "msg": "The module failed to execute correctly, you probably need to set the interpreter.\nSee stdout/stderr for the exact error", "rc": 127}
There is a successful connection between Ansible and minikube, but there is no Python interpreter to back it up.
There is, however, a way to use Ansible without a Python interpreter: the Ansible documentation explains the use of the raw module.
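A minimal sketch of that approach, reusing the hosts file above (raw sends the command over SSH and does not need Python on the target):

- name: Playbook that works without a Python interpreter on the target
  hosts: all
  gather_facts: no
  tasks:
    - name: Run a plain shell command through the raw module
      raw: uname -a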

openwhisk cluster setup and load balancing

I am able to run OpenWhisk on my local dev machine. I would like to extend this to a production environment. Is there any concept of an OpenWhisk cluster? I am not able to find good documentation on this. How is auto-load balancing achieved, etc.?
OpenWhisk is deployed via ansible and as such can be distributed across multiple VMs in a straightforward way.
Check the README on distributed deployments for further information and guidance.
OpenWhisk uses Ansible for its deployment.
I followed these steps for my distributed setup:
First, ensure passwordless SSH connectivity to all the servers
git clone https://github.com/apache/incubator-openwhisk.git
Add the remote_user and private_key_file values to the defaults section of
the ansible.cfg file. The remote_user value sets the default ssh user. The
private_key_file is required when using a private key that is not in the
default ~/.ssh folder
[defaults]
remote_user = ubuntu
private_key_file=/path/to/file.pem
Go to tools/ubuntu-setup and run all.sh to install all the required software.
Now modify the inventory file (hosts) for your first node; this can become your bootstrapper VM
Check if you are able to ping the hosts: ansible all -i environments/distributed/hosts -m ping
If the ping is fine, run the next command to generate the config files: ansible-playbook -i environments/distributed/hosts setup.yml
For installing the prerequisites: ansible-playbook -i environments/distributed prereq_build.yml
Deploy registry: ansible-playbook -i environments/distributed registry.yml
Go to the OpenWhisk home directory and run the following command to build OpenWhisk:
./gradlew distDocker -PdockerHost=<docker_host>:4243 -PdockerRegistry=<registry_host>:5000
Once the build is successful, run the following commands from the ansible folder:
ansible-playbook -i environments/distributed/hosts couchdb.yml
ansible-playbook -i environments/distributed/hosts initdb.yml
ansible-playbook -i environments/distributed/hosts wipe.yml
ansible-playbook -i environments/distributed/hosts openwhisk.yml
ansible-playbook -i environments/distributed/hosts postdeploy.yml
Now edit the hosts file for the other hosts and repeat steps 7-8 and 12.
This will create the setup on all the nodes. Once done, you can put a load balancer in front of them. For syncing between DB instances I am using CouchDB continuous replication.

Connecting to container using ansible

I have built a Docker image and started the container using Ansible. I'm running into an issue trying to create a dynamic connection to the container from the Docker host in order to set some environment variables and execute some scripts. I know Ansible does not use SSH to connect to the container, though I could use the expect module to run the command "ssh root@localhost -p 12345". How do I add and maintain a connection to the container using the Ansible docker connection plugin, or by pointing directly at the Docker host? This is all running in an AWS EC2 instance.
I think I need Ansible to run the equivalent of the command it uses to connect to the container host: "docker exec -i -t container_host_server /bin/bash".
- name: Create a data container
  docker_container:
    name: packageserver
    image: my_image/image_name
    tty: yes
    state: started
    detach: yes
    volumes:
      - /var/www/html
    published_ports:
      - "12345:22"
      - "80:80"
  register: container
Thanks in Advance,
DT
To set environment variables you can use the env parameter in your docker_container task.
In the docker_container task you can also add the command parameter to override the command defined as CMD in the Dockerfile of your image, something like:
command: PathToYourScript && sleep infinity
In your example you expose container port 22, so it seems you want to run sshd inside the container. Although this is not a best practice in Docker, if you want sshd running you have to start it via the command parameter in the docker_container task:
command: ['/usr/sbin/sshd', '-D']
Having done that (and having defined a user in the container), you'll be able to connect to your container with
ssh -p 12345 user@dockerServer
or, as in your example, "ssh -p 12345 root@localhost" if your image already defines a root user and you are working on localhost.
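Putting the two parameters together, a sketch of the task might look like this (the variable name and value are placeholders; the rest mirrors the task from the question):

- name: Create the container with environment variables and sshd running
  docker_container:
    name: packageserver
    image: my_image/image_name
    state: started
    detach: yes
    env:
      MY_SETTING: "some_value"          # environment variables set via the env parameter
    command: ['/usr/sbin/sshd', '-D']   # keep sshd in the foreground so the container stays up
    published_ports:
      - "12345:22"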
