I am creating a pipeline which is automatically triggered when I push my code to gitlab.com.
The project is about provisioning a machine.
Here is my .gitlab-ci.yml file:
ansible_build:
  image: debian:10
  script:
    - apt-get update -q -y
    - apt-get install -y ansible git openssh-server keychain
    - service ssh stop
    - service ssh start
    - cp files/<ad-hoc-created-key> key.pem && chmod 600 key.pem
    - eval `keychain --eval` > /dev/null 2>&1
    - ssh-add key.pem
    - ansible-galaxy install -r requirements.yml
    - ansible-playbook provision.yml --inventory hosts --limit local
When I push my code, the GitLab environment starts running all the commands, but then it exits with the following error:
$ ansible-playbook provision.yml --inventory hosts --limit local
PLAY [Provision step] **********************************************************
TASK [Gathering Facts] *********************************************************
fatal: [127.0.0.1]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Host key verification failed.", "unreachable": true}
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit @/builds/team/ansible/provision.retry
PLAY RECAP *********************************************************************
127.0.0.1 : ok=0 changed=0 unreachable=1 failed=0
On my local PC, I solved this using the ssh-copy-id <path-to-the-key> <localhost> command, but I don't know how to solve it for GitLab CI, given that it's not an environment I can control.
I also tried replacing the 127.0.0.1 IP address with localhost:
ansible-playbook provision.yml --inventory hosts --limit localhost
Then it fails:
ansible-playbook provision.yml --inventory hosts --limit localhost
[WARNING] Ansible is being run in a world writable directory (/builds/teamiguana/minerva-ansible), ignoring it as an ansible.cfg source. For more information see https://docs.ansible.com/ansible/devel/reference_appendices/config.html#cfg-in-world-writable-dir
[WARNING]: Found both group and host with same name: localhost
PLAY [Provision step] **********************************************************
TASK [Gathering Facts] *********************************************************
fatal: [localhost]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.\r\nroot@localhost: Permission denied (publickey,password).", "unreachable": true}
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit @/builds/teamiguana/minerva-ansible/provision.retry
PLAY RECAP *********************************************************************
localhost : ok=0 changed=0 unreachable=1 failed=0
I don't have experience setting up something similar, but my first thought would be to check:
What system user is Gitlab trying to SSH as?
What system user has the corresponding public keys on the remote hosts?
You can override which user Ansible connects as either in the playbooks or via the --user <user> command-line flag; see https://docs.ansible.com/ansible/latest/cli/ansible-playbook.html#cmdoption-ansible-playbook-u.
Though maybe I'm misunderstanding, because I just noticed that you've set --limit local in your command?
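For reference, overriding the connecting user could look like either of the following, where <deploy-user> is just a placeholder for whichever account actually holds the key:
ansible-playbook provision.yml --inventory hosts --limit local --user <deploy-user>
or per host in the inventory (a sketch, not taken from the question):
127.0.0.1 ansible_user=<deploy-user> ansible_ssh_private_key_file=key.pem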
You may try adding the environment variable ANSIBLE_TRANSPORT with the value "local" to your ansible-playbook command, like this:
ANSIBLE_TRANSPORT=local ansible-playbook provision.yml --inventory hosts --limit local
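Setting ANSIBLE_TRANSPORT=local makes Ansible use the local connection plugin instead of SSH, which sidesteps the host-key problem entirely. The inventory is not shown in the question, but an equivalent sketch, assuming a [local] group containing 127.0.0.1, would be to set the connection type directly on the host:
[local]
127.0.0.1 ansible_connection=local
With ansible_connection=local the play runs directly inside the CI container, so no SSH key or known_hosts setup is needed.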
I need to execute some commands through the shell module, but when I run them against a group of hosts, the unreachable ones are also displayed in the terminal. How can I make it so that output is shown only for available hosts?
For now, running
ansible all -m shell -a "df -h"
Results in:
Mint-5302 | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: ssh: connect to host 192.168.53.2 port 22: No route to host",
    "unreachable": true
}
You can find the documentation here: Ignoring unreachable host errors.
At the task level:
- name: Execute shell
  shell: "df -h"
  ignore_unreachable: yes
And at the playbook level, to ignore unreachable hosts for every task:
- hosts: all
  ignore_unreachable: yes
  tasks:
    - name: Execute shell
      shell: "df -h"
You can achieve this behavior by using the community.general.diy callback plugin.
Create an ansible.cfg file with the following content:
[defaults]
bin_ansible_callbacks = True
stdout_callback = community.general.diy
[callback_diy]
runner_on_unreachable_msg=""
Run your ad-hoc command and you will get the following output:
$ ansible -m ping 192.168.10.1
PLAY [Ansible Ad-Hoc] *************************************************************************
TASK [ping] ***********************************************************************************
PLAY RECAP ************************************************************************************
192.168.10.1 : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
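Note that community.general.diy ships with the community.general collection, and bin_ansible_callbacks = True is what makes the stdout callback apply to ad-hoc ansible commands as well. If the collection is not already present in your environment, it can be installed with:
ansible-galaxy collection install community.general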
I have a simple playbook that tries to install packages.
My task is failing (see output).
I can ping the host, and I can manually run the command as the super user (tco).
My ansible.cfg:
[defaults]
inventory = /Users/<myuser>/<automation>/ansible/inventory
remote_user = tco
packages (vars/packages.yml):
packages:
  - yum-utils
  - sshpass
My playbook:
---
- hosts: all
  vars_files:
    - vars/packages.yml
  tasks:
    - name: testing connection
      ping:
      remote_user: tco
    - name: Installing packages
      yum:
        name: "{{ packages }}"
        state: present
Running playbook:
ansible-playbook my-playbook.yml --limit master --become --ask-become-pass --become-user=tco --become-method=sudo
Output:
ansible-playbook register_sys_rh.yml --limit master --become --ask-become-pass --become-user=tco --become-method=sudo
BECOME password:
PLAY [all] ******************************************************************************************************************************************************************
TASK [Gathering Facts] ******************************************************************************************************************************************************
ok: [xx.xxx.13.105]
TASK [testing connection] ***************************************************************************************************************************************************
ok: [xx.xxx.13.105]
TASK [Installing packages] **************************************************************************************************************************************************
fatal: [xx.xxx.13.105]: FAILED! => {"changed": false, "msg": "This command has to be run under the root user.", "results": []}
PLAY RECAP ******************************************************************************************************************************************************************
xx.xxx.13.105 : ok=2 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
inventory:
ansible-inventory --list | jq '.master'
{
  "hosts": [
    "xx.xxx.13.105"
  ]
}
I have copied my id_rsa.pub to the host already. I can now log in to the host without a password.
I can log in and do sudo su or run any other command that needs root privileges.
[tco@control-plane-0 ~]$ whoami
tco
[tco@control-plane-0 ~]$ hostname -I
xx.xxx.13.105 192.168.122.1
[tco@control-plane-0 ~]$ sudo su
[sudo] password for tco:
[root@control-plane-0 tco]#
I explicitly override the user and sudo method through the Ansible CLI; I have no idea what I am doing wrong here.
Thanks in advance.
Fixed it.
But I need to understand the Ansible concept better.
I changed ansible.cfg to this (changed become_user to root):
[defaults]
inventory = <my-inventory-path>
remote_user = tco
[privilege_escalation]
become=True
become_method=sudo
become_ask_pass=False
become_user=root
become_pass=<password>
And, running it like this:
ansible-playbook my-playbook.yml --limit master
this gives me an error:
FAILED! => {"msg": "Missing sudo password"}
So, I run like this:
ansible-playbook my-playbook.yml --limit master --ask-become-pass
and when prompted for a password I provide the tco password; I am not sure what the password for the root user is.
And this works.
I am not sure why the password in the cfg file is not working, even though I provide the same password when prompted.
As per my understanding, when I set become_user and become_pass, that is what Ansible uses to run privileged commands. But here I am saying remote_user: tco and become_user: root.
This error occurs because the playbook is unable to run on the remote nodes as the root user.
We can modify the ansible.cfg file in order to resolve this issue.
Steps:
Open ansible.cfg; the default command is: sudo vim /etc/ansible/ansible.cfg
Search for the regular expression /privilege_escalation (in vim, / searches for the keyword that follows)
Press i (insert mode) and uncomment the following lines:
[privilege_escalation]
become=True
become_method=sudo
become_user=root
become_ask_pass=False
Note: If the .cfg lines are not modified, the corresponding values can be passed on the command line during playbook execution (as specified in the question itself; see the example after these steps).
Run the playbook again with: ansible-playbook playbookname.yml
Note: if there is no SSH authentication pre-established between the Ansible controller and the nodes, appending -kK is required (this asks for the SSH password and the privileged-user password for the remote nodes), i.e. ansible-playbook playbookname.yml -kK
This will require you to input the password for the SSH connection, as well as the BECOME password (the password for the privileged user on the node).
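As mentioned in the note above, if you would rather not touch the global ansible.cfg, the same privilege-escalation settings can be passed as standard ansible-playbook flags, roughly like this:
ansible-playbook playbookname.yml --become --become-method=sudo --become-user=root --ask-become-pass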
I am trying to create a pipeline in GitLab CI to run an ansible-playbook.
Here is my .gitlab-ci.yml file:
image: "my_ansible_image"

before_script:
  - eval $(ssh-agent -s)
  - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
  - mkdir -p ~/.ssh
  - chmod 700 ~/.ssh

build:
  script:
    - ansible-playbook -i inventory -u root --private-key "$SSH_PRIVATE_KEY" playbook.yml -vv
The playbook is trying to execute a simple ping module:
---
- name: Ping test ## name of the first task
  ping: ## ping is a special case that does not require any attributes
For some reason the SSH connection is always failing with the following error:
$ ansible-playbook -i inventory -u root --private-key /builds/my_name/rhe7_set-up-rpm/private_key playbook.yml -vv
[WARNING]: Ansible is being run in a world writable directory
(/builds/aramniko/rhe7_set-up-rpm), ignoring it as an ansible.cfg source. For
more information see
https://docs.ansible.com/ansible/devel/reference_appendices/config.html#cfg-in-
world-writable-dir
ansible-playbook 2.9.11
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.8/site-packages/ansible
executable location = /usr/local/bin/ansible-playbook
python version = 3.8.3 (default, Jul 26 2020, 02:36:32) [GCC 9.2.0]
No config file found; using defaults
PLAYBOOK: playbook.yml *********************************************************
1 plays in playbook.yml
PLAY [Set-UP] ******************************************************************
TASK [Gathering Facts] *********************************************************
task path: /builds/my_name/rhe7_set-up-rpm/playbook.yml:2
fatal: [XXXXXXXXX]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Warning: Permanently added 'XXXXXXXX' (ECDSA) to the list of known hosts.\r\nno such identity: /builds/my_name/rhe7_set-up-rpm/private_key: No such file or directory\r\nroot@XXXXXXXXXX: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).", "unreachable": true}
PLAY RECAP *********************************************************************
XXXXXXXXXXX : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
How can I solve this issue?
EDIT: The solution was to add the following to before_script:
- ssh-keyscan DNS/Remote_IP
- echo "$SSH_KNOWN_HOSTS" >> ~/.ssh/known_hosts
- chmod 644 ~/.ssh/known_hosts
You should verify your host keys by running ssh-keyscan on the private server and copying the results to a GitLab variable called SSH_KNOWN_HOSTS. In your pipeline you need to copy that to the ~/.ssh/known_hosts file in the pipeline environment. Instructions are here:
https://docs.gitlab.com/ee/ci/ssh_keys/#verifying-the-ssh-host-keys.
On a side note, you should consider creating a secure directory to store your config file in. See: https://docs.ansible.com/ansible/latest/reference_appendices/config.html
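Putting that together with the before_script from the question, the host-key handling might look roughly like this, where SSH_KNOWN_HOSTS is the CI/CD variable holding the ssh-keyscan output described above:
before_script:
  - eval $(ssh-agent -s)
  - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
  - mkdir -p ~/.ssh
  - chmod 700 ~/.ssh
  - echo "$SSH_KNOWN_HOSTS" >> ~/.ssh/known_hosts
  - chmod 644 ~/.ssh/known_hosts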
An easier option is just creating an ansible.cfg file and adding this config along with others like remote_user, inventory, private_key_file, etc.:
#ansible.cfg
[defaults]
host_key_checking = False
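One caveat when doing this in GitLab CI: as the warning in the question's output shows, Ansible ignores an ansible.cfg that it auto-discovers in a world-writable checkout directory. The documented workaround is to point at the file explicitly, for example (assuming the file is committed at the repository root):
variables:
  ANSIBLE_CONFIG: "$CI_PROJECT_DIR/ansible.cfg"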
No need to create an ansible.cfg file; just add the ANSIBLE_HOST_KEY_CHECKING global variable in your .gitlab-ci.yml file:
variables:
  ANSIBLE_HOST_KEY_CHECKING: "False"
or in the build job:
build:
  variables:
    ANSIBLE_HOST_KEY_CHECKING: "False"
  script:
    - ansible-playbook -i inventory -u root --private-key "$SSH_PRIVATE_KEY" playbook.yml -vv
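This works because Ansible reads most of its settings from environment variables, so exporting the variable in the job's script should be equivalent, for example:
before_script:
  - export ANSIBLE_HOST_KEY_CHECKING=False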
I want to encrypt my Ansible inventory file using Ansible Vault, as it contains IPs/passwords/key file paths etc., which I do not want to keep in a readable format.
This is what I have tried.
My folder structure looks like below
env/
  hosts
  hosts_details
plays/
  test.yml
files/
  vault_pass.txt
env/hosts
[server-a]
server-a-name
[server-b]
server-b-name
[webserver:children]
server-a
server-b
env/hosts_details (file which I want to encrypt)
[server-a:vars]
env_name=server-a
ansible_ssh_user=root
ansible_ssh_host=10.0.0.1
ansible_ssh_private_key_file=~/.ssh/xyz-key.pem
[server-b:vars]
env_name=server-b
ansible_ssh_user=root
ansible_ssh_host=10.0.0.2
ansible_ssh_private_key_file=~/.ssh/xyz-key.pem
test.yml
---
- hosts: webserver
  tasks:
    - name: Print Hello world
      debug:
        msg: "Hello World"
Execution without encryption runs successfully without any errors
ansible-playbook -i env/ test.yml
When I encrypt my env/hosts_details file with the vault password file in files/vault_pass.txt and then execute the playbook, I get the error below:
ansible-playbook -i env/ test.yml --vault-password-file files/vault_pass.txt
PLAY [webserver] ******************************************************************
TASK [setup] *******************************************************************
Thursday 10 August 2017 11:21:01 +0100 (0:00:00.053) 0:00:00.053 *******
fatal: [server-a-name]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname server-a-name: Name or service not known\r\n", "unreachable": true}
fatal: [server-b-name]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname server-b-name: Name or service not known\r\n", "unreachable": true}
PLAY RECAP *********************************************************************
server-a-name : ok=0 changed=0 unreachable=1 failed=0
server-b-name : ok=0 changed=0 unreachable=1 failed=0
I want to know if I am missing anything, or whether it is possible to have the inventory file encrypted.
Is there any other alternative for the same?
As far as I know, you can't encrypt inventory files.
You should use group vars files instead.
Place your variables into ./env/group_vars/server-a.yml and server-b.yml in YAML format:
env_name: server-a
ansible_ssh_user: root
ansible_ssh_host: 10.0.0.1
ansible_ssh_private_key_file: ~/.ssh/xyz-key.pem
And encrypt server-a.yml and server-b.yml.
This way your inventory (hosts file) will be in plain text, but all inventory (host and group) variables will be encrypted.
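For completeness, encrypting those group_vars files and running the playbook would look something like this, reusing the vault password file layout from the question:
ansible-vault encrypt env/group_vars/server-a.yml env/group_vars/server-b.yml --vault-password-file files/vault_pass.txt
ansible-playbook -i env/ test.yml --vault-password-file files/vault_pass.txt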
I have a playbook and I want to run it with sudo. This is my Ansible playbook:
site.yml
---
- name: Server
  hosts: node1
  sudo: yes
  roles:
    - dbserver
When I run it I get this:
ansible-playbook -i hosts site.yml
PLAY [Server] *****************************************************************
GATHERING FACTS ***************************************************************
fatal: [node1] => Missing sudo password
TASK: [dbserver | installing server] ******************************************
FATAL: no hosts matched or all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit @/home/robe/site.retry
node1 : ok=0 changed=0 unreachable=1 failed=0
Then I add the Ansible sudo pass in site.yml:
---
- name: Server
  hosts: node1
  sudo_pass: ubuntu
  roles:
    - dbserver
I get this error:
ERROR: sudo_pass is not a legal parameter at this level in an Ansible Playbook
Then my questions are:
Do I have to add the sudo_pass attribute to each task?
Is there any way to specify sudo and sudo_pass in the playbook?
sudo_pass is not something Ansible knows. If you need to enter a sudo password on node1, then you keep sudo: yes in the playbook, and you'll have to give Ansible the sudo password on the command line when running your playbook:
ansible-playbook -i hosts site.yml -K
Note the -K parameter. It will then ask you for the sudo password before the playbook starts.
(the long version is --ask-sudo-pass by the way. I had to look that up)
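As a side note, sudo: yes and --ask-sudo-pass are the old spellings; on current Ansible versions the equivalent playbook and invocation would be roughly:
---
- name: Server
  hosts: node1
  become: yes
  roles:
    - dbserver
ansible-playbook -i hosts site.yml --ask-become-pass
If you really need to store the password somewhere, the usual place is the ansible_become_pass variable (ideally vault-encrypted) rather than the playbook itself.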