I am trying to run Ansible playbooks against remote hosts from within Concourse, but I cannot get it to work. Below are my steps:
Concourse YAML file:
---
resource_types:
- name: ansible-playbook
  type: docker-image
  source:
    repository: troykinsella/concourse-ansible-playbook-resource
    tag: latest
resources:
- name: ansible
  type: ansible-playbook
  source:
    debug: true
    user: cloud_user
    ssh_private_key: ((ssh-key))
    verbose: vvv
- name: source-code
  type: git
  source:
    uri: ((git-repo))
    branch: master
    private_key: ((ssh-key))
jobs:
- name: ansible-concourse
  plan:
  - get: source-code # git resource
  - put: ansible
    params:
      check: true
      diff: true
      become: true
      become_user: root
      inventory: inventory/hosts
      playbook: site.yml
      path: source-code
Host file:
[test]
localhost
Inside the container:
I intercepted the container; from inside it I can open an SSH connection to any IP, but I am not able to complete the SSH login.
Ansible playbook:
---
- name: "Running Current Working Directory"
  hosts: test
  gather_facts: no
  tasks:
  - name: "Current Working Directory"
    shell: pwd
    register: value
  - debug:
      msg: "The Current Working Directory {{value.stdout_lines}}"
Output from Concourse:
ansible-playbook -i inventory/hosts --private-key /tmp/ansible-playbook-resource-ssh-private-key --user cloud_user -vvv site.yml
ansible-playbook 2.9.0
config file = /tmp/build/put/source-code/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/dist-packages/ansible
executable location = /usr/local/bin/ansible-playbook
python version = 3.6.8 (default, Oct 7 2019, 12:59:55) [GCC 8.3.0]
Using /tmp/build/put/source-code/ansible.cfg as config file
host_list declined parsing /tmp/build/put/source-code/inventory/hosts as it did not pass its verify_file() method
script declined parsing /tmp/build/put/source-code/inventory/hosts as it did not pass its verify_file() method
auto declined parsing /tmp/build/put/source-code/inventory/hosts as it did not pass its verify_file() method
Parsed /tmp/build/put/source-code/inventory/hosts inventory source with ini plugin
PLAYBOOK: site.yml *************************************************************
1 plays in site.yml
PLAY [Running Current Working Directory] ***************************************
META: ran handlers
TASK [Current Working Directory] ***********************************************
task path: /tmp/build/put/source-code/site.yml:7
Monday 18 November 2019 12:38:49 +0000 (0:00:00.084) 0:00:00.085 *******
<localhost> ESTABLISH SSH CONNECTION FOR USER: cloud_user
<localhost> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/tmp/ansible-playbook-resource-ssh-private-key"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="cloud_user"' -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/dc52b3112c localhost '/bin/sh -c '"'"'echo ~cloud_user && sleep 0'"'"''
<localhost> (255, b'', b'')
fatal: [localhost]: UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: ",
"unreachable": true
}
PLAY RECAP *********************************************************************
localhost : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
Monday 18 November 2019 12:38:49 +0000 (0:00:00.029) 0:00:00.114 *******
===============================================================================
Current Working Directory ----------------------------------------------- 0.03s
/tmp/build/put/source-code/site.yml:7 -----------------------------------------
localhost is normally accessed through the local connection plugin (unless you are trying to do something really special and have configured access through SSH, which does not seem to be the case judging from the error message above).
If you don't declare it in your inventory, localhost is implicit, uses the local connection, and is not matched by the all group.
However, if you declare localhost explicitly in your inventory, the default connection plugin becomes ssh and the all group matches this host too. You have to set the connection back to local yourself in that case.
You have two options to make your current test work:
Delete your inventory (or use one that does not explicitly declare localhost) and modify your playbook to target localhost directly => hosts: localhost (a sketch follows below)
Keep your playbook as is and modify your inventory:
[test]
localhost ansible_connection=local
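For the first option, a minimal sketch of how the playbook could look, keeping your original tasks unchanged:
---
- name: "Running Current Working Directory"
  hosts: localhost   # implicit localhost uses the local connection plugin automatically
  gather_facts: no
  tasks:
  - name: "Current Working Directory"
    shell: pwd
    register: value
  - debug:
      msg: "The Current Working Directory {{value.stdout_lines}}"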
I have a simple playbook that tries to install packages.
My task is failing (see output).
I can ping the host, and manually I can run the command as the super user (tco).
My ansible.cfg:
[defaults]
inventory = /Users/<myuser>/<automation>/ansible/inventory
remote_user = tco
packages (vars/packages.yml):
packages:
- yum-utils
- sshpass
playbook:
---
- hosts: all
  vars_files:
    - vars/packages.yml
  tasks:
    - name: testing connection
      ping:
      remote_user: tco
    - name: Installing packages
      yum:
        name: "{{ packages }}"
        state: present
Running playbook:
ansible-playbook my-playbook.yml --limit master --become --ask-become-pass --become-user=tco --become-method=sudo
Output:
ansible-playbook register_sys_rh.yml --limit master --become --ask-become-pass --become-user=tco --become-method=sudo
BECOME password:
PLAY [all] ******************************************************************************************************************************************************************
TASK [Gathering Facts] ******************************************************************************************************************************************************
ok: [xx.xxx.13.105]
TASK [testing connection] ***************************************************************************************************************************************************
ok: [xx.xxx.13.105]
TASK [Installing packages] **************************************************************************************************************************************************
fatal: [xx.xxx.13.105]: FAILED! => {"changed": false, "msg": "This command has to be run under the root user.", "results": []}
PLAY RECAP ******************************************************************************************************************************************************************
xx.xxx.13.105 : ok=2 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
inventory:
ansible-inventory --list | jq '.master'
{
"hosts": [
"xx.xxx.13.105"
]
}
I have already copied my id_rsa.pub to the host, so I can log in to the host without a password.
I can log in and do sudo su or run any other command that needs root privileges.
[tco@control-plane-0 ~]$ whoami
tco
[tco@control-plane-0 ~]$ hostname -I
xx.xxx.13.105 192.168.122.1
[tco@control-plane-0 ~]$ sudo su
[sudo] password for tco:
[root@control-plane-0 tco]#
I explicitly override the user and the sudo method on the ansible-playbook command line; I have no idea what I am doing wrong here.
Thanks in advance.
Fixed it.
But I need to understand the underlying Ansible concept better.
I changed ansible.cfg to this (changed become_user to root):
[defaults]
inventory = <my-inventory-path>
remote_user = tco
[privilege_escalation]
become=True
become_method=sudo
become_ask_pass=False
become_user=root
become_pass=<password>
And, running it like this:
ansible-playbook my-playbook.yml --limit master
This gives me an error:
FAILED! => {"msg": "Missing sudo password"}
So I run it like this:
ansible-playbook my-playbook.yml --limit master --ask-become-pass
and when prompted for a password I provide the tco password, since I am not sure what the root user's password is.
And this works.
I am not sure why the password in the cfg file is not working, even though I provide the same password when prompted.
As per my understanding, when I set become_user and become_pass, that is what Ansible uses to run privileged commands. But here I am saying remote_user: tco and become_user: root.
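To untangle the remote_user/become_user confusion: Ansible logs in as remote_user (tco) and then escalates to become_user (root) via sudo, and sudo by default asks for the password of the invoking user, so the tco password is exactly what --ask-become-pass expects. Also, the [privilege_escalation] section of ansible.cfg has no become_pass option, which is why the password placed there is ignored; outside the prompt it would normally come from an ansible_become_password variable (ideally vaulted). A minimal sketch of the same setup expressed in the playbook itself (host group and packages as in the question):
---
- hosts: all
  remote_user: tco        # user Ansible logs in with over SSH
  become: true            # escalate privileges for the tasks
  become_user: root       # user the tasks run as after escalation
  become_method: sudo     # sudo prompts for tco's password, not root's
  vars_files:
    - vars/packages.yml
  tasks:
    - name: Installing packages
      yum:
        name: "{{ packages }}"
        state: present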
This error occurs because the playbook is unable to run on the remote nodes as the root user.
We can modify the ansible.cfg file in order to resolve this issue.
Steps:
Open ansible.cfg; by default: sudo vim /etc/ansible/ansible.cfg
Search for the privilege escalation section with /privilege_escalation (in Vim, / searches for the keyword that follows)
Press i (insert mode) and uncomment the following lines:
[privilege_escalation]
become=True
become_method=sudo
become_user=root
become_ask_pass=False
Note: If the .cfg lines are not modified, the corresponding values can be passed on the command line during playbook execution (as specified in the question itself).
Run the playbook again with: ansible-playbook playbookname.yml
Note: if there is no SSH key authentication pre-established between the Ansible controller and the nodes, appending -kK is required, i.e. ansible-playbook playbookname.yml -kK
This will prompt you for the password for the SSH connection as well as the BECOME password (the password for the privileged user on the node).
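For readability, the long-form equivalents of those flags are --ask-pass for -k (the SSH password) and --ask-become-pass for -K (the become password):
ansible-playbook playbookname.yml --ask-pass --ask-become-pass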
I am executing some Ansible playbooks via GitLab CI, and what I see is:
The playbook executes successfully through the pipeline, but it does not produce the output it is intended to produce.
When I retry the GitLab job, it produces the output I need.
This is one of the many playbooks I am executing through GitLab:
1_ca.yaml
---
- hosts: 127.0.0.1
  connection: local
  tasks:
    - name: Create ca-csr.json
      become: true
      copy:
        dest: ca-csr.json
        content: '{"CN":"Kubernetes","key":{"algo":"rsa","size":2048},"names":[{"C":"US","L":"Portland","O":"Kubernetes","OU":"CA","ST":"Oregon"}]}'
    - name: Create ca-config.json
      become: true
      copy:
        dest: ca-config.json
        content: '{"signing":{"default":{"expiry":"8760h"},"profiles":{"kubernetes":{"usages":["signing","key encipherment","server auth","client auth"],"expiry":"8760h"}}}}'
    - name: Create the ca.pem & ca-key.pem
      # become: true
      shell: |
        cfssl gencert -initca ca-csr.json | cfssljson -bare ca
Basically what this does is create some certs I need.
But on the first attempt, even though the pipeline passes, it does not generate these certs. When I restart that particular job in GitLab (running the same job a second time) it does generate them.
Why is this happening?
This is what my .gitlab-ci.yaml looks like:
Create-Certificates:
  stage: ansible-play-books-create-certs
  retry:
    max: 2
    when:
      - always
  script:
    - echo "Executing ansible playbooks for generating certficates"
    - ansible-playbook ./ansible-playbooks/1_ca/1_ca.yaml
    - ansible-playbook ./ansible-playbooks/1_ca/2_admin.yaml
    - ansible-playbook ./ansible-playbooks/1_ca/3_kubelet.yaml
    - ansible-playbook ./ansible-playbooks/1_ca/4_kube-controller.yaml
    - ansible-playbook ./ansible-playbooks/1_ca/5_kube-proxy.yaml
    - ansible-playbook ./ansible-playbooks/1_ca/6_kube-scheduler.yaml
    - ansible-playbook ./ansible-playbooks/1_ca/7_kube-api-server.yaml
    - ansible-playbook ./ansible-playbooks/1_ca/8_service-account.yaml
    - ansible-playbook ./ansible-playbooks/1_ca/9_distribute-client-server-cert.yaml
  # when: delayed
  # start_in: 1 minutes
  tags:
    - banuka-gcp-k8s-hard-way
PS: These Ansible playbooks execute on the Ansible host itself, not on remote servers, so I can log in to the Ansible master server and check whether these files were created.
Running your playbook without the last shell task produces the following output:
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [127.0.0.1] **************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ********************************************************************************************************************************************************************************************
ok: [127.0.0.1]
TASK [Create ca-csr.json] *****************************************************************************************************************************************************************************************
[WARNING]: File './ca-csr.json' created with default permissions '600'. The previous default was '666'. Specify 'mode' to avoid this warning.
changed: [127.0.0.1]
TASK [Create ca-config.json] **************************************************************************************************************************************************************************************
[WARNING]: File './ca-config.json' created with default permissions '600'. The previous default was '666'. Specify 'mode' to avoid this warning.
changed: [127.0.0.1]
PLAY RECAP ********************************************************************************************************************************************************************************************************
127.0.0.1 : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
And checking that the files exist:
$ ls ca* -al
-rw------- 1 root root 155 Aug 17 02:48 ca-config.json
-rw------- 1 root root 129 Aug 17 02:48 ca-csr.json
So although it's quite a dirty way of writing a playbook, it works.
Why is it dirty?
you're not using any inventory
you should use local_action and not connection: local for local tasks (see the sketch below)
you are misusing Ansible, a multi-node configuration management tool, to do the job of a bash script
So in conclusion, there's nothing wrong with your Ansible playbook itself (unless it's the file permissions?); if it does not run as expected, you should look more in the gitlab-ci direction.
You would need to provide more details on the GitLab CI setup, but maybe the stage is not correct?
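As a sketch of the local_action style mentioned in the second point above (task name and content are copied from 1_ca.yaml; mode is added only to silence the default-permissions warning seen in the output):
---
- hosts: all   # whichever group the play would normally target
  tasks:
    - name: Create ca-csr.json
      local_action:
        module: copy
        dest: ca-csr.json
        content: '{"CN":"Kubernetes","key":{"algo":"rsa","size":2048},"names":[{"C":"US","L":"Portland","O":"Kubernetes","OU":"CA","ST":"Oregon"}]}'
        mode: "0600"   # matches the current default and avoids the permissions warning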
I am trying to create a pipeline in GitLab CI to run an ansible-playbook.
Here is my .gitlab-ci.yml file:
image: "my_ansible_image"
before_script:
  - eval $(ssh-agent -s)
  - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
  - mkdir -p ~/.ssh
  - chmod 700 ~/.ssh
build:
  script:
    - ansible-playbook -i inventory -u root --private-key "$SSH_PRIVATE_KEY" playbook.yml -vv
The playbook is trying to execute a simple ping module:
---
- name: Ping test ## name of the first task
  ping: ## ping is a special case that does not require any attributes
For some reason the SSH connection always fails with the following error:
$ ansible-playbook -i inventory -u root --private-key /builds/my_name/rhe7_set-up-rpm/private_key playbook.yml -vv
[WARNING]: Ansible is being run in a world writable directory
(/builds/aramniko/rhe7_set-up-rpm), ignoring it as an ansible.cfg source. For
more information see
https://docs.ansible.com/ansible/devel/reference_appendices/config.html#cfg-in-
world-writable-dir
ansible-playbook 2.9.11
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.8/site-packages/ansible
executable location = /usr/local/bin/ansible-playbook
python version = 3.8.3 (default, Jul 26 2020, 02:36:32) [GCC 9.2.0]
No config file found; using defaults
PLAYBOOK: playbook.yml *********************************************************
1 plays in playbook.yml
PLAY [Set-UP] ******************************************************************
TASK [Gathering Facts] *********************************************************
task path: /builds/my_name/rhe7_set-up-rpm/playbook.yml:2
fatal: [XXXXXXXXX]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Warning: Permanently added 'XXXXXXXX' (ECDSA) to the list of known hosts.\r\nno such identity: /builds/my_name/rhe7_set-up-rpm/private_key: No such file or directory\r\nroot@XXXXXXXXXX: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).", "unreachable": true}
PLAY RECAP *********************************************************************
XXXXXXXXXXX : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
How can I solve this issue?
EDIT: The solution was to add the following to before_script:
- ssh-keyscan DNS/Remote_IP
- echo "$SSH_KNOWN_HOSTS" >> ~/.ssh/known_hosts
- chmod 644 ~/.ssh/known_hosts
You should verify your host keys by running ssh-keyscan on the private server and copying the results to a GitLab variable called SSH_KNOWN_HOSTS. In your pipeline you need to copy that to the ~/.ssh/known_hosts file in the pipeline environment. Instructions are here:
https://docs.gitlab.com/ee/ci/ssh_keys/#verifying-the-ssh-host-keys.
On a side note, you should consider creating a secure directory to store your config file in. See: https://docs.ansible.com/ansible/latest/reference_appendices/config.html
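Putting that together with the before_script from the question, the SSH setup section of .gitlab-ci.yml might look like this (SSH_PRIVATE_KEY and SSH_KNOWN_HOSTS being GitLab CI variables):
before_script:
  - eval $(ssh-agent -s)
  - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
  - mkdir -p ~/.ssh
  - chmod 700 ~/.ssh
  - echo "$SSH_KNOWN_HOSTS" >> ~/.ssh/known_hosts
  - chmod 644 ~/.ssh/known_hosts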
An easier option is to just create an ansible.cfg file and add that setting, along with others like remote_user, inventory, private_key_file, etc.:
#ansible.cfg
[defaults]
host_key_checking = False
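A slightly fuller sketch of such an ansible.cfg with the other settings mentioned above (the path and user values are placeholders):
#ansible.cfg
[defaults]
host_key_checking = False
inventory = ./inventory
remote_user = root
private_key_file = ~/.ssh/id_rsa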
There is no need to create an ansible.cfg file; just add the ANSIBLE_HOST_KEY_CHECKING variable globally in your .gitlab-ci.yml file:
variables:
  ANSIBLE_HOST_KEY_CHECKING: "False"
or in the build job:
build:
  variables:
    ANSIBLE_HOST_KEY_CHECKING: "False"
  script:
    - ansible-playbook -i inventory -u root --private-key "$SSH_PRIVATE_KEY" playbook.yml -vv
I want to provision a new VPS. The way this is typically done: 1) try to log in as a non-root user, and 2) if that fails, perform the provisioning.
But I can't connect. I can't even log in as root. (I can ssh from the shell, so the password is correct.)
hosts:
[server]
42.42.42.42
playbook.yml:
---
- hosts: all
  vars:
    ROOT_PASSWORD: foo
  gather_facts: no
  tasks:
    - name: set root password
      set_fact: ansible_password={{ ROOT_PASSWORD }}
    - name: try login with password
      local_action: "command ssh -q -o BatchMode=yes -o ConnectTimeout=3 root@{{ inventory_hostname }} 'echo ok'"
      ignore_errors: true
      changed_when: false
    # more stuff here...
I tried the following, but none of them connect:
I stored the password in a variable like above
I prompted for the password using ansible-playbook -k playbook.yml
I moved the password to the inventory file
[server]
42.42.42.42 ansible_user=root ansible_password=foo
I added the ssh flag -o PreferredAuthentications=password to force password auth
But none of the above connects. I always get the error:
root@42.42.42.42: Permission denied (publickey,password).
If I remove -o BatchMode=yes then it prompts me for a password and does connect. But that prevents automation; the idea is to do this without user intervention.
What am I doing wrong?
This is a new vps, nothing is set up yet - so I'm looking for the simplest possible example of a playbook that connects using root and a password.
You're close. The variable is ansible_ssh_password, not ansible_ssh_pass. The variables with _ssh in the name are legacy names, so you can just use ansible_user and ansible_password instead.
If I have an inventory like this:
[server]
example ansible_host=192.168.122.148 ansible_user=root ansible_password=secret
Then I can run this command successfully:
$ ansible all -i hosts -m ping
example | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}
If the above ad-hoc command works correctly, then a playbook should work correctly as well. E.g., still assuming the above inventory, I can use the following playbook:
---
- hosts: all
  gather_facts: false
  tasks:
    - ping:
And I can call it like this:
$ ansible-playbook playbook.yml -i hosts
PLAY [all] ***************************************************************************
TASK [ping] **************************************************************************
ok: [example]
PLAY RECAP ***************************************************************************
example : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
...and it all works just fine.
Try using --ask-become-pass
ansible-playbook -k playbook.yml --ask-become-pass
That way it's not hardcoded.
Also, inside the playbook you can invoke:
---
- hosts: all
  become: true
  gather_facts: no
All SO answers and blog articles I've seen so far recommend doing it the way I've shown.
But after spending much time on this, I don't believe it could work that way, so I don't understand why it is always recommended. I noticed that ansible has changed its API many times, and maybe that approach is simply outdated!
So I came up with an alternative, using sshpass:
hosts:
[server]
42.42.42.42
playbook.yml:
---
- hosts: all
  vars:
    ROOT_PASSWORD: foo
  gather_facts: no
  tasks:
    - name: try login with password (using out-of-band ssh connection)
      local_action: command sshpass -p {{ ROOT_PASSWORD }} ssh -q -o ConnectTimeout=3 root@{{ inventory_hostname }} 'echo ok'
      ignore_errors: true
      register: exists_user
    - name: ping if above succeeded (using in-band ssh connection)
      remote_user: root
      block:
        - name: set root ssh password
          set_fact:
            ansible_password: "{{ ROOT_PASSWORD }}"
        - name: ping
          ping:
            data: pong
      when: exists_user is success
This is just a tiny proof of concept.
The actual use case is to try connect with a non-root user, and if that fails, to provision the server. The above is the starting point for such a playbook.
Unlike @larsks' excellent alternative, this does not assume python is installed on the remote, and performs the ssh connection test out of band, assisted by sshpass.
I am trying to use Ansible to connect to my switches and just do a show version. For some reason, when I run the Ansible playbook I keep getting the error "Failed to open session", and I don't know why. I am able to SSH directly to the box with no issues.
[Ansible.cfg]
enable_task_debugger=True
hostfile=inventory
transport=paramiko
host_key_checking=False
[inventory/hosts]
127.0.0.1 ansible_connection=local
[routers]
192.168.10.1
[test.yaml]
---
- hosts: routers
  gather_facts: true
  connection: paramiko
  tasks:
    - name: show run
      ios_command:
        commands:
          - show version
Then I try to run it like this:
ansible-playbook -vvv -i inventory test.yaml -u username -k
And these are the last lines of the error:
EXEC /bin/sh -c 'echo ~ && sleep 0'
fatal: [192.168.10.1]: UNREACHABLE! => {
"changed": false,
"msg": "Failed to open session",
"unreachable": true
}
Ansible version is 2.4.2.0
Please use:
connection: local
and change - hosts: routers to - hosts: localhost (a sketch follows below)
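A minimal sketch of test.yaml with those two changes applied. In that Ansible version (2.4) the switch address and login details then typically go to the module through its provider argument; the host below is the switch from the inventory, and username/password are placeholders:
---
- hosts: localhost
  gather_facts: true
  connection: local
  tasks:
    - name: show run
      ios_command:
        provider:
          host: 192.168.10.1   # the switch from the [routers] group
          username: username   # placeholder
          password: password   # placeholder
        commands:
          - show version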