Integrating Ansible with GitLab - ansible

I want to deploy the artifacts stored by the previous job to another server, and I am using Ansible for the deployment.
This is my deploy job:
```yaml
deploy:
  image: williamyeh/ansible:ubuntu18.04
  image: linuxserver/openssh-server
  image: python
  stage: deploy
  variables:
    ANSIBLE_HOST_KEY_CHECKING: "False"
  script:
    - printf "[sample]\n host1 ansible_host=10.45.1.21 ansible_user=csrtest ansible_password=admin123 ansible_python_interpreter=/usr/bin/python3 ansible_become=yes ansible_sudo_pass=admin123 http_port=8080\n" > /etc/ansible/hosts
    - cat /etc/ansible/hosts
    - ansible --version
    - ls
    - ansible all -m ping
    - ansible-playbook playbook.yaml
```
I need all three of the Docker images above, but only the last one is being downloaded.
How can I solve this? Can anyone please help?
Thanks in advance
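For context, a GitLab CI job runs in exactly one `image:`; when the key is repeated, later values override earlier ones, which is why only the last image gets pulled. A minimal sketch of one possible restructuring, keeping only the Ansible image from the question (which already ships with Python and an SSH client):

```yaml
deploy:
  # A job accepts a single "image:" key; duplicates are not merged,
  # the last one simply wins. Pick the image with the tooling you need.
  image: williamyeh/ansible:ubuntu18.04
  stage: deploy
  variables:
    ANSIBLE_HOST_KEY_CHECKING: "False"
  script:
    - ansible --version
    - ansible all -m ping
    - ansible-playbook playbook.yaml
```

If a second container is genuinely needed alongside the job (for example an SSH target to deploy against), `services:` is the GitLab CI mechanism for that, not a second `image:` key.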

Related

Why can't my ci-cd config file find this Ansible command?

I'm currently testing out GitLab CI/CD and Ansible and wanted to combine the two. I already made an Ansible playbook, which just sets up a small nginx server for testing.
I'm using a Docker container with an Alpine image for my runner.
My .gitlab-ci.yml file looks like this:
```yaml
stages:
  - install
  - deploy

install-ansible:
  stage: install
  script:
    - apk add ansible -v

deploy-job:
  stage: deploy
  script:
    - ansible-playbook ansible_roles.yml
```
The first part of the pipeline works, but it always fails in the deploy part with the following error message:

```
$ ansible-playbook ansible_roles.yml
/bin/sh: eval: line 128: ansible-playbook: not found
Cleaning up project directory and file based variables
ERROR: Job failed: exit code 127
```
Install Ansible in the same job where it is supposed to run: i.e. drop the install-ansible job and install Ansible in deploy-job.
Note: as is, you'll have to install Ansible in every job where you want to use it.
```yaml
stages:
  - deploy

deploy-job:
  stage: deploy
  before_script:
    - apk add ansible -v
  script:
    - ansible-playbook ansible_roles.yml
```
As installing packages on each CI run (and possibly in several jobs) can be costly and slow, you might consider building an image that already contains Ansible, pushing it to a Docker registry, and using it directly in your job instead of the default image from your runner config:
```yaml
stages:
  - deploy

deploy-job:
  stage: deploy
  image: my.docker.repo/my/ansible/image:latest
  script:
    - ansible-playbook ansible_roles.yml
```
Another solution is to ask your runner administrator to install Ansible in the default image used by your runner, so that it is always available by default.
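If you go the prebuilt-image route, a minimal sketch of such an image (Alpine-based to match the runner above; the base tag and package set are just an example):

```dockerfile
# Hypothetical image bundling Ansible, to be built and pushed to your
# registry, e.g. as my.docker.repo/my/ansible/image:latest
FROM alpine:3.18
RUN apk add --no-cache ansible openssh-client
```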

How to start Selenoid on gitlab-ci?

I'm trying to run tests on gitlab-ci, but I don't know which command starts Selenoid.
Locally, the command is ./cm selenoid start, but I don't know how to start Selenoid when it runs as a CI service.
This is my .gitlab-ci.yml:
```yaml
stages:
  - testing

ya_test:
  stage: testing
  tags:
    - docker
  services:
    - selenoid/chrome
  image: python:3.9-alpine
  before_script:
    - pip3 install -r requirements.txt
    - selenoid/chrome start #???????
  script:
    - pytest -s
  allow_failure: true
```
And what address should I specify in the test fixture? localhost:4444?
Thanks for the help!
For launching Selenoid with Chrome, try this YAML. Use

```python
webdriver.Remote(
    command_executor="http://selenoid__chrome:4444",
    options=chrome_options,
    desired_capabilities=DesiredCapabilities.CHROME,
)
```

to connect to Chrome, and add no-sandbox to the options.
```yaml
image: python:3.8

stages:
  - test

test:
  stage: test
  services:
    - name: aerokube/selenoid
    - name: selenoid/chrome:89.0
  before_script:
    - echo "Install environment"
    - apt-get update -q -y
    - pip3 install -r requirements.txt
  script:
    - echo "Run all tests"
    - py.test -s -v --junitxml=report.xml test.py
  # if you want a detailed report in GitLab, add artifacts
  artifacts:
    when: always
    reports:
      junit: report.xml
```
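The connection parameters above can be sketched as plain data (a hedged illustration; `webdriver.Remote` itself needs the selenium package and a running Selenoid service, so only the values are shown here):

```python
# GitLab CI exposes a service image "selenoid/chrome" under the alias
# "selenoid__chrome" (the slash in the image name becomes a double
# underscore), which is why the executor URL is not localhost.
COMMAND_EXECUTOR = "http://selenoid__chrome:4444"

def chrome_capabilities() -> dict:
    """Capability fragment to pass to webdriver.Remote; --no-sandbox is
    needed because the browser runs as root inside the container."""
    return {
        "browserName": "chrome",
        "goog:chromeOptions": {"args": ["--no-sandbox"]},
    }
```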

ERROR: problem running /home/circleci/project/ansible/inventory.yml --list ([Errno 8] Exec format error)

I deploy my project automatically using CircleCI. I tried running Ansible locally with no problems, but I can't seem to run the same commands on the CircleCI machine.
My CircleCI config:
```yaml
- add_ssh_keys:
    fingerprints:
      - "my:fin:ger:pri:nt"
- run:
    name: deploy application by ansible
    command: |
      cd ansible
      ansible-playbook -i inventory.yml -e 'record_host_keys=True' -u ec2-user playbook.yml
    environment:
      ANSIBLE_HOST_KEY_CHECKING: False
```
The content of inventory.yml:

```yaml
all:
  hosts:
    "My EC2-IP"
```
What do I miss here?

Ansible Azure Dynamic inventory with tags does not work

I am using Ansible version 2.8.1 and trying to identify the servers in a resource group based on tags. Below is my code.
I have 2 VMs in test-rg (testvm1, testvm2). Only testvm1 has the tag nginx.
I have set the env variable AZURE_TAGS=nginx.
azureinv.yml
```yaml
plugin: azure_rm
include_vm_resource_groups:
  - test-rg
```
nginx.yml
```yaml
---
- name: Install and start Nginx on an Azure virtual machine
  hosts: all
  become: yes
  tasks:
    - name: echo test
      shell: "echo test"
```

The command I run:

```
ansible-playbook -i ./azureinv.yml nginx.yml -u test
```
Output: I see it running the echo on both servers (testvm1 and testvm2), even though only one of them has the nginx tag.
Can someone please help me?
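One possible approach (a hedged sketch based on the azure_rm inventory plugin's documented keyed_groups support) is to build groups from tags in azureinv.yml and then target a tag group instead of all:

```yaml
plugin: azure_rm
include_vm_resource_groups:
  - test-rg
keyed_groups:
  # places each host in a group named "tag_<tagname>_<tagvalue>"
  # for every tag set on the VM
  - prefix: tag
    key: tags
```

The playbook could then use a pattern such as hosts: tag_nginx_* so that untagged VMs like testvm2 are skipped.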

Ansible Shell Cannot Pull Docker Image

I've run into issues pulling Docker images from a private DockerHub repo using the Docker module of Ansible, so to sanity-check that code I decided to try pulling the image in question first using the shell module. This also fails. What's going on here? If I SSH onto the box, I can run exactly the same command in the shell and it works, pulling the right image.
Isolated example play:
```yaml
---
- hosts: <host-ip>
  gather_facts: True
  remote_user: ubuntu
  sudo: yes
  tasks:
    - include_vars: vars/client_vars.yml
    - name: Pull stardog docker image [private]
      shell: sudo docker pull {{stardog_docker_repo}}
    - name: Tag stardog docker image [private]
      shell: sudo docker tag {{stardog_docker_repo}} stardog_tag
```
The error that's being output is:
```
failed: [<host-ip>] => {"changed": true, "cmd": "sudo docker pull <org>/<image>:latest", "delta": "0:00:01.395931", "end": "2015-08-05 17:35:22.480811", "rc": 1, "start": "2015-08-05 17:35:21.084880", "warnings": []}
stderr: Error: image <org>/<image>:latest not found
stdout: Pulling repository <org>/<image>
FATAL: all hosts have already failed -- aborting
```
NB: I've sanitised my <org> and <image> but rest assured their image identifier in the playbook and error logging perfectly match the image that I can successfully run in the shell over ssh by doing:
```
$ sudo docker pull <org>/<image>:latest
```
I'm aware of various GitHub issues (like this one I had when using the Docker module), patches et cetera related to the docker-py library, but the thing here is I'm just using the Ansible shell module. What have I missed?
A colleague of mine pointed out something: if you log the env, you find that sudo: yes makes root run the docker commands by default, so the ubuntu user's Docker credentials are not picked up. This playbook works (assuming you have a valid dockercfg.json in the docker folder, relative to this playbook):
```yaml
---
- hosts: <host-ip>
  gather_facts: True
  remote_user: ubuntu
  sudo: yes
  tasks:
    - include_vars: vars/client_vars.yml
    # run the docker tasks
    - name: Add docker credentials for ubuntu user
      copy: src=docker/dockercfg.json dest=/root/.dockercfg
    - name: Get env
      shell: sudo env
      register: sudo_env
    - name: Debug
      debug: msg="{{sudo_env}}"
    - name: Pull stardog docker image [private]
      shell: docker pull {{stardog_docker_repo}}
    - name: Tag stardog docker image [private]
      shell: docker tag {{stardog_docker_repo}} stardog_tag
```
This gives root the right DockerHub creds. Alternatively, you can add sudo: false to each of the plays and use sudo inline on each shell call to run as the ubuntu user.
You should use Ansible's docker_container module to pull images now. That way, you don't need to run sudo in the shell.
http://docs.ansible.com/ansible/docker_container_module.html
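For pulling specifically, the dedicated docker_image module is the closer fit (a hedged sketch; the `source: pull` option exists in later Ansible releases, older ones used `state: present` instead):

```yaml
- name: Pull stardog docker image via the Docker modules
  docker_image:
    name: "{{ stardog_docker_repo }}"
    source: pull
```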
