Why can't my CI/CD config file find this Ansible command?

I'm currently testing out GitLab CI/CD and Ansible, and I wanted to combine the two. I've already written an Ansible playbook that just sets up a small nginx server for testing.
I'm using a Docker container with an Alpine image for my runner.
My .gitlab-ci.yml file looks like this:
stages:
  - install
  - deploy

install-ansible:
  stage: install
  script:
    - apk add ansible -v

deploy-job:
  stage: deploy
  script:
    - ansible-playbook ansible_roles.yml
The first part of the pipeline works, but the deploy part always fails with the following error message:
$ ansible-playbook ansible_roles.yml
/bin/sh: eval: line 128: ansible-playbook: not found
Cleaning up project directory and file based variables
ERROR: Job failed: exit code 127

Install Ansible in the same job where it is supposed to run: i.e. drop the install-ansible job and install Ansible in deploy-job.
Note: as is, you'll have to install Ansible in every job that uses it, since each job starts from a fresh container.
stages:
  - deploy

deploy-job:
  stage: deploy
  before_script:
    - apk add ansible -v
  script:
    - ansible-playbook ansible_roles.yml
Since installing packages on every CI run (and possibly in several jobs) can be costly and slow, you might consider building an image that already contains Ansible, pushing it to a Docker registry, and using it directly in your job instead of the default image provided by your runner config:
stages:
  - deploy

deploy-job:
  stage: deploy
  image: my.docker.repo/my/ansible/image:latest
  script:
    - ansible-playbook ansible_roles.yml
Another solution is to ask your runner administrator to install Ansible in the default image used by your runner so that it is always available.
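If you stay with the install-on-the-fly approach, a default before_script avoids repeating the install line in each job. This is only a sketch, assuming the runner's default image is Alpine-based as in the question:
default:
  before_script:
    # Assumes an Alpine-based default image; --no-cache keeps the layer small
    - apk add --no-cache ansible

stages:
  - deploy

deploy-job:
  stage: deploy
  script:
    - ansible-playbook ansible_roles.yml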

Related

The deployment environment 'Staging' in your bitbucket-pipelines.yml file occurs multiple times in the pipeline

I'm trying to get Bitbucket Pipelines to work with multiple steps that define the deployment environment. When I do, I get the error:
Configuration error The deployment environment 'Staging' in your
bitbucket-pipelines.yml file occurs multiple times in the pipeline.
Please refer to our documentation for valid environments and their
ordering.
From what I read, the deployment setting has to be defined on a step-by-step basis.
How would I set up this example pipelines file to not hit that error?
image: ubuntu:18.04

definitions:
  steps:
    - step: &build
        name: npm-build
        condition:
          changesets:
            includePaths:
              # Only run npm if anything in the build directory was touched
              - "build/**"
        image: node:14.17.5
        script:
          - echo 'build initiated'
          - cd build
          - npm install
          - npm run dev
          - echo 'build complete'
        artifacts:
          - themes/factor/css/**
          - themes/factor/js/**
    - step: &deploychanges
        name: Deploy_Changes
        deployment: Staging
        script:
          - echo 'Installing server dependencies'
          - apt-get update -q
          - apt-get install -qy software-properties-common
          - add-apt-repository -y ppa:git-ftp/ppa
          - apt-get update -q
          - apt-get install -qy git-ftp
          - echo 'All dependencies installed'
          - echo 'Transferring changes'
          - git ftp init --user $FTP_USER --passwd $FTP_PASSWORD $FTP_ADDRESS push --force --changed-only -vv
          - echo 'File transfer complete'
    - step: &deploycompiled
        name: Deploy_Compiled
        deployment: Staging
        condition:
          changesets:
            includePaths:
              # Only run npm if anything in the build directory was touched
              - "build/**"
        script:
          - echo 'Installing server dependencies'
          - apt-get update -q
          - apt-get install -qy software-properties-common
          - add-apt-repository -y ppa:git-ftp/ppa
          - apt-get update -q
          - apt-get install -qy git-ftp
          - echo 'All dependencies installed'
          - echo 'Transferring compiled assets'
          - git ftp init --user $FTP_USER --passwd $FTP_PASSWORD $FTP_ADDRESS push --all --syncroot themes/factor/css/ -vv
          - git ftp init --user $FTP_USER --passwd $FTP_PASSWORD $FTP_ADDRESS push --all --syncroot themes/factor/js/ -vv
          - echo 'File transfer complete'

pipelines:
  branches:
    master:
      - step: *build
      - step:
          <<: *deploychanges
          deployment: Production
      - step:
          <<: *deploycompiled
          deployment: Production
    dev:
      - step: *build
      - step: *deploychanges
      - step: *deploycompiled
The workaround I have found for reusing environment variables without attaching the deployment clause to more than one step in a pipeline is to dump the environment variables to a file and save it as an artifact that is sourced in the following steps.
The code snippet for it would look like:
steps:
  - step: &set-environment-variables
      name: 'Set environment variables'
      script:
        - printenv | xargs echo export > .envs
      artifacts:
        - .envs
  - step: &next-step
      name: "Next step in the pipeline"
      script:
        - source .envs
        - next_actions

pipelines:
  pull-requests:
    '**':
      - step:
          <<: *set-environment-variables
          deployment: my-deployment
      - step:
          <<: *next-step
          name: "Name of the next step being executed"
  branches:
    staging:
      - step:
          <<: *set-environment-variables
          deployment: my-deployment
      - step:
          <<: *next-step
          name: "Name of the next step being executed"
So far this solution works for me.
Update:
After hitting an issue where "%s" appeared in the .envs file and caused the later source .envs statement to fail, here is a slightly different approach to the initial step. It gets around that particular issue and also exports only the variables you know you need in your pipeline. Note that many of the Bitbucket environment variables available to the first script step will naturally be available to later scripts anyway, so it makes more sense (to me, anyway) not to dump every environment variable into the .envs artifact, but to do it in a much more controlled manner.
- step: &set-environment-variables
    name: 'Set environment variables'
    script:
      - echo "export SSH_USER=$SSH_USER" > .envs
      - echo "export SERVER_IP=$SERVER_IP" >> .envs
      - echo "export ANOTHER_ENV_VAR=$ANOTHER_ENV_VAR" >> .envs
    artifacts:
      - .envs
In this example, .envs will now contain only those three environment variables, and not a whole heap of system and Bitbucket variables (and, of course, no pesky %s characters either).
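As a quick illustration of how a later step consumes that artifact (this is my own sketch, not part of the original answer; the ssh command and script name are placeholders):
- step: &deploy-over-ssh
    name: "Deploy over SSH"
    script:
      - source .envs
      # SSH_USER and SERVER_IP come from the .envs artifact produced above;
      # deploy.sh is a hypothetical script on the target server
      - ssh $SSH_USER@$SERVER_IP './deploy.sh'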
Normally you deploy to an environment in a single step, so that one step is where you should attach your deployment group. This is how Bitbucket tracks whether the deployment of the code happened or not. You can have multiple steps where one runs unit tests, another runs integration tests, another builds the binaries, and the last one deploys the artifact to the environment, marking the deployment group. See the example below.
definitions:
  steps:
    - step: &test-vizdom-services
        name: "Vizdom services unit Tests"
        image: mcr.microsoft.com/dotnet/core/sdk:3.1
        script:
          - cd ./vizdom/vizdom.services.Tests
          - dotnet test vizdom.services.Tests.csproj

pipelines:
  custom:
    DEV-AWS-api-deploy:
      - step: *test-vizdom-services
      - step:
          name: "Vizdom Webapi unit Tests"
          image: mcr.microsoft.com/dotnet/core/sdk:3.1
          script:
            - export ASPNETCORE_ENVIRONMENT=Dev
            - cd ./vizdom/vizdom.webapi.tests
            - dotnet test vizdom.webapi.tests.csproj
      - step:
          deployment: DEV-API
          name: "API: Build > Zip > Upload > Deploy"
          image: mcr.microsoft.com/dotnet/core/sdk:3.1
          script:
            - apt-get update
            - apt-get install zip -y
            - mkdir -p ~/deployment/release_dll
            - cd ./vizdom/vizdom.webapi
            - cp -r ../shared_settings ~/deployment
            - dotnet publish vizdom.webapi.csproj -c Release -o ~/deployment/release_dll
            - cp Dockerfile ~/deployment/
            - cp -r deployment_scripts ~/deployment
            - cp deployment_scripts/appspec_dev.yml ~/deployment/appspec.yml
            - cd ~/deployment
            - zip -r $BITBUCKET_CLONE_DIR/dev-webapi-$BITBUCKET_BUILD_NUMBER.zip .
            - cd $BITBUCKET_CLONE_DIR
            - pipe: atlassian/aws-code-deploy:0.5.3
              variables:
                AWS_DEFAULT_REGION: 'us-east-1'
                AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
                AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
                COMMAND: 'upload'
                APPLICATION_NAME: 'ss-webapi'
                ZIP_FILE: 'dev-webapi-$BITBUCKET_BUILD_NUMBER.zip'
                S3_BUCKET: 'ss-codedeploy-repo'
                VERSION_LABEL: 'dev-webapi-$BITBUCKET_BUILD_NUMBER.zip'
            - pipe: atlassian/aws-code-deploy:0.5.3
              variables:
                AWS_DEFAULT_REGION: 'us-east-1'
                AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
                AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
                COMMAND: 'deploy'
                APPLICATION_NAME: 'ss-webapi'
                DEPLOYMENT_GROUP: 'dev-single-instance'
                WAIT: 'false'
                S3_BUCKET: 'ss-codedeploy-repo'
                VERSION_LABEL: 'dev-webapi-$BITBUCKET_BUILD_NUMBER.zip'
So as you can see, I have multiple steps for running test cases, but I build the binaries and deploy the code in the final step. I could have broken it into separate steps, but I don't want to waste minutes on yet another step, because cloning and copying the artifact takes some time. Right now there are three steps; it could have been broken into four, where the fourth one would have been the deployment step. I hope this brings some clarity.
Also you can modify the names of the deployment groups as per your needs and can have up to 50 deployment groups :)
Little did I know, it's intentional that deployment happens in one step, and you can only define the deployment environment on one step. The following setup is what worked for us (plus the appropriate separate git-ftp files):
image: ubuntu:18.04

definitions:
  steps:
    - step: &build
        name: Build
        condition:
          changesets:
            includePaths:
              # Only run npm if anything in the build directory was touched
              - "build/**"
        image: node:15.0.1
        script:
          - echo 'build initiated'
          - cd build
          - npm install
          - npm run prod
          - echo 'build complete'
        artifacts:
          - themes/factor/css/**
          - themes/factor/js/**
    - step: &deploy
        name: Deploy
        deployment: Staging
        script:
          - echo 'Installing server dependencies'
          - apt-get update -q
          - apt-get install -qy software-properties-common
          - add-apt-repository -y ppa:git-ftp/ppa
          - apt-get update -q
          - apt-get install -qy git-ftp
          - echo 'All dependencies installed'
          - echo 'Transferring changes'
          - git ftp init --user $FTP_USER --passwd $FTP_PASSWORD $FTP_ADDRESS push --force --changed-only -vv
          - echo 'File transfer complete'

pipelines:
  branches:
    master:
      - step: *build
      - step:
          <<: *deploy
          deployment: Production
    dev:
      - step: *build
      - step: *deploy
I guess we cannot combine the deployment with either artifacts or cache.
If I use a standalone deployment, I can use the same deployment for multiple steps (as in my screenshot).
As soon as I add cache/artifacts, I get the same error as yours.
Just got the same issue today.
I don't think there's currently a solution for this other than rewriting the steps so that you don't run two steps in one environment.
Waiting on https://jira.atlassian.com/browse/BCLOUD-18261, which is planned to be released in July.
Related https://community.atlassian.com/t5/Bitbucket-questions/The-deployment-environment-test-in-your-bitbucket-pipelines-yml/qaq-p/971584
This is currently not available. They do have a ticket, and it says it's being worked on. The best workaround currently appears to be creating multiple deployment environments (with duplicated variables) for steps that use the same variables.
Ex:
- step:
    <<: *InitialSetup
    deployment: Beta-Setup
- step:
    <<: *Build
    deployment: Beta-Build
From the comments on the ticket:
Hey everyone,
I know this is a long-winded workaround, and someone has probably already mentioned it, but I got around the issue by setting up "sub environments", one for each step. E.g. instead of having a "Staging" environment, I set up a "Staging Build" and "Staging Deploy" environment, and just had to duplicate the variables if necessary. I did the same for production.
Having to set up and maintain all these environments and variables can be a pain, but one can automate this to prevent human error by setting up an OAuth client tool that interfaces with the API (you just need the "pipelines" scope), if one can be bothered to go to the effort (as I have: https://blog.programster.org/bitbucket-create-oauth-client-credentials).
I can't wait for this feature to be completed as that is the "real" solution, and a lot less effort!
In your case, you can resolve the error with either of the following options:
Combine all steps into one big step, or
Create different deployment variable groups, e.g. Staging DeployChanges and Staging DeployCompiled; this may lead to duplicated variables.
ex:
- step: &deploychanges
    name: Deploy_Changes
    deployment: Staging DeployChanges
    script:
      - ....
- step: &deploycompiled
    name: Deploy_Compiled
    deployment: Staging DeployCompiled
    ....

How to retain environment for different jobs in azure pipeline

A stage can keep the running environment for all of its jobs, but I have several different jobs that cannot logically be grouped into the same stage. All of these jobs need the same running environment, and I don't want to repeat the following code in each job. How can I abstract these steps into some sort of function and call it from each job? Or how can I create a shared environment for those jobs? Or how can I retain the environment instead of grouping them all into the same stage?
steps:
  - bash: echo "##vso[task.prependpath]$CONDA/bin"
    displayName: Add Conda to PATH
  - bash: conda env update -f environment.yml --name $(Agent.Id)
    displayName: Create Conda Environment
  - bash: export PYTHONPATH="src/"
    displayName: Add /src to PYTHONPATH
  - bash: source activate $(Agent.Id)
    displayName: Activate Test Environment
You can put the above steps into a template YAML file and use step templates to reference it from your main pipeline YAML file.
For example, create a template file setEnv.yml with the above code.
# File: setEnv.yml
steps:
  - bash: echo "##vso[task.prependpath]$CONDA/bin"
    displayName: Add Conda to PATH
  - bash: conda env update -f environment.yml --name $(Agent.Id)
    displayName: Create Conda Environment
  ...
Then use a template reference to include the above file in each job:
# File: azure-pipelines.yml
stages:
  - stage: A
    jobs:
      - job: macOS
        pool:
          vmImage: 'macOS-10.14'
        steps:
          - template: setEnv.yml # Template reference
          - othertasks:
  - stage: B
    jobs:
      - job: Linux
        pool:
          vmImage: 'ubuntu-16.04'
        steps:
          - template: setEnv.yml # Template reference
          - othertasks:
Check the Azure Pipelines template documentation for more information.
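If the jobs need slightly different environments, the same template can take parameters. This is only a sketch; the parameter name condaEnvName is an assumption, not part of the original answer:
# File: setEnv.yml
parameters:
  - name: condaEnvName
    type: string
    default: '$(Agent.Id)'

steps:
  - bash: echo "##vso[task.prependpath]$CONDA/bin"
    displayName: Add Conda to PATH
  - bash: conda env update -f environment.yml --name ${{ parameters.condaEnvName }}
    displayName: Create Conda Environment
A job would then pass its own value:
steps:
  - template: setEnv.yml
    parameters:
      condaEnvName: myenv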

Integrating ansible with gitlab

I want to deploy the artifacts stored by the previous job to another server.
I am using Ansible to deploy the artifacts.
This is my deploy job:
deploy:
  image: williamyeh/ansible:ubuntu18.04
  image: linuxserver/openssh-server
  image: python
  stage: deploy
  variables:
    ANSIBLE_HOST_KEY_CHECKING: "False"
  script:
    - printf "[sample]\n host1 ansible_host=10.45.1.21 ansible_user=csrtest ansible_password=admin123 ansible_python_interpreter=/usr/bin/python3 ansible_become=yes ansible_sudo_pass=admin123 http_port=8080\n" > /etc/ansible/hosts
    - cat /etc/ansible/hosts
    - ansible --version
    - ls
    - ansible all -m ping
    - ansible-playbook playbook.yaml
I listed the above three Docker images, but only the last one is pulled and used.
How can I solve this? Can anyone please reply?
Thanks in advance.
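A minimal sketch of one way around this, assuming the Ubuntu-based Ansible image from the job above and installing the remaining tools inside it (this is an illustration, not an answer from the original thread; package names are assumptions):
deploy:
  # A GitLab CI job runs in exactly one image, so pick one and add the missing tools
  image: williamyeh/ansible:ubuntu18.04
  stage: deploy
  variables:
    ANSIBLE_HOST_KEY_CHECKING: "False"
  before_script:
    - apt-get update -q && apt-get install -qy openssh-client python3
  script:
    - ansible-playbook playbook.yaml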

Gitlab executing next stage even if one fails (with dependency)

I'm setting up a GitLab pipeline with multiple stages and at least two sites per stage. How do I conditionally allow the next stage to continue even if one site in the first stage failed (and therefore the whole stage is marked as failed)? For example: I want to prepare, build and test, and I want to do this on a Windows and a Linux runner. So if my Linux runner fails during preparation but my Windows runner succeeds, the next stage should start without building the Linux package (because that already failed), but the Windows build should start.
My intention is that if one system fails, at least the second is able to continue.
I added dependencies and I thought that this would solve my problem, because if "build windows" depends on "prepare windows", then it shouldn't matter whether "prepare linux" failed. But this isn't the case :/
image: node:10.6.0

stages:
  - prepare
  - build
  - test

prepare windows:
  stage: prepare
  tags:
    - windows
  script:
    - npm i
    - git submodule foreach 'npm i'

prepare linux:
  stage: prepare
  tags:
    - linux
  script:
    - npm i
    - git submodule foreach 'npm i'

build windows:
  stage: build
  tags:
    - windows
  script:
    - npm run build-PWA
  dependencies:
    - prepare windows

build linux:
  stage: build
  tags:
    - linux
  script:
    - npm run build-PWA
  dependencies:
    - prepare linux

unit windows:
  stage: test
  tags:
    - windows
  script:
    - npm run test
  dependencies:
    - build windows
  artifacts:
    paths:
      - dist/
      - package.json
    expire_in: 5 days

unit linux:
  stage: test
  tags:
    - linux
  script:
    - npm run test
  dependencies:
    - build linux
  artifacts:
    paths:
      - dist/
      - package.json
    expire_in: 5 days
See the allow_failure option:
allow_failure allows a job to fail without impacting the rest of the CI suite.
Example:
job1:
  stage: test
  script:
    - execute_script_that_will_fail
  allow_failure: true
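Applied to the pipeline from the question, that would mean marking the Linux jobs so their failure doesn't block the next stage. A sketch, covering only the prepare job:
prepare linux:
  stage: prepare
  tags:
    - linux
  # Let this job fail without blocking the build stage
  allow_failure: true
  script:
    - npm i
    - git submodule foreach 'npm i'
Note that the later Linux jobs will still be scheduled; allow_failure only keeps a failure from blocking the rest of the pipeline.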

Ansible syntax issue running shell command and docker_container module

What I have is valid YAML, but for some reason it's not valid for Ansible. The documentation on the Ansible site has some great examples of running the modules, but there isn't any documentation on how the modules work together, or whether they can. I'm assuming I can run a shell module and a docker_container module in the same task, but Ansible appears to disagree with me. Here's what I have:
---
- name: Setup Rancher Container
  shell: sudo su_root
  docker_container:
    name: rancherserver
    image: rancher/server
    state: started
    restart_policy: always
    published_ports: 8080:8080
...
ERROR! conflicting action statements
The error appears to have been in '/opt/ansible_scripts/ansible/roles/dockermonitoring/tasks/main.yml': line 2, column 3, but may be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
---
- name: Setup Rancher Container
^ here
Because I'm running this on RHEL 7, I need to run the sudo su_root script to become root before Ansible can talk to the Docker API, since Docker runs as root.
So if I can't run this script and then run docker_container, I think that's a big problem with Ansible.
I'm assuming I can run a shell module and a docker_container module in the same task
Wrong. One task – one module.
There's the become parameter to get privileged access, like:
- name: Setup Rancher Container
  become: yes
  docker_container:
    name: rancherserver
    ...
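Putting it together with the parameters from the question, the whole thing becomes a single task; treat this as a sketch rather than a tested playbook (note that published_ports is normally written as a list):
- name: Setup Rancher Container
  become: yes              # replaces the separate "shell: sudo su_root" task
  docker_container:
    name: rancherserver
    image: rancher/server
    state: started
    restart_policy: always
    published_ports:
      - "8080:8080"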
