CircleCI: Second job never runs? - continuous-integration

New to the world of CircleCI and cannot seem to get anything apart from the first job to run.
I've tried all sorts of things, from removing line breaks to renaming the job to "test" to swapping the order of the two jobs, but nothing works.
Is there something I need to change in the project config such as defining jobs ahead of time?
.circleci/config.yml
version: 2.0
jobs:
  build:
    docker:
      # specify the version you desire here
      - image: circleci/ruby:2.5.0-node-browsers
      # Specify service dependencies here if necessary
      # CircleCI maintains a library of pre-built images
      # documented at https://circleci.com/docs/2.0/circleci-images/
      # - image: circleci/postgres:9.4
    working_directory: ~/repo
    steps:
      - checkout
      # Download and cache dependencies
      - restore_cache:
          keys:
            - v1-dependencies-{{ checksum "Gemfile.lock" }}
            # fallback to using the latest cache if no exact match is found
            - v1-dependencies-
      - run:
          name: install bundler
          command: gem install bundler
      - run:
          name: install dependencies
          command: |
            bundle install --jobs=4 --retry=3 --path vendor/bundle
      - save_cache:
          paths:
            - ./vendor/bundle
          key: v1-dependencies-{{ checksum "Gemfile.lock" }}
  precompile_assets:
    machine: true
    working_directory: ~/repo
    steps:
      - run:
          name: Precompile assets for public folder
          command: rails assets:precompile

disclaimer: Developer Evangelist at CircleCI
You're missing the Workflows section of the config, the part that tells CircleCI which jobs are available and how to run them.
https://circleci.com/docs/2.0/workflows/
Also, the precompile_assets job will fail because it doesn't have any files from your repo. You'd need to add a checkout step, or use workspaces (also covered in the link above), to provide it with files.
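For reference, a minimal sketch of what that might look like for your config (the workflow name build_and_precompile is just a placeholder), using a workspace to pass the checked-out files from build to precompile_assets:

workflows:
  version: 2
  build_and_precompile:
    jobs:
      - build
      - precompile_assets:
          requires:
            - build

      # appended to the build job's steps:
      - persist_to_workspace:
          root: ~/repo
          paths:
            - .

      # prepended to the precompile_assets job's steps:
      - attach_workspace:
          at: ~/repo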

Related

Stop auto-triggering CircleCI pipeline

I have configured my Cypress test suite in a CircleCI pipeline. The issue I am facing is that when I push my git branch or accept a pull request for the branch connected to the pipeline, it starts running the tests. I don't want that; I'd prefer to run the pipeline manually from the CircleCI dashboard. Can someone guide me on how to fix this? My CircleCI YAML file is attached below.
version: 2
jobs:
  - request-testing:
      type: approval
  build:
    docker:
      - image: cypress/base:14.16.0
        environment:
          ## this enables colors in the output
          TERM: xterm
    working_directory: ~/app
    parallelism: 4
    resource_class: large
    steps:
      - checkout
      - restore_cache:
          keys:
            - v1-deps-{{ .Branch }}-{{ checksum "package.json" }}
            - v1-deps-{{ .Branch }}
            - v1-deps
      - run:
          name: Install Dependencies
          command: npm ci
      - save_cache:
          key: v1-deps-{{ .Branch }}-{{ checksum "package.json" }}
          # cache NPM modules and the folder with the Cypress binary
          paths:
            - ~/.npm
            - ~/.cache
      #run: $(npm bin)/cypress run
      - run: $(npm bin)/cypress run --parallel --record --key 4
The simple way would be to edit the CircleCI webhook in your VCS repository settings, and unselect both the "Pushes" and "Pull requests" events.
This way there won't be any webhook delivery sent to CircleCI for these types of events, and therefore no build will be triggered.
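Alternatively, if you'd rather keep the webhooks and gate the run inside CircleCI, the request-testing approval job you started to define belongs in a workflows section, not under jobs. A rough sketch (assuming your existing build job):

workflows:
  version: 2
  manual-testing:
    jobs:
      - request-testing:
          type: approval
      - build:
          requires:
            - request-testing

With this, build only starts after someone clicks Approve on request-testing in the CircleCI UI.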

The deployment environment 'Staging' in your bitbucket-pipelines.yml file occurs multiple times in the pipeline

I'm trying to get Bitbucket Pipelines to work with multiple steps that define the deployment environment. When I do, I get the error
Configuration error The deployment environment 'Staging' in your
bitbucket-pipelines.yml file occurs multiple times in the pipeline.
Please refer to our documentation for valid environments and their
ordering.
From what I read, the deployment setting has to be applied on a step-by-step basis.
How would I set up this example pipelines file to not hit that error?
image: ubuntu:18.04
definitions:
  steps:
    - step: &build
        name: npm-build
        condition:
          changesets:
            includePaths:
              # Only run npm if anything in the build directory was touched
              - "build/**"
        image: node:14.17.5
        script:
          - echo 'build initiated'
          - cd build
          - npm install
          - npm run dev
          - echo 'build complete'
        artifacts:
          - themes/factor/css/**
          - themes/factor/js/**
    - step: &deploychanges
        name: Deploy_Changes
        deployment: Staging
        script:
          - echo 'Installing server dependencies'
          - apt-get update -q
          - apt-get install -qy software-properties-common
          - add-apt-repository -y ppa:git-ftp/ppa
          - apt-get update -q
          - apt-get install -qy git-ftp
          - echo 'All dependencies installed'
          - echo 'Transferring changes'
          - git ftp init --user $FTP_USER --passwd $FTP_PASSWORD $FTP_ADDRESS push --force --changed-only -vv
          - echo 'File transfer complete'
    - step: &deploycompiled
        name: Deploy_Compiled
        deployment: Staging
        condition:
          changesets:
            includePaths:
              # Only run npm if anything in the build directory was touched
              - "build/**"
        script:
          - echo 'Installing server dependencies'
          - apt-get update -q
          - apt-get install -qy software-properties-common
          - add-apt-repository -y ppa:git-ftp/ppa
          - apt-get update -q
          - apt-get install -qy git-ftp
          - echo 'All dependencies installed'
          - echo 'Transferring compiled assets'
          - git ftp init --user $FTP_USER --passwd $FTP_PASSWORD $FTP_ADDRESS push --all --syncroot themes/factor/css/ -vv
          - git ftp init --user $FTP_USER --passwd $FTP_PASSWORD $FTP_ADDRESS push --all --syncroot themes/factor/js/ -vv
          - echo 'File transfer complete'
pipelines:
  branches:
    master:
      - step: *build
      - step:
          <<: *deploychanges
          deployment: Production
      - step:
          <<: *deploycompiled
          deployment: Production
    dev:
      - step: *build
      - step: *deploychanges
      - step: *deploycompiled
The workaround I have found for reusing environment variables without attaching the deployment clause to more than one step in a pipeline is to dump the env vars to a file and save it as an artifact that is sourced in the following steps.
The code snippet for it would look like:
steps:
  - step: &set-environment-variables
      name: 'Set environment variables'
      script:
        - printenv | xargs echo export > .envs
      artifacts:
        - .envs
  - step: &next-step
      name: "Next step in the pipeline"
      script:
        - source .envs
        - next_actions
pipelines:
  pull-requests:
    '**':
      - step:
          <<: *set-environment-variables
          deployment: my-deployment
      - step:
          <<: *next-step
          name: "Name of the next step being executed"
  branches:
    staging:
      - step:
          <<: *set-environment-variables
          deployment: my-deployment
      - step:
          <<: *next-step
          name: "Name of the next step being executed"
So far this solution works for me.
Update:
After having an issue with "%s" appearing in the .envs file, which caused the later source .envs statement to fail, here is a slightly different approach to the initial step. It gets around that particular issue and also exports only those variables you know you need in your pipeline. Note that there are a lot of Bitbucket environment variables available to the first script step which will be available naturally to later scripts anyway, and it makes more sense (to me anyway) not to dump all environment variables to the .envs artifact, but to do it in a much more controlled manner.
- step: &set-environment-variables
    name: 'Set environment variables'
    script:
      - echo "export SSH_USER=$SSH_USER" > .envs
      - echo "export SERVER_IP=$SERVER_IP" >> .envs
      - echo "export ANOTHER_ENV_VAR=$ANOTHER_ENV_VAR" >> .envs
    artifacts:
      - .envs
In this example, .envs will now contain only those 3 environment variables, and not a whole heap of system + bitbucket variables (and of course, no pesky %s characters either!)
Normally you deploy to an environment in one step, so the "deployment" group should be attached to that one step specifically. This is how Bitbucket tracks whether the deployment of the code happened or not. You can have multiple steps where, for example, one runs unit tests, another runs integration tests, another builds the binaries, and the last one deploys the artifact to the environment and carries the deployment group. See the example below.
definitions:
  steps:
    - step: &test-vizdom-services
        name: "Vizdom services unit Tests"
        image: mcr.microsoft.com/dotnet/core/sdk:3.1
        script:
          - cd ./vizdom/vizdom.services.Tests
          - dotnet test vizdom.services.Tests.csproj
pipelines:
  custom:
    DEV-AWS-api-deploy:
      - step: *test-vizdom-services
      - step:
          name: "Vizdom Webapi unit Tests"
          image: mcr.microsoft.com/dotnet/core/sdk:3.1
          script:
            - export ASPNETCORE_ENVIRONMENT=Dev
            - cd ./vizdom/vizdom.webapi.tests
            - dotnet test vizdom.webapi.tests.csproj
      - step:
          deployment: DEV-API
          name: "API: Build > Zip > Upload > Deploy"
          image: mcr.microsoft.com/dotnet/core/sdk:3.1
          script:
            - apt-get update
            - apt-get install zip -y
            - mkdir -p ~/deployment/release_dll
            - cd ./vizdom/vizdom.webapi
            - cp -r ../shared_settings ~/deployment
            - dotnet publish vizdom.webapi.csproj -c Release -o ~/deployment/release_dll
            - cp Dockerfile ~/deployment/
            - cp -r deployment_scripts ~/deployment
            - cp deployment_scripts/appspec_dev.yml ~/deployment/appspec.yml
            - cd ~/deployment
            - zip -r $BITBUCKET_CLONE_DIR/dev-webapi-$BITBUCKET_BUILD_NUMBER.zip .
            - cd $BITBUCKET_CLONE_DIR
            - pipe: atlassian/aws-code-deploy:0.5.3
              variables:
                AWS_DEFAULT_REGION: 'us-east-1'
                AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
                AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
                COMMAND: 'upload'
                APPLICATION_NAME: 'ss-webapi'
                ZIP_FILE: 'dev-webapi-$BITBUCKET_BUILD_NUMBER.zip'
                S3_BUCKET: 'ss-codedeploy-repo'
                VERSION_LABEL: 'dev-webapi-$BITBUCKET_BUILD_NUMBER.zip'
            - pipe: atlassian/aws-code-deploy:0.5.3
              variables:
                AWS_DEFAULT_REGION: 'us-east-1'
                AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
                AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
                COMMAND: 'deploy'
                APPLICATION_NAME: 'ss-webapi'
                DEPLOYMENT_GROUP: 'dev-single-instance'
                WAIT: 'false'
                S3_BUCKET: 'ss-codedeploy-repo'
                VERSION_LABEL: 'dev-webapi-$BITBUCKET_BUILD_NUMBER.zip'
So, as you can see, I have multiple steps for running test cases, but I build the binaries and deploy the code in the final step. I could have broken that into separate steps, but I don't want to waste minutes on an extra step, since cloning and copying the artifact takes some time. Right now there are three steps; it could have been broken into four, where the fourth would have been the deployment step. I hope this brings some clarity.
Also, you can modify the names of the deployment groups as per your needs, and you can have up to 50 deployment groups :)
As it turns out, it's intentional that deployment happens in one step: you can only define the deployment environment on one step. The following setup is what worked for us (plus the appropriate separate git-ftp files):
image: ubuntu:18.04
definitions:
  steps:
    - step: &build
        name: Build
        condition:
          changesets:
            includePaths:
              # Only run npm if anything in the build directory was touched
              - "build/**"
        image: node:15.0.1
        script:
          - echo 'build initiated'
          - cd build
          - npm install
          - npm run prod
          - echo 'build complete'
        artifacts:
          - themes/factor/css/**
          - themes/factor/js/**
    - step: &deploy
        name: Deploy
        deployment: Staging
        script:
          - echo 'Installing server dependencies'
          - apt-get update -q
          - apt-get install -qy software-properties-common
          - add-apt-repository -y ppa:git-ftp/ppa
          - apt-get update -q
          - apt-get install -qy git-ftp
          - echo 'All dependencies installed'
          - echo 'Transferring changes'
          - git ftp init --user $FTP_USER --passwd $FTP_PASSWORD $FTP_ADDRESS push --force --changed-only -vv
          - echo 'File transfer complete'
pipelines:
  branches:
    master:
      - step: *build
      - step:
          <<: *deploy
          deployment: Production
    dev:
      - step: *build
      - step: *deploy
I guess we cannot combine deployment with either artifacts or cache.
If I use a standalone deployment step, I can use the same deployment for multiple steps (as in my screenshot).
As soon as I add cache/artifacts, I get the same error as yours.
Just got the same issue today.
I don't think there's currently a solution for this, except rewriting the steps so that no two steps run in the same environment.
Waiting on https://jira.atlassian.com/browse/BCLOUD-18261, which is planned to be released in July.
Related: https://community.atlassian.com/t5/Bitbucket-questions/The-deployment-environment-test-in-your-bitbucket-pipelines-yml/qaq-p/971584
This is currently not available. They do have a ticket, and it says it's being worked on. The best workaround currently appears to be creating multiple deployment variable environments for steps that use the same variables.
Ex:
- step:
    <<: *InitialSetup
    deployment: Beta-Setup
- step:
    <<: *Build
    deployment: Beta-Build
From the comments on the ticket:
Hey everyone,
I know this is a long-winded workaround, and someone has probably already mentioned it, but I got around the issue by setting up "sub environments", one for each step. E.g. instead of having a "Staging" environment, I set up a "Staging Build" and "Staging Deploy" environment, and just had to duplicate the variables if necessary. I did the same for production.
Having to setup and maintain all these environments and variables can be a pain, but one can automate this to prevent human error, through setting up an OAuth client tool that interfaces with the API (you just need the "pipelines" scope), if one can be bothered to go to the effort (as I have: https://blog.programster.org/bitbucket-create-oauth-client-credentials).
I can't wait for this feature to be completed as that is the "real" solution, and a lot less effort!
In your case, you can resolve the error with either of the following options:
Combine all steps into one big step.
Or create different deployment variable groups, e.g. Staging DeployChanges and Staging DeployCompiled, though this may lead to duplicated variables.
ex:
- step: &deploychanges
    name: Deploy_Changes
    deployment: Staging DeployChanges
    script:
      - ....
- step: &deploycompiled
    name: Deploy_Compiled
    deployment: Staging DeployCompiled
    ....

CircleCI error when attempting to restore/save cache

I'm configuring CircleCI to try and cache dependencies so I don't have to run yarn install on every single commit.
This is what my config.yml file looks like:
version: 2.1
jobs:
  build-and-test-frontend:
    docker:
      - image: circleci/node:14
    steps:
      - checkout
      - restore_cache:
          name: Restore Yarn Package Cache
          keys:
            - yarn-packages-{{ checksum "yarn.lock" }}
      - run:
          working_directory: ./frontend/tests
          name: Run jest tests
          command: |
            yarn install --frozen-lockfile --cache-folder ~/.
            yarn test
      - save_cache:
          name: Save Yarn Package Cache
          key: yarn-packages-{{ checksum "yarn.lock" }}
          paths:
            - ~/.cache/yarn
workflows:
  sample:
    jobs:
      - build-and-test-frontend
But when either restore_cache or save_cache attempts to run, I get the following error:
error computing cache key: template: cacheKey:1:17: executing "cacheKey" at <checksum "yarn.lock">: error calling checksum: open /home/circleci/project/yarn.lock: no such file or directory
I'm brand new to using CircleCI so I'm not sure how to interpret this. What can I do to fix this?
EDIT:
This is the structure of my directory:
--project_root
   |
   |--frontend
      |-node_modules/
      |-public/
      |-src/
      |-tests/
      |-package.json
      |-yarn.lock
It's hard for me to give a great answer since I can't see the files in your repo, but the config you have now suggests that the yarn.lock file is not in the root of the repo but rather in ./frontend/tests.
If that's where it is and that's where you want to keep it, then I'd suggest moving the working_directory key from the step level to the job level. This will then apply it to every step, including the caching steps, and they should find the file they are looking for.
Update:
Thanks for the repo tree. According to that you likely want to have your config like this:
version: 2.1
workflows:
  sample:
    jobs:
      - build-and-test-frontend
jobs:
  build-and-test-frontend:
    docker:
      - image: cimg/node:14.17
    working_directory: ./frontend
    steps:
      - checkout
      - restore_cache:
          name: Restore Yarn Package Cache
          keys:
            - yarn-packages-{{ checksum "yarn.lock" }}
      - run:
          name: Run jest tests
          command: |
            yarn install --frozen-lockfile --cache-folder ~/.
            yarn test
      - save_cache:
          name: Save Yarn Package Cache
          key: yarn-packages-{{ checksum "yarn.lock" }}
          paths:
            - ~/.cache/yarn
You'll notice a few things here:
I moved workflows to the top. That's just a personal stylistic choice, but I believe it helps keep your config readable as it grows.
I moved working_directory to the job level instead of the step it was on.
I set working_directory to the frontend directory. Most filepaths on CircleCI will be relative to working_directory. Since that's where yarn.lock is, that's where I set it.
I changed the image from circleci/node:14 to cimg/node:14.17. The images in the circleci namespace are deprecated. Going forward, you'll want to use the newer CircleCI images, which are in the cimg namespace.

Gitlab cache not uploading due to policy

I am getting the error Not uploading cache {name of branch} due to policy in my gitlab runner. My .yaml file looks like this:
stages:
  - test
  - staging
  - publish
  - deploy
# cache using branch name
# https://gitlab.com/help/ci/caching/index.md
cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - .yarn
    - node_modules/
  policy: pull
before_script:
  - yarn install --cache-folder .yarn
test:
  stage: test
  image: node:14
  script:
    - yarn install
    - yarn test
pages:
  stage: staging
  image: alekzonder/puppeteer
  except:
    - master
  script:
    - yarn install
    - yarn compile
    - yarn build
publish:
  stage: publish
  image: alekzonder/puppeteer
  when: manual
  script:
    - yarn install
    - yarn compile
    - yarn build
  artifacts:
    paths:
      - dist
deploy:
  image: google/cloud-sdk:latest
  stage: deploy
  needs:
    - publish
  script:
    - gcloud auth activate-service-account --account ${GCP_SERVICE_ACCOUNT} --key-file ${GOOGLE_APPLICATION_CREDENTIALS}
    - gsutil -u test rsync -r -d dist/ gs://test.ca
I was wondering why it always fails to upload, and thereby fails to extract the cache. Any tips or corrections welcome. Here is a screenshot of where it fails:
You have the following set:
cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - .yarn
    - node_modules/
  policy: pull
This sets a pipeline-global default saying you only want to pull from the cache (policy: pull).
You'll want to read https://docs.gitlab.com/ee/ci/yaml/#cachepolicy
If you omit the policy: piece, the default is to pull-push (which will get your cache uploading).
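In other words, simply dropping the policy line from your global cache block restores the default pull-push behavior:

cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - .yarn
    - node_modules/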
I tend to have my pipelines structured a little different than yours, though. I typically have a "prep" step that I define, and then run the yarn install once there:
"Installing Dependencies":
image: node:lts
stage: Prep
cache:
paths:
- .yarn
- node_modules/
policy: pull-push
script:
yarn install
...
Note: Then you can leave your global policy of 'pull', since this one job will have an override to pull-push.
Then, you can remove the yarn install on all other tasks, as the cache will be restored.
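To make that concrete, a later job under this scheme might look like the following sketch (the job and stage names are placeholders); it only pulls the cache populated by the prep job and skips the install:

"Run Tests":
  image: node:lts
  stage: Test
  cache:
    paths:
      - .yarn
      - node_modules/
    policy: pull
  script:
    - yarn test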

GitLab CI : merge or replace cache?

I'm using GitLab CI.
I have 2 jobs in the build stage that build my app differently. Both jobs upload a cache for the branch. I use the compiled sources to launch some tests in the test stage.
build:
  stage: build
  script:
    - ./gradlew build --build-cache --quiet
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - "*/build"
build_with_different_conf:
  stage: build
  script:
    - ./gradlew buildDiff --build-cache --quiet
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - "*/build"
Test:
  stage: test
  script:
    - ./gradlew test --build-cache
In my example, the job build_with_different_conf takes more time to finish.
My question is: does the build job that finishes last upload its cache and replace the cache from the first build job, or does it merge its files with those of the preceding job?
Thanks.
From what I understand, you are using a global cache for Gradle dependencies, and on top of that you want some kind of job-to-job cache. I would do it this way, more or less.
stages:
  - build
  - test
cache:
  paths:
    - <your_gradle_cache>
build_classes:
  stage: build
  script:
    - ./gradlew build --build-cache --quiet
  artifacts:
    expire_in: 1d
    paths:
      - <your_build_dir>
build_war:
  stage: build
  dependencies:
    - build_classes
  script:
    - ./gradlew buildDiff --build-cache --quiet
  artifacts:
    expire_in: 1w
    paths:
      - <path_to_your_war>
test_classes:
  stage: test
  dependencies:
    - build_classes
  script:
    - ./gradlew test --build-cache
test_war:
  stage: test
  dependencies:
    - build_war
  script:
    - test # some kind of test to assure your war is in good condition
In this configuration:
build_classes --[.classes]--> build_war -> [.war]
      |                           |
  [.classes]                   [.war]
      |                           |
      v                           v
test_classes                  test_war
PS: Don't forget you can use the shell (or whatever your runner's OS provides) to debug and understand more about this. You can put ls -la all over the place.
build:
  stage: build
Jobs in the same stage run in parallel.
cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - "*/build"
Cache files are managed by cache:key. This means that if you use the same cache:key for different jobs, they'll share the same cache.zip file across jobs, even if you define different cache:paths. If you use the same key but different paths, your cache won't be effective, because every job will overwrite the cache.zip file with the contents of a different path.
In your case, you're using the same cache:key across different jobs:
cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - "*/build"
This means the last job to finish will overwrite the cache.zip file (not merge it), and that file will be used by the next job and by subsequent pipeline jobs that define the same key.
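If you want each build job to keep its own cache instead of overwriting a shared one, one option is to give each job a distinct key. A sketch (the -build and -diff suffixes are arbitrary placeholders):

build:
  cache:
    key: "${CI_COMMIT_REF_SLUG}-build"
    paths:
      - "*/build"
build_with_different_conf:
  cache:
    key: "${CI_COMMIT_REF_SLUG}-diff"
    paths:
      - "*/build"

Note this still isn't a merge: a downstream job restores whichever key it declares, so the Test job would only see one of the two caches unless it declares both.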
Bonus:
Test:
  stage: test
  script:
    - ./gradlew test --build-cache
Also beware that if this job requires the */build directory contents to exist, you have to be careful, and it's better to use artifacts instead. The cache doesn't always exist; it's provided as a best-effort delivery.
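As a sketch of that approach, the build jobs would publish */build as an artifact, which jobs in the test stage then receive reliably:

build:
  stage: build
  script:
    - ./gradlew build --build-cache --quiet
  artifacts:
    expire_in: 1 day
    paths:
      - "*/build"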
For example, I use GitLab CI's cache like this:
nodejs_test:
  stage: test
  image: node:12.13-alpine
  before_script:
    - npm install
  script:
    - yarn test
  cache:
    key:
      files:
        # New cache key will be computed on each package.json change.
        - package.json
    paths:
      - node_modules/
nodejs_build:
  stage: build
  image: node:12.13-alpine
  before_script:
    # In case we miss the cache, we can simply install packages again.
    # If the cache is there, npm install won't download them again.
    - npm install
  script:
    - yarn build
  cache:
    policy: pull # totally optional
    key:
      files:
        - package.json
      prefix: ${CI_COMMIT_REF_SLUG}
    paths:
      - node_modules/
