AppVeyor deployment to two different locations based on branch commit: YAML parse error - continuous-integration

I have a pretty simple scenario where I want to deploy to two different locations depending on whether the commit happens on the dev branch or master. Since it's impossible to have two different yml files on these branches (one overwrites the other every time), I came across this article:
https://www.appveyor.com/blog/2014/07/23/appveyor-yml-and-multiple-branches/
The article makes it clear we can use one yml file to set this up, however I get an error:
Error parsing appveyor.yml: (Line: 35, Col: 1, Idx: 554) - (Line: 35, Col: 9, Idx: 562): Duplicate key
Here is my yml:
image: Visual Studio 2017
environment:
  nodejs_version: "6"
platform:
  - x64
install:
  - ps: Install-Product node $env:nodejs_version
  - yarn install --no-progress
build_script:
  - yarn ng -- build --prod --aot --no-progress
cache:
  - node_modules -> yarn.lock
  - "%LOCALAPPDATA%/Yarn"
branches:
  only:
    - master
artifacts:
  path: '\dist\'
  name: NINJASPA
before_deploy:
  ssh root@ipadresshere -t "ls; rm -r -v /var/www/asp/ninjacodingfront/*; ls; exit; bash --login"
deploy:
  provider: Environment
  name: NinjaCodingFront
branches:
  only:
    - dev
artifacts:
  path: '\dist\'
  name: NINJASPADEV
before_deploy:
  ssh root@ipadresshere -t "ls; rm -r -v /var/www/asp/ninjacodingfrontdev/*; ls; exit; bash --login"
deploy:
  provider: Environment
  name: NinjaCodingFrontDev
Line 35 is where the second branches: section (the one for dev) starts:
branches: --------------- (line 35)
  only:
    - dev
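From what I can tell, the parser complains because a YAML mapping cannot repeat a key at the same level, and my file has branches:, artifacts:, before_deploy: and deploy: twice at the root. A minimal reproduction of the same error:
branches:
  only:
    - master
branches:          # second branches: at the root level -> "Duplicate key"
  only:
    - dev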
No idea what to do next, please help. Hope it's solvable. Thanks!

So finally, this is how it's done. The branch-specific settings move into for: sections, and each section only applies to builds whose branch matches its branches filter:
image: Visual Studio 2017
platform:
  - x64
environment:
  nodejs_version: "6"
install:
  - ps: Install-Product node $env:nodejs_version
  - yarn install --no-progress
build_script:
  - yarn ng -- build --prod --aot --no-progress
cache:
  - node_modules -> yarn.lock
  - "%LOCALAPPDATA%/Yarn"
for:
  -
    branches:
      only:
        - master
    deploy:
      provider: Environment
      name: NinjaCodingFront
    artifacts:
      path: '\dist\'
      name: NINJASPA
    before_deploy:
      ssh root@xxxxxxxxx -t "ls; rm -r -v /var/www/asp/ninjacodingfront/*; ls; exit; bash --login"
  -
    branches:
      only:
        - dev
    deploy:
      provider: Environment
      name: NinjaCodingFrontDev
    artifacts:
      path: '\dist\'
      name: NINJASPADEV
    before_deploy:
      ssh root@xxxxxxxxxxx -t "ls; rm -r -v /var/www/asp/ninjacodingfrontdev/*; ls; exit; bash --login"

Related

CircleCI: No workflow when I set filters to only tags and ignore all branches

I'm trying to run a workflow only on tags, but I'm getting no workflow.
And when I remove the branches ignore filter, it runs on every branch and tag.
Am I missing something? Or what exactly can I achieve with this use case?
I'm expecting the workflow to run on the tag instable-2.7.31, as shown in this screenshot:
[Screenshot: no workflow]
Thanks.
My .circleci/config.yml
only-deploy-unstable: &only-deploy-unstable
  context: Unstable-context
  filters:
    tags:
      only: /^unstable-.*/
    branches:
      ignore: /.*/
version: 2.1
jobs:
  build_unstable:
    docker:
      - image: docker:20.10.8
    environment:
      DOCKER_IMAGE_BASE_URL: **********
    steps:
      - checkout
      - setup_remote_docker
      - run: apk update
      - run: apk add git
      - run: docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
      - run:
          name: build and push unstable docker image
          no_output_timeout: 15m
          command: |
            export TAG_NAME=$(git describe --tags --abbrev=0)
            echo ${DOCKER_IMAGE_BASE_URL}:$TAG_NAME
            docker build --build-arg STAGING=test --rm -t $DOCKER_IMAGE_BASE_URL:$TAG_NAME -t $DOCKER_IMAGE_BASE_URL:latest .
            docker push $DOCKER_IMAGE_BASE_URL:$TAG_NAME
            docker push $DOCKER_IMAGE_BASE_URL:latest
  deploy_unstable:
    docker:
      - image: docker:20.10.8
    steps:
      - checkout
      - setup_remote_docker
      - run: command -v ssh-agent >/dev/null || ( apk add --update openssh )
      - run: eval $(ssh-agent -s)
      - run: ********************
workflows:
  # build unstable-version
  build_and_push_unstable:
    jobs:
      - build_unstable: *only-deploy-unstable
      - hold:
          <<: *only-deploy-unstable
          type: approval
          requires:
            - build_unstable
      - deploy_unstable:
          <<: *only-deploy-unstable
          requires:
            - hold
It could be the syntax you use when referencing your YAML alias. Have you tried:
- build_unstable:
    <<: *only-deploy-unstable
- hold:
    <<: *only-deploy-unstable
    type: approval
    requires:
      - build_unstable
- deploy_unstable:
    <<: *only-deploy-unstable
    requires:
      - hold
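For completeness (this is generic YAML behaviour, not something specific to your config): job: *alias makes the aliased mapping the entire value of that key, so nothing else can be added to it, whereas the merge key <<: *alias splices the aliased keys into a mapping where additional keys such as type or requires can sit alongside them. A minimal sketch with made-up names:
defaults: &defaults
  filters:
    tags:
      only: /^unstable-.*/

job-a: *defaults       # the alias is job-a's whole value; no further keys can be added here
job-b:
  <<: *defaults        # the aliased keys are merged into job-b's mapping
  type: approval       # and extra keys can sit next to them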

GitLab pipeline error with CI/CD for AWS EC2 Debian instance: This job is stuck because you don't have any active runners online

I want to create a CI/CD pipeline between GitLab and an AWS EC2 deployment.
My repository is a Node.js/Express web server project.
And I created a .gitlab-ci.yml:
image: node:latest
cache:
  paths:
    - node_modules/
stages:
  - build
  - test
  - staging
  - openMr
  - production
before_script:
  - apt-get update -qq && apt-get install
Build:
  stage: build
  tags:
    - node
  before_script:
    - yarn config set cache-folder .yarn
    - yarn install
  script:
    - npm run build
Test:
  stage: test
  tags:
    - node
  before_script:
    - yarn config set cache-folder .yarn
    - yarn install --frozen-lockfile
  script:
    - npm run test
Deploy to Production:
  stage: production
  tags:
    - node
  before_script:
    - mkdir -p ~/.ssh
    - echo -e "$SSH_PRIVATE_KEY" > ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa
    - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
  script:
    - bash ./gitlab-deploy/.gitlab-deploy.prod.sh
  environment:
    name: production
    url: http://ec2-url.compute.amazonaws.com:81
When I push a new commit, the pipeline fails on the build step, and I get a warning:
This job is stuck because you don't have any active runners online or
available with any of these tags assigned to them: node
I checked my runner on the GitLab Settings > CI/CD page.
After that I checked the server:
admin@ip-111.222.222.111:~$ gitlab-runner status
Runtime platform    arch=amd64 os=linux pid=18787 revision=98daeee0 version=14.7.0
FATAL: The --user is not supported for non-root users
You need to remove the tag node from your jobs. Runner tags are used to define which runner should pick up your jobs (https://docs.gitlab.com/ee/ci/runners/configure_runners.html#use-tags-to-control-which-jobs-a-runner-can-run). As there is no runner available which supports the tag node, your job gets stuck.
It doesn't look like your pipeline has any special requirements, so you can just remove the tag and let the job be picked up by any runner.
The runner visible in your screenshot supports the tag shop_service_runner, so another option would be to change the tag node to shop_service_runner, which would let this runner (and every runner with the same tags) pick up the job.
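For example, a sketch of the Build job from the config above using the runner's tag instead (assuming the shop_service_runner tag shown in the screenshot):
Build:
  stage: build
  tags:
    - shop_service_runner   # a tag the available runner actually advertises
  before_script:
    - yarn config set cache-folder .yarn
    - yarn install
  script:
    - npm run build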

docker-compose deployment configuration for CircleCI

I am using CircleCI to deploy a microservice to a DigitalOcean droplet and had a few questions about whether my approach is the right one.
My microservice is built using docker-compose, and therefore requires a docker-compose.yml file to pull and start the images that constitute it.
In a nutshell, my deployment approach would be:
1. Merge branch to master will kick off a CircleCI build.
2. CircleCI will run unit tests.
3. Upon all tests passing, docker-compose build and docker-compose push to Docker Hub.
4. Stop all running images of that service on the remote server.
5. Remove dangling images, and local networks.
6. Download the relevant docker-compose.yml, Dockerfile and docker-compose.env files.
7. Pull using docker-compose pull.
8. Start images using docker-compose up.
I am using this configuration in CircleCI:
version: 2.1
jobs:
  build:
    docker:
      - image: "circleci/node:10.16.0"
    steps:
      - checkout
      - run:
          name: Update to latest npm version
          command: "sudo npm install -g npm@latest"
      - restore_cache:
          key: dependency-cache-{{ checksum "package-lock.json" }}
      - run:
          name: Install dependencies
          command: npm install
      - run:
          name: Install `docker-compose`
          command: |
            curl -L https://github.com/docker/compose/releases/download/1.19.0/docker-compose-`uname -s`-`uname -m` > ~/docker-compose
            chmod +x ~/docker-compose
            sudo mv ~/docker-compose /usr/local/bin/docker-compose
      - setup_remote_docker:
          docker_layer_caching: false
      - run:
          name: Build using `docker-compose`
          command: |
            docker-compose build
      - run:
          name: Login for Docker Hub
          command: |
            echo "$DOCKER_PASSWORD" | docker login --username $DOCKER_USERNAME --password-stdin
      - run:
          name: Push to Docker Hub
          command: |
            docker-compose push
      - run: ssh-keyscan $DIGITALOCEAN_HOST >> ~/.ssh/known_hosts
      - add_ssh_keys:
          fingerprints:
            - fo:of:fe:ef:af
      - run:
          name: Remove currently running containers
          command: |
            ssh root@$DIGITALOCEAN_HOST ./deploy_image.sh
I am planning on creating a bash script to handle steps 4 to 8 from my list above (a rough sketch follows below).
Is it a good idea to have a script take care of the Docker steps?
Or is there a better way to have a more "native" CircleCI configuration?
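Something along these lines is what I have in mind for the deploy_image.sh called above, covering steps 4 to 8 (just a sketch; the working directory and the download URLs are placeholders, not real values):
#!/bin/bash
set -euo pipefail

cd /opt/myservice                 # assumed deployment directory on the droplet

docker-compose down               # 4. stop and remove the service's running containers
docker image prune -f             # 5. remove dangling images
docker network prune -f           #    and unused local networks

# 6. fetch the current compose/config files (source URLs are placeholders)
# curl -fsSL -o docker-compose.yml https://example.com/myservice/docker-compose.yml
# curl -fsSL -o docker-compose.env https://example.com/myservice/docker-compose.env

docker-compose pull               # 7. pull the images pushed by the CI job
docker-compose up -d              # 8. start the stack detached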

YAML: Formatting error in YAML file: expected '<document start>', but found '<block mapping start>'

  version: 2.1
  executors:
    docker-publisher:
      environment:
        IMAGE_NAME: vinaya.nayak/mocking-service
      docker:
        - image: circleci/buildpack-deps:stretch
  jobs:
    build:
      executor: docker-publisher
      steps:
        - checkout
        - setup_remote_docker
        - run:
            name: Build Docker image
            command: |
              docker build -t $IMAGE_NAME:latest .
        - run:
            name: Archive Docker image
            command: docker save -o mocking.tar $IMAGE_NAME
        - persist_to_workspace:
            root: .
            paths:
              - ./mocking.tar
    publish-latest:
      executor: docker-publisher
      steps:
        - attach_workspace:
            at: /tmp/workspace
        - setup_remote_docker
        - run:
            name: Load archived Docker image
            command: docker load -i /tmp/workspace/mocking.tar
        - run:
            name: Publish Docker Image to Docker Hub
            command: |
              echo "$DOCKER_HUB_PASSWORD" | docker login -u "$DOCKER_HUB_USERNAME" --password-stdin
              docker push docker.kfz42.de/v2/java/mocking-service/$IMAGE_NAME:latest .
workflows:
  version: 2
  build-master:
    jobs:
      - build:
          filters:
            branches:
              only: master
      - publish-latest:
          requires:
            - build
          filters:
            branches:
              only: master
Can someone help me with what's wrong with my YAML file? I get the following error. I even tried using a YAML formatter, and it says that this is a valid YAML file.
#!/bin/sh -eo pipefail
Unable to parse YAML
expected '<document start>', but found '<block mapping start>'
in 'string', line 39, column 1:
    workflows:
Warning: This configuration was auto-generated to show you the message above.
Don't rerun this job. Rerunning will have no effect.
false
Exited with code 1
Your file starts with a key-value pair indented by two spaces, so your root-level node is a mapping whose keys sit at that indentation. That is fine as long as all other root-level keys are indented by two spaces as well.
workflows is not indented, which is why the parser expected a new document there instead of another key.
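In isolation, the problem looks like this (a minimal sketch, not taken from your file):
  version: 2.1    # the root mapping starts two spaces in
  executors: {}   # fine: same indentation
workflows: {}     # error: at column 1 the parser expects a new document, not another key of the same mapping
With workflows indented to match the rest, the whole file parses: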
  version: 2.1
  executors:
    docker-publisher:
      environment:
        IMAGE_NAME: vinaya.nayak/mocking-service
      docker:
        - image: circleci/buildpack-deps:stretch
  jobs:
    build:
      executor: docker-publisher
      steps:
        - checkout
        - setup_remote_docker
        - run:
            name: Build Docker image
            command: |
              docker build -t $IMAGE_NAME:latest .
        - run:
            name: Archive Docker image
            command: docker save -o mocking.tar $IMAGE_NAME
        - persist_to_workspace:
            root: .
            paths:
              - ./mocking.tar
    publish-latest:
      executor: docker-publisher
      steps:
        - attach_workspace:
            at: /tmp/workspace
        - setup_remote_docker
        - run:
            name: Load archived Docker image
            command: docker load -i /tmp/workspace/mocking.tar
        - run:
            name: Publish Docker Image to Docker Hub
            command: |
              echo "$DOCKER_HUB_PASSWORD" | docker login -u "$DOCKER_HUB_USERNAME" --password-stdin
              docker push docker.kfz42.de/v2/java/mocking-service/$IMAGE_NAME:latest .
  workflows:
    version: 2
    build-master:
      jobs:
        - build:
            filters:
              branches:
                only: master
        - publish-latest:
            requires:
              - build
            filters:
              branches:
                only: master
I fixed the above problem by indenting workflows with 2 spaces as well.

GitLab Runner cache for Gradle is not working

I'm using GitLab Runner as CI to build an Android project, however the cache is not working.
Here's my .gitlab-ci.yml. It was modified from https://gist.github.com/daicham/5ac8461b8b49385244aa0977638c3420.
image: runmymind/docker-android-sdk:latest
variables:
  GRADLE_USER_HOME: $CI_PROJECT_DIR/.gradle
stages:
  - build
debug:
  stage: build
  script:
    - set +e
    - du -sh $CI_PROJECT_DIR/.gradle/wrapper
    - du -sh $CI_PROJECT_DIR/.gradle/caches
    - set -e
    - ./gradlew assembleDebug
    - mkdir artifacts
    - cp mobile/build/outputs/apk/*.apk artifacts/
    - cp wear/build/outputs/apk/*.apk artifacts/
  cache:
    paths:
      - .gradle/wrapper/
      - .gradle/caches/
      - build/
      - mobile/build/
      - wear/build/
  artifacts:
    name: "project_${CI_JOB_NAME}_${CI_COMMIT_REF_NAME}_${CI_COMMIT_SHA}"
    expire_in: 2 weeks
    paths:
      - artifacts/
And the log:
Running with gitlab-ci-multi-runner 9.0.0 (08a9e6f)
Using Docker executor with image runmymind/docker-android-sdk:latest ...
Using docker image sha256:d696fa13188c8d2d121c86cf526201b363c1e34ee7b163d6ce1ab1718f91a5e6 ID=sha256:d696fa13188c8d2d121c86cf526201b363c1e34ee7b163d6ce1ab1718f91a5e6 for predefined container...
Pulling docker image runmymind/docker-android-sdk:latest ...
Using docker image runmymind/docker-android-sdk:latest ID=sha256:474ac98077a496f2f71aa22ce4eebcea966c2960a061d4a59babe81ff007009b for build container...
Running on runner-8ce5d03c-project-72-concurrent-0 via outrage...
Cloning repository...
Cloning into '/builds/User/android-project'...
Checking out 015d01d0 as master...
Skipping Git submodules setup
Checking cache for default...
Successfully extracted cache
$ set +e
$ du -sh $CI_PROJECT_DIR/.gradle/wrapper
du: cannot access '/builds/User/android-project/.gradle/wrapper': No such file or directory
$ du -sh $CI_PROJECT_DIR/.gradle/caches
du: cannot access '/builds/User/android-project/.gradle/caches': No such file or directory
$ set -e
$ ./gradlew assembleDebug
Downloading https://services.gradle.org/distributions/gradle-3.4.1-all.zip
Unzipping /builds/User/android-project/.gradle/wrapper/dists/gradle-3.4.1-all/c3ib5obfnqr0no9szq6qc17do/gradle-3.4.1-all.zip to /builds/User/android-project/.gradle/wrapper/dists/gradle-3.4.1-all/c3ib5obfnqr0no9szq6qc17do
Set executable permissions for: /builds/User/android-project/.gradle/wrapper/dists/gradle-3.4.1-all/c3ib5obfnqr0no9szq6qc17do/gradle-3.4.1/bin/gradle
Starting a Gradle Daemon (subsequent builds will be faster)
Download https://jcenter.bintray.com/com/android/tools/build/gradle/2.3.0/gradle-2.3.0.pom
(more downloads)
I have also tried using a Gradle argument to set the Gradle user home, aggressively specifying .gradle/ for the cache, etc., but none of them worked.
Any ideas?
If you use GitLab < 9.0, you have to specify that the cache is to be shared among different pipelines.
Try adding key: $CI_PROJECT_NAME under cache:
image: runmymind/docker-android-sdk:latest
variables:
  GRADLE_USER_HOME: $CI_PROJECT_DIR/.gradle
stages:
  - build
debug:
  stage: build
  script:
    - set +e
    - du -sh $CI_PROJECT_DIR/.gradle/wrapper
    - du -sh $CI_PROJECT_DIR/.gradle/caches
    - set -e
    - ./gradlew assembleDebug
    - mkdir artifacts
    - cp mobile/build/outputs/apk/*.apk artifacts/
    - cp wear/build/outputs/apk/*.apk artifacts/
  cache:
    key: $CI_PROJECT_NAME
    paths:
      - .gradle/wrapper/
      - .gradle/caches/
      - build/
      - mobile/build/
      - wear/build/
  artifacts:
    name: "project_${CI_JOB_NAME}_${CI_COMMIT_REF_NAME}_${CI_COMMIT_SHA}"
    expire_in: 2 weeks
    paths:
      - artifacts/
