I've been following a lot of tutorials on CI using Python but the tutorials seem to stop there and rarely take the next step to CD. I'm a sole developer as well.
I've setup a project on Github that runs locally on my PC and is not a web app. I've connected it to CircleCI for CI. Here is my config.yml file.
version: 2
jobs:
  build:
    docker:
      - image: circleci/python:3.7
    working_directory: ~/repo
    steps:
      # Step 1: obtain repo from GitHub
      - checkout
      - run:
          name: install dependencies
          command: |
            sudo apt-get update
            pip install -r requirements.txt
      - run:
          name: run tests
          command: |
            python -m pytest -v
Everything runs great: I get an email from CircleCI alerting me that the build failed whenever I push to master on GitHub and one of the pytests fails.
So my question is: what is the next step here? I have a few thoughts, but honestly I'm not sure about any of them.
Create separate test and prod versions of the code, and automate updating the prod version when the test version builds with no errors. However, I'm not sure what tools to use for this.
Push the project to Docker Hub. This seems redundant to me, though, because Docker would run the same pytests that CircleCI is already running. I'm not sure how this would even help with CD at this point.
Could someone please provide some guidance on next steps here?
Currently you have only one job, build, so you can add more jobs under the jobs section. What you want to do here is:
add a test job
build the prod version
push to Docker Hub
Please use config version 2.1 to enable workflows.
version: 2.1
jobs:
  build:
    docker:
      - image: circleci/python:3.7
    working_directory: ~/repo
    steps:
      # Step 1: obtain repo from GitHub
      - checkout
      - run:
          name: install dependencies
          command: |
            sudo apt-get update
            pip install -r requirements.txt
      - run:
          name: run tests
          command: |
            python -m pytest -v
  test:
    docker:
      - image: circleci/python:3.7
    steps:
      - checkout
      - run: echo "do your test here"
  build-prod:
    docker:
      - image: circleci/python:3.7
    steps:
      - checkout
      - run: echo "build your app"
  push-to-dockerhub:
    docker:
      - image: circleci/python:3.7
    steps:
      - checkout
      - setup_remote_docker # this is necessary to use the docker daemon
      - run: echo "do docker login and docker push here"
workflows:
  build-and-push:
    jobs:
      - build
      - test:
          requires:
            - build
      - build-prod:
          requires:
            - test
      - push-to-dockerhub:
          requires:
            - build-prod
Note that requires runs a job only when the required job has finished successfully.
I definitely have not tested this config on my end, but it should look like the config above. There is more configuration documentation here, so please take a look to make it work properly:
https://circleci.com/docs/2.0/configuration-reference/
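As a sketch of what the push-to-dockerhub job's placeholder step might become, assuming the image is named yourname/yourapp and that DOCKERHUB_USER / DOCKERHUB_PASS are set as CircleCI project environment variables (all of these names are assumptions, not part of the original config):

```yaml
      - run:
          name: build and push image
          command: |
            # log in non-interactively, then build and push the image
            echo "$DOCKERHUB_PASS" | docker login -u "$DOCKERHUB_USER" --password-stdin
            docker build -t yourname/yourapp:latest .
            docker push yourname/yourapp:latest
```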
I really can't understand why I can execute npm commands on my machine via SSH, yet they don't work in the deploy pipeline. Result on my machine via SSH:
Starting deploy
Already up to date.
v16.7.0
7.20.3
Deploy end
Result in CircleCI
Starting deploy
Already up to date.
deploy.sh: line 6: node: command not found
deploy.sh: line 7: npm: command not found
Deploy end
version: 2.1
# Define the jobs we want to run for this project
jobs:
  pull-and-build:
    docker:
      - image: arvindr226/alpine-ssh
    steps:
      - checkout
      - run: ssh -o StrictHostKeyChecking=no ubuntu@xx.xx.xx.xx "cd ~/apps/clm/core; sudo bash deploy.sh"
# Orchestrate our job run sequence
workflows:
  version: 2
  build-project:
    jobs:
      - pull-and-build:
          filters:
            branches:
              only:
                - Desenvolvimento
My bash script
#!/bin/bash
echo "Starting deploy"
cd ~/apps/clm/core
git pull
node -v
npm -v
echo "Deploy end"
Thanks a lot to anyone who helps.
I really don't understand what's going on; I've searched everywhere for a solution...
The problem was in the PATH!
ssh -o StrictHostKeyChecking=no xxx@xx.xxx.xx.xx "source ~/.nvm/nvm.sh; cd ~/apps/xxx/xxx; git pull; npm install; pm2 restart core client"
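Alternatively, deploy.sh itself can source nvm so that every node/npm call resolves and the SSH command stays short; a sketch, under the assumption that nvm is installed at ~/.nvm on the server:

```bash
#!/bin/bash
# Non-interactive SSH sessions skip the ~/.bashrc lines that set up nvm,
# so load nvm explicitly before calling node or npm.
source ~/.nvm/nvm.sh
echo "Starting deploy"
cd ~/apps/clm/core
git pull
npm install
pm2 restart core client
echo "Deploy end"
```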
I'm using Concourse to build my Java package.
In order to run the package's integration tests, I need a local instance of Elasticsearch present.
Prior to ES version 8, all I did was install ES in a Docker image that I would then use as the Concourse task's image resource to build my Java package in:
FROM openjdk:11-jdk-slim-stretch
RUN apt-get update && apt-get install -y procps
ADD "https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.9.3-amd64.deb" /es.deb
RUN dpkg -i es.deb
RUN rm es.deb
Later I would just start it right before building with:
/etc/init.d/elasticsearch start
Problems started when upgrading ES to version 8: that init.d file does not seem to exist anymore. Some of the advice I found suggests running ES as a container, i.e. running an ES container inside the Concourse container, which seems a bit too complex for my use case.
If you had similar problems in your projects, how did you solve them?
This is what I would do:
Build your docker image off of an official Elastic docker image, e.g.:
FROM elasticsearch:8.2.2
USER root
RUN apt update && apt install -y sudo
Start Elastic within your task. Suppose the image got pushed to oozie/elastic on Docker Hub. Then the following pipeline job should succeed:
jobs:
  - name: run-elastic
    plan:
      - task: integtest
        config:
          platform: linux
          image_resource:
            type: docker-image
            source:
              repository: oozie/elastic
          run:
            path: /bin/bash
            args:
              - -c
              - |
                (sudo -u elasticsearch /usr/share/elasticsearch/bin/elasticsearch -Expack.security.enabled=false -E discovery.type=single-node > elastic.log) &
                while ! curl http://localhost:9200; do sleep 10; done
It should result in a task run where the curl loop eventually reports that Elasticsearch is up.
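The readiness loop at the end of that task can be hardened to give up after a bounded number of attempts instead of spinning forever; a minimal sketch (the wait_for helper is my own, not part of Concourse):

```sh
#!/bin/sh
# Retry a command up to N times, one second apart; fail if it never succeeds.
wait_for() {
  attempts=$1; shift
  i=0
  until "$@" >/dev/null 2>&1; do
    i=$((i + 1))
    [ "$i" -ge "$attempts" ] && return 1
    sleep 1
  done
}

# In the pipeline this would be: wait_for 30 curl -fsS http://localhost:9200
wait_for 3 true && echo "ready"
wait_for 2 false || echo "timed out"
```

This keeps a hung Elasticsearch startup from blocking the Concourse worker indefinitely.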
I'm pretty new to nektos/act and to running workflows locally in general, and I can't seem to find a solution to a permission denied error when installing Node version 16. Here is the error I run into when I run the following:
Command:
act -j release
Error:
docker exec cmd=[mkdir -p /var/run/act/actions/actions-setup-node@v1/] user= workdir=
mkdir: cannot create directory '/var/run/act/actions': Permission denied
Yaml (Example)
name: Release Example
on:
  push:
    branches: [ master ]
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
        with:
          token: ${{ secrets.PRIVATE_SECRET }}
      - name: Use version 16 of Node.js
        uses: actions/setup-node@v1
        with:
          node-version: '16'
      - name: Pre Install
        run: echo "//npm.pkg.github.com/:_authToken=${{ secrets.GITHUB_TOKEN }}" > ~/.npmrc
      - name: Install
        run: npm ci
        env:
          PRIVATE_SECRET: ${{ secrets.PRIVATE_SECRET }}
      - name: Release
        env:
          GITHUB_TOKEN: ${{ secrets.PRIVATE_SECRET }}
          PRIVATE_SECRET: ${{ secrets.PRIVATE_SECRET }}
          REGISTRY_TOKEN: ${{ secrets.PRIVATE_SECRET }}
        run: npx semantic-release
What I have tried:
I have tried setting the user to root on the container, for example:
container:
  image: ghcr.io/catthehacker/ubuntu:full-20.04
  options: --user root
I have tried using sudo in steps:
- run: sudo chown runner:docker /var/run/docker.sock
I have tried passing the secrets via act's flags.
I have tried setting the working directory and setting the env auth override to true.
I have checked issues related to this topic on the repo, and it seems like others are facing the same issue, but I have not been able to figure out a solution.
NOTE: This all works on GitHub but fails locally with the error mentioned above. I'm trying hard to test locally so I don't muddy up my repo with broken commits. Any help is greatly appreciated.
This appears to be a bug in the recent release. I can confirm that downgrading to 0.2.24 fixed the issue.
https://github.com/nektos/act/issues/935#issuecomment-1035261208
brew remove act
cd $(brew --repository)/Library/Taps/homebrew/homebrew-core/Formula
git checkout 3ab2604b1e630d4eccab40d0e78f29bd912a72b8 -- act.rb
brew install act
brew pin act
git checkout HEAD -- act.rb
act --version # make sure it's 0.2.24
I have a local GitLab CE server and a GitLab CI runner, all running in Docker containers. I just want to test whether GitLab CI works with minimal code in .gitlab-ci.yml. However, the CI does not run at all, git --version isn't printed either, and it shows these errors:
Running with gitlab-runner 14.2.0 (58ba2b95)
on GitLab-cicd-practice GPdsWyY7
Preparing the "shell" executor 00:00
Using Shell executor...
Preparing environment 00:00
Running on gitlab...
Getting source from Git repository 00:01
Fetching changes...
bash: line 113: git: command not found
ERROR: Job failed: exit status 1
Code for .gitlab-ci.yml
build:
  stage: build
  before_script:
    - git --version
  script: echo hello
test:
  script: echo
  stage: test
  needs: [build]
Looking at this, the runner is using the shell executor, so the problem is that git is not installed on the machine running the gitlab-runner.
To fix this, just install git on that machine so the gitlab-runner can use it.
If you're on Linux, you should be able to install it with apt or yum:
apt install git
or
yum install git
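You can reproduce the runner's failure mode by hand, since the shell executor simply resolves git from PATH; a small sketch (the check helper is hypothetical, not part of gitlab-runner):

```sh
#!/bin/sh
# GitLab's shell executor invokes `git` straight from PATH, so
# "command not found" means the binary is missing on the runner host.
check() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: found at $(command -v "$1")"
  else
    echo "$1: command not found"
  fi
}
check sh   # always present on a POSIX host
check git  # this is the lookup that fails on the asker's runner
```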
I have the following .gitlab-ci.yml...
stages:
  - test
  - build
  - art
image: golang:1.9.2
variables:
  BIN_NAME: example
  ARTIFACTS_DIR: artifacts
  GO_PROJECT: example
  GOPATH: /go
before_script:
  - mkdir -p ${GOPATH}/src/${GO_PROJECT}
  - mkdir -p ${CI_PROJECT_DIR}/${ARTIFACTS_DIR}
  - go get -u github.com/golang/dep/cmd/dep
  - cp -r ${CI_PROJECT_DIR}/* ${GOPATH}/src/${GO_PROJECT}/
  - cd ${GOPATH}/src/${GO_PROJECT}
test:
  stage: test
  script:
    # Run all tests
    - go test -run ''
build:
  stage: build
  script:
    # Compile and name the binary as `hello`
    - go build -o hello
    - pwd
    - ls -l hello
    # Execute the binary
    - ./hello
    # Move to gitlab build directory
    - mv ./hello ${CI_PROJECT_DIR}
  artifacts:
    paths:
      - ./hello
This works great for Linux, but I now need to do the same so that it builds a Windows executable.
I then plan to run a scheduled script to download the artifacts.
The other option is that I run a virtual Linux server on the Windows server and use that to run my Go binaries.
I know I need to change the image to a Windows one, but I can't seem to find an appropriate one online (one that is configured for Golang).
Or is it possible to have this Docker image build a Windows exe?
This is not a GitLab question, but a Go question.
Since Go version 1.5, cross-compiling has become very easy:
GOOS=windows GOARCH=386 go build -o hello.exe hello.go
You can now run hello.exe on a Windows machine.
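Concretely, in the .gitlab-ci.yml above this could be a second build-stage job; a sketch (the build-windows job name and the hello.exe output path are my assumptions):

```yaml
build-windows:
  stage: build
  script:
    # Cross-compile on the same linux golang image; no Windows image needed.
    - GOOS=windows GOARCH=386 go build -o hello.exe
    # Move the binary into the project dir so it can be collected as an artifact.
    - mv ./hello.exe ${CI_PROJECT_DIR}
  artifacts:
    paths:
      - ./hello.exe
```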