Helm fails in Cloud Build

I'm using alpine/helm:3.0.0 in the following Google Cloud Build step:
- id: 'update helm app'
  name: 'alpine/helm:3.0.0'
  args: ['upgrade', 'staging', './iprocure-chart/']
  env:
    - CLOUDSDK_COMPUTE_ZONE=us-central1-a
    - CLOUDSDK_CONTAINER_CLUSTER=iprocure-cluster
The problem is that when I run this using cloud-build-local, I get the following error and the pipeline fails:
Starting Step #4 - "update helm app"
Step #4 - "update helm app": Already have image (with digest): alpine/helm:3.0.0
Step #4 - "update helm app": Error: UPGRADE FAILED: query: failed to query with labels: Get http://localhost:8080/api/v1/namespaces/default/secrets?labelSelector=name%3Dstaging%2Cowner%3Dhelm%2Cstatus%3Ddeployed: dial tcp 127.0.0.1:8080: connect: connection refused

This is because the Kubernetes configuration has not been set or passed. To configure it, check out https://cloud.google.com/cloud-build/docs/build-debug-locally#before_you_begin and add an env entry to your build step like this:
- id: 'update helm app'
  name: 'alpine/helm:3.0.0'
  args: ['upgrade', 'staging', './iprocure-chart/']
  env:
    - CLOUDSDK_COMPUTE_ZONE=us-central1-a
    - CLOUDSDK_CONTAINER_CLUSTER=iprocure-cluster
    - KUBECONFIG=/workspace/.kube/config
If this doesn't work, try passing the config with the --kubeconfig flag in your Helm command, like this:
--kubeconfig=/workspace/.kube/config
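For completeness, here is a minimal sketch of how the two steps could fit together, assuming an earlier step writes the cluster credentials to /workspace/.kube/config (the extra step and the exact path are illustrative, not from the original answer):
# Fetch cluster credentials into the shared /workspace volume;
# get-credentials writes to the file named by KUBECONFIG.
- id: 'get credentials'
  name: 'gcr.io/cloud-builders/gcloud'
  args: ['container', 'clusters', 'get-credentials', 'iprocure-cluster', '--zone', 'us-central1-a']
  env:
    - KUBECONFIG=/workspace/.kube/config

# Run the upgrade with an explicit kubeconfig.
- id: 'update helm app'
  name: 'alpine/helm:3.0.0'
  args: ['upgrade', 'staging', './iprocure-chart/', '--kubeconfig=/workspace/.kube/config']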

sh: 1: nest: Permission denied in GitHub Action

For some reason, the build step for my NestJS project in my GitHub Action has been failing for a few days now. I use Turborepo with pnpm in a monorepo and run the build with turbo run build. This works flawlessly on my local machine, but in GitHub it fails with sh: 1: nest: Permission denied. ELIFECYCLE Command failed with exit code 126. I'm not sure how this is possible, since I couldn't find any meaningful change I made to the code in the meantime; it just stopped working unexpectedly. I suspect it is an issue with GitHub Actions, since it also works in my local Docker build.
Has anyone else encountered this issue with NestJS in GH Actions?
This is my action yml:
name: Test, lint and build
on:
  push:
jobs:
  test-lint-build:
    runs-on: ubuntu-latest
    services:
      postgres:
        # Docker Hub image
        image: postgres
        # Provide the password for postgres
        env:
          POSTGRES_HOST: localhost
          POSTGRES_USER: test
          POSTGRES_PASSWORD: docker
          POSTGRES_DB: financing-database
        ports:
          # Maps tcp port 5432 on service container to the host
          - 2345:5432
        # Set health checks to wait until postgres has started
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Install pnpm
        uses: pnpm/action-setup@v2.2.2
        with:
          version: latest
      - name: Install
        run: pnpm i
      - name: Lint
        run: pnpm run lint
      - name: Test
        run: pnpm run test
      - name: Build
        run: pnpm run build
        env:
          VITE_SERVER_ENDPOINT: http://localhost:8000/api
      - name: Test financing-server (e2e)
        run: pnpm --filter @project/financing-server run test:e2e
I found out what was causing the problem. I was using node-linker=hoisted to mitigate some issues that pnpm's way of linking modules was causing with my Jest tests. Removing this from my project suddenly made the action work again.
I still don't know why this only broke the build recently, since I've had this option activated for some time now.
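For reference, the setting in question lives in the repository's .npmrc; this is the line whose removal fixed the action (a sketch, assuming a standard pnpm setup):
# .npmrc
# Hoists all dependencies into a flat node_modules, like npm/classic yarn.
# Removing this line restored pnpm's default linker and fixed the build.
node-linker=hoisted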

Drone CI/CD only stuck in exec pipeline

When I use a docker pipeline, the build succeeds. But when I use an exec pipeline, it always gets stuck in pending, and I don't know what's going wrong.
kind: pipeline
type: exec
name: deployment

platform:
  os: linux
  arch: amd64

steps:
  - name: backend image build
    commands:
      - echo start build images...
      # - export MAJOR_VERSION=1.0.rtm.
      # - export BUILD_NUMBER=$DRONE_BUILD_NUMBER
      # - export WORKSPACE=`pwd`
      # - bash ./jenkins_build.sh
    when:
      branch:
        - master
The docker pipeline is fine:
kind: pipeline
type: docker
name: deployment

steps:
  - name: push image to repo
    image: plugins/docker
    settings:
      dockerfile: src/ZR.DataHunter.Api/Dockerfile
      tags: latest
      insecure: true
      registry: "xxx"
      repo: "xxx"
      username:
        from_secret: username
      password:
        from_secret: userpassword
First of all, it's important to note that the exec pipeline can be used only when Drone is self-hosted. It is written in the official docs:
Please note exec pipelines are disabled on Drone Cloud. This feature is only available when self-hosting
When Drone is self-hosted, be sure that:
- the exec runner is installed
- it is configured properly in its config file, so it can connect to the Drone server (see the sketch after this list)
- the drone-runner-exec service is running
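A minimal sketch of that config file, assuming a systemd-based install of drone-runner-exec (host, secret, and paths are placeholders, not values from the original answer):
# /etc/drone-runner-exec/config (illustrative path)
DRONE_RPC_PROTO=https
DRONE_RPC_HOST=drone.example.com
DRONE_RPC_SECRET=<shared RPC secret from the Drone server>
DRONE_LOG_FILE=/var/log/drone-runner-exec/log.txt
# Optional, to enable the web UI mentioned below (assumption based on the runner docs):
# DRONE_UI_USERNAME=root
# DRONE_UI_PASSWORD=root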
After the service is started, look at its log file; you should see info messages saying it was able to connect to your Drone server:
level=info msg="successfully pinged the remote server"
level=info msg="polling the remote server"
It's also possible to view the web UI (dashboard) of the running service if you enable it.
So if the log shows it can poll your server, your exec pipeline should run as expected.

Can someone advise me a good practice for my Azure DevOps deployment

Good afternoon,
I am building a CI pipeline in Azure DevOps, which is new ground for me. I managed to add the build tasks and steps that I wanted, although there are still some issues, which I explain below.
Issue #1
I misunderstood the meaning of the latest tag. I thought it would automatically pull the latest/newest version of the image from the specified Docker Hub repository.
Currently my Docker build looks like this:
- task: Docker@2
  displayName: 'Build Docker image'
  inputs:
    repository: '<my_repo_name>'
    command: 'build'
    Dockerfile: '**/Dockerfile'
    tags: $(Build.BuildId)
This pipeline YAML deploys to my production VPS, which I added under Pipelines -> Environments.
Here is the deployment step of the pipeline:
- deployment: VMDeploy
  displayName: 'Deployment to VPS'
  pool:
    vmImage: 'Ubuntu-20.04'
  environment:
    name: CDB_VPS
    resourceName: <my_resource_name>
    resourceType: VirtualMachine
  strategy:
    runOnce:
      deploy:
        steps:
          - script: docker pull <my_repo_name>:latest
          - script: docker stop $(docker ps -aq)
          - script: docker run -p 8085:8085 <my_repo_name>:latest
Issue #2
I do not get any errors while running the pipeline, but I am wondering if this is good practice. With this setup it will always run the latest version, and I don't think this is how I should deploy.
Issue #3
The deployment block gets executed before the build and push block has finished. For extra information, I will post the entire YAML file below.
trigger:
  - master

jobs:
  - job: Build
    displayName: 'Build Maven project and Docker build'
    steps:
      - task: replacetokens@3
        displayName: 'Replace tokens'
        inputs:
          targetFiles: |
            **/application.properties
      - task: Maven@3
        displayName: 'Build Maven project'
        inputs:
          mavenPomFile: 'pom.xml'
          goals: 'package'
          jdkVersionOption: 11
          publishJUnitResults: true
      - task: Docker@2
        displayName: 'Build Docker image'
        inputs:
          repository: '<my_repo_name>'
          command: 'build'
          Dockerfile: '**/Dockerfile'
          tags: $(Build.BuildId)
      - task: Docker@2
        displayName: 'Push Docker image to Docker hub'
        inputs:
          containerRegistry: 'Dockerhub connection'
          repository: '<my_repo_name>'
          command: 'push'
          Dockerfile: '**/Dockerfile'
          tags: $(Build.BuildId)
  - deployment: VMDeploy
    displayName: 'Deployment to VPS'
    pool:
      vmImage: 'Ubuntu-20.04'
    environment:
      name: CDB_VPS
      resourceName: <my_vps_resource_name>
      resourceType: VirtualMachine
    strategy:
      runOnce:
        deploy:
          steps:
            - script: docker pull <my_repo_name>:latest
            - script: docker stop $(docker ps -aq)
            - script: docker run -p 8085:8085 <my_repo_name>:latest
If you want this to run a specific image, replace latest with $(Build.BuildId):
steps:
  - script: docker pull <my_repo_name>:$(Build.BuildId)
  - script: docker stop $(docker ps -aq)
  - script: docker run -p 8085:8085 <my_repo_name>:$(Build.BuildId)
And if you want VMDeploy to wait for Build, add dependsOn:
- deployment: VMDeploy
  dependsOn: Build
Issue #1
The tags input in the Docker task means: a list of tags on separate lines. These tags are used in the build, push and buildAndPush commands. You can see the resulting tags on the image in Docker.
Issue #2
You can check the latest deployment in Docker and in the Azure DevOps pipeline log to confirm that it always runs the latest version.
Issue #3
See Krzysztof Madej's answer above.
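Putting both answers together, the deployment job might look like the following sketch. Note that the -d flag and the || true guard are additions not present in the answers above: without -d, docker run would block the step, and docker stop errors when no containers are running.
- deployment: VMDeploy
  displayName: 'Deployment to VPS'
  dependsOn: Build
  pool:
    vmImage: 'Ubuntu-20.04'
  environment:
    name: CDB_VPS
    resourceName: <my_vps_resource_name>
    resourceType: VirtualMachine
  strategy:
    runOnce:
      deploy:
        steps:
          # Pull the image built in this run, not whatever latest points at
          - script: docker pull <my_repo_name>:$(Build.BuildId)
          # Don't fail when there is nothing to stop
          - script: docker stop $(docker ps -aq) || true
          # -d detaches so the deployment step can finish
          - script: docker run -d -p 8085:8085 <my_repo_name>:$(Build.BuildId)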

Github action: Build and push docker image fails. server message: insufficient_scope: authorization failed

I'm using the GitHub action "Build and push Docker images", as it's from Docker and a top-rated verified action.
The relevant snippet of my YAML file is as follows:
- name: Set up QEMU
  uses: docker/setup-qemu-action@v1
- name: Set up Docker Buildx
  uses: docker/setup-buildx-action@v1
- name: Login to DockerHub
  uses: docker/login-action@v1
  with:
    username: ${{ secrets.DOCKERHUB_USERNAME }}
    password: ${{ secrets.DOCKERHUB_ACCESS_TOKEN }}
- name: Build and push
  id: docker_build
  uses: docker/build-push-action@v2
  with:
    push: true
    tags: user/app:latest
- name: Image digest
  run: echo ${{ steps.docker_build.outputs.digest }}
This is just as it was shown in the example. When the workflow runs, I consistently see this error:
#10 [stage-1 2/2] COPY --from=build /workspace/target/*.jar app.jar
#10 DONE 0.9s
#12 exporting to image
#12 exporting layers
#12 exporting layers 4.3s done
#12 exporting manifest sha256:dafb0869387b325491aed0cdc10c2d0206aca28006b300554f48e4c389fc3bf1 done
#12 exporting config sha256:f64316c3b529b43a6cfcc933656c77e556fea8e5600b6d0cce8dc09f775cf107 done
#12 pushing layers
#12 pushing layers 0.8s done
#12 ERROR: server message: insufficient_scope: authorization failed
------
> exporting to image:
------
failed to solve: rpc error: code = Unknown desc = server message: insufficient_scope: authorization failed
Error: The process '/usr/bin/docker' failed with exit code 1
The contents of my Dockerfile for a standard Spring Boot application are shown below:
FROM maven:3.6.3-jdk-11-slim AS build
RUN mkdir -p /workspace
WORKDIR /workspace
COPY pom.xml /workspace
COPY src /workspace/src
RUN mvn -B -f pom.xml clean package -DskipTests
FROM openjdk:11-jdk-slim
COPY --from=build /workspace/target/*.jar app.jar
EXPOSE 8080
ENTRYPOINT ["java","-jar","app.jar"]
Any clue how this can be fixed?
I'm able to publish to Docker Hub when using a different GitHub action, as shown below:
- name: Build and push docker image
  uses: elgohr/Publish-Docker-Github-Action@master
  with:
    name: bloque/sales-lead-management
    username: ${{ secrets.DOCKERHUB_USERNAME }}
    password: ${{ secrets.DOCKERHUB_ACCESS_TOKEN }}
You need to set a path context when using Docker's build-push-action. It should look something like this:
- name: Build and push
  id: docker_build
  uses: docker/build-push-action@v2
  with:
    context: .
    file: Dockerfile
    push: true
    tags: user/app:latest
The file option is entirely optional; if left out, the action will look for the Dockerfile in the root of the context.
It's also recommended to use the metadata action, which provides more relevant metadata and tags for your Docker image.
Here is an example of how I did it for Spring Boot apps in a few of my projects: https://github.com/moja-global/FLINT.Reporting/blob/d7504909f8f101054e503a2993f4f70ca92c2577/.github/workflows/docker.yml#L153
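As a rough sketch, the metadata action can feed tags and labels into the build step like this (user/app is a placeholder image name; action versions may differ in your setup):
- name: Docker meta
  id: meta
  uses: docker/metadata-action@v3
  with:
    images: user/app
- name: Build and push
  uses: docker/build-push-action@v2
  with:
    context: .
    push: true
    # Use the tags and labels generated by the metadata action
    tags: ${{ steps.meta.outputs.tags }}
    labels: ${{ steps.meta.outputs.labels }}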

Is it possible to start PubSub Emulator from Cloud Build step

As the title mentions, I would like to know if I can start and use the Pub/Sub emulator from a Cloud Build step. This is my current config:
options:
  env:
    - GO111MODULE=on
    - GOPROXY=https://proxy.golang.org
    - PUBSUB_EMULATOR_HOST=localhost:8085
  volumes:
    - name: "go-modules"
      path: "/go"

steps:
  - name: "golang:1.14"
    args: ["go", "build", "."]

  # Starts the Cloud Pub/Sub emulator
  - name: 'gcr.io/cloud-builders/gcloud'
    entrypoint: 'bash'
    args: [
      '-c',
      'gcloud beta emulators pubsub start --host-port 0.0.0.0:8085 &'
    ]

  - name: "golang:1.14"
    args: ["go", "test", "./..."]
I need it for a test; it works locally, and instead of using a dedicated Pub/Sub instance from Cloud Build, I want to use an emulator.
Thanks
As I found a workaround and an interesting Git repository, I wanted to share the solution with you.
As required, you need a cloud-build.yaml, and you want to add a step where the emulator gets launched:
options:
  env:
    - GO111MODULE=on
    - GOPROXY=https://proxy.golang.org
    - PUBSUB_EMULATOR_HOST=localhost:8085
  volumes:
    - name: "go-modules"
      path: "/go"

steps:
  - name: "golang:1.14"
    args: ["go", "build", "."]

  - name: 'docker/compose'
    args: [
      '-f',
      'docker-compose.cloud-build.yml',
      'up',
      '--build',
      '-d'
    ]
    id: 'pubsub-emulator-docker-compose'

  - name: "golang:1.14"
    args: ["go", "test", "./..."]
As you can see, I run a docker-compose command which actually starts the emulator. The docker-compose.cloud-build.yml file looks like this:
version: "3.7"
services:
pubsub:
# Required for cloudbuild network access (when external access is required)
container_name: pubsub
image: google/cloud-sdk
ports:
- '8085:8085'
command: ["gcloud", "beta", "emulators", "pubsub", "start", "--host-port", "0.0.0.0:8085"]
network_mode: cloudbuild
networks:
default:
external:
name: cloudbuild
It is important to set the container name as well as the network; otherwise you won't be able to access the Pub/Sub emulator from another Cloud Build step.
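For reference, here is a minimal sketch of a Go test that talks to the emulator started this way; the Pub/Sub client library picks up PUBSUB_EMULATOR_HOST from the environment, so no credentials are needed (the project and topic names are arbitrary placeholders):
package main_test

import (
	"context"
	"testing"

	"cloud.google.com/go/pubsub"
)

// TestCreateTopic exercises the emulator: with PUBSUB_EMULATOR_HOST set,
// pubsub.NewClient connects to the emulator instead of the real service.
func TestCreateTopic(t *testing.T) {
	ctx := context.Background()
	client, err := pubsub.NewClient(ctx, "test-project") // any project ID works against the emulator
	if err != nil {
		t.Fatalf("pubsub.NewClient: %v", err)
	}
	defer client.Close()

	topic, err := client.CreateTopic(ctx, "test-topic")
	if err != nil {
		t.Fatalf("CreateTopic: %v", err)
	}
	if got := topic.ID(); got != "test-topic" {
		t.Errorf("topic.ID() = %q, want %q", got, "test-topic")
	}
}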
It is possible, since every step on Cloud Build is executed in a Docker container, but the image gcr.io/cloud-builders/gcloud only has a minimal installation of the gcloud components. Before starting the emulator, you need to install the Pub/Sub emulator via gcloud:
gcloud components install pubsub-emulator
It is also necessary to install OpenJDK 7, since most of the gcloud emulators need Java to operate.
