When I use a docker pipeline, the build succeeds. But when I use an exec pipeline, it is always stuck in pending, and I don't know what is going wrong.
kind: pipeline
type: exec
name: deployment

platform:
  os: linux
  arch: amd64

steps:
  - name: backend image build
    commands:
      - echo start build images...
      # - export MAJOR_VERSION=1.0.rtm.
      # - export BUILD_NUMBER=$DRONE_BUILD_NUMBER
      # - export WORKSPACE=`pwd`
      # - bash ./jenkins_build.sh
    when:
      branch:
        - master
The docker pipeline is fine:
kind: pipeline
type: docker
name: deployment

steps:
  - name: push image to repo
    image: plugins/docker
    settings:
      dockerfile: src/ZR.DataHunter.Api/Dockerfile
      tags: latest
      insecure: true
      registry: "xxx"
      repo: "xxx"
      username:
        from_secret: username
      password:
        from_secret: userpassword
First of all, it's important to note that exec pipelines can only be used when Drone is self-hosted. As the official docs put it:
Please note exec pipelines are disabled on Drone Cloud. This feature is only available when self-hosting
When Drone is self-hosted, make sure that:
The exec runner is installed
It is configured properly in its config file so it can connect to the Drone server (a minimal sketch follows this list)
The drone-runner-exec service is running
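A minimal sketch of that config file (commonly /etc/drone-runner-exec/config); the host and secret values below are placeholders for your own Drone server address and shared RPC secret:
DRONE_RPC_PROTO=https
DRONE_RPC_HOST=drone.your-company.com
DRONE_RPC_SECRET=super-duper-secret
DRONE_LOG_FILE=/var/log/drone-runner-exec/log.txt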
After the service has started, check its log file; you should see info messages saying it was able to connect to your Drone server:
level=info msg="successfully pinged the remote server"
level=info msg="polling the remote server"
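To check the service and its log on the host, something like the following should work, assuming the runner was installed as a systemd service named drone-runner-exec and writes to the log file configured via DRONE_LOG_FILE:
sudo systemctl status drone-runner-exec
sudo tail -f /var/log/drone-runner-exec/log.txt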
It's also possible to view the web UI (Dashboard) of the running service if you enable it.
So if you see it can poll your server, then your exec pipeline should run as expected.
For a few days now, the build step for my NestJS project has been failing in my GitHub Action. I use Turborepo with pnpm in a monorepo and run the build with turbo run build. This works flawlessly on my local machine, but in GitHub it fails with sh: 1: nest: Permission denied. ELIFECYCLE Command failed with exit code 126. I'm not sure how this is possible, since I couldn't find any meaningful change I made to the code in the meantime; it just stopped working unexpectedly. I actually suspect it is an issue with GitHub Actions, since the build also works in my local Docker build.
Has anyone else encountered this issue with NestJS in GH Actions?
This is my action yml:
name: Test, lint and build

on:
  push:

jobs:
  test-lint-build:
    runs-on: ubuntu-latest

    services:
      postgres:
        # Docker Hub image
        image: postgres
        # Provide the password for postgres
        env:
          POSTGRES_HOST: localhost
          POSTGRES_USER: test
          POSTGRES_PASSWORD: docker
          POSTGRES_DB: financing-database
        ports:
          # Maps tcp port 5432 on service container to the host
          - 2345:5432
        # Set health checks to wait until postgres has started
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5

    steps:
      - name: Checkout
        uses: actions/checkout@v3

      - name: Install pnpm
        uses: pnpm/action-setup@v2.2.2
        with:
          version: latest

      - name: Install
        run: pnpm i

      - name: Lint
        run: pnpm run lint

      - name: Test
        run: pnpm run test

      - name: Build
        run: pnpm run build
        env:
          VITE_SERVER_ENDPOINT: http://localhost:8000/api

      - name: Test financing-server (e2e)
        run: pnpm --filter @project/financing-server run test:e2e
I found out what was causing the problem. I was using node-linker = hoisted to mitigate some issues the pnpm way of linking modules was causing with my jest tests. Removing this from my project suddenly made the action work again.
I still don't know why this only broke the build recently, since I've had this option activated for some time now.
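For reference, this is the kind of line I removed, assuming it sits in the repo's .npmrc, which is where pnpm reads this option from:
# .npmrc: removing this line made the GitHub Action build work again
node-linker=hoisted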
Google Cloud docs use the first code block but I'm wondering why they don't use the second one. As far as I can tell they achieve the same result. Is there any practical difference?
# config 1
steps:
  # Build the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/project-id/project-name', '.']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/project-id/project-name']
  # Deploy container image to Cloud Run
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['run', 'deploy', 'project-name', '--image', 'gcr.io/project-id/project-name', '--region', 'us-central1']
images: ['gcr.io/project-id/project-name']
# config 2
steps:
  # Build the container image
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['builds', 'submit', '--region', 'us-central1', '--tag', 'gcr.io/project-id/project-name', '.']
  # Deploy container image to Cloud Run
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['run', 'deploy', 'project-name', '--image', 'gcr.io/project-id/project-name', '--region', 'us-central1']
I run this with gcloud builds submit --config cloudbuild.yaml
In the second config, you call a Cloud Build from inside a Cloud Build, which means you pay for the docker build/push process twice.
The timeline in fact looks like this:
Cloud Build 1
  Cloud Build 2
    Docker build
    Docker push
  Deploy on Cloud Run
In addition, the number of concurrent builds is limited, and with config 2 you use twice as much quota.
The result is the same (it should be slightly faster with config 1 because there is no second Cloud Build to spin up).
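To make the nesting concrete, this is roughly what happens with config 2 (the inner command is the one from its first step above):
# You start the outer build (Cloud Build 1):
gcloud builds submit --config cloudbuild.yaml
# Its first step then runs, inside that build:
#   gcloud builds submit --region us-central1 --tag gcr.io/project-id/project-name .
# which starts a second build (Cloud Build 2) that performs the docker build and push,
# before the outer build goes on to deploy to Cloud Run.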
I have some Python files inside a VM that run every week to scrape info from a website. This is automated through Cloud Scheduler and Cloud Functions, and it is confirmed to work. I want to use Cloud Build and Cloud Run to update the code inside the VM each time I update the code in GitHub. I read somewhere that in order to deploy a container image to a VM, the VM has to run a Container-Optimized OS, so I manually created a new VM matching that criterion through Compute Engine. The Container-Optimized OS VM already exists; I just need its container image updated with the new image built from the updated code in GitHub.
I'm trying to build a container image that I will later use to update the code inside the virtual machine. Cloud Run is triggered every time I push to a folder in my GitHub repository.
I checked Container Registry and the images are being created, but I keep getting this error when I check the virtual machine:
"Error: Failed to start container: Error response from daemon:
{
"message":"manifest for gcr.io/$PROJECT_ID/$IMAGE:latest not found: manifest unknown: Failed to fetch \"latest\" from request \"/v2/$PROJECT_ID/$IMAGE:latest/manifests/latest\"."
}"
Why is the request being made for the latest tag when I wanted the tag with the commit hash and how can I fix it?
This is the virtual machine log (sudo journalctl -u konlet-startup):
Started Containers on GCE Setup.
Starting Konlet container startup agent
Downloading credentials for default VM service account from metadata server
Updating IPtables firewall rules - allowing tcp traffic on all ports
Updating IPtables firewall rules - allowing udp traffic on all ports
Updating IPtables firewall rules - allowing icmp traffic on all ports
Launching user container $CONTAINER
Configured container 'preemptive-public-email-vm' will be started with name 'klt-$IMAGE-xqgm'.
Pulling image: 'gcr.io/$PROJECT_ID/$IMAGE'
Error: Failed to start container: Error response from daemon: {"message":"manifest for gcr.io/$PROJECT_ID/$IMAGE:latest not found: manifest unknown: Failed to fetch \"latest\" from request \"/v2/$PROJECT_ID/$IMAGE/manifests/latest\"."}
Saving welcome script to profile.d
Main process exited, code=exited, status=1/FAILURE
Failed with result 'exit-code'.
Consumed 96ms CPU time
This is the cloudbuild.yaml:
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/$IMAGE:$COMMIT_SHA', './folder_name']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/$IMAGE:$COMMIT_SHA']
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: gcloud
    args:
      - 'run'
      - 'deploy'
      - 'run-public-email'
      - '--image'
      - 'gcr.io/$PROJECT_ID/$IMAGE:$COMMIT_SHA'
      - '--region'
      - 'us-central1'
images:
  - 'gcr.io/$PROJECT_ID/$IMAGE:$COMMIT_SHA'
This is the dockerfile:
FROM python:3.9.7-slim-buster
WORKDIR /app
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY . .
CMD [ "python3", "hello.py" ]
This is hello.py:
import os

from flask import Flask

app = Flask(__name__)

@app.route('/')
def home():
    return "Hello world"

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0', port=int(os.environ.get('PORT', 8080)))
You can update the Container-Optimized OS VM with the newly built image by using the gcloud compute instances update-container command in your cloudbuild.yaml file. This command updates Compute Engine VM instances that run container images.
Note that the VM restarts whenever its container image is updated. When this happens it may get a new IP address; you can use a static IP to avoid that, if required.
Example cloudbuild.yaml:
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/$IMAGE:$COMMIT_SHA', './folder_name']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/$IMAGE:$COMMIT_SHA']
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: gcloud
    args:
      - 'compute'
      - 'instances'
      - 'update-container'
      - 'Instance Name'
      - '--container-image'
      - 'gcr.io/$PROJECT_ID/$IMAGE:$COMMIT_SHA'
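For reference, that last step is roughly equivalent to running the command below yourself; the instance name and zone are placeholders for your own values:
gcloud compute instances update-container INSTANCE_NAME \
  --zone us-central1-a \
  --container-image gcr.io/$PROJECT_ID/$IMAGE:$COMMIT_SHA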
So I have localstack running locally (on my laptop) and can deploy a serverless app to it and then invoke a Lambda.
However, I am really struggling with doing the same thing in GitLab CI.
This is the relevant part of .gitlab-ci.yml:
integration-test:
  stage: integration-test
  image: node:14-alpine3.12
  tags:
    - docker
  services:
    - name: localstack/localstack
      alias: localstack
  variables:
    LAMBDA_EXECUTOR: docker
    HOSTNAME_EXTERNAL: localstack
    DEFAULT_REGION: eu-west-1
    USE_SSL: "false"
    DEBUG: "1"
    AWS_ACCESS_KEY_ID: test
    AWS_SECRET_ACCESS_KEY: test
    AWS_DEFAULT_REGION: eu-west-1
  script:
    - npm ci
    - npx sls deploy --stage local
    - npx jest --testMatch='**/*.integration.js'
  only:
    - merge_requests
Localstack starts and the deployment works fine. But as soon as a lambda is invoked (in an integration test), localstack tries to create a container for the lambda to run in, and that's when it fails with the following:
Lambda process returned error status code: 1. Result: . Output:\\nCannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?\\nmust specify at least one container source (.....)
I tried to set DOCKER_HOST to tcp://docker:2375 but then it fails with:
Lambda process returned error status code: 1. Result: . Output:\\nerror during connect: Post http://docker:2375/v1.29/containers/create: dial tcp: lookup docker on 169.254.169.254:53: no such host\
DOCKER_HOST set to tcp://localhost:2375 complains too:
Lambda process returned error status code: 1. Result: . Output:\\nCannot connect to the Docker daemon at tcp://localhost:2375. Is the docker daemon running?\\nmust specify at least one container source
Did anyone ever get lambdas to run within localstack on shared GitLab runners?
Thanks for your help :)
Running Docker-in-Docker is usually a bad idea, since it's a big security risk: granting access to the local Docker daemon is equivalent to granting root privileges on the runner.
If you still want to use the Docker daemon installed on the host to spawn containers, refer to the official documentation: https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#use-docker-socket-binding
which boils down to adding
volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
to the [runners.docker] section in your runner config, as sketched below.
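A minimal sketch of where that line lives in the runner's config.toml; the name, URL and token below are placeholders:
[[runners]]
  name = "my-docker-runner"
  url = "https://gitlab.example.com/"
  token = "RUNNER_TOKEN"
  executor = "docker"
  [runners.docker]
    image = "docker:latest"
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]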
The question is, why do you need Docker at all? According to https://github.com/localstack/localstack , setting LAMBDA_EXECUTOR to local will
run Lambda functions in a temporary directory on the local machine
which should be the best approach for your problem and won't compromise the security of your runner host. A sketch of the change is below.
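A minimal sketch of the change in the job above; only the executor variable changes, the rest of the job stays as in your .gitlab-ci.yml:
integration-test:
  # ... image, services, script, etc. as before ...
  variables:
    LAMBDA_EXECUTOR: local   # run lambdas in a temp dir inside the job, no Docker daemon needed
    HOSTNAME_EXTERNAL: localstack
    DEFAULT_REGION: eu-west-1
    # ... remaining variables as before ...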
I am using a CI tool called Drone (drone.io) and I really want to do some integration tests with it. What I want is for Drone to start my application container on some port on the Drone host so that I can run integration tests against it. For example, in the .drone.yml file:
build:
  image: python3.5-uwsgi
  pull: true
  auth_config:
    username: some_user
    password: some_password
    email: email
  commands:
    - pip install --user --no-cache-dir -r requirements.txt
    - python manage.py integration_test -h 127.0.0.1:5000
    # this should send various requests to 127.0.0.1:5000
    # to test my application's behaviour

compose:
  my_application:
    # build and run a container based on dockerfile in local repo on port 5000

publish:

deploy:
Drone 0.4 can't start a service from your Dockerfile. If you want to start a docker container, you have to build it beforehand, outside this build, push it to Docker Hub or your own registry, and reference that image in the compose section (a sketch follows); see http://readme.drone.io/usage/services/#images:bfc9941b6b6fd7b4ef09dd0ccd08af0c
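A minimal sketch of that compose section; the image name below is a placeholder for whatever you pushed to your registry:
compose:
  my_application:
    image: your-registry/my-application:latest   # pre-built and pushed outside this build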
You can also start your application in the build step with nohup python manage.py server -h 127.0.0.1:5000 & before running your integration tests. Make sure your application has started and is listening on port 5000 before you run integration_test; a sketch follows.
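A rough sketch of that approach in the build section; the sleep is a crude, hypothetical wait and should be tuned (or replaced with a proper readiness check) for your app:
build:
  image: python3.5-uwsgi
  commands:
    - pip install --user --no-cache-dir -r requirements.txt
    - nohup python manage.py server -h 127.0.0.1:5000 &
    - sleep 5   # crude wait so the server is listening before the tests run
    - python manage.py integration_test -h 127.0.0.1:5000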
I recommend you use Drone 0.5 with pipelines: you can build the docker image and push it to a registry in an earlier step, and then use it as a service inside your build.