manifest unknown: Failed to fetch "latest" from request "/v2/$PROJECT_ID/$IMAGE/manifests/latest" - google-cloud-build

I have some Python files inside a VM that run every week to scrape info from a website. This is automated through Cloud Scheduler and Cloud Functions, and it is confirmed to work. I wanted to use Cloud Build and Cloud Run to update the code inside the VM each time I update the code in GitHub. I read somewhere that in order to deploy a container image to a VM, the VM has to run a container-optimized OS, so I manually made a new VM matching that criterion through Compute Engine. The container-OS VM is already made; I just need its container image updated with the new image built from the updated GitHub code.
I'm trying to build a container image that I will later use to update the code inside the virtual machine. A Cloud Build trigger runs every time I push to a folder in my GitHub repository.
I checked Container Registry and the images are being created, but I keep getting this error when I check the virtual machine:
"Error: Failed to start container: Error response from daemon:
{
"message":"manifest for gcr.io/$PROJECT_ID/$IMAGE:latest not found: manifest unknown: Failed to fetch \"latest\" from request \"/v2/$PROJECT_ID/$IMAGE:latest/manifests/latest\"."
}"
Why is the request being made for the latest tag when I wanted the tag with the commit hash and how can I fix it?
This is the virtual machine log (sudo journalctl -u konlet-startup):
Started Containers on GCE Setup.
Starting Konlet container startup agent
Downloading credentials for default VM service account from metadata server
Updating IPtables firewall rules - allowing tcp traffic on all ports
Updating IPtables firewall rules - allowing udp traffic on all ports
Updating IPtables firewall rules - allowing icmp traffic on all ports
Launching user container $CONTAINER
Configured container 'preemptive-public-email-vm' will be started with name 'klt-$IMAGE-xqgm'.
Pulling image: 'gcr.io/$PROJECT_ID/$IMAGE'
Error: Failed to start container: Error response from daemon: {"message":"manifest for gcr.io/$PROJECT_ID/$IMAGE:latest not found: manifest unknown: Failed to fetch \"latest\" from request \"/v2/$PROJECT_ID/$IMAGE/manifests/latest\"."}
Saving welcome script to profile.d
Main process exited, code=exited, status=1/FAILURE
Failed with result 'exit-code'.
Consumed 96ms CPU time
This is the cloudbuild.yaml:
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/$IMAGE:$COMMIT_SHA', './folder_name']
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/$PROJECT_ID/$IMAGE:$COMMIT_SHA']
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  entrypoint: gcloud
  args:
  - 'run'
  - 'deploy'
  - 'run-public-email'
  - '--image'
  - 'gcr.io/$PROJECT_ID/$IMAGE:$COMMIT_SHA'
  - '--region'
  - 'us-central1'
images:
- 'gcr.io/$PROJECT_ID/$IMAGE:$COMMIT_SHA'
This is the Dockerfile:
FROM python:3.9.7-slim-buster
WORKDIR /app
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY . .
CMD [ "python3", "hello.py" ]
This is hello.py:
import os
from flask import Flask

app = Flask(__name__)

@app.route('/')
def home():
    return "Hello world"

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0', port=int(os.environ.get('PORT', 8080)))

You can update the GCE Container-OS VM with the newly built image by adding a gcloud step to the cloudbuild.yaml file: the gcloud compute instances update-container command updates Compute Engine VM instances that run container images. As for the error: the VM's container declaration references gcr.io/$PROJECT_ID/$IMAGE without a tag (see "Pulling image" in the log), and Docker resolves an untagged reference to the latest tag, which your build never pushes. Passing the $COMMIT_SHA-tagged image to update-container on every build fixes this.
Note that the VM may restart whenever its container image is updated. When this happens it can be assigned a new ephemeral IP; you can reserve a static IP to avoid that, if required.
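As to why the request is for latest at all: the VM log shows Konlet pulling 'gcr.io/$PROJECT_ID/$IMAGE' with no tag, and Docker fills in latest for an untagged reference. A minimal sketch of that defaulting rule (illustrative only; it ignores the edge case of a port in the registry host):

```shell
# Mimics Docker's tag defaulting: an image reference without a tag gets ":latest".
normalize_ref() {
  case "$1" in
    *:*) printf '%s\n' "$1" ;;         # a tag is already present
    *)   printf '%s\n' "$1:latest" ;;  # no tag: default to latest
  esac
}
normalize_ref "gcr.io/my-project/my-image"   # prints gcr.io/my-project/my-image:latest
```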
Example cloudbuild.yaml:
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/$IMAGE:$COMMIT_SHA', './folder_name']
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/$PROJECT_ID/$IMAGE:$COMMIT_SHA']
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  entrypoint: gcloud
  args:
  - 'compute'
  - 'instances'
  - 'update-container'
  - 'Instance Name'
  - '--container-image'
  - 'gcr.io/$PROJECT_ID/$IMAGE:$COMMIT_SHA'
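Note that update-container also needs to know the instance's zone unless a default zone is configured for the service account running the build. Run standalone, the step corresponds to something like the following, with INSTANCE_NAME and the zone as placeholders to substitute:

```shell
gcloud compute instances update-container INSTANCE_NAME \
  --zone us-central1-a \
  --container-image gcr.io/$PROJECT_ID/$IMAGE:$COMMIT_SHA
```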

Related

Minikube Accessing Images Added With Registry Addon

I’ve followed the instructions outlined on this page and pushed a local image to a local 3 node Minikube cluster with the registry add-on enabled and the cluster started with insecure-registry flag, but I get the following error when I try to create a Pod with the image:
Normal Pulling 9m18s (x4 over 10m) kubelet Pulling image "192.168.99.100:5000/myapp:v1"
Warning Failed 9m18s (x4 over 10m) kubelet Failed to pull image "192.168.99.100:5000/myapp:v1": rpc error: code = Unknown desc = Error response from daemon: Get "https://192.168.99.100:5000/v2/": http: server gave HTTP response to HTTPS client
Any advice on resolving this would be greatly appreciated
My Minikube (v1.23.2) is on macOS (Big Sur 11.6) using the VirtualBox driver. It is a three-node cluster. My Docker Desktop version is 20.10.8.
These are the steps I followed:
Got my cluster's VMs' IP range - 192.168.99.0/24
Added the following entry to my Docker Desktop config:
"insecure-registries": [
  "192.168.99.0/24"
]
Started Minikube with the insecure-registry flag:
$ minikube start --insecure-registry="192.168.99.0/24"
Run:
$ minikube addons enable registry
Tagged the image I want to push:
$ docker tag docker.io/library/myapp:v1 $(minikube ip):5000/myapp:v1
Pushed the image:
$ docker push $(minikube ip):5000/myapp:v1
The push works ok - when I exec onto the registry Pod I can see the image in the filesystem. However, when I try to create a Pod using the image, I get the error mentioned above.
My Pod manifest is:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: myapp
  name: myapp
spec:
  containers:
  - image: 192.168.99.100:5000/myapp:v1
    name: myapp
    imagePullPolicy: IfNotPresent
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
The issue was resolved by deleting the cluster and recreating it with the insecure-registry flag set from the start. Originally I had created the cluster, stopped it, and then started it again with the flag; for some reason that didn't work, but creating the cluster with the flag from the very first start did.
If you're going to be creating clusters with the registry addon a lot, it might be worth adding the flag permanently to your config. Replace the IP with your cluster's subnet:
$ minikube config set insecure-registry "192.168.99.0/24"
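The recreate-from-scratch fix can be sketched as a command sequence. The subnet is the one from this question, and the node count and driver mirror the setup described above; treat it as a sketch rather than a verified recipe:

```shell
# Delete the old cluster so the insecure-registry setting applies from first boot.
minikube delete
# Persist the flag so future clusters pick it up automatically.
minikube config set insecure-registry "192.168.99.0/24"
# Recreate the three-node cluster with the VirtualBox driver.
minikube start --nodes 3 --driver=virtualbox
minikube addons enable registry
```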

docker-compose pull Error: "error creating temporary lease: read-only file system"

I'm trying to run docker-compose pull but I get some errors that I don't know what to do with.
My docker-compose.yaml file:
version: '3'
services:
  strapi:
    image: strapi/strapi
    environment:
      DATABASE_CLIENT: postgres
      DATABASE_NAME: strapi
      DATABASE_HOST: postgres
      DATABASE_PORT: 5432
      DATABASE_USERNAME: strapi
      DATABASE_PASSWORD: strapi
    volumes:
      - ./app:/srv/app
    ports:
      - '1337:1337'
    depends_on:
      - postgres
  postgres:
    image: postgres
    environment:
      POSTGRES_DB: strapi
      POSTGRES_USER: strapi
      POSTGRES_PASSWORD: strapi
    volumes:
      - ./data:/var/lib/postgresql/data
The error message:
Pulling postgres ... error
Pulling strapi ... error
ERROR: for strapi error creating temporary lease: write /var/lib/desktop-containerd/daemon/io.containerd.metadata.v1.bolt/meta.db: read-only file system: unknown
ERROR: for postgres error creating temporary lease: write /var/lib/desktop-containerd/daemon/io.containerd.metadata.v1.bolt/meta.db: read-only file system: unknown
ERROR: error creating temporary lease: write /var/lib/desktop-containerd/daemon/io.containerd.metadata.v1.bolt/meta.db: read-only file system: unknown
error creating temporary lease: write /var/lib/desktop-containerd/daemon/io.containerd.metadata.v1.bolt/meta.db: read-only file system: unknown
I tried a multitude of things, so YMMV, but here are all of the steps I did that ultimately got it working.
I am using Windows 10 with the WSL2 backend on Ubuntu, so again YMMV, as I see macOS is tagged. This is one of the few questions I found related to mine, so I thought it would be valuable.
Steps for success:
Update WSL (wsl --update; unrelated to the GitHub issue below)
Stop Docker Desktop
Stop WSL (wsl --shutdown)
Unregister the docker-desktop distro (which contains binaries, but no data): wsl --unregister docker-desktop
Restart Docker Desktop (try running as admin)
Enable Docker Compose V2 (Settings -> General -> Use Docker Compose V2)
Associated GitHub issue link
Extra Info:
I ended up using V2 of Docker Compose when it worked, though it works either way now that the image has pulled properly.
Before this, I had unsuccessfully restarted, reinstalled, and factory-reset Docker Desktop many times.

Drone CI/CD only stuck in exec pipeline

When I use a docker pipeline, the build succeeds.
But when I use an exec pipeline, it is always stuck in pending.
And I don't know what is going wrong.
kind: pipeline
type: exec
name: deployment

platform:
  os: linux
  arch: amd64

steps:
- name: backend image build
  commands:
  - echo start build images...
  # - export MAJOR_VERSION=1.0.rtm.
  # - export BUILD_NUMBER=$DRONE_BUILD_NUMBER
  # - export WORKSPACE=`pwd`
  # - bash ./jenkins_build.sh
  when:
    branch:
    - master
The docker pipeline is fine:
kind: pipeline
type: docker
name: deployment

steps:
- name: push image to repo
  image: plugins/docker
  settings:
    dockerfile: src/ZR.DataHunter.Api/Dockerfile
    tags: latest
    insecure: true
    registry: "xxx"
    repo: "xxx"
    username:
      from_secret: username
    password:
      from_secret: userpassword
First of all, it's important to note that exec pipelines can be used only when Drone is self-hosted. As the official docs say:
Please note exec pipelines are disabled on Drone Cloud. This feature is only available when self-hosting.
When Drone is self-hosted, be sure that:
The exec runner is installed
It is configured properly in its config file (so it can connect to the Drone server)
The drone-runner-exec service is running
After the service is started, look for its log file and you have to see an info message saying it was able to connect to your Drone server:
level=info msg="successfully pinged the remote server"
level=info msg="polling the remote server"
It's also possible to view the web UI (Dashboard) of the running service if you enable it.
So if you see it can poll your server, then your exec pipeline should run as expected.
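For reference, the exec runner's config on Linux is an env-style file (commonly /etc/drone-runner-exec/config). The host, secret, and log path below are placeholders; this is only a sketch of the key names from the runner docs:

```
DRONE_RPC_PROTO=https
DRONE_RPC_HOST=drone.example.com
DRONE_RPC_SECRET=your-shared-secret
DRONE_LOG_FILE=/var/log/drone-runner-exec/log.txt
```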

Running lambdas in localstack in gitlab-ci

So I have localstack running locally (on my laptop) and can deploy serverless app to it and then invoke a Lambda.
However, I am really struggling with doing the same thing in gitlab-ci.
This is the relevant part of .gitlab-ci.yml:
integration-test:
  stage: integration-test
  image: node:14-alpine3.12
  tags:
    - docker
  services:
    - name: localstack/localstack
      alias: localstack
  variables:
    LAMBDA_EXECUTOR: docker
    HOSTNAME_EXTERNAL: localstack
    DEFAULT_REGION: eu-west-1
    USE_SSL: "false"
    DEBUG: "1"
    AWS_ACCESS_KEY_ID: test
    AWS_SECRET_ACCESS_KEY: test
    AWS_DEFAULT_REGION: eu-west-1
  script:
    - npm ci
    - npx sls deploy --stage local
    - npx jest --testMatch='**/*.integration.js'
  only:
    - merge_requests
The localstack gets started and the deployment works fine. But as soon as lambda is invoked (in an integration test), localstack is trying to create a container for the lambda to run in and that's when it fails with the following:
Lambda process returned error status code: 1. Result: . Output: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running? must specify at least one container source (.....)
I tried to set DOCKER_HOST to tcp://docker:2375 but then it fails with:
Lambda process returned error status code: 1. Result: . Output: error during connect: Post http://docker:2375/v1.29/containers/create: dial tcp: lookup docker on 169.254.169.254:53: no such host
DOCKER_HOST set to tcp://localhost:2375 complains too:
Lambda process returned error status code: 1. Result: . Output: Cannot connect to the Docker daemon at tcp://localhost:2375. Is the docker daemon running? must specify at least one container source
Did anyone ever get lambdas to run within localstack within shared gitlab runners?
Thanks for your help :)
Running Docker in Docker is usually a bad idea, since it is a big security vulnerability: granting access to the local Docker daemon equals granting root privileges on the runner.
If you still want to use the Docker daemon installed on the host to spawn containers, refer to the official documentation - https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#use-docker-socket-binding
which boils down to adding
volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
to the [runners.docker] section in your runner config.
The question is, why do you need Docker at all? According to https://github.com/localstack/localstack, setting LAMBDA_EXECUTOR to local will
run Lambda functions in a temporary directory on the local machine
which should be the best approach to your problem, and won't compromise the security of your runner host.
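Applied to the .gitlab-ci.yml from the question, that is a one-line change in the variables block (a sketch; everything else in the job stays as posted):

```yaml
variables:
  LAMBDA_EXECUTOR: local        # run lambdas in a temp dir, no Docker daemon needed
  HOSTNAME_EXTERNAL: localstack
  DEFAULT_REGION: eu-west-1
```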

Integration testing with drone.io

I am using a CI tool called Drone (drone.io), and I really want to do some integration tests with it. What I want is for Drone to start my application container on some port on the Drone host so that I can run integration tests against it. For example, in the .drone.yml file:
build:
  image: python3.5-uwsgi
  pull: true
  auth_config:
    username: some_user
    password: some_password
    email: email
  commands:
    - pip install --user --no-cache-dir -r requirements.txt
    - python manage.py integration_test -h 127.0.0.1:5000
    # this should send various requests to 127.0.0.1:5000
    # to test my application's behaviour

compose:
  my_application:
    # build and run a container based on dockerfile in local repo on port 5000

publish:
deploy:
Drone 0.4 can't start a service from your Dockerfile. If you want to start a Docker container, you have to build it beforehand, outside this build, push it to Docker Hub or your own registry, and reference it in the compose section; see http://readme.drone.io/usage/services/#images:bfc9941b6b6fd7b4ef09dd0ccd08af0c
You can also start your application in the build step itself, with nohup python manage.py server -h 127.0.0.1:5000 & before running your integration tests. Make sure that your application is started and listening on port 5000 before you run integration_test.
I recommend using Drone 0.5 with pipelines: you can build a Docker image and push it to a registry before the build, and use it as a service inside your build.
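The nohup suggestion, sketched into the question's build section (the sleep is an assumed crude wait; a real script would poll the port until the app answers):

```yaml
build:
  image: python3.5-uwsgi
  commands:
    - pip install --user --no-cache-dir -r requirements.txt
    # start the app in the background so the build step can continue
    - nohup python manage.py server -h 127.0.0.1:5000 &
    - sleep 5   # crude wait for the server to start listening
    - python manage.py integration_test -h 127.0.0.1:5000
```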
