When attempting to download private Go dependencies in GitLab CI, I am given the error:
go: gitlab.com/path/to/private/repo@tag: reading gitlab.com/path/to/private/repo/go.mod at revision repo/tag: git ls-remote -q origin in /go/pkg/mod/cache/vcs/b9c14ef4557b9e7a5cee451c4c1a3b7c3f05519199a2d30cb24ae253b6a6fa2b: exit status 128:
remote: The project you were looking for could not be found or you don't have permission to view it.
fatal: repository 'https://gitlab.com/path/to.git/' not found
The .netrc file is set with a known good value, but it doesn't appear to be working correctly in GitLab CI. I have replaced my own .netrc file with the one I am attempting to use, deleted ~/go/pkg/mod, and run go mod tidy in my repo, which successfully downloaded the private dependencies. Additionally, if I inject the .netrc contents into a docker build and have the Dockerfile retrieve the dependencies, there are no issues.
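For reference, the NETRC CI variable holds a standard three-field .netrc entry along these lines (the login and token are placeholders, not the real values):

machine gitlab.com
login <username>
password <personal-access-token>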
This is the simplest .gitlab-ci.yml file that will reproduce the issue:
failingJob:
  image: golang:1.17-alpine
  script:
    - apk update && apk add --no-cache git
    - echo "${NETRC}" > /root/.netrc
    - go mod download
  stage: test
  variables:
    GOPRIVATE: gitlab.com/private-org

workingJob:
  image: docker:19.03.12
  services:
    - docker:19.03.12-dind
  script:
    - docker build -t my-docker-image --build-arg SHARE_NETRC="$NETRC" --build-arg GOPRIVATE=gitlab.com/private-org -f docker/Dockerfile .
Dockerfile:
FROM golang:1.17-alpine AS BUILDER
ARG SHARE_NETRC
ARG GOPRIVATE
RUN apk update && apk add --no-cache git ca-certificates
COPY . /app/src
ENV GOPRIVATE=${GOPRIVATE}
ENV SHARE_NETRC=${SHARE_NETRC}
RUN echo "${SHARE_NETRC}" > /root/.netrc
WORKDIR /app/src
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -ldflags="-w -s" -o /go/bin/app
FROM scratch
COPY --from=BUILDER /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
COPY --from=BUILDER /go/bin/app /
ENTRYPOINT [ "/app" ]
As you can see, the base image used in the Dockerfile is the same as the image used in failingJob. The values for the .netrc file are the same between the jobs, and the location of the file is the same. The only difference I can see between the two jobs is that one runs directly on the runner while the other runs inside a Docker build context.
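To make that comparison concrete, a couple of diagnostic lines can be added to the failing job's script before go mod download (a debugging sketch, not part of the original job):

    - echo "HOME=$HOME"
    - ls -l /root/.netrc "$HOME/.netrc" || true
    - go env GOPRIVATE GOPATH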
Related
I'm having a lot of difficulty getting Docker to work on OSX. I have the following Dockerfile:
FROM node:10.17.0-alpine AS staging
CMD ["mkdir","/srv/app"]
WORKDIR /srv/app
COPY . .
RUN ["npm","install"]
FROM node:10.17.0-alpine AS builder
RUN ["mkdir", "/srv/app" ]
COPY --from=staging /srv/app/ /srv/app/
WORKDIR /srv/app
RUN ["npm","run-script","build-staging"]
FROM nginx:alpine
COPY --from=builder /srv/app/www/ /usr/share/nginx/html/
COPY nginx.conf /etc/nginx/nginx.conf
I call this file like so: docker build -f ./Dockerfile --no-cache -t <my-app>:staging .
On Windows this works perfectly, but my team has a mix of Windows and OSX computers, and docker build fails on all of our OSX machines with this error:
=> ERROR [stage-2 2/3] COPY --from=builder /srv/app/www/ /usr/share/nginx/html/
------
> [stage-2 2/3] COPY --from=builder /srv/app/www/ /usr/share/nginx/html/:
------
failed to compute cache key: "/srv/app/www" not found: not found
I came across a possible cause suggesting I try adding "* text=auto" to my .gitattributes file, but this didn't fix the issue even after recloning my repo.
How is it possible for my Dockerfile to work on Windows and not on OSX?
EDIT: I should add that I have a nearly identical project with an identical Dockerfile whose docker push works on all Windows machines and all OSX machines.
I discovered the issue. The Docker memory and swap resources available were too small for the project in question. The error message provided by Docker was not very helpful in this case, and docker build was getting interrupted once Docker reached its memory limit.
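If you hit the same symptom, a quick way to check how much memory the Docker VM actually has before raising the limit in Docker Desktop's settings (the value is reported in bytes):

docker info --format '{{.MemTotal}}'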
Trying to build an image using a Dockerfile, but seeing the following error:
[6/7] RUN go mod download && go mod verify:
#10 4.073 go: github.com/private-repo/repo-name@v0.0.0-20210608233213-12dff748001d: invalid version: git fetch -f origin refs/heads/*:refs/heads/* refs/tags/*:refs/tags/* in /app/pkg/mod/cache/vcs/40ae0075df7a81e12f09aaa30354204db332938b8767b01daf7fd56ca3ad7956: exit status 128:
#10 4.073 git@github.com: Permission denied (publickey).
#10 4.073 fatal: Could not read from remote repository.
#10 4.073 Please make sure you have the correct access rights
#10 4.073 and the repository exists.
------
executor failed running [/bin/sh -c go mod download && go mod verify]: exit code: 1
Here is my Dockerfile:
#Start from base image 1.16.5:
FROM golang:1.16.5
ARG SSH_PRIVATE_KEY
ENV ELASTIC_HOSTS=localhost:9200
ENV LOG_LEVEL=info
#Configure the repo url so we can configure our work directory:
ENV REPO_URL=github.com/private-repo/repo-name
#Set up our $GOPATH
ENV GOPATH=/app
ENV APP_PATH=$GOPATH/src/$REPO_URL
#/app/src/github.com/private-repo/repo-name/src
#Copy the entire source code from the current directory to $WORKPATH
ENV WORKPATH=$APP_PATH/src
COPY src $WORKPATH
WORKDIR $WORKPATH
RUN mkdir -p ~/.ssh && umask 0077 && echo "${SSH_PRIVATE_KEY}" > ~/.ssh/id_rsa \
    && git config --global url."ssh://git@github.com/private-repo".insteadOf https://github.com \
    && ssh-keyscan github.com >> ~/.ssh/known_hosts
#prevent the reinstallation of vendors at every change in the source code
COPY go.mod go.sum $WORKDIR
RUN go mod download && go mod verify
RUN go build -x -o image-name .
#Expose port 8081 to the world:
EXPOSE 8081
CMD ["./image-name"]
In my Go env, I have GO111MODULE=on and GOPRIVATE=github.com/private-repo/*
Also, I'm able to authenticate at my terminal:
ssh -T git@github.com
Hi private-repo! You've successfully authenticated, but GitHub does not provide shell access.
'go get private-repo-name' is successful.
I build via the Dockerfile:
docker build --build-arg SSH_PRIVATE_KEY -t image-name .
which includes the command:
RUN go build -x -o image-name .
What I tried:
GO111MODULE="on"
GONOPROXY="github.com/user-id/*"
GONOSUMDB="github.com/user-id/*"
GOPRIVATE="github.com/user-id/*"
GOPROXY="https://proxy.golang.org,direct"
[url "ssh://git#github.com/"]
insteadOf = https://github.com/
Basically, there are multiple repos under github.com/user-name/, all of which are private, and I want to use them.
I would rather use a more specific directive:
git config --global \
    url."ssh://git@github.com/user-name/*".insteadOf https://github.com/user-name/*
That way, the insteadOf rewrite won't apply to all https://github.com URLs, only to those matching the private repositories for which you need to use SSH.
The OP ios-mxe confirms in the comments that it is indeed working as expected.
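Applied to the Dockerfile above, that RUN step might look like the sketch below (user-name is a placeholder, and since insteadOf matches URL prefixes the directive can also be written without the asterisks). Setting GOPRIVATE in the image as well keeps the go command from consulting the public proxy and checksum database for those modules:

RUN mkdir -p ~/.ssh && umask 0077 && echo "${SSH_PRIVATE_KEY}" > ~/.ssh/id_rsa \
    && git config --global url."ssh://git@github.com/user-name/".insteadOf https://github.com/user-name/ \
    && ssh-keyscan github.com >> ~/.ssh/known_hosts
ENV GOPRIVATE=github.com/user-name/*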
I am trying to deploy my Go 1.14 microservices application onto Google App Engine's flexible environment. I read that there are issues with finding GOROOT and with obtaining the correct dependencies.
I wanted to use the flexible environment because I needed to do port forwarding. Since my domain name is used to serve the main application, I wanted port 8081 to serve my microservices.
I followed the instruction from this link:
https://blog.cubieserver.de/2019/go-modules-with-app-engine-flexible/
I tried option 3.
This is my .gitlab-ci.yaml configuration file.
# .gitlab-ci.yaml
stages:
  - build
  - deploy

build_server:
  stage: build
  image: golang:1.14
  script:
    - go mod vendor
    - go install farmcycle.us/user/farmcycle
  artifacts:
    paths:
      - vendor/

deploy_app_engine:
  stage: deploy
  image: google/cloud-sdk:270.0.0
  script:
    - echo $SERVICE_ACCOUNT > /tmp/$CI_PIPELINE_ID.json
    - gcloud auth activate-service-account --key-file /tmp/$CI_PIPELINE_ID.json
    - gcloud --quiet --project $PROJECT_ID app deploy app.yaml
  after_script:
    - rm /tmp/$CI_PIPELINE_ID.json
This is my app.yaml configuration file:
runtime: go
env: flex

network:
  forwarded_ports:
    - 8081/tcp
When I deployed this using the GitLab CI pipeline, the Build stage passed but the Deploy stage failed.
Running with gitlab-runner 13.4.1 (e95f89a0)
on docker-auto-scale 72989761
Preparing the "docker+machine" executor
Preparing environment
00:03
Getting source from Git repository
00:04
Downloading artifacts
00:02
Executing "step_script" stage of the job script
00:03
$ echo $SERVICE_ACCOUNT > /tmp/$CI_PIPELINE_ID.json
$ gcloud auth activate-service-account --key-file /tmp/$CI_PIPELINE_ID.json
Activated service account credentials for: [farmcycle-hk1996#appspot.gserviceaccount.com]
$ gcloud --quiet --project $PROJECT_ID app deploy app.yaml
ERROR: (gcloud.app.deploy) Staging command [/usr/lib/google-cloud-sdk/platform/google_appengine/go-app-stager /builds/JLiu1272/farmcycle-backend/app.yaml /builds/JLiu1272/farmcycle-backend /tmp/tmprH6xQd/tmpSIeACq] failed with return code [1].
------------------------------------ STDOUT ------------------------------------
------------------------------------ STDERR ------------------------------------
2020/10/10 20:48:27 staging for go1.11
2020/10/10 20:48:27 Staging Flex app: failed analyzing /builds/JLiu1272/farmcycle-backend: cannot find package "farmcycle.us/user/farmcycle/api" in any of:
($GOROOT not set)
/root/go/src/farmcycle.us/user/farmcycle/api (from $GOPATH)
GOPATH: /root/go
--------------------------------------------------------------------------------
Running after_script
00:01
Running after script...
$ rm /tmp/$CI_PIPELINE_ID.json
Cleaning up file based variables
00:01
ERROR: Job failed: exit code 1
This was the error. Honestly, I am not really sure what this error means or how to fix it.
Surprisingly, even using the latest Go runtime (runtime: go1.15), go modules appear not to be used. See golang-docker.
However, Flex builds your app into a container regardless of runtime, so in this case it may be better to use a custom runtime and build your own Dockerfile:
runtime: custom
env: flex
Then you can use e.g. Go 1.15 and go modules (without vendoring) and whatever else you'd like. For a simple main.go that uses modules:
FROM golang:1.15 as build
ARG PROJECT="flex"
WORKDIR /${PROJECT}
COPY go.mod .
RUN go mod download
COPY main.go .
RUN GOOS=linux \
    go build -a -installsuffix cgo \
    -o /bin/server \
    .
FROM gcr.io/distroless/base-debian10
COPY --from=build /bin/server /
USER 999
ENV PORT=8080
EXPOSE ${PORT}
ENTRYPOINT ["/server"]
This ought to be possible with Google's recently announced support for buildpacks but I've not tried it.
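If you want to experiment with that route, Google's buildpacks can also be driven locally with the pack CLI (a sketch; the image name is a placeholder):

pack build my-app --builder gcr.io/buildpacks/builder:v1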
I want to deploy a dockerized Spring Boot application, built with Gradle, to the Heroku platform whenever a commit is pushed to GitHub.
I successfully deployed the Docker image with the CLI, by building the image locally and then deploying it.
I successfully deployed "on GitHub push" with the "heroku-18" stack. After each commit, Heroku detects that my project is a Gradle one, builds it and deploys it with no problem. This method doesn't involve Docker.
Now I want to switch to the "container" stack so that Heroku builds my Dockerfile and deploys my app after each commit. The Dockerfile is correctly detected, but because the JAR has not been generated, the Dockerfile step fails.
How can I trigger the generation of the JAR on Heroku's side so that the Dockerfile can copy it into the container?
Heroku logs
=== Fetching app code
=== Building web (Dockerfile)
Sending build context to Docker daemon 50.69kB
Step 1/11 : FROM openjdk:11-jre-slim
11-jre-slim: Pulling from library/openjdk
...
Step 4/11 : ADD build/libs/myapp-1.0-SNAPSHOT.jar /app/myapp.jar
ADD failed: stat /var/lib/docker/tmp/docker-builder545575378/build/libs/myapp-1.0-SNAPSHOT.jar: no such file or directory
heroku.yml
build:
  docker:
    web: Dockerfile
Dockerfile
FROM openjdk:11-jre-slim
VOLUME /var/log/my-app
ARG JAR_FILE
ADD build/libs/my-app-1.0-SNAPSHOT.jar /app/my-app.jar
RUN chgrp -R 0 /app
RUN chmod -R g+rwX /app
RUN chgrp -R 0 /var/log/my-app
RUN chmod -R g+rwX /var/log/my-app
CMD [ "-jar", "/app/my-app.jar" ]
ENTRYPOINT ["java"]
EXPOSE 8080
Probably you can't do that outside your Dockerfile.
But you can use Docker multi-stage builds like this:
FROM adoptopenjdk/openjdk14-openj9:alpine as build
COPY . /opt/app/src
WORKDIR /opt/app/src
RUN ./gradlew clean bootJar
FROM adoptopenjdk/openjdk14-openj9:alpine-jre
COPY --from=build /opt/app/src/build/libs/*.jar /opt/app/app.jar
CMD ["java", "-server", "-XX:MaxRAM=256m", "-jar", "/opt/app/app.jar"]
I wish to use the GitLab Container Registry to temporarily store my newly built Docker image. In order to have Docker available (i.e. docker login, docker build, docker push), I used the docker-in-docker executor; then, from the GitLab Pipelines error messages, I realized I needed to place a Dockerfile at the project root:
$ docker build --pull -t $CONTAINER_TEST_IMAGE .
unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /builds/xxxxx.com/laravel/Dockerfile: no such file or directory
My Dockerfile includes centos:7, PHP, Node.js, Composer and Sass installations. I observe that after each commit, the GitLab runner goes through the Dockerfile once and installs all of them from the beginning, which makes the whole build stage very slow and rather absurd: why should amending one word in my project require installing so many things again for deployment?
Ideally, I would like to pre-build a Docker image from a Dockerfile that contains the installations mentioned above plus Docker itself (so that docker login, docker build and docker push work), store it on the GitLab Runner server, and after each commit reuse this image to build the new image that is pushed to the GitLab Container Registry.
However, I faced 2 problems:
1) Even if I include the Docker installation in the pre-built Docker image, I cannot start Docker with systemctl (systemctl start docker), due to a D-Bus problem:
Failed to get D-Bus connection: Operation not permitted
moreover, some articles mention that a Docker container should not run background services;
2) when I use dind, it requires a Dockerfile at the project root; with a pre-built Docker image, I actually have no use for this Dockerfile at the project root; so is dind the wrong option?
Actually, what is the proper way to push a Laravel project image to the GitLab Container Registry? (Where should those npm install and composer install commands go?)
image: docker:latest

services:
  - docker:dind

stages:
  - build
  - test
  - deploy

variables:
  CONTAINER_TEST_IMAGE: xxxx
  CONTAINER_RELEASE_IMAGE: yyyy

before_script:
  - docker login -u xxx -p yyy registry.gitlab.com

build:
  stage: build
  script:
    - npm install here?
    - docker build --pull -t $CONTAINER_TEST_IMAGE .
    - docker push $CONTAINER_TEST_IMAGE
There are many questions in your post. I would like to address them as follows:
You can pre-build a Docker image and then use it in your .gitlab-ci.yml file. This can be used to include your specific dependencies.
image: my-custom-image
services:
  - docker:dind
It is important to add the service to the configuration.
Regarding your problem about trying to run the Docker service inside the .gitlab-ci.yml: you actually don't need to do that. GitLab exposes the Docker engine to the executor (either via unix:///var/run/docker.sock or tcp://localhost:2375/). Note that if the runners are executed in a Kubernetes environment, you need to specify DOCKER_HOST as follows:
variables:
  DOCKER_HOST: tcp://localhost:2375/
Your question about where to place npm install is more a fundamental question about how Docker images are built. In short, npm install should be placed in the Dockerfile. For a longer explanation, see this, for example.
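As an illustration of that layering (a minimal sketch, not your full image; the base image, versions and paths are assumptions), the dependency installs live in the Dockerfile so they only re-run when the manifests change:

FROM php:8.1-fpm-alpine
WORKDIR /var/www/html
# Build-time toolchain; cached as its own layer.
RUN apk add --no-cache nodejs npm git unzip
COPY --from=composer:2 /usr/bin/composer /usr/local/bin/composer
# Copy only the dependency manifests first, so the npm/composer layers
# are reused when only application code changes.
COPY composer.json composer.lock package.json package-lock.json ./
RUN composer install --no-dev --no-scripts --prefer-dist && npm ci
# Application source goes in last; a code-only change rebuilds from here.
COPY . .
RUN npm run production

In CI the cache only helps if earlier layers are available to the daemon, e.g. by pulling the previously pushed image and passing --cache-from to docker build.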
Some references:
https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#use-docker-in-docker-executor
https://nodejs.org/en/docs/guides/nodejs-docker-webapp/