Google App Engine Flex Container Deployment Issues - go

I am trying to deploy my Go 1.14 microservices application to Google's flexible environment. I read that there are issues with finding GOROOT and with obtaining the correct dependencies.
I wanted to use the flexible environment because I needed port forwarding. Since my domain name serves the actual application, I wanted port 8081 to run my microservices.
I followed the instruction from this link:
https://blog.cubieserver.de/2019/go-modules-with-app-engine-flexible/
I tried option 3.
This is my .gitlab-ci.yaml configuration file.
# .gitlab-ci.yaml
stages:
  - build
  - deploy

build_server:
  stage: build
  image: golang:1.14
  script:
    - go mod vendor
    - go install farmcycle.us/user/farmcycle
  artifacts:
    paths:
      - vendor/

deploy_app_engine:
  stage: deploy
  image: google/cloud-sdk:270.0.0
  script:
    - echo $SERVICE_ACCOUNT > /tmp/$CI_PIPELINE_ID.json
    - gcloud auth activate-service-account --key-file /tmp/$CI_PIPELINE_ID.json
    - gcloud --quiet --project $PROJECT_ID app deploy app.yaml
  after_script:
    - rm /tmp/$CI_PIPELINE_ID.json
This is my app.yaml configuration file.
runtime: go
env: flex

network:
  forwarded_ports:
    - 8081/tcp
When I deployed this using the GitLab CI pipeline, the Build stage passed but the Deploy stage failed.
Running with gitlab-runner 13.4.1 (e95f89a0)
  on docker-auto-scale 72989761
Preparing the "docker+machine" executor
Preparing environment
Getting source from Git repository
Downloading artifacts
Executing "step_script" stage of the job script
$ echo $SERVICE_ACCOUNT > /tmp/$CI_PIPELINE_ID.json
$ gcloud auth activate-service-account --key-file /tmp/$CI_PIPELINE_ID.json
Activated service account credentials for: [farmcycle-hk1996@appspot.gserviceaccount.com]
$ gcloud --quiet --project $PROJECT_ID app deploy app.yaml
ERROR: (gcloud.app.deploy) Staging command [/usr/lib/google-cloud-sdk/platform/google_appengine/go-app-stager /builds/JLiu1272/farmcycle-backend/app.yaml /builds/JLiu1272/farmcycle-backend /tmp/tmprH6xQd/tmpSIeACq] failed with return code [1].
------------------------------------ STDOUT ------------------------------------
------------------------------------ STDERR ------------------------------------
2020/10/10 20:48:27 staging for go1.11
2020/10/10 20:48:27 Staging Flex app: failed analyzing /builds/JLiu1272/farmcycle-backend: cannot find package "farmcycle.us/user/farmcycle/api" in any of:
($GOROOT not set)
/root/go/src/farmcycle.us/user/farmcycle/api (from $GOPATH)
GOPATH: /root/go
--------------------------------------------------------------------------------
Running after_script
Running after script...
$ rm /tmp/$CI_PIPELINE_ID.json
Cleaning up file based variables
ERROR: Job failed: exit code 1
This was the error. Honestly, I am not really sure what this error means or how to fix it.

Surprisingly, even using the latest Go runtime, runtime: go1.15, go modules appear not to be used; see golang-docker. However, Flex builds your app into a container regardless of runtime, so in this case it may be better to use a custom runtime and build your own Dockerfile:
runtime: custom
env: flex
Then you get to use e.g. Go 1.15 and go modules (without vendoring) and whatever else you'd like. For a simple main.go that uses modules e.g.:
FROM golang:1.15 as build

ARG PROJECT="flex"
WORKDIR /${PROJECT}

COPY go.mod .
RUN go mod download
COPY main.go .

RUN GOOS=linux \
    go build -a -installsuffix cgo \
    -o /bin/server \
    .

FROM gcr.io/distroless/base-debian10
COPY --from=build /bin/server /
USER 999
ENV PORT=8080
EXPOSE ${PORT}
ENTRYPOINT ["/server"]
This ought to be possible with Google's recently announced support for buildpacks but I've not tried it.

Related

Gitlab CI/CD fail with "bash: line 132: go: command not found"

We have installed GitLab on our custom server. We are looking to use the GitLab CI/CD pipeline to build and release our software; for that I'm working on a POC. I have created a project with the following .gitlab-ci.yml:
variables:
  GOOS: linux
  GOARCH: amd64

stages:
  - test
  - build
  - deb-build

run_tests:
  stage: test
  image: golang:latest
  before_script:
    - go mod tidy
  script:
    - go test ./...

build_binary:
  stage: build
  image: golang:latest
  artifacts:
    untracked: true
  script:
    - GOOS=$GOOS GOARCH=$GOARCH go build -o newer .

build deb:
  stage: deb-build
  image: ubuntu:latest
  before_script:
    - mkdir -p deb-build/usr/local/bin/
    - chmod -R 0755 deb-build/*
    - mkdir build
  script:
    - cp newer deb-build/usr/local/bin/
    - dpkg-deb --build deb-build release-1.1.1.deb
    - mv release-1.1.1.deb build
  artifacts:
    paths:
      - build/*
TL;DR: I have updated the gitlab-ci.yml and attached a screenshot of the error.
What I have noticed is that the error persists when I use the shared runner (GJ7z2Aym). If I register my own runner (i.e. a specific runner):
gitlab-runner register --non-interactive --url "https://gitlab.sboxdc.com/" --registration-token "<register_token>" --description "" --executor "docker" --docker-image "docker:latest"
then I see the build passing without any problem.
Failed case.
https://gist.github.com/meetme2meat/0676c2ee8b78b3683c236d06247a8a4d
One that Passed
https://gist.github.com/meetme2meat/058e2656595a428a28fcd91ba68874e8
The failing job is using a runner with the shell executor, which was probably set up when you configured your GitLab instance. This can be seen in the logs on these lines:
Preparing the "shell" executor
Using Shell executor...
The shell executor ignores your job's image: configuration. It runs the job script directly on the machine hosting the runner and tries to find the go binary on that machine (failing in your case). It's a bit like running go commands on some Ubuntu box without having go installed.
Your successful job is using a runner with the docker executor, running your job's script in a golang:latest image as you requested. It's like running docker run golang:latest sh -c '[your script]'. This can be seen in the job's logs:
Preparing the "docker" executor
Using Docker executor with image golang:latest ...
Pulling docker image golang:latest ...
Using docker image sha256:05e[...]
golang:latest with digest golang@sha256:04f[...]
What you can do:
Make sure you configure a runner with a docker executor. Your config.toml would then look like:
[[runners]]
  # ...
  executor = "docker"
  [runners.docker]
    # ...
It seems you already did by registering your own runner.
Configure your jobs to use this runner with job tags. You can set the tag docker_executor on your Docker runner (when registering or via the GitLab UI) and set up something like:
build_binary:
  stage: build
  # Tags a runner must have to run this job
  tags:
    - docker_executor
  image: golang:latest
  artifacts:
    untracked: true
  script:
    - GOOS=$GOOS GOARCH=$GOARCH go build -o newer .
See Runner registration and Docker executor for details.
Since you have used image: golang:latest, go should be in the $PATH.
You need to check at which stage it is failing: run_tests or build_binary.
Add echo $PATH to your script steps to check what $PATH is considered.
Check also whether the error comes from the lack of git, used by Go for accessing modules' remote repositories. See this answer as an example.
From your gists, the default GitLab runner uses a shell executor (which knows nothing about Go).
Instead, the second one uses a Docker executor, based on the Go image.
Registering the (Docker) runner is therefore the right way to ensure the expected executor.

Cache Cloud Native Buildpacks/Paketo.io pack CLI builds on GitHub Actions (e.g. with Spring Boot/Java/Maven buildpacks)?

I'm working on a Spring Boot application that should be packaged into an OCI container using Cloud Native Buildpacks / Paketo.io. I build it with GitHub Actions, where my workflow build.yml looks like this:
name: build

on: [push]

jobs:
  build-with-paketo-push-2-dockerhub:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Login to DockerHub Container Registry
        run: echo $DOCKER_HUB_TOKEN | docker login -u jonashackt --password-stdin
        env:
          DOCKER_HUB_TOKEN: ${{ secrets.DOCKER_HUB_TOKEN }}
      - name: Install pack CLI via apt. See https://buildpacks.io/docs/tools/pack/#pack-cli
        run: |
          sudo add-apt-repository ppa:cncf-buildpacks/pack-cli
          sudo apt-get update
          sudo apt-get install pack-cli
      - name: Build app with pack CLI
        run: pack build spring-boot-buildpack --path . --builder paketobuildpacks/builder:base
      - name: Tag & publish to Docker Hub
        run: |
          docker tag spring-boot-buildpack jonashackt/spring-boot-buildpack:latest
          docker push jonashackt/spring-boot-buildpack:latest
Now the step Build app with pack CLI takes relatively long, since it always downloads the Paketo builder Docker image and then does a full fresh build. This means downloading the JDK and every single Maven dependency. Is there a way to cache a Paketo build on GitHub Actions?
Caching the Docker images on GitHub Actions might be an option - which doesn't seem to be that easy. Another option would be to leverage the Docker official docker/build-push-action Action, which is able to cache the buildx-cache. But I didn't get the combination of pack CLI and buildx-caching to work (see this build for example).
Finally I stumbled upon the general Cloud Native Buildpacks approach on how to cache in the docs:
Cache Images are a way to preserve build optimizing layers across different host machines. These images can improve performance when using pack in ephemeral environments such as CI/CD pipelines.
I found this concept quite nice, since it uses a second cache image, which gets published to a container registry of your choice. This image is then simply used for all Paketo pack CLI builds on every machine where you append the --cache-image parameter - be it your local desktop or any CI/CD platform (like GitHub Actions).
In order to use the --cache-image parameter, we also have to use the --publish flag (since the cache image needs to get published to your container registry!). This also means we need to log into the container registry before we're able to run our pack CLI command. Using Docker Hub this is something like:
echo $DOCKER_HUB_TOKEN | docker login -u YourUserNameHere --password-stdin
Also the Paketo builder image must be a trusted one. As the docs state:
By default, any builder suggested by pack builder suggest is considered trusted.
Since I use a suggested builder, I don't have to do anything here. If you want to use another builder that isn't trusted by default, you need to run a pack config trusted-builders add your/builder-to-trust:bionic command before the final pack CLI command.
Here's the cache-enabled pack CLI command, in case you want to build a Spring Boot app and use Docker Hub as the container registry:
pack build index.docker.io/yourApplicationImageNameHere:latest \
    --builder paketobuildpacks/builder:base \
    --path . \
    --cache-image index.docker.io/yourCacheImageNameHere:latest \
    --publish
Finally the GitHub Action workflow to build and publish the example Spring Boot app https://github.com/jonashackt/spring-boot-buildpack looks like this:
name: build

on: [push]

jobs:
  build-with-paketo-push-2-dockerhub:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Login to DockerHub Container Registry
        run: echo $DOCKER_HUB_TOKEN | docker login -u jonashackt --password-stdin
        env:
          DOCKER_HUB_TOKEN: ${{ secrets.DOCKER_HUB_TOKEN }}
      - name: Install pack CLI via the official buildpack Action https://github.com/buildpacks/github-actions#setup-pack-cli-action
        uses: buildpacks/github-actions/setup-pack@v4.1.0
      - name: Build app with pack CLI using Buildpack Cache image (see https://buildpacks.io/docs/app-developer-guide/using-cache-image/) & publish to Docker Hub
        run: |
          pack build index.docker.io/jonashackt/spring-boot-buildpack:latest \
            --builder paketobuildpacks/builder:base \
            --path . \
            --cache-image index.docker.io/jonashackt/spring-boot-buildpack-paketo-cache-image:latest \
            --publish
Note that when using the pack CLI's --publish flag, we also don't need the extra Tag & publish to Docker Hub step anymore, since pack already does that for us.

Deployed wrong app to Google Cloud, how to prevent?

When I run gcloud app deploy I get the message:
ERROR: (gcloud.app.deploy) The required property [project] is not currently set.
You may set it for your current workspace by running:
$ gcloud config set project VALUE
or it can be set temporarily by the environment variable [CLOUDSDK_CORE_PROJECT]
Setting the project in my current workspace caused me to deploy the wrong app, so I don't want to do that again. I want to try deploying using the environment variable option listed above. How do I do this? What is the deploy syntax to use CLOUDSDK_CORE_PROJECT? I thought this would come from my app.yaml but haven't gotten it working.
You should configure it with your project ID:
gcloud config set project <PROJECT_ID>
You can find the project ID in your GCP account.
You should be able to pass the project as part of the app deploy command:
gcloud app deploy ~/my_app/app.yaml --project=PROJECT
Alternatively, to set it only for a single invocation via the environment variable mentioned in the error message:
CLOUDSDK_CORE_PROJECT=PROJECT gcloud app deploy ~/my_app/app.yaml
Look at the examples in the Documentation.
I had the same error message recently in a job which worked fine before.
Old:
# Deploy template
.deploy_template: &deploy_definition
  image: google/cloud-sdk
  stage: deploy
  script:
    - echo $GOOGLE_KEY > key.json
    - gcloud auth activate-service-account --key-file key.json
    - gcloud config set compute/zone $GC_REGION
    - gcloud config set project $GC_PROJECT
    - gcloud container clusters get-credentials $GC_XXX_CLUSTER
    - kubectl set image deployment/XXXXXX --record
New:
# Deploy template
.deploy_template: &deploy_definition
  image: google/cloud-sdk
  stage: deploy
  script:
    - echo $GOOGLE_KEY > key.json
    - gcloud auth activate-service-account --key-file key.json
    - gcloud config set project $GC_PROJECT
    - gcloud config set compute/zone $GC_REGION
    - gcloud container clusters get-credentials $GC_XXX_CLUSTER
    - kubectl set image deployment/XXXXXX --record
I just changed the order of project and compute/zone, and it worked like before.

AWS Code Pipeline deploy results in a 404

I am trying to create a pipeline for an existing application. It is a React/Java Spring Boot application. It usually gets bundled into a single war file and uploaded to ElasticBeanstalk. I created my codebuild project and when I run it manually it will generate a war file that I can then upload to ElasticBeanstalk and everything works correctly. The buildspec for that is below:
version: 0.2

phases:
  install:
    commands:
      - echo Nothing to do in the install phase...
  pre_build:
    commands:
      - echo Nothing to do in the pre_build phase...
  build:
    commands:
      - echo Build started on `date`
      - mvn -Pprod package -X
  post_build:
    commands:
      - echo Build completed on `date`
      - mv target/cogcincinnati-0.0.1-SNAPSHOT.war cogcincinnati-0.0.1-SNAPSHOT.war
artifacts:
  files:
    - cogcincinnati-0.0.1-SNAPSHOT.war
When I run this build step in my pipeline it generates a zip file that gets dropped onto S3. My deploy step takes that build artifact and sends it to ElasticBeanstalk. Elasticbeanstalk does not give me any errors, but when I navigate to my url, I get a 404.
I have tried uploading the zip directly to Elasticbeanstalk and I get the same result. I have unzipped the file and it does appear to have all of my project files.
When I look at the server logs, I do not see any errors. I don't understand why CodeBuild appears to generate a war file when I run it manually, but a zip when executed in CodePipeline.
Change the artifact's war file name to ROOT.war; this will resolve your problem. Your application is actually deployed successfully, but on a different path. This is built-in Tomcat behavior: naming the war ROOT.war makes Tomcat serve the application at '/'.
So the updated buildspec.yml will be:
version: 0.2

phases:
  install:
    commands:
      - echo Nothing to do in the install phase...
  pre_build:
    commands:
      - echo Nothing to do in the pre_build phase...
  build:
    commands:
      - echo Build started on `date`
      - mvn -Pprod package -X
  post_build:
    commands:
      - echo Build completed on `date`
      - mv target/cogcincinnati-0.0.1-SNAPSHOT.war ROOT.war
artifacts:
  files:
    - ROOT.war
It seems your application is failing; you should review the logs from the Beanstalk environment, especially:
"tomcat8/catalina.out"
"tomcat8/catalina.[date].log"
[1] Viewing logs from Amazon EC2 instances in your Elastic Beanstalk environment - https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.logging.html
For more details about using Tomcat platform on EB environment, you can refer to this document:
- Using the Elastic Beanstalk Tomcat platform - https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/java-tomcat-platform.html
About the folder structuring in your project, please refer to this document:
- Structuring your project folder - https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/java-tomcat-platform-directorystructure.html
Try adding discard-paths: yes at the end of the buildspec.yml file. That will help you resolve the path error.

How to deploy Spring boot application with GitLab serverless?

I configured a Google Cloud demo project and created a cluster for it in the GitLab Serverless settings for a Hello world Spring Boot application. The only information I find on deploying applications is https://docs.gitlab.com/ee/user/project/clusters/serverless/#deploying-serverless-applications which might explain how to deploy a Ruby application only. I'm not sure about that, because none of the variables used in the script are explained, and the hint
Note: You can reference the sample Knative Ruby App to get started.
is somewhat confusing because I'm not familiar with building Ruby applications, so that won't get me started.
Following the instruction to put
stages:
  - build
  - deploy

build:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  only:
    - master
  script:
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination $CI_REGISTRY_IMAGE

deploy:
  stage: deploy
  image: gcr.io/triggermesh/tm@sha256:e3ee74db94d215bd297738d93577481f3e4db38013326c90d57f873df7ab41d5
  only:
    - master
  environment: production
  script:
    - echo "$CI_REGISTRY_IMAGE"
    - tm -n "$KUBE_NAMESPACE" --config "$KUBECONFIG" deploy service "$CI_PROJECT_NAME" --from-image "$CI_REGISTRY_IMAGE" --wait
in .gitlab-ci.yml causes the deploy stage to fail due to
$ tm -n "$KUBE_NAMESPACE" --config "$KUBECONFIG" deploy service "$CI_PROJECT_NAME" --from-image "$CI_REGISTRY_IMAGE" --wait
2019/02/09 11:08:09 stat /root/.kube/config: no such file or directory, falling back to in-cluster configuration
2019/02/09 11:08:09 Can't read config file
ERROR: Job failed: exit code 1
My Dockerfile which allows to build locally looks as follows:
FROM maven:3-jdk-11
COPY . .
RUN mvn --batch-mode --update-snapshots install
EXPOSE 8080
CMD java -jar target/hello-world-0.1-SNAPSHOT.jar
(the version in the filename doesn't make sense for further deployment, but that's a follow-up problem).
The reason is a mismatch between the environment value specified in .gitlab-ci.yml and the GitLab Kubernetes configuration; see https://docs.gitlab.com/ee/user/project/clusters/#troubleshooting-missing-kubeconfig-or-kube_token for details.
