Spring Boot app in Docker container not starting in Cloud Run after building successfully - cannot access jarfile

I've set up continuous deployment to Cloud Run from GitHub for my Spring Boot project, and while it's successfully building in Cloud Build, when I go over to Cloud Run, I get the following error under Creating Revision:
The user-provided container failed to start and listen on the port defined by the PORT=8080 environment variable.
When I go over to the Logs, I see the following errors:
2022-09-23 09:42:47.881 BST
Error: Unable to access jarfile /app/target/educity-manager-0.0.1-SNAPSHOT.jar
{
insertId: "632d7187000d739d29eb84ad"
labels: {5}
logName: "projects/educity-manager/logs/run.googleapis.com%2Fstderr"
receiveTimestamp: "2022-09-23T08:42:47.883252595Z"
resource: {2}
textPayload: "Error: Unable to access jarfile /app/target/educity-manager-0.0.1-SNAPSHOT.jar"
timestamp: "2022-09-23T08:42:47.881565Z"
}
2022-09-23 09:43:48.800 BST
run.googleapis.com
…ager/revisions/educity-manager-00011-fod
Ready condition status changed to False for Revision educity-manager-00011-fod with message: Deploying Revision.
{
insertId: "w6ptr6d20ve"
logName: "projects/educity-manager/logs/cloudaudit.googleapis.com%2Fsystem_event"
protoPayload: {
@type: "type.googleapis.com/google.cloud.audit.AuditLog"
resourceName: "namespaces/educity-manager/revisions/educity-manager-00011-fod"
response: {6}
serviceName: "run.googleapis.com"
status: {2}}
receiveTimestamp: "2022-09-23T08:43:49.631015104Z"
resource: {2}
severity: "ERROR"
timestamp: "2022-09-23T08:43:48.800371Z"
}
Dockerfile is as follows (and looking at the build log all of the commands in it completed successfully):
FROM openjdk:17-jdk-alpine
RUN addgroup -S spring && adduser -S spring -G spring
USER spring:spring
COPY . /app
ENTRYPOINT [ "java","-jar","/app/target/educity-manager-0.0.1-SNAPSHOT.jar" ]
I've read that Cloud Run defaults to exposing Port 8080, but just to be on the safe side I've put server.port=${PORT:8080} in my application.properties file (but it seems to make no difference one way or the other).

I have run into similar issues in the past. Usually I am able to resolve them by:
specifying the port in the application itself (as you indicated in your post), and
exposing the required port in my Dockerfile, e.g. EXPOSE 8080 (see the sketch below).
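A minimal sketch of those two tweaks, reusing the jar path and port values from the question:

# application.properties - bind to the port Cloud Run injects, defaulting to 8080
server.port=${PORT:8080}

# Dockerfile - document the port the container listens on
FROM openjdk:17-jdk-alpine
RUN addgroup -S spring && adduser -S spring -G spring
USER spring:spring
COPY . /app
EXPOSE 8080
ENTRYPOINT [ "java","-jar","/app/target/educity-manager-0.0.1-SNAPSHOT.jar" ]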

Oh my good god, I have done it. After two full days of digging, I realised that because I was deploying through GitHub, my .gitignore file was excluding the /target folder containing the jar file, so Cloud Build never got the jar file mentioned in the Dockerfile.
I am going to have a cry and then go to the pub.
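For anyone hitting the same thing: one way to avoid committing /target at all is to build the jar inside the image with a multi-stage Dockerfile, so Cloud Build only needs the sources. A sketch, assuming a standard Maven layout (the Maven base image tag is an assumption; the jar name is the one from the question):

# stage 1: build the jar from sources, so /target never has to be in git
FROM maven:3.8-openjdk-17 AS build
WORKDIR /build
COPY pom.xml .
COPY src ./src
RUN mvn -q package -DskipTests

# stage 2: slim runtime image, same as the original Dockerfile
FROM openjdk:17-jdk-alpine
RUN addgroup -S spring && adduser -S spring -G spring
USER spring:spring
COPY --from=build /build/target/educity-manager-0.0.1-SNAPSHOT.jar /app/app.jar
ENTRYPOINT [ "java","-jar","/app/app.jar" ]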

Related

Service "elasticsearch" failed to build:Invalid reference format

Project Screenshot
I was working on a project in which I had to use Docker, Elasticsearch, etc. I installed all the necessary packages, mounted my GitHub repo, and built it, and then this error popped up: Service "elasticsearch" failed to build: invalid reference format.
The ELK_VERSION argument is not passed into the build context.
There is also a warning in the build output that mentions this for you. Your compose file needs to look like this:
version: "3.8"
services:
  elasticsearch:
    build:
      args:
        ELK_VERSION: "1.2.3"
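The build argument only has an effect if the elasticsearch service's Dockerfile declares it; a minimal sketch of what that Dockerfile typically looks like in a docker-elk-style setup (the image path is the standard Elastic registry, the rest is an assumption):

# Dockerfile for the elasticsearch service - ELK_VERSION comes in via build args
ARG ELK_VERSION
FROM docker.elastic.co/elasticsearch/elasticsearch:${ELK_VERSION}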

GitLab CI CD runner not loading properties file for profile

When I run the command mvn clean test -Dspring.profiles.active=GITLAB-CI-TEST in GitLab CI/CD, it does not load the properties file application-gitlab-ci-test.properties; it loads only application.properties.
As application-gitlab-ci-test.properties contains a different value for spring.datasource.url, the pipeline fails on the remote runners with the error
The last packet sent successfully to the server was 0 milliseconds ago.
The driver has not received any packets from the server.
Of course, this error is expected, as application.properties refers to the localhost database.
The code that loads application-gitlab-ci-test.properties:
@Profile("GITLAB-CI-TEST")
@PropertySource("classpath:application-gitlab-ci-test.properties")
@Configuration
public class GitLabCiTestProfile {
}
When I try to run the same command locally, it works as expected, and in the logs I see the following records:
2020-03-30 19:23:00.609 DEBUG 604 --- [ main]
o.s.b.c.c.ConfigFileApplicationListener : Loaded config file
'file:/G:/****/****/****/****/target/classes/application.properties'
(classpath:/application.properties)
2020-03-30 19:23:00.609 DEBUG 604 --- [ main]
o.s.b.c.c.ConfigFileApplicationListener : Loaded config file
'file:/G:/****/****/****/****/target/classes/application-GITLAB-CI-TEST.properties' (classpath:/application-GITLAB-CI-TEST.properties) for profile
GITLAB-CI-TEST
I noticed that the remote runners are missing the second line, the one that loads application-GITLAB-CI-TEST.properties.
I also tried mvn clean test --batch-mode -PGITLAB-CI-TEST, and this one also fails on the remote host but works as expected locally.
I found a workaround for this issue by using the command
mvn clean test --batch-mode -Dspring.datasource.url=jdbc:mysql://mysql-db:3306/*******?useSSL=false&allowPublicKeyRetrieval=true
Can you please help me solve this issue, as this workaround does not satisfy me?
I found the solution to this issue.
I changed the name of the profile from upper case (GITLAB-CI-TEST) to lower case (gitlab-ci-test), to match the lower-case profile name in the properties file, application-gitlab-ci-test.properties.
Now in the remote runner, I'm using the following command:
mvn clean test -Dspring.profiles.active=gitlab-ci-test
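For completeness, a minimal sketch of the same configuration class from the question after the rename:

@Profile("gitlab-ci-test")
@PropertySource("classpath:application-gitlab-ci-test.properties")
@Configuration
public class GitLabCiTestProfile {
}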
Spring doc - link

Envoy and Evans cli running issue

I'm writing an Envoy control plane based on
https://github.com/envoyproxy/go-control-plane
and trying to use the Evans CLI for debugging.
I'm running into an issue and can't make it work with the Envoy data plane.
I've downloaded the data-plane API:
https://github.com/envoyproxy/data-plane-api
Running Evans:
evans -p 5678 envoy/api/v2/*.proto
evans: failed to run REPL mode: failed to instantiate a new spec: failed to instantiate the spec from proto files: envoy/api/v2/core/http_uri.proto:11:8: open validate/validate.proto: no such file or directory
OK, so I installed https://github.com/envoyproxy/protoc-gen-validate
and ran again:
evans -p 5678 --path $GOPATH/src/github.com/envoyproxy/protoc-gen-validate envoy/api/v2/*.proto
evans: failed to run REPL mode: failed to instantiate a new spec: failed to instantiate the spec from proto files: envoy/api/v2/discovery.proto:12:8: open google/rpc/status.proto: no such file or directory
Is there a right way to use the data-plane API?
And do you know a correct way to generate *.go files with protoc from the Envoy data-plane *.proto files?
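The second error points at google/rpc/status.proto, which lives in the googleapis repository, so that repository also has to be on the proto import path in addition to protoc-gen-validate. A hedged sketch of what that could look like (the clone location is just an example, and it assumes Evans accepts the --path flag more than once):

# fetch the repo that provides google/rpc/status.proto (location is arbitrary)
git clone https://github.com/googleapis/googleapis $HOME/protos/googleapis

# point Evans at every directory that contributes imports
evans -p 5678 \
  --path $GOPATH/src/github.com/envoyproxy/protoc-gen-validate \
  --path $HOME/protos/googleapis \
  envoy/api/v2/*.proto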

Executing maven unit tests on a Google Cloud SQL environment

I have a Jenkins pod running in GCP's Kubernetes Engine, and I'm trying to run a Maven unit test that connects to a Google Cloud SQL database. My application.yaml for my project looks like this:
spring:
  cloud:
    gcp:
      project-id: <my_project_id>
      sql:
        database-name: <my_database_name>
        instance-connection-name: <my_instance_connection_name>
  jpa:
    database-platform: org.hibernate.dialect.MySQL55Dialect
    hibernate:
      ddl-auto: create-drop
  datasource:
    continue-on-error: true
    driver-class-name: com.mysql.cj.jdbc.Driver
    username: <my_cloud_sql_username>
    password: <my_cloud_sql_password>
The current Jenkinsfile associated with this project is:
pipeline {
    agent any
    tools {
        maven 'Maven 3.5.2'
        jdk 'jdk8'
    }
    environment {
        IMAGE = readMavenPom().getArtifactId()
        VERSION = readMavenPom().getVersion()
        DEV_DB_USER = "${env.DEV_DB_USER}"
        DEV_DB_PASSWORD = "${env.DEV_DB_PASSWORD}"
    }
    stages {
        stage('Build docker image') {
            steps {
                sh 'mvn -Dmaven.test.skip=true clean package'
                script {
                    docker.build '$IMAGE:$VERSION'
                }
            }
        }
        stage('Run unit tests') {
            steps {
                withEnv(['GCLOUD_PATH=/var/jenkins_home/google-cloud-sdk/bin']) {
                    withCredentials([file(credentialsId: 'key-sa', variable: 'GC_KEY')]) {
                        sh("gcloud auth activate-service-account --key-file=${GC_KEY}")
                        sh("gcloud container clusters get-credentials <cluster_name> --zone northamerica-northeast1-a --project <project_id>")
                        sh 'mvn test'
                    }
                }
            }
        }
    }
}
My problem is that when the pipeline actually tries to run mvn test using the above configuration (in my application.yaml), I get this error:
Caused by:
com.google.api.client.googleapis.json.GoogleJsonResponseException: 403
Forbidden
{
"code" : 403,
"errors" : [ {
"domain" : "global",
"message" : "Insufficient Permission: Request had insufficient authentication scopes.",
"reason" : "insufficientPermissions"
} ],
"message" : "Insufficient Permission: Request had insufficient authentication scopes."
}
I have two Google Cloud projects:
One that has the Kubernetes Cluster where the Jenkins pod is running.
Another project where the K8s Cluster contains my actual Spring Boot Application and the Cloud SQL database that I'm trying to access.
I also created the service account only in my Spring Boot Project for Jenkins to use with three roles: Cloud SQL Editor, Kubernetes Engine Cluster Admin and Project owner (to verify that the service account is not at fault).
I enabled the Cloud SQL, Cloud SQL Admin and Kubernetes APIs in both projects, and I double-checked my Cloud SQL credentials and they are OK. In addition, I authenticated the Jenkins pipeline using the JSON file generated when I created the service account, following the recommendations discussed here:
Jenkinsfile (extract):
...
withCredentials([file(credentialsId: 'key-sa', variable: 'GC_KEY')]) {
    sh("gcloud auth activate-service-account --key-file=${GC_KEY}")
    sh("gcloud container clusters get-credentials <cluster_name> --zone northamerica-northeast1-a --project <project_id>")
    sh 'mvn test'
}
...
I don't believe the GCP Java SDK relies on the gcloud CLI at all. Instead, it looks for the environment variable GOOGLE_APPLICATION_CREDENTIALS, which points to your service account key file, and for GCLOUD_PROJECT (see https://cloud.google.com/docs/authentication/getting-started).
Try adding the following:
sh("export GOOGLE_APPLICATION_CREDENTIALS=${GC_KEY}")
sh("export GCLOUD_PROJECT=<project_id>")
There are a couple of different things you should verify to get this working. I'm assuming you are using the Cloud SQL JDBC SocketFactory for Cloud SQL.
You should create a testing service account and give it whatever permissions are needed to execute the tests. To connect to Cloud SQL, it needs at a minimum the "Cloud SQL Client" role for the same project as the Cloud SQL instance.
The Cloud SQL SocketFactory uses the Application Default Credentials (ADC) strategy for determining what authentication to use. This means the first place it looks for credentials is the GOOGLE_APPLICATION_CREDENTIALS env var, which should be a path to the key for the testing service account.
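Putting those two hints together, a sketch of how the test stage could expose the key to Application Default Credentials in the Jenkinsfile above; the variable names are the standard ADC ones, and <project_id> is the asker's placeholder:

stage('Run unit tests') {
    steps {
        withCredentials([file(credentialsId: 'key-sa', variable: 'GC_KEY')]) {
            // keep the variables set for every sh call inside this block
            withEnv(["GOOGLE_APPLICATION_CREDENTIALS=${GC_KEY}",
                     'GCLOUD_PROJECT=<project_id>']) {
                sh 'mvn test'
            }
        }
    }
}

withEnv is used here rather than export because each sh step runs in its own shell, so an export in one sh call does not carry over to the next.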

Staging Error while Pushing a Spring Application to Cloud Foundry

I am getting the following error while pushing a sample Hello World Spring application to Cloud Foundry.
Using manifest file C:\Users\I321571\Desktop\helo\Hello\manifest.yml
Updating app Hello in org trial / space I321571 as I321571...
OK
Uploading Hello...
Uploading app files from: C:\Users\I321571\Desktop\helo\Hello
Uploading 20.1K, 46 files
Done uploading
OK
Stopping app Hello in org trial / space I321571 as I321571...
OK
Starting app Hello in org trial / space I321571 as I321571...
-----> Downloaded app package (12K)
Cloning into '/tmp/buildpacks/java-buildpack'...
-----> Java Buildpack Version: b050954 | https://github.com/cloudfoundry/java-buildpack.git#b050954
[Buildpack] ERROR Compile failed with exception #<RuntimeError: No container can run this application. Please ensure that you've pushed a valid JVM artifact or artifacts using the -p command line argument or path manifest entry. Information about valid JVM artifacts can be found at https://github.com/cloudfoundry/java-buildpack#additional-documentation. >
No container can run this application. Please ensure that you've pushed a valid JVM artifact or artifacts using the -p command line argument or path manifest entry. Information about valid JVM artifacts can be found at https://github.com/cloudfoundry/java-buildpack#additional-documentation.
Staging failed: Buildpack compilation step failed
FAILED
Error restarting application: BuildpackCompileFailed
TIP: use 'cf logs Hello --recent' for more information
This is my manifest.yml:
applications:
- name: Hello
  memory: 512M
  instances: 1
Please help me in resolving the issue.
I encountered this error too!
Make sure the command you run is valid:
cf push {your-app-name} -p {path to your executable jar}
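The same thing can also be expressed in the manifest through the path attribute that the error message mentions; a minimal sketch, assuming the jar is built under target/ (the jar file name here is only an example):

applications:
- name: Hello
  memory: 512M
  instances: 1
  path: target/Hello-0.0.1-SNAPSHOT.jar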
