I have a Spring Boot web application in which I want to run some commands on the command line. When I use the ProcessBuilder and Process classes, the ExecutorService is shut down after I run the process.
Method where I run the process:
public void runTestsInProject(String projectPath) {
    System.out.println("Starting runTestsInProject() ------");
    try {
        ProcessBuilder builder = new ProcessBuilder(
                "cmd.exe", "/c", "cd \"" + projectPath + "\" && mvn clean test");
        builder.redirectErrorStream(true);
        Process p = builder.start();
        BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()));
        String line;
        // Read (and discard) the combined output until the process closes its stream
        while ((line = r.readLine()) != null) {
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
}
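As an aside, the same method is easier to debug if Maven's output is forwarded and the exit code is surfaced; a minimal variant sketch (same command as in the question):

public int runTestsInProjectVerbose(String projectPath) throws IOException, InterruptedException {
    ProcessBuilder builder = new ProcessBuilder(
            "cmd.exe", "/c", "cd \"" + projectPath + "\" && mvn clean test");
    builder.redirectErrorStream(true); // merge stderr into stdout
    Process p = builder.start();
    try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
        String line;
        while ((line = r.readLine()) != null) {
            System.out.println(line); // forward Maven's output instead of discarding it
        }
    }
    return p.waitFor(); // 0 means the build (and the tests) succeeded
}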
Error log:
2020-07-27 20:33:20.246 INFO 7248 --- [ Thread-4] o.s.s.concurrent.ThreadPoolTaskExecutor : Shutting down ExecutorService 'applicationTaskExecutor'
2020-07-27 20:33:20.250 INFO 7248 --- [ Thread-4] j.LocalContainerEntityManagerFactoryBean : Closing JPA EntityManagerFactory for persistence unit 'default'
2020-07-27 20:33:20.254 INFO 7248 --- [ Thread-4] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Shutdown initiated...
2020-07-27 20:33:20.270 INFO 7248 --- [ Thread-4] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Shutdown completed.
Spring starts again ...
2020-07-27 20:33:29.778 INFO 7248 --- [nio-8080-exec-9] o.a.c.loader.WebappClassLoaderBase : Illegal access: this web application instance has been stopped already. Could not load [META-INF/services/javax.xml.bind.JAXBContext]. The following stack trace is thrown for debugging purposes as well as to attempt to terminate the thread which caused the illegal access.
java.lang.IllegalStateException: Illegal access: this web application instance has been stopped already. Could not load [META-INF/services/javax.xml.bind.JAXBContext]. The following stack trace is thrown for debugging purposes as well as to attempt to terminate the thread which caused the illegal access.
You have probably added spring-boot-devtools to your dependencies. DevTools restarts the application whenever it detects a change on the project's classpath.
The process you are running (mvn clean test) causes a change in the project's classpath, and hence your application restarts.
If you run ordinary processes that don't interfere with the project's classpath, you will not face the restart or executor-shutdown problem.
Look at this excerpt from the Spring DevTools documentation:
As DevTools monitors classpath resources, the only way to trigger a restart is to update the classpath. The way in which you cause the classpath to be updated depends on the IDE that you are using. In Eclipse, saving a modified file will cause the classpath to be updated and trigger a restart. In IntelliJ IDEA, building the project (Build -> Build Project) will have the same effect.
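If you need DevTools on the classpath but want to stop these restarts, you can also disable the restarter before the application starts; a minimal sketch, assuming a standard Spring Boot main class (the class name here is hypothetical):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class DemoApplication {

    public static void main(String[] args) {
        // Must be set before run(), so DevTools never installs its restart watcher
        System.setProperty("spring.devtools.restart.enabled", "false");
        SpringApplication.run(DemoApplication.class, args);
    }
}

Alternatively, the spring.devtools.restart.exclude property can keep specific paths from triggering a restart.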
1) The first problem is in the command. Use
ProcessBuilder builder = new ProcessBuilder(
        "cmd.exe", "/c", "c: && cd \"" + projectPath + "\" && mvn clean test");
instead of
ProcessBuilder builder = new ProcessBuilder(
        "cmd.exe", "/c", "cd \"" + projectPath + "\" && mvn clean test");
Windows cmd does not always start from the same location. If cmd starts on, say, the D: drive and your command is cd C:\smt\doc, the directory change does not take effect, because a plain cd does not switch drives. So the command needs to state at the beginning which drive to operate on (or use cd /d, which switches drive and directory in one step).
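A cleaner alternative is to skip the cd entirely and set the working directory on the ProcessBuilder itself, which sidesteps the drive problem; a short sketch using the same command:

ProcessBuilder builder = new ProcessBuilder("cmd.exe", "/c", "mvn clean test");
builder.directory(new File(projectPath)); // java.io.File; run directly in the target project
builder.redirectErrorStream(true);
Process p = builder.start();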
2) The second problem could be that the Maven instance used by the IDE to build and run the application is being shut down by the Maven command I launch through ProcessBuilder. So the solution is to build the application jar and run it from cmd (e.g. mvn package, then java -jar target/<app>.jar).
I have set up jOOQ code generation with the Liquibase database, and everything works well. I want to execute the Gradle generateJooq task in silent mode, but some Liquibase logs still show up anyway for some reason. Is there any property I can set to prevent them?
jOOQ setup:
jooqConfiguration.apply {
    logging = if (gradle.startParameter.logLevel == LogLevel.QUIET)
        org.jooq.meta.jaxb.Logging.WARN
    else
        org.jooq.meta.jaxb.Logging.INFO
    generator.apply {
        database.apply {
            name = "org.jooq.meta.extensions.liquibase.LiquibaseDatabase"
            properties = listOf(
                Property()
                    .withKey("scripts")
                    .withValue("/changelog/changelog-master.yaml")
            )
            ...
        }
        ...
    }
    ...
}
Logs that I still get while running gradle clean build -q:
02:25:43 INFO Set default schema name to PUBLIC
02:25:43 INFO Successfully acquired change log lock
02:25:43 INFO Creating database history table with name: PUBLIC.DATABASECHANGELOG
02:25:43 INFO Reading from PUBLIC.DATABASECHANGELOG
Running Changeset: /changelog/changelog-generated.yaml::empty::tpd
02:25:43 INFO Empty change did nothing
02:25:43 INFO ChangeSet /changelog/changelog-generated.yaml::empty::tpd ran successfully in 0ms
02:25:43 INFO Successfully released change log lock
Hello all, I have a Spring scheduler job which has to run on Google Cloud Run with a scheduled time gap.
It works perfectly fine with a local docker-compose deployment and gets triggered without any issue.
However, in the Google Cloud Run service, even with CPU throttling off (which keeps the CPU allocated 100% of the time), it stops working after the first run.
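For context, the job is presumably an ordinary Spring @Scheduled method; a minimal sketch of that shape (class name and delay are hypothetical, and @EnableScheduling is assumed on the application class):

import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class NotificationJob {

    // Runs on a background thread with no HTTP request involved, which is
    // exactly what makes this pattern fragile on Cloud Run (see the answer below)
    @Scheduled(fixedDelay = 60000)
    public void sendNotifications() {
        // ... query the database and send notifications ...
    }
}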
I will paste the Dockerfile for reference, but I am pretty sure it is fine:
FROM maven:3-jdk-11-slim AS build-env
# Set the working directory to /app
WORKDIR /app
COPY pom.xml ./
COPY src ./src
COPY css-common ./css-common
RUN echo $(ls -1 css-common/src/main/resources)
# Build and create the common jar
RUN cd css-common && mvn clean install
# Build the job
RUN mvn package -DskipTests
# It's important to use OpenJDK 8u191 or above that has container support enabled.
# https://docs.docker.com/develop/develop-images/multistage-build/#use-multi-stage-builds
FROM openjdk:11-jre-slim
# Copy the jar to the production image from the builder stage.
COPY --from=build-env /app/target/css-notification-job-*.jar /app.jar
# Run the web service on container startup
CMD ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
And below is the terraform script used for the deployment
resource "google_cloud_run_service" "job-staging" {
name = var.cloud_run_job_name
project = var.project
location = var.region
template {
spec {
containers {
image = "${var.docker_registry}/${var.project}/${var.cloud_run_job_name}:${var.docker_tag_notification_job}"
env {
name = "DB_HOST"
value = var.host
}
env {
name = "DB_PORT"
value = 3306
}
}
}
metadata {
annotations = {
"autoscaling.knative.dev/maxScale" = "4"
"run.googleapis.com/vpc-access-egress" = "all-traffic"
"run.googleapis.com/cpu-throttling" = false
}
}
}
timeouts {
update = "3m"
}
}
Something I noticed in the logs:
2022-01-04 00:19:39.177  INFO 1 --- [ionShutdownHook] j.LocalContainerEntityManagerFactoryBean : Closing JPA EntityManagerFactory for persistence unit 'default'
2022-01-04 00:19:39.181  INFO 1 --- [ionShutdownHook] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Shutdown initiated...
2022-01-04 00:19:39.193  INFO 1 --- [ionShutdownHook] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Shutdown completed.
It is shutting down the entity manager. I provided -Xmx1024m of heap memory to make sure it has enough.
Although the Google documentation says this should work, for some reason the scheduler is not getting triggered after that. Any help would be really appreciated.
TL;DR: Using the Spring Scheduler on Cloud Run is a bad idea. Prefer Cloud Scheduler instead.
In fact, you have to understand the lifecycle of a Cloud Run instance. First of all, CPU is allocated to the process ONLY while a request is being processed.
The immediate effect of that is that a background process, like a scheduler, can't work, because no CPU is allocated outside of request processing.
Except if you set CPU throttling to off. You did? Great, but there are other caveats!
An instance is created when a request comes in, and it lives for up to 15 minutes without any request processing. Then the instance is offloaded and you scale to 0.
Here again, the scheduler can't work if the instance is shut down. The solution is to set the minimum number of instances to 1 (the autoscaling.knative.dev/minScale annotation) AND CPU throttling to off, to keep one instance 100% up and let the scheduler do its job.
The final issue with Cloud Run is scalability. You set 4 in your Terraform; that means you can have up to 4 instances in parallel, and therefore 4 schedulers running in parallel, one on each instance. Is that really what you want? If not, you can set the max instances to 1 to limit the number of parallel instances to 1.
In the end, you have one instance, up full time, that can't scale up or down. Because it can't scale, I don't recommend performing the processing on that instance itself, but rather calling another API running on another Cloud Run service, which will be able to scale up and down according to the scheduler's requirements.
And so you will have only one scheduler, which performs API calls to other Cloud Run services to do the actual work. That's exactly the purpose of Cloud Scheduler.
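To make that concrete: instead of a @Scheduled method, expose the work as an HTTP endpoint and let Cloud Scheduler call it on a cron schedule. A minimal sketch, assuming Spring MVC (the path and class name are hypothetical; a real endpoint should also verify the caller, e.g. via a Cloud Scheduler OIDC token):

import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class NotificationTaskController {

    // Cloud Scheduler POSTs to this URL on a schedule, so the work happens
    // inside a request and Cloud Run allocates CPU for it.
    @PostMapping("/tasks/send-notifications")
    public ResponseEntity<Void> runJob() {
        // ... do the actual work, or fan out to other scalable services ...
        return ResponseEntity.noContent().build();
    }
}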
I am trying to run my spring-boot/liquibase/H2 database with a non-admin user and am having some problems understanding how to do this.
First off, I have seen some information here and tried to set up my application.yml this way:
datasource:
    type: com.zaxxer.hikari.HikariDataSource
    url: jdbc:h2:mem:test
    username: USERLIMITED
    password: user_limited_password
liquibase:
    contexts: dev, faker
    user: THELIQUIBASEUSER
    password: THELIQUIBASEPASSWORD
I also put these SQL statements in the changelog so that the user I want is created and given the proper access controls:
<sql>DROP USER IF EXISTS USERLIMITED</sql>
<sql>CREATE USER USERLIMITED PASSWORD 'user_limited_password'</sql>
<sql>GRANT ALL ON APP TO USERLIMITED</sql>
When trying to start up the app, I get the following error:
2020-10-21 14:41:18.532 DEBUG 8704 --- [ restartedMain] c.c.config.LiquibaseConfiguration : Configuring Liquibase
2020-10-21 14:41:18.617  WARN 8704 --- [ test-task-1] i.g.j.c.liquibase.AsyncSpringLiquibase : Starting Liquibase asynchronously, your database might not be ready at startup!
2020-10-21 14:41:20.226 ERROR 8704 --- [ restartedMain] com.zaxxer.hikari.pool.HikariPool : Hikari - Exception during pool initialization.
org.h2.jdbc.JdbcSQLInvalidAuthorizationSpecException: Wrong user name or password [28000-200]
What is interesting is that if I change the LiquibaseConfiguration file to use the synchronous DB configuration instead of the asynchronous default, I do not get an error.
// If you don't want Liquibase to start asynchronously, substitute by this:
SpringLiquibase liquibase = SpringLiquibaseUtil.createSpringLiquibase(liquibaseDataSource.getIfAvailable(), liquibaseProperties, dataSource.getIfUnique(), dataSourceProperties);
// SpringLiquibase liquibase = SpringLiquibaseUtil.createAsyncSpringLiquibase(this.env, executor, liquibaseDataSource.getIfAvailable(), liquibaseProperties, dataSource.getIfUnique(), dataSourceProperties);
Then if I go to the H2 console and run a query to see my users, I only have the one admin user (which is supposed to be a non-admin).
Trying to log in as the Liquibase user that I set up in the yml
user: THELIQUIBASEUSER
password: THELIQUIBASEPASSWORD
also fails with the Wrong user name or password [28000-200] error, so that user is not there either.
This leads me to believe it is something to do with how the application starts up and the priority of asynchronous task execution.
Any help is very much appreciated!
When I run the command mvn clean test -Dspring.profiles.active=GITLAB-CI-TEST in GitLab CI/CD, it does not load the properties file application-gitlab-ci-test.properties. It loads only application.properties.
As application-gitlab-ci-test.properties contains a different value for spring.datasource.url, the pipeline fails on the remote runners with the error:
The last packet sent successfully to the server was 0 milliseconds ago.
The driver has not received any packets from the server.
Of course, this error is expected, as application.properties refers to the localhost database.
The code that loads application-gitlab-ci-test.properties:
@Profile("GITLAB-CI-TEST")
@PropertySource("classpath:application-gitlab-ci-test.properties")
@Configuration
public class GitLabCiTestProfile {
}
When I try to run the same command locally, it works as expected, and in the logs I see the following records:
2020-03-30 19:23:00.609 DEBUG 604 --- [ main] o.s.b.c.c.ConfigFileApplicationListener : Loaded config file 'file:/G:/****/****/****/****/target/classes/application.properties' (classpath:/application.properties)
2020-03-30 19:23:00.609 DEBUG 604 --- [ main] o.s.b.c.c.ConfigFileApplicationListener : Loaded config file 'file:/G:/****/****/****/****/target/classes/application-GITLAB-CI-TEST.properties' (classpath:/application-GITLAB-CI-TEST.properties) for profile GITLAB-CI-TEST
I noticed that the remote runners are missing the second line, the one loading application-GITLAB-CI-TEST.properties.
I also tried mvn clean test --batch-mode -PGITLAB-CI-TEST, and this one too fails on the remote host while working as expected locally.
I found a workaround for this issue by using the command:
mvn clean test --batch-mode -Dspring.datasource.url=jdbc:mysql://mysql-db:3306/*******?useSSL=false&allowPublicKeyRetrieval=true
Can you please help me solve this issue, as this workaround does not satisfy me?
I found the solution to this issue.
I changed the name of the profile from upper case (GITLAB-CI-TEST) to lower case (gitlab-ci-test) to match the lower-case profile name in the properties file name, application-gitlab-ci-test.properties.
Now in the remote runner, I'm using the following command:
mvn clean test -Dspring.profiles.active=gitlab-ci-test
Spring doc - link
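For completeness, the configuration class from the question then becomes (only the case of the profile name changes):

import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import org.springframework.context.annotation.PropertySource;

@Profile("gitlab-ci-test")
@PropertySource("classpath:application-gitlab-ci-test.properties")
@Configuration
public class GitLabCiTestProfile {
}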
I am having trouble setting up my new dev environment.
I was working with Flyway on a simple web app. The process had worked well until now. I have a new work environment, and I used pg_dump and psql to restore the database from the qualification environment to get back a good set of data.
Even though my public.schema_version table in my local environment is properly backed up (with all the rows covering previous migrations), my server won't start and keeps saying this:
2017-11-27 15:48:55.476 INFO 12857 --- [ main] o.f.c.i.dbsupport.DbSupportFactory : Database: jdbc:postgresql://localhost:5432/volt (PostgreSQL 9.4)
2017-11-27 15:48:55.572 INFO 12857 --- [ main] o.f.core.internal.command.DbValidate : Successfully validated 110 migrations (execution time 00:00.050s)
2017-11-27 15:48:55.584 INFO 12857 --- [ main] o.f.core.internal.command.DbMigrate : Current version of schema "public": << Empty Schema >>
2017-11-27 15:48:55.587 INFO 12857 --- [ main] o.f.core.internal.command.DbMigrate : Migrating schema "public" to version 1.2 - Changing object type report
2017-11-27 15:48:55.595 ERROR 12857 --- [ main] o.f.core.internal.command.DbMigrate : Migration of schema "public" to version 1.2 - Changing object type report failed! Changes successfully rolled back.
Here 1.2 is the first script that I created. And if I look at my local database, I have all my Flyway rows with the success column set to true (including the 1.2 one).
Does Flyway keep the current version anywhere other than in the schema_version table?
How do I tell Flyway that my schema is up to date regarding the migrations?
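For the record, Flyway keeps its state only in that table. If you ever do need to mark an existing schema as already up to date, the baseline command is the intended tool; a minimal sketch with the Flyway 4.x Java API (URL, credentials, and version here are hypothetical):

import org.flywaydb.core.Flyway;

Flyway flyway = new Flyway();
flyway.setDataSource("jdbc:postgresql://localhost:5432/volt", "user", "password");
flyway.setBaselineVersionAsString("110"); // migrations at or below this version are treated as applied
flyway.baseline();                        // writes a baseline row into schema_version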
PS: I'm using a Spring Boot environment with only the flyway-core dependency in my pom.xml and these lines in my Spring Boot properties file:
flyway:
    baseline-on-migrate: true
I found out that I had used a user that exists only in qualification to restore my database, so the user in my development environment was not on the right schema.
I re-created my database with the proper user, and everything worked again.