JHipster Liquibase not ready after choosing Oracle DB in dev & prod - Spring

I generated my app using JHipster and chose an Oracle database for both dev and prod. Then, in application-dev.yml, application-prod.yml and pom.xml, I set the username, the password and the name of my Oracle database. When I run mvnw I get this:
2022-04-01 02:36:55.530 WARN 3020 --- [on-rd-vs-task-1] t.j.c.liquibase.AsyncSpringLiquibase : Starting Liquibase asynchronously, your database might not be ready at startup!
Thank you in advance!

You are running Liquibase in async mode.
The goal of this message is to remind you that your application might have started while the database is not ready yet.
If you want the database to be ready once your application has started, you have to run Liquibase in sync mode.

JHipster generates the LiquibaseConfiguration class, and by default Liquibase starts asynchronously:
SpringLiquibase liquibase = SpringLiquibaseUtil.createAsyncSpringLiquibase(...)
and there is also commented-out code left there to start it in sync mode:
// If you don't want Liquibase to start asynchronously, substitute by this:
SpringLiquibase liquibase = SpringLiquibaseUtil.createSpringLiquibase(...)
You can comment out the async line and uncomment the sync one to run Liquibase in sync mode.
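For context, the toggle lives in the bean method of the generated LiquibaseConfiguration. A rough sketch of the result after the edit is shown below; the constructor arguments are deliberately left elided as (...) exactly as in the snippet above, since they differ between JHipster versions:
@Bean
public SpringLiquibase liquibase(/* executor, data sources and Liquibase properties injected by the generated code */) {
    // Default (async): Liquibase runs on a separate thread so the application can finish booting first
    // SpringLiquibase liquibase = SpringLiquibaseUtil.createAsyncSpringLiquibase(...);
    // Sync alternative: startup blocks until all change sets have been applied
    SpringLiquibase liquibase = SpringLiquibaseUtil.createSpringLiquibase(...);
    liquibase.setChangeLog("classpath:config/liquibase/master.xml");
    // ... the rest of the generated configuration stays unchanged
    return liquibase;
}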

Related

Flyway windows command line gives no output, at all

I have been using Flyway for a while and have managed to execute migrations successfully. For the past week or so, whenever I use Flyway the command executes successfully but I get no output at all!
This is not such a big issue for the migrate command, since I can check what has been done in the database from the flyway_schema_history table. It is quite frustrating when using the info command, however; previously I used it to preview what was about to happen.
I've tried running the command in as many ways as I can think of.
I've now tried three versions of Flyway too!
I am currently using flyway-9.7.0 on a Windows 10 computer. I am trying to execute my migration scripts against an Oracle 19c database.
I am executing these commands:
flyway migrate -url=jdbc:oracle:thin:#//<URL>:<PORT>/<DB_NAME> -user=<DB_USER> -password=<Password>
-locations=filesystem:C:\flyway-9.7.0\migrations -outputFile=C:\flyway-9.7.0\migrations\out.log
flyway info -url=jdbc:oracle:thin:#//<URL>:<PORT>/<DB_NAME> -user=<DB_USER> -password=<Password>
-outputFile=C:\flyway-9.7.0\migrations\out.log
Both of these commands execute successfully (apparently).
For the "migrate" command the database objects are created in the database, an entry is added to the schemas flyway_schema_history table and a log file is created as specified (but is empty)
For the "info" command the log file is again created (empty)
I get nothing written to the windows CMD window that I am executing the command in however!
Please, what can I do to see some feedback from my commands?

Dockerizing Spring Boot DB error: connection refused

A Spring Boot app runs fine from the console (A), but I get connection refused when it is run with docker run (B).
A. From the console it works:
java -Dspring.profiles.active=loc -jar app.war
B. Via docker run it fails:
docker run -e "SPRING_PROFILES_ACTIVE=loc" app
The Dockerfile's ENTRYPOINT is:
ENTRYPOINT java -jar $WDIR/app.war
Why do I get this error?
Thanks in advance.
Csaba
You need to check your application properties/YAML for network access. For example, if you have a database connection configured in your properties, you need to check that the database is reachable from the container. If the database runs in its own container, you need to access it via the container name; for external remote access you can explore Docker networking (docker network).
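A minimal sketch of the container-name approach, assuming the database also runs in a container (the network name, container names, image and JDBC URL below are made-up examples, not the asker's actual setup):
# Put both containers on a user-defined network so they can resolve each other by name
docker network create app-net
# The database container is reachable as "mydb" from other containers on app-net
docker run -d --name mydb --network app-net \
  -e MYSQL_ROOT_PASSWORD=secret -e MYSQL_DATABASE=mydbname mysql:8
# Run the Spring Boot app on the same network; the JDBC host must be the container
# name "mydb", not "localhost" ("localhost" inside the app container is the app container itself)
docker run --network app-net \
  -e "SPRING_PROFILES_ACTIVE=loc" \
  -e "SPRING_DATASOURCE_URL=jdbc:mysql://mydb:3306/mydbname" \
  app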

IntelliJ, Spring dev tools remote, Docker, error Unexpected 404 response uploading class files

I'm trying to use Spring Boot devtools (Spring Remote) to automatically upload recompiled class files to my Docker container.
I keep receiving
Unexpected 404 response uploading class files
This is my docker file:
FROM java:8
WORKDIR /first
ADD ./build/libs/first.jar /first/first.jar
EXPOSE 8080
RUN bash -c 'touch /first/first.jar'
ENTRYPOINT ["java","-Dspring.data.mongodb.uri=mongodb://mongodb/micros", "-Djava.security.egd", "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005","-jar", "first.jar"]
My run configuration and the error I'm receiving were shown in screenshots, which are not reproduced here.
As of Spring Boot 1.5.0, devtools defaults were changed to exclude the devtools from fat jars.
If you want to include them, you have to set the excludeDevtools flag to false.
However, the devtools documentation doesn't explain how to do this. The necessary documentation is actually in the spring-boot-gradle-plugin documentation.
To do it, you can put this snippet of code in your build.gradle file:
bootRepackage {
    excludeDevtools = false
}
Unfortunately, this was buggy at first and had no effect in Spring Boot 1.5.0. The workaround was to do this instead:
springBoot {
    excludeDevtools = false
}
However, I have verified that the bootRepackage approach works for Spring Boot 1.5.8.
I ran into the same issue while using docker-compose to compose my application (a web service + a Redis server + a Mongo server).
As the Spring developer tools documentation points out: "Developer tools are automatically disabled when running a fully packaged application. If your application is launched using java -jar or if it’s started using a special classloader, then it is considered a “production application”."
I think that when we run the Spring web application inside a Docker container, the developer tools are disabled, so we can't restart it remotely.
Currently, I run my web application on the host machine and keep the Redis and Mongo servers inside containers, so I can restart the web app quickly when the code changes during development; a sketch of that setup follows.
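A minimal sketch of such a setup, assuming default ports (the answer does not show its actual compose file): only the backing services run in containers, and the app on the host reaches them via localhost and the published ports.
# docker-compose.yml (illustrative sketch, not the answer's actual file)
services:
  redis:
    image: redis
    ports:
      - "6379:6379"   # published so the app running on the host can use localhost:6379
  mongodb:
    image: mongo
    ports:
      - "27017:27017" # published so the app running on the host can use localhost:27017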
In my case I had to put the application's context path in the argument of the IDE's RemoteSpringApplication run configuration.
For example, my application's root context was /virtue, so I had to configure it like so:
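(The original answer showed this in a screenshot. Presumably the program argument is the remote URL including that context path, along the lines of the following, where the host and port are assumptions:)
http://localhost:8080/virtue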

Why is my spring.datasource configuration not being picked up as expected

I have a batch job which runs perfectly well in standalone mode. I converted it to a Spring XD batch job. I am using Spring XD version 1.0.0.M5.
Some issues I face:
(i) I do not want to use hsqldb as my spring.datasource; I want to switch to MySQL. In order to do so I updated the xd-config.yml file accordingly. It did not work. I also added a snippet (application.yml) with the relevant datasource information to my job config folder; that did not work either.
Setting the spring.datasource-related environment variables on the command line does work.
Q: Is there a way to have mysql picked up as the profile, so that the relevant metadata is read either from the application.yml snippet or the xd-config.yml snippet, without me having to set the environment variables manually?
The database configuration is still a work in progress. The goal for M6 is to have what you specify in xd-config.yml control both the Spring Batch repository tables and the default for your batch jobs that use JDBC.
In M5 there are separate settings to control this. The Spring Batch repository uses what is in config/xd-config.yml, while the batch jobs you launch depend on config/batch-jdbc.properties. To use MySQL for both, I changed:
config/xd-config.yml
# Config for use with MySQL - uncomment and edit with relevant values for your environment
spring:
  datasource:
    url: jdbc:mysql://localhost:3306/xd
    username: spring
    password: password
    driverClassName: com.mysql.jdbc.Driver
  profiles:
    active: default,mysql
config/batch-jdbc.properties
# Setting for the JDBC batch import job module
url=jdbc:mysql://localhost:3306/xd
username=spring
password=password
driverClass=com.mysql.jdbc.Driver
# Whether to initialize the database on job creation, and the script to
# run to do so if initializeDatabase is true.
initializeDatabase=false
initializerScript=init_batch_import.sql

Glassfish deploy command with createtables error

I have an application packaged as a .war file. I want to deploy this web application to a Glassfish v4.0 server using this command:
./asadmin deploy --force=true --createtables --contextroot test /tmp/test.war
Deployment without the --createtables parameter works fine; however, I want the tables to be generated/updated during the deployment. On my local server, where I have only one JDBC resource defined in Glassfish, it works fine. But on the test server there are several JDBC resources defined with limited privileges, plus one JDBC resource that I want to use just for this task. How do I tell Glassfish to use this particular JDBC resource when creating and updating tables?
Thank you
You need to provide {true|false} to the --createtables option:
./asadmin deploy --force=true --createtables=true --contextroot test /tmp/test.war
