I have a batch job which runs perfectly well in standalone mode. I converted it to a Spring XD batch job. I am using Spring XD version 1.0.0.M5.
Some issues I face:
(i) I do not want to use HSQLDB as my spring.datasource; I want to switch to MySQL. To do so, I updated the xd-config.yml file accordingly. It did not work. Adding a snippet (application.yml) with the relevant datasource information to my job config folder did not work either.
When I set the spring.datasource-related environment variables on the command line, it works.
Q: Is there a way to have mysql picked up as the profile, so that the relevant metadata is read from either the application.yml snippet or the xd-config.yml snippet, without my having to set the environment variables manually?
The database configuration is still a work in progress. The goal for M6 is for what you specify in xd-config.yml to control both the Spring Batch repository tables and the defaults for your batch jobs that use JDBC.
In M5 there are separate settings to control this. The Spring Batch repository uses what is in config/xd-config.yml while the batch jobs you launch depend on config/batch-jdbc.properties. To use MySQL for both I changed:
config/xd-config.yml
#Config for use with MySQL - uncomment and edit with relevant values for your environment
spring:
  datasource:
    url: jdbc:mysql://localhost:3306/xd
    username: spring
    password: password
    driverClassName: com.mysql.jdbc.Driver
  profiles:
    active: default,mysql
config/batch-jdbc.properties
# Setting for the JDBC batch import job module
url=jdbc:mysql://localhost:3306/xd
username=spring
password=password
driverClass=com.mysql.jdbc.Driver
# Whether to initialize the database on job creation, and the script to
# run to do so if initializeDatabase is true.
initializeDatabase=false
initializerScript=init_batch_import.sql
Is there any way to locate, copy, or manipulate task execution logs in SCDF, running locally?
Whenever I execute a (batch or non-batch) task, I currently see logs in the command line of the shell where I started the Data Flow server locally. On both CentOS 7 and Windows 10, it says that the stdout/stderr logs are located inside
/tmp (temp in windows)/[SOME_NUMBER_I_DON'T_KNOW]/${task_name_i_defined}_${SOME_HEX_CODE_RELATED_TO_TASK_EXECUTION_ID}
I want to use that information whenever I need it.
Passing properties to the Data Flow jar doesn't work. It just creates a single file and overwrites it on each task execution, instead of storing each task execution in a different folder.
Modifying properties like logging.file.path in the task launch configuration doesn't work either. Only the task's stdout is produced, under the name 'spring.log', at the specific location I designated. The behavior is the same as in the case above.
Spring Cloud Data Flow Task logs
I looked at this answer, but it does not work either...
I know there are a lot of parameters that I could pass to Data Flow or to tasks, but I don't think any of them satisfies this requirement. Please enlighten me.
The only configuration property available to affect the log location is the working-directories-root deployer property.
Because it is a deployer property, it cannot simply be set as spring.cloud.deployer.local.working-directories-root.
It can be set at task launch time, prefixed with deployer.*.local (details); see the shell example below.
It can also be configured globally via the "task platform" properties (details).
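For example, the launch-time form from the Data Flow shell could look like this (the task name "mytask" is only an illustration):
task launch mytask --properties "deployer.mytask.local.working-directories-root=/Users/foo/logz"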
When configured at the platform level, it can be done in yml:
spring:
  cloud:
    dataflow:
      task:
        platform:
          local:
            accounts:
              default:
                working-directories-root: /Users/foo/logz
or via an env var:
SPRING_CLOUD_DATAFLOW_TASK_PLATFORM_LOCAL_ACCOUNTS_DEFAULT_WORKING_DIRECTORIES_ROOT=/Users/foo/logz
Details
The STDOUT log location is created at <task-work-dir>/stdout.log (details).
The <task-work-dir> is defined as:
<working-directories-root> / System.nanoTime() / <taskLaunchId>
(details)
The <working-directories-root> is the value of the working-directories-root local deployer property, or the "java.io.tmpdir" system property when the local deployer property is not specified.
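Putting those pieces together with the yml above, a launch's stdout would end up at a path shaped roughly like the following (all values here are hypothetical; the last directory corresponds to the <taskLaunchId>):
/Users/foo/logz/1234567890123456/mytask_0123456789abcdef/stdout.log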
I generated my app using JHipster and chose an Oracle database for dev and prod. Then, in application-dev.yml, application-prod.yml, and pom.xml, I set the username, the password, and the name of my Oracle database. When I run mvnw, I get this:
2022-04-01 02:36:55.530 WARN 3020 --- [on-rd-vs-task-1] t.j.c.liquibase.AsyncSpringLiquibase : Starting Liquibase asynchronously, your database might not be ready at startup!
Thank you in advance!
You are using Liquibase in async mode.
The goal of this message is to remind you that your application might have started while the database is not ready yet.
If you want your database to be ready once your application has started, you have to run Liquibase in sync mode.
JHipster generates the LiquibaseConfiguration, and by default Liquibase starts asynchronously:
SpringLiquibase liquibase = SpringLiquibaseUtil.createAsyncSpringLiquibase(...)
and there is also commented-out code left there to start it in sync mode:
// If you don't want Liquibase to start asynchronously, substitute by this:
SpringLiquibase liquibase = SpringLiquibaseUtil.createSpringLiquibase(...)
You can comment out the async code and uncomment the sync one to run Liquibase in sync mode.
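As a minimal sketch (not JHipster's exact generated code), a plain synchronous SpringLiquibase bean would look roughly like this; the changelog path follows JHipster's usual layout and is an assumption:

import javax.sql.DataSource;
import liquibase.integration.spring.SpringLiquibase;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class LiquibaseConfiguration {

    // Synchronous Liquibase: startup blocks until the changelog has been applied,
    // so the database schema is ready by the time the application is up.
    @Bean
    public SpringLiquibase liquibase(DataSource dataSource) {
        SpringLiquibase liquibase = new SpringLiquibase();
        liquibase.setDataSource(dataSource);
        // Assumed changelog location (JHipster's default layout)
        liquibase.setChangeLog("classpath:config/liquibase/master.xml");
        return liquibase;
    }
}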
I need, for performance reasons, to get rid of the org.jooq.tools.LoggerListener DEBUG log messages in a Spring Boot application running inside Docker. None of the Spring Boot options, such as the (Docker) env variable LOGGING_LEVEL_ORG_JOOQ=INFO in docker-compose.yml or the Java system property -Dlogging.level.org.jooq=INFO passed to the Docker container in entry.sh, remove these DEBUG messages reporting query execution details. Both options have been verified at the Docker container level.
Even a custom logback-perf.xml config file, as in https://github.com/jOOQ/jOOQ/blob/master/jOOQ-examples/jOOQ-spring-boot-example/src/main/resources/logback.xml with DEBUG->INFO, pointed to by the LOGGING_CONFIG env var from docker-compose.yml, does not prevent these debug messages. I have verified that the custom logback-perf.xml config file is in use by changing the appender patterns.
The best way to remove those messages in jOOQ directly is to specify Settings.executeLogging = false, see here.
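For a Spring Boot application, one way to apply that setting is to expose a Settings bean; this is only a sketch, and it assumes that Spring Boot's jOOQ auto-configuration picks the bean up:

import org.jooq.conf.Settings;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class JooqSettingsConfig {

    // Turns off jOOQ's execute logging (the org.jooq.tools.LoggerListener output)
    // independently of the logging framework configuration.
    @Bean
    public Settings jooqSettings() {
        return new Settings().withExecuteLogging(false);
    }
}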
Obviously, there are also ways to set up loggers correctly, but I cannot see what you did from your description, or why that failed.
I need to know where I can specify the Spring Boot port, as when I check my service status I get this:
"Serverkt.log started at server-name with process id"
But I don't get the port number.
I would also like to know where I can find the Spring logs on the server.
Spring Boot has its configuration file. I guess that you have it included in your jar/war file, so basically unzip that file and look inside it (search for application.properties or application.yaml|yml).
The property server.port defines the port on which the application runs; it defaults to 8080 (see the sample application.properties below).
If you are using Spring Boot 2, then with the property logging.path you can change the path where the output file is placed. However, I don't know if this works when you have a logback/log4j/... configuration.
When you run your application, you can override the properties specified in application.properties|yaml by providing command-line properties. For example, you can change the port with the command java -jar your-boot.jar --server.port=9090
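As an illustration only (the port and path are placeholders), the same settings could live in application.properties:
# application.properties
server.port=9090
# Spring Boot 2: directory for the default log file (spring.log)
logging.path=/var/log/myapp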
Since you're starting Spring Boot from a service file, you can set the port using command-line arguments. For example, port 8083 would look like this:
ExecStart=/usr/bin/java -jar /opt/corda/spring.jar --server.port=8083
In our projects we have a strange problem with duplicate log entries in the log file.
We have multiple appenders but a single logger.
If the Spring Boot application is started on a local machine using java -jar, the problem is not reproducible.
The problem occurs only when the application is started as a service.
How can I solve this problem?
The problem occurs only if a file appender is configured and the Spring Boot application is started via the /etc/init.d/ symlink.
Spring Boot's default start script redirects all console output into the configured log file.
As a result, both the Logback logger and the start script write to the same file, which is why we see duplicate entries in the log file.
Using systemctl (or setting the LOG_FILE or LOG_FOLDER environment variables) will solve this problem.
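For illustration, a systemd unit (the unit name and paths are hypothetical) lets systemd capture stdout/stderr in the journal instead of appending it to the Logback file:
# /etc/systemd/system/myapp.service
[Unit]
Description=My Spring Boot application

[Service]
ExecStart=/usr/bin/java -jar /opt/myapp/myapp.jar
# stdout/stderr go to the systemd journal, not to the Logback log file,
# so the file no longer receives duplicate entries.

[Install]
WantedBy=multi-user.target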
If you cannot switch to systemd, you can set the environment variables so that all stdout and stderr messages are redirected to /dev/null:
export LOG_FOLDER=/dev
export LOG_FILENAME=null