How to fix "Failed to process @BeforeTask or @AfterTask annotation because: Task with name "application-1" is already running" - spring-cloud-task

I have a Spring Cloud Task application set up with spring.cloud.task.single-instance-enabled=true. With this option, a lock record is created in the TASK_LOCK repository table and my task completes successfully. However, the lock record remains even though the task has completed, and subsequent runs fail with "Failed to process @BeforeTask or @AfterTask annotation because: Task with name "application-1" is already running."
I've tried changing the parameters to make the task run unique, but this did not work. If I specify a new task name, I can get it to run once but not twice. Manually removing the task lock record on the back end allows subsequent executions for the same task name.
Am I correct to assume that the task lock should be removed from the table upon completion of the task?
application.yml
spring:
  cloud:
    task:
      single-instance-enabled: true
  datasource:
    url: ****
    username: ****
    password: ****
    driver-class-name: oracle.jdbc.OracleDriver
  jpa:
    hibernate:
      ddl-auto: create-drop
    properties:
      hibernate:
        dialect: org.hibernate.dialect.Oracle12cDialect
Data Source Configurer class
import javax.sql.DataSource;

import org.springframework.cloud.task.configuration.DefaultTaskConfigurer;

// Routes Spring Cloud Task's repository (and lock) tables to the supplied DataSource.
public class DataSourceConfigurer extends DefaultTaskConfigurer {

    public DataSourceConfigurer(DataSource dataSource) {
        super(dataSource);
    }
}
Main Application class
. . .
@Autowired
private DataSource dataSource;

@Bean
public DataSourceConfigurer getTaskConfigurer() {
    return new DataSourceConfigurer(dataSource);
}
. . .
I'm expecting that only one execution of a task with a given name can be in a running state at a time, and that once that execution completes, a task with the same name is allowed to run again.
The actual result is that a task with a given name can be executed only once, ever: the task lock record remains and blocks subsequent executions even though the first run is complete.
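For reference, here is a minimal sketch of the manual cleanup described above, done from code rather than on the back end. It assumes the default TASK_LOCK table (which mirrors Spring Integration's INT_LOCK schema) and that the lock key is the UUID of the task name as produced by Spring Integration's UUIDConverter; the class and method names are hypothetical:

import org.springframework.integration.util.UUIDConverter;
import org.springframework.jdbc.core.JdbcTemplate;

public class TaskLockCleaner {

    private final JdbcTemplate jdbcTemplate;

    public TaskLockCleaner(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // Deletes the stale lock row left behind after a completed run so the
    // next execution with the same task name can acquire the lock again.
    public void releaseStaleLock(String taskName) {
        String lockKey = UUIDConverter.getUUID(taskName).toString();
        jdbcTemplate.update(
            "DELETE FROM TASK_LOCK WHERE LOCK_KEY = ? AND REGION = ?",
            lockKey, "DEFAULT");
    }
}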

Related

Task execution is not working after launching the task in spring cloud data flow

I have created a Spring Boot application with the @EnableTask annotation and tried to print the arguments to the log.
package com.custom.samplejob;

import java.util.Arrays;

import org.springframework.boot.CommandLineRunner;
import org.springframework.cloud.task.configuration.EnableTask;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableTask
public class TaskConfiguration {

    @Bean
    public CommandLineRunner commandLineRunner() {
        // Print the launch arguments; Arrays.toString shows the contents
        // rather than the array reference.
        return args -> System.out.println(Arrays.toString(args));
    }
}
Then I ran mvn clean install to install the jar into my local Maven repository:
com.custom:samplejob:0.0.1-SNAPSHOT
I use a custom docker-compose to run Spring Cloud Data Flow locally on Windows with the following parameters:
set HOST_MOUNT_PATH=C:\Users\user\.m2 (Local maven repository mounting)
set DOCKER_MOUNT_PATH=/root/.m2/
set DATAFLOW_VERSION=2.7.1
set SKIPPER_VERSION=2.6.1
docker-compose up
I registered the app using the command below:
app register --type task --name custom-task-trail-1 --uri maven://com.custom:samplejob:0.0.1-SNAPSHOT
I created a task using the UI (below URL) and launched it. The task launched successfully.
http://localhost:9393/dashboard/#/tasks-jobs/tasks
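For reference, the same creation and launch can be done from the SCDF shell instead of the UI (the definition name here is illustrative):

task create my-custom-task --definition "custom-task-trail-1"
task launch my-custom-task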
These are the logs I can see in the docker-compose up terminal:
dataflow-server | 2021-02-15 13:20:41.673 INFO 1 --- [nio-9393-exec-9] o.s.c.d.spi.local.LocalTaskLauncher : Preparing to run an application from com.custom:samplejob:jar:0.0.1-SNAPSHOT. This may take some time if the artifact must be downloaded from a remote host.
dataflow-server | 2021-02-15 13:20:41.693 INFO 1 --- [nio-9393-exec-9] o.s.c.d.spi.local.LocalTaskLauncher : Command to be executed: /usr/lib/jvm/jre-11.0.8/bin/java -jar /root/.m2/repository/com/custom/samplejob/0.0.1-SNAPSHOT/samplejob-0.0.1-SNAPSHOT.jar --name=dsdsds --spring.cloud.task.executionid=38
dataflow-server | 2021-02-15 13:20:41.702 INFO 1 --- [nio-9393-exec-9] o.s.c.d.spi.local.LocalTaskLauncher : launching task custom-task-trail-1-48794885-9a0a-4c46-a2a1-299bf91763ad
dataflow-server | Logs will be in /tmp/4921907601400/custom-task-trail-1-48794885-9a0a-4c46-a2a1-299bf91763ad
But the task execution list doesn't show the status, start date, or end date of those task executions.
Can someone help me resolve this? Am I missing something in the local installation or in the task's Spring Boot implementation?
I enabled Kubernetes on Docker Desktop and installed the Spring Cloud Data Flow server on top of that. Then I registered the app with a Docker URI, generating the Docker image with the jib-maven-plugin. Now the sample task application works in my case.
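For illustration, registering the app by Docker URI looks roughly like this (the image coordinates are hypothetical and depend on what jib-maven-plugin was configured to build):

app register --type task --name custom-task-trail-1 --uri docker:custom/samplejob:0.0.1-SNAPSHOT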

JHipster H2 DB Non-admin User

I am trying to run my Spring Boot/Liquibase/H2 database with a non-admin user and am having some problems understanding how to do this.
First off, I have seen some information here and tried to set up my application.yml this way:
datasource:
  type: com.zaxxer.hikari.HikariDataSource
  url: jdbc:h2:mem:test
  username: USERLIMITED
  password: user_limited_password
liquibase:
  contexts: dev, faker
  user: THELIQUIBASEUSER
  password: THELIQUIBASEPASSWORD
I also put these SQL statements in the changelog so that the user I want is created and given the proper access controls:
<sql>DROP USER IF EXISTS USERLIMITED</sql>
<sql>CREATE USER USERLIMITED PASSWORD 'user_limited_password'</sql>
<sql>GRANT ALL ON APP TO USERLIMITED</sql>
When trying to start up the app, I get the following error:
2020-10-21 14:41:18.532 DEBUG 8704 --- [ restartedMain] c.c.config.LiquibaseConfiguration : Configuring Liquibase
2020-10-21 14:41:18.617 WARN 8704 --- [ test-task-1] i.g.j.c.liquibase.AsyncSpringLiquibase : Starting Liquibase asynchronously, your database might not be ready at startup!
2020-10-21 14:41:20.226 ERROR 8704 --- [ restartedMain] com.zaxxer.hikari.pool.HikariPool : Hikari - Exception during pool initialization.
org.h2.jdbc.JdbcSQLInvalidAuthorizationSpecException: Wrong user name or password [28000-200]
What is interesting is that if I change the LiquibaseConfiguration file to use the synchronous DB configuration instead of the asynchronous default, I do not get an error:
// If you don't want Liquibase to start asynchronously, substitute by this:
SpringLiquibase liquibase = SpringLiquibaseUtil.createSpringLiquibase(liquibaseDataSource.getIfAvailable(), liquibaseProperties, dataSource.getIfUnique(), dataSourceProperties);
// SpringLiquibase liquibase = SpringLiquibaseUtil.createAsyncSpringLiquibase(this.env, executor, liquibaseDataSource.getIfAvailable(), liquibaseProperties, dataSource.getIfUnique(), dataSourceProperties);
Then if I go to the H2 console and run a query to see my users (see below), I only have the one admin user (which should be a non-admin).
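For reference, a query along these lines lists the users, assuming H2 1.4.x, where INFORMATION_SCHEMA.USERS exposes an ADMIN flag:

SELECT NAME, ADMIN FROM INFORMATION_SCHEMA.USERS;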
Trying to log in as the Liquibase user that I set up in the yml
user: THELIQUIBASEUSER
password: THELIQUIBASEPASSWORD
does not work: the user is not there, and I get the Wrong user name or password [28000-200] error.
This leads me to believe it is something to do with how the application starts up and the priority of asynchronous task execution.
Any help is very much appreciated!

SCDF: Restart and resume a composed task

SCDF Composed Task Runner gives us the option to turn on --increment-instance-enabled. This option creates an artificial run.id parameter which increments on every run, so each execution is unique to Spring Batch and the task will restart.
The problem with the IdIncrementer arises when I mix it with executions that don't use it. When a task does not finish, I want to resume it. What I encountered is that once the task finishes without the IdIncrementer, I cannot start the task again with the IdIncrementer.
I was wondering what would be the best way to restart with the option to resume?
My idea would be to create a new IdResumer, which uses the same run.id as the last execution (see the sketch at the end of this question).
We run SCDF 2.2.1 on OpenShift v3.11.98 and use Composed Task Runner (CTR) 2.1.1.
The steps to reproduce this:
1. Create a new SCDF task definition with the following definition: dummy1: dummy && dummy2: dummy && dummy3: dummy. The dummy app is a Docker container that fails randomly with a 50% chance.
2. Execute the SCDF task with --increment-instance-enabled=true and wait for one of the dummy tasks to fail (restart if needed).
3. To resume the same failed execution, execute the SCDF task with --increment-instance-enabled=false and let it finish successfully (redo if needed).
4. Start the SCDF task again with --increment-instance-enabled=true.
At step 4 the composed task throws a JobInstanceAlreadyCompleteException, even though --increment-instance-enabled is enabled again.
Caused by: org.springframework.batch.core.repository.JobInstanceAlreadyCompleteException:
A job instance already exists and is complete for parameters={-spring.cloud.data.flow.taskappname=composed-task-runner, -spring.cloud.task.executionid=3190, -spring.datasource.username=testuser, -graph=aaa-stackoverflow-dummy2 && aaa-stackoverflow-dummy3, -spring.cloud.data.flow.platformname=default, -spring.datasource.url=jdbc:postgresql://10.10.10.10:5432/tms_efa?ssl=true&sslfactory=org.postgresql.ssl.NonValidatingFactory&currentSchema=dev, -spring.datasource.driverClassName=org.postgresql.Driver, -spring.datasource.password=pass1234, -spring.cloud.task.name=aaa-stackoverflow, -dataflowServerUri=https://scdf-dev.company.com:443/ , -increment-instance-enabled=true}. If you want to run this job again, change the parameters.
Is there a better way to resume and restart the task?
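For what it's worth, here is a minimal sketch of the IdResumer idea mentioned above, assuming access to Spring Batch's JobExplorer and that the previous run.id can be read back from the latest execution's parameters (the class and method names are hypothetical):

import java.util.List;

import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.JobInstance;
import org.springframework.batch.core.explore.JobExplorer;

public class IdResumer {

    private final JobExplorer jobExplorer;

    public IdResumer(JobExplorer jobExplorer) {
        this.jobExplorer = jobExplorer;
    }

    // Returns the run.id of the most recent job instance so a failed run can
    // be resumed under the same JobInstance, falling back to 1 for a first run.
    public long resumeRunId(String jobName) {
        List<JobInstance> instances = jobExplorer.getJobInstances(jobName, 0, 1);
        if (instances.isEmpty()) {
            return 1L;
        }
        List<JobExecution> executions = jobExplorer.getJobExecutions(instances.get(0));
        Long lastRunId = executions.get(0).getJobParameters().getLong("run.id");
        return lastRunId != null ? lastRunId : 1L;
    }
}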

Spring Batch Integration job instance already exists on start up

I am using Spring Batch Integration to poll for a file and process it, and I was looking for some guidance on the job parameters aspect of it. I am using the following transformer to turn a file into a job launch request:
@Transformer
public JobLaunchRequest toRequest(Message<File> message) {
    JobParametersBuilder jobParametersBuilder = new JobParametersBuilder();
    // The file path and the current time make the parameter set unique per file.
    jobParametersBuilder.addString(fileParameterName, message.getPayload().getAbsolutePath());
    jobParametersBuilder.addLong("time", new Date().getTime());
    return new JobLaunchRequest(job, jobParametersBuilder.toJobParameters());
}
On starting the application for the first time, there is only one parameter, run.id. If I add a file to the directory the poller is watching, two parameters are created in the DB: fileParameterName and time. If I start the application again, it reuses the previous values for fileParameterName and time and adds a new run.id. The message on the initial start-up is:
Job: ... launched with the following parameters: [{run.id=1}]
If I add a file, my application handles it correctly:
Job: ... launched with the following parameters: [{input.file.name=C:\Temp\test.csv, time=1472051531556}]
but if I stop and start the application again, I get the following message:
Job: ... launched with the following parameters: [{time=1472051531556, run.id=1, input.file.name=C:\Temp\test.csv}]
My question is: why does this start-up reuse the previous parameters? Is there a way to add the current time as a parameter on start-up instead of the previous time, so I don't get "A job instance already exists and is complete for parameters={}"? Or to stop the jobs from running on start-up at all?
Also, if the application is running and I add a file, it enters the toRequest method, but it does not on start-up.
Any help would be great.
Thanks
We pass a run.id parameter set to the current timestamp when we kick off a Spring Batch job. This is how we kick off a Spring Batch job from a shell script:
RUN_ID=$(date +"%Y-%m-%d %H:%M:%S")
JOB_PARAMS="filename=XXX"
$JAVA_HOME org.springframework.batch.core.launch.support.CommandLineJobRunner \
    springbatch_XXX.xml SpringBatchJob run.id="$RUN_ID" ${JOB_PARAMS}
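Separately, if the goal is just to stop Spring Boot from launching jobs automatically on start-up, there is a standard Boot property for that (not part of the answer above, but relevant to the last part of the question):

spring.batch.job.enabled=false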

grails database migration plugin problems after upgrade to grails 3

I had been using a previous version of the grails-database-migration plugin for a while and never had any big issues with it. However, I recently upgraded the whole project to Grails 3.0.9 and did some additional development. The behavior is as follows:
Imported the current prod DB structure into my local machine (that DB copy is without the latest changes and new entities)
Executed: grails -Dgrails.env=staging dbm-gorm-diff changlog.xml
What I expected at this point is a new changlog.xml file with all changes to existing entities plus the new ones.
What I got instead:
Newly defined entities were automatically added to the DB.
The changes in changlog.xml only included changes to already existing tables.
Also, if I try running grails -Dgrails.env=staging run-app:
ERROR grails.boot.GrailsApp - Application startup failed
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'springLiquibase_dataSource': Invocation of init method failed; nested exception is java.lang.NoSuchMethodError: liquibase.integration.spring.SpringLiquibase.createDatabase(Ljava/sql/Connection;Lliquibase/resource/ResourceAccessor;)Lliquibase/database/Database;
FAILURE: Build failed with an exception.
What went wrong: Execution failed for task ':bootRun'.
Process 'command '/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/bin/java'' finished with non-zero exit value 1
...
Try: Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.
Error: Failed to start server (Use --stacktrace to see the full trace)
Here is the relevant portion of my application.yml:
dataSource:
  pooled: true
  url: jdbc:mysql://127.0.0.1:3306/triz?useUnicode=yes&characterEncoding=UTF-8
  driverClassName: "com.mysql.jdbc.Driver"
  jmxExport: true
  username: root
  password: password
  dialect: org.hibernate.dialect.MySQL5InnoDBDialect
  properties:
    jmxEnabled: true
    initialSize: 5
    maxActive: 50
    minIdle: 5
    maxIdle: 25
    maxWait: 10000
    maxAge: 600000
    timeBetweenEvictionRunsMillis: 5000
    minEvictableIdleTimeMillis: 60000
    validationQuery: SELECT 1
    validationQueryTimeout: 3
    validationInterval: 15000
    testOnBorrow: true
    testWhileIdle: true
    testOnReturn: false
    jdbcInterceptors: ConnectionState
    defaultTransactionIsolation: 2

environments:
  development:
    dataSource:
      dbCreate: create
      # url: jdbc:h2:mem:devDb;MVCC=TRUE;LOCK_TIMEOUT=10000;DB_CLOSE_ON_EXIT=FALSE
  test:
    dataSource:
      dbCreate: update
      url: jdbc:h2:mem:testDb;MVCC=TRUE;LOCK_TIMEOUT=10000;DB_CLOSE_ON_EXIT=FALSE
  staging:
    dataSource:
      url: jdbc:mysql://127.0.0.1:3306/triz_staging?useUnicode=yes&characterEncoding=UTF-8
and build.gradle:
buildscript {
    ext {
        grailsVersion = project.grailsVersion
    }
    repositories {
        mavenCentral()
        mavenLocal()
        maven { url "https://repo.grails.org/grails/core" }
    }
    dependencies {
        classpath "org.grails:grails-gradle-plugin:$grailsVersion"
        classpath 'com.bertramlabs.plugins:asset-pipeline-gradle:2.5.0'
        // classpath 'com.bertramlabs.plugins:less-asset-pipeline:2.6.7'
        classpath "org.grails.plugins:hibernate:4.3.10.5"
        classpath 'org.grails.plugins:database-migration:2.0.0.RC4'
    }
}
...
...
dependencies {
    ...
    compile 'org.liquibase:liquibase-core:3.3.2'
    runtime 'org.grails.plugins:database-migration:2.0.0.RC4'
}
UPDATE
I have another way to approach this problem.
My plan was to generate a changelog based on my current prod DB and then generate a diff for the changes I made. Sounds simple and straightforward; however, it didn't work out as expected. Here is what I did:
Dumped the prod DB
Removed the Liquibase tables
Ran: grails dbm-generate-changelog changelog-init.xml --add
At this point, I expected changelog-init.xml to contain the current state of the DB. Instead, it applied the changes based on my models first and then tried generating the diff. Eventually, I ended up with a changelog that includes my entire existing DB with the GORM changes already applied.
What am I doing wrong here?
Additional Observations
It looks like whenever I try to run ANY migration-related command, Grails applies all the model changes first, even though my config says:
staging:
  dataSource:
    dbCreate: ~
    url: jdbc:mysql://127.0.0.1:3306/triz_staging?useUnicode=yes&characterEncoding=UTF-8
    properties:
      jmxEnabled: true
I also tried completely removing dbCreate. That didn't change anything...
I'm stuck and have no idea where to go next!
Well, here is the deal...
I am not sure if that was the real reason, but all I did was move the datasource config from application.yml to application.groovy, and everything went back to normal.
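For illustration, the moved configuration in application.groovy would look something like this (a sketch derived from the YAML above, not the author's actual file):

dataSource {
    pooled = true
    url = "jdbc:mysql://127.0.0.1:3306/triz?useUnicode=yes&characterEncoding=UTF-8"
    driverClassName = "com.mysql.jdbc.Driver"
    username = "root"
    password = "password"
    dialect = "org.hibernate.dialect.MySQL5InnoDBDialect"
}

environments {
    staging {
        dataSource {
            url = "jdbc:mysql://127.0.0.1:3306/triz_staging?useUnicode=yes&characterEncoding=UTF-8"
        }
    }
}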
I would be happy to hear thoughts.
Thanks.
