I have gone through the GrepCode source for JdbcJobInstanceDao and found a code snippet that I am trying hard to understand.
According to the Spring Batch Repository Schema,
CREATE TABLE BATCH_JOB_INSTANCE (
  JOB_INSTANCE_ID BIGINT NOT NULL PRIMARY KEY,
  VERSION BIGINT,
  JOB_NAME VARCHAR(100) NOT NULL,
  JOB_KEY VARCHAR(32) NOT NULL,
  constraint JOB_INST_UN unique (JOB_NAME, JOB_KEY)
) ENGINE=InnoDB;
JOB_NAME looks unique. However, in the JdbcJobInstanceDao#getJobInstances(String jobName, int start, int count) method it is treated as if multiple entries can exist in the BATCH_JOB_INSTANCE table for the same JOB_NAME.
Is this a possibility? Please explain.
The JOB_NAME is not unique. The combination of JOB_NAME and JOB_KEY (which is a hash of the job parameters) is unique.
So multiple instances of the same job can exist, as long as they have different job parameters.
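For example, the following two rows (a sketch with made-up IDs and key values; the real JOB_KEY is an MD5 hash of the identifying job parameters) can coexist without violating the JOB_INST_UN constraint:
-- two instances of the same job, distinguished by the parameter hash
INSERT INTO BATCH_JOB_INSTANCE (JOB_INSTANCE_ID, VERSION, JOB_NAME, JOB_KEY)
VALUES (1, 0, 'importJob', 'hashOfParamsForRun1');
INSERT INTO BATCH_JOB_INSTANCE (JOB_INSTANCE_ID, VERSION, JOB_NAME, JOB_KEY)
VALUES (2, 0, 'importJob', 'hashOfParamsForRun2');
-- a third row repeating ('importJob', 'hashOfParamsForRun1') would violate JOB_INST_UN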
For an application based on Spring Boot and relying on a PostgreSQL 9.6 database, I'm using Spring Batch to schedule a few operations which must take place every n seconds (customizable, but usually ranging between a few seconds and a few minutes); as a result, by the end of the day a lot of jobs have been performed by the system and a lot of information has been persisted by Spring Batch.
The fact is that I'm not really interested in keeping a history of those jobs, so at the beginning I used the in-memory version of Spring Batch to avoid persisting such (to me) useless information.
However, for configurations with a small n running on environments with low resources, this approach led to performance issues, so I decided to try the database approach.
Unfortunately, those tables grow quite fast and I would like to implement a cleanup procedure to get rid of all data older than, for instance, a day.
Here comes the pain: even if nothing is locking those tables (the main application is down and no one is interacting with the database), it takes forever to clean them, and I really cannot understand why.
Spring Batch (4.0.1) provides the following PG script to generate those tables:
CREATE TABLE BATCH_JOB_INSTANCE (
  JOB_INSTANCE_ID BIGINT NOT NULL PRIMARY KEY,
  VERSION BIGINT,
  JOB_NAME VARCHAR(100) NOT NULL,
  JOB_KEY VARCHAR(32) NOT NULL,
  constraint JOB_INST_UN unique (JOB_NAME, JOB_KEY)
);
CREATE TABLE BATCH_JOB_EXECUTION (
  JOB_EXECUTION_ID BIGINT NOT NULL PRIMARY KEY,
  VERSION BIGINT,
  JOB_INSTANCE_ID BIGINT NOT NULL,
  CREATE_TIME TIMESTAMP NOT NULL,
  START_TIME TIMESTAMP DEFAULT NULL,
  END_TIME TIMESTAMP DEFAULT NULL,
  STATUS VARCHAR(10),
  EXIT_CODE VARCHAR(2500),
  EXIT_MESSAGE VARCHAR(2500),
  LAST_UPDATED TIMESTAMP,
  JOB_CONFIGURATION_LOCATION VARCHAR(2500) NULL,
  constraint JOB_INST_EXEC_FK foreign key (JOB_INSTANCE_ID)
    references BATCH_JOB_INSTANCE(JOB_INSTANCE_ID)
);
CREATE TABLE BATCH_JOB_EXECUTION_PARAMS (
  JOB_EXECUTION_ID BIGINT NOT NULL,
  TYPE_CD VARCHAR(6) NOT NULL,
  KEY_NAME VARCHAR(100) NOT NULL,
  STRING_VAL VARCHAR(250),
  DATE_VAL TIMESTAMP DEFAULT NULL,
  LONG_VAL BIGINT,
  DOUBLE_VAL DOUBLE PRECISION,
  IDENTIFYING CHAR(1) NOT NULL,
  constraint JOB_EXEC_PARAMS_FK foreign key (JOB_EXECUTION_ID)
    references BATCH_JOB_EXECUTION(JOB_EXECUTION_ID)
);
CREATE TABLE BATCH_STEP_EXECUTION (
  STEP_EXECUTION_ID BIGINT NOT NULL PRIMARY KEY,
  VERSION BIGINT NOT NULL,
  STEP_NAME VARCHAR(100) NOT NULL,
  JOB_EXECUTION_ID BIGINT NOT NULL,
  START_TIME TIMESTAMP NOT NULL,
  END_TIME TIMESTAMP DEFAULT NULL,
  STATUS VARCHAR(10),
  COMMIT_COUNT BIGINT,
  READ_COUNT BIGINT,
  FILTER_COUNT BIGINT,
  WRITE_COUNT BIGINT,
  READ_SKIP_COUNT BIGINT,
  WRITE_SKIP_COUNT BIGINT,
  PROCESS_SKIP_COUNT BIGINT,
  ROLLBACK_COUNT BIGINT,
  EXIT_CODE VARCHAR(2500),
  EXIT_MESSAGE VARCHAR(2500),
  LAST_UPDATED TIMESTAMP,
  constraint JOB_EXEC_STEP_FK foreign key (JOB_EXECUTION_ID)
    references BATCH_JOB_EXECUTION(JOB_EXECUTION_ID)
);
CREATE TABLE BATCH_STEP_EXECUTION_CONTEXT (
  STEP_EXECUTION_ID BIGINT NOT NULL PRIMARY KEY,
  SHORT_CONTEXT VARCHAR(2500) NOT NULL,
  SERIALIZED_CONTEXT TEXT,
  constraint STEP_EXEC_CTX_FK foreign key (STEP_EXECUTION_ID)
    references BATCH_STEP_EXECUTION(STEP_EXECUTION_ID)
);
CREATE TABLE BATCH_JOB_EXECUTION_CONTEXT (
  JOB_EXECUTION_ID BIGINT NOT NULL PRIMARY KEY,
  SHORT_CONTEXT VARCHAR(2500) NOT NULL,
  SERIALIZED_CONTEXT TEXT,
  constraint JOB_EXEC_CTX_FK foreign key (JOB_EXECUTION_ID)
    references BATCH_JOB_EXECUTION(JOB_EXECUTION_ID)
);
CREATE SEQUENCE BATCH_STEP_EXECUTION_SEQ MAXVALUE 9223372036854775807 NO CYCLE;
CREATE SEQUENCE BATCH_JOB_EXECUTION_SEQ MAXVALUE 9223372036854775807 NO CYCLE;
CREATE SEQUENCE BATCH_JOB_SEQ MAXVALUE 9223372036854775807 NO CYCLE;
Respecting the order of the foreign key references, I try to clean up those tables by executing the following deletions:
delete from BATCH_STEP_EXECUTION_CONTEXT;
delete from BATCH_STEP_EXECUTION;
delete from BATCH_JOB_EXECUTION_CONTEXT;
delete from BATCH_JOB_EXECUTION_PARAMS;
delete from BATCH_JOB_EXECUTION;
delete from BATCH_JOB_INSTANCE;
Everything is fine with the first 4 tables but, as soon as I reach the BATCH_JOB_EXECUTION one, it takes around 30 minutes to remove a few hundred thousand rows.
Even worse, after deleting everything from the first 5 tables, the last one (which at that point is referenced by nothing) takes even longer.
Can you see a reason why this simple operation takes so long to complete? I mean, of course it has to check for constraint violations, but it still seems unreasonably slow.
Plus, is there a better way to use Spring Batch without wasting disk space on unnecessary job information?
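For what it's worth, a minimal way to see where the time goes would be something like the following (a sketch, assuming direct psql access; note that EXPLAIN ANALYZE really executes the statement, hence the rollback):
BEGIN;
-- runs the delete and reports per-node timings, including the time
-- spent firing the foreign key constraint triggers
EXPLAIN (ANALYZE, BUFFERS) DELETE FROM BATCH_JOB_EXECUTION;
ROLLBACK;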
I am facing an issue with my DataStage job. I have to fill a table ttperiodeas in Oracle from a .csv file. The SQL query in the Oracle connector is shown in this screenshot:
[Screenshot: Oracle connector]
And here is the Oracle script:
CREATE TABLE TTPERIODEAS
(
CDPARTITION VARCHAR2(5 BYTE) NOT NULL ENABLE,
CDCOMPAGNIE NUMBER(4,0) NOT NULL ENABLE,
CDAPPLI NUMBER(4,0) NOT NULL ENABLE,
NUCONTRA CHAR(15 BYTE) NOT NULL ENABLE,
DTDEBAS NUMBER(8,0) NOT NULL ENABLE,
DTFINAS NUMBER(8,0) NOT NULL ENABLE,
TAUXAS NUMBER(8,5) NOT NULL ENABLE,
CONSTRAINT PK_TTPERIODEAS
PRIMARY KEY (CDPARTITION, CDCOMPAGNIE, CDAPPLI, NUCONTRA, DTDEBAS)
)
PARTITION BY LIST(CDPARTITION)
(PARTITION P_PERIODEAS_13Q VALUES ('13Q'));
When running the job, I get the following error message and the table is not filled:
The index 'USINODSD0.SYS_C00249007' its partition is unusable
Please, I need help. Thanks.
The index is global (i.e. not partitioned) because there is no using index local at the end of the definition. This is also true for the PK index shown above. (I'm assuming they are two different things, because by default the DDL above would create an index named PK_TTPERIODEAS, so I'm not sure what SYS_C00249007 is.) If you can drop and rebuild them as local indexes (i.e. partitioned to match the table) then truncating or dropping a partition will no longer invalidate indexes.
For example, you could rebuild the primary key as:
alter table ttperiodeas
drop primary key;
alter table ttperiodeas
add constraint pk_ttperiodeas primary key (cdpartition,cdcompagnie,cdappli,nucontra,dtdebas)
using index local;
I don't know how SYS_C00249007 is defined, but you could use something similar.
The create table command might be something like:
create table ttperiodeas
( cdpartition varchar2(5 byte) not null
, cdcompagnie number(4,0) not null
, cdappli number(4,0) not null
, nucontra varchar2(15 byte) not null
, dtdebas number(8,0) not null
, dtfinas number(8,0) not null
, tauxas number(8,5) not null
, constraint pk_ttperiodeas
primary key (cdpartition,cdcompagnie,cdappli,nucontra,dtdebas)
using index local
)
partition by list(cdpartition)
( partition p_periodeas_13q values ('13Q') );
Alternatively, you could add the update global indexes clause when dropping the partition:
alter table ttperiodeas drop partition p_periodeas_14q update global indexes;
(By the way, NUCONTRA should probably be a standard VARCHAR2 and not CHAR, which is intended for cross-platform compatibility and ANSI completeness, and in practice just wastes space and creates bugs.)
The message says that the index partition in question is unusable, so you could try to rebuild the corresponding index partition by using
alter index [index_name] rebuild partition [partition_name]
(with the fitting values for [index_name] and [partition_name]).
Before you do that, you should check the status of the index partitions in user_ind_partitions, since your error message does not look like Oracle error messages usually do.
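For example (a sketch, assuming the index belongs to the current schema):
-- overall index status; partitioned indexes report 'N/A' here
SELECT index_name, status
FROM user_indexes
WHERE table_name = 'TTPERIODEAS';
-- per-partition status of a partitioned index
SELECT index_name, partition_name, status
FROM user_ind_partitions
WHERE index_name = 'SYS_C00249007';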
But since the index is global as William Robertson pointed out, this is not applicable for the given situation.
I wonder if someone could explain the behaviour of the H2 JDBC driver when deleting an entry from a rather simple table.
When using the following table definition, the method executeUpdate() for a PreparedStatement instance returns 1 if one entry has been deleted (expected behaviour).
CREATE TABLE IF NOT EXISTS "MATERIAL" (
"CODE" VARCHAR(5) NOT NULL,
"NAME" VARCHAR(100) NOT NULL
);
When adding a PRIMARY KEY constraint on the CODE column, the same method returns 0 although the entry gets deleted successfully (behaviour not expected).
CREATE TABLE IF NOT EXISTS "MATERIAL" (
"CODE" VARCHAR(5) NOT NULL,
"NAME" VARCHAR(100) NOT NULL,
PRIMARY KEY ("CODE")
);
Most interestingly, when adding an INT column to serve as the PRIMARY KEY, the return value is 1 again:
CREATE TABLE IF NOT EXISTS "MATERIAL" (
"ID" INT NOT NULL AUTO_INCREMENT,
"CODE" VARCHAR(5) NOT NULL,
"NAME" VARCHAR(100) NOT NULL,
PRIMARY KEY ("ID")
);
Is someone able to reproduce this behaviour and perhaps explain it to me?
I have included the current version of H2 DB using maven.
EDIT:
If I additionally add a UNIQUE constraint for the CODE column, the return value is 0 again ...
CREATE TABLE IF NOT EXISTS "MATERIAL" (
"ID" INT NOT NULL AUTO_INCREMENT,
"CODE" VARCHAR(5) NOT NULL UNIQUE,
"NAME" VARCHAR(100) NOT NULL,
PRIMARY KEY ("CODE")
);
EDIT 2:
The query used to delete an entry looks like the following (used in a PreparedStatement):
DELETE FROM MATERIAL WHERE CODE = ?
SOLUTION:
I'm sorry to have bothered you with this. Actually, there was no problem with the table definition or the JDBC driver. It was my test data: from earlier testing I had wanted to INSERT two entries having the same CODE. It was a multi-row insert, and obviously this failed when CODE was the PK or had a UNIQUE index. Thus, in these cases executeUpdate() could only return 0 because there was no data in the table at all.
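For reference, the offending test setup looked roughly like this (a sketch with made-up values):
-- multi-row insert: with a PRIMARY KEY or UNIQUE constraint on CODE,
-- the duplicate makes the whole statement fail, so the table stays
-- empty and any later DELETE necessarily returns 0
INSERT INTO "MATERIAL" ("CODE", "NAME") VALUES
('WOOD1', 'Oak'),
('WOOD1', 'Pine');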
I have a Spring Boot application and I am trying to initialize some data on application startup.
This is my application.properties:
#Database connection
spring.datasource.url=jdbc:h2:mem:test_db
spring.datasource.username=...
spring.datasource.password=...
spring.datasource.driverClassName=org.h2.Driver
spring.datasource.initialize=true
spring.datasource.schema=schema.sql
spring.datasource.data=schema.sql
#Hibernate configuration
#spring.jpa.hibernate.ddl-auto = none
This is schema.sql:
CREATE TABLE IF NOT EXISTS `Person` (
`id` INTEGER PRIMARY KEY AUTO_INCREMENT,
`first_name` VARCHAR(50) NOT NULL,
`age` INTEGER NOT NULL,
PRIMARY KEY(`id`)
);
and data.sql
INSERT INTO `Person` (
`id`,
`first_name`,
`age`
) VALUES (
1,
'John',
20
);
But I got 'Syntax error in SQL statement' on application startup:
19:08:45.642 6474 [main] INFO o.h.tool.hbm2ddl.SchemaExport - HHH000476: Executing import script '/import.sql'
19:08:45.643 6475 [main] ERROR o.h.tool.hbm2ddl.SchemaExport - HHH000388: Unsuccessful: CREATE TABLE Person (
19:08:45.643 6475 [main] ERROR o.h.tool.hbm2ddl.SchemaExport - Syntax error in SQL statement "CREATE TABLE PERSON ( [*]"; expected "identifier"
Syntax error in SQL statement "CREATE TABLE PERSON ( [*]"; expected "identifier"; SQL statement:
I can't understand what's wrong with this SQL.
Try this code. Remove PRIMARY KEY(id) and execute it.
CREATE TABLE IF NOT EXISTS `Person` (
`id` INTEGER PRIMARY KEY AUTO_INCREMENT,
`first_name` VARCHAR(50) NOT NULL,
`age` INTEGER NOT NULL
);
This error results from the structure of the CREATE TABLE declaration.
It will be the result when you have an extra comma at the end of your SQL declaration, with no column declaration following the comma. For example:
CREATE TABLE IF NOT EXISTS `Person` (
`id` INTEGER PRIMARY KEY AUTO_INCREMENT,
`first_name` VARCHAR(50) NOT NULL,
`age` INTEGER NOT NULL, --note this line has a comma in the end
);
That's because CREATE TABLE expects a list of the columns that will be created along with the table, and the first token of each column definition is the identifier. As you can check in the H2 documentation, a column definition follows the structure:
identifier datatype <constraints> <autoincrement> <functions>
Thus, in your case, as @budthapa and @Vishwanath Mataphati have mentioned, you could simply remove the PRIMARY KEY(`id`) line from the CREATE TABLE declaration. Moreover, you have already declared id as a primary key on the first line of the column definitions.
In case you do not have a statement such as the PRIMARY KEY declaration, be sure to check for an extra comma following your last column declaration.
Try this, without the backticks around the table and column names:
CREATE TABLE IF NOT EXISTS Person (
id INTEGER PRIMARY KEY AUTO_INCREMENT,
first_name VARCHAR(50) NOT NULL,
age INTEGER NOT NULL
);
I added the lines below to application.properties and it worked for me:
spring.jpa.properties.hibernate.globally_quoted_identifiers=true
spring.jpa.properties.hibernate.globally_quoted_identifiers_skip_column_definitions = true
What helped in my case was removing the single quotes around the table name in my insert query.
I had to change this:
INSERT INTO 'translator' (name, email) VALUES ('John Smith', 'john@mail.com');
to this:
INSERT INTO translator (name, email) VALUES ('John Smith', 'john@mail.com');
You set id to auto-increment, so you can't insert a new record with an explicit id.
Try:
INSERT INTO `Person` (
`first_name`,
`age`
) VALUES (
'John',
20
);
I ran into the same issue. I fixed it with these application.properties entries:
spring.jpa.properties.hibernate.connection.charSet=UTF-8
spring.jpa.properties.hibernate.hbm2ddl.import_files_sql_extractor=org.hibernate.tool.hbm2ddl.MultipleLinesSqlCommandExtractor
There was some issue with multi-line statements and the default encoding.
I am new to H2. I am just using H2 in Spring embedded mode with Hibernate, and I am trying to execute the following scripts using H2.
CREATE TABLE acct_authority (
id bigint(20) NOT NULL auto_increment,
name varchar(255) NOT NULL default '',
value varchar(255) NOT NULL,
PRIMARY KEY (id),
UNIQUE KEY name (name)
);
The table acct_authority is created without any error. But if I create another table with the following script:
CREATE TABLE acct_role (
id bigint(20) NOT NULL auto_increment,
name varchar(255) NOT NULL default '',
PRIMARY KEY (id),
UNIQUE KEY name (name)
);
It shows an error saying the constraint name already exists. What mistake did I make?
You tried to create two constraints with the same name. As you can see, both CREATE TABLE statements contain the following:
UNIQUE KEY name (name)
The result is that the first statement creates a constraint named name, and the second one fails because a constraint called name already exists. The problem can be solved by using unique names. In general, it also makes sense to have slightly more descriptive names for database objects. For example, you could use something like the following:
UNIQUE KEY acct_authority_name_UNIQUE (name)
...
UNIQUE KEY acct_role_name_UNIQUE (name)
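Putting that together, the second table could for example be created as:
-- same definition as before, but the unique constraint now has a
-- name that cannot collide with the one on acct_authority
CREATE TABLE acct_role (
  id bigint(20) NOT NULL auto_increment,
  name varchar(255) NOT NULL default '',
  PRIMARY KEY (id),
  UNIQUE KEY acct_role_name_UNIQUE (name)
);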