Null value in column primary_id when a spring boot session is created - spring-boot

I have been trying to add session data to my application. For the most part I have been following the docs here for creating a JDBC HTTP session. Accordingly, I have
HttpSessionConfig.java
@EnableJdbcHttpSession
public class HttpSessionConfig {}
and I have my username, password, database name and so forth in my application.properties file. Furthermore, I created the session tables using the script found here. However, when I go to my login page I get the error
PreparedStatementCallback; SQL [INSERT INTO SPRING_SESSION(SESSION_ID,
CREATION_TIME, LAST_ACCESS_TIME, MAX_INACTIVE_INTERVAL,
PRINCIPAL_NAME) VALUES (?, ?, ?, ?, ?)]; ERROR: null value in column
"primary_id" violates not-null constraint Detail: Failing row contains
(null, 835a7b12-d171-4f77-bd22-7da7ed78ca12, 1510171280672,
1510171280672, 1800, null, null).; nested exception is
org.postgresql.util.PSQLException: ERROR: null value in column
"primary_id" violates not-null constraint Detail: Failing row contains
(null, 835a7b12-d171-4f77-bd22-7da7ed78ca12, 1510171280672,
1510171280672, 1800, null, null).
However, as far as I can tell from the docs this is supposed to work. Is there a step that I overlooked, or one that isn't in the docs?

For me the solution was what @user5960886 pointed out in one of their own replies.
I had created the tables spring_session (and spring_session_attributes) from the wrong schema (master), which requires primary_id to be non-null, while my application was running a version of Spring Session that does not populate this column (namely 1.3.0.RELEASE).
Here are links to the two schemas:
Version 1.3.0.RELEASE
https://github.com/spring-projects/spring-session/blob/1.3.0.RELEASE/spring-session/src/main/resources/org/springframework/session/jdbc/schema-postgresql.sql
Latest Master
https://github.com/spring-projects/spring-session/blob/master/spring-session-jdbc/src/main/resources/org/springframework/session/jdbc/schema-postgresql.sql
You may need to drop and recreate your existing tables if you created them using the latest schema. NOTE: existing data will be lost. Here are the drop SQL statements you can use:
https://raw.githubusercontent.com/spring-projects/spring-session/1.3.0.RELEASE/spring-session/src/main/resources/org/springframework/session/jdbc/schema-drop-postgresql.sql
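The linked drop script boils down to something like the following (the attributes table goes first, since it references the session table):

```sql
-- Drop in dependency order: SPRING_SESSION_ATTRIBUTES references SPRING_SESSION
DROP TABLE SPRING_SESSION_ATTRIBUTES;
DROP TABLE SPRING_SESSION;
```

After dropping, recreate the tables from the schema file that matches the Spring Session version actually on your classpath.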

Related

Tests failing when upgrading to Spring Boot 2.7 - "CommandAcceptanceException: Error executing DDL"

After upgrading to Boot 2.7, the integration tests that were using an embedded H2 database started failing.
I see this WARN message in the logs, but neither the cause nor the solution is clear from it:
WARN 8053 ---[ main] o.h.t.s.i.ExceptionHandlerLoggedImpl :GenerationTarget encountered exception accepting command : Error executing DDL "create table user (id bigint generated by default as identity, email varchar(255) not null, name varchar(255), primary key (id))" via JDBC Statement
org.hibernate.tool.schema.spi.CommandAcceptanceException: Error executing DDL "create table user (id bigint generated by default as identity, email varchar(255) not null, name varchar(255), primary key (id))" via JDBC Statement
...
Caused by: org.h2.jdbc.JdbcSQLSyntaxErrorException: Syntax error in SQL statement "create table [*]user (id bigint generated by default as identity, email varchar(255) not null, name varchar(255), primary key (id))"; expected "identifier"; SQL statement:
create table user (id bigint generated by default as identity, email varchar(255) not null, name varchar(255), primary key (id)) [42001-212]
...
It seems my User table is not created after the upgrade, thus making my tests fail.
It seems Boot 2.7 upgraded its H2 dependency to 2.x, which is not backward compatible and introduces several changes:
https://github.com/spring-projects/spring-boot/wiki/Spring-Boot-2.7-Release-Notes#h2-21
The issue was that user is now a keyword/reserved word (the H2 "migration-to-v2" guide was not very helpful in my case; it mentioned that new keywords were added, but it didn't provide a link to the keyword list):
https://www.h2database.com/html/advanced.html#keywords
So what I had to do was use "quoted names" to define the table name of my entity (it seems I can use backticks in the table annotation too, instead of escaping double quotes):
@Table(name="\"user\"")
@Entity
public class User {
...
I also had to use double quotes on my data.sql files for this table:
INSERT INTO "user"(id, email, name) VALUES(1, 'test@user.com', 'Test User');
Note: the migration guide also mentions the possibility of using the SET NON_KEYWORDS command as a workaround, but it also discourages it.
Add the following to your src/test/resources/application-test.properties file (assuming your tests run with the test profile):
spring.jpa.properties.hibernate.globally_quoted_identifiers=true
spring.jpa.properties.hibernate.globally_quoted_identifiers_skip_column_definitions=true
If any of your JPA entities have UUID fields, ensure those fields are annotated with @Column and that the annotation's columnDefinition defines the column as type UUID (in its simplest form: @Column(columnDefinition="UUID")). This works around a Hibernate bug.
It could be because of identity columns in schema.sql. Identity columns should now be declared with GENERATED BY DEFAULT AS IDENTITY.
Ex: JOB_EXECUTION_ID BIGINT IDENTITY should be changed to JOB_EXECUTION_ID BIGINT GENERATED BY DEFAULT AS IDENTITY.
Refer to http://www.h2database.com/html/migration-to-v2.html for more such changes introduced by the H2 upgrade.
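As an illustration (the table and column names are borrowed from the Spring Batch example above), the old H2 1.x shorthand and its H2 2.x replacement look like this:

```sql
-- H2 1.x shorthand, rejected by H2 2.x:
-- CREATE TABLE BATCH_JOB_EXECUTION (JOB_EXECUTION_ID BIGINT IDENTITY NOT NULL PRIMARY KEY);

-- H2 2.x compliant declaration:
CREATE TABLE BATCH_JOB_EXECUTION (
    JOB_EXECUTION_ID BIGINT GENERATED BY DEFAULT AS IDENTITY NOT NULL PRIMARY KEY
);
```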

Spring Boot: How do I specify execute order of different schema.sql files?

I have created a table that has a foreign key constraint on spring-session-jdbc's spring_session table. The main motivation is that when spring-session deletes session rows, the delete cascades and removes the entries associated with the actual session. It became an "only works on my machine" problem because only I already had the table in place when starting the development server. For others it only works if they comment out the table first, initialize the server, then revert and run again. Otherwise: nested exception is java.sql.SQLException: Failed to open the referenced table 'spring_session'.
I think the solution is to specify the run order of (or the dependencies between) the initialization SQL files, but I cannot find such a setting after some searching, so I am asking here.
schema.sql:
drop table if exists foo;
create table if not exists foo (
sid char(36) not null,
foreign key (sid) references spring_session (session_id) on delete cascade,
-- other columns and constraints
);
Possible workarounds:
Workaround #1: put an alter table ... add constraint statement in data.sql.
Workaround #2: grab spring-session-jdbc's schema.sql and put it into my schema.sql, then set spring.session.jdbc.initialize-schema=never in application.properties.
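A minimal sketch of workaround #1, assuming the foo table from the question (the constraint name fk_foo_session is illustrative):

```sql
-- schema.sql: create foo without the foreign key
create table if not exists foo (
    sid char(36) not null
    -- other columns and constraints
);

-- data.sql: runs after Spring Session has initialized its own schema,
-- so spring_session exists by the time the constraint is added
alter table foo
    add constraint fk_foo_session
    foreign key (sid) references spring_session (session_id) on delete cascade;
```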
You can try Flyway; it manages your init SQL files by giving each one a version number, and it records which files have already been executed, so if you add another SQL file it will execute only the new one and skip the ones already applied.
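With Flyway the ordering comes from the version prefix in the migration file name; a minimal layout (the file names and table DDL here are illustrative) might be:

```sql
-- src/main/resources/db/migration/V1__create_spring_session.sql
-- (the spring-session-jdbc DDL, with spring.session.jdbc.initialize-schema=never
--  so Spring does not run its own initializer)

-- src/main/resources/db/migration/V2__create_foo.sql
create table foo (
    sid char(36) not null,
    foreign key (sid) references spring_session (session_id) on delete cascade
);
```

Because V1 always runs before V2, the spring_session table is guaranteed to exist when the foreign key is created, on every machine.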

SQLSTATE[23505]: Unique violation In-spite of ID set to auto_increment

I created a migration with the php artisan command, and it created a table in my Postgres database with id set to auto-increment.
I made a seeder in Laravel, and three rows of data were fed to the previously created table through the php artisan db:seed command.
When I insert data through a form into the same table, it gives me an error:
SQLSTATE[23505]: Unique violation: 7 ERROR: duplicate key value violates unique constraint "roles_pkey" DETAIL: Key (id)=(1) already exists. (SQL: insert into "roles" ("name", "guard_name", "updated_at", "created_at") values (staff, web, 2019-07-03 07:38:37, 2019-07-03 07:38:37) returning "id")
Sequences are objects that return a value one greater on each request, regardless of which transaction it was called in. A sequence by default starts at 1 and can back a column in one table or many, so it cannot know how many rows your table already holds. Your seeder presumably inserted rows with explicit ids, so the sequence never advanced and the next form insert reused id 1. If you want your insert to work you will need to set the sequence manually:
SELECT setval('roles_id_seq', (SELECT coalesce((SELECT max(id) from roles),1)))
This query assumes that the sequence was created on the "id" column of the "roles" table; if not, the sequence name can be found in that column's DDL, e.g. NOT NULL DEFAULT nextval('the_sequence_name'), and used to set the value.
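Alternatively, Postgres can resolve the owning sequence for you via pg_get_serial_sequence, which avoids guessing the name (a sketch, under the same roles/id assumption as above):

```sql
-- Look up the sequence owned by roles.id and fast-forward it
-- to the current maximum id (or 1 if the table is empty)
SELECT setval(pg_get_serial_sequence('roles', 'id'),
              coalesce((SELECT max(id) FROM roles), 1));
```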

MSSQL - fetch Auto increment column value and save it in a another column

This is regarding the MSSQL auto-increment feature. I have the following table:
CREATE TABLE [Order](
[order_id] [int] IDENTITY(1,1) NOT NULL,
[name] [varchar](50) NULL,
[original_order_id] [int] NOT NULL
)
Here I have a situation where I need to insert the auto-generated value of order_id into original_order_id.
After googling for a few minutes I found the following, and it works fine for me:
insert into [Order] values('Vegitable Order', IDENT_CURRENT('Order'))
I am using a Java application with Spring's JdbcTemplate to execute the queries. Can there be any issues, especially in a multi-threaded environment?
Using IDENT_CURRENT is not a good idea. If there are concurrent transactions, the returned value might not be your last inserted id, but the last inserted id of another transaction!
Use the JDBC getGeneratedKeys facility to retrieve the generated id of the first INSERT and use the retrieved value in the second INSERT, or use SCOPE_IDENTITY() in the second statement. With SCOPE_IDENTITY(), be sure that you are executing both statements in the same session and scope.
See also: How to get the insert ID in JDBC?
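A sketch of the SCOPE_IDENTITY() variant, assuming the Order table above (since original_order_id is NOT NULL, a placeholder value is inserted first and then fixed up in the same batch):

```sql
BEGIN TRANSACTION;

-- Insert with a placeholder; order_id is generated by the IDENTITY column
INSERT INTO [Order] ([name], [original_order_id])
VALUES ('Vegitable Order', 0);

-- SCOPE_IDENTITY() returns the id generated in this scope,
-- unaffected by concurrent inserts from other sessions
UPDATE [Order]
SET [original_order_id] = SCOPE_IDENTITY()
WHERE [order_id] = SCOPE_IDENTITY();

COMMIT;
```

Unlike IDENT_CURRENT('Order'), which reports the last identity generated for the table by any session, SCOPE_IDENTITY() is safe under concurrency.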

Oracle BPEL DB Adapter constraint exception when writing in two tables - suddenly broken

I've got a Oracle BPEL process running on WebLogic which is writing in two tables using Database Adapters.
There are two tables, TableA and TableB. TableB has a foreign key to TableA.
In the process I create an entry in TableA with id A_ID. When I then create an entry in TableB using A_ID as the FK, I get a constraint exception.
What is weird is that this worked last week, and now, using the same data, I get the error.
The datasource is set up as a standard - non-XA datasource.
This is the exception that is thrown:
<env:Fault>
<faultcode>env:Server</faultcode>
<faultstring>Exception occured when binding was invoked.
Exception occured during invocation of JCA binding: "JCA Binding execute of
Reference operation 'insert' failed due to: DBWriteInteractionSpec Execute
Failed Exception.
insert failed. Descriptor name: [Datawarehouse.TableB].
Caused by java.sql.BatchUpdateException: ORA-02291: integrity constraint
(DWH.TABLE_A_FK) violated - parent key not found
.
".
The invoked JCA adapter raised a resource exception.
Please examine the above error message carefully to determine a resolution.
</faultstring>
<faultactor/>
<detail>
<exception>ORA-02291: integrity constraint (DWH.TABLE_A_FK) violated - parent key not found</exception>
</detail>
</env:Fault>
This has nothing to do with Oracle BPEL. For some reason, either the insert into TableA is not successful, or the insert into TableA is not yet visible to the insert into TableB (the second insert running in a different transaction).
