Multiple SQL inserts in Spring Boot

I have the following code:
Offre o = offreRepository.save(offre);
for (OffreCompetence offreCompetence : offre.getOffreCompetences()) {
    offreCompetence.setOffre(o);
    offreCompetenceRepository.save(offreCompetence);
}
So as you can see, I'm calling offreRepository once to insert an Offre into the database, then calling offreCompetenceRepository multiple times to insert each OffreCompetence of the Offre into the database.
The problem here is that I'm hitting the database multiple times.
Isn't there another way to do these inserts at once?
Edit:
I tried adding this mapping:
@OneToMany(cascade = CascadeType.ALL, mappedBy = "offre")
private Set<OffreCompetence> offreCompetences;
But the offreCompetences are still not added to the database. When I checked the log file I noticed that Hibernate issues the INSERT statements for them, but I can't find them in the database. I think the problem is that when it tries to add the offreCompetences it doesn't know the id of the Offre for them:
Hibernate: insert into offre (date_expiration, date_publication, duree_mission, email_sended, etat, niveau_experience, nombre_postulant, nombre_vue, poste, profil_recherche, titre, code_type_contrat, code_ville) values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
2016-07-28 16:08:13.237 DEBUG 10736 --- [nio-8080-exec-4] org.hibernate.SQL : insert into offre_competence (competence, niveau_requis, offre) values (?, ?, ?)
Hibernate: insert into offre_competence (competence, niveau_requis, offre) values (?, ?, ?)
2016-07-28 16:08:13.304 DEBUG 10736 --- [nio-8080-exec-4] org.hibernate.SQL : insert into offre_competence (competence, niveau_requis, offre) values (?, ?, ?)
Hibernate: insert into offre_competence (competence, niveau_requis, offre) values (?, ?, ?)
2016-07-28 16:08:13.339 DEBUG 10736 --- [nio-8080-exec-4] org.hibernate.SQL : insert into offre_competence (competence, niveau_requis, offre) values (?, ?, ?)
Hibernate: insert into offre_competence (competence, niveau_requis, offre) values (?, ?, ?)

You need to ensure that the parent object is set on every child object in the collection. Cascading the persist will ensure that the object and its children are saved and linked correctly; unfortunately, if the parent is not explicitly set, the relationship will not be created in the database.
for (OffreCompetence offreCompetence : offre.getOffreCompetences()) {
    offreCompetence.setOffre(offre);
}
Offre o = offreRepository.save(offre);

Set the CascadeType on the offre.offreCompetences relationship.
@Entity
public class Offre {

    @OneToMany(cascade = CascadeType.ALL, mappedBy = "offre")
    private Set<OffreCompetence> offreCompetences;
    ...
}

@Entity
public class OffreCompetence {

    @ManyToOne
    @JoinColumn(name = "..." )
    private Offre offre;
}
Once you do this, offreCompetences will be saved when you save offre.
Look at the JPA docs for CascadeType.
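Putting the pieces together, a minimal sketch of the saving side might look like this, assuming OffreRepository is the Spring Data repository from the question; the OffreService class and its create method are just illustrative names:
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class OffreService {

    private final OffreRepository offreRepository;

    public OffreService(OffreRepository offreRepository) {
        this.offreRepository = offreRepository;
    }

    @Transactional
    public Offre create(Offre offre) {
        // Link every child back to its parent first; without this the
        // offre foreign key on offre_competence stays null.
        for (OffreCompetence offreCompetence : offre.getOffreCompetences()) {
            offreCompetence.setOffre(offre);
        }
        // Single repository call: CascadeType.ALL lets Hibernate insert the
        // Offre and then cascade the insert of every OffreCompetence.
        return offreRepository.save(offre);
    }
}
Because everything runs in one transaction, the cascaded inserts also go over a single connection. If the number of statements itself is the concern, Hibernate's JDBC batching (the hibernate.jdbc.batch_size property) can group them, although insert batching is disabled when ids are generated with an IDENTITY column.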

Related

Laravel - how to handle not required/optional field in $fillable insert and update

I have a table with the fields below.
Only name is required and the rest are optional:
'name','no_of_color','offset_printing_rate','screen_printing_rate','positive_rate','regular_plate_rate','big_plate_rate'
My model:
protected $fillable = [
    'name',
    'no_of_color',
    'offset_printing_rate',
    'screen_printing_rate',
    'positive_rate',
    'regular_plate_rate',
    'big_plate_rate',
];
but when I don't fill the optional fields it returns an error:
SQL: insert into table_name (name, no_of_color, offset_printing_rate, screen_printing_rate, positive_rate, regular_plate_rate, big_plate_rate, updated_at, created_at) values (name, ?, ?, ?, ?, ?, ?, 2021-10-09 14:47:36, 2021-10-09 14:47:36)
MySQL always wants a value for every column, so in Laravel you have to either set a default for each column or make it nullable. For performance, though, I recommend splitting the optional fields into a separate table linked to the main table by a relation; this prevents the buildup of NULL values so you don't waste storage space.

Checking the row existence before insertion using Cassandra within a Go application

I am using gocql with my Go application and trying to solve the issue described below.
CREATE TABLE IF NOT EXISTS website.users (
    id uuid,
    email_address text,
    first_name text,
    last_name text,
    created_at timestamp,
    PRIMARY KEY (email_address)
);
This query is going to overwrite a matching record, which is Cassandra's expected behaviour.
INSERT INTO users (id, email_address, first_name, last_name, created_at)
VALUES (?, ?, ?, ?, ?)
In order to prevent overwriting the existing record, we can use IF NOT EXISTS at the end of the query.
INSERT INTO users (id, email_address, first_name, last_name, created_at)
VALUES (?, ?, ?, ?, ?)
IF NOT EXISTS
However, there is no way for me to know whether the query affected any rows in the DB or not. Somehow I need to return something like a "Record exists" message back to the caller, but that is currently not possible. If there were something specific on session.Query(...).Exec() it would be useful, but there isn't as far as I know.
I was thinking of SELECTing by email_address before proceeding with the INSERT if there was no matching record, but as you can guess this is not feasible: by the time I INSERTed a new record after the SELECT, some other operation could have INSERTed a record with the same email address.
How do we handle such a scenario?
The solution is to use ScanCAS; the test case example from the library is here.
NOTE:
The order of the fields in ScanCAS() should match the cqlsh> DESCRIBE keyspace.users; output for the CREATE TABLE ... block.
If you don't care about the scanned fields, prefer MapScanCAS instead.
func (r Repository) Insert(ctx context.Context, user User) error {
	var (
		emailAddressCAS, firstNameCAS, idCAS, lastNameCAS string
		createdAtCAS                                      time.Time
	)
	query := `
		INSERT INTO users (email_address, created_at, first_name, id, last_name)
		VALUES (?, ?, ?, ?, ?) IF NOT EXISTS
	`
	// Bind the values in the same order as the column list above
	// (user.ID is assumed to be the uuid field on User).
	applied, err := r.session.Query(
		query,
		user.EmailAddress,
		user.CreatedAt,
		user.FirstName,
		user.ID,
		user.LastName,
	).
		WithContext(ctx).
		ScanCAS(&emailAddressCAS, &createdAtCAS, &firstNameCAS, &idCAS, &lastNameCAS)
	if err != nil {
		return err
	}
	if !applied {
		// Check the CAS vars here if you want.
		return // your custom error implying a duplication
	}
	return nil
}
If you're using INSERT with IF NOT EXISTS then, in contrast to "normal" inserts that don't return anything, such a query returns a single-row result consisting of:
a field named [applied] with the value true, if there was no record before and the new row was inserted;
a field named [applied] with the value false, plus all columns of the existing row, otherwise.
So you just need to get the result of your insert query and analyze it. See the documentation for more details.

Spring Batch Admin Integration to DB2 throwing SqlIntegrityConstraintViolationException

I am trying to integrate Spring Batch Admin into an existing Spring Batch program. This runs fine with HSQLDB, but when we configure it for DB2 it throws an SqlIntegrityConstraintViolationException. The DB2 tables were already created with the default script provided in the admin jar.
We are using the Quartz scheduler to trigger the jobs.
Here is the exception trace:
Caused by: org.springframework.dao.DataIntegrityViolationException: PreparedStatementCallback; SQL [INSERT into OD1.ABC_BATCH_JOB_EXECUTION(JOB_EXECUTION_ID, JOB_INSTANCE_ID, START_TIME, END_TIME, STATUS, EXIT_CODE, EXIT_MESSAGE, VERSION, CREATE_TIME, LAST_UPDATED, JOB_CONFIGURATION_LOCATION) values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)]; AN UPDATE, INSERT, OR SET VALUE IS NULL, BUT THE OBJECT COLUMN *N CANNOT CONTAIN NULL VALUES. SQLCODE=-407, SQLSTATE=23502, DRIVER=3.62.56; nested exception is com.ibm.db2.jcc.am.SqlIntegrityConstraintViolationException: AN UPDATE, INSERT, OR SET VALUE IS NULL, BUT THE OBJECT COLUMN *N CANNOT CONTAIN NULL VALUES. SQLCODE=-407, SQLSTATE=23502, DRIVER=3.62.56
at org.springframework.jdbc.support.SQLErrorCodeSQLExceptionTranslator.doTranslate(SQLErrorCodeSQLExceptionTranslator.java:249)
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:72)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:605)
at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:818)
at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:874)
at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:878)
at org.springframework.batch.core.repository.dao.JdbcJobExecutionDao.saveJobExecution(JdbcJobExecutionDao.java:157)
The configuration is as follows:
#DB2 configuration
batch.job.jndi=jdbc/DBOMS
batch.tableprefix=OD1.ABC_BATCH_
batch.schema.script=
batch.drop.script=
batch.business.schema.script=
batch.database.incrementer.class=org.springframework.jdbc.support.incrementer.DB2SequenceMaxValueIncrementer
batch.job.configuration.file.dir=target/config
batch.data.source.init=false
batch.job.service.reaper.interval=60000
batch.isolationlevel=ISOLATION_READ_COMMITTED
batch.jdbc.testWhileIdle=false
batch.jdbc.validationQuery=
batch.database.incrementer.parent=sequenceIncrementerParent
batch.table.prefix=OD1.ABC_BATCH_
We found the issue. We gave the table scripts to the DBA, but the DBA created the tables to match the standards they follow and defined
START_TIME TIMESTAMP NOT NULL WITH DEFAULT instead of START_TIME TIMESTAMP DEFAULT NULL. This was causing the issue; hope this helps someone.

Magento 1062 Duplicate entry '100000001' for key 'UNQ_SALES_FLAT_INVOICE_INCREMENT_ID'

I recently updated customer and order information on my dev site and then pushed it live (without updating the increment_last_id). Our checkout is no longer processing credit card orders, and when I check the exception log I get these 2 related errors:
exception 'PDOException' with message 'SQLSTATE[23000]: Integrity constraint violation: 1062 Duplicate entry '100000001' for key 'UNQ_SALES_FLAT_INVOICE_INCREMENT_ID'' in /home/tebostorefixture/public_html/lib/Zend/Db/Statement/Pdo.php:228
Next exception 'Zend_Db_Statement_Exception' with message 'SQLSTATE[23000]: Integrity constraint violation: 1062 Duplicate entry '100000001' for key 'UNQ_SALES_FLAT_INVOICE_INCREMENT_ID', query was: INSERT INTO `sales_flat_invoice` (`store_id`, `base_grand_total`, `shipping_tax_amount`, `tax_amount`, `base_tax_amount`, `store_to_order_rate`, `base_shipping_tax_amount`, `base_discount_amount`, `base_to_order_rate`, `grand_total`, `shipping_amount`, `subtotal_incl_tax`, `base_subtotal_incl_tax`, `store_to_base_rate`, `base_shipping_amount`, `total_qty`, `base_to_global_rate`, `subtotal`, `base_subtotal`, `discount_amount`, `billing_address_id`, `order_id`, `state`, `shipping_address_id`, `store_currency_code`, `transaction_id`, `order_currency_code`, `base_currency_code`, `global_currency_code`, `increment_id`, `created_at`, `updated_at`, `hidden_tax_amount`, `base_hidden_tax_amount`, `shipping_hidden_tax_amount`, `discount_description`) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, '2016-12-12 18:48:11', '2016-12-12 18:48:11', ?, ?, ?, ?)' in /home/mystore/public_html/lib/Zend/Db/Statement/Pdo.php:235
At first, the error had been attempting to duplicate entries around 100000027, and the number went up each time we tried an order. So I went into eav_entity_store and changed the increment_last_id to 1 higher than our last order (100000117).
I re-indexed and cleared the cache, but now I'm getting the same error except it's trying to duplicate 100000001. No matter how many times I try, it keeps trying to duplicate that first order number. I went back and checked, and the increment_last_id is going up correctly with each transaction that we try, but this error of duplicating 100000001 continues.
Problem solved.
For some reason I was missing 3 rows in eav_entity_store. One of those rows was for invoices and was necessary.
I wasn't able to export/import the rows successfully from our old database, so I copied down their numbers and manually recreated the rows with the appropriate numbers.
Use the following code to resolve your error:
TRUNCATE dataflow_batch_export ;
TRUNCATE dataflow_batch_import ;
TRUNCATE log_customer ;
TRUNCATE log_quote ;
TRUNCATE log_summary ;
TRUNCATE log_summary_type ;
TRUNCATE log_url ;
TRUNCATE log_url_info ;
TRUNCATE log_visitor ;
TRUNCATE log_visitor_info ;
TRUNCATE log_visitor_online ;
TRUNCATE report_event ;
Or you can try the following:
In app/code/core/Mage/Sales/Model/Resource/Quote.php
Search for the isOrderIncrementIdUsed method. In that method, replace
$bind = array(':increment_id' => (int)$orderIncrementId);
with
$bind = array(':increment_id' => $orderIncrementId);
Hopefully one of these two solutions will resolve your problem. Feel free to ask for any other help!

Weblogic 10.3.1 + Oracle DB 10g : Invalid username / password on LOB insert

I am working on a project using Hibernate 3.3.SP1 + Spring 1.2.6 on WebLogic 10.3.1 with Oracle DB 10g. Recently, we migrated Hibernate from v3.0.5 to 3.3.SP1. A strange error occurs (that did not happen before) when trying to insert a LOB (BLOB or CLOB). I get the following error:
189202 [[ACTIVE] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)']
WARN org.hibernate.util.JDBCExceptionReporter - SQL Error: 0, SQLState: null
189202 [[ACTIVE] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)']
ERROR org.hibernate.util.JDBCExceptionReporter - Pool connect failed :
weblogic.common.ResourceException:
my.wls.datasource(dtJndiName): 0:
Could not connect to 'oracle.jdbc.OracleDriver'.
The returned message is:
ORA-01017: invalid username/password; logon denied
It is likely that the login or password is not valid.
It is also possible that something else is invalid in
the configuration or that the database is not available.
After that, the datasource gets "corrupted" and after 10 consecutive failed connection attempts Oracle locks the account.
I should note that the application has absolutely no code for connecting to the database other than the pre-configured datasource in WebLogic. Since the application works just fine until a LOB is inserted into the DB, it is safe to assume that the datasource is properly configured.
A sample mapping (I cannot post the exact hbm.xml) is:
<?xml version="1.0"?>
<!DOCTYPE hibernate-mapping PUBLIC
    "-//Hibernate/Hibernate Mapping DTD 3.0//EN"
    "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd">
<hibernate-mapping package="my.model.persist">
    <class name="LobTable" table="LOB_TABLE">
        <id name="id" type="long" column="RS_ID" unsaved-value="null" length="10">
            <generator class="native"></generator>
        </id>
        <property name="blob1" type="org.springframework.orm.hibernate3.support.BlobByteArrayType" column="BLOB1"></property>
        <property name="blob2" type="org.springframework.orm.hibernate3.support.BlobByteArrayType" column="BLOB2"></property>
    </class>
</hibernate-mapping>
The code tries to persist some LOB values in three tables. The error appears when trying to save to the first; if I remove the code for saving to the first, the error appears on the second, and so on.
The only solution I have found so far is to set the Initial Capacity of the datasource connections to the maximum number of connections (15). In that case the system seems stable. However, this solution is not acceptable since we do not understand the nature of the problem.
I have tried this in four different environments (WebLogic + Oracle). The error does not always appear with the same frequency (on some systems it works for a while before failing to insert a LOB). Also, while debugging I noticed that if I increase the log output (I simply added more debug messages in log4j) the error stops appearing. This made me think it could be a synchronization problem between WLS and the DB.
Do you have any ideas? Please let me know if you need more clarifications.
The result after enabling Hibernate query output and changing the hbm.xml to have LOBs as the last fields declared is still the same error:
Hibernate: select hibernate_sequence.nextval from dual
Hibernate: select hibernate_sequence.nextval from dual
Hibernate: insert into LOB_TABLE_1 (field20, field21, field22, field23, field24, field25, field26, field27, field28, field29, field30, field31, LOB_FIELD_3, LOB_FIELD_4, ID) values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
Hibernate: insert into LOB_TABLE_2 (field1, field2, field3, field4, field5, field6, field7, field8, field9, field10, field11, field12, field13, field14, field15, field16, LOB_FIELD_1, LOB_FIELD_2, ID) values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
105039 [[ACTIVE] ExecuteThread: '11' for queue: 'weblogic.kernel.Default (self-tuning)'] WARN org.hibernate.util.JDBCExceptionReporter - SQL Error: 0, SQLState: null
105039 [[ACTIVE] ExecuteThread: '11' for queue: 'weblogic.kernel.Default (self-tuning)'] ERROR org.hibernate.util.JDBCExceptionReporter - Pool connect failed : weblogic.common.ResourceException:
my.wls.datasource(dtJndiName): 0:
Could not connect to 'oracle.jdbc.OracleDriver'.
The returned message is: ORA-01017: invalid username/password; logon denied
It is likely that the login or password is not valid.
It is also possible that something else is invalid in
the configuration or that the database is not available.
