Tests with DBUnit and Oracle 10g: autogenerated primary key identifiers not synchronized (JPA/Hibernate)

I'm testing a JPA/Hibernate application with DBUnit and Oracle 10g. When my test starts, I load 25 rows into the database, each with an identifier.
This is the XML holding my data, which I insert with DBUnit:
<entity entityId="1" ....
<entity entityId="2" ....
<entity entityId="3" ....
<entity entityId="4" ....
This is my entity class with JPA annotations (nothing Hibernate-specific):
@Entity
@Table(name = "entity")
public class Entity {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Integer entityId;
    ...
}
These are the connection parameters for the Oracle 10g database:
jdbc.driverClassName=oracle.jdbc.OracleDriver
jdbc.url=jdbc:oracle:thin:@192.168.208.131:1521:database
jdbc.username=hr
jdbc.password=root
hibernate.dialect=org.hibernate.dialect.Oracle10gDialect
dbunit.dataTypeFactoryName=org.dbunit.ext.oracle.Oracle10DataTypeFactory
After inserting this data into Oracle, I run a test that creates a new entity with Entity entity = new Entity() (I don't set the identifier manually because it's autogenerated):
@Test
public void testInsert() {
    Entity entity = new Entity();
    // other stuff
    entityTransaction.begin();
    database.insertEntity(entity); // DAO call
    entityTransaction.commit();
}
When the test commits the transaction I get the following error:
javax.persistence.RollbackException: Error while commiting the transaction
at org.hibernate.ejb.TransactionImpl.commit(TransactionImpl.java:71)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
...
Caused by: org.hibernate.exception.ConstraintViolationException: Could not execute JDBC batch update
at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:94)
at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66)
at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:275)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:266)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:167)
at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:321)
at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:50)
at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1027)
at org.hibernate.impl.SessionImpl.managedFlush(SessionImpl.java:365)
at org.hibernate.transaction.JDBCTransaction.commit(JDBCTransaction.java:137)
at org.hibernate.ejb.TransactionImpl.commit(TransactionImpl.java:54)
... 26 more
Caused by: java.sql.BatchUpdateException: ORA-00001: unique constraint (HR.SYS_C0058306) violated
at oracle.jdbc.driver.DatabaseError.throwBatchUpdateException(DatabaseError.java:345)
at oracle.jdbc.driver.OraclePreparedStatement.executeBatch(OraclePreparedStatement.java:10844)
at org.hibernate.jdbc.BatchingBatcher.doExecuteBatch(BatchingBatcher.java:70)
at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:268)
... 34 more
I have debugged it, and the problem is that the entityId of the new object is 1, while an entity with that id already exists. So which is responsible: DBUnit? Oracle? Why aren't the identifiers in the Oracle database synchronized with the identifier that JPA/Hibernate assigns to my entity in the test code?
Thanks for your time.

I think the AUTO generation type, in Oracle, is in fact a sequence generator. If you don't specify which sequence it must use, Hibernate is probably creating one for you and using it, and its default start value is 1.
Using AUTO is useful for quick prototyping. For a real application, use a concrete generation type (SEQUENCE, for Oracle), and create your sequences yourself with an appropriate start value to avoid duplicate keys.
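For example, a minimal sketch of an explicit sequence mapping (the generator and sequence names here are illustrative assumptions, not from the question):
@Entity
@Table(name = "entity")
public class Entity {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "entityGen")
    @SequenceGenerator(name = "entityGen", sequenceName = "ENTITY_SEQ", allocationSize = 1)
    private Integer entityId;
    ...
}
The sequence itself would be created with a start value above the highest id in the test data, e.g. CREATE SEQUENCE ENTITY_SEQ START WITH 26 for the 25 loaded rows.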

You could use ids < 0 in your test data sets (e.g. entityId="-1", entityId="-2", ...). Not only will your sequences never conflict with the test records, but you'll also easily distinguish records that were inserted by the tests.

The AUTO sequencing strategy usually defaults to the TABLE sequencing strategy, but in the case of Oracle it uses an Oracle sequence named hibernate_sequence (the default, unless you specify a sequence name in the strategy). The starting value of that sequence happens to be 1, which conflicts with the existing entity loaded by DbUnit, hence the ConstraintViolationException being thrown.
For the purpose of unit tests, you could perform either of the following:
1. Issue an ALTER SEQUENCE ... command to set the next value of the sequence after loading data into the database. This ensures that the JPA provider uses sequence values that do not conflict with the ids of the entities already populated from your XML file by DbUnit; for example:
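A minimal JDBC sketch of this option (the sequence name assumes Hibernate's default hibernate_sequence mentioned above; since Oracle cannot set a sequence's next value directly, the usual trick is to temporarily widen the increment and consume one value):
// uses java.sql.Connection, java.sql.Statement, java.sql.SQLException
private void advanceSequence(Connection conn, String sequenceName, int delta) throws SQLException {
    Statement st = conn.createStatement();
    try {
        st.execute("ALTER SEQUENCE " + sequenceName + " INCREMENT BY " + delta);
        st.execute("SELECT " + sequenceName + ".NEXTVAL FROM DUAL"); // consume one value
        st.execute("ALTER SEQUENCE " + sequenceName + " INCREMENT BY 1"); // restore normal increments
    } finally {
        st.close();
    }
}
// e.g. after DBUnit loads the 25 rows:
// advanceSequence(connection, "HIBERNATE_SEQUENCE", 25);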
2. Specify the name of the sequence in the XML file loaded as the IDataSet eventually used by DbUnit. The actual sequence values then have to be substituted into the IDataSet using a SELECT <sequence_name>.nextval FROM DUAL. The following section is reproduced (with light edits) from this site and is credited to its author:
I spent a couple of hours reading the dbUnit docs/FAQs/wikis and source code trying to figure out how to use Oracle sequences, but unless I overlooked something, I think this is not possible with the current implementation.
So I took some extra time to find a workaround to insert Oracle sequence generated IDs into dbUnit's datasets, much like what ReplacementDataSet does. I had already subclassed DatabaseTestCase in an abstract class (AbstractDatabaseTestCase) to be able to use a common connection when my test cases are inserted in a test suite. But I added the following code just now. It looks up the first row of each table in the dataset to determine which columns need sequence replacement. The replacement is done on the "${...}" expression value.
This code is "quick and dirty" and surely needs some cleanup and tuning.
Anyway, this is just a first try. I'll post further improvements as I go, if this can be of any help to anyone.
Stephane Vandenbussche
private void replaceSequence(IDataSet ds) throws Exception {
    ITableIterator iter = ds.iterator();
    // iterate all tables
    while (iter.next()) {
        ITable table = iter.getTable();
        if (table.getRowCount() == 0) {
            continue; // nothing to inspect or replace in an empty table
        }
        Column[] cols = table.getTableMetaData().getColumns();
        ArrayList<String[]> al = new ArrayList<String[]>(cols.length);
        // filter columns whose first-row value contains the expression "${...}"
        for (int i = 0; i < cols.length; i++) {
            Object o = table.getValue(0, cols[i].getColumnName());
            if (o != null) {
                String val = o.toString();
                if ((val.indexOf("${") == 0) && (val.indexOf("}") == val.length() - 1)) {
                    // associate column name and sequence name
                    al.add(new String[] { cols[i].getColumnName(), val.substring(2, val.length() - 1) });
                }
            }
        }
        // replace each "${xxxxx}" value by the next sequence value
        for (int i = 0; i < table.getRowCount(); i++) { // for each row
            for (String[] field : al) { // for each selected column
                Integer nextVal = getSequenceNextVal(field[1]);
                ((DefaultTable) table).setValue(i, field[0], nextVal);
            }
        }
    }
}
private Integer getSequenceNextVal(String sequenceName) throws Exception {
    Statement st = this.getConnection().getConnection().createStatement();
    try {
        ResultSet rs = st.executeQuery("SELECT " + sequenceName + ".nextval FROM dual");
        rs.next();
        return Integer.valueOf(rs.getInt(1));
    } finally {
        st.close(); // also closes the ResultSet
    }
}
My AbstractDatabaseTestCase class has a boolean flag
"useOracleSequence" which tells the getDataSet callback method to call
replaceSequence.
I can now write my XML dataset as follows:
<dataset>
<MYTABLE FOO="Hello" ID="${MYTABLE_SEQ}"/>
<MYTABLE FOO="World" ID="${MYTABLE_SEQ}"/>
<OTHERTABLE BAR="Hello" ID="${OTHERTABLE_SEQ}"/>
<OTHERTABLE BAR="World" ID="${OTHERTABLE_SEQ}"/>
</dataset>
where MYTABLE_SEQ is the name of Oracle sequence to be used.
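For context, the getDataSet hook described above might look roughly like this (a sketch; AbstractDatabaseTestCase and the useOracleSequence flag are the author's, while the body and the file name are assumed):
protected IDataSet getDataSet() throws Exception {
    IDataSet ds = new FlatXmlDataSet(new File("dataset.xml")); // file name is illustrative
    if (useOracleSequence) {
        replaceSequence(ds); // swap each "${SEQUENCE_NAME}" placeholder for <sequence>.nextval
    }
    return ds;
}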

Related

Criteria API with ORDER BY CASE expression throws SQLException: ORA-12704 "character set mismatch"

I am using the Criteria API to create my query. Because of a special sorting algorithm I use an "order by case" expression. My unit tests use an in-memory H2 DB and are working. In the development stage we use an Oracle DB, and there I get an "SQLException: ORA-12704" when executing the query.
Assume my root entity 'Foo' has a Set of 'Bar's, and Bar has an attribute 'myOrderByColumn':
public class Bar {
    ...
    @NotBlank
    @javax.validation.constraints.Size(max = 255)
    @Column(name = "MYORDERBYCOL")
    private java.lang.String myOrderByColumn;
    ...
}
Here is the code that produces the exception. It creates the Order object later used in CriteriaQuery.orderBy(..):
private Order buildOrderBy(final CriteriaBuilder cb,
                           final Root<Foo> rootEntity,
                           final List<String> somehowSpecialOrderedList) {
    final Expression<String> orderByColumn =
            rootEntity.join(Foo_.bars, JoinType.LEFT).get(Bar_.myOrderByColumn);
    CriteriaBuilder.SimpleCase<String, Integer> caseRoot = cb.selectCase(orderByColumn);
    IntStream.range(0, somehowSpecialOrderedList.size())
             .forEach(i -> caseRoot.when(somehowSpecialOrderedList.get(i), i));
    final Expression<Integer> selectCase = caseRoot.otherwise(Integer.MAX_VALUE);
    return cb.asc(selectCase);
}
I took a look into the Oracle DB. The type of the column 'myOrderByColumn' is NVARCHAR2(255).
I guess the problem here is that the "when" part in the SQL query must match the type of the 'MYORDERBYCOL' database column, which is NVARCHAR2, while in Java I use Strings. Probably Hibernate is not casting this correctly!?
I can reproduce the ORA-12704 error in the database with
SELECT FOO.id
FROM FOO
LEFT OUTER JOIN BAR ON FOO.id = BAR.fk_id
ORDER BY
CASE FOO.myorderbycol
WHEN '20' THEN 1
ELSE 2
END ASC;
This SQL works
SELECT FOO.id
FROM FOO
LEFT OUTER JOIN BAR ON FOO.id = BAR.fk_id
ORDER BY
CASE FOO.myorderbycol
WHEN cast('20' as NVARCHAR2(255)) THEN 1
ELSE 2
END ASC;
How do I have to adjust my order by case expression with the Criteria API so that the query works with any database? (It must later work with at least H2, Oracle, MS SQL and PostgreSQL.)
Looks like an Oracle issue to me. Which version are you using? You can try a different approach, a searched case expression, which might work:
CriteriaBuilder.Case<Integer> caseRoot = cb.selectCase();
IntStream.range(0, somehowSpecialOrderedList.size())
         .forEach(i -> caseRoot.when(cb.equal(orderByColumn, somehowSpecialOrderedList.get(i)), i));
final Expression<Integer> selectCase = caseRoot.otherwise(Integer.MAX_VALUE);
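For reference, the whole method from the question might then look like this (a sketch reusing the question's names):
private Order buildOrderBy(final CriteriaBuilder cb,
                           final Root<Foo> rootEntity,
                           final List<String> somehowSpecialOrderedList) {
    final Expression<String> orderByColumn =
            rootEntity.join(Foo_.bars, JoinType.LEFT).get(Bar_.myOrderByColumn);
    // searched case: each WHEN carries its own predicate, so the CASE operand
    // is never compared against literals of a different character set
    final CriteriaBuilder.Case<Integer> caseRoot = cb.selectCase();
    IntStream.range(0, somehowSpecialOrderedList.size())
             .forEach(i -> caseRoot.when(cb.equal(orderByColumn, somehowSpecialOrderedList.get(i)), i));
    return cb.asc(caseRoot.otherwise(Integer.MAX_VALUE));
}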

Spring Boot + Hibernate - insert queries slowing down

I am working on a Spring Boot application. I have 100,000 records that are inserted into the DB by a different process, one record at a time; I can't do batch inserts.
At the start the tasks perform well and don't take too much time, but as the application processes more records and the database grows, the insert time increases.
How can I speed up the process, or keep it from slowing down?
The quickest way to do inserts would be to use a prepared statement.
Inject the JdbcTemplate and use its batchUpdate method with a suitable batch size; it's lightning fast.
If you think you cannot use batch inserts, which is hard for me to understand, then set the batch size to 1. However, the optimal batch size is certainly larger than that and depends on the insert statement; you have to experiment a bit with it.
Here is an example for you with a class called LogEntry. Substitute your own class, table, columns and attributes and place it into your repository implementation.
Also make sure you set the application properties as mentioned here: https://stackoverflow.com/a/62414315/12918872
Regarding the id generator, either set a sequence id generator (also shown in that link) or, as in my case, just generate it on your own by asking for the maxId of your table at the beginning and then counting up.
@Autowired
private JdbcTemplate jdbcTemplate;

public void saveAllPreparedStatement2(List<LogEntry> logEntries) {
    int batchSize = 2000;
    int loops = logEntries.size() / batchSize;
    for (int j = 0; j <= loops; j++) {
        final int x = j;
        jdbcTemplate.batchUpdate("INSERT INTO public.logentries(\r\n"
                + " id, col1, col2, col3, col4, col5, col6)\r\n"
                + " VALUES (?, ?, ?, ?, ?, ?, ?);\r\n", new BatchPreparedStatementSetter() {
            public void setValues(PreparedStatement ps, int i) throws SQLException {
                int counter = x * batchSize + i;
                if (counter < logEntries.size()) {
                    LogEntry logEntry = logEntries.get(counter);
                    ps.setLong(1, (long) logEntry.getId());
                    ps.setString(2, (String) logEntry.getAttr1());
                    ps.setInt(3, (int) logEntry.getAttr2());
                    ps.setObject(4, logEntry.getAttr3(), Types.INTEGER);
                    ps.setLong(5, (long) logEntry.getAttr4());
                    ps.setString(6, (String) logEntry.getAttr5());
                    ps.setObject(7, logEntry.getAttr6(), Types.VARCHAR);
                }
            }

            public int getBatchSize() {
                // the last loop gets only the remaining rows
                if (x * batchSize == (logEntries.size() / batchSize) * batchSize) {
                    return logEntries.size() - (x * batchSize);
                }
                return batchSize;
            }
        });
    }
}
Some advice for you:
1. It is not normal that the insert time keeps increasing as more records are inserted. From my experience, this is most probably due to a logic bug in your program, such that you process more and more unnecessary data as you insert more records. So please revise your inserting logic first.
2. Hibernate cannot batch inserts if the entity uses IDENTITY to generate its ID. You have to change it to SEQUENCE, using the pooled or pooled-lo algorithm (see the sketch below).
3. Make sure you enable the JDBC batching feature in the Hibernate configuration.
4. If you are using PostgreSQL, you can add reWriteBatchedInserts=true to the JDBC connection string, which can provide a 2-3x performance gain.
5. Make sure each transaction inserts a batch of entities and then commits, rather than having each transaction insert only one entity.
For more details about points (2), (3) and (4), you can refer to my previous answers at this.
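As a sketch of points (2) and (3), the id mapping and batching configuration could look something like this (entity, generator, and sequence names are illustrative assumptions; @GenericGenerator and @Parameter are the Hibernate-specific annotations, and the properties in the comment assume Spring Boot):
@Entity
public class LogEntry {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "logEntryGen")
    @GenericGenerator(
            name = "logEntryGen",
            strategy = "org.hibernate.id.enhanced.SequenceStyleGenerator",
            parameters = {
                    @Parameter(name = "sequence_name", value = "log_entry_seq"),
                    @Parameter(name = "increment_size", value = "50"),
                    @Parameter(name = "optimizer", value = "pooled-lo")
            })
    private Long id;
    // ...
}

// application.properties, enabling JDBC batching (point 3):
// spring.jpa.properties.hibernate.jdbc.batch_size=50
// spring.jpa.properties.hibernate.order_inserts=true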

EF + ODP.NET + CLOB = Value Cannot be Null - Parameter name: byteArray?

Our project recently updated to the newer Oracle.ManagedDataAccess DLLs (v4.121.2.0) and this error has been cropping up intermittently. We've fixed it a few times without really knowing what we did to fix it.
I'm fairly certain it's caused by CLOB fields being mapped to strings in Entity Framework, and then being selected in LINQ statements that pull entire entities instead of just a limited set of properties.
Error:
Value cannot be null.
Parameter name: byteArray
Stack Trace:
at System.BitConverter.ToString(Byte[] value, Int32 startIndex, Int32 length)
at OracleInternal.TTC.TTCLob.GetLobIdString(Byte[] lobLocator)
at OracleInternal.ServiceObjects.OracleDataReaderImpl.CollectTempLOBsToBeFreed(Int32 rowNumber)
at Oracle.ManagedDataAccess.Client.OracleDataReader.ProcessAnyTempLOBs(Int32 rowNumber)
at Oracle.ManagedDataAccess.Client.OracleDataReader.Read()
at System.Data.Entity.Core.Common.Internal.Materialization.Shaper`1.StoreRead()
Suspect Entity Properties:
'Mapped to Oracle CLOB Column'
<Column("LARGEFIELD")>
Public Property LargeField As String
But I'm confident this is the proper way to map the fields per Oracle's matrix:
ODP.NET Types Overview
There is nothing obviously wrong with the generated SQL statement either:
SELECT
...
"Extent1"."LARGEFIELD" AS "LARGEFIELD",
...
FROM ... "Extent1"
WHERE ...
I have also tried this Fluent code per Ozkan's suggestion, but it does not seem to affect my case.
modelBuilder.Entity(Of [CLASS])().Property(
Function(x) x.LargeField
).IsOptional()
Troubleshooting Update:
After extensive testing, we are quite certain this is actually a bug, not a configuration problem. It appears to be the contents of the CLOB that cause the problem under a very specific set of circumstances. I've cross-posted this on the Oracle Forums, hoping for more information.
After installing the Oracle 12 client we ran into the same problem.
In machine.config (C:\Windows\Microsoft.NET\Framework\v4.0.30319\Config) I removed all entries with Oracle.ManagedDataAccess.
In the directory C:\Windows\Microsoft.NET\assembly\GAC_MSIL I removed both Oracle.ManagedDataAccess and Policy.4.121.Oracle.ManagedDataAccess.
Then my C# program started working as usual, using the Oracle.ManagedDataAccess DLL in its own directory.

We met this problem in our project an hour ago and found a solution. The error is generated because of null values in a CLOB column. We have a CLOB column that is nullable in the database, but in the Entity Framework model it was a non-nullable String. We changed the column's Nullable property to True in the EF model and that fixed the problem.
We have this problem as well on some computers, and we are running the latest Oracle.ManagedDataAccess.dll (4.121.2.20150926 ODAC RELEASE 4).
We found a solution to our problem, and I just wanted to share.
This was our problem, which occurred on some computers.
Using connection As New OracleConnection(yourConnectionString)
Dim command As New OracleCommand(yourQuery, connection)
connection.Open()
Using reader As OracleDataReader = command.ExecuteReader()
Dim clobField As String = CStr(reader.Item("CLOB_FIELD"))
End Using
connection.Close()
End Using
And here's the solution that made it work on all computers.
Using connection As New OracleConnection(yourConnectionString)
Dim command As New OracleCommand(yourQuery, connection)
connection.Open()
Using reader As OracleDataReader = command.ExecuteReader()
Dim clobField As String = reader.GetOracleClob(0).Value
End Using
connection.Close()
End Using
I spent a lot of time trying to decipher this and found bits of this and that here and there on the internet, but nowhere had everything in one place, so I'd like to post what I've learned and how I resolved it, which is much like Ragowit's answer, but I've got the C# code for it.
Background
The error: So I had this error on my while (dr.Read()) line:
Value cannot be null. \r\nParameter name: byteArray
I ran into very little on the internet about this, except that it was an error with the CLOB field when it was null, and was supposedly fixed in the latest ODAC release, according to this: https://community.oracle.com/thread/3944924
My take on this -- NOT TRUE! It hasn't been updated since October 5, 2015 (http://www.oracle.com/technetwork/topics/dotnet/utilsoft-086879.html), and the 12c package I'm using was downloaded in April 2016.
Full stack trace by someone else with the error that pretty much mirrored mine: http://pastebin.com/24AfFDnq
Value cannot be null.
Parameter name: byteArray
at System.BitConverter.ToString(Byte[] value, Int32 startIndex, Int32 length)
at OracleInternal.TTC.TTCLob.GetLobIdString(Byte[] lobLocator)
at OracleInternal.ServiceObjects.OracleDataReaderImpl.CollectTempLOBsToBeFreed(Int32 rowNumber)
at Oracle.ManagedDataAccess.Client.OracleDataReader.ProcessAnyTempLOBs(Int32 rowNumber)
at Oracle.ManagedDataAccess.Client.OracleDataReader.Read()
at System.Data.Entity.Core.Common.Internal.Materialization.Shaper`1.StoreRead()
'Mapped to Oracle CLOB Column'
<Column("LARGEFIELD")>
Public Property LargeField As String
'Mapped to Oracle BLOB Column'
<Column("IMAGE")>
Public Property FileContents As Byte()
How I encountered it: It was while reading an 11-column table of about 3000 rows. One of the columns was actually an NCLOB (so apparently this is just as susceptible as CLOB), which allowed nulls in the database, and some of its values were empty - it was an optional "Notes" field, after all. It's funny that I didn't get this error on the first or even second row that had an empty Notes field. It didn't error until row 768 finished and it was about to start row 769, according to an int counter variable that started at 0 that I set up and saw after checking how many rows my DataTable had thus far. I found I got the error if I used:
DataSet ds = new DataSet();
OracleDataAdapter adapter = new OracleDataAdapter(cmd);
adapter.Fill(ds);
as well if I used:
DataTable dt = new DataTable();
OracleDataReader dr = cmd.ExecuteReader();
dt.Load(dr);
or if I used:
OracleDataReader dr = cmd.ExecuteReader();
if (dr.HasRows)
{
while (dr.Read())
{
....
}
}
where cmd is the OracleCommand, so it made no difference.
Resolution
The following is basically the code I used to parse through an OracleDataReader's values in order to assign them to a DataTable. It's actually not as refined as it could be - I am just returning dr[i] to the DataRow in all cases except when the value is null, or when it is the eleventh column (index = 10, because it starts at 0) and a particular query has been executed, so that I know where my NCLOB column is.
public static DataTable GetDataTableManually(string query)
{
OracleConnection conn = null;
try
{
string connString = ConfigurationManager.ConnectionStrings["MyConn"].ConnectionString;
conn = new OracleConnection(connString);
OracleCommand cmd = new OracleCommand(query, conn);
conn.Open();
OracleDataReader dr = cmd.ExecuteReader(CommandBehavior.CloseConnection);
DataTable dtSchema = dr.GetSchemaTable();
DataTable dt = new DataTable();
DataSet ds = new DataSet(); // declared here because the table is added to it below
List<DataColumn> listCols = new List<DataColumn>();
List<string> listTypes = new List<string>(); // holds type names so nulls can be handled by type
if (dtSchema != null)
{
foreach (DataRow drow in dtSchema.Rows)
{
string columnName = System.Convert.ToString(drow["ColumnName"]);
DataColumn column = new DataColumn(columnName, (Type)(drow["DataType"]));
listCols.Add(column);
listTypes.Add(drow["DataType"].ToString()); // necessary in order to record nulls
dt.Columns.Add(column);
}
}
// Read rows from DataReader and populate the DataTable
if (dr.HasRows)
{
int rowCount = 0;
while (dr.Read())
{
string fieldType = String.Empty;
DataRow dataRow = dt.NewRow();
for (int i = 0; i < dr.FieldCount; i++)
{
if (!dr.IsDBNull(i))
{
fieldType = dr.GetFieldType(i).ToString(); // example only, this is the same as listTypes[i], and neither help us distinguish NCLOB from NVARCHAR2 - both will say System.String
// This is the magic
if (query == "SELECT * FROM Orders" && i == 10)
dataRow[((DataColumn)listCols[i])] = dr.GetOracleClob(i); // <-- our new check!!!!
// Found if you have null Decimal fields, this is
// also needed, and GetOracleDecimal and GetDecimal
// will not help you - only GetFloat does
else if (listTypes[i] == "System.Decimal")
dataRow[((DataColumn)listCols[i])] = dr.GetFloat(i);
else
dataRow[((DataColumn)listCols[i])] = dr[i];
}
else // value was null; we can't always assign dr[i] if DBNull, such as when it is a number or decimal field
{
byte[] nullArray = new byte[0];
switch (listTypes[i])
{
case "System.String": // includes NVARCHAR2, CLOB, NCLOB, etc.
dataRow[((DataColumn)listCols[i])] = String.Empty;
break;
case "System.Decimal":
case "System.Int16": // Boolean
case "System.Int32": // Number
dataRow[((DataColumn)listCols[i])] = 0;
break;
case "System.DateTime":
dataRow[((DataColumn)listCols[i])] = DBNull.Value;
break;
case "System.Byte[]": // Blob
dataRow[((DataColumn)listCols[i])] = nullArray;
break;
default:
dataRow[((DataColumn)listCols[i])] = String.Empty;
break;
}
}
}
dt.Rows.Add(dataRow);
}
ds.Tables.Add(dt);
}
}
catch (Exception ex)
{
// handle error
}
finally
{
conn.Close();
}
// After everything is closed
if (ds.Tables.Count > 0)
return ds.Tables[0]; // there should only be one table if we got results
else
return null;
}
In the same way that I have it assigning specific types of null based on the column type found in the schema table loop, you could add the conditions to the "not null" side of the if...then and do various GetOracle... statements there. I found it was only necessary for this NCLOB instance, though.
To give credit where credit is due, the original codebase is based on the answer given by sarathkumar at Populate data table from data reader.
For me it was simple! I had this error with ODAC v4.121.1.0. I just updated Oracle.ManagedDataAccess to 4.121.2.0 with NuGet and now it is working.
Have you tried uninstalling and reinstalling Oracle.ManagedDataAccess with NuGet?

Upgrading Oracle.ManagedDataAccess.dll to version 4.122.1.0 solved it. If you are using VS 2017, you can update it via NuGet.

JpaRepository: using repositories in #PostPersist

I have two entities
class A {
...
Integer totalSum;
Set<B> b;
}
class B {
...
Integer value;
A container;
}
I want a.totalSum to be the sum of all a.b.value.
Maybe the best solution is a view in the DB, but I want to listen for changes in BRepository and update the A records.
I do:
@PostPersist
@PostUpdate
@PostRemove
private void recalculateSums(B b) {
    AutowireHelper.autowire(this, this.aRepository);
    AutowireHelper.autowire(this, this.bRepository);
    A a = b.getContainer();
    a.setTotalSum(bRepository.sumByA(a));
    aRepository.save(a);
}
And in BRepository:
@Query("select sum(b.value) from B b where b.container = :a")
Long sumByA(@Param("a") A a);
And I get this error: org.springframework.dao.DataIntegrityViolationException: could not execute statement; SQL [n/a]; constraint ["PRIMARY KEY ON PUBLIC.B(ID)"; SQL statement:
insert into b (VALUE, CONTAINER_ID, ID) values (?, ?, ?)
What am I doing wrong?
If I do
a.setTotalSum(a.getTotalSum() + 1);
aRepository.save(a);
everything works, but if I do
a.setTotalSum(a.getTotalSum() + 1);
aRepository.saveAndFlush(a);
I get the same error.
I can see why you might want to do this, but I think it creates a whole load of potential issues around data integrity.
If you want the sum of B available on an A without having to load and iterate all B, there are three other options you could implement, all of which would be more robust than your proposal.
1. As you noted, create a view. You can then map an entity, say SummaryData, to this view and map a one-to-one from A to SummaryData so that you can do a.getSummaryData().getTotalSum(); (see the sketch at the end of this answer).
2. Alternatively, you could use the Hibernate-specific @Formula annotation, which issues an inline select when the entity is loaded:
@Formula("(select sum(b.value) from b where b.container_id = id)")
private int totalSum;
3. Finally, and depending on the capabilities of your database, you could create a virtual column and map a property to it as you would for any other field.
Options 1 and 3 obviously require schema changes, but your app remains JPA compliant; option 2 does not require any schema change but breaks strict JPA compliance, if that matters to you.
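A rough sketch of option 1 (view, entity, and column names are assumptions; the container_id column is taken from the insert statement in the question):
// backed by a view such as:
//   CREATE VIEW a_summary AS
//     SELECT a.id AS a_id, COALESCE(SUM(b.value), 0) AS total_sum
//     FROM a LEFT JOIN b ON b.container_id = a.id
//     GROUP BY a.id;
@Entity
@Table(name = "a_summary")
public class SummaryData {

    @Id
    @Column(name = "a_id")
    private Integer id;

    @Column(name = "total_sum", insertable = false, updatable = false)
    private Integer totalSum;

    public Integer getTotalSum() {
        return totalSum;
    }
}

// and in A, a read-only one-to-one sharing A's primary key:
@OneToOne
@JoinColumn(name = "id", referencedColumnName = "a_id", insertable = false, updatable = false)
private SummaryData summaryData;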

Failed to batch insert in SubSonic 3 with error "Must declare the scalar variable..."

I have run into a problem inserting multiple rows in a batch with SubSonic 3. My development environment includes:
1. Visual Studio 2010, but use .NET 3.5
2. Active Record Mode in SubSonic 3.0.0.4
3. SQL Server 2005 express
4. Northwind sample database
I am using Active Record mode to insert multiple "Product" rows into the "Products" table. If I insert the rows one by one, either calling "aProduct.Add()" or calling "Insert.Execute()" multiple times (just like the code below), it works fine.
private static Product[] CreateProducts(int count)
{
Product[] products = new Product[count];
for (int index = 0; index < products.Length; ++index)
{
products[index] = new Product
{
ProductName = string.Format("cheka-test-{0}", index.ToString()),
Discontinued = (index % 2 == 0),
};
}
return products;
}
private static void SucceedByMultiExecuteInsert()
{
Product[] products = CreateProducts(2);
// -------------------------------- prepare batch
NorthwindDB db = new NorthwindDB();
var inserts = from prod in products
select db.Insert.Into<Product>(x => x.ProductName, x => x.Discontinued).Values(prod.ProductName, prod.Discontinued);
// -------------------------------- batch insert
var selectAll = Product.All();
Console.WriteLine("--- before total rows = {0}", selectAll.Count().ToString());
foreach (Insert insert in inserts)
insert.Execute();
Console.WriteLine("+++ after inserting {0} rows, now total rows = {1}",
products.Length.ToString(), selectAll.Count().ToString());
}
But if I use "BatchQuery" like the code below,
private static void FailByBatchInsert()
{
Product[] products = CreateProducts(2);
// -------------------------------- prepare batch
NorthwindDB db = new NorthwindDB();
BatchQuery batchquery = new BatchQuery(db.Provider, db.QueryProvider);
var inserts = from prod in products
select db.Insert.Into<Product>(x => x.ProductName, x => x.Discontinued).Values(prod.ProductName, prod.Discontinued);
foreach (Insert insert in inserts)
batchquery.Queue(insert);
// -------------------------------- batch insert
var selectAll = Product.All();
Console.WriteLine("--- before total rows = {0}", selectAll.Count().ToString());
batchquery.Execute();
Console.WriteLine("+++ after inserting {0} rows, now total rows = {1}",
products.Length.ToString(), selectAll.Count().ToString());
}
then it fails with the exception:
Unhandled Exception: System.Data.SqlClient.SqlException: Must declare the scalar variable "@ins_ProductName".
Must declare the scalar variable "@ins_ProductName".
Please give me some help to solve this problem. Many thanks.
I ran into this problem as well. If you look at the query it's attempting to run, you'll see it doing something like this (this isn't actual code but you'll get the point):
exec_sql N'insert into MyTable (SomeField) Values (@ins_SomeField)',N'@0 varchar(32)','@0=SomeValue'
For some reason it defines the parameters in the query as "@ins_" + FieldName but then passes the parameters as ordinals. I have yet to determine the pattern for why/when it does this, but I've lost enough time during this dev cycle futzing with SubSonic to try and diagnose the problem properly.
The work-around I implemented will involve you downloading the 3.0.0.4 source from github and making a change on line 179 of Insert.cs.
Where it reads
ParameterName = _provider.ParameterPrefix + "ins_" + columnName.ToAlphaNumericOnly(),
Changing it to
ParameterName = _provider.ParameterPrefix + Inserts.Count.ToString(),
seemed to do the trick for me. I make no warranties about this solution for you, expressed or implied. It did work for me but your mileage may vary.
I should also note that there's similar logic around the "update" statements as well in Update.cs on lines 181 and 194 but I haven't had these give me problems... yet.
Honestly, I don't think SubSonic is ready for primetime and that's a shame because I really like how Rob set it up. That said, it's in my product for better or worse now so you make the best with what you got.
