I'm quite new to the Spring Data JDBC library, but so far I'm really impressed.
Unfortunately, the JDBC driver for my database (SAP Hana) doesn't support retrieving generated keys after an INSERT (its implementation of PreparedStatement.getGeneratedKeys() throws UnsupportedOperationException).
Therefore I decided that I won't use generated keys and will define the PK values before saving (and implement Persistable.isNew()). However, even when the PK values are defined before saving, whenever an INSERT operation is triggered it fails with an error saying the generated keys can't be retrieved.
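For illustration, a minimal sketch of that approach (the entity and field names are placeholders, not my real model):
import org.springframework.data.annotation.Id;
import org.springframework.data.annotation.Transient;
import org.springframework.data.domain.Persistable;

public class Account implements Persistable<Long> {

    @Id
    private Long id;              // assigned by the application before save()

    @Transient
    private boolean isNew = true; // switched off once the entity has been persisted or loaded

    @Override
    public Long getId() {
        return id;
    }

    @Override
    public boolean isNew() {
        return isNew;
    }
}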
After investigating the source code of the affected method (DefaultDataAccessStrategy.insert), I noticed that the JDBC operations update method is always invoked with a KeyHolder parameter.
I (naively) tweaked the code with the following changes and it started to work:
if the PK is already defined, the JDBC operations update method is invoked without the KeyHolder
that PK is then immediately returned from the method
The following snippet from the tweaked insert method illustrates the changes.
Object idValue = getIdValueOrNull(instance, persistentEntity);
if (idValue != null) {
    RelationalPersistentProperty idProperty = persistentEntity.getRequiredIdProperty();
    addConvertedPropertyValue(parameterSource, idProperty, idValue, idProperty.getColumnName());

    /* --- tweak start --- */
    String insertSql = sqlGenerator.getInsert(new HashSet<>(parameterSource.getIdentifiers()));
    operations.update(insertSql, parameterSource);
    return idValue;
    /* --- tweak end --- */
}
So the question is whether a similar change could be implemented in Spring Data JDBC to also support a use case like mine.
The question can be considered closed, as the related feature request has been registered in the issue tracker.
I am trying to reuse the code from the following documentation: https://geode.apache.org/docs/guide/11/developing/region_options/dynamic_region_creation.html
The first problem I ran into is that
Cache cache = CacheFactory.getAnyInstance();
Region<String,RegionAttributes<?,?>> regionAttributesMetadataRegion = createRegionAttributesMetadataRegion(cache);
should not be executed in the constructor. If it is, the code is executed in the client instance and fails with a "not a server" error. When this was fixed, I received:
[fatal 2021/02/15 16:38:24.915 EET <ServerConnection on port 40527 Thread 1> tid=81] Serialization filter is rejecting class org.restcomm.cache.geode.CreateRegionFunction
java.lang.Exception:
at org.apache.geode.internal.ObjectInputStreamFilterWrapper.lambda$createSerializationFilter$0(ObjectInputStreamFilterWrapper.java:233)
The problem is that the code is executed on the dunit MemberVM, while the required class is actually part of the package under which the test is being executed.
So I guess I should somehow register the classes (or maybe the jar) separately with the dunit MemberVM. How can that be done?
Another question: currently the code checks whether the region exists and, if not, calls the method. In both cases it also tries to create the client region. Is this a correct approach?
Region<?,?> cache = instance.getRegion(name);
if (cache == null) {
    Execution execution = FunctionService.onServers(instance);
    ArrayList argList = new ArrayList();
    argList.add(name);
    Function function = new CreateRegionFunction();
    execution.setArguments(argList).execute(function).getResult();
}
ClientRegionFactory<Object, Object> cf = this.instance
        .createClientRegionFactory(ClientRegionShortcut.CACHING_PROXY)
        .addCacheListener(new ExtendedCacheListener());
this.cache = cf.create(name);
BR
Yulian Oifa
The first problem I ran into is that
Cache cache = CacheFactory.getAnyInstance();
should not be executed in the constructor. If it is, the code is executed in the client instance and fails with a "not a server" error. When this was fixed, I received:
Once the Function is registered on server side, you can execute it by ID instead of sending the object across the wire (so you won't need to instantiate the function on the client), in which case you'll also avoid the Serialization filter error. As an example, FunctionService.onServers(instance).execute(CreateRegionFunction.ID).
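Something along these lines (just a sketch: it assumes CreateRegionFunction is already registered on the servers and exposes its registration ID as the constant CreateRegionFunction.ID, as in the docs example; the class and variable names around it are hypothetical):
import java.util.ArrayList;
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.execute.FunctionService;
import org.restcomm.cache.geode.CreateRegionFunction; // only referenced for its ID, never serialized

public class DynamicRegionClient {
    public static void createServerRegion(ClientCache clientCache, String regionName) {
        ArrayList<String> args = new ArrayList<>();
        args.add(regionName);
        // Executing by ID means no Function instance travels over the wire,
        // so the server-side serialization filter never sees the class.
        FunctionService.onServers(clientCache)
                .setArguments(args)
                .execute(CreateRegionFunction.ID)
                .getResult();
    }
}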
The problem is that the code is executed on the dunit MemberVM, while the required class is actually part of the package under which the test is being executed. So I guess I should somehow register the classes (or maybe the jar) separately with the dunit MemberVM. How can that be done?
Indeed, for security reasons Geode doesn't allow serializing / deserializing arbitrary classes. Internal Geode distributed tests use the MemberVM and set a special property (serializable-object-filter) to circumvent this problem. Here's an example of how you can achieve that within your own tests.
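A rough sketch of the idea, not the exact dunit rule setup; the package pattern below is an assumption based on the class name from your log, so adjust it to whatever actually needs to be deserialized:
import java.util.Properties;
import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;

public class FilteredServerStartup {
    public static Cache startCache() {
        Properties props = new Properties();
        // Tell Geode to validate incoming objects and accept classes from the test's package.
        props.setProperty("validate-serializable-objects", "true");
        props.setProperty("serializable-object-filter", "org.restcomm.cache.geode.**");
        // In a dunit test, pass the same properties to the rule that starts the server MemberVM
        // instead of creating the cache directly like this.
        return new CacheFactory(props).create();
    }
}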
Another question: currently the code checks whether the region exists and, if not, calls the method. In both cases it also tries to create the client region. Is this a correct approach?
If the dynamically created region is used by the client application then yes, you should create it; otherwise you won't be able to use it.
As a side note, there's a lot of internal logic implemented by Geode when creating a Region, so I wouldn't advise creating regions dynamically on your own. Instead, it would be advisable to use the gfsh create region command directly, or look at how it works internally (see here) and try to re-use that.
I have a SecTrustRef object from the system that I'd like to evaluate myself. Just calling SecTrustEvaluateAsync will be sufficient for this job. The problem is, I must evaluate it in a different process as only this other process has access to the keychains where the CA certificates are stored that may cause evaluation to succeed.
The two processes have an IPC link that allows me to exchange arbitrary byte data between them but I don't see any way to easily serialize a SecTrustRef into byte data and deserialize that data back to an object at the other process. There doesn't seem to be a persistent storage mechanism for SecTrustRef.
So am I overlooking something important here, or do I really have to get all the certs (SecTrustGetCertificateAtIndex) and all the policies (SecTrustCopyPolicies) and serialize these myself?
And if so, how would I serialize a policy?
For the certificate (SecCertificateRef) it's rather easy: I just call SecCertificateCopyData and later on SecCertificateCreateWithData.
But for policies I can only call SecPolicyCopyProperties on one side and later on SecPolicyCreateWithProperties; however, the latter requires a second parameter, a policyIdentifier, and I see no way to get that value from an existing policy. What am I missing?
Reading through the source of the Security framework, I finally figured out how to copy a SecPolicyRef:
SecPolicyCreateWithProperties wants what it calls a "policyIdentifier". It's a constant like kSecPolicyAppleIPsec.
This does not get stored directly by the function; it compares the value and calls dedicated internal "initializers" (like SecPolicyCreateIPsec).
These in turn call SecPolicyCreate (which is private). They end up passing the same identifier value that you passed to SecPolicyCreateWithProperties.
And this value then gets stored as-is in the _oid field!
The identifier is actually the OID. You can get it either via SecPolicyCopyProperties(policy) (stored in the dictionary under the key kSecPolicyOid) or via SecPolicyGetOID (but that returns it as an inconvenient CSSM_OID). Some of those specialized initializers also use values from the properties dictionary passed to SecPolicyCreateWithProperties; those should already be present in the copied properties dictionary.
So this gives us:
Serialization:
CFDictionaryRef createSerializedPolicy(SecPolicyRef policy) {
    // Already contains all the information needed.
    return SecPolicyCopyProperties(policy);
}
Deserialization:
SecPolicyRef createDeserializedPolicy(CFDictionaryRef serialized) {
    CFStringRef oid = (CFStringRef)CFDictionaryGetValue(serialized, kSecPolicyOid);
    if (oid == NULL || CFGetTypeID(oid) != CFStringGetTypeID()) {
        return NULL;
    }
    return SecPolicyCreateWithProperties(oid, serialized);
}
To reproduce the original SecTrustRef as closely as possible, the anchors need to be copied as well. There is an internal variable _anchorsOnly which is set to true once you set anchors. Unfortunately, there is no way to query this value and I've seen it being false in trusts passed by NSURLSession, for example. No idea yet on how to get this value in a public way.
Another problematic bit is the exceptions: if _exceptions is NULL but you query them via SecTrustCopyExceptions(trust), you do get data! And if you assign that to the deserialized trust via SecTrustSetExceptions(trust, exceptions), you suddenly end up with exceptions that were not there before and can change the evaluation result! (I've seen those suddenly appearing exceptions lead to an evaluation result of "proceed" instead of "recoverable trust failure".)
I have a simple web application based on the Play Framework 2.3 (scala), which currently uses sqlite3 for the database. I'm sometimes, but not always, getting exceptions caused by inserting rows into the DB:
java.sql.SQLException: statement is not executing
at org.sqlite.Stmt.checkOpen(Stmt.java:49) ~[sqlite-jdbc-3.7.2.jar:na]
at org.sqlite.PrepStmt.executeQuery(PrepStmt.java:70) ~[sqlite-jdbc-3.7.2.jar:na]
...
The problem occurs in a few different contexts, all originating from SQL(statement).executeInsert()
For example:
val statement = "insert into session_record (condition_id, participant_id, client_timestamp, server_timestamp) values (%d,'%s',%d,%d)"
  .format(conditionId, participantId, clientTime, serverTime)
DB.withConnection( implicit c => {
  val ps = SQL(statement)
  val pKey = ps.executeInsert()
  // ...
})
When an exception is not thrown, pKey contains an Option with the table's auto-incremented primary key. When an exception is thrown, the database's state indicates that the basic statement was executed, and if I take the logged SQL statement and try it by hand, it also executes without a problem.
Insert statements that aren't executed with executeInsert also work. At this point, I could just use .execute() and get the max primary key separately, but I'm concerned there might be some deeper problem I'm missing.
Some configuration details:
In application.conf:
db.default.driver=org.sqlite.JDBC
db.default.url="jdbc:sqlite:database/mySqliteDb.db"
My sqlite version is 3.7.13 2012-07-17
The JDBC driver I'm using is "org.xerial" % "sqlite-jdbc" % "3.7.2" (via build.sbt).
I ran into this same issue today with the latest driver, and using execute() was the closest thing to a solution I found.
For the sake of completeness, here is the comment in Stmt.java for getGeneratedKeys():
/**
* As SQLite's last_insert_rowid() function is DB-specific not statement
* specific, this function introduces a race condition if the same
* connection is used by two threads and both insert.
* @see java.sql.Statement#getGeneratedKeys()
*/
This most certainly confirms that this is a hard-to-fix bug in the driver, due to SQLite's design, which makes executeInsert() not thread-safe.
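For reference, a bare-bones plain-JDBC sketch of that workaround (run the INSERT, then read last_insert_rowid() on the same connection; the column list here is trimmed down and hypothetical). The race described in the comment above still applies if the connection is shared between threads:
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class SqliteInsertWorkaround {
    // Inserts a row and reads its rowid back without calling getGeneratedKeys().
    public static long insertAndGetId(Connection conn) throws Exception {
        try (PreparedStatement ps = conn.prepareStatement(
                "insert into session_record (participant_id) values (?)")) {
            ps.setString(1, "some-participant");
            ps.executeUpdate();
        }
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("select last_insert_rowid()")) {
            rs.next();
            return rs.getLong(1);
        }
    }
}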
First, it would be better not to use format for passing parameters to the statement, but to use either SQL("INSERT ... {aParam}").on('aParam -> value) or SQL"INSERT ... $value" (with Anorm interpolation). Then, if the exception is still there, I would suggest testing the connection/statement in a plain-vanilla standalone Java test app.
I am running the BIRT 4.3.2 report engine on an IBM WAS 8.5.5 server.
The reports load fine when there are actually results for the given criteria. I receive the following exception when there are no results:
org.eclipse.birt.data.engine.odaconsumer.ResultSet fetch
SEVERE: Cannot fetch the next data row.
Throwable occurred: org.eclipse.birt.report.data.oda.jdbc.JDBCException: Cannot move down to next row in the result set.
SQL error #1:Invalid operation: result set closed
com.ibm.db2.jcc.c.SqlException: Invalid operation: result set closed
at org.eclipse.birt.report.data.oda.jdbc.ResultSet.next(ResultSet.java:198)
I am aware of this topic:
developer.actuate.com/community/forum/index.php?/topic/25148-exception-is-thrown-when-there-is-no-data-retreived-from-query/
Since my report data source simply defines the JDBC class com.ibm.db2.jcc.DB2Driver and the JNDI URL, it uses the WAS data source. I did try adding the allowNextOnExhaustedResultSet custom property to the data source as an integer value of 1, but this did not fix anything for me. It was stated to be only a work-around anyway.
I asked on the BIRT forum whether this would be fixed, but got no response. As suggested by IBM: modify the application code to avoid calling ResultSet.next if there are no records or no more records. This is in the BIRT data engine code, in the ResultSet class.
Are there any more work-arounds in the meantime?
It's probably also worth mentioning that this was working without issue on BIRT 4.2.0.
Here is the fragment of code involved, in org.eclipse.birt.report.data.oda.jdbc.ResultSet.next:
try
{
    /* redirect the call to JDBC ResultSet.next() */
    if ( currentRow < maxRows && rs.next( ) )
    {
        currentRow++;
        return true;
    }
    return false;
}
catch ( SQLException e )
{
    throw new JDBCException( ResourceConstants.RESULTSET_CURSOR_DOWN_ERROR, e );
}
As suggested in the link you provided, a "quick" workaround would be to check the state of the result set in this "if" statement:
if ( currentRow < maxRows && !rs.isClosed() && rs.next( ) )
{
    currentRow++;
    return true;
}
return false;
Of course this requires downloading the source code of BIRT into Eclipse; there is an automated process for this. I know this is probably not the kind of solution you expect, but it might be the only one. You don't have to compile the whole BIRT project: just export this class as a .jar and replace the old class in Eclipse and in your BIRT runtime environment.
It might also be valuable to submit this as a patch in the BIRT Bugzilla.
The answer is: this is probably a bug in the BIRT code that can hopefully be fixed by upgrading to a future version.
The work-around I suggest is based on my discovery of the root cause of the exception.
In 4.2.0, I used an aggregation expression inside a grid element. After updating to 4.3.2, this is where, and only where, the exception occurs. To fix it, I created the same layout and resulting report using a table element with the same aggregation expression, and I no longer receive the exception when no results are returned.
Consider the following stored procedure:
CREATE OR REPLACE FUNCTION get_supported_locales()
RETURNS TABLE(
code character varying(10)
) AS
...
And the following method that calls it:
def self.supported_locales
  query = "SELECT code FROM get_supported_locales();"
  res = ActiveRecord::Base.connection.execute(query)
  res.values.flatten
end
I'm trying to write a test for this method, but I'm running into some problems while mocking:
it "should list an intersection of locales available on the app and on last fm" do
res = mock(PG::Result)
res.should_receive(:values).and_return(['en', 'pt'])
ActiveRecord::Base.connection.stub(:execute).and_return(res)
Language.supported_locales.should =~ ['pt', 'en']
end
This test succeeds, but any test that runs after this one gives the following message:
WARNING: there is already a transaction in progress
Why does this happen? Am I doing the mocking incorrectly?
The database is Postgres 9.1.
Your test is running using database level transactions. When the test completes, the transaction is rolled back so that none of the changes made in the test are actually saved to the database. In your case, this rollback can't happen because you have stubbed out the execute method on the ActiveRecord connection.
You can disable transactions globally and switch to using DatabaseCleaner to enable/disable transactions for various tests. You could then configure DatabaseCleaner to use the transaction strategy by default so your existing tests don't change, and in this one test choose to disable transactions in favor of some other strategy (such as the null strategy, since there is no cleaning to be done for this test).
This other SO post indicates you may be able to avoid disabling transactions globally and turn them off on a per-test basis as well; I have not tried that myself though.