How to execute multiple inserts in batch in r2dbc? - spring

I need to insert multiple rows into one table in one batch.
In DatabaseClient I found the insert() statement and the using(Publisher objectToInsert) method, which accepts multiple objects as an argument. But would it insert them in one batch or not?
Another possible solution is connection.createBatch(), but it has a drawback: I cannot pass my entity object to it, and I cannot generate the SQL query from the entity.
So, is it possible to create a batch insert in R2DBC?

There are two questions here:
Would DatabaseClient.insert() insert them in one batch or not?
No, not as a batch.
Is it possible to create a batch insert in R2DBC (other than Connection.createBatch())?
No, Connection.createBatch() is currently the only way to create a Batch; a minimal sketch of it follows the issue links below.
See also issues:
spring-data-r2dbc#259
spring-framework#27229
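For reference, a minimal sketch of the Connection.createBatch() route through DatabaseClient (the DatabaseClient package differs between spring-data-r2dbc 1.x and Spring Framework 5.3+, so treat the imports as assumptions). A Batch accepts only raw SQL strings, so values must be inlined; there is no parameter binding or entity mapping:
import io.r2dbc.spi.Result;
import org.springframework.r2dbc.core.DatabaseClient;
import reactor.core.publisher.Flux;

// A sketch only: each add() enqueues one raw SQL statement, and all of
// them are sent to the database as a single batch on execute().
Flux<Integer> batchInsert(DatabaseClient databaseClient) {
    return databaseClient.inConnectionMany(connection ->
            Flux.from(connection.createBatch()
                    .add("INSERT INTO posts (title, content) VALUES ('t1', 'c1')")
                    .add("INSERT INTO posts (title, content) VALUES ('t2', 'c2')")
                    .execute())
                    // getRowsUpdated() is Publisher<Integer> in the R2DBC 0.8 SPI
                    .flatMap(Result::getRowsUpdated));
}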

There is no direct support yet, but I found it is possible to use the Connection to overcome this barrier quite simply; check out this issue: spring-data-r2dbc#259
The Statement has an add() method to repeat the parameter bindings.
The complete code of my solution can be found here.
return this.databaseClient.inConnectionMany(connection -> {
    var statement = connection.createStatement("INSERT INTO posts (title, content) VALUES ($1, $2)")
            .returnGeneratedValues("id");
    for (var p : data) {
        statement.bind(0, p.getTitle()).bind(1, p.getContent()).add();
    }
    return Flux.from(statement.execute())
            .flatMap(result -> result.map((row, rowMetadata) -> row.get("id", UUID.class)));
});
A test for this method.
@Test
public void testSaveAll() {
    var data = Post.builder().title("test").content("content").build();
    var data1 = Post.builder().title("test1").content("content1").build();
    var result = posts.saveAll(List.of(data, data1)).log("[Generated result]")
            .doOnNext(id -> log.info("generated id: {}", id));
    assertThat(result).isNotNull();
    result.as(StepVerifier::create)
            .expectNextCount(2)
            .verifyComplete();
}
The generated ids are printed as expected in the console.
...
2020-10-08 11:29:19,662 INFO [reactor-tcp-nio-2] reactor.util.Loggers$Slf4JLogger:274 onNext(a3105647-a4bc-4986-9ad4-1e6de901449f)
2020-10-08 11:29:19,664 INFO [reactor-tcp-nio-2] com.example.demo.PostRepositoryTest:31 generated id: a3105647-a4bc-4986-9ad4-1e6de901449f
//.....
2020-10-08 11:29:19,671 INFO [reactor-tcp-nio-2] reactor.util.Loggers$Slf4JLogger:274 onNext(a611d766-f983-4c8e-9dc9-fc78775911e5)
2020-10-08 11:29:19,671 INFO [reactor-tcp-nio-2] com.example.demo.PostRepositoryTest:31 generated id: a611d766-f983-4c8e-9dc9-fc78775911e5
//......
Process finished with exit code 0

Related

KTable & LogAndContinueExceptionHandler

I have a very simple consumer from which I create a materialized view. I have enabled validation on my value object (throwing ConstraintViolationException for invalid JSON data). When I receive a value for which validation fails, I expect the value to be logged and the consumer to read the next offset, since I have LogAndContinueExceptionHandler enabled.
However, LogAndContinueExceptionHandler is never invoked, and consumePojo transitions from state PENDING_ERROR to ERROR.
Code
@Bean
public Consumer<KTable<String, Pojo>> consume() {
    return values ->
        values
            .filter((key, value) -> Objects.nonNull(key))
            .mapValues(value -> value, Materialized.<String, Pojo>as(Stores.inMemoryKeyValueStore("POJO_STORE_NAME"))
                    .withKeySerde(Serdes.String())
                    .withValueSerde(SerdeUtil.pojoSerde())
                    .withLoggingDisabled())
            .toStream()
            .peek((key, value) -> log.debug("Receiving Pojo from topic with key: {}, and UUID: {}", key, value == null ? 0 : value.getUuid()));
}
Why is LogAndContinueExceptionHandler not invoked in the case of a KTable?
Note: if the code is changed to use KStream, then I do see the logging and the records being skipped, but with KTable I do not.
In order to handle exceptions not handled by Kafka Streams, use the KafkaStreams.setUncaughtExceptionHandler method with a StreamsUncaughtExceptionHandler implementation; it needs to return one of three available enumerations:
REPLACE_THREAD
SHUTDOWN_CLIENT
SHUTDOWN_APPLICATION
and in your case REPLACE_THREAD is the best option, as you can see in KIP-671:
REPLACE_THREAD:
The current thread is shutdown and transits to state DEAD.
A new thread is started if the Kafka Streams client is in state RUNNING or REBALANCING.
For the Global thread this option will log an error and revert to shutting down the client until the option had been added
In Spring Kafka you can replace the default StreamsUncaughtExceptionHandler via the StreamsBuilderFactoryBean:
@Autowired
void setMyStreamsUncaughtExceptionHandler(StreamsBuilderFactoryBean streamsBuilderFactoryBean) {
    streamsBuilderFactoryBean.setStreamsUncaughtExceptionHandler(
            exception -> StreamsUncaughtExceptionHandler.StreamThreadExceptionResponse.REPLACE_THREAD);
}
I was able to solve the problem after looking at the logs carefully: the valueSerde for the Pojo was showing useNativeDecoding (the default being JsonSerde), and because of this the DeserializationExceptionHandler wasn't invoked and the thread terminated.
The problem went away when I fixed the valueSerde in application.properties.
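For illustration, a sketch of what such a fix can look like with the Spring Cloud Stream Kafka Streams binder; the binding name (consume-in-0, derived from the consume() function) and the serde class are assumptions, so adjust them to your application:
# Assumed binding name and serde class; adjust to your own setup.
spring.cloud.stream.kafka.streams.bindings.consume-in-0.consumer.valueSerde=org.springframework.kafka.support.serializer.JsonSerde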

Jdbi transaction - multiple methods - Resources should be closed

Suppose I want to run two SQL queries in a transaction. I have code like the below:
jdbi.useHandle(handle -> handle.useTransaction(h -> {
    var id = handle.createUpdate("some query")
            .executeAndReturnGeneratedKeys()
            .mapTo(Long.class)
            .findOne()
            .orElseThrow(() -> new IllegalStateException("No id"));
    handle.createUpdate("INSERT INTO SOMETABLE (id) " +
            "VALUES (:id, xxx);")
            .bind("id", id)
            .execute();
}));
Now, as the complexity grows, I want to extract each update into its own method:
jdbi.useHandle(handle -> handle.useTransaction(h -> {
    var id = someQuery1(h);
    someQuery2(id, h);
}));
...with someQuery1 looking like:
private Long someQuery1(Handle handle) {
    return handle.createUpdate("some query")
            .executeAndReturnGeneratedKeys()
            .mapTo(Long.class)
            .findOne()
            .orElseThrow(() -> new IllegalStateException("No id"));
}
Now when I refactor to the latter, I get a SonarQube blocker bug on handle.createUpdate in someQuery1, stating:
Resources should be closed
Connections, streams, files, and other classes that implement the Closeable interface or its super-interface, AutoCloseable, need to be closed after use...
I was under the impression that, because I'm using jdbi.useHandle (and passing the same handle to the called methods), a callback would be used and the handle immediately released upon return. As per the JDBI docs:
Both withHandle and useHandle open a temporary handle, call your
callback, and immediately release the handle when your callback
returns.
Any help / suggestions appreciated.
TIA
SonarQube doesn't know the specifics of the JDBI implementation and simply triggers on an AutoCloseable/Closeable not being closed. Just suppress the Sonar issue and/or file a feature request with the SonarQube team to improve this behavior.
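For example, a minimal sketch of suppressing it at the method level, assuming the standard Sonar rule key java:S2095 ("Resources should be closed"):
// The handle's lifecycle is owned by jdbi.useHandle(...), so the rule
// fires as a false positive here; suppress it locally on the method.
@SuppressWarnings("java:S2095")
private Long someQuery1(Handle handle) {
    return handle.createUpdate("some query")
            .executeAndReturnGeneratedKeys()
            .mapTo(Long.class)
            .findOne()
            .orElseThrow(() -> new IllegalStateException("No id"));
}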

Is there any way to view the physical SQLs executed by Calcite JDBC?

Recently I have been studying Apache Calcite. By now I can use explain plan for via JDBC to view the logical plan, but I am wondering how I can view the physical SQL executed when the plan runs, since there may be bugs in the physical SQL generation and I need to verify its correctness.
val connection = DriverManager.getConnection("jdbc:calcite:")
val calciteConnection = connection.asInstanceOf[CalciteConnection]
val rootSchema = calciteConnection.getRootSchema()
val dsInsightUser = JdbcSchema.dataSource("jdbc:mysql://localhost:13306/insight?useSSL=false&serverTimezone=UTC", "com.mysql.jdbc.Driver", "insight_admin","xxxxxx")
val dsPerm = JdbcSchema.dataSource("jdbc:mysql://localhost:13307/permission?useSSL=false&serverTimezone=UTC", "com.mysql.jdbc.Driver", "perm_admin", "xxxxxx")
rootSchema.add("insight_user", JdbcSchema.create(rootSchema, "insight_user", dsInsightUser, null, null))
rootSchema.add("perm", JdbcSchema.create(rootSchema, "perm", dsPerm, null, null))
val stmt = connection.createStatement()
val rs = stmt.executeQuery("""explain plan for select "perm"."user_table".* from "perm"."user_table" join "insight_user"."user_tab" on "perm"."user_table"."id"="insight_user"."user_tab"."id" """)
val metaData = rs.getMetaData()
while (rs.next()) {
  for (i <- 1 to metaData.getColumnCount) printf("%s ", rs.getObject(i))
  println()
}
The result is:
EnumerableCalc(expr#0..3=[{inputs}], proj#0..2=[{exprs}])
  EnumerableHashJoin(condition=[=($0, $3)], joinType=[inner])
    JdbcToEnumerableConverter
      JdbcTableScan(table=[[perm, user_table]])
    JdbcToEnumerableConverter
      JdbcProject(id=[$0])
        JdbcTableScan(table=[[insight_user, user_tab]])
There is a Calcite hook, Hook.QUERY_PLAN, that is triggered with the JDBC query strings. From the source:
/** Called with a query that has been generated to send to a back-end system.
* The query might be a SQL string (for the JDBC adapter), a list of Mongo
* pipeline expressions (for the MongoDB adapter), et cetera. */
QUERY_PLAN;
You can register a listener to log any query strings, like this in Java:
Hook.QUERY_PLAN.add((Consumer<String>) s -> LOG.info("Query sent over JDBC:\n" + s));
It is possible to see the generated SQL query by setting the calcite.debug=true system property. The exact place where this happens is JdbcToEnumerableConverter. As this happens during the execution of the query, you will have to remove the "explain plan for" prefix from the query passed to stmt.executeQuery.
Note that by setting debug mode to true you will get a lot of other messages, as well as other information regarding the generated code.
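As a minimal sketch of that approach, assuming a java.sql.Connection named connection with the question's two JDBC schemas already registered:
// Run the JVM with -Dcalcite.debug=true; the property is read when
// Calcite's classes initialize, so prefer the command-line flag over
// setting it programmatically after Calcite is already loaded.
try (Statement stmt = connection.createStatement();
     ResultSet rs = stmt.executeQuery(
         "select * from \"perm\".\"user_table\"")) { // no "explain plan for"
    while (rs.next()) {
        // consume rows; the SQL sent to the MySQL back-ends is printed
        // among the debug output
    }
}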

Getting #rid while Update-Upsert in OrientDB without searching again

I am currently using OrientDB to build a graph model, with PyOrient to send the commands for creating the nodes and edges.
Whenever I use the INSERT command, I get a result that includes the #rid in return.
result = db.command("INSERT INTO CNID SET connected_id {0}".format(somevalue))
print result
OUTPUT: {'#CNID':{'connected_id': '10000'},'version':1,'rid':'#12:1221'}
However, if I use the UPDATE ... UPSERT command, I only get one value in return, which is not the #rid.
result = db.command("UPDATE CNID SET connected_id={0} UPSERT WHERE connected_id={0}".format(cn_value))
print result
OUTPUT: 1
I want to know: is it possible to get the #rid as well when doing an UPDATE ... UPSERT operation?
A useful method to retrieve the #rid from an UPDATE / UPSERT operation is to use the RETURN AFTER $current syntax in your SQL command. I created the following example in PyOrient:
PyOrient Code:
import pyorient

db_name = 'Stack37308500'

print("Connecting to the server...")
client = pyorient.OrientDB("localhost", 2424)
session_id = client.connect("root", "root")
print("OK - sessionID: ", session_id, "\n")

if client.db_exists(db_name, pyorient.STORAGE_TYPE_PLOCAL):
    client.db_open(db_name, "root", "root")
    result = client.command("UPDATE CNID SET connected_id = 20000 UPSERT RETURN AFTER $current.#rid WHERE connected_id = 20000")
    for idx, val in enumerate(result):
        print(val)

client.db_close()
By specifying $current.#rid you'll be able to retrieve the #rid of the resulting record (in this case a new record).
Code Output:
Connecting to the server...
OK - sessionID: 25
##12:1
You can also modify the query to retrieve the whole resulting record by using only $current, without specifying #rid (in this case I updated the record #12:1).
Query:
UPDATE CNID SET connected_id = 30000 UPSERT RETURN AFTER $current WHERE connected_id = 20000
Code Output:
Connecting to the server...
OK - sessionID: 26
{'#CNID':{'connected_id': 30000},'version':2,'rid':'#12:1'}
Hope it helps

Worklight 5.0.6 : Ajax request exception: Form too large while sending large data to data adapter

My question is essentially the same as the one posted on the developerWorks forum (the forum is read-only due to migration), which is:
I have a http adapter that interfaces with external web services. Part
of the payload is audio, and images. We're hitting a form size limit.
Please see attached exception at end of this post. I've read on
previous posts that jetty configurations need to be adjusted to
accommodate the larger payload. We want to control this size limit at
the server-side application layer, and thought of creating a
jetty-web.xml to define the max form size as 400000.
In Worklight is this the proper approach to resolve this issue?
If this is the proper approach can you provide details whether the
jetty-web.xml should be placed under server/conf, or does it need to
be under WEB-INF of the application war?
If the file needs to be placed under WEB-INF, can you explain how to
get it placed under WEB-INF during the WL project build?
Thanks
E: Ajax request exception: Form too large 802600>200000
2013-02-06 11:39:48 FWLSE0117E: Error code: 1, error description:
INTERNAL_ERROR, error message: FWLSE0069E: An internal error occurred
during gadget request Form too large 802600>200000, User Identity
{wl_authenticityRealm=null, GersServiceAdapterRealm=(name:USAEMP4,
loginModule:GersServiceAdapterLoginModule),
wl_remoteDisableRealm=(name:NullLoginModule,
loginModule:NullLoginModule), SampleAppRealm=null,
wl_antiXSRFRealm=(name:antiXSRF, loginModule:WLAntiXSRFLoginModule),
wl_deviceAutoProvisioningRealm=null, WorklightConsole=null,
wl_deviceNoProvisioningRealm=(name:device,
loginModule:WLDeviceNoProvisioningLoginModule),
myserver=(name:3e857b6a-d2f6-40d1-8c9c-10ca1b96c8df,
loginModule:WeakDummy),
wl_anonymousUserRealm=(name:3e857b6a-d2f6-40d1-8c9c-10ca1b96c8df,
loginModule:WeakDummy)}.
I have exactly the same problem:
I send a large amount of data to a Worklight adapter, and my application fails with the following error message in the log:
[2013-08-21 09:48:17] FWLSE0020E: Ajax request exception: Form too large 202534>200000
[2013-08-21 09:48:18] FWLSE0117E: Error code: 1, error description: INTERNAL_ERROR, error message: FWLSE0069E: An internal error occurred during gadget request Form too large 202534>200000, User Identity {wl_authenticityRealm=null, wl_remoteDisableRealm=(name:null, loginModule:NullLoginModule), SampleAppRealm=null, wl_antiXSRFRealm=(name:b2isf3704k2fl8hovpa6lv9mig, loginModule:WLAntiXSRFLoginModule), wl_deviceAutoProvisioningRealm=null, WorklightConsole=null, wl_deviceNoProvisioningRealm=(name:40a24da9-0a32-464a-8dec-2ab402c683ae, loginModule:WLDeviceNoProvisioningLoginModule), myserver=(name:2b1a7864-37c4-47f0-9f5c-49621b6915b5, loginModule:WeakDummy), wl_anonymousUserRealm=(name:2b1a7864-37c4-47f0-9f5c-49621b6915b5, loginModule:WeakDummy)}.
This occurs on calling an adapter procedure via WL.Client.invokeProcedure(...), before the first line of the called procedure is reached; if I try to log the start of the called procedure, nothing is written to my debug log.
I can give you my source code.
This part is called by a DHTML user event (onclick):
// Construct the param to pass to the WL adapter insert procedure
var paramObject = {
    QCDART: machine, // machine is a javascript variable as long int
    QTITRE: title, // title is a javascript variable as string(255)
    QDESC: desc, // desc is a javascript variable as string(255)
    QHODAT: todayDateDb2IntFormat, // todayDateDb2IntFormat is a javascript variable as long int
    QACTIF: active, // active is a javascript variable as int
    SSRCFIC: currentPdfFileDataBase64, // currentPdfFileDataBase64 is a javascript variable as base64 encoded string from a binary file > 150 ko approx.
    SMIMFIC: 'application/pdf',
    SSIZFIC: currentPdfFileSize // currentPdfFileSize is a javascript variable as long int
};
// Construct adapter invocation data
var invocationData = {
    adapter : 'IseriesDB2Backend', // adapter name
    procedure : 'addModeleReleves', // procedure name
    parameters : [paramObject] // parameters if any
};
WL.Client.invokeProcedure(invocationData, {
    timeout: 60000,
    onSuccess: function() {
        // Notify success
        alert('OK');
    }, // invokeProcedure success callback
    onFailure: function(invocationResult) {
        alert('ERROR');
    } // invokeProcedure failure callback
});
This one is my adapter source code :
var addModeleReleveStatement = WL.Server.createSQLStatement("select QCDDOC from FINAL TABLE (insert into ERIHACFICH.DOCENTQ (QCDART, QTITRE, QDESC, QHODAT, QACTIF) values (?, ?, ?, ?, ?))");

function addModeleReleves(params) {
    WL.Logger.debug('Starting adapter procedure...');
    var modeleReleveResult = WL.Server.invokeSQLStatement({
        preparedStatement : addModeleReleveStatement,
        parameters : [params.QCDART, params.QTITRE, params.QDESC, params.QHODAT, params.QACTIF]
    });
    if (modeleReleveResult.isSuccessful) {
        WL.Logger.debug('Success !');
    }
    WL.Logger.debug('Adapter procedure ended !');
    // Return result (with the last id inside)
    return modeleReleveResult;
}
If the JavaScript variable currentPdfFileDataBase64 is small, everything works normally, but if it exceeds approximately 200000 characters in length, it fails.
Lastly, I can say that the problem occurs in the development environment (WL Studio 5.0.6 + WL Server 5.0.6); I didn't test it on the production environment, which is based on SLES + WebSphere Application Server 7 + Worklight.
Thanks for any help
I understand you are using the test server provided by Worklight.
It looks like this is a Jetty limitation, so you could try either of these:
1) Set the system property org.eclipse.jetty.server.Request.maxFormContentSize to a bigger value (i.e. add -Dorg.eclipse.jetty.server.Request.maxFormContentSize=25000000 to the end of eclipse.ini) before launching Worklight.
or
2) Instead, set the other system property -Dorg.mortbay.jetty.Request.maxFormContentSize=25000000 in the same place.
Another way to solve the problem is to use WL Studio version 6, which no longer uses Jetty as the test environment.
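If you do try the jetty-web.xml route from the original question instead, a typical form of that setting, sketched from standard Jetty configuration rather than Worklight-specific documentation, placed in the application WAR's WEB-INF, would be:
<?xml version="1.0" encoding="UTF-8"?>
<!-- jetty-web.xml: per-webapp override of the form size limit.
     For older org.mortbay Jetty versions the Configure class is
     org.mortbay.jetty.webapp.WebAppContext instead. -->
<Configure class="org.eclipse.jetty.webapp.WebAppContext">
    <Set name="maxFormContentSize">400000</Set>
</Configure>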
