Managing Database Calls For Every Socket Message Spring Boot - spring-boot

I have a Spring Boot WebSocket server application up and running as a WebRTC signaling server.
This server has to log some data to the database, based on the messages exchanged with the server via the different client sockets.
While trying a conference call from the client side with more than 2 peers, and running the signaling server under a debugger with breakpoints, the scenario completed successfully: the database was updated as expected and the conference call took place.
But if I run the server without the debugger and breakpoints, I get an SQL error:
Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1; statement executed: HikariProxyPreparedStatement#1623454983 wrapping com.mysql.cj.jdbc.ClientPreparedStatement:
delete from my_table where my_table_id='601cbe2b-6af2-4e82-a69c-52b92d30686c'; nested exception is org.hibernate.StaleStateException: Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1; statement executed: HikariProxyPreparedStatement#1623454983 wrapping com.mysql.cj.jdbc.ClientPreparedStatement:
delete from my_table where my_table_id='601cbe2b-6af2-4e82-a69c-52b92d30686c'
I am using Spring Data JPA and calling the save function on every message received by the WebSocket, after populating the entity data and all its nested lists and related entity objects, in order to keep a record of the call flow.
conferenceRepository.save(conference);
I think the error is caused by queries running concurrently against the database, in unpredictable order, acting on data that is not yet in the database.
In debug mode I take time to move from one breakpoint to the next, which effectively serializes the writes and ensures the data is persisted first.
But I am not totally sure of the problem.
Is there a recommended way to handle concurrent database calls and updates, so that the data from concurrent WebSocket messages is preserved and persisted properly in the database?
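
One pattern that is often suggested for this kind of race (a minimal sketch under assumed names, not a verified fix for this exact application) is to serialize persistence per conference, so that all messages touching the same Conference entity are saved in arrival order instead of concurrently:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch: one single-threaded executor per conference id, so writes for the
// same conference never interleave. Class and method names are illustrative.
public class ConferenceMessageSerializer {

    private final ConcurrentMap<String, ExecutorService> executors =
            new ConcurrentHashMap<>();

    public void onMessage(String conferenceId, Runnable persistAction) {
        executors
                .computeIfAbsent(conferenceId, id -> Executors.newSingleThreadExecutor())
                .submit(persistAction); // e.g. () -> conferenceRepository.save(conference)
    }

    // Call when the conference ends, to avoid leaking idle threads.
    public void conferenceEnded(String conferenceId) {
        ExecutorService executor = executors.remove(conferenceId);
        if (executor != null) {
            executor.shutdown();
        }
    }
}

An alternative or complement is optimistic locking (a @Version field on the entity plus a retry on StaleStateException), but serializing per conference removes the interleaving that appears to cause the failure here.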

Related

What is the best approach when polling data from the DB and then querying the DB again to fetch additional information?

The Spring Boot application that I am working on
polls 1000 messages from table X (table X is populated by another service, s1);
from each message, gets the account number and queries table Y to get additional information about the account.
I am using Spring Integration to poll the messages from table X; for reading the additional account information, I am planning to use Spring JDBC.
We are expecting about 10k messages every day.
Is the above approach, querying table Y for each message, a good approach?
No, it indeed is not. If all of that data is in the same database, consider writing a proper SELECT to join those tables in a single query performed by that source polling channel adapter.
Another approach is to implement a stored procedure which will do that job for you and will return the whole needed data: https://docs.spring.io/spring-integration/reference/html/jdbc.html#stored-procedures.
However, if the memory needed to handle that number of records at once is a limit in your environment, or you don't care how fast all of them are processed, then an integration flow with parallel processing of the split polling result is indeed OK. For that goal you can use a JdbcOutboundGateway as a service in your flow instead of playing with a plain JdbcTemplate: https://docs.spring.io/spring-integration/reference/html/jdbc.html#jdbc-outbound-gateway
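
For illustration, here is a minimal sketch of the single-query approach with a JdbcPollingChannelAdapter (the table and column names, channel name, and polling interval are assumptions; setMaxRows assumes a recent Spring Integration version):

import javax.sql.DataSource;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.annotation.InboundChannelAdapter;
import org.springframework.integration.annotation.Poller;
import org.springframework.integration.jdbc.JdbcPollingChannelAdapter;

@Configuration
public class PollingConfig {

    // One SELECT joins table X and table Y, so no per-message lookup is needed.
    @Bean
    @InboundChannelAdapter(channel = "messages", poller = @Poller(fixedDelay = "5000"))
    public JdbcPollingChannelAdapter pollingAdapter(DataSource dataSource) {
        JdbcPollingChannelAdapter adapter = new JdbcPollingChannelAdapter(dataSource,
                "SELECT x.msg_id, x.account_no, y.account_name, y.account_status "
                + "FROM table_x x JOIN table_y y ON y.account_no = x.account_no "
                + "WHERE x.processed = 0");
        // Mark the polled rows so the next poll does not pick them up again.
        adapter.setUpdateSql("UPDATE table_x SET processed = 1 WHERE msg_id IN (:msg_id)");
        adapter.setMaxRows(1000); // poll up to 1000 messages per cycle
        return adapter;
    }
}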

Oracle wait for table update in procedure

I have two databases (for example, let's call them A and B) with a one-way dblink: a security restriction on one database (A) does not allow connecting to it from outside.
I need to put some data from B to A, process it, and return a response. This task was done in the following way:
On database B I made 2 tables: REQUESTS_IN (req_id number, data clob) and REQUESTS_OUT (req_id number, data clob). When I need to send data to database A, I put it into REQUESTS_IN and start a job that checks for new rows in REQUESTS_OUT.
On database A there is a job which checks for new rows in REQUESTS_IN using the dblink, gets the data, processes it, and puts the answer into REQUESTS_OUT.
Depending on the business logic of the data processing and the job delays, it can take up to 1 minute to get the response in REQUESTS_OUT. That is OK while the application is async and can wait for the response.
Now I need to make a sync version of this solution: on database B the application will call some function to send data to database A, and it needs to return the response in the same call.
I tried to find a solution among Oracle database functions, but the only thing that comes to mind is to use dbms_pipe, i.e. on database B use dbms_pipe.receive_message to wait for a message, and on database A use dbms_pipe.send_message. But I'm not sure if this is the correct solution.
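
For what it's worth, the waiting side on database B could look roughly like this from JDBC (a sketch only: the pipe-name scheme, the 60-second timeout, and the VARCHAR2 payload are assumptions). Since the dblink runs from A to B, database A would deliver the reply by calling a procedure on B over the dblink that does dbms_pipe.pack_message followed by dbms_pipe.send_message on the matching pipe:

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.Types;
import javax.sql.DataSource;

public class SyncRequestClient {

    private final DataSource dataSource; // connection pool for database B

    public SyncRequestClient(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // Blocks until database A answers on the pipe, or the timeout expires.
    public String awaitResponse(long reqId) throws Exception {
        String block =
                  "DECLARE\n"
                + "  l_status INTEGER;\n"
                + "  l_data   VARCHAR2(4000);\n"
                + "BEGIN\n"
                + "  l_status := DBMS_PIPE.RECEIVE_MESSAGE('RESP_' || ?, 60);\n"
                + "  IF l_status = 0 THEN\n"
                + "    DBMS_PIPE.UNPACK_MESSAGE(l_data);\n"
                + "  END IF;\n"
                + "  ? := l_status;\n"
                + "  ? := l_data;\n"
                + "END;";
        try (Connection conn = dataSource.getConnection();
             CallableStatement cs = conn.prepareCall(block)) {
            cs.setLong(1, reqId);
            cs.registerOutParameter(2, Types.INTEGER);
            cs.registerOutParameter(3, Types.VARCHAR);
            cs.execute();
            if (cs.getInt(2) != 0) { // non-zero status means timeout or failure
                throw new IllegalStateException("No response for request " + reqId);
            }
            return cs.getString(3);
        }
    }
}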

BizTalk send port returns ORA-01013: user requested cancel of current operation

I have an application that inserts many rows into an Oracle database. The send port returns "ORA-01013: user requested cancel of current operation".
The send port is a "WCF-Custom" port using OracleDbBinding for the connection to the database.
I had the same issue in the past. My problem was using a "WCF-Custom" port with OracleDBBinding to invoke an Oracle PL/SQL procedure. That procedure was very slow to respond, and eventually I received the error "ORA-01013: user requested cancel of current operation".
My issue was resolved by changing the PL/SQL procedure. I think the error was produced by the "ReceiveTimeout" property of the send port. The documentation for this property says: "Specifies the WCF message receive timeout. Essentially, this means the maximum amount of time the adapter waits for an inbound message." I suspect that when the ReceiveTimeout elapses, WCF-Custom cancels the operation and Oracle then raises the error.
What's happening:
When inserting large numbers of records, WCF makes several parallel requests to insert the data. The default 'UseAmbientTransaction' setting wraps all the inserts within a single transaction. If one of the inserted rows breaks a database constraint, it tries to roll back the transaction for all the inserts. The transactions all return the Oracle 1013 exception, and the real cause of the failure is lost.
Solution:
On the send port's 'Transport Advanced Options' tab, set the 'Ordered Delivery' check box. This prevents the parallel insert operations, and the real cause of the error will be logged.

How to handle Oracle bulk inserts with transactions if a transaction status unknown error is received

In my application I have used Oracle (OCI) bulk executions using the following function.
OCIStmtExecute
Under all normal conditions it works as expected. But once an Oracle node failover happens, it gives rejections like "ORA-25405" on commits.
ORA-25405: transaction status unknown
According to the available guidelines, "The user must determine the transaction's status manually".
My question is: can there be a scenario where my bulk insert/update partially succeeds while giving the above error?
From http://docs.oracle.com/cd/B10500_01/appdev.920/a96584/oci16m89.htm
With global transactions, it is possible that the transaction is now in-doubt, meaning that it is neither committed nor aborted.
This is exactly your case. It means the transaction is not known to be committed.
OCITransCommit() attempts to retrieve the status of the transaction from the server. The status is returned.
The solution then is to try again to commit the transaction with OCITransCommit(), then get the return value and check it.
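
The question uses the OCI C API, but the same retry-and-check idea can be sketched as a JDBC analogue (the assumption here is that the driver surfaces ORA-25405 as vendor error code 25405; the definitive status still comes from the server on the retried commit):

import java.sql.Connection;
import java.sql.SQLException;

public final class CommitRetry {

    private static final int ORA_TRANSACTION_STATUS_UNKNOWN = 25405;

    // Commits, and on ORA-25405 retries once so the server can report the
    // transaction's actual status (committed or rolled back).
    public static void commitWithRetry(Connection conn) throws SQLException {
        try {
            conn.commit();
        } catch (SQLException e) {
            if (e.getErrorCode() == ORA_TRANSACTION_STATUS_UNKNOWN) {
                conn.commit(); // retry; a failure here means the status is still unresolved
            } else {
                throw e;
            }
        }
    }
}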

BPEL for data synchronization

I am trying to use Oracle SOA BPEL to sync data for about 1000 employees between an HR service and our local db. I get the IDs of all the employees with a findEmp call, then loop empCount times, calling getEmp(empID) on the same HR service and updating/inserting into our db in each iteration. This times out after about 60-odd employees, even though the process is an asynchronous one. How should I redesign the process flow?
The timeout is occurring because you don't have any dehydration points in your BPEL code. Oracle BPEL needs to dehydrate before the Java transaction times out.
If you are using the Oracle BPEL DB Adapter, you can actually submit many objects at once for processing to the database; simply put more than one record in the element passed to the DB Adapter. This may help a lot, since you can fetch all your data at once and then write it all at once.
Additionally, you can extend the transaction timeout for Oracle BPEL; it's a configuration parameter in transaction-manager.xml (there are also some tweaks to the EJB timeouts you need to make on 10.1.3.3.x and 10.1.3.4.x). The Oracle BPEL docs tell you how to change this variable.
