I have created a query to insert data into an Oracle database table (12c), and I have also created an operation in WSO2 ESB DSS to expose this query via web services. I am getting back the following response from my WSO2 call:
{
"REQUEST_STATUS": "SUCCESSFUL"
}
However, when I look in the database, my data has not been inserted. How do I do a commit after the insert so that the data is written to the database?
Use the following property in the datasource configuration for the data service:
<property name="autoCommit">true</property>
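For context, here is a sketch of where that property sits in the datasource config block of the data service (.dbs) file; the datasource id, driver, URL, and credentials below are placeholders for your own values:

<config id="OracleDS">
   <property name="driverClassName">oracle.jdbc.driver.OracleDriver</property>
   <property name="url">jdbc:oracle:thin:@localhost:1521/ORCL</property>
   <property name="username">db_user</property>
   <property name="password">db_password</property>
   <property name="autoCommit">true</property>
</config>

With autoCommit enabled, each insert is committed as soon as it executes, so no separate commit step is needed after the insert query.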
I have created a simple service using Spring Boot. There is a table Message_Queue, as shown in this image link:
Message_Queue Table
I am using an Oracle database. Here, msisdn is a NUMBER and the primary key, and the table has some other fields: sim_number, activation_date, and subscription_id.
There is another Spring Boot service which sends a REST API request to add the activation_date and subscription_id details to the respective record in the Message_Queue table. The table holds up to 2 million records.
I want to create a scheduled task in my Spring Boot application which sends a REST API request to a third service, but only for those records that have the activation_date and subscription_id details filled in, and which then deletes each such record from the table.
What is the best way to achieve that using the Spring Boot framework? Please answer to enterprise-level standards.
Is it a good approach to fetch, say, 1000 records at a time with pagination until all records in the table have been checked, test each record for activation_date and subscription_id, and, where both are present, send a REST request for the record and also a delete request to the database for the same record?
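One possible sketch, rather than a definitive enterprise answer: push the null checks into the query itself instead of filtering in Java, and let a scheduled job work through the matching rows in fixed-size batches. The entity, repository, and endpoint names below are assumptions, and it presumes @EnableScheduling is set on the application class (each type would live in its own file):

import org.springframework.data.domain.Page;
import org.springframework.data.domain.PageRequest;
import org.springframework.data.domain.Pageable;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
import org.springframework.web.client.RestTemplate;

public interface MessageQueueRepository extends JpaRepository<MessageQueue, Long> {
    // Derived query: only rows with both fields filled in are fetched.
    Page<MessageQueue> findByActivationDateIsNotNullAndSubscriptionIdIsNotNull(Pageable pageable);
}

@Component
public class MessageQueueDispatcher {

    private static final int PAGE_SIZE = 1000;

    private final MessageQueueRepository repository;
    private final RestTemplate restTemplate = new RestTemplate();

    public MessageQueueDispatcher(MessageQueueRepository repository) {
        this.repository = repository;
    }

    @Scheduled(fixedDelay = 60000) // runs one minute after the previous run completes
    public void dispatchCompletedRecords() {
        Page<MessageQueue> page;
        do {
            // Always request page 0: processed rows are deleted, so the next
            // batch of matching rows slides into the first page by itself.
            page = repository.findByActivationDateIsNotNullAndSubscriptionIdIsNotNull(
                    PageRequest.of(0, PAGE_SIZE));
            for (MessageQueue record : page) {
                // Notify the third service first, then delete; if the delete fails,
                // the record is simply re-sent on the next run (at-least-once).
                restTemplate.postForEntity("http://third-service/activations", record, Void.class);
                repository.delete(record);
            }
        } while (page.hasContent());
    }
}

Filtering in the WHERE clause avoids paging through 2 million rows only to discard the incomplete ones, and an index covering activation_date and subscription_id keeps each batch query cheap. Deleting after a successful POST gives at-least-once delivery; if exactly-once matters, the third service should treat msisdn as an idempotency key.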
I am using Spring Data JPA and MongoDB in my REST application. My database structure is such that for each customer type we have a separate Oracle and MongoDB database. Whenever a customer makes an HTTP request to my REST server, I determine the customer type from a request header parameter.
For instance, for customer type A there will be an "Oracle database A" and a "Mongo database A". Similarly, there will be an "Oracle database B" and a "Mongo database B" for customer type B, and so on. The number of customer types is fixed.
Now what I want is: if customer B makes an HTTP request, then for this particular thread all Oracle hits should go to Oracle database B and all Mongo hits should go to Mongo database B.
I am aware of AbstractRoutingDataSource for making JPA multi-tenant, but I cannot think of a way to make both MongoDB and Oracle multi-tenant at the same time.
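A sketch of one way this could look; every class, header, and database name here is an illustrative assumption, and it assumes a javax-servlet-era Spring Boot and a Spring Data MongoDB version where MongoDatabaseFactory exposes getMongoDatabase(String). The idea is that both stacks read the tenant from the same ThreadLocal, which a servlet filter populates from the request header (each class would live in its own file):

import java.io.IOException;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoDatabase;
import org.springframework.data.mongodb.core.SimpleMongoClientDatabaseFactory;
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;
import org.springframework.stereotype.Component;
import org.springframework.web.filter.OncePerRequestFilter;

// Tenant key for the current thread, shared by both routing mechanisms.
public final class TenantContext {
    private static final ThreadLocal<String> TENANT = new ThreadLocal<>();
    public static void set(String tenant) { TENANT.set(tenant); }
    public static String get() { return TENANT.get(); }
    public static void clear() { TENANT.remove(); }
}

// Servlet filter that derives the tenant from the request header.
@Component
public class TenantFilter extends OncePerRequestFilter {
    @Override
    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response,
                                    FilterChain chain) throws ServletException, IOException {
        try {
            TenantContext.set(request.getHeader("X-Customer-Type")); // hypothetical header name
            chain.doFilter(request, response);
        } finally {
            TenantContext.clear(); // never leak the tenant to a pooled thread
        }
    }
}

// Oracle side: picks the DataSource registered under the tenant key
// (the per-tenant targetDataSources map is configured elsewhere).
public class TenantRoutingDataSource extends AbstractRoutingDataSource {
    @Override
    protected Object determineCurrentLookupKey() {
        return TenantContext.get();
    }
}

// Mongo side: same idea, but the routing happens by database name.
public class TenantMongoDatabaseFactory extends SimpleMongoClientDatabaseFactory {
    public TenantMongoDatabaseFactory(MongoClient client, String defaultDb) {
        super(client, defaultDb);
    }
    @Override
    public MongoDatabase getMongoDatabase() {
        return getMongoDatabase("mongo_" + TenantContext.get()); // e.g. mongo_A, mongo_B
    }
}

The TenantRoutingDataSource (with one Oracle DataSource per customer type in its targetDataSources map) is wired behind the JPA EntityManagerFactory, and the TenantMongoDatabaseFactory behind the MongoTemplate, so both Oracle and Mongo hits on the same request thread land on the B databases when the header says B.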
I know how to write a Kafka consumer and insert/update each record into an Oracle database, but I want to leverage the Kafka Connect API and the JDBC Sink Connector for this purpose. Apart from the property file, in my search I couldn't find a complete executable example with detailed steps to configure and write the relevant Java code to consume a Kafka topic with JSON messages and insert/update (merge) a table in an Oracle database using the Kafka Connect API with the JDBC Sink Connector. Can someone demonstrate an example, including configuration and dependencies? Are there any disadvantages to this approach? Do we anticipate any potential issues when the table data increases to millions of records?
Thanks in advance.
There won't be an example for your specific use case because the JDBC connector is meant to be generic.
Here is one configuration example with an Oracle database.
All you need is:
A topic of some format
key.converter and value.converter to be set to deserialize that topic
Your JDBC string and database schema (tables, projection fields, etc)
Any other JDBC Sink-specific options
All this goes in a Java properties / JSON file, not Java source code
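As a minimal sketch of such a properties file (the topic name, connection details, and key column are placeholders; insert.mode=upsert is what the connector turns into a MERGE on Oracle):

name=oracle-jdbc-sink
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
tasks.max=1
topics=my-topic
connection.url=jdbc:oracle:thin:@//localhost:1521/ORCL
connection.user=db_user
connection.password=db_password
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=true
insert.mode=upsert
pk.mode=record_key
pk.fields=ID
auto.create=false

Note that with plain JSON the sink still needs to know column types, so JsonConverter with schemas.enable=true expects each message to carry a schema/payload envelope.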
If you have a specific issue creating this configuration, please comment.
Do we anticipate any potential issues when table data increases to millions?
Well, those issues would be related to the database server, not to Kafka Connect: for example, the disk filling up, or increased load while accepting continuous writes.
Are there any disadvantages with this approach?
You'd have to handle deduplication or record expiration (e.g. GDPR) separately, if you wanted that.
I am using Spring transaction management with the Hibernate transaction manager. I have declarative transactions configured on a class. The transaction starts and commits, but on commit no records are inserted into the table, and I cannot see any insert log messages. Is the transaction using some other Hibernate Session object? How do I make it use the Session object currently in use?
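For what it's worth, the usual cause of this symptom is a DAO that opens its own session via sessionFactory.openSession() instead of joining the Spring-managed one, so the flush at commit happens on a Session the transaction manager never sees. A minimal sketch of the transaction-bound pattern (the entity and DAO names are hypothetical):

import org.hibernate.SessionFactory;
import org.springframework.stereotype.Repository;
import org.springframework.transaction.annotation.Transactional;

@Repository
public class OrderDao {

    private final SessionFactory sessionFactory;

    public OrderDao(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    @Transactional
    public void save(Order order) {
        // getCurrentSession() returns the Session bound to the active Spring
        // transaction; openSession() would create a separate Session whose
        // changes the HibernateTransactionManager never flushes or commits.
        sessionFactory.getCurrentSession().persist(order);
    }
}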
I am currently integrating transaction management into my code.
I am trying to set it up against stored procedures that already exist.
The stored procedures have a commit at the end of them.
In my Eclipse console, I can see the transaction management code being invoked:
datasource.DataSourceTransactionManager Initiating transaction rollback
datasource.DataSourceTransactionManager Rolling back JDBC transaction on Connection [oracle.jdbc.driver.LogicalConnection#1544055]
datasource.DataSourceTransactionManager Releasing JDBC Connection [oracle.jdbc.driver.LogicalConnection#1544055] after transaction
But I can still see the record that should have been rolled back in my database tables.
If we use spring transaction management, should we remove all commits from our stored procedures?
Thanks
Damien
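For anyone hitting this later, a sketch of why the rollback appears to do nothing: a COMMIT inside the stored procedure ends the database transaction the moment the procedure executes, so by the time DataSourceTransactionManager issues its rollback there is nothing left to undo. For Spring to own the transaction boundary, the COMMIT has to come out of the procedure, and the boundary moves to the Spring-managed layer, roughly like this (the procedure and class names are hypothetical):

import java.math.BigDecimal;
import javax.sql.DataSource;
import org.springframework.jdbc.core.simple.SimpleJdbcCall;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class BillingService {

    private final SimpleJdbcCall updateBalanceCall;

    public BillingService(DataSource dataSource) {
        // The procedure itself contains no COMMIT; Spring commits or rolls
        // back the surrounding JDBC transaction instead.
        this.updateBalanceCall = new SimpleJdbcCall(dataSource)
                .withProcedureName("UPDATE_BALANCE");
    }

    @Transactional
    public void updateBalance(long accountId, BigDecimal amount) {
        updateBalanceCall.execute(accountId, amount);
        // Any runtime exception thrown here now genuinely rolls back
        // everything the procedure did, because nothing was pre-committed.
    }
}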