I am using chunk-based processing with Spring Batch to read data in chunks from the database using JdbcPagingItemReader.
Now, I am killing the task in the middle of the write stage of a chunk. Ideally, the records already written in that chunk should have been rolled back, but that did not happen.
The DB used here is DB2.
The approach I used was to set autocommit to false on the connection and then, after the write steps were complete, issue the commit. This approach worked fine for a small set of data, but in production there will be millions of records.
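In essence the writer does something like this (a simplified sketch with made-up table and column names, using the Spring Batch 4 style ItemWriter signature and assuming the reader returns column maps; the real code is more involved):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.util.List;
import java.util.Map;

import javax.sql.DataSource;

import org.springframework.batch.item.ItemWriter;

// Simplified version of the writer: autocommit is switched off and the commit
// is issued manually once the whole chunk has been written.
public class ManualCommitWriter implements ItemWriter<Map<String, Object>> {

    private final DataSource dataSource;

    public ManualCommitWriter(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    @Override
    public void write(List<? extends Map<String, Object>> items) throws Exception {
        try (Connection connection = dataSource.getConnection()) {
            connection.setAutoCommit(false);
            try (PreparedStatement ps = connection.prepareStatement(
                    "INSERT INTO TARGET_TABLE (ID, NAME) VALUES (?, ?)")) {
                for (Map<String, Object> item : items) {
                    ps.setObject(1, item.get("ID"));
                    ps.setObject(2, item.get("NAME"));
                    ps.addBatch();
                }
                ps.executeBatch();
            }
            connection.commit(); // commit only once the whole chunk has been written
        }
    }
}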
So, is this the right approach, and if not, what are the alternative solutions?
Thanks!
If I'm understanding you correctly: you want to do thousands of operations without committing, and only commit once everything is done?
Don't do that. You will have serious problems, both in the application and in the database.
For things like that you are better off with another strategy, for example:
A temporary table: the process keeps committing into it as it goes, and when it finishes, it analyzes the result and, if everything is right, executes an update from the temporary table to the final table.
You have to divide and conquer:
Run the process that generates the thousands of rows in a temporary table.
Analyze the result; the analysis may delete unsatisfactory rows or may even reject the entire load.
Perform what should be done based on that analysis.
I would create three or more Spring Batch steps, one for each of the stages described above, as sketched below.
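A rough sketch of such a job with Spring Batch 4 style Java configuration; table names, step names and the staging/analysis SQL are only illustrative, and the reader and writer for the first step are assumed to be defined elsewhere:

import java.util.Map;

import javax.sql.DataSource;

import org.springframework.batch.core.Job;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.JobBuilderFactory;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.core.step.tasklet.Tasklet;
import org.springframework.batch.item.ItemReader;
import org.springframework.batch.item.ItemWriter;
import org.springframework.batch.repeat.RepeatStatus;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.core.JdbcTemplate;

@Configuration
public class StagingJobConfig {

    // Step 1: chunk-oriented load from the source into a staging table,
    // committing every 1000 items so no single transaction grows unbounded.
    @Bean
    public Step loadIntoStagingStep(StepBuilderFactory steps,
                                    ItemReader<Map<String, Object>> sourceReader,
                                    ItemWriter<Map<String, Object>> stagingWriter) {
        return steps.get("loadIntoStagingStep")
                .<Map<String, Object>, Map<String, Object>>chunk(1000)
                .reader(sourceReader)
                .writer(stagingWriter)
                .build();
    }

    // Step 2: analyze the staging table and remove rows that fail validation
    // (or fail the step to reject the entire load).
    @Bean
    public Step analyzeStagingStep(StepBuilderFactory steps, DataSource dataSource) {
        Tasklet analyze = (contribution, chunkContext) -> {
            new JdbcTemplate(dataSource)
                    .update("DELETE FROM STAGING_TABLE WHERE STATUS = 'INVALID'");
            return RepeatStatus.FINISHED;
        };
        return steps.get("analyzeStagingStep").tasklet(analyze).build();
    }

    // Step 3: move whatever survived the analysis into the final table.
    @Bean
    public Step mergeStep(StepBuilderFactory steps, DataSource dataSource) {
        Tasklet merge = (contribution, chunkContext) -> {
            new JdbcTemplate(dataSource)
                    .update("INSERT INTO FINAL_TABLE SELECT * FROM STAGING_TABLE");
            return RepeatStatus.FINISHED;
        };
        return steps.get("mergeStep").tasklet(merge).build();
    }

    @Bean
    public Job stagingJob(JobBuilderFactory jobs, Step loadIntoStagingStep,
                          Step analyzeStagingStep, Step mergeStep) {
        return jobs.get("stagingJob")
                .start(loadIntoStagingStep)
                .next(analyzeStagingStep)
                .next(mergeStep)
                .build();
    }
}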
When I try to disable Spring Batch metadata creation with the option spring.batch.initialize-schema=never and then launch the batch, nothing happens and the batch terminates immediately without running the related jobs.
On the other hand, when I enable metadata creation, the batch runs, but I get the classic SERIALIZED_CONTEXT field size error. I can't keep saving 4 GB of data in that table every time I execute the batch.
How can I disable metadata creation for good and still have my batch working?
Edit: I think I found a kind of solution to avoid this issue, and I would like your point of view. I am now working with metadata generation enabled. The issue occurs when you have a large set of data stored in the ExecutionContext that you pass between tasklets (we all know this is the reason). In my case it is an ArrayList of elements (POJOs) read from a CSV file with OpenCSV. To overcome the issue I have:
reduced the number of columns and rows in the ArrayList (Spring Batch serializes this ArrayList into the SERIALIZED_CONTEXT field, so the more columns and rows you have, the more likely you are to hit this issue)
changed the type of the SERIALIZED_CONTEXT from TEXT to LONGTEXT
deleted the toString() method defined in the POJO (not sure it really helps)
But I am still wondering: what if you have no choice and you have to load all your columns? What is the best way to prevent this issue?
So this is not an issue with metadata generation but with passing a large amount of data between two steps.
What if you have no choice and you have to load all your columns? What is the best way to prevent this issue?
You can still load all columns, but you have to reduce the chunk size. The whole point of chunk processing in Spring Batch is not to load all the data in memory. What you can do in your case is carefully choose a chunk size that fits your requirements. There is no recipe for choosing the correct chunk size (it depends on the number of columns, the size of each column, and so on), so you need to proceed empirically.
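For illustration, here is a rough sketch (Spring Batch 4 style, names made up) of reading the CSV in a chunk-oriented step instead of passing the whole list through the ExecutionContext; the chunk size becomes the knob you tune, and the reader/writer beans are assumed to exist elsewhere:

import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.item.ItemReader;
import org.springframework.batch.item.ItemWriter;
import org.springframework.batch.item.file.transform.FieldSet;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class CsvImportStepConfig {

    // Chunk-oriented step: only `chunkSize` lines are held in memory and written
    // per transaction, instead of the whole CSV in one ArrayList in the
    // ExecutionContext. The reader (e.g. a FlatFileItemReader over the CSV) and
    // the writer (e.g. a JDBC writer) are defined elsewhere in this sketch.
    @Bean
    public Step csvImportStep(StepBuilderFactory steps,
                              ItemReader<FieldSet> csvReader,
                              ItemWriter<FieldSet> dbWriter,
                              @Value("${import.chunk-size:200}") int chunkSize) {
        return steps.get("csvImportStep")
                .<FieldSet, FieldSet>chunk(chunkSize) // the commit interval: tune it empirically
                .reader(csvReader)
                .writer(dbWriter)
                .build();
    }
}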
We are trying to fetch millions of records per day from a database and process them in an ERP system, and we are facing performance issues. Is there any solution for this in the community?
What is the best way to process the records in Mule? Should we use batch, or is there an alternative? And if we use batch or any other solution, how should we use it so that we don't run into performance issues?
Since we don't have details on your specific situation, here are some general ideas. You will definitely need to do performance testing when dealing with large data sets to make sure your flow design is performing well.
Just to clarify, I'm giving options below that use streaming, which is slightly less performant but will allow you to process large datasets. If you can handle the dataset in memory and you want faster processing, then turn off streaming.
Test your db queries outside of mule to make sure they are performant and tables are properly indexed.
Use a streaming DB connection and tweak the chunk size during performance testing (using this with batch scope is a good combo).
If using on-premise runtime, do performance tuning.
Use batch scope (enterprise edition)
Batch sounds like what you want to do. For each batch job Mule creates a batch job instance, and each instance contains a persistent queue with the batched records. However, it does a deep copy of the MuleEvent containing the flow variables, flow construct, message, processing time, session and exchange pattern, so beware: make sure you keep a light footprint before going into your batch job. If you have to put a payload of millions of records into flow variables to do some manipulation, make sure you delete those variables before you start executing the batch. Mule loads the batch steps in memory and executes them concurrently, so the amount of memory you need is roughly the size of the batch job instance (in particular the MuleEvent) multiplied by the number of batch steps.
I have a Spring Boot application that is using a Postgres database. When the application is deployed I need to run a transactional operation that uploads a zip file that is used to populate the database. The application is checking for duplicate rows before inserting them (because users can upload duplicate data that should just be ignored).
The problem I am having is that the first time I upload the file, even though the indexes are created, they are not used when checking for the existence of a row. My theory is that the query planner decides not to use the indexes because it is relying on the original statistics, which show the tables as empty. If I upload a small zip file first, the problem goes away, because the tables then contain data.
I have two questions. First, is my theory correct, or is there some other reason for this behaviour? Second, if so, is there a way to force Postgres to update the statistics the planner uses at some predefined interval within the same transaction, and can this be done using JPA? Any ideas are appreciated.
Just in case someone runs into this issue, I'll post the solution I found. It appears my theory was correct. The queries will not use the indexes until some statistics are collected. One way to force this is to call ANALYZE after a number of rows have been written to the database. You can do this using a native query like this:
entityManager.createNativeQuery("ANALYZE " + tbl).executeUpdate();
You can wrap this call in a try/catch and ignore any exceptions that might occur if you change the database engine. I couldn't find a database-independent way of doing this, but the approach works fine and the initial upload now performs as expected.
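Put together, the workaround looks roughly like this (class and method names are my own, assuming the javax.persistence API):

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

import org.springframework.stereotype.Component;

// Sketch of the workaround: refresh planner statistics after the bulk insert,
// swallowing failures so the call is harmless on a different database engine.
@Component
public class TableStatisticsRefresher {

    @PersistenceContext
    private EntityManager entityManager;

    public void refreshStatistics(String tableName) {
        try {
            // Postgres-specific: updates the statistics the planner relies on, so
            // the duplicate-check queries can start using the indexes right away.
            // tableName must come from a fixed, trusted list, since it is
            // concatenated into the SQL string.
            entityManager.createNativeQuery("ANALYZE " + tableName).executeUpdate();
        } catch (Exception e) {
            // Ignore: another database engine may not support this statement.
        }
    }
}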
I have an Oracle database with 8 million records and I need to move them to MongoDB.
I know how to import data into MongoDB from a JSON file using the import command, but I want to know whether there is a better way to achieve this, given these issues:
Given the limit on execution time, how should I handle it?
The source database keeps changing every second, so what's the plan to make sure every record gets moved?
Given the limit on execution time, how should I handle it?
Don't do it with a JSON export/import. Instead, write a script that reads the data, transforms it into the format you want in MongoDB, and then inserts it there, roughly as sketched after the list below.
There are a few reasons for this:
Your tables / collections will not be organized the same way. (If they are, then why are you using MongoDB?)
This will allow you to monitor the progress of the operation. In particular, you can log every 1000th entry or so to see progress and to be able to recover from failures.
This will test your new MongoDB code.
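A minimal sketch of such a script in Java (plain JDBC plus the MongoDB sync driver); the connection strings, table, columns and the target document shape are all placeholders you would replace with your own model:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

// Minimal migration sketch: read from Oracle with JDBC, reshape each row into
// the document structure you actually want, and insert into MongoDB in batches.
public class OracleToMongoMigration {

    public static void main(String[] args) throws Exception {
        try (Connection oracle = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "password");
             MongoClient mongo = MongoClients.create("mongodb://localhost:27017")) {

            MongoCollection<Document> target =
                    mongo.getDatabase("mydb").getCollection("customers");

            try (Statement stmt = oracle.createStatement()) {
                stmt.setFetchSize(1000); // stream rows instead of loading 8M at once
                try (ResultSet rs = stmt.executeQuery(
                        "SELECT ID, NAME, CITY FROM CUSTOMERS ORDER BY ID")) {

                    List<Document> buffer = new ArrayList<>();
                    long count = 0;
                    while (rs.next()) {
                        // Reshape the flat row into the document model you designed.
                        buffer.add(new Document("_id", rs.getLong("ID"))
                                .append("name", rs.getString("NAME"))
                                .append("address", new Document("city", rs.getString("CITY"))));

                        if (buffer.size() == 1000) {
                            target.insertMany(buffer);
                            buffer.clear();
                        }
                        if (++count % 1000 == 0) {
                            // Progress marker: also useful as a recovery point after failures.
                            System.out.println("Migrated " + count + " records");
                        }
                    }
                    if (!buffer.isEmpty()) {
                        target.insertMany(buffer);
                    }
                }
            }
        }
    }
}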
The source database keeps changing every second, so what's the plan to make sure every record gets moved?
There are two strategies here.
Track the entries that are updated and re-run your script on newly updated records until you are caught up.
Write to both databases while you run the script that copies the data. Then, once the script is done and everything is up to date, you can cut over to using just MongoDB.
I personally suggest #2; it is the easiest method to manage and test while maintaining uptime. It's still going to be a lot of work, but it will allow the transition to happen.
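The dual-write during the transition can be a thin wrapper around your existing data access code, along these lines (names are illustrative, and error handling and retries are left out):

import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.ReplaceOptions;
import org.bson.Document;
import org.springframework.jdbc.core.JdbcTemplate;

// Transitional dual-write store: every save still goes to Oracle (the system of
// record) and is mirrored to MongoDB, so the copy script only has to catch up once.
public class DualWriteCustomerStore {

    private final JdbcTemplate oracle;
    private final MongoCollection<Document> mongoCustomers;

    public DualWriteCustomerStore(JdbcTemplate oracle,
                                  MongoCollection<Document> mongoCustomers) {
        this.oracle = oracle;
        this.mongoCustomers = mongoCustomers;
    }

    public void save(long id, String name) {
        // Upsert into Oracle as before.
        oracle.update(
            "MERGE INTO CUSTOMERS c USING (SELECT ? id, ? name FROM dual) s "
          + "ON (c.ID = s.id) "
          + "WHEN MATCHED THEN UPDATE SET c.NAME = s.name "
          + "WHEN NOT MATCHED THEN INSERT (ID, NAME) VALUES (s.id, s.name)",
            id, name);

        // Mirror the same record into MongoDB; the upsert keeps the copies in sync.
        mongoCustomers.replaceOne(
            new Document("_id", id),
            new Document("_id", id).append("name", name),
            new ReplaceOptions().upsert(true));
    }
}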
Right now the process that we're using for inserting sets of records is something like this:
(and note that "set of records" means something like a person's record along with their addresses, phone numbers, or any other joined tables).
Start a transaction.
Insert a set of records that are related.
Commit if everything was successful, roll back otherwise.
Go back to step 1 for the next set of records.
Should we be doing something more like this?
Start a transaction at the beginning of the script
Create a savepoint for each set of records.
Insert a set of related records.
Roll back to the savepoint if there is an error, go on if everything is successful.
Commit the transaction at the end of the script.
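In JDBC terms the second process would look roughly like this (a sketch only; PersonSet and the insert logic stand in for whatever a real "set of records" is):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Savepoint;
import java.util.List;

// Sketch of the savepoint-per-set variant: one transaction for the whole run,
// a savepoint per set of records (person + addresses + phone numbers),
// rollback to the savepoint on error, and a single commit at the end.
public class SavepointBatchLoader {

    public void load(Connection conn, List<PersonSet> sets) throws SQLException {
        conn.setAutoCommit(false);
        for (PersonSet set : sets) {
            Savepoint sp = conn.setSavepoint();
            try {
                insertPersonWithChildren(conn, set);
            } catch (SQLException e) {
                conn.rollback(sp); // undo just this set, keep everything else
                // log the failed set and carry on
            }
        }
        conn.commit(); // one commit at the end of the run
    }

    private void insertPersonWithChildren(Connection conn, PersonSet set) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO PERSON (ID, NAME) VALUES (?, ?)")) {
            ps.setLong(1, set.personId);
            ps.setString(2, set.name);
            ps.executeUpdate();
        }
        // ... addresses, phone numbers, etc. inserted the same way
    }

    // Placeholder for whatever the real "set of records" looks like.
    public static class PersonSet {
        long personId;
        String name;
    }
}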
After having some issues with ORA-01555 and reading a few Ask Tom articles (like this one), I'm thinking about trying out the second process. Of course, as Tom points out, starting a new transaction is something that should be defined by business needs. Is the second process worth trying out, or is it a bad idea?
A transaction should be a meaningful Unit Of Work. But what constitutes a Unit Of Work depends upon context. In an OLTP system a Unit Of Work would be a single Person, along with their address information, etc. But it sounds as if you are implementing some form of batch processing, which is loading lots of Persons.
If you are having problems with ORA-01555 it is almost certainly because you have a long-running query supplying data that is being updated by other transactions. Committing inside your loop contributes to the cyclical reuse of UNDO segments, and so tends to increase the likelihood that the segments you are relying on for read consistency will have been reused. So not doing that is probably a good idea.
Whether using SAVEPOINTs is the solution is a different matter. I'm not sure what advantage that would give you in your situation. As you are working with Oracle10g perhaps you should consider using bulk DML error logging instead.
Alternatively you might wish to rewrite the driving query so that it works with smaller chunks of data. Without knowing more about the specifics of your process I can't give specific advice. But in general, instead of opening one cursor for 10000 records it might be better to open it twenty times for 500 rows a pop. The other thing to consider is whether the insertion process can be made more efficient, say by using bulk collection and FORALL.
Some thoughts...
It seems to me one of the points of the Ask Tom link was to size your rollback/undo appropriately so that you avoid the ORA-01555 errors in the first place. Is there some reason that is not possible? As he points out, it's far cheaper to buy disk than to write and maintain code that works around rollback limitations (although I had to do a double-take after reading the $250 price tag for a 36 GB drive - that thread started in 2002! A good illustration of Moore's Law!)
This link (Burleson) shows one possible issue with savepoints.
Is your transaction in actuality steps 2, 3, and 5 of your second scenario? If so, that's what I'd do - commit each transaction. It sounds a bit to me like scenario 1 is a collection of transactions rolled into one?