Nested table in BIRT opens too many db cursors, causing cursor error - birt

I'm designing a report in BIRT with two data sets: a parent table and, nested under it, a child table. For each parent row the child table's query is fired, so the database is hit every time. For example, with 100 parent rows the child query executes 100 times, so 100 cursors are opened, and after 4 or 5 runs I get a maximum-open-cursors error. Is there a better approach?

Which version of BIRT are you using?
Maybe you are hitting a bug:
https://github.com/eclipse/birt/issues/875
This was fixed in March 2022.
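If upgrading is not an option, a common way around this is to avoid the nested query entirely: fetch parent and child rows in a single joined data set and let a table group do the nesting, so only one cursor is opened. A minimal sketch, assuming hypothetical PARENT and CHILD tables related by PARENT_ID:

SELECT p.parent_id, p.parent_name, c.child_detail
FROM parent p
LEFT JOIN child c ON c.parent_id = p.parent_id
ORDER BY p.parent_id;

Bind this to one table in the report, group it on parent_id, put the former parent columns in the group header, and leave the child columns in the detail rows.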

Related

MySQL Workbench shows running query but query not in process list, eventually times out at 7200 seconds

Purpose: remove duplicate records from a large table.
My Process:
Create Table 2 with 9 fields, no indexes, and the same data types per field as Table 1.
Insert all records (those 9 fields) from existing Table 1 into Table 2.
Table 1 contains 71+ million rows, 232 columns, and many duplicate records.
No joins. No Where Clause.
Table 1 contains several indexes.
8 fields are required to get unique records.
I'm trying to set up a process to de-dup large tables, using DENSE_RANK() partitioning to identify the most recently entered duplicate. Thus, those 8 required fields from Table 1, plus Table 1's auto-increment key, are loaded into Table 2.
Version: 10.5.17 MariaDB
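For reference, the Table 2 load is along these lines (a sketch with hypothetical column names; DENSE_RANK() is available from MariaDB 10.2 onward):

INSERT INTO table2 (row_id, c1, c2, c3, c4, c5, c6, c7, c8, dr)
SELECT id, c1, c2, c3, c4, c5, c6, c7, c8,
       DENSE_RANK() OVER (PARTITION BY c1, c2, c3, c4, c5, c6, c7, c8
                          ORDER BY id DESC) AS dr  -- rank 1 = most recently entered
FROM table1;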
The next steps would be:
Create new Table 3 identical to table 1 but with no indexes.
Load all data from Table 1 into Table 3, joining Table 1 to Table 2 on the auto-increment fields, where Table 2's Dense_Rank value = 1 (see the sketch after this list). This inserts ~17 million unique records.
Drop any existing Foreign_Keys related to Table 1
Truncate Table 1
Insert all records from Table 3 into Table 1
Nullify columns in related tables where the foreign key values in Table 1 no longer exist
Re-create the Foreign Keys that had been dropped.
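The Table 3 load in the second step would be something like (again, names hypothetical):

INSERT INTO table3
SELECT t1.*
FROM table1 t1
JOIN table2 t2 ON t2.row_id = t1.id  -- join on the auto-increment key
WHERE t2.dr = 1;                     -- keep only the most recent duplicate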
On a test instance of an existing system I can accomplish everything I need, but only the first time. If I then drop Table 2 before refreshing Table 1 as outlined above, re-create it and try to reload it, Workbench shows the query running until the 7200-second timeout.
While the insert into Table 2 is running, opening a second instance of Workbench and selecting the record count from Table 2 gives me, after 15 minutes, the 71+ million records I'm looking for, but the original Workbench continues running until the timeout.
The query shows up in SHOW PROCESSLIST for those 15 minutes, but disappears around the 15-minute mark, presumably once all records are loaded.
I have tried running with timeouts set to 0 as well as 86,400 seconds (no read timeout and a 24-hour timeout, respectively), but the query still times out at 7200.0xx seconds, i.e. 2 hours, every time.
The exact error message I get is: Error Code: 2013. Lost connection to MySQL server during query 7200.125 sec
I have tried running the insert statement with COMMIT and without.
This is being done in a Test Instance set up for this development where I am the only user, and only a single table is in use during the insert process.
Following an idea I found online, I ran the suggested query below to identify locked tables, but got an error message that the table does not exist:
SELECT TRX_ID, TRX_REQUESTED_LOCK_ID, TRX_MYSQL_THREAD_ID, TRX_QUERY
FROM INNODB_TRX
And, of course, with only a single table being called by a single user in the system, nothing should be locked.
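A note on that error: INNODB_TRX is a view in information_schema, so unless that schema is the current default, the query has to be qualified:

SELECT trx_id, trx_requested_lock_id, trx_mysql_thread_id, trx_query
FROM information_schema.INNODB_TRX;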
As noted above, I can complete the entire process a single time, but when I try to re-run it up to the point just before truncating Table 1 (so I can start over), I consistently fail: Table 2 never gets released after being loaded again.
It is important for me to test a second iteration because, once this process works, it will be applied to several database instances that were not set up just for testing this process. If it only works on a freshly created instance that has had no other processing performed, it may not be dependable.

Loaded query tables either show an error on refresh or leave many blank rows between tables

Dear all,
I really need your help with this one as it's driving me crazy. I have 3 queries loaded as tables, one under another, in my Excel workbook.
When I choose the properties option "Insert cells for new data, delete unused cells", refresh fails with an error (it would move cells in a table on the worksheet).
When I go with the second option, "Insert entire rows for new data", it refreshes but leaves many blank rows between each table and the next.
What can I do to refresh all queries in the workbook without messing up the layout? I just need one empty row between each loaded table.

Oracle deadlock output when caused by foreign keys

We have a multi-threaded batch job that ends up in a deadlock. I am getting conflicting answers from our DBAs as to what actually causes it.
Caused by: java.sql.SQLException: ORA-00060: deadlock detected while waiting for resource
The error output references the SQL for inserting into Table A. Every row going into Table A should be unique. Table A has foreign keys to two other tables, both of which are indexed and have primary keys made up of two columns. Many rows in Table A can point to the same FK in the parent tables. Our code handles FK errors by inserting into the parent tables and then retrying the insert into Table A.
The SQL in the trace log refers to the Table A insert (it does not show parameter binding values). Does this definitively mean that two identical statements were trying to insert into Table A, in which case our prior logic is not thread-safe somewhere? Or could it really be two inserts both referencing an unsatisfied FK, with the deadlock arising from our error handling when inserting into the parent table? If so, wouldn't the SQL in the trace then reference the parent-table insert?
Or, perversely, does the original insert attempt put a lock on the row, so that the second attempt of the insert after handling the error causes the deadlock? Any further debugging assistance?
There's not much info to work with, but my guess would be that two threads are trying to insert the same rows into one of the 'two other tables' at the same time. An idea on debugging is below.
Put a trigger on Table A and on the other two tables (3 triggers in total) that writes the inserted data to logging tables in an autonomous transaction that commits. This way you can see the uncommitted inserts that were rolled back due to the deadlock (the rows that exist in the logging tables but not in the actual tables are the ones that were rolled back).
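A minimal sketch of one such trigger, assuming a pre-created table_a_log logging table (all names hypothetical):

CREATE OR REPLACE TRIGGER trg_log_table_a
AFTER INSERT ON table_a
FOR EACH ROW
DECLARE
  PRAGMA AUTONOMOUS_TRANSACTION;  -- the log insert commits independently
BEGIN
  INSERT INTO table_a_log (logged_at, key_col1, key_col2)
  VALUES (SYSTIMESTAMP, :NEW.key_col1, :NEW.key_col2);
  COMMIT;  -- survives a rollback of the deadlocked transaction
END;
/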
HTH, KR

Informatica: Delete rows from multiple tables sequentially. Then Insert

Consider the following scenario:
Main Control Table: 100 rows (denormalized table with multiple processing IDs).
Set of 10 Parent Tables populated based on Control table.
Set of 10 Child Tables populated based on the Parent tables.
For daily processing:
We need to delete the data from Child tables first.
Parent Tables next.
Control table last.
Then insert data into the Control table using multiple Insert statements, as it is denormalized.
Is this possible in one mapping?
One suggestion is to use a SQL Transform and simply execute the statements one after the other (sketched below).
Is there an alternative way of handling this?
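For illustration, the statements the SQL Transform would run, in order, might look like this (table names are hypothetical, and :pid stands in for however the processing ID is bound):

DELETE FROM child_table_01  WHERE process_id = :pid;
-- ...repeat for the remaining child tables...
DELETE FROM parent_table_01 WHERE process_id = :pid;
-- ...repeat for the remaining parent tables...
DELETE FROM control_table   WHERE process_id = :pid;
INSERT INTO control_table (process_id, attr_name, attr_value) VALUES (:pid, 'attr_1', 'value 1');
INSERT INTO control_table (process_id, attr_name, attr_value) VALUES (:pid, 'attr_2', 'value 2');
-- ...further inserts for the denormalized rows...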

Oracle: avoid deadlock between 2 procedures

I have two procedures that I want to run on the same table: one uses the birth date, and the other updates the first and last name taken from a third table.
The one that uses the birth date to update the age field runs over the whole table; the one that updates the names only touches the rows that appear in the third table, matched on a key.
So I launched both and got a deadlock! Is there a way to prioritize one of them? I read about NOWAIT and SKIP LOCKED for the update, but then how would I get back to the rows that were skipped?
Hope you can help me on this!!
One possibility is to lock all the rows you will update at once. Doing all the updates in a single update statement will accomplish this. Or:
select whatever from T
where ...
for update;
Another solution is to create what I call a "Gatekeeper" table. Both procedures must lock the Gatekeeper table in exclusive mode before updating the table in question. The second procedure will block until the first commits, but won't deadlock. In 11g you can create a table with no space allocated.
A variation is to insert a row into the Gatekeeper and lock only that row with SELECT ... FOR UPDATE; that way you can reuse the Gatekeeper in other situations.
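A sketch of both variants (names hypothetical):

CREATE TABLE gatekeeper (id NUMBER PRIMARY KEY);
INSERT INTO gatekeeper VALUES (1);
COMMIT;

-- Table-level variant: run at the start of each procedure
LOCK TABLE gatekeeper IN EXCLUSIVE MODE;  -- second caller blocks until the first commits

-- Row-level variant: lock just the one row instead
SELECT id FROM gatekeeper WHERE id = 1 FOR UPDATE;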
I would guess that you got deadlocked because the update for all the rows and the update for a small set of rows accessed the rows in different orders.
The former used a full scan and reached Row A first, then went on to other rows, eventually trying to lock Row B. However, the other query was driven from an index or a join, already had Row B locked, and was off to lock Row A when it found it already taken.
So, the fix: firstly, having an age column that needs to be constantly modified is a really bad idea. Perhaps it was done to allow indexing on age, but with a correctly written query an index on date of birth will find the same records just as quickly. You've broken normalisation rules and ended up coding yourself a deadlocking application. Hopefully you are only updating the rows that need to be updated, not all of them regardless -- I mean, that would just be insane.
The best solution is to get rid of that design flaw.
The not-so-good solution is to deconflict your queries by running them at different times, or by using DBMS_Lock so that only one of them can run at any time (sketched below).
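A sketch of the DBMS_Lock approach; the lock name is arbitrary, but both procedures must request the same one:

DECLARE
  l_handle VARCHAR2(128);
  l_status INTEGER;
BEGIN
  DBMS_LOCK.ALLOCATE_UNIQUE('age_name_update_lock', l_handle);
  l_status := DBMS_LOCK.REQUEST(lockhandle        => l_handle,
                                lockmode          => DBMS_LOCK.X_MODE,
                                timeout           => DBMS_LOCK.MAXWAIT,
                                release_on_commit => TRUE);
  IF l_status = 0 THEN
    -- run this procedure's updates here; the other procedure blocks
    -- on the same lock name until this transaction commits
    NULL;
  END IF;
END;
/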
