Understanding @Transactional when a table is used as a queue - Spring

I have a table which looks something like the following:
| id | task | status      |
| -- | ---- | ----------- |
| 1  | xxxx | done        |
| 2  | xxxx | in_progress |
| 3  | xxxx | in_progress |
| 4  | xxxx | in_progress |
| 5  | xxxx | todo        |
We have handlers built with Spring Boot/JPA.
The first handler pulls a few records (which are in todo) from the table above and marks them as in_progress.
A second handler then pulls a few more todo records from the table and marks them as in_progress.
The second handler could potentially pull todo records even while the first handler hasn't finished marking its records as in_progress.
I have used transactions for an all-or-nothing approach.
Would a @Transactional annotation (or transactions) solve such an issue? If not, what are the ways this could be tackled using Spring/JPA (and not an external queue)?
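For illustration, here is a minimal sketch of the kind of handler described above. It assumes Spring Boot 3 (Jakarta Persistence), Spring Data JPA, and a hypothetical Task entity mapped to the id/task/status columns; none of these names come from the question. It pairs @Transactional with a pessimistic lock so that reading the todo rows and marking them in_progress form one unit of work:

```java
import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.Id;
import jakarta.persistence.LockModeType;
import java.util.List;
import org.springframework.data.domain.PageRequest;
import org.springframework.data.domain.Pageable;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Lock;
import org.springframework.data.jpa.repository.Query;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

// Hypothetical entity for the table shown above.
@Entity
class Task {
    @Id
    @GeneratedValue
    private Long id;
    private String task;
    private String status;

    public Long getId() { return id; }
    public String getStatus() { return status; }
    public void setStatus(String status) { this.status = status; }
}

interface TaskRepository extends JpaRepository<Task, Long> {

    // Pessimistic row locks keep a second handler from reading the same
    // "todo" rows while this transaction still holds them.
    @Lock(LockModeType.PESSIMISTIC_WRITE)
    @Query("select t from Task t where t.status = 'todo' order by t.id")
    List<Task> findNextTodoBatch(Pageable page);
}

@Service
class TaskClaimHandler {

    private final TaskRepository tasks;

    TaskClaimHandler(TaskRepository tasks) {
        this.tasks = tasks;
    }

    // The read and the status change commit (or roll back) together;
    // the row locks are released when the transaction ends.
    @Transactional
    public List<Task> claimBatch(int batchSize) {
        List<Task> batch = tasks.findNextTodoBatch(PageRequest.of(0, batchSize));
        batch.forEach(t -> t.setStatus("in_progress"));
        return batch; // dirty checking flushes the status updates on commit
    }
}
```

With the default read-committed isolation, @Transactional on its own only gives you all-or-nothing semantics; it does not stop two concurrent handlers from selecting the same todo rows before either one commits, which is why the sketch also takes row locks.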

Related

How to structure multiple readers in Spring Batch

I am a new Spring Batch user.
Below is a sample data structure.
user table:
| id | name | age |
| -- | ---- | --- |
| 1  | park | 12  |
| 2  | kim  | 13  |
user_service_history table:
| id | user_id | status  |
| -- | ------- | ------- |
| 1  | 1       | create  |
| 2  | 1       | connect |
| 3  | 1       | delete  |
| 4  | 2       | connect |
Can Spring Batch do this flow?
I tried a step with two readers, but Spring Batch only allows one reader per step... :(
```
Job {
  step {
    reader {
      // this reader reads id and name
      List<user> userList = select id, name from user
    },
    processor { /* do some processing on userList */ },
    writer { /* do nothing */ }
  },
  step {
    reader(List<user> userList) {  // this parameter is delivered from the processor above
      // this reader reads user_service_history
      select user_id, status from user_service_history joined with user where user in userList
    },
    processor { /* do some processing */ },
    writer { /* finally write the wanted data */ }
  }
}
```
You don't need two readers for that. You have a few options:
- use a single reader with a query that joins data from both tables
- create a custom reader that reads data as needed from both tables
- use the driving query pattern, where a reader reads items from one table and a processor enriches those items from the other table; items here would be users (see the sketch below)
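Here is a minimal sketch of that driving query pattern, assuming Spring Batch 5 and Spring Data JPA. The User and UserServiceHistory types, the UserServiceHistoryRepository with its findByUserId query method, and the injected JpaPagingItemReader are all assumptions for illustration, not part of the original question:

```java
import java.util.List;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.repository.JobRepository;
import org.springframework.batch.core.step.builder.StepBuilder;
import org.springframework.batch.item.ItemProcessor;
import org.springframework.batch.item.ItemWriter;
import org.springframework.batch.item.database.JpaPagingItemReader;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.transaction.PlatformTransactionManager;

// A user plus its history rows, produced by the processor and consumed by the writer.
record EnrichedUser(User user, List<UserServiceHistory> history) {}

// The processor enriches each driving item (a user) with its history rows.
class UserHistoryEnricher implements ItemProcessor<User, EnrichedUser> {

    private final UserServiceHistoryRepository historyRepository;

    UserHistoryEnricher(UserServiceHistoryRepository historyRepository) {
        this.historyRepository = historyRepository;
    }

    @Override
    public EnrichedUser process(User user) {
        // One lookup per driving item, keyed by the id read by the reader.
        List<UserServiceHistory> history = historyRepository.findByUserId(user.getId());
        return new EnrichedUser(user, history);
    }
}

// A single chunk-oriented step: read users, enrich with history, write the result.
@Configuration
class EnrichmentStepConfig {

    @Bean
    Step enrichUsersStep(JobRepository jobRepository,
                         PlatformTransactionManager txManager,
                         JpaPagingItemReader<User> userReader,
                         UserHistoryEnricher enricher,
                         ItemWriter<EnrichedUser> writer) {
        return new StepBuilder("enrichUsersStep", jobRepository)
                .<User, EnrichedUser>chunk(100, txManager)
                .reader(userReader)
                .processor(enricher)
                .writer(writer)
                .build();
    }
}
```

Compared with the single joined reader, this costs one extra query per user, but it keeps the reader trivial and avoids trying to pass data from one step's processor into another step's reader, which Spring Batch does not support directly.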

Oracle's V$LOGMNR_CONTENTS for a table with a UDT - connecting UPDATE and INTERNAL operations

I'm updating an Oracle table that has a UDT (user-defined type) column and then querying the V$LOGMNR_CONTENTS view. I'm seeing that for each updated row there are two records, UPDATE and INTERNAL. I need to figure out how to link them, because the UPDATE operation has a temporary value in ROW_ID and the correct value appears only in the INTERNAL operation, and I'm not sure how their SCN numbers relate. The approach I'm considering is to keep a queue of UPDATEs per DATA_OBJ# and link them to the INTERNALs FIFO. Is there something nicer I'm missing?
Script:
```sql
CREATE TYPE srulon AS OBJECT (name VARCHAR2(30), phone VARCHAR2(20));
create table root.udt_table (myrowid rowid, myudt srulon);
BEGIN rdsadmin.rdsadmin_util.switch_logfile; END;
insert into root.udt_table values (null, srulon('small', '1234'));
commit;
BEGIN rdsadmin.rdsadmin_util.switch_logfile; END;
insert into root.udt_table values (null, srulon('small', '1234'));
update root.udt_table set myrowid = rowid, myudt = srulon('smaller', rowid);
commit;
BEGIN rdsadmin.rdsadmin_util.switch_logfile; END;
```
Query (after START_LOGMNR for the last log):
```sql
select scn, SEQUENCE#, operation, SQL_REDO, ROW_ID from V$LOGMNR_CONTENTS
where session# = 6366 and not operation like '%XML%'
order by scn, SEQUENCE#;
```
Results:
| SCN | SEQUENCE# | OPERATION | ROW_ID | SQL_REDO |
| :--- | :--- | :--- | :--- | :--- |
| 240676056 | 1 | INTERNAL | AAB1avAAAAAAwT7AAA | NULL |
| 240676056 | 1 | UPDATE | AAAAAAAAAAAAAAAAAA | update "ROOT"."UDT_TABLE" a set a."MYROWID" = 'AAB1avAAAAAAwT7AAA' where a."MYROWID" IS NULL; |
| 240676057 | 5 | INTERNAL | AAB1avAAAAAAwT7AAA | NULL |
| 240676058 | 1 | UPDATE | AAAAAAAAAAAAAAAAAA | update "ROOT"."UDT_TABLE" a set a."MYROWID" = 'AAB1avAAAAAAwT7AAB' where a."MYROWID" IS NULL; |
| 240676059 | 5 | INTERNAL | AAB1avAAAAAAwT7AAB | NULL |
| 240676069 | 1 | COMMIT | AAAAAAAAAAAAAAAAAA | commit; |
The System Change Number (SCN) is the main mechanism Oracle uses to keep track of database transactional activity. An SCN is a stamp that defines a committed version of the database at a particular point in time; every committed transaction gets a unique SCN, and the database records all changes in terms of SCNs, so the SCN is essentially a running number for database changes.
To get the current SCN use
DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER()
So there is no other connection between the UPDATE and the INTERNAL operation than the fact that the UPDATE SCN is lower than the INTERNAL SCN - there is no calculated or logical connection.
The mistake was to order by scn, SEQUENCE#.
Once you remove the ORDER BY clause, each INTERNAL statement follows its corresponding UPDATE.
Credit goes to srulon.

Ordering/Reordering the list based on a specific column which holds an order number in JPA

I'm using JPA and I have a table with the structure below:
| ID | Title | OrderNumber |
| -- | ----- | ----------- |
| 1  | Test  | 0           |
| 2  | Test2 | 1           |
So, it's easy to order the list by OrderNumber in queries, but I haven't found an appropriate way to update/set its value during a reordering operation yet (I have an option on the user side to change the order of the list by drag and drop).
What is a suitable way to solve this problem without any stored procedure in the database?
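Since the question is about plain JPA/Spring Data, here is a minimal sketch of one common approach, assuming a hypothetical Item entity with an orderNumber column (none of the names come from the question): when a row is dragged from oldPos to newPos, shift every row between the two positions by one and then set the moved row's number.

```java
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Modifying;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.query.Param;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

interface ItemRepository extends JpaRepository<Item, Long> {

    // Moving an item up: rows from newPos up to (but excluding) oldPos shift down by one.
    // clearAutomatically makes later reads in this transaction see the shifted values.
    @Modifying(clearAutomatically = true)
    @Query("update Item i set i.orderNumber = i.orderNumber + 1 "
         + "where i.orderNumber >= :newPos and i.orderNumber < :oldPos")
    void shiftDown(@Param("newPos") int newPos, @Param("oldPos") int oldPos);

    // Moving an item down: rows after oldPos up to and including newPos shift up by one.
    @Modifying(clearAutomatically = true)
    @Query("update Item i set i.orderNumber = i.orderNumber - 1 "
         + "where i.orderNumber > :oldPos and i.orderNumber <= :newPos")
    void shiftUp(@Param("oldPos") int oldPos, @Param("newPos") int newPos);
}

@Service
class ReorderService {

    private final ItemRepository items;

    ReorderService(ItemRepository items) {
        this.items = items;
    }

    // Called from the drag-and-drop endpoint with the item's old and new positions.
    @Transactional
    public void move(long itemId, int oldPos, int newPos) {
        if (newPos < oldPos) {
            items.shiftDown(newPos, oldPos);
        } else if (newPos > oldPos) {
            items.shiftUp(oldPos, newPos);
        }
        Item moved = items.findById(itemId).orElseThrow();
        moved.setOrderNumber(newPos); // flushed on commit by dirty checking
    }
}
```

If the numbers don't have to stay contiguous, a common alternative is to leave gaps (10, 20, 30, ...) and only renumber when a gap runs out, which turns most moves into a single-row update.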

Cucumber - run same feature a number of times depending on records in a database

I have a cucumber feature that checks a website has processed payment files (BACS, SEPA, FPS, etc.) correctly. The first stage of the process is to create the payment files, which in turn creates the expected-result data in a database. This data is then used for validation against the payment processing website.
If I process one file, my feature works perfectly, validating the expected results. Where I'm stuck is how to get the feature to run n times, depending on the number of records/files that were originally processed.
I've tried an 'Around' hook using a record-count iteration with no joy, I can't see how to fit it into a scenario outline, and now I think that perhaps a rake task to call the feature might work.
Any ideas would be greatly appreciated.
Here's a sample of the feature:
```gherkin
Feature: Processing SEPA Credit Transfer Files. Same Day Value Payments.

  Background:
    Given we want to test the "SEPA_Regression" scenario suite
    And that we have processed a "SEPA" file from the "LDN" branch
    And we plan to use the "ITA1" environment
    Then we log in to "OPF" as a "SEPA Department" user

  #feature #find_and_check_sepa_interchange #all_rows
  Scenario: Receive SEPA Credit Transfer Files for branch
    Given that we are on the "Payment Management > Interchanges" page
    When I search for our Interchange with the following search parameters:
      | Field Name            |
      | Transport Date From   |
      | Bank                  |
      | Interchange Reference |
    Then I can check the following fields for the given file in the "Interchanges" table:
      | Field Name            |
      | Interchange Reference |
      | Transport Date        |
      | File Name             |
      | File Format           |
      | Clearing Participant  |
      | Status                |
      | Direction             |
      | Bank                  |
    When I select the associated "Interchange Id" link
    Then the "Interchange Details" page is displayed
```
Update: I've implemented nested steps for the feature so that I can fetch the database records first and feed each set of records (or at least the row id) into the main feature, like so:
Feature:
```gherkin
#trial_feature
Scenario: Validate multiple Files
  Given we have one or more records in the database to process for the "SEPA_Regression" scenario
  Then we can validate each file against the system
```
Feature steps:
```ruby
Then(/^we can validate each file against the system$/) do
  x = 0
  while x <= $interchangeHash.count - 1
    $db_row = x
    # Get the other sets of data using the file name in the query
    id = $interchangeHash[x]['id']
    file_name = $interchangeHash[x]['CMS_Unique_Reference_Id']
    Background.get_data_for_scenario(scenario, file_name)
    steps %{
      Given that we are on the "Payment Management > Interchanges" page
      When I search for our Interchange with the following search parameters:
        | Field Name            |
        | Transport Date From   |
        | Bank                  |
        | Interchange Reference |
      Then I can check the following fields for the given file in the "Interchanges" table:
        | Field Name            |
        | Interchange Reference |
        | Transport Date        |
        | File Name             |
        | File Format           |
        | Clearing Participant  |
        | Status                |
        | Direction             |
        | Bank                  |
      When I select the associated "Interchange Id" link
      Then the "Interchange Details" page is displayed
    }
    x += 1  # move on to the next database record
  end
end
```
Seems a bit of a 'hack' but it works.
If you have batch processing software, then you should have several Given (setup) steps, one When (trigger) step, and several Then (criteria) steps:
```gherkin
Given I have these SEPA bills
  | sepa bill 1 |
  | sepa bill 2 |
And I have these BAC bills
  | bac bill 1 |
  | bac bill 2 |
When the payments are processed
Then these sepa bills are completed
  | sepa bill 1 |
  | sepa bill 2 |
And these bac bills are completed
  | bac bill 1 |
  | bac bill 2 |
```
It's simpler, easier to read what is supposed to happen, and can be expanded further. The work should be done in the step definitions that set up and verify the data.

pragma autonomous_transaction in a trigger

I have written a trigger on one table which deletes data from another table when a condition is met.
The trigger has pragma autonomous_transaction, and the trigger works as intended. However, I wonder whether there could be any problems in the future, say if data is inserted by multiple users/sources at the same time, etc. Any suggestions?
Source table t1:
| user_id | auth_name1 | auth_name2 | data |
| ------- | ---------- | ---------- | ---- |
| 1       | Name1      | Name2      | d1   |
| 2       | Name3      | Name4      | d2   |
| 3       | Name5      | Name1      | d3   |
Target table t2:
| record_id | identifier | status | data1 |
| --------- | ---------- | ------ | ----- |
| 100       | Broken     | 11     | Name1 |
| 101       | Reminder   | 99     | Name1 |
| 102       | Broken     | 99     | Name2 |
| 103       | Broken     | 11     | Name4 |
Trigger code:
```sql
create or replace trigger "ca"."t$t1"
  after update of auth_name1, auth_name2 on ca.t1
  for each row
declare
  pragma autonomous_transaction;
begin
  if :new.auth_name1 is not null and :new.auth_name2 is not null then
    delete from ca.t2 ml
     where ml.identifier = 'Broken'
       and data1 = regexp_substr(:new.auth_name1, '\S+$') || ' ' || regexp_substr(:new.auth_name1, '^\S+')
       and status = 11;
    commit;
  end if;
end t$t1;
```
Using an autonomous transaction for anything other than logging that you want to be preserved when the parent transaction rolls back is almost certainly an error. This is not a good use of an autonomous transaction.
What happens, for example, if I update a row in t1 but my transaction rolls back? The t2 changes have already been made and committed, so they don't roll back. That generally means the t2 data is now incorrect. The whole point of transactions is to ensure that a set of changes is atomic and is either completely successful or completely reverted. Allowing code to be partially successful is almost never a good idea.
I'm hard-pressed to see what using an autonomous transaction buys you here. You'll often see people incorrectly using autonomous transactions to work around mutating table errors. But the code you posted wouldn't generate a mutating table error unless there was a row-level trigger on t2 that was also trying to update t1, or some similar mechanism that was introducing a mutating table. If that's the case, though, using an autonomous transaction is generally even worse, because the autonomous transaction cannot see the changes being made in the parent transaction, which almost certainly causes the code to behave differently than you would like.
