Oracle Forms UNIQUE ID generation - uniqueidentifier

I am creating an Oracle Form. This form has a field by the name FILE_NUM. There is one more field by the name CLIENT_ID in the form. I have to generate unique FILE_NUM. The process is:
If the CLIENT_ID already exists in the table, get the FILE_NUM and assign it to the new record
ELSE, take the maximum of FILE_NUM, add 1 and assign it to the new record.
This should be taken care of when multiple users are working on the form, so I did the following:
In the KEY-COMMIT trigger, I check whether there is a lock on the table.
If the table is locked, I make the form wait for 3 seconds and check again.
If the table is not locked, I lock the table and insert the records with the above check.
My question is: is this the right way to do it? Is there another way to generate the FILE_NUM (maybe via a trigger)? The problem with the KEY-COMMIT trigger approach is that if the form is closed forcefully, the lock is not removed. This causes further issues, hence I want to remove the locking.
Please advise.

This is the correct way, but indeed the lock can remain behind in some cases.
If the numbers don't have to be consecutive, you can use a sequence instead.
A sequence gives you a number whenever you need one, and that number is guaranteed to be unique.
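A minimal sketch of the sequence approach, assuming the base table is called FILES, the block is called FILE_BLK and the sequence FILE_NUM_SEQ (all names are placeholders):

-- Create the sequence once in the database:
CREATE SEQUENCE file_num_seq START WITH 1 INCREMENT BY 1;

-- PRE-INSERT trigger on the block: reuse the client's existing FILE_NUM
-- if there is one, otherwise take the next sequence value.
DECLARE
  v_file_num NUMBER;
BEGIN
  BEGIN
    SELECT file_num
      INTO v_file_num
      FROM files
     WHERE client_id = :file_blk.client_id
       AND ROWNUM = 1;
  EXCEPTION
    WHEN NO_DATA_FOUND THEN
      SELECT file_num_seq.NEXTVAL
        INTO v_file_num
        FROM dual;
  END;
  :file_blk.file_num := v_file_num;
END;

Note that a sequence never blocks other sessions, but it can leave gaps (for example after a rollback), which is why this only works if FILE_NUM values don't have to be consecutive.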

Model-driven PowerApp: Best practice to display subgrid of records with no appropriate primary column name

Background
Each Dataverse table contains a primary name column. When displayed in a subgrid, clicking on the primary name column will navigate to the form so that the user can edit that row. Most subgrids in my application work this way.
The Problem
I have a Course form with a list of participants displayed in a subgrid. The subgrid displays each student's name (as a link) and the grade received in the course. There is no appropriate primary name column for this Participant table. To edit a participant record, the user must select the row in the subgrid and then click the subgrid's Edit button. As a result, this UI is different from all the other subgrids in the application, and I know that users will click the student name to try to edit the participant record and be confused when they are presented with the student record instead.
Am I missing something? Is there a better way to handle this?
It's a common problem that I face quite often. Here is what I usually do.
Make sure the primary name column always contains information relevant to the user, so that they can quickly identify a record. Sometimes that requires copying information from one or several other columns into the primary column.
In your case that would probably mean concatenating the student's name and the grade.
How to do that?
Common to all solutions below
Use one of the solutions below to copy the content of one or several fields into the primary column.
Make sure the solution you select also updates the content of the primary name column when one of the copied fields is updated.
Remove or hide the primary column from the form; the name of the record will be displayed at the top of the form anyway, and you probably don't want users to play with it.
Display the primary name column in every subgrid.
I would recommend not adding the fields copied into the primary column in the subgrids to avoid confusion.
Solution 1 - Classic Workflow
Create a classic workflow that runs when a record is created / updated
Pros:
Very quick to put in place
Runs synchronously (users will see the name updated in real-time)
Cons:
Not very practical if you need to add business logic (for example, using different fields as the source depending on a certain condition)
Solution 2 - Power Automate
Create a Flow that runs when a record is created / updated
Pros:
You can implement complex business logic in your Flow
Cons:
Runs asynchronously (users will have to refresh the page after the creation of a record to see the record's name)
According to Power Automate licensing, that flow would most likely be considered an "enterprise flow", for which you are supposed to pay $100 / month. That specific point must be taken with a grain of salt: I have had several discussions with Microsoft about it and they haven't given me a clear answer about what counts as an enterprise flow.
Solution 3 - Plugin
Create a plugin that executes when a record is created / updated
Pros:
You can implement very complex business logic in your plugin
It can run synchronously
Cons:
Pro-code (I list it as a con since Model-Driven Apps are a low-code / no-code approach, but there is nothing wrong with pro-code per se)
Developing a new plugin for each entity where you need this logic is kind of overkill in my opinion. I would consider developing something very generic that would only require some sort of configuration when the logic needs to be applied to a new table.

Form WHERE clause

I have an APEX form I'm developing for "user settings". I have a table with a sequence as the primary key and the user's ID in another column, in addition to a few columns where each user's saved settings are stored (things like "N" for "do not receive notices").
I haven't used Oracle APEX in a while, so excuse this likely newbie question. The insert works fine, but I'm having trouble making the form show only the current user's values. In my Form Region the source is set to my table, and I have a WHERE clause like this:
USER_ID = 813309
But that's not working (813309 is my ID and I'm just hard-coding it for now). The form always comes up with a "New" record.
For a form to load a specific record you can set the primary key page item to the value you need. You can do so in the URL using the link builder from another page, or you can set a computation on the item. That is what I would try in your case: add a computation to your item P_USER_ID of type "Static Value" with the value 813309. Make sure the computation happens before the "Fetch Row" process - the value obviously needs to be set before that process runs.
In such cases, I prefer creating a Report + Form combination (using the Wizard, of course): it creates an interactive report (so that you can review data in a table), and a form which is used to add new records or update/delete existing ones.
Doing so, when you pick a user in interactive report and click the icon at the beginning of a row, Apex redirects you to the form page, passing primary key column value to the form which then fetches appropriate data from the table.
Not that it won't work the way you're trying to do it; it's just simpler if you let APEX do everything for you.
So: did you create an Automatic Row Fetch pre-rendering process? If not, do so, because without it APEX doesn't know what to fetch. Also, hard-coding the user_id won't do much good. Consider storing the username in the table so that you can reference it via the :APP_USER APEX variable.
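A minimal sketch of that last suggestion, assuming the table is called USER_SETTINGS with columns USER_ID (primary key) and USERNAME, and that P_USER_ID is the form's primary key item (all names are placeholders):

-- Form Region WHERE clause, filtering on the signed-in user:
USERNAME = :APP_USER

-- Or keep USER_ID as the key and add a pre-rendering computation on
-- P_USER_ID of type "SQL Query (return single value)", sequenced
-- before the Automatic Row Fetch process:
select user_id
  from user_settings
 where username = :APP_USER

Either way, the Automatic Row Fetch process must exist and run after the key item is populated, otherwise the form will keep opening on a "New" record.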

Item validation both at item and record level

I'm new to Oracle Forms.
I want to be sure that an item considered "valid" as soon as it is entered, at time T0 (through a WHEN-VALIDATE-ITEM trigger), is still valid when the relevant row is inserted (or updated) and committed at time T1, where T1-T0 could be, say, a long coffee break, during which the system state may have changed in a way that invalidates the item.
I thought about coding a specific item-level program unit to be called both by the WVI trigger and by a higher-level trigger. Which one would be best?
Is this double check a common practice in Oracle Forms?
Note: I need to double-check both in single-block form layouts and in master-detail layouts.
Thank you.
You don't need to double-check for normal validation; a WHEN-VALIDATE-ITEM trigger is enough to make sure the item is valid. If a check involves more than one field (for example, if the person is a firm it needs only a last name, otherwise it needs both a first name and a last name), you can use the WHEN-VALIDATE-RECORD trigger.
If your data depends on the system date, or on other data that might be inserted or updated between validation and commit, you should also place your validation in the PRE-INSERT and PRE-UPDATE triggers. These always fire just before the insert or update.
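A minimal sketch of that double check, assuming a block ORD with an item STATUS whose value must still exist in a STATUS_CODES lookup table at commit time (all names are placeholders):

-- Shared program unit in the form:
PROCEDURE check_status_valid (p_status IN VARCHAR2) IS
  v_dummy VARCHAR2(1);
BEGIN
  SELECT 'Y'
    INTO v_dummy
    FROM status_codes
   WHERE code = p_status
     AND active = 'Y';
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    MESSAGE('Status ' || p_status || ' is no longer valid.');
    RAISE FORM_TRIGGER_FAILURE;
END;

-- WHEN-VALIDATE-ITEM on ORD.STATUS (check at T0):
check_status_valid(:ord.status);

-- PRE-INSERT and PRE-UPDATE on block ORD (re-check at T1, just before the DML):
check_status_valid(:ord.status);

Because the same program unit is called from both places, the item-level check and the commit-time check can never drift apart.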

Determine new record in PreWriteRecord event handler and check value of joined field

There is a custom field "Lock Flag" in the Account BC, namely in the S_ORG_EXT_X table. This field is made available in the Opportunity BC using a join to the above table. The join specification is: Opportunity.Account Id = Account.Id. Account Id is always populated when creating a new opportunity. The requirement is that, for newly created records in the Opportunity BC, if "Lock Flag" is equal to 'Y' then we should not allow the record to be created and we should show a custom error message.
My initial proposal was to use a Runtime Event that calls the Data Validation Manager business service, where the validation rule is evaluated and the error message shown. Since we have to decide whether to write the record or not, the logic should be placed in the PreWriteRecord event handler, because by the time WriteRecord fires the row has already been committed to the database.
The main problem was how to determine whether it is a new record or an updated one. We have the WriteRecordNew and WriteRecordUpdated runtime events, but they are fired after the record is actually written, so they don't prevent the user from saving the record. My next approach was to use eScript: write custom code in the BusComp_PreWriteRecord server script and call the BC's IsNewRecordPending method to determine if it is a new record, then check the flag and show the error message if needed.
But unfortunately I am faced with another problem. The joined field "Lock Flag" is not populated for newly created opportunity records. Remember we are talking about the Opportunity BC and the field lives in the S_ORG_EXT_X table. When we create a new opportunity we pick the account it belongs to. It is reproducible: OpportunityBC.GetFieldValue("Lock Flag") returns null for a newly created record and returns the correct value for records that were saved previously. For newly created opportunities we have to re-query the BC to see "Lock Flag" populated. I have found several documents, including Oracle's recommendation to use the PreDefaultValue property if we want to display a joined field value immediately after record creation. The most suitable expression I've found was Parent: BCName.FieldName, but it does not apply here, because the active BO is Opportunity and the Opportunity BC is the primary one.
Thanks for your patience if you have read up to here. Finally, here are my questions:
Is there any way to handle the PreWrite event and determine whether it is a new record or not, without using eScript and the BC.IsNewRecordPending method?
How can I get the value of a joined field for a newly created record, especially in the PreWriteRecord event handler?
It is Siebel 8.1.
UPDATE: I have found an answer to the first part of my question. Now it seems so simple that I wonder why I didn't do it initially. Here is the solution.
Create a Runtime Event triggered on PreWriteRecord. Specify a call to the Data Validation Manager business service.
In DVM, create a ruleset and a rule whose condition is
NOT(BCHasRows("Opportunity", "Opportunity", "[Id]='"+[Id]+"'", "AllView"))
That's it. We are searching for a record with the same Row Id. If it is a new record, there shouldn't be anything in the database yet (remember that we are in the PreWriteRecord handler) and the function returns FALSE. If we are updating an existing row, we get TRUE. Reversing the result with NOT makes DVM raise an error for new records.
As for the second part of my question, credit goes to #RanjithR, who proposed using a PickMap to populate the joined field (see below). I have checked that method and it works fine, at least when you have an appropriate PickMap.
We Siebel developers have used scripting to correctly determine whether a record is new. One non-scripting way you could try is to use Runtime Events to set a profile attribute during the BusComp NewRecord event, then check that attribute in the PreWrite event to see if the record is new. However, there is always a chance that the user might undo a record; those scenarios are tricky.
Another option: try invoking the BC method IsNewRecordPending from a Runtime Event. I haven't tried this.
For the second part of the query, I think you could easily solve your problem using a PickMap.
On the Opportunity BC, when you pick the Account, just add one more pick map entry to copy the Lock Flag from Account into the corresponding field on the Opportunity BC. When the user picks the Account, they will also pick up the lock flag, and your script will work in PreWriteRecord.
May I suggest another solution, again, I haven't tried it.
When new records are created, the ModificationNumber field is set to 0. Every time you modify the record, the ModificationNumber increments by 1.
Set up a Data Validation Manager ruleset and trigger it from the PreSetFieldValue event of the Account field on the Opportunity BC. Check for (LockFlag = 'Y' AND (ModificationNumber IS NULL OR ModificationNumber = 0)) and throw the error. DVM should then throw the error when new records are created.
Again, best practice says don't rely on ModificationNumber. You could set a profile attribute to signal NewRecord, then use that attribute in the DVM. But please remember to clear the value of the profile attribute in WriteRecord and UndoRecord.
Let us know how it went !

Maintaining uniqueness of records

There are almost 3 million financial transaction records in my database. These records are loaded from external files containing the following fields, which are mapped to the table's columns:
Account, Date, Amount, Particulars/Description/Details/Narration
Now there is a need to maintain the uniqueness of already loaded and future records.
Since there was no uniqueness in the external files that are already loaded, I think we have to update the existing records by building a unique key from the given fields, but it is quite clear that the fields in the external files may contain duplicates.
How can we maintain uniqueness in such a way that we can identify whether a transaction from a file has already been loaded? All suggestions are welcome.
Edit 1
The currently loaded records are confirmed to be valid; the need to maintain uniqueness has only just come up because some missing records from older files, or entire missing files, now have to be loaded.
Edit 2
Existing records may contain duplicates based on the given 4 fields, i.e. the same values for Account, Date, Amount and Particulars for two or more valid transactions, but it is certain that these records are valid even with duplicate values.
Now, for loading the missing records, we need to identify whether a record has already been loaded so that we don't load it a second time. So, to me, it looks very hard to know whether a record has already been loaded based on these fields; I see it as beyond the limits of these fields.
Edit 3
The situation has changed and this is no longer a valid question, but it seems better to keep it here for others. It has been agreed to add a unique key to the records and to check against this key for duplication.
Note - following some clarification from the OP this answer is not relevant to their scenario. The problem is a political or business problem rather than a technical one. I will leave this answer as a solution to a hypothetical question because it may still be of use to some future seekers.
My other response addresses the OP's actual situation.
It seems like you need a compound unique key:
alter table your_table add constraint your_table_uk
unique (Account, Date, Amount, Particulars)
using index
particulars seems a bit woolly as a source of uniqueness, but presumably an account can have more than one transaction for the same amount on any given day, so you need all four columns to guarantee uniqueness of the row.
Or perhaps, as #ypercube suggests, only (Account, Date, Particulars) are necessary.
I have suggested a unique key rather than a primary key constraint because composite primary keys are bad news when it comes to enforcing foreign keys. In this case I would suggest you add a synthetic primary key, populated with a sequence.
You say the loaded records have proven validity, but if that is not the case, change the ALTER TABLE statement to use the EXCEPTIONS INTO clause to find the duplicated rows. You will need a special table to capture the constraint violations.
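A minimal sketch of the EXCEPTIONS INTO approach (table and column names are illustrative; the date column is called TXN_DATE here because DATE is a reserved word):

-- The default EXCEPTIONS table is created by a script shipped with the database:
-- @?/rdbms/admin/utlexcpt.sql

alter table your_table add constraint your_table_uk
  unique (account, txn_date, amount, particulars)
  exceptions into exceptions;

-- If the ALTER fails, the duplicated rows can be inspected with:
select t.*
  from your_table t
 where t.rowid in (select row_id from exceptions);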
"Existing records may have duplicate records based on given 4 fields
i.e. same values for Account, Date, Amount and Particulars for two or
more valid transactions, but it is sure that these records are valid
even with duplicate values."
But how can anybody tell, if there is no token of uniqueness in the loaded data or the source files? What does validity even mean?
"Now for loading missing records we need to identify if a record is
already loaded or not so that we don't load a record which is already
loaded."
Without an existing source of uniqueness you cannot do this. Because if you have two rows for a given combination of (Account, Date, Amount, Particulars) and that's okay, what are the rules for determining whether a third instance of (Account, Date, Amount, Particulars) is a record which has already been loaded, hence invalid, or a record which has not been loaded, hence valid?
"So, to me, it looks very hard to know if a record is already loaded
based on these fields. I see it as beyond the limits of these fields"
You're right to say that the solution cannot be found in the data as you describe it. But the solution is actually very simple. You go to the people who have asserted the validity of the loaded records and present them with a list of these additional records. They'll be able to use their skill and judgement to tell you which records are valid, and you load those.
" it is my duty to find the solution"
No it is not your duty. Right now the duty lies on the shoulders of the data owner to define their data set accurately, and that includes identifying a business key. They are the ones abrogating their responsibilities.
Under the circumstances you have three choices:
Refuse to load any further records until the data owner does their duty.
Load all the records presented to you for loading, without any validation.
Use the horrible NOVALIDATE syntax.
NOVALIDATE is a way of enforcing validation rules for future rows but ignoring violations in the existing data. Basically it's a technical kludge for a political problem.
SQL> select * from t23
/

      COL1 COL2
---------- --------------------
         1 MR KNOX
         1 MR KNOX
         2 FOX IN SOCKS
         2 FOX IN SOCKS
SQL> create index t23_idx on t23(col1,col2)
/
Index created.
SQL> alter table t23 add constraint t23_uk
unique (col1,col2) novalidate
/
Table altered.
SQL> insert into t23 values (2, 'FOX IN SOCKS')
/
insert into t23 values (2, 'FOX IN SOCKS')
*
ERROR at line 1:
ORA-00001: unique constraint (APC.T23_UK) violated
SQL>
Note that you need to pre-create a non-unique index before adding the constraint. If you don't do that the database will build a unique index and that will override the NOVALIDATE clause.
I describe the NOVALIDATE as horrible because it is. It bakes data corruption into the database. But it is the closest thing you'll get to a solution.
This approach completely ignores the notion of "validity", so it will reject records which perhaps should have been loaded because they represent a "valid" nth occurrence of (Account, Date, Amount, Particulars). This is unavoidable. The good news is that nobody will be able to tell, because there are no defined rules for establishing validity.
Whatever option you choose, it is crucial that you explain it clearly to your boss, the data owner, the data owner's boss and whoever else you think fit, and get their written assent to go ahead. Otherwise, sometime down the line people will discover that the database is full of duplicate rows or somebody will complain that a "valid" record hasn't been loaded, and it will all be your fault ... unless you have a signed piece of paper with authorisation from the appropriate top brass.
Good luck
Haki's suggestion of using MERGE has the same effect as NOVALIDATE, because it would load new records and suppress all duplicates. However, it is even more of a kludge: it doesn't address the notion of uniqueness at all. Anybody with INSERT or UPDATE access would still be able to add any rows they liked. So this approach would only work if you could completely lock down privileges on that table, so that its data could only be manipulated through MERGE and no other DML. It depends on whether ongoing uniqueness matters. Again, a business decision.
Sounds like you need an upsert - or, as Oracle calls it, MERGE.
A MERGE operation between two tables allows you to handle two common situations:
The record already exists in the target table and I need to do something with it - either update it or do nothing.
The record does not exist in the target table - insert it.
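A minimal sketch of such a MERGE, assuming the incoming file is first loaded into a staging table STG_TRANSACTIONS and that the target table is TRANSACTIONS (names are placeholders; the date column is called TXN_DATE to avoid the reserved word):

merge into transactions t
using stg_transactions s
   on (    t.account     = s.account
       and t.txn_date    = s.txn_date
       and t.amount      = s.amount
       and t.particulars = s.particulars )
 when not matched then
   insert (account, txn_date, amount, particulars)
   values (s.account, s.txn_date, s.amount, s.particulars);

The WHEN MATCHED clause can be omitted (from Oracle 10g onwards) when the only goal is to skip rows that already exist; as noted above, this silently discards any genuinely valid repeated transactions.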
