Using PL/SQL transactions with wizards in Oracle APEX 5.0

I'm trying to use a transaction process with a wizard in APEX 5.0.
I want to register a new student in the database. On the first page of the wizard I want to create a savepoint (s1) and insert the student's information into the STUDENT table; on the second page I want to insert the information of the student's superior.
When the user clicks the Previous button, I want to roll back to savepoint s1 and undo the insert statement.
I tried to create a process for this, but the rollback statement on the second page can't see the savepoint I declared on the first page.
Can anyone help with that?

Apex uses connection pooling. Unlike client-server environments such as Oracle Forms, Apex is stateless. Database connections are extremely short and fleeting and are not tied to one Apex session; the session in Apex is a construct of Apex itself.
This means that transactional control does not work the way you'd expect. Rendering a page is a short database connection/session that ends when the page has rendered; submitting the page uses yet another database session.
Oracle Apex Documentation link
2.6.2 What Is a Session?
A session is a logical construct that establishes persistence (or stateful behavior) across page views. Each
session is assigned a unique identifier. The Application Express
engine uses this identifier (or session ID) to store and retrieve an
application's working set of data (or session state) before and after
each page view.
Because sessions are entirely independent of one another, any number
of sessions can exist in the database at the same time. A user can
also run multiple instances of an application simultaneously in
different browser programs.
Sessions are logically and physically distinct from Oracle database
sessions used to service page requests. A user runs an application in
a single Oracle Application Express session from sign in to sign out
with a typical duration measured in minutes or hours. Each page
requested during that session results in the Application Express
engine creating or reusing an Oracle database session to access
database resources. Often these database sessions last just a fraction
of a second.
Can you still use savepoints? Yes, but not just anywhere: you can use one within a single process. Can you set one on one page and then roll back to it from another? No, the technology just does not allow it. Even if it did, you'd have to deal with implicit commits, as outlined in Cristian_I's answer.
For the same reason you cannot use global temporary tables.
What CAN you use?
You could use apex collections. You can compare them to temporary tables, in that they will hold data in one apex session.
Simply store your information in collections and then process the data in them once you get to the end.
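A minimal sketch of that approach, assuming hypothetical item names (P1_FIRST_NAME, P1_LAST_NAME) and STUDENT columns: a page process on step 1 stashes the row in a collection, and a process on the final step persists it in one transaction.

```sql
-- Step 1 page process: stash the student in a collection
BEGIN
  APEX_COLLECTION.CREATE_OR_TRUNCATE_COLLECTION(
    p_collection_name => 'STUDENT_WIZARD');
  APEX_COLLECTION.ADD_MEMBER(
    p_collection_name => 'STUDENT_WIZARD',
    p_c001            => :P1_FIRST_NAME,
    p_c002            => :P1_LAST_NAME);
END;

-- Final step page process: persist everything in one transaction
BEGIN
  INSERT INTO student (first_name, last_name)
  SELECT c001, c002
  FROM   apex_collections
  WHERE  collection_name = 'STUDENT_WIZARD';

  APEX_COLLECTION.DELETE_COLLECTION(
    p_collection_name => 'STUDENT_WIZARD');
END;
```

Moving back a step only means truncating or editing the collection; nothing touches STUDENT until the final step.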
The other thing you can do is simply keep the data stored in your page items; session state is in effect, so you can still access the session state of the earlier pages' items on the final step. If for some reason you wish to move back a step and then "auto-clear" that page, all you need to do is clear the cache for that page. This is more difficult if you wish to use a tabular form somewhere, since you'd have to build it on a collection; I'd recommend a repeatable step in that case.

I think your problem is that Apex issues commit statements when you switch from one page to another.
A simple rollback or commit erases all savepoints. (You can find out more here.) According to a post by Dan McGhan, Apex issues implicit commits in the following situations:
On load, after a page finishes rendering
On submit, before branching to another page
On submit, if one or more validations fail, before re-rendering the page
After a PL/SQL process that contains one or more bind variables has completed
After a computation
When APEX_UTIL.SET_SESSION_STATE is called
When APEX_MAIL.PUSH_QUEUE is called
Maybe you can simulate savepoint functionality by using some temporary tables.

Since Apex is stateless and the results of each page request are always either fully committed or fully rolled back (i.e. no inter-page savepoints are possible), you need to make a choice between two strategies:
Option 1: allow the intermediate info to be committed to the table. One way to do this is to add a flag to the table, e.g. "status", which is set to "provisional" on the first page, and updated to "complete" on the second page. This may require changes to other parts of your application so they know how to deal with any abandoned records that are left in "provisional" status.
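A sketch of the status-flag approach, assuming a numeric key, a CREATED_DATE column, and the item names shown (all assumptions):

```sql
-- Page 1 process: provisional insert
INSERT INTO student (id, first_name, status, created_date)
VALUES (:P1_ID, :P1_FIRST_NAME, 'PROVISIONAL', SYSDATE);

-- Final page process: confirm the record
UPDATE student
SET    status = 'COMPLETE'
WHERE  id = :P1_ID;

-- A scheduled job can purge abandoned wizard rows, e.g.
DELETE FROM student
WHERE  status = 'PROVISIONAL'
AND    created_date < SYSDATE - 1;
```

Queries elsewhere in the application would then filter on status = 'COMPLETE'.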
Option 2: save the intermediate results in an Apex Collection. This data is available for the scope of the user's Apex session and is not accessible to other sessions, so would be ideal for this scenario. https://docs.oracle.com/database/121/AEAPI/apex_collection.htm#AEAPI531

Related

Cache and update regularly complex data

Let's start with background. I have an API endpoint that I have to query every 15 minutes and that returns complex data. Unfortunately this endpoint does not report what exactly changed, so it requires me to compare the data that I have in the db against everything returned and then execute updates, inserts, or deletes. This is pretty tedious...
I came to an idea: I can simply remove all data from certain tables and rebuild everything from scratch... But I also have to return this cached data to my clients, so there might be a situation where the db is empty during a client request because it is being "refreshed/rebuilt". And that can't happen, because I have to return something.
So I came to two ideas:
Lock the relevant db tables so that the client has to wait for the db refresh,
or
CQRS https://martinfowler.com/bliki/CQRS.html
Do you have any suggestions for how to solve the problem?
It sounds like you're using a relational database, so I'll try to outline a solution using database terms. The idea, however, is more general than that. In general, it's similar to Blue-Green deployment.
Have two data tables (or two databases, for that matter); one is active, and one is inactive.
When the software starts the update process, it can wipe the inactive table and write new data into it. During this process, the system keeps serving data from the active table.
Once the data update is entirely done, the system can begin to serve data from the previously inactive table. In other words, the inactive table becomes the active table, and vice versa.
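In Oracle, for instance, this swap is often implemented with a synonym, so clients never see an empty table. A sketch, in which all object names (feed_data_a, feed_data_b, feed_data_source) are made up for illustration:

```sql
-- Two physical tables; clients read through the synonym
CREATE TABLE feed_data_a AS SELECT * FROM feed_data_source WHERE 1 = 0;
CREATE TABLE feed_data_b AS SELECT * FROM feed_data_source WHERE 1 = 0;
CREATE OR REPLACE SYNONYM feed_data FOR feed_data_a;

-- Refresh cycle: rebuild the inactive table, then flip the synonym
TRUNCATE TABLE feed_data_b;
INSERT INTO feed_data_b SELECT * FROM feed_data_source;
COMMIT;
CREATE OR REPLACE SYNONYM feed_data FOR feed_data_b;  -- near-instant switch
```

The next refresh cycle rebuilds feed_data_a and flips the synonym back.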

Two database connection vs replication on one database

I have a web service which connects to an Oracle database. On the database side I have two databases: from the first I need to select some information, and in the second I need to perform some operations.
My question is which is better: connect only to the second database and replicate the needed data from the first via DBMS or scheduler jobs that run x times a day to refresh the data, or create two data sources on the Java side and, after selecting data from the first database, connect to the second one to perform the operations?
From my point of view, I'd access only the "second" database (in which you do those operations) and let it acquire data it needs from the "first" database via a database link.
That can be done directly, such as
select some_columns from db1_table@db_link where ...
or, if it turns out to be way too slow and difficult to tune, create a materialized view in the second database which would then be refreshed using one of available options (a scheduled refresh might be one of the choices).
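A sketch of the materialized-view variant; the object names and the hourly refresh interval are illustrative:

```sql
-- Materialized view in the second database, sourced over the db link,
-- refreshed completely once an hour
CREATE MATERIALIZED VIEW db1_table_mv
  REFRESH COMPLETE
  START WITH SYSDATE
  NEXT SYSDATE + 1/24
AS
  SELECT * FROM db1_table@db_link;

-- Or refresh on demand, e.g. from a scheduler job
BEGIN
  DBMS_MVIEW.REFRESH('DB1_TABLE_MV', method => 'C');
END;
```

Queries in the second database then read db1_table_mv locally instead of going over the link each time.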
As this is a primarily opinion-based question, I presume you'll hear other options from someone else; further discussion will raise the most appropriate solution to the top.

Parameterized trigger - concurrency concerns

My question is quite similar to this one but I need more guidance. I also read the Oracle context doc.
The current (test) trigger is :
CREATE OR REPLACE TRIGGER CHASSIS_DT_EVNT_AIUR_TRG_OLD
AFTER DELETE OR INSERT OR UPDATE OF ETA
ON CHASSITRANSPORTS
REFERENCING NEW AS New OLD AS Old
FOR EACH ROW
BEGIN
  INSERT INTO TS_CHASSIS_DATE_EVENTS
    (CHASSISNUMBER, DATETYPE, TRANSPORTLEGSORTORDER, OLDDATE,
     CREATEDBY, CREATEDDATE, UPDATEDBY, UPDATEDDATE)
  VALUES
    (:old.chassino, 'ETA', :old.sortorder, :old.eta,
     'xyz', SYSDATE, 'xyz', SYSDATE);
EXCEPTION
  WHEN OTHERS THEN
    NULL;
END CHASSIS_DT_EVNT_AIUR_TRG_OLD;
Now the 'CREATEDBY', 'UPDATEDBY' will be the web application users who have logged in and made the changes which caused the trigger execution, hence, these values need to be passed from the application.
The web application :
Is deployed in Websphere Application Server where the datasources are configured
As expected, is using db connection pooling
My question is: which approach mentioned in that thread and the documentation should I take to avoid concurrency issues, i.e. so that updates made by application users in multiple sessions do not interfere with each other, at either the application level or the db level?
I don't think any one of the approaches in that link would apply to you, primarily due to multi-user environment and connection pooling.
Connection pooling by nature allows different connections to share the same database session. A context value (whether SYS_CONTEXT or any other application context) is valid throughout the lifetime of the session, so two different connections can overwrite and read each other's values (concurrency issues).
I'd actually argue against doing an insert like this inside a trigger at all. It seems to me the insert you are doing writes every update on the main table to a log table. If that is the case, why not insert into the log table at the time of making the update?
So the procedure that does UPDATE CHASSITRANSPORTS ... would also have an INSERT statement inside it that writes to the log table. If there is no procedure and it is a direct update statement from the application, then write a procedure for this.
You could say that the same update happens in multiple places; in that scenario I'd suggest creating an API for the base table CHASSITRANSPORTS that handles updates and, as a black box, also writes to the log table. Any place where you need to update that table column would use that API.
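A sketch of such an API; the procedure name, parameters, and the reduced column list are hypothetical. The application calls this procedure instead of issuing the UPDATE directly, so the log row can record the real application user:

```sql
CREATE OR REPLACE PROCEDURE update_eta (
  p_chassino IN chassitransports.chassino%TYPE,
  p_new_eta  IN chassitransports.eta%TYPE,
  p_app_user IN VARCHAR2  -- the logged-in web application user
) AS
BEGIN
  -- log the old value before changing it
  INSERT INTO ts_chassis_date_events
    (chassisnumber, datetype, transportlegsortorder, olddate,
     createdby, createddate, updatedby, updateddate)
  SELECT chassino, 'ETA', sortorder, eta,
         p_app_user, SYSDATE, p_app_user, SYSDATE
  FROM   chassitransports
  WHERE  chassino = p_chassino;

  UPDATE chassitransports
  SET    eta = p_new_eta
  WHERE  chassino = p_chassino;
END update_eta;
```

Because the user name arrives as a parameter on each call, nothing is stored in session state, and connection pooling poses no concurrency problem.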
(I'm ignoring the fact that you are suppressing all errors in the trigger with WHEN OTHERS THEN NULL with the hope that this is probably just a small example)

How to structure models, beans, controllers, and views for a different jsp pages but reside in to one table in a database?

This is a new project we are doing using Spring MVC 2.5, JSP, Java 7, Ajax, and HTML5. For my part I am going to have 7-10 JSP pages which contain one form each. These pages are sequential, i.e. one has to pass the first page successfully to go to the second, pass the second to go to the third, and so on.
For the data to be persisted, one has to get to the last page (after passing the rest successfully) and confirm that the information is correct. Once the user confirms, I have to persist all the data stored in a bean or session (all or none); no incomplete data should be persisted. Let's call our database table "employee".
I am new to Spring MVC, but I got the idea and implemented the page flow using a controller.
My question is: should I have one model class or bean to store all the data, or use the session to store each page's information and keep it there until it gets persisted?
Or is it better to have one model class but multiple controllers/beans to control the data flow from each page? Which do you recommend? Is there a design pattern that already answers my question? If you have a better idea, please feel free to share it.
There are two approaches, as you have already mentioned. Which one to use depends on the data size and other requirements, for example whether the user can come back later and continue from where he left off. The model and controller need not be just one; they can be designed appropriately.
a) Store data from each screen in the session:
Pros: Unnecessary data is not persisted to the db. Data can be manipulated within the session as the user traverses back and forth between screens, and hence it is faster.
Cons: Too much information in the session can cause memory issues. It may not be very helpful during session failover. The user cannot log back in and continue from where he left off, if that functionality is required.
b) Persist each screen's data as the user moves on:
Pros: The session is lighter, since only the minimum relevant information is stored in it. The user can log back in and continue from where he left off.
Separate in-progress db tables can be used to store this information, with the data inserted/updated into the actual tables only on final submit; otherwise the db would contain a lot of unsubmitted data. This way the in-progress tables can be cleaned up periodically.
Cons: Db calls are needed to persist and retrieve data for every screen, even though the data may never be submitted by the user.
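The in-progress-table idea from option (b) can be sketched in SQL; the table and column names (employee, id, name, dept) are assumptions:

```sql
-- Staging table mirrors EMPLOYEE plus a key identifying the wizard user
CREATE TABLE employee_inprogress AS
  SELECT * FROM employee WHERE 1 = 0;
ALTER TABLE employee_inprogress
  ADD (wizard_user VARCHAR2(100), last_touched DATE);

-- Each screen writes into the staging table; on final submit:
INSERT INTO employee (id, name, dept)
  SELECT id, name, dept
  FROM   employee_inprogress
  WHERE  wizard_user = :app_user;

DELETE FROM employee_inprogress
WHERE  wizard_user = :app_user;

-- Periodic cleanup of abandoned wizard data
DELETE FROM employee_inprogress
WHERE  last_touched < SYSDATE - 7;
```

The actual employee table only ever receives complete, confirmed rows.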
You are correct about your use of the HTTP session for storing the state of the forms.
or use session to store each pages information and keep it in the
session until it gets persisted?
because of this requirement:
No incomplete data should be persisted
As for
should I need to have one model class or bean to store all the data
You can model this as you see fit. Perhaps a model to represent the flow and then an object for each page. Depends on how the data is split across the pages.
Although as noted in a comment above you might be able to make use of WebFlow to achieve this. However that is ultimately just a lightweight framework over Spring MVC.

Oracle Database Change Notification and ROWID's

Oracle's database change notification feature sends ROWIDs (physical row addresses) on row inserts, updates, and deletes. As indicated in Oracle's documentation, this feature can be used by the application to build a middle-tier cache. But this seems contradictory when we take a detailed look at how ROWIDs work.
ROWIDs (physical row addresses) can change when various database operations are performed, as indicated by this Stack Overflow thread. In addition, as Tom mentions in this thread, clustered tables can have the same ROWIDs.
Based on the above research, it doesn't seem safe to use the ROWID sent during a database change notification as the key in the application cache, right? This also raises the question: should the database change notification feature be used to build an application server cache at all? Or is it recommended to restart all the application server clusters (to reload/refresh the cache) whenever the tables of the cached objects undergo operations that cause ROWIDs to change? Would that be a good assumption for production environments?
It seems to me that none of the operations that can potentially change a ROWID would be carried out in a production environment while the application is running. Furthermore, I've seen a lot of production software that uses the ROWID across transactions (usually just for a few seconds or minutes); that software would probably fail before your cache if the ROWID changed. So building a database cache based on change notification seems reasonable to me. Just provide a small disclaimer regarding the ROWID.
The only somewhat problematic operation is an update causing a row to move to another partition. But that rarely happens, because it would defeat the purpose of the partitioning if it occurred regularly. The designer of a particular database schema will be able to tell you whether such an operation can occur and is relevant for caching. If none of the tables has ENABLE ROW MOVEMENT set, you don't even need to ask the designer.
As to duplicate ROWIDs: ROWIDs aren't unique globally, they are unique within a table. And you are given both the ROWID and the table name in the change notification. So the tuple of ROWID and table name is a perfect unique key for building a reliable cache.
