Two database connections vs replication on one database - Oracle

I have a web service which connects to an Oracle database. On the database side I have two databases: from the first I need to select some information, and in the second I need to perform some operations.
My question is which approach is better: connect only to the second database and replicate the data from the first via a DBMS scheduler job that runs x times a day to refresh it, or configure two data sources on the Java side, select the data from the first database, and then connect to the second one to perform the operations.

From my point of view, I'd access only the "second" database (the one in which you perform those operations) and let it acquire the data it needs from the "first" database via a database link.
That can be done directly, such as
select some_columns from db1_table@db_link where ...
or, if that turns out to be way too slow and difficult to tune, create a materialized view in the second database which would then be refreshed using one of the available options (a scheduled refresh might be one of the choices).
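For illustration, a minimal sketch of that second option, assuming a link named db_link and an hourly complete refresh (object names and the interval are placeholders, not from the question):

    create materialized view mv_db1_table
      refresh complete
      start with sysdate
      next sysdate + 1/24   -- complete refresh every hour
    as
      select some_columns
      from   db1_table@db_link;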
As this is a primarily opinion-based answer, I presume you'll hear other options from someone else. Further discussion should raise the most appropriate solution to the top.

Related

Cache and regularly update complex data

Let's start with the background. I have an API endpoint that I have to query every 15 minutes and that returns complex data. Unfortunately, this endpoint does not say what exactly changed, so I have to compare the data I already have in the DB against everything returned and then execute updates, inserts, or deletes. This is pretty tedious...
So I came to the idea that I could simply remove all data from certain tables and rebuild everything from scratch... But I also have to return this cached data to my clients, so there might be a situation where the DB is empty during a client request because it is being "refreshed/rebuilt". And that can't happen, because I have to return something.
So I came to the idea to either
lock the certain DB tables so that the client has to wait while the DB is refreshed,
or
use CQRS: https://martinfowler.com/bliki/CQRS.html
Do you have any suggestions on how to solve the problem?
It sounds like you're using a relational database, so I'll outline a solution in database terms. The idea, however, is more general than that: it's essentially Blue-Green deployment.
Have two data tables (or two databases, for that matter); one is active, and one is inactive.
When the software starts the update process, it can wipe the inactive table and write new data into it. During this process, the system keeps serving data from the active table.
Once the data update is entirely done, the system can begin to serve data from the previously inactive table. In other words, the inactive table becomes the active table, and vice versa.
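In Oracle terms, one way to make that switch atomic for readers is a synonym flip. A minimal sketch, assuming tables data_a/data_b, a synonym active_data that readers query, and a hypothetical staging source (all names are illustrative):

    create table data_a (id number primary key, payload varchar2(4000));
    create table data_b (id number primary key, payload varchar2(4000));
    create or replace synonym active_data for data_a;  -- readers query active_data

    -- refresh cycle: rebuild the inactive table, then flip the synonym
    truncate table data_b;
    insert into data_b
      select id, payload from staging_new_data;        -- hypothetical source of fresh data
    commit;
    create or replace synonym active_data for data_b;  -- new queries now read data_b

Queries already running against the old table finish normally; only new statements pick up the flipped synonym.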

Dynamically List contents of a table in database that continuously updates

It's kind of a real-world problem, and I believe a solution exists, but I couldn't find one.
We have a database called Transactions that contains tables such as Positions, Securities, Bogies, Accounts, Commodities and so on, which are updated continuously, every second, whenever a new transaction happens. For the time being, we have replicated the master database Transactions to a new database named TRN, on which we do all the querying and updating.
We want a sort of monitoring system (like the htop process viewer on Linux) for the database that dynamically lists the updated rows in its tables at any time.
TL;DR Is there any way to get a continuously updating list of rows in any table in the database?
Currently we are working with Sybase and Oracle DBMSs on the Linux (Ubuntu) platform, but we would like generic answers that apply to most platforms and DBMSs (including MySQL), along with any tools, utilities, or scripts that can do this, so that it is easy to migrate to other platforms and/or DBMSs in the future.
To list updated rows, you conceptually need one of two things:
The updating statement's effect on the table.
A previous version of the table to compare with.
How you get them and in what form is completely up to you.
The first option allows you to list updates with statement granularity, while the second is more suitable for time-based granularity.
Some options off the top of my head:
Write to a temporary table
Add a field with a transaction id/timestamp (sketched below)
Make clones of the table regularly
AFAICS, Oracle doesn't have built-in facilities to get the affected rows, only their count.
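A minimal sketch of the timestamp-field option in Oracle, assuming a table named positions (names are illustrative); a trigger stamps each row so a monitor can poll for recent changes:

    alter table positions add (last_modified timestamp);

    create or replace trigger trg_positions_stamp
      before insert or update on positions
      for each row
    begin
      :new.last_modified := systimestamp;
    end;
    /

    -- the monitor polls for rows touched in the last few seconds
    select *
    from   positions
    where  last_modified > systimestamp - interval '5' second;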
Not a lot of details in the question so not sure how much of this will be of use ...
'Sybase' is mentioned but nothing is said about which Sybase RDBMS product (ASE? SQLAnywhere? IQ? Advantage?)
by 'replicated master database transaction' I'm assuming this means the primary database is being replicated (as opposed to the database called 'master' in a Sybase ASE instance)
no mention is made of what products/tools are being used to 'replicate' the transactions to the 'new database' named 'TRN'
So, assuming part of your environment includes Sybase(SAP) ASE ...
MDA tables can be used to capture counters of DML operations (eg, insert/update/delete) over a given time period (a query sketch follows this list)
MDA tables can capture some SQL text, though the volume/quality could be in doubt if a) MDA is not configured properly and/or b) the DML operations are wrapped up in prepared statements, stored procs and triggers
auditing could be enabled to capture some commands but again, volume/quality could be in doubt based on how the DML commands are executed
also keep in mind that there's a performance hit for using MDA tables and/or auditing, with the level of performance degradation based on individual config settings and the volume of DML activity
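As a rough sketch of the MDA-counter idea (assuming ASE's MDA tables are enabled and the login has mon_role; the table and column names below are as in recent ASE versions and should be verified against yours):

    -- poll per-table DML activity counters in the TRN database
    select DBName, ObjectName, RowsInserted, RowsUpdated, RowsDeleted
    from   master..monOpenObjectActivity
    where  DBName = 'TRN'
    order  by RowsUpdated desc

Sampling this periodically and diffing the counters gives a per-table view of write activity, though not the affected rows themselves.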
Assuming you're using the Sybase(SAP) Replication Server product, those replicated transactions sent through repserver likely have all the info you need to know which tables/rows are being affected; so you have a couple options:
route a copy of the transactions to another database where you can capture the transactions in whatever format you need [you'll need to design the database and/or any customized repserver function strings]
consider using the Sybase(SAP) Real Time Data Streaming product (yeah, additional li$ence is required) which is specifically designed for scenarios like yours, ie, pull transactions off the repserver queues and format for use in downstream systems (eg, tibco/mqs, custom apps)
I'm not aware of any 'generic' products that work, out of the box, as per your (limited) requirements. You're likely looking at some different solutions and/or customized code to cover your particular situation.

Oracle Materialized View - Need help in creating View with very large number of records

We have to create a materialized view in our database from a remote database which is currently in production. The view has about 5 crore (50 million) records and takes a long time to build. At one point the connection dropped and not even a single record was persisted in our database. Since the remote database is a production server, we get a very limited window to create the view.
My question is: can we have something like auto-commit / auto-resume from where we left off last time while the view is being created, so that we don't have to do the entire operation in one go?
As an alternative, we are trying to shape the query so that records can be fetched in smaller batches. But the data is read-only for us, and the query doesn't really have a WHERE clause at this point that we could use.
Due to the sensitive nature of the data, I cannot post the view structure or the query.
No, you cannot commit during the process of creating the view. But why don't you import the data into a table instead of a view? That would give you the possibility to commit in between. Furthermore, it might let you load only the delta of changes, perhaps on a daily basis, which would reduce the required time dramatically.
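A minimal sketch of the batched load, assuming a link named db_link, a hypothetical remote table, and a matching local staging table (all names are illustrative); rows are pulled in chunks and committed per batch, so a dropped connection costs only the current batch rather than the whole run:

    declare
      cursor c is
        select id, payload
        from   big_remote_table@db_link;   -- hypothetical remote table
      type t_rows is table of c%rowtype;
      l_rows t_rows;
    begin
      open c;
      loop
        fetch c bulk collect into l_rows limit 10000;  -- 10k rows per round trip
        exit when l_rows.count = 0;
        forall i in 1 .. l_rows.count
          insert into local_stage values l_rows(i);
        commit;                                        -- checkpoint each batch
      end loop;
      close c;
    end;
    /

To resume after a failure rather than restart, you would also need to track the last key loaded, which requires some usable ordering or filter column (exactly what the poster says is missing).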

Using PL/SQL transactions with wizards in Oracle APEX 5.0

I'm trying to use a transaction process with a wizard in APEX 5.0.
I want to register a new student in the database, so on the first page of the wizard I want to create a savepoint (s1) and then insert the student's information into the table "STUDENT", and on the second page I want to insert the info of the student's superior.
What I want is that when the user clicks the Previous button, I roll back to savepoint s1 and undo the insert statement.
I tried to create a process, but it seems that the rollback statement on the second page can't see the savepoint I declared on the first page.
So, can anyone help with that?
APEX uses connection pooling. Unlike client-server environments such as Oracle Forms, APEX is stateless. DB connections are extremely short and fleeting, and they are not tied to one APEX session; the session in APEX is a construct of APEX itself.
This means that transactional control is not what you'd think it is. A render of a page is a short DB connection/session that ends when the page has rendered. When you submit, it'll be another session.
Oracle Apex Documentation link
2.6.2 What Is a Session?
A session is a logical construct that establishes persistence (or stateful behavior) across page views. Each session is assigned a unique identifier. The Application Express engine uses this identifier (or session ID) to store and retrieve an application's working set of data (or session state) before and after each page view.
Because sessions are entirely independent of one another, any number of sessions can exist in the database at the same time. A user can also run multiple instances of an application simultaneously in different browser programs.
Sessions are logically and physically distinct from Oracle database sessions used to service page requests. A user runs an application in a single Oracle Application Express session from sign in to sign out with a typical duration measured in minutes or hours. Each page requested during that session results in the Application Express engine creating or reusing an Oracle database session to access database resources. Often these database sessions last just a fraction of a second.
Can you still use savepoints? Yes, but not just anywhere. You could use one within a single process. Can you set one on one page and then roll back to it from another? No, the technology just does not allow it. Even if it did, you'd have to deal with implicit commits, as outlined in Cristian_I's answer.
For the same reason, you cannot use global temporary tables.
What CAN you use?
You could use APEX collections. You can compare them to temporary tables, in that they hold data within one APEX session.
Simply store your information in collections and then process the data in them once you get to the end, as sketched below.
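A minimal sketch of that flow, assuming a collection named STUDENT_WIZARD and page items P1_NAME and P1_EMAIL (all names are illustrative, not from the question):

    -- page 1 process: stage the student's data in the collection
    begin
      apex_collection.create_or_truncate_collection(p_collection_name => 'STUDENT_WIZARD');
      apex_collection.add_member(
        p_collection_name => 'STUDENT_WIZARD',
        p_c001            => :P1_NAME,
        p_c002            => :P1_EMAIL);
    end;

    -- final page process: persist everything in one transaction
    begin
      insert into student (name, email)
      select c001, c002
      from   apex_collections
      where  collection_name = 'STUDENT_WIZARD';
      -- insert the superior's row here as well; APEX commits on page submit
    end;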
The other thing you can do is simply keep the data stored in your page items. Session state is in effect, so you can still access the session state of each page's items on the final step. If for some reason you wish to move back a step and then "auto-clear" that page, all you need to do is clear the cache for that page. This is more difficult if you wish to use a tabular form somewhere, since you'd have to build it on a collection, but I'd recommend a repeatable step in that case.
I think your problem is that APEX issues commit statements when you switch from one page to another.
A simple rollback or commit erases all savepoints. (You can find out more here.) According to a post by Dan McGhan, APEX issues implicit commits in the following situations:
On load, after a page finishes rendering
On submit, before branching to another page
On submit, if one or more validations fail, before re-rendering the page
After a PL/SQL process that contains one or more bind variables has completed
After a computation
When APEX_UTIL.SET_SESSION_STATE is called
When APEX_MAIL.PUSH_QUEUE is called
Maybe you can simulate savepoint functionality by using some temporary tables.
Since APEX is stateless and the results of each page request are always either fully committed or fully rolled back (i.e. no inter-page savepoints are possible), you need to choose between two strategies:
Option 1: allow the intermediate info to be committed to the table. One way to do this is to add a flag column, e.g. "status", which is set to "provisional" on the first page and updated to "complete" on the second page (sketched after this list). This may require changes to other parts of your application so they know how to deal with any abandoned records left in "provisional" status.
Option 2: save the intermediate results in an APEX collection. This data is available for the scope of the user's APEX session and is not accessible to other sessions, so it is ideal for this scenario. https://docs.oracle.com/database/121/AEAPI/apex_collection.htm#AEAPI531
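A minimal sketch of option 1, assuming hypothetical column names (id, created_on, status) and a sequence student_seq on the STUDENT table:

    -- one-off DDL: status column for the two-step wizard
    alter table student add (status varchar2(12) default 'PROVISIONAL');

    -- page 1 process: insert the provisional row
    insert into student (id, name, status)
    values (student_seq.nextval, :P1_NAME, 'PROVISIONAL');

    -- page 2 process: finalize it
    update student set status = 'COMPLETE' where id = :P2_STUDENT_ID;

    -- cleanup for abandoned wizards, e.g. a nightly job
    delete from student
    where  status = 'PROVISIONAL'
    and    created_on < sysdate - 1;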

materialized view over multiple databases

Set-up:
There is one TRANSPORT database and 4 PRODUNIT databases. All 5 DBs are on different machines, and all are Oracle databases.
Requirement:
A 'UNIFIED view' is required in the TRANSPORT db which will retrieve data from a table that is present in all 4 PRODUNIT databases. So when there is a query on the TRANSPORT database (with a where clause), the data may be present in any one of the 4 PRODUNIT databases.
The query must be kind of 'real time', i.e. it requires that as soon as data is inserted/updated in the table of any of the 4 PRODUNIT databases, it is IMMEDIATELY available in the TRANSPORT db.
I searched on the net and ended up at materialized views. I have the below concerns before I proceed:
Will a 'fast refresh on commit' ensure requirement 2?
The table in the individual PRODUNIT databases will experience frequent DML. I suspect a performance impact on the TRANSPORT db - am I correct? If yes, how should I proceed?
I'm rather wondering if there is an approach better than a materialized view!
A materialized view that refreshes on commit cannot refer to a remote object, so it doesn't do you a lot of good here. If you could do a refresh on commit, you could maintain the data in the TRANSPORT database synchronously. But you can't.
I would seriously question the wisdom of wanting to do synchronous replication in this case. If you could, then the local databases would become unusable if the transport database was down or the network connection was unavailable. You'd incur the cost of a two-phase commit on every transaction. And it would be very easy for one of the produnit databases to block transactions happening on the other databases.
In virtually every instance I've ever come across, you'd be better served with asynchronous replication that keeps the transport database synchronized to within, say, a few seconds of the produnit database. You probably want to look into GoldenGate or Streams for asynchronous replication with relatively short delays.
Whether or not you require an MV would depend on the performance of the links between your DBs and the volume of data concerned.
I would start with a normal view, using DB links to select the data from the remote databases, and test it to see what the performance is like (a sketch follows below).
Given requirement 2, a refresh on commit would probably be the best fallback if the performance of a normal view proved poor (though note, per the answer above, that an on-commit materialized view cannot reference remote objects, so in practice you'd be limited to scheduled refreshes).
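A minimal sketch of the normal-view approach, assuming four links named produnit1_link .. produnit4_link and a table named some_table (all names are illustrative):

    create or replace view unified_tab as
    select 'PRODUNIT1' as source_db, t.* from some_table@produnit1_link t
    union all
    select 'PRODUNIT2', t.* from some_table@produnit2_link t
    union all
    select 'PRODUNIT3', t.* from some_table@produnit3_link t
    union all
    select 'PRODUNIT4', t.* from some_table@produnit4_link t;

Each query against unified_tab goes over the links in real time, which satisfies the immediacy requirement at the cost of depending on all four links being up.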
