Global Temp Table Oracle

I am new to Oracle. I have used a global temporary table in Oracle. This temp table stores the input values from the front end, and we process the data further.
So my question is: since multiple users will send requests, how will it store the data for the different users? For example, User A sends a request with record IDs 101 and 102, and at the same time User B sends a request with record IDs 103 and 104. Will it process the data independently? Will it not merge the data?

Global temporary tables store data at the session level. So if User "A" and User "B" are using separate, dedicated connections there is no problem: neither will see the other's data.
Of course in the modern world many applications are web applications and users connect to the database through shared connections in a connection pool. If this is your architecture you have a problem: web architectures are stateless and global temporary tables are stateful. How you would work around this depends on exactly why you are using GTTs in the first place.
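For illustration, here is a minimal sketch of that session-level behavior (the table and column names are made up):
-- Rows in a global temporary table are private to the inserting session.
create global temporary table request_input (
    record_id number
) on commit preserve rows;  -- rows survive commits but vanish when the session ends

-- Session for User A:
insert into request_input values (101);
insert into request_input values (102);

-- Session for User B, on a separate connection:
insert into request_input values (103);
insert into request_input values (104);

-- Each session's query sees only its own rows: A gets 101 and 102, B gets 103 and 104.
select record_id from request_input;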

Related

Can I keep a copy of a table of one database in another database in a microservice architecture?

I am new to microservice architecture, so thanks in advance.
I have two different services, a User Service and a Footballer Service, each with its own database (a User database and a Footballer database).
The Footballer service has a database with a single table storing footballer information.
The User service has a database which stores user details along with other user-related data.
Now a user can add footballers to their team by querying the Footballer service, and I need to store them somewhere so they can be displayed later.
Currently I'm storing the footballers for each user in a table in the User database: I make a call to the Footballer service to give me the details of a specific footballer by ID, and I save them in the User database mapped against the user ID.
So is this a good idea, and does it mean I'm replicating data between the two services?
And if it is, what other ways can I achieve the same functionality?
"Caching" is a fairly common pattern. From the perspective of the User microservice, the data from Footballer is just another input which you might save or not. If you are caching, you'll usually want to have some sort of timestamp/version on the cached data.
Caching identifiers is pretty normal - we often need some kind of correlation identifier to connect data in two different places.
If you find yourself using Footballer data in your User domain logic (that is to say, the way that User changes depends on the Footballer data available)... that's more suspicious, and may indicate that your boundaries are incorrectly drawn / some of your capabilities are in the wrong place.
If you are expecting the User Service to be autonomous - that is to say, to be able to continue serving its purpose even when Footballer is out of service, then your code needs to be able to work from cached copies of the data from Footballer and/or be able to suspend some parts of its work until fresh copies of that data are available.
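As a concrete illustration, a cached copy in the User database might look something like this (a hypothetical sketch; the table and column names are not from the question):
-- Cached footballer details fetched from the Footballer service.
create table user_footballer_cache (
    user_id         number not null,
    footballer_id   number not null,                 -- correlation identifier into the Footballer service
    footballer_name varchar2(100),
    cached_at       timestamp default systimestamp,  -- when this copy was fetched
    constraint user_footballer_pk primary key (user_id, footballer_id)
);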
People usually follow DDD (domain-driven design) with microservices.
In your case there are two domains, i.e. two services:
Users
Footballers
The User service should only do user-specific tasks; it should not be concerned with footballer data.
Hence, according to DDD, the footballers that are linked to a user should be stored in the Footballer service.
Replicating just the ID wouldn't be considered replication in a microservices architecture.

Parallel processing of records from a database table

I have a relational table that is being populated by an application. There is a column named o_number which can be used to group the records.
I have another application that basically runs a Spring Scheduler. This application is deployed on multiple servers. I want to understand if there is a way to make sure that each scheduler instance processes a unique group of records in parallel: if a set of records is being processed by one server, it should not be picked up by another. Also, in order to scale, we would want to increase the number of instances of the scheduler application.
This is a general question, so here's my general 2 cents on the matter.
You create a new layer managing the requests originating from your application instances to the database. So you will probably be building a new code/project running on the same server as the database (or some other server). The application instances will talk to that managing layer instead of the database directly.
The manager will keep track of which records have already been handed out, so each new request fetches only records that are yet to be processed; a database-side sketch of the same idea is shown below.
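If you would rather let the database itself do that bookkeeping, Oracle's FOR UPDATE SKIP LOCKED is a common pattern for competing consumers. Here is a minimal sketch (the table, column, and status names are made up, not from the question): each scheduler instance locks the rows it claims, and rows locked by another instance are skipped instead of waited on.
declare
    cursor c_claim is
        select rowid as rid, o_number
        from   work_records
        where  status = 'NEW'
        for update skip locked;
begin
    for rec in c_claim loop
        -- ... process the record here; it stays locked until commit ...
        update work_records
        set    status = 'DONE'
        where  rowid = rec.rid;
    end loop;
    commit;  -- releases the locks so other instances can proceed
end;
/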

Can Data Replication Deliver/Push One of Two Sets of Data to Client Nodes?

I stepped into a retail system merge project lately. A retail chain company has acquired a far smaller retail chain in a different line of business. The company has decided to modify its retail system so that it can also be used in the acquired retail stores. Their retail system is built with the SAP retail application and Oracle data replication, together with a store inventory application. They have one set of DB tables under one schema that is read-only in the store application, and another set of DB tables under another schema for data generated in the store application. In other words, from a store's point of view, the first set of DB tables is for inbound data and the second set is for both outbound and inbound data.
The SDEs who built the store application suggest adding a new column, store type, to multiple tables for the inbound data to differentiate the data of the two retail business systems. For example, they want to add a store type column to their vendor table. To my understanding, data replication can be set up so that only related data is sent to a client node. For example, a store of one of the retail business systems should receive vendor inbound data for that business, but not any vendor data for the other system. If so, why is a new column needed? Those SDEs are not experts in data replication, and I didn't know anything about data replication until three weeks ago. I don't know whether I am missing something on this subject or not.
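For reference, this kind of subset replication is possible in Oracle, for example with a materialized view over a database link (a hypothetical sketch; the link, table, and column names are made up):
-- Each store replicates only the vendor rows for its own business.
create materialized view vendor_mv
refresh on demand
as
select *
from vendor@hq_link
where business_unit = 'ACQUIRED';  -- hypothetical discriminator column
Note, though, that any such filter needs some column that tells the two businesses' rows apart, which may be exactly what the proposed store type column is for.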

How much data/networking usage does an Oracle Client use while running queries across schemas?

When I run a query to copy data between schemas, does it perform all the SQL on the server, or does it copy the data to a local application and then push it back out to the DB?
The two tables sit in the same DB, but the DB is accessed through a VPN. Would it change if it was across databases?
For instance (running in Toad Data Point):
create table schema2.target_table
as
select
sum(row1) as total_row1
,row2
from schema1.source_table
group by row2
I ask because I'm getting quotes for a virtual machine in the Azure cloud and want to make sure that I'm not going to break the bank on data costs.
The processing of SQL statements on the same database usually takes place entirely on the server and generates little network traffic.
In Oracle, schemas are a logical object. There is no physical barrier between them. In a SQL query using two tables it makes no difference if those tables are in the same schema or in different schemas (other than privilege issues).
Some exceptions:
Real Application Clusters (RAC) - RAC may share a huge amount of data between the nodes. For example, if the table was cached on one node and the processing happened on another, it could send all the table data through the network. (I'm not sure how this works on the cloud though. Normally the inter-node traffic is done with a separate, dedicated network connection.)
Database links - It should be obvious if your application is using database links though.
Oracle Reports and Forms(?) - A few rare tools have client-side PL/SQL processing. Possibly those programs might send data to the client for processing. But I still doubt it would do something crazy like send an entire table to the client to be sorted, and then return the results to the server.
Backups/archive logs - I assume all the data will be backed up. I'm not sure how that's counted, but possibly that means all data written will also be counted as network traffic eventually.
The queries below are examples of different ways to check the network traffic being generated.
--SQL*Net bytes sent for a session.
select *
from gv$sesstat
join gv$statname
on gv$sesstat.statistic# = gv$statname.statistic#
and gv$sesstat.inst_id = gv$statname.inst_id
--You probably also want to filter for a specific INST_ID and SID here.
where lower(display_name) like '%sql*net%';
--SQL*Net bytes sent for the entire system.
select *
from gv$sysstat
where lower(name) like '%sql*net%'
order by value desc;
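To measure a specific operation, snapshot the session-level values before and after running it and compare the deltas; statistics such as "bytes sent via SQL*Net to client" and "bytes received via SQL*Net from client" are the ones to watch.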

Deadlock happening when the same database record is updated in multiple connection sessions concurrently

We have implemented a client-server, socket-based application to process multiple shopping cart requests. Daily we receive thousands of shopping cart requests.
For this we implemented a multi-threaded architecture to process requests concurrently. We are using an Oracle connection pool for database operations, and we set an optimal value for the connection pool size. As per our business process, we have a main database table and we need to update the same set of rows from multiple threads using multiple connection sessions concurrently. Now we are getting deadlock issues because multiple threads try to update the same rows through multiple connection sessions concurrently, and we are also getting some primary key violations on tables. Sometimes the database also gets locked up when the same data is inserted in multiple connection sessions concurrently.
Please suggest a good approach to handle the above problems.
There are a few different general solutions to writing multithreaded code that does not encounter deadlocks. The simplest is to ensure that you always lock resources in the same order.
A deadlock occurs when one session holds a lock on A and wants a lock on B while another session holds a lock on B and wants a lock on A. If you ensure that your code always locks A before B (or B before A), you can be guaranteed that you won't have a deadlock.
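As a minimal sketch of that rule in Oracle terms (the table and column names are made up), have every session touch the rows in ascending primary-key order, so no two sessions can ever hold locks in opposite orders:
begin
    for r in (select id from cart_items where id in (101, 102) order by id) loop
        update cart_items
        set    quantity = quantity + 1
        where  id = r.id;  -- each update locks exactly this row, always in id order
    end loop;
    commit;  -- releases all locks at once
end;
/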
As for your comment about primary key violations, are you using something other than an Oracle sequence to generate your primary keys? If so, that is almost certainly the problem. Oracle sequences are explicitly designed to provide unique primary keys in the case where you have multiple sessions doing simultaneous inserts.
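For example, a sequence-backed key lets concurrent sessions insert simultaneously, each drawing a distinct value (a sketch with made-up names):
create sequence cart_id_seq;

insert into shopping_cart (id, created_at)
values (cart_id_seq.nextval, systimestamp);

-- On Oracle 12c and later, an identity column achieves the same thing implicitly:
-- create table shopping_cart (
--     id number generated always as identity primary key,
--     created_at timestamp
-- );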
