Need guidance. I have two databases, A and B, each residing on a different server. Database A is the primary, at the main site; database B is at a remote location, on the user side. The table structure in B is a subset of A's. Database B fetches certain data from A for some functionality to work, and when users update data in B, those updates need to be propagated to A. The problem is that there are connectivity issues between A and B, since B is in a remote area.
While the connection is alive, both databases should stay in sync. When the connection goes down, B should still be able to operate on the data last pulled from A, and when the connection is restored, the data should sync again.
I am using Oracle 12c. Please help guys.
Regards
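One common sketch for this (all object names below are invented, not from the question): pull reference data into B with an on-demand materialized view over a database link, and queue B's local writes in an outbox table that is drained to A whenever the link is up.

```sql
-- On B: pull reference data from A over a database link (names are hypothetical)
CREATE DATABASE LINK a_link
  CONNECT TO app_user IDENTIFIED BY "secret" USING 'A_TNS_ALIAS';

CREATE MATERIALIZED VIEW ref_data_mv
  REFRESH COMPLETE ON DEMAND
  AS SELECT * FROM ref_data@a_link;

-- On B: queue local changes while offline; drain them to A when the link is up
CREATE TABLE sync_outbox (
  id         NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  table_name VARCHAR2(128) NOT NULL,
  payload    CLOB,
  created_at TIMESTAMP DEFAULT SYSTIMESTAMP
);
```

Draining the outbox (and detecting conflicts with rows changed on A in the meantime) still has to be written by hand; for fully automatic two-way sync, Oracle's own options are updatable materialized views or GoldenGate.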
When I run a query to copy data from schemas, does it perform all SQL on the server end or copy data to a local application and then push it back out to the DB?
The two tables sit in the same DB, but the DB is accessed through a VPN. Would it change if it was across databases?
For instance (Running in Toad Data Point):
create table schema2.summary_table as
select sum(row1) as row1_total
     , row2
from   schema1.source_table
group  by row2;
The purpose I ask the question is because I'm getting quotes for a Virtual Machine in Azure Cloud and want to make sure that I'm not going to break the bank on data costs.
The processing of SQL statements on the same database usually takes place entirely on the server and generates little network traffic.
In Oracle, schemas are a logical object. There is no physical barrier between them. In a SQL query using two tables it makes no difference if those tables are in the same schema or in different schemas (other than privilege issues).
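For example, a join across two schemas runs exactly like a join within one schema, entirely on the server (the table names here are made up; the querying user just needs SELECT privileges on both):

```sql
SELECT o.order_id, c.customer_name
FROM   schema1.orders    o
JOIN   schema2.customers c ON c.customer_id = o.customer_id;
```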
Some exceptions:
Real Application Clusters (RAC) - RAC may share a huge amount of data between the nodes. For example, if the table was cached on one node and the processing happened on another, it could send all the table data through the network. (I'm not sure how this works on the cloud though. Normally the inter-node traffic is done with a separate, dedicated network connection.)
Database links - It should be obvious if your application is using database links though.
Oracle Reports and Forms(?) - A few rare tools have client-side PL/SQL processing. Possibly those programs might send data to the client for processing. But I still doubt it would do something crazy like send an entire table to the client to be sorted, and then return the results to the server.
Backups/archive logs - I assume all the data will be backed up. I'm not sure how that's counted, but possibly that means all data written will also be counted as network traffic eventually.
The queries below are examples of different ways to check the network traffic being generated.
--SQL*Net bytes sent for a session.
select *
from gv$sesstat
join v$statname
on gv$sesstat.statistic# = v$statname.statistic#
--You probably also want to filter for a specific INST_ID and SID here.
where lower(display_name) like '%sql*net%';
--SQL*Net bytes sent for the entire system.
select *
from gv$sysstat
where lower(name) like '%sql*net%'
order by value desc;
Let's say we have microservices A and B. B has its own database. However, B has to be horizontally scaled, so we end up with 3 instances of B. What happens to the database? Does it scale accordingly, does it stay a single (centralized) database for the 3 B instances, does it become a distributed database, what happens?
The answer depends on which data the 3 B instances need to share. Some scenarios:
B only reads data and never writes. The DB can be replicated, and the three B instances read from different replicas.
Each B instance can read/write its data without interfering with the others. Every instance has its own designated data and nothing is shared, so the database becomes three databases with the same schema but entirely different data.
The B instances share most of the data, and each instance occasionally writes it back to the DB. Then they should use one DB, with locking to avoid conflicts between the instances.
In other situations there are many other approaches, such as an in-memory DB like Redis or a queue service like RabbitMQ in front of the B instances.
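The locking in the shared-data scenario can be as simple as SELECT ... FOR UPDATE, so two B instances serialize their writes to the same row (table and column names below are made up):

```sql
-- Instance 1 locks the row; instance 2's FOR UPDATE on the same row
-- blocks until instance 1 commits or rolls back
SELECT balance FROM accounts WHERE account_id = 42 FOR UPDATE;
UPDATE accounts SET balance = balance - 10 WHERE account_id = 42;
COMMIT;
```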
Using one database from multiple service instances is OK when you are using data partitioning.
As explained by Chris Richardson in the database-per-service pattern,
Instances of the same service should share the same database
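One way to read "data partitioning" here: each instance writes only rows tagged with its own instance id, which the database can even partition physically. A sketch, with invented names:

```sql
-- Hypothetical: each B instance owns one list partition, so writes
-- from different instances never touch the same partition
CREATE TABLE b_events (
  instance_id NUMBER         NOT NULL,
  event_id    NUMBER         NOT NULL,
  payload     VARCHAR2(4000),
  CONSTRAINT b_events_pk PRIMARY KEY (instance_id, event_id)
)
PARTITION BY LIST (instance_id) (
  PARTITION p_b1 VALUES (1),
  PARTITION p_b2 VALUES (2),
  PARTITION p_b3 VALUES (3)
);
```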
I have two Oracle databases, database A and database B. Database B should be in sync with database A. Data within DB-B won't be altered; it is only for viewing. All data changes in DB-A should be reflected in DB-B. After googling, I found that a DB link and a materialized view
could help, but I am not clear on how to use them. Please give any ideas.
I think you need to read the following:
http://docs.oracle.com/cd/B19306_01/backup.102/b14191/rcmdupdb.htm
http://docs.oracle.com/cd/E11882_01/server.112/e22490/dp_overview.htm#SUTIL100
A materialized view can be used for replication purposes, but what you are referring to is duplication, not replication.
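For the materialized-view route, a minimal read-only setup on DB-B would look like this (the link name, credentials and table are placeholders; FAST refresh additionally needs a materialized view log on the table in DB-A, otherwise use REFRESH COMPLETE):

```sql
-- On DB-B
CREATE DATABASE LINK db_a_link
  CONNECT TO report_user IDENTIFIED BY "secret" USING 'DB_A_TNS_ALIAS';

CREATE MATERIALIZED VIEW orders_mv
  REFRESH FAST ON DEMAND
  AS SELECT * FROM orders@db_a_link;

-- Refresh on a schedule, e.g. every 10 minutes
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'REFRESH_ORDERS_MV',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN DBMS_MVIEW.REFRESH(''ORDERS_MV''); END;',
    repeat_interval => 'FREQ=MINUTELY;INTERVAL=10',
    enabled         => TRUE);
END;
/
```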
If you have a DBA in your org, definitely hand this task over to them. These are the kinds of problems they eat for breakfast.
Best of luck.
I have two Rails apps running on Heroku; each has its own PostgreSQL 9.1.5 database (with Amazon endpoints accessible to me).
Both apps run the same codebase, so they initialise their databases with the same schema. But App 1 only uses, say, tables A, B and C, while App 2 only uses tables D and E.
(e.g. App 1's database's table D and E are empty)
Now, I need to move/copy all the data (table D and E ) from App 2's database to App 1's database (and then reconfig App 2 to use App 1's database from now on).
If I just take a pg_dump of App 2's database and restore it on App 1's database, it will erase the existing table A, B and C rows, I believe. Or is there a flag/option I can set so that it preserves the existing data? Or what other methods should I look into?
Thanks!
P.S.
This post suggested pg_dump or database link, but after reading the pages, I am still not confident that I know how to use them so that my existing data won't be erased/overwritten
how to copy data from one database to another database in postgresql?
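If you do go the pg_dump route, a data-only, per-table dump never touches tables A, B or C (the connection URLs below are placeholders; the flags are standard pg_dump/pg_restore options):

```shell
# Dump only tables D and E from App 2's database, data only
# (no DROP/CREATE statements are emitted)
pg_dump --data-only --table=table_d --table=table_e \
        --format=custom --file=d_and_e.dump "$APP2_DATABASE_URL"

# Load those rows into App 1's database; tables A, B and C are left alone
pg_restore --data-only --dbname="$APP1_DATABASE_URL" d_and_e.dump
```

--data-only means the target tables (D and E, empty in App 1) must already exist, which they do here since both apps share one schema.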
I would fork your database. https://devcenter.heroku.com/articles/heroku-postgres-fork
Create a backup of your db
Fork the db
Hook up App 2 to your 2nd database
Verify App 1 and App 2 are working fine (and not writing to each other's tables)
Delete tables A, B and C from the new (App 2) db
Delete tables D and E from the original (App 1) db
I have a problem with 2 databases that I have created on my local machine. I keep changing one of the database instances (say SID A), while the other instance (say SID B) changes only once every 2-3 weeks. I want to find all the changes I have made on the local DB (procedures, inserts, deletions, functions, etc.) in SID A. Both instances have 10 users, and the changes are spread across all 10 users.
I have tried to do a "diff" in sqldeveloper, but I end up getting a list of all the tables, procedures etc. - all to be created in SID B.
I have seen some tools, ready-made scripts, etc.
Is there a definite way that I am missing? I don't want to do a database export and import every time I want to migrate the changes.
Database: Oracle 10G
Thanks in advance for helping out.
Thanks,
Contrib
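Before reaching for a tool, a quick first pass is Oracle's own data dictionary: DBA_OBJECTS records when each object was last altered, so you can list everything touched in SID A since the last sync (run as a privileged user; adjust the 21-day window to your sync cadence):

```sql
-- Objects changed in the last 21 days, across all schemas
SELECT owner, object_type, object_name, last_ddl_time
FROM   dba_objects
WHERE  last_ddl_time > SYSDATE - 21
ORDER  BY last_ddl_time DESC;
```

This catches DDL changes (tables, procedures, functions) but not data changes (inserts/deletions); for those you would still need triggers, auditing, or a data-compare tool.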
One option is to use a tool like Red Gate's "Schema Compare for Oracle"; it's rock solid and will do exactly what you need it to, pretty much out of the box.
Before going down this sort of route though, I would suggest that you think about how you are deploying changes to your environments. For example, if you stored the incremental DML and DDL changes you made to schema A in source control, you could then play those in against schema B very easily.