How do you configure the number of replicas per table or database? - cockroachdb

In CockroachDB, is it possible to configure the number of replicas per table or database?

Yes. You can use the CONFIGURE ZONE subcommand to do so. For a database, you can configure the number of replicas with ALTER DATABASE <db> CONFIGURE ZONE USING num_replicas = <number>. For a table, the statement is the same except that you replace the DATABASE keyword with TABLE. You can view your replication zone setup with SHOW ZONE CONFIGURATIONS. Please refer to the docs for more information: https://www.cockroachlabs.com/docs/stable/configure-zone.html
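For example, a minimal sketch (my_db, my_table, and the replica counts are illustrative):
-- five replicas for every range in the database
ALTER DATABASE my_db CONFIGURE ZONE USING num_replicas = 5;
-- override to seven replicas for one table in that database
ALTER TABLE my_db.my_table CONFIGURE ZONE USING num_replicas = 7;
-- inspect the resulting zone configurations
SHOW ZONE CONFIGURATIONS;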

Related

How to change table monitoring setting

I'm working on improving the performance of some Oracle tables.
The table I have been working on was created with the 'monitoring' and 'logging' clauses. These two decrease query performance, and I need to change monitoring to nomonitoring (without dropping the table).
This works well:
alter table some_table nologging
But to change monitoring to nomonitoring I use:
alter table some_table nomonitoring
The query executes without any errors, but there is no change in the table structure.
I've been researching on the internet for days, and as far as I can see there is no existing topic for my specific problem.
Thanks in advance.
The monitoring/nomonitoring options are deprecated and are no longer used in Oracle.
Quote from the Oracle 11.2 manual:
Formerly, you enabled DBMS_STATS to automatically gather statistics for a table by specifying the MONITORING keyword in the CREATE (or ALTER) TABLE statement. Starting with Oracle Database 11g, the MONITORING and NOMONITORING keywords have been deprecated and statistics are collected automatically. If you do specify these keywords, they are ignored.
(Emphasis mine)
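If you want to verify that DML monitoring still happens automatically, one option (a sketch; SOME_TABLE stands in for the table above) is to query the *_TAB_MODIFICATIONS view that feeds DBMS_STATS:
-- flush the in-memory monitoring info to the dictionary first
EXEC DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;
-- DML counts recorded since statistics were last gathered
SELECT table_name, inserts, updates, deletes, truncated
FROM user_tab_modifications
WHERE table_name = 'SOME_TABLE';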

Data replication between Oracle and Postgres

Is there a way to replicate data (e.g. with triggers or jobs) from Oracle tables to Postgres tables and vice versa (for a different set of tables) without using external tools? Just one-way replication for both scenarios.
Just a hint:
You can think of creating a DB link from Oracle to Postgres, which is called heterogeneous connectivity and makes it possible to select data from Postgres with a SELECT statement in Oracle.
Then use materialized views to schedule and store the results of those SELECTs.
That is assuming you don't want to use any external tool; otherwise the solution could be much simpler.
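A sketch of that idea (the pg_link database link, the table name, and the refresh interval are illustrative, and the link itself requires an ODBC/heterogeneous services gateway to be configured first):
-- pull the Postgres table over the DB link and refresh the local copy hourly
CREATE MATERIALIZED VIEW mv_pg_orders
REFRESH COMPLETE
START WITH SYSDATE NEXT SYSDATE + 1/24
AS SELECT * FROM "orders"@pg_link;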
For 20 tables I need to replicate data from Oracle to Postgres. For 40 different tables, I need to replicate from Postgres to Oracle.
I could imagine the following setup:
For the Oracle tables that need to be accessible from Postgres, simply create foreign tables inside the Postgres server. They appear to be "local" tables in the Postgres server, but the FDW ("foreign data wrapper") will forward any request to the Oracle server. So no replication is required. Whether or not this will be fast enough depends on how you access the tables. Some operations (WHERE clause, ORDER BY, etc.) can be pushed down to the Oracle server; some will be done by the Postgres server after all the rows have been fetched.
For the Postgres tables that need to be replicated to Oracle you could have a similar setup: create a foreign table that points to the target table in Oracle. Then create triggers on the Postgres table that will update the foreign table, thus sending the changes to Oracle.
This could all be managed on the Postgres side.
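A minimal sketch of the Postgres-to-Oracle direction, assuming the oracle_fdw extension (the server, credentials, table, and column names are all illustrative, and only inserts are forwarded; updates and deletes would need similar triggers):
CREATE EXTENSION oracle_fdw;
CREATE SERVER ora_srv FOREIGN DATA WRAPPER oracle_fdw
  OPTIONS (dbserver '//orahost:1521/ORCL');
CREATE USER MAPPING FOR postgres SERVER ora_srv
  OPTIONS (user 'scott', password 'tiger');
-- foreign table pointing at the target table in Oracle
CREATE FOREIGN TABLE ora_orders (
  id   integer,
  info text
) SERVER ora_srv OPTIONS (schema 'SCOTT', table 'ORDERS');
-- trigger that forwards each insert on the local table to Oracle
CREATE FUNCTION push_to_oracle() RETURNS trigger AS $$
BEGIN
  INSERT INTO ora_orders (id, info) VALUES (NEW.id, NEW.info);
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER orders_push
AFTER INSERT ON orders
FOR EACH ROW EXECUTE PROCEDURE push_to_oracle();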

Postgres - Can a trigger re-order rows in a table that has a clustered index?

I am currently working on migrating a SQL Server database to Postgres. I found that Postgres provides a way to cluster a table based on an index, which is similar to clustered indexes in Microsoft SQL Server. However, per the Postgres documentation, the clustering of the table is a one-time operation: we need to explicitly run the CLUSTER command on the table periodically if we want new insert or update operations to be reflected.
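For reference, that one-time clustering looks like this (table and index names are illustrative):
-- physically rewrite my_table in the order of my_index
CLUSTER my_table USING my_index;
-- later runs can reuse the index remembered from the first CLUSTER
CLUSTER my_table;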
So, I am thinking about adding a trigger that issues the CLUSTER command on that particular table based on an index, so that the result would be similar to a clustered index in MS SQL.
Does anyone have an idea whether there are problems with adding a trigger that clusters the table on each and every update/insert operation?

Best approach for Oracle Database Tables Auto Purging

We have many tables in my Oracle databases for which I need to schedule purging on the basis of a date column.
The current approach we are using: a scheduled job runs a query to move data from the transaction table to a backup table and then deletes the same rows from the transaction table.
Please suggest whether there is a better or built-in approach, e.g. one where I can define the purging logic at table creation time, or anything else.
Database: Oracle 12c EE.
Thanks.
A simplistic approach would be to use partitioning, and partition the table on the date column. You can then purge (drop a partition) or move data out (partition exchange) depending on your needs.
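A sketch of that approach (the table name, columns, and the monthly interval are illustrative):
-- range-partition on the date column, one partition per month
CREATE TABLE transactions (
  txn_id   NUMBER,
  txn_date DATE NOT NULL
)
PARTITION BY RANGE (txn_date)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
(PARTITION p_start VALUES LESS THAN (DATE '2018-01-01'));
-- purging a month then becomes a quick metadata operation
ALTER TABLE transactions DROP PARTITION FOR (DATE '2018-01-15') UPDATE INDEXES;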
There is also a full-blown ILM (Information Lifecycle Management) capability; take a look at
http://www.oracle.com/technetwork/database/focus-areas/performance/implementingilmdb12c-2543023.pdf

Golden Gate replication from primary to secondary database, WARNING OGG-01154 Oracle GoldenGate Delivery for Oracle, rgg1ab.prm: SQL error 1403

I am using GoldenGate to replicate data from the primary to the secondary database. I have inserted records in the primary database, but replication abends with the error message
WARNING OGG-01154 Oracle GoldenGate Delivery for Oracle, rgg1ab.prm: SQL error 1403 mapping primaryDB_GG1.TB_myTableName to secondaryDB.TB_myTableName OCI Error ORA-01403: no data found, SQL < UPDATE ......
The update statement has all the columns from the table in its where clause, whereas there is no such update statement with so many columns in the where clause anywhere in the application.
Can you help with this issue? Why is GoldenGate converting the insert into an update during replication?
I know this is very old, but if you haven't figured out a solution, please provide your prm file if you can. You may have a parameter in there that is converting inserts to updates when the primary key already exists in the target database. It is likely that HANDLECOLLISIONS or CDR is set.
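For illustration, a replicat parameter file with that behavior might look like this (the user and password are made up; REPLICAT rgg1ab and the MAP come from the error message above):
REPLICAT rgg1ab
USERID ogguser, PASSWORD oggpass
-- HANDLECOLLISIONS turns a duplicate-key INSERT into an UPDATE on the
-- target; remove it once the initial load has been reconciled
HANDLECOLLISIONS
MAP primaryDB_GG1.TB_myTableName, TARGET secondaryDB.TB_myTableName;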
For replication, you will have already enabled the transaction log in the source db. Now, you need to run this from ggsci:
"ADD TRANDATA schema_name.table_name, COLS(...)"
In the COLS part, you need to mention the column or columns (comma-separated) that can be used to identify a unique record (you can mention the uniquely indexed columns if present). If there is no unique index on the table and you are not sure which columns could be used to uniquely identify a row, then just run this from ggsci:
"ADD TRANDATA schema_name.table_name"
Then GoldenGate will start logging all the necessary columns for uniquely identifying a row.
Note: This should be done before you start the replication process.
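As an illustration, the whole sequence in a GGSCI session might look like this (the login credentials, schema, table, and column names are hypothetical):
GGSCI> DBLOGIN USERID ogguser, PASSWORD oggpass
GGSCI> ADD TRANDATA hr.employees, COLS (employee_id)
GGSCI> INFO TRANDATA hr.employees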
