How do I drop a table from Slony?
I have a database which is being backed up by Slony. I dropped a table from the replicated DB and re-created the same table using SQL scripts, not through Slony scripts.
I found this in a post and tried it:
1. Recreate the table.
2. Get the OID for the recreated table: SELECT oid FROM pg_class WHERE relname = '<your_table>' AND relkind = 'r';
3. Update the tab_reloid in sl_table for the problem table.
4. Execute SET DROP TABLE (ORIGIN = N, ID = ZZZ); where N is the node number for the master and ZZZ is the ID in sl_table.
But it doesn't seem to work.
How do I drop the table from the replicated DB? Or is there a way to use the newly created table in place of the old one?
The authoritative documentation on dropping things from Slony is here.
It's not really clear what state things were in before you ran the commands above, and you haven't said what "doesn't seem to work" actually means.
There is one significant "gotcha" that I know of with dropping tables from replication with Slony. After you remove a table from replication, with Slony 1.2 you can have trouble actually physically dropping the table on the slaves (but not on the master), getting a cryptic error like this:
ERROR: "table_pkey" is an index
This may be fixed in Slony 2.0. The problem is that there is a referential integrity relationship between the unreplicated table on the slave and the replicated table, and Slony 1.2 intentionally corrupts the system catalogs somewhat as part of its design, causing this issue.
A solution is to run the DROP TABLE command through slonik_execute_script. If you have already physically dropped the table on the master, you can use the EXECUTE ONLY ON option to run the command only on a specific slave. See the docs for EXECUTE SCRIPT for details.
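For example, a minimal slonik sketch (cluster name, conninfos, node IDs and the script path are placeholders for your setup; drop_table.sql would contain just the DROP TABLE statement):

```
cluster name = MYCLUSTER;
node 1 admin conninfo = 'dbname=DATABASENAME host=HOST1_MASTER user=postgres port=5432';
node 2 admin conninfo = 'dbname=DATABASENAME host=HOST2_SLAVE user=postgres port=5432';

# Run drop_table.sql only on the slave (node 2)
execute script (
    set id = 1,
    filename = '/tmp/drop_table.sql',
    event node = 1,
    execute only on = 2
);
```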
You have dropped the table from the database, but you haven't removed it from _YOURCLUSTERNAME.sl_table. The leading "_" before YOURCLUSTERNAME is important.
4 steps to clean up the mess:
1. Get the tab_id:
select tab_id from _YOURCLUSTERNAME.sl_table where tab_relname='MYTABLENAME' and tab_nspname='MYSCHEMANAME';
In MYDATABASE this returns, say, 2.
2. Remove the Slony triggers:
select _YOURCLUSTERNAME.altertablerestore(2);
This can return an error, because it tries to remove the triggers from the original table, and a new table now sits in its place.
3. Drop the Slony index, if one was created:
select _YOURCLUSTERNAME.tableDropKey(2);
This can also return an error, for the same reason: it targets the original table, which has been replaced by the new one.
4. Delete the table from sl_table:
delete from _YOURCLUSTERNAME.sl_table where tab_id = 2;
The best way to drop a table is:
1. Drop the table from the cluster.
First get the tab_id:
select tab_id from _YOURCLUSTERNAME.sl_table where tab_relname='MYTABLENAME' and tab_nspname='MYSCHEMANAME';
In MYDATABASE this returns, say, 2.
Then execute slonik < myfile.slonik, where myfile.slonik is:
cluster name = MYCLUSTER;
NODE 1 ADMIN CONNINFO = 'dbname=DATABASENAME host=HOST1_MASTER user=postgres port=5432';
NODE 2 ADMIN CONNINFO = 'dbname=DATABASENAME host=HOST2_SLAVE user=postgres port=5432';
SET DROP TABLE (id = 2, origin = 1);
Here 2 is the tab_id from sl_table and 1 is node 1, HOST1_MASTER.
2. Drop the table from the slave with a plain SQL DROP TABLE.
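Once the SET DROP TABLE event has propagated, you can verify on each node that the table is gone from Slony's catalog (a sketch; substitute your own cluster name):

```sql
-- Lists the tables Slony still considers replicated
SELECT tab_id, tab_relname, tab_nspname
FROM _MYCLUSTER.sl_table
ORDER BY tab_id;
```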
Related
Does ClickHouseDB ReplicatedMergeTree prevent adding the same column twice?
I'm trying to understand what will happen if my application attempts to add 2 columns simultaneously to a ClickHouseDB table based on the ReplicatedMergeTree engine, using 2 different nodes. Will ClickHouse reject one of the ALTERs, or will one fail to apply? So I have 2 nodes A and B and a table alter_test. I run on node A: ALTER TABLE alter_test ADD COLUMN Added1 UInt32 FIRST; and at the same time on node B: ALTER TABLE alter_test ADD COLUMN Added1 String FIRST; Will one of the statements always fail? The docs say that the ALTERs are executed asynchronously after being registered in ZooKeeper. I guess my question is whether ClickHouse will detect the conflict at the ZooKeeper stage.
Better to use ALTER TABLE alter_test ON CLUSTER 'cluster_name' ADD COLUMN IF NOT EXISTS Added1 UInt32. With IF NOT EXISTS, a duplicate ADD COLUMN becomes a no-op instead of an error.
system.mutations is not responding after I ran an ALTER on table A. Table A is also stuck. I just want to drop table A
I ran an ALTER statement on table A. Then table A got stuck, and system.mutations is not responding. I have waited 3 hours. I tried the following actions: restarting the ClickHouse service, rebooting the ClickHouse server machine, rebooting my client computer. I checked the log and found this: "Current max source part size for mutation is 0 but part size 1513. Will not mutate part all_24_24_0." The SQL I used is like "alter table A update columnA=columnA/(select sum()....) where ....". No other data is being inserted. There are no records in system.merges or system.replication_queue. Table A has around 1000 records. I just want to drop table A so I can recreate it with the historical records, but table A is not responding, and neither is system.mutations.
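One thing usually worth trying in this situation (a sketch, not from the thread; the database and table names are placeholders) is cancelling the stuck mutation before dropping the table:

```sql
-- See which mutations are pending for the table
SELECT mutation_id, command, is_done
FROM system.mutations
WHERE database = 'mydb' AND table = 'A';

-- Cancel the stuck mutation, then drop the table
KILL MUTATION WHERE database = 'mydb' AND table = 'A';
DROP TABLE mydb.A;
```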
Hive External Table - Drop Partition
Facing a weird issue. The ALTER TABLE command to drop a partition works well for the >, <, >= and <= signs, but not for the = check. Working command: ALTER TABLE XYZ DROP PARTITION(bizdate>'20231230'); Command that's not working and throws an error stating that the partition does not exist: ALTER TABLE XYZ DROP PARTITION(bizdate='20231230'); When I do SHOW PARTITIONS, I can see the '20231230' partition. Note: bizdate is a varchar(10).
Check the list of partitions in the table: SHOW PARTITIONS <table>; It seems the data was removed at some point on HDFS, but the Hive table metadata still thinks those partitions exist. Try dropping the partition with ALTER TABLE *tableName* DROP IF EXISTS PARTITION(bizdate='20231230'); and then repair the broken table: MSCK REPAIR TABLE *table_name*;
What's the best way to perform selective record replication on an Oracle database?
Suppose the following scenario: I have a master database that contains lots of data. In this database I have a key table that I'm going to call DataOwners for this example. The DataOwners table has 4 records, and every record in every other table of the database "belongs" directly or indirectly to one record of DataOwners; by "belongs" I mean it is linked to it with foreign keys. I also have 2 slave databases with the exact same structure as my master database, which are only updated through replication from the master database. SlaveDatabase1 should only have records from DataOwner 2, and SlaveDatabase2 should only have records from DataOwners 1 and 3, whereas MasterDatabase has records of DataOwners 1, 2, 3 and 4. Is there any tool for Oracle that allows me to do this kind of selective record replication? If not, is there any way to improve my replication method, which is: add to each table a trigger that inserts the record changes into a group of replication tables, then execute the commands from the replication tables at the selected slaves?
The simplest option would be to define materialized views in the various slave databases that replicate just the data that you want. So, for example, if there is a table A in the master database, then in slave database 1 you'd create a materialized view
CREATE MATERIALIZED VIEW a
<<refresh criteria>>
AS SELECT a.*
   FROM a@to_master a, dataOwners@to_master dm
  WHERE a.dataOwnerID = dm.dataOwnerID
    AND dm.some_column = <<some criteria that selects DataOwner2>>
while slave database 2 has a very similar materialized view
CREATE MATERIALIZED VIEW a
<<refresh criteria>>
AS SELECT a.*
   FROM a@to_master a, dataOwners@to_master dm
  WHERE a.dataOwnerID = dm.dataOwnerID
    AND dm.some_column = <<some criteria that selects DataOwner1 & 3>>
Of course, if the dataOwnerID can be hard-coded, you could simplify things and avoid doing the join. I'm guessing, though, that there is some column in the DataOwners table that identifies which slave a particular owner should be replicated to. Assuming that you want only incremental changes to be replicated, you'd need to create materialized view logs on the base tables in the master database. And you would probably want to configure refresh groups on the slave databases so that all the materialized views refresh at the same time and are transactionally consistent with each other.
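As a sketch of those supporting pieces (the object and group names are illustrative): a materialized view log on each base table in the master, and a refresh group on each slave:

```sql
-- On the master: enables fast (incremental) refresh of the slaves' MViews
CREATE MATERIALIZED VIEW LOG ON a WITH PRIMARY KEY;

-- On a slave: refresh the related MViews together, transactionally consistent
BEGIN
  DBMS_REFRESH.MAKE(
    name      => 'owner_refresh_group',
    list      => 'A',
    next_date => SYSDATE,
    interval  => 'SYSDATE + 1/24');  -- refresh hourly
END;
/
```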
Oracle GoldenGate software can do all of these tasks. Inserts/updates/deletes are applied in the same order as on the master DB, so it avoids foreign key and other constraint issues. The MasterDatabase Extract generates a trail file, and the data is then split out to DBs 1, 2, 3 and 4. It can also do multi-directional replication, i.e. DB 1 sends data back to the master DB. Besides GoldenGate, triggers may be your other option, but that requires some programming.
Why is Oracle giving the error "no more tables permitted in this cluster"?
I am getting the error "no more tables permitted in this cluster" when trying to create a clustered table. Oracle's documentation on the cluster feature says: "A cluster can contain a maximum of 32 tables." But at the time of the error, the cluster contains only 18 tables as per the following query: select * from user_tables where cluster_name='MY_CLUSTER'; I suspect that tables that were once part of the cluster but were later dropped are still being counted towards the maximum allowed limit. Is there a way to check this hypothesis?
Check the view USER_CLU_COLUMNS. If you've dropped a table, it may still be listed there, but with an internal name rather than its original one: select count(distinct TABLE_NAME) from USER_CLU_COLUMNS where CLUSTER_NAME = 'MY_CLUSTER'; This may be because your database has a recycle bin. Check this: select * from RECYCLEBIN where ORIGINAL_NAME = '<your table>'; Check this link out for more information on the recycle bin.
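If the hypothesis holds, purging the recycle bin should release the slots in the cluster (assuming you don't need to restore any of the dropped tables with FLASHBACK TABLE ... TO BEFORE DROP; the table name below is a placeholder):

```sql
-- Purge a single dropped table by its original name
PURGE TABLE MY_DROPPED_TABLE;

-- Or empty the current user's recycle bin entirely
PURGE RECYCLEBIN;
```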