Why is Oracle giving the error "no more tables permitted in this cluster"?

I am getting the error "no more tables permitted in this cluster" when trying to create a clustered table.
Oracle's documentation about the cluster feature says:
A cluster can contain a maximum of 32 tables.
But at the time of the error, the cluster contains only 18 tables, as per the following query:
select * from user_tables where cluster_name='MY_CLUSTER';
I suspect that the tables that were once part of the cluster but were later dropped are still being counted towards that maximum allowed limit.
Is there a way to check the above hypothesis?

Check the view USER_CLU_COLUMNS. If you've dropped a table, it may still be listed there, but under an internal name rather than its original name.
select count(distinct (TABLE_NAME))
from USER_CLU_COLUMNS
where CLUSTER_NAME = 'MY_CLUSTER';
This may be because your database has a recycle bin. Check this:
select *
from RECYCLEBIN
where ORIGINAL_NAME = '<your table>';
See Oracle's documentation on the recycle bin for more information.
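If the dropped tables do show up there, purging them should release the cluster's table slots. A minimal sketch (PURGE permanently removes the dropped tables, so be sure you no longer need them; the table name is a placeholder):
-- purge one dropped table by its original name
purge table MY_DROPPED_TABLE;
-- or empty the current user's recycle bin entirely
purge recyclebin;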

Related

Oracle v$archive_gap returns no rows selected

In my Data Guard environment, the alert log shows the logs are applied on the standby.
When I query the standby and the primary using v$archive_gap, the query returns no rows selected. Is that because there is no gap?
Is a row returned only when there is an archive gap on the standby database?
This query reports the range of missing archive log files; if none are missing, it returns no rows.
select * from v$archive_gap;
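When a gap does exist, the row identifies the redo thread and the sequence range of the missing archived logs, so you can select those columns explicitly:
select thread#, low_sequence#, high_sequence#
from v$archive_gap;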

Purging Oracle Unified Audit Trail doesn't cleanup lob data

I'm experiencing rapid growth in my SYSAUX tablespace. I have found that the majority of the space (27 GB) is being consumed by a LOBSEGMENT object in the AUDSYS schema. My research suggested that the unified audit trail needed to be purged, so I went ahead and cleaned it up, as it was really massive. However, space has not been released from the LOBSEGMENT, and I'm wondering if there is a way to do this?
DB Version: Oracle Database 12c Release 12.1.0.1.0 - 64bit Production
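For context, purging the unified audit trail on 12c is normally done through DBMS_AUDIT_MGMT, roughly as in this sketch; the USE_LAST_ARCH_TIMESTAMP => FALSE choice (purge everything) is an assumption, not necessarily what was run:
begin
  -- purge the unified audit trail; FALSE purges all records,
  -- not only those already marked as archived
  dbms_audit_mgmt.clean_audit_trail(
    audit_trail_type        => dbms_audit_mgmt.audit_trail_unified,
    use_last_arch_timestamp => false);
end;
/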
I used the query below to identify large objects in the system:
select s.owner, s.segment_name, s.segment_type, s.tablespace_name, sum(s.BYTES) /1024/1024/1024 SIZE_GB
from DBA_SEGMENTS s
group by s.owner, s.segment_name, s.segment_type, s.tablespace_name;
From there, I identified the table associated with the largest segment with the query below:
select * from dba_lobs where SEGMENT_NAME='SYS_LOB0000019764C00014$$';
The LOG_PIECE column of the AUDSYS.CLI_SWP$ea27aff$1$1 table was identified, but I cannot query the table directly. Even when connected as SYSDBA, when I try to query the table to find out what is in it, I get "ORA-00942: table or view does not exist". I also cannot find any reference to the table or column in any other views, procedures, synonyms, etc. in the DB, so I have no idea how to view the contents of the table in order to figure out what it is.
When I look at the Unified Audit Trail, I can't find anything that would link to this column either.
After purging, I did another backup of the system in the hope that it might release unused space, but the space is still being used; the purge did not clean it up.
Any ideas on (1) how to figure out what is in the table/column and (2) how to clean it up would really be appreciated, as I'm at a bit of a loss here.

Cannot select Oracle table from another server with schema name prefix

A few days ago there were network problems. One of the database hard disk partitions also ran out of space, but that has since been fixed.
Additional note: one of the DBAs compressed the archive logs to spare some space while the problem was happening.
One outcome is that I now CANNOT select one particular table from the Oracle database on the other server when using the schema name prefix.
For example, if I run this query as one of the schemas/users in database1, from Toad or SQL*Plus:
select * from office.room@database2
The query runs forever and never stops.
Usually this is not a problem. The other tables are fine; I can select them using office.*@database2 queries.
The other odd thing is that if I use a SYNONYM, I CAN select from that table. Let's say the table has the synonym 'room' on database2; this query is OK:
select * from room@database2
The table itself on database2 is OK: if I log in to the "office" schema on database2, I can select the table's data.
I still cannot find out what is causing this problem.
A new finding: I can select from the table without it hanging if I add a WHERE filter or select specific columns, for example:
select * from office.room@database2 where roomnumber='A';
or
select roomname, roomnumber from office.room@database2;
But select * from office.room@database2 still hangs.
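Since the synonym path works while the schema-prefixed path hangs, one thing worth checking is what the synonym on database2 actually resolves to. A sketch, reusing the names from the question:
select synonym_name, table_owner, table_name
from all_synonyms@database2
where synonym_name = 'ROOM';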

GoldenGate replication from primary to secondary database: WARNING OGG-01154 Oracle GoldenGate Delivery for Oracle, rgg1ab.prm: SQL error 1403

I am using GoldenGate to replicate data from a primary to a secondary database. I have inserted records into the primary database, but replication abends with the error message:
WARNING OGG-01154 Oracle GoldenGate Delivery for Oracle, rgg1ab.prm: SQL error 1403 mapping primaryDB_GG1.TB_myTableName to secondaryDB.TB_myTableName OCI Error ORA-01403: no data found, SQL < UPDATE ......
The UPDATE statement has all the columns of the table in its WHERE clause, whereas there is no such UPDATE statement with so many columns in the WHERE clause anywhere in the application.
Can you help with this issue? Why is GoldenGate converting the INSERT into an UPDATE during replication?
I know this is very old, but if you haven't figured out a solution, please provide your prm file if you can. You may have a parameter in there that is converting inserts to updates based upon a PK already existing in the target database. It is likely that HANDLECOLLISIONS or CDR is set.
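For reference, a Replicat prm file with that behavior enabled looks roughly like this (a sketch: the MAP line reuses the names from the error message, everything else is illustrative):
REPLICAT rgg1ab
USERID ggadmin, PASSWORD ggadmin
-- HANDLECOLLISIONS turns an INSERT that collides with an existing key into an UPDATE
HANDLECOLLISIONS
MAP primaryDB_GG1.TB_myTableName, TARGET secondaryDB.TB_myTableName;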
For replication, you will already have enabled the transaction log in the source DB. Now you need to run, from ggsci:
ADD TRANDATA schema_name.table_name, COLS (...)
In the COLS part, mention the column or columns (comma separated) that can be used to identify a unique record (you can list the unique indexed columns if present). If there is no unique index on the table and you are not sure which columns could uniquely identify a row, then just run, from ggsci:
ADD TRANDATA schema_name.table_name
GoldenGate will then start logging all the columns necessary to uniquely identify a row; see the example session after the note below.
Note: This should be done before you start the replication process.
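For example, with a hypothetical HR.EMPLOYEES table uniquely identified by EMPLOYEE_ID (all names and credentials here are illustrative):
GGSCI> DBLOGIN USERID ggadmin, PASSWORD ggadmin
GGSCI> ADD TRANDATA hr.employees, COLS (employee_id)
GGSCI> INFO TRANDATA hr.employees
INFO TRANDATA then confirms which columns are being supplementally logged.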

How do I drop a table from Slony?

I have a database which is being replicated by Slony. I dropped a table from the replicated DB and re-created the same table using plain SQL scripts, with nothing going through Slony scripts.
I found this on a post and tried it:
1. Recreate the table
2. Get the OID for the recreated table: SELECT oid FROM pg_class WHERE relname = '<your_table>' AND relkind = 'r';
3. Update the tab_reloid in sl_table for the problem table.
4. Execute SET DROP TABLE ( ORIGIN = N, ID = ZZZ ); where N is the node # for the master, and ZZZ is the ID # in sl_table.
But it doesn't seem to work.
How do I drop the table from the replicated DB? Or is there a way to use the newly created table in place of the old one?
The authoritative documentation on dropping things from Slony is the Slony manual.
It's not really clear what state things were in before you ran the commands above, and you haven't clarified what "doesn't seem to work" means.
There is one significant "gotcha" that I know of with dropping tables from replication with Slony. After you remove a table from replication, you can have trouble actually physically dropping the table on the slaves (but not on the master) with Slony 1.2, getting a cryptic error like this:
ERROR: "table_pkey" is an index
This may be fixed in Slony 2.0, but the problem in 1.2 is that there is a referential integrity relationship between the unreplicated table on the slave and the replicated table, and Slony 1.2 intentionally corrupts the system catalogs somewhat as part of its design, causing this issue.
A solution is to run the DROP TABLE command through slonik_execute_script. If you have already physically dropped the table on the master, you can use the "EXECUTE ONLY ON" option to run the command only on a specific slave. See the docs for EXECUTE SCRIPT for details.
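A sketch of what that slonik script might look like (cluster name, conninfo strings, set ID, and the script path are all illustrative):
cluster name = MYCLUSTER;
node 1 admin conninfo = 'dbname=DATABASENAME host=HOST1_MASTER user=postgres port=5432';
node 2 admin conninfo = 'dbname=DATABASENAME host=HOST2_SLAVE user=postgres port=5432';
# drop_table.sql contains the plain DROP TABLE statement;
# EXECUTE ONLY ON restricts it to slave node 2, since the
# table is already gone on the master
EXECUTE SCRIPT (
    SET ID = 1,
    FILENAME = '/tmp/drop_table.sql',
    EVENT NODE = 1,
    EXECUTE ONLY ON = 2
);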
You have dropped the table from the database, but you haven't dropped it from _YOURCLUSTERNAME.sl_table.
The "_" before YOURCLUSTERNAME is important.
Four steps to solve the mess:
1. Get the tab_id:
select tab_id from _YOURCLUSTERNAME.sl_table where tab_relname='MYTABLENAME' and tab_nspname='MYSCHEMANAME';
In MYDATABASE it returned the number 2.
2. Delete the triggers:
select _YOURCLUSTERNAME.altertablerestore(2);
This can return an error, because it's trying to delete the triggers on the original table, and there is now a new one.
3. Delete the Slony index, if one was created:
select _YOURCLUSTERNAME.tableDropKey(2);
This can return an error, because it's trying to delete an index on the original table, and there is now a new table.
4. Delete the table from sl_table
delete from _YOURCLUSTERNAME.sl_table where tab_id = 2;
The best way to drop a table is:
1. Drop the table from the cluster:
select tab_id from _YOURCLUSTERNAME.sl_table where tab_relname='MYTABLENAME' and tab_nspname='MYSCHEMANAME';
In MYDATABASE it returned the number 2.
Execute with slonik < myfile.slonik
where myfile.slonik is:
cluster name=MYCLUSTER;
NODE 1 ADMIN CONNINFO = 'dbname=DATABASENAME host=HOST1_MASTER user=postgres port=5432';
NODE 2 ADMIN CONNINFO = 'dbname=DATABASENAME host=HOST2_SLAVE user=postgres port=5432';
SET DROP TABLE (id = 2, origin = 1);
Here 2 is the tab_id from sl_table, and 1 is node 1, HOST1_MASTER.
2. Drop the table on the slave with a plain SQL DROP TABLE.
