I'm using Oracle 12c, and one of my partitioned tables exists on the primary database.
However, on my standby DB, when I select data from the same table, it throws the error below:
ORA-08103: object no longer exists
It's very strange that my standby, which is in sync with the primary, is behaving like this.
I'm able to access all other objects and tables, and their data, without any problem.
What could possibly be causing this error, and how do I troubleshoot it? Any help will be appreciated.
I'm trying to replicate several schemas in an Oracle database to a PostgreSQL database.
When the DMS task is started with the "Full load, ongoing replication" type, the task fails after some time while the tables are still in the "Before Load" status. This is the error I'm getting when the task fails:
Last Error Task error notification received from subtask 0, thread 0 [reptask/replicationtask.c:2673] [1022301]
Oracle CDC stopped; Error executing source loop; Stream component failed at subtask 0,
component st_0_LBI2ND3ZI65BF6DQQYK4ITPYAY ; Stream component 'st_0_LBI2ND3ZI65BF6DQQYK4ITPYAY'
terminated [reptask/replicationtask.c:2680] [1022301] Stop Reason FATAL_ERROR Error Level FATAL
However, when the same tables are added to a task with the "Full load" type, it works without any issue. The error occurs only when trying to run the task for replicating ongoing changes.
I tried searching for this error but couldn't find an exact reason. I have configured the endpoints properly, and both the source and target endpoints have the required permissions for replicating changes. How can I get this resolved?
For the replication to work properly, you need to enable SUPPLEMENTAL LOGGING across all the required tables in your source DB.
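As a minimal sketch (the schema and table names are placeholders; run as a user with the required ALTER DATABASE / ALTER TABLE privileges):
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
ALTER TABLE your_schema.your_table ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;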
This can be due to multiple reasons, although the basic cause remains the same: DMS is not able to read the logs in your Oracle database and it times out.
Before proceeding, I assume you have followed all the steps mentioned in the AWS documentation for CDC setup here.
As mentioned in the answer above, supplemental logging should be enabled at the database level, as well as for all columns and primary keys at the table level, e.g.:
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
ALTER TABLE schema_name.table_name ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
ALTER TABLE PCUSER.PC_POLICY ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
The log retention period should be long enough that CDC can read the logs before they are deleted. Here is the troubleshooting link for this issue in the AWS docs.
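As a quick sanity check (a sketch; it assumes the DMS user can query V$ARCHIVED_LOG), you can see how far back the archived logs that are still available actually go:
SELECT MIN(first_time) FROM V$ARCHIVED_LOG WHERE deleted = 'NO';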
The DMS user that you are using should have read/write/alter access for all the schemas you are trying to read from. In my case it happened several times that, after adding new tables to the schema, I got this error again because the user I was using did not have access to read the newly added tables.
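For example, granting read access on a newly added table could look like this (schema, table, and user names are placeholders for your environment):
GRANT SELECT ON schema_name.newly_added_table TO dms_user;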
It also depends on what you are using to mine the logs. If it is LogMiner, the setup is quite simple; for Binary Reader there are a few extra commands you need to execute, which are mentioned in the setup documentation.
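If you do switch to Binary Reader, the source endpoint is typically pointed at it with extra connection attributes along these lines (verify against the AWS DMS documentation for your version):
useLogminerReader=N;useBfile=Y;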
Log in to the database using the same user you are using in DMS and check whether the archived redo logs exist:
SELECT * FROM V$ARCHIVED_LOG;
Also check the DEST_ID column in the query output. As far as I have read, the default value on DMS is 0. You can check this for your database and set it in the extra connection attributes:
archivedLogDestId=1;
Check whether there are multiple DEST_IDs for your logs. For example, if you see DEST_ID 1 in the output above, confirm using:
SELECT * FROM V$ARCHIVED_LOG WHERE dest_id NOT IN (1)
This should return nothing, but if it does return records, copy those extra DEST_IDs and put them in the connection attribute below:
additionalArchivedLogDestId=[0,2,3, ...,n]
Finally, if this doesn't work, enable detailed debug logging on the task. In our case LogMiner, and thus the DMS user, did not have access to read the redo logs.
A few extra connection attributes that I used for LogMiner, which may help you:
addSupplementalLogging=Y;useLogminerReader=Y;archivedLogDestId=1;additionalArchivedLogDestId=[0,2,3];ignoreTxnCtxValidityCheck=false;cdcTimeout=1200
In one of my Oracle databases, I tried to enable some constraints on a particular table. It reports that the table is altered and no error is shown.
But when I query all_constraints, the constraint is still disabled. I tried this several times and got the same result. On another database it works fine. It seems weird to me.
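For reference, the kind of statements involved look like this (schema, table, and constraint names are placeholders):
ALTER TABLE my_schema.my_table ENABLE CONSTRAINT my_constraint;
SELECT constraint_name, status FROM all_constraints WHERE owner = 'MY_SCHEMA' AND table_name = 'MY_TABLE';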
After restarting the Impala server, we are not able to see the tables (i.e. the tables are not coming up). Can anyone tell me what order we have to follow to avoid this issue?
Thanks,
Srinivas
You should try running "invalidate metadata;" from impala-shell. This usually clears up tables not being visible, since Impala caches metadata.
From:
https://www.cloudera.com/documentation/enterprise/5-8-x/topics/impala_invalidate_metadata.html
The following example shows how you might use the INVALIDATE METADATA
statement after creating new tables (such as SequenceFile or HBase tables) through the Hive shell. Before the INVALIDATE METADATA statement was issued, Impala would give a "table not found" error if you tried to refer to those table names.
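A minimal sketch of running this from impala-shell (the database and table names are placeholders):
-- refresh the cached catalog after tables were created outside Impala
INVALIDATE METADATA;
-- or limit it to a single table
INVALIDATE METADATA my_db.my_table;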
A few days ago, while performing a database copy from a remote server to a local server, I got some warnings, and one of them was like this:
"Error occured executing DDL for TABLE:MASTER_DATA".
And then I clicked Yes, but the result of the database copy was unexpected: only a few tables had been copied.
When I tried to view the DDL from the SQL section/tab of one of the tables, I got this kind of information:
-- Unable to render TABLE DDL for object COMPANY_DB_PROD.MASTER_DATA with DBMS_METADATA attempting internal generator.
I also got the message below, and I believe it showed up because there's something wrong with the DDL in my database, so the tables won't be created.
ORA-00942: table or view does not exist
I've never encountered this problem before, and I have been performing this database copy every day for the past two years.
For the record, before this problem occurred I removed old .arch files manually, not via RMAN (I have never used any RMAN commands). I also removed old .xml log files, because these two types of files had filled up my remote server's storage.
How can I trace and fix this kind of problem? Is there any corruption in my Oracle database?
Thanks in advance.
The problem turned out to be a datafile that had reached its maximum size. I resolved it by following the answer in this discussion: ORA-01652: unable to extend temp segment by 128 in tablespace SYSTEM: How to extend?
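For reference, the fix boils down to giving the full datafile room to grow, roughly along these lines (the file path and sizes are placeholders for your environment):
ALTER DATABASE DATAFILE '/path/to/system01.dbf' RESIZE 2G;
ALTER DATABASE DATAFILE '/path/to/system01.dbf' AUTOEXTEND ON NEXT 128M MAXSIZE UNLIMITED;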
Anyway, thank you everyone for the help.
I am facing strange behaviour in my Oracle 10g DB. Frequently I am unable to update one of my tables via SQL Developer. When I fire an UPDATE query, it is still running after more than 1000 seconds. The table holds very little data.
How can I resolve this issue? Is there any method?
Regards,
Jerald.