We have a system status table A in our DB, and an application process selects from and updates that table about 4 times per second, so huge audit logs are being generated.
So I tried
NOAUDIT ALL on schema.A;
but audit logs are still being generated. Why?
And how do I find out which AUDIT statements were previously issued?
NOAUDIT takes effect only for new sessions; you must restart the existing sessions for them to stop logging to the audit trail.
Also check that you have no trigger on that table that does its own auditing on top of Oracle's mechanism.
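To answer the second question, the audit options currently in effect (i.e. the AUDIT statements fired earlier) can be queried from the data dictionary. A minimal sketch, assuming traditional (pre-unified) auditing and that the table is SCHEMA.A:
-- statement-level audit options currently in effect
select * from dba_stmt_audit_opts;
-- object-level audit options for the table in question
select * from dba_obj_audit_opts
where owner = 'SCHEMA' and object_name = 'A';
-- and, per the answer above, check for triggers doing their own auditing
select trigger_name, status
from dba_triggers
where table_owner = 'SCHEMA' and table_name = 'A';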
We have an Oracle database installed in an Azure Virtual Machine sitting in its own private VNET. We would like to capture the insert, update, and delete events happening on the Oracle DB records and feed these events to some kind of queue (Service Bus Queue, Event Grid, Event Hub, etc.) which can then be processed by an Azure Function or Azure Logic App.
What will be the best way to capture these events in Azure?
I don't know the details of Azure, but I would start in the Oracle database itself, either by using the built-in auditing features or by using custom triggers if you need more control over what must be audited.
If you use the built-in auditing, you then just select from the auditing views; when using a trigger, you log all the needed auditing information in the trigger and then select from the custom audit tables.
Example for auditing:
create audit policy my_audit_policy actions all on hr.regions;
audit policy my_audit_policy;
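Once the policy is enabled, the captured events can be read from the audit views mentioned above; a sketch, assuming Oracle 12c+ unified auditing:
select event_timestamp, action_name, sql_text
from unified_audit_trail
where object_schema = 'HR' and object_name = 'REGIONS'
order by event_timestamp desc;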
Example for trigger:
create trigger aud_regions_trigger
after insert or delete or update
on hr.regions
for each row
begin
  -- log the needed auditing information; hr.regions_audit is a
  -- hypothetical audit table you would create yourself
  insert into hr.regions_audit (changed_at, changed_by, old_region_id, new_region_id)
  values (systimestamp, user, :old.region_id, :new.region_id);
end;
/
I'm trying to replicate several schemas in an Oracle database to a PostgreSQL database.
When the DMS task is started with the Full load, ongoing replication type, the task fails after some time while the tables are in the Before Load status. This is the error I'm getting when the task fails:
Last Error Task error notification received from subtask 0, thread 0 [reptask/replicationtask.c:2673] [1022301]
Oracle CDC stopped; Error executing source loop; Stream component failed at subtask 0,
component st_0_LBI2ND3ZI65BF6DQQYK4ITPYAY ; Stream component 'st_0_LBI2ND3ZI65BF6DQQYK4ITPYAY'
terminated [reptask/replicationtask.c:2680] [1022301] Stop Reason FATAL_ERROR Error Level FATAL
However, when the same tables are added to a task with the Full Load type, it works without any issue. The error occurs only when trying to run the task for replicating ongoing changes.
I tried searching for this error but couldn't find an exact reason. I have configured the endpoints properly, and both source and target endpoints have the required permissions for replicating changes. How can I get this resolved?
For the replication to work properly you need to enable SUPPLEMENTAL LOGGING on all the required tables in your source DB.
This can be due to multiple reasons, although the basic cause remains the same: DMS is not able to read the logs in your Oracle database and it times out.
Before proceeding, I assume you have followed all the steps mentioned in the AWS documentation for CDC setup.
As mentioned in the answer above, supplemental logging should be enabled at the database level as well as for all columns and primary keys at the table level, e.g.:
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
ALTER TABLE schema_name.table_name ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
ALTER TABLE PCUSER.PC_POLICY ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
The log retention period should be long enough that CDC can read the logs before they are deleted. The AWS docs have a troubleshooting page for this issue.
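If your source happens to be Amazon RDS for Oracle, the archived log retention can be raised with the rdsadmin package; a sketch, assuming 24 hours is enough for your workload:
exec rdsadmin.rdsadmin_util.set_configuration('archivelog retention hours', 24);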
The DMS user that you are using should have read/write/alter access for all the schemas you are trying to read from. In my case it happened several times that after adding new tables to the schema I got this error again, because the user I was using did not have access to read the newly added tables.
It also depends on what you are using to mine the logs. If it is LogMiner, the setup is quite simple; for Binary Reader there are a few extra commands you need to execute, which are mentioned in the setup documentation.
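For reference, which reader is used is also controlled by extra connection attributes; a sketch, assuming you want Binary Reader instead of LogMiner:
useLogminerReader=N;useBfile=Y;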
Log in to the database using the same user you are using on DMS and check that the archived redo logs exist:
SELECT * FROM V$ARCHIVED_LOG;
Also check the DEST_ID column in V$ARCHIVED_LOG. As far as I have read, the default value on DMS is 0. You can check this for your database and set it in the extra connection attributes:
archivedLogDestId=1;
Check if there are multiple DEST_IDs for your logs. For example, if you see the DEST_ID as 1, confirm using:
SELECT * FROM V$ARCHIVED_LOG WHERE dest_id NOT IN (1);
This should return nothing, but if it does return records, copy those extra DEST_IDs and paste them into the connection attribute below:
additionalArchivedLogDestId=[0,2,3, ...,n]
Finally, if this doesn't work, enable detailed debug logging (the DMS documentation describes how). In our case LogMiner, and thus the DMS user, did not have access to read the redo logs.
A few extra connection attributes that I used, which may help you with LogMiner:
addSupplementalLogging=Y;useLogminerReader=Y;archivedLogDestId=1;additionalArchivedLogDestId=[0,2,3];ignoreTxnCtxValidityCheck=false;cdcTimeout=1200
I am trying to alter a table by adding a column, but it's giving the following error:
ALTER TABLE TUSER
ADD CREATED_BY VARCHAR2(250)
SQL Error: ORA-14411: The DDL cannot be run concurrently with other DDLs
How do I unlock the resource that is causing this error?
A little old question, but I found another solution.
It looks like this error can also occur due to locks on the table (many users working on the same table, etc.).
You can kill sessions via the menu: Tools --> Monitor Sessions --> choose a connection.
There you should see a table with all running sessions: the command, the user, and more.
Right click --> Kill session.
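If you prefer plain SQL over the SQL Developer GUI, a sketch (using the TUSER table from the question) to find the sessions holding locks on it:
select s.sid, s.serial#, s.username
from v$session s
join v$locked_object lo on lo.session_id = s.sid
join dba_objects o on o.object_id = lo.object_id
where o.object_name = 'TUSER';
-- then, for a session you are sure can be killed:
-- alter system kill session 'sid,serial#';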
My colleague had the same problem in Oracle SQL Developer: he executed a DDL statement and the machine took forever. Somehow it was not reachable, and after some time it responded again. No idea what happened, and my colleague called me for help.
After the machine answered again, he tried to execute the same statement, which returned an ORA-14411.
The solution was to just click Rollback in the same prompt frame, and after that we were able to re-execute the same statement successfully.
Try this:
ALTER TABLE TUSER
RENAME TO new_TUSER;
ALTER TABLE new_TUSER
ADD (CREATED_BY VARCHAR2(250));
ALTER TABLE new_TUSER
RENAME TO TUSER;
I have an Oracle instance with 8 users/schemas already, but since late last week I am unable to create any new users on that instance. When I run the create user script it just keeps running...
This is a development box and I have full access to it. I am not a DBA, so how do I troubleshoot to find out what the issue could be? And what could the issue be?
Here is the create user script:
create user usr_ARCHIVE identified by usr_ARCHIVEpw
default tablespace USERS
temporary tablespace TEMP
profile DEFAULT
quota UNLIMITED on USERS;
Please have a look at the user's tablespace to see whether it has enough free space. I have faced similar issues in the past.
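A quick way to check; a sketch, assuming the USERS tablespace from the script above:
select tablespace_name, round(sum(bytes)/1024/1024) as free_mb
from dba_free_space
where tablespace_name = 'USERS'
group by tablespace_name;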
It's probably waiting or trying to obtain a lock. To determine what is going on you need to enable tracing. Before executing the create user statement, execute the following command:
alter session set events '10046 trace name context forever, level 12';
This will create a trace file in the trace directory. By default this is the same directory where the alert.log file is stored. Analyze the trace file and especially check for the lines that start with WAIT.
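If you are not sure where that directory is, the location can be queried; a sketch, assuming Oracle 11g+ with the Automatic Diagnostic Repository:
select value from v$diag_info where name = 'Diag Trace';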
The problem was another 10g TNS listener that was running. Once the 10g listener was stopped and the 11g listener restarted, the issue was resolved.
I am trying to drop a tablespace in Oracle 10g using my application.
A bit about my application: in my application I can create tablespaces.
Now what happens in Oracle is that when you create a tablespace, a new user automatically gets created and is attached to the database.
When you have to drop a tablespace, what one has to do is first drop the user connected to the database and then the tablespace.
When I try to drop a user associated with a tablespace, an exception is thrown by the database, which is System.Data.OracleClient.OracleException.
The details of the exception are as follows: ORA-01904 (cannot drop a user that is currently connected).
The thing is, I have closed all the connections. Pretty sure about this.
Still Oracle is throwing this exception and is not able to drop the user.
Any suggestions?
It can happen that you closed the applications but did not end the Oracle sessions for that user. Log in as SYSDBA and query the active sessions:
SQL> select sid, serial#, username from v$session;
SID SERIAL# USERNAME
---------- ---------- ------------------------------
122 2557 SYS
126 7878 SOME_USER
If you find your user in this list, then kill all of its sessions:
SQL> alter system kill session 'sid,serial#';
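For example, for the SOME_USER session shown in the output above, that would be:
SQL> alter system kill session '126,7878';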
It seems your error code is ORA-01940 and not ORA-01904, which says:
ORA-01940: cannot DROP a user that is currently logged in
Cause: An attempt was made to drop a user that was currently logged in.
Action: Make sure the user is logged out, then re-execute the command.
The link below might also help:
http://www.dba-oracle.com/t_ora_01940_cannot_drop_user.htm
We do the following and it works:
ALTER TABLESPACE "OUR_INDEX" OFFLINE NORMAL;
DROP TABLESPACE "OUR_INDEX" INCLUDING CONTENTS AND DATAFILES CASCADE CONSTRAINTS;
Please be sure that the user you're trying to drop is not currently connected. I encountered this problem last year. My workaround was to restart the database; once the database was up, I dropped the user.
Another workaround I haven't tried is to restart the listener. This too (logically) can ensure that the 'to be dropped' user is not connected while the listener is down.
This workaround (of course) cannot be used on a production database.
A user does NOT automatically get created when you create a tablespace.
A user does get assigned a default tablespace. They may (or may not) create objects in that tablespace. They may (or may not) create objects in other tablespaces too.
Generally, rather than dropping the user, I would drop the user's objects, then lock the account so they can't log in again, then revoke any privileges they have.
If desired, you can then drop the 'unused' users after a month or so.
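A minimal sketch of that sequence, assuming a hypothetical user SOME_USER:
-- lock the account so nobody can log in again
alter user some_user account lock;
-- revoke directly granted system privileges
revoke all privileges from some_user;
-- after a month or so, if nothing broke, drop the user and any
-- remaining objects in one go
drop user some_user cascade;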