Purging Oracle Unified Audit Trail doesn't clean up LOB data

I'm experiencing rapid growth in my SYSAUX tablespace. I have found that the majority of the space (27 GB) is being consumed by a LOBSEGMENT object in the AUDSYS schema. The research I did suggested that the Unified Audit Trail needed to be purged, and I went ahead and cleaned it up as it was really massive. However, the space has not been released from the LOBSEGMENT, and I'm wondering if there is a way to do this.
DB Version: Oracle Database 12c Release 12.1.0.1.0 - 64bit Production
I used the query below to identify large objects in the system:
select s.owner, s.segment_name, s.segment_type, s.tablespace_name, sum(s.BYTES)/1024/1024/1024 SIZE_GB
from DBA_SEGMENTS s
group by s.owner, s.segment_name, s.segment_type, s.tablespace_name
order by SIZE_GB desc;
From there, I identified the table associated with the largest segment using the query below:
select * from dba_lobs where SEGMENT_NAME='SYS_LOB0000019764C00014$$';
The LOG_PIECE column of the AUDSYS.CLI_SWP$ea27aff$1$1 table was identified, but I cannot query the table directly. Even connected as SYSDBA, when I try to query the table to find out what is in it, I get "ORA-00942: table or view does not exist". I also cannot find any reference to the table or column in any other views, procedures, synonyms, etc. in the DB, so I have no idea how to view the contents of the table in order to figure out what it is.
When I look at the Unified Audit Trail, I can't find anything that would link to this column either.
After purging, I did another backup of the system in the hope that it might release the unused space, but the space is still being used; the purge did not clean it up.
Any ideas on (1) how to figure out what is in the table/column and (2) how to clean it up would really be appreciated, as I'm at a bit of a loss here.
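For reference, the purge I ran looked roughly like the following (the cutoff timestamp is illustrative; DBMS_AUDIT_MGMT.AUDIT_TRAIL_UNIFIED is the constant for the unified trail):
begin
  -- mark everything up to now as archived (illustrative cutoff)
  dbms_audit_mgmt.set_last_archive_timestamp(
    audit_trail_type  => dbms_audit_mgmt.audit_trail_unified,
    last_archive_time => systimestamp);
  -- purge unified audit records up to that timestamp
  dbms_audit_mgmt.clean_audit_trail(
    audit_trail_type        => dbms_audit_mgmt.audit_trail_unified,
    use_last_arch_timestamp => true);
end;
/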

Related

SQL to limit the types of tables returned when connecting to Oracle Database in Power BI

I have got a connection to an Oracle Database set up in Power BI.
This is working fine, apart from the fact it brings back 9500+ tables that start with "BIN".
Is there some SQL code I can put in the SQL statement section when connecting to the Oracle Database that limits the tables that it returns to ignore any table that begins with 'BIN'?
Tables starting with BIN$ are tables that have been dropped but not purged and are in Oracle's "recycle bin".
The simplest method of not showing them is, if they are no longer required, to PURGE (delete) the tables from the recycle bin; then you will not see them because they will no longer exist.
You can use (documentation link):
PURGE TABLE "BIN$0+xyzabcdefghi123"; to get rid of an individual table with that name (the BIN$ name must be double-quoted because of its special characters; or you can use the original name of the table).
PURGE TABLESPACE tablespace_name USER username; to get rid of all recycled tables in a tablespace belonging to a single user.
PURGE TABLESPACE tablespace_name; to get rid of all recycled tables in a tablespace.
PURGE RECYCLEBIN; to get rid of all of the current user's recycled tables.
PURGE DBA_RECYCLEBIN; to get rid of everything in the recycle bin (assuming you have SYSDBA privileges).
Before purging tables you should make sure that they are really not required as you would need to restore from backups to bring them back.
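To check what is actually sitting in the recycle bin before purging anything, you can query the DBA_RECYCLEBIN (or USER_RECYCLEBIN) dictionary view, for example:
select owner, object_name, original_name, type, droptime
from dba_recyclebin
where type = 'TABLE'
order by droptime;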

How to (un)mark an Oracle table read-only for the owner?

In my Oracle instance I have a table. It existed just fine for many years without problems, I run thousands of queries per day on it (through my software), mostly selects and inserts, with rare (once-a-week) updates.
Today, a week after the last update, I ran an update against it and it failed with an ORA-00942: table or view does not exist.
I am the owner of that table. I'm pretty sure that database didn't change much during the week, certainly not this table.
I can select from it just fine: select * from table_x, but updates and inserts fail: insert into table_x select * from table_x where 1 = 0 with the weird ORA-00942.
Since I'm the owner, the usual visibility and privilege problems don't seem to apply, and googling, sadly, doesn't help. I'm sure I'm missing something really simple, so any suggestions are very welcome.
How did I make an Oracle table read-only (or invisible) for myself (the owner)?
It's partitioned (not sure if that helps). It's about 50GB in size, half of that indexes (not sure if that helps either).
I once ran into the same situation; according to the trace file and a little googling, the cause was a materialized view log associated with the master table.
Use the following command to drop the materialized view log:
DROP MATERIALIZED VIEW LOG ON <table_x>;
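To confirm that a materialized view log exists on the table before dropping it, you can check the data dictionary (substitute your table name):
select log_owner, master, log_table
from all_mview_logs
where master = 'TABLE_X';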

Oracle 11g Deleting large amount of data without generating archive logs

I need to delete a large amount of data from my database on a regular basis. The process generates a huge volume of archive logs. We had a database crash at one point because there was no storage space available at the archive destination. How can I avoid generating logs while I delete data?
The data to be deleted is already marked as inactive in the database. Application code ignores inactive data. I do not need the ability to rollback the operation.
I cannot partition the data in such a way that inactive data falls in one partition that can be dropped. I have to delete the data with delete statements.
I can ask DBAs to set certain configuration at table level/schema level/tablespace level/server level if needed.
I am using Oracle 11g.
What proportion of the data in the table would be deleted, and what volume? Are there any referential integrity constraints to manage, or is this table childless?
Depending on the answers, you might consider:
CREATE TABLE keep_data UNRECOVERABLE AS SELECT * FROM ... WHERE [keep condition]
Then drop the original table
Then rename keep_data to the original table name
Rebuild the indexes (again with UNRECOVERABLE to prevent redo), constraints, etc.
The problem with this approach is that it's a multi-step DDL process, which you will have a job to make fault-tolerant and reversible. A minimal sketch is below.
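A minimal sketch of the CTAS approach, assuming a hypothetical table my_table with a status column that marks the rows to keep:
create table keep_data unrecoverable as
  select * from my_table where status = 'ACTIVE';  -- hypothetical keep condition
drop table my_table;
alter table keep_data rename to my_table;
-- recreate indexes with NOLOGGING to minimize redo generation
create index my_table_ix on my_table (status) nologging;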
A safer option might be to use data-pump to:
Data-pump expdp to extract the "Keep" data
TRUNCATE the table
Data-pump impdp import of data from step 1, with direct-path
At this point I suggest you read the Oracle manual on Data Pump, particularly the section on Direct Path Loads to be sure this will work for you.
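Roughly, the Data Pump round trip could look like this from the command line (credentials, directory, file names, and the keep condition are all illustrative, and shell quoting of the QUERY parameter varies by OS):
expdp scott/tiger directory=DATA_PUMP_DIR dumpfile=keep.dmp logfile=keep_exp.log tables=MY_TABLE query='WHERE status = ''ACTIVE'''
then, in SQL*Plus:
TRUNCATE TABLE my_table;
and finally:
impdp scott/tiger directory=DATA_PUMP_DIR dumpfile=keep.dmp logfile=keep_imp.log tables=MY_TABLE table_exists_action=append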
My preferred option would be partitioning.
Of course, the best way would be TenG's solution (CTAS, then drop and rename the table), but it seems that's impossible for you.
Your only problem is the volume of archive logs and the database crash risk. In this case, maybe you could batch your delete statement (for example, 10,000 rows at a time).
Something like:
declare
  e number;
  f number;
begin
  select count(*) into e from myTable where [delete condition];
  f := trunc(e / 10000) + 1;
  for i in 1 .. f
  loop
    delete from myTable where [delete condition] and rownum <= 10000;
    commit;
    dbms_lock.sleep(600); -- pause so old archive logs can be purged, if possible
  end loop;
end;
/
After this operation, you should reorganize your table, which will surely be fragmented.
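One common way to reorganize it (table and index names are illustrative; a MOVE invalidates the table's indexes, so they must be rebuilt afterwards):
alter table myTable move;
alter index myTable_ix rebuild nologging;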
Alter the table to set NOLOGGING, delete the rows, then turn logging back on. Note, however, that NOLOGGING only affects direct-path operations; a conventional DELETE still generates redo regardless of the table's logging setting.

Verify an Oracle database rollback action is successful

How can I verify an Oracle database rollback action is successful? Can I use Number of rows in activity log and Number of rows in event log?
V$TRANSACTION does not contain historical information but it does contain information about all active transactions. In practice this is often enough to quickly and easily monitor rollbacks and estimate when they will complete.
Specifically the columns USED_UBLK and USED_UREC contain the number of UNDO blocks and records remaining. USED_UREC is not always the same as the number of rows; sometimes the number is higher because it includes index entries and sometimes the number is lower because it groups inserts together.
During a long rollback those numbers will decrease until they hit 0. No rows in the view implies that all transactions have successfully committed or rolled back. Below is a simple example.
create table table1(a number);
create index table1_idx on table1(a);
insert into table1 values(1);
insert into table1 values(1);
insert into table1 values(1);
select used_ublk, used_urec, ses_addr from v$transaction;
 USED_UBLK  USED_UREC SES_ADDR
---------- ---------- ----------------
         1          6 000007FF1C5A8EA0
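If you then roll back, the transaction disappears from the view, which is exactly what you would watch for during a long rollback (continuing the example above):
rollback;
select used_ublk, used_urec, ses_addr from v$transaction;
-- no rows selected: the rollback has completed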
Oracle LogMiner, which is part of Oracle Database, enables you to query online and archived redo log files through a SQL interface. Redo log files contain information about the history of activity on a database.
LogMiner Benefits
All changes made to user data or to the database dictionary are recorded in the Oracle redo log files so that database recovery operations can be performed.
Because LogMiner provides a well-defined, easy-to-use, and comprehensive relational interface to redo log files, it can be used as a powerful data audit tool, as well as a tool for sophisticated data analysis. The following list describes some key capabilities of LogMiner:
Pinpointing when a logical corruption to a database, such as errors made at the application level, may have begun. These might include errors such as those where the wrong rows were deleted because of incorrect values in a WHERE clause, rows were updated with incorrect values, the wrong index was dropped, and so forth. For example, a user application could mistakenly update a database to give all employees 100 percent salary increases rather than 10 percent increases, or a database administrator (DBA) could accidentally delete a critical system table. It is important to know exactly when an error was made so that you know when to initiate time-based or change-based recovery. This enables you to restore the database to the state it was in just before corruption. See Querying V$LOGMNR_CONTENTS Based on Column Values for details about how you can use LogMiner to accomplish this.
Determining what actions you would have to take to perform fine-grained recovery at the transaction level. If you fully understand and take into account existing dependencies, it may be possible to perform a table-specific undo operation to return the table to its original state. This is achieved by applying table-specific reconstructed SQL statements that LogMiner provides in the reverse order from which they were originally issued. See Scenario 1: Using LogMiner to Track Changes Made by a Specific User for an example. Normally you would have to restore the table to its previous state, and then apply an archived redo log file to roll it forward.
Performance tuning and capacity planning through trend analysis. You can determine which tables get the most updates and inserts. That information provides a historical perspective on disk access statistics, which can be used for tuning purposes. See Scenario 2: Using LogMiner to Calculate Table Access Statistics for an example.
Performing postauditing. LogMiner can be used to track any data manipulation language (DML) and data definition language (DDL) statements executed on the database, the order in which they were executed, and who executed them. (However, to use LogMiner for such a purpose, you need to have an idea when the event occurred so that you can specify the appropriate logs for analysis; otherwise you might have to mine a large number of redo log files, which can take a long time. Consider using LogMiner as a complementary activity to auditing database use. See the Oracle Database Administrator's Guide for information about database auditing.)
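A minimal LogMiner session, for reference (the log file path is illustrative; this uses the online catalog as the dictionary source):
begin
  dbms_logmnr.add_logfile(
    logfilename => '/u01/arch/arch_0001.log',  -- illustrative path
    options     => dbms_logmnr.new);
  dbms_logmnr.start_logmnr(
    options => dbms_logmnr.dict_from_online_catalog);
end;
/
select username, sql_redo, sql_undo from v$logmnr_contents;
exec dbms_logmnr.end_logmnr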
Enjoy.

How can I tell if a Materialized View in Oracle is being used?

We have some Materialized views in our Oracle 9i database that were created a long time ago, by a guy no longer working here. Is there an easy (or any) method to determine whether Oracle is using these views to serve queries? If they aren't being used any more, we'd like to get rid of them. But we don't want to discover after the fact that those views are the things that allow some random report to run in less than a few hours. The answer I'm dreaming of would be something like
SELECT last_used_date FROM dba_magic
WHERE materialized_view_name = 'peters_mview'
Even more awesome would be something that could tell me what actual SQL queries were using the materialized view. I realize I may have to settle for less.
If there is a solution that requires 10g, we are upgrading soon, so those answers would be useful also.
Oracle auditing can tell you this once it is configured as per the docs. Once configured, enable it with AUDIT SELECT ON {name of materialized view}. The audit trail will be in the AUD$ table in the SYS schema.
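For example, using the hypothetical name from the question:
audit select on peters_schema.peters_mview by access;
-- later, check whether any reads were recorded:
select username, timestamp, action_name
from dba_audit_trail
where obj_name = 'PETERS_MVIEW';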
One method other than auditing would be to read the v$segment_statistics view after one refresh and before the next to see whether there have been any reads. You'd also have to account for any automatic statistics collection jobs.
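Something along these lines (owner and object names are illustrative):
select statistic_name, value
from v$segment_statistics
where owner = 'PETERS_SCHEMA'
  and object_name = 'PETERS_MVIEW'
  and statistic_name in ('logical reads', 'physical reads');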
The V$SQLAREA view has two columns which help identify the queries executed by the database.
SQL_TEXT - VARCHAR2(1000) - First thousand characters of the SQL text for the current cursor
SQL_FULLTEXT - CLOB - All characters of the SQL text for the current cursor
We can use these columns to find the queries that use the said materialized views.
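A sketch, again using the hypothetical materialized view name from the question:
select sql_fulltext
from v$sqlarea
where upper(sql_text) like '%PETERS_MVIEW%';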
