SQL to limit the types of tables returned when connecting to Oracle Database in Power BI - oracle

I have a connection to an Oracle Database set up in Power BI.
This is working fine, apart from the fact that it brings back 9,500+ tables whose names start with "BIN".
Is there some SQL I can put in the SQL statement section when connecting to the Oracle Database that limits the tables returned, ignoring any table that begins with 'BIN'?

Tables starting with BIN$ are tables that have been dropped but not purged and are in Oracle's "recycle bin".
The simplest method of not showing them, if they are no longer required, is to PURGE (delete) the tables from the recycle bin; then you will not see them, as they will no longer exist.
You can use the following (see the Oracle documentation for PURGE):
PURGE TABLE "BIN$0+xyzabcdefghi123"; to get rid of an individual table with that name (or you can use the original name of the table).
PURGE TABLESPACE tablespace_name USER username; to get rid of all recycled tables in a tablespace belonging to a single user.
PURGE TABLESPACE tablespace_name; to get rid of all recycled tables in a tablespace.
PURGE RECYCLEBIN; to get rid of all of the current user's recycled tables.
PURGE DBA_RECYCLEBIN; to get rid of everything in the recycle bin (assuming you have SYSDBA privileges).
Before purging tables you should make sure that they are really not required as you would need to restore from backups to bring them back.
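If purging is not an option (or you want to check what is left afterwards), a catalog query along these lines lists only the non-recycled tables; this is a sketch of the filter the question asks about, run against the standard ALL_TABLES view:
SELECT owner, table_name
FROM all_tables
WHERE table_name NOT LIKE 'BIN$%'
ORDER BY owner, table_name;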

After TRUNCATE TABLE, how can I recover the data?

I need to recover data that was truncated from an Oracle table.
Is there any folder on Linux where the truncated data is stored?
Is there any table that stores information about a table after it has been truncated?
I am not a DBA.
If you have not backed up the table (for example, by using RMAN, EXPDP or EXP) or created a RESTORE POINT then your data is lost.
From the Oracle documentation:
Caution:
You cannot roll back a TRUNCATE TABLE statement, nor can you use a FLASHBACK TABLE statement to retrieve the contents of a table that has been truncated.
You can check if you have an RMAN backup by logging into RMAN (rather than into the database) and using the LIST command.
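For example, from the database server's shell (a minimal sketch; what gets listed depends entirely on your backup configuration):
rman target /
RMAN> LIST BACKUP SUMMARY;
RMAN> LIST BACKUP OF DATABASE;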
You can check if you have a restore point (from a database user with the appropriate permissions) using:
SELECT name,
guarantee_flashback_database,
pdb_restore_point,
clean_pdb_restore_point,
pdb_incarnation#,
storage_size
FROM v$restore_point;
You are looking for a restore point where guarantee_flashback_database is YES.
(Assuming that the RESTORE POINT was created after the table was created and populated.)
Note:
If you restore from a backup or to a restore point then all changes made since that backup or creating the restore point will be lost.
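If you do find a usable guaranteed restore point, the flashback itself looks roughly like this (a sketch only: it needs SYSDBA privileges, the database must be mounted, and the restore point name below is hypothetical):
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
FLASHBACK DATABASE TO RESTORE POINT before_truncate;
ALTER DATABASE OPEN RESETLOGS;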
To answer your additional questions:
Is there any folder on Linux where the truncated data is stored?
No.
Is there any table that stores information about a table after it has been truncated?
No.

Purging Oracle Unified Audit Trail doesn't cleanup lob data

I'm experiencing rapid growth in my SYSAUX tablespace. I have found that the majority of the space (27 GB) is being consumed by a LOBSEGMENT object in the AUDSYS schema. My research suggested that the Unified Audit Trail needed to be purged, so I went ahead and cleaned it up as it was really massive; however, the space has not been released from the LOBSEGMENT, and I'm wondering if there is a way to do this.
DB Version: Oracle Database 12c Release 12.1.0.1.0 - 64bit Production
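For reference, the purge I ran was roughly along these lines (a sketch of the standard DBMS_AUDIT_MGMT call for the unified trail):
BEGIN
  DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(
    audit_trail_type        => DBMS_AUDIT_MGMT.AUDIT_TRAIL_UNIFIED,
    use_last_arch_timestamp => FALSE); -- FALSE purges everything, not just archived records
END;
/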
I used the below to identify large objects in the system
select s.owner, s.segment_name, s.segment_type, s.tablespace_name, sum(s.BYTES) /1024/1024/1024 SIZE_GB
from DBA_SEGMENTS s
group by s.owner, s.segment_name, s.segment_type, s.tablespace_name;
From there I identified the table name associated with the largest segment with the below:
select * from dba_lobs where SEGMENT_NAME='SYS_LOB0000019764C00014$$';
The LOG_PIECE column of the AUDSYS.CLI_SWP$ea27aff$1$1 table was identified, but I cannot query the table directly. Even when connected as SYSDBA, when I try to query the table to find out what is in it, I get "ORA-00942: table or view does not exist". I also cannot find any reference to the table or column in any other views, procedures, synonyms, etc. in the DB, so I have no idea how to view its contents in order to figure out what it is.
When I look at the Unified Audit Trail, I can't find anything that would link to this column either.
After purging, I did another backup of the system in the hope that it might release the unused space, but the space is still being used; the purge did not clean it up.
Any ideas on (1) how to figure out what is in the table/column and (2) how to clean it up would really be appreciated, as I'm at a bit of a loss here.

Oracle Clear Cached Sequence

I have a very simple table that has an ID (generated by a sequence) and a NAME. I inserted a couple of rows, which got cached, but after a while I wanted to remove them because I wanted to redo my table, so I issued a couple of DELETE statements to remove all records (I don't have the privileges to do a TRUNCATE).
After deleting the old rows, I inserted a couple of other records, but I didn't bother resetting the sequence.
In PHP, when I SELECT everything in that table, I still get the old deleted rows; but in PL/SQL, when I SELECT from that table, it only shows me the new records.
Is the problematic cache on the PHP or Oracle side? If it's on the Oracle side, how do I clear it out?
Thanks!

What happened to my table in my oracle database?

I have a situation where yesterday my code was working OK, but today I find that it fails because a SQL query fails on my Oracle database. The query fails because the table it uses does not exist. I am no Oracle expert, so I am reaching out to you Oracle experts out there. Is there a way to see, in a log file or log table, when my table disappeared and who dropped it?
Thanks
Depending on how it was previously configured, one would hope that a production database has auditing turned on. Try:
select * from sys.AUD$
The audit table can log almost every user action including dropping tables or revoking grants but has to be configured.
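To check whether auditing is configured at all, you can inspect the audit_trail initialization parameter (a quick sanity check; a value of NONE means nothing is being recorded):
SELECT value FROM v$parameter WHERE name = 'audit_trail';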
Assuming you have the recyclebin turned on in your database, you might be able to restore the dropped table. As the user who owns the table, you can run this query:
select * from USER_RECYCLEBIN
or if you have SYS access you can check the query:
SELECT * from DBA_RECYCLEBIN;
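The recycle bin also records when each object was dropped, which answers the "when" part of your question (assuming the table actually went into the recycle bin rather than being dropped with PURGE):
SELECT owner, object_name, original_name, droptime FROM dba_recyclebin;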
Then, as the user who owns the table, run this FLASHBACK command to restore it:
FLASHBACK TABLE <your table name> TO BEFORE DROP;
If you get ORA-38305, you might have a tablespace issue; either run it as a different user or make sure the table is using a locally managed tablespace.

PostgreSQL temporary tables

I need to perform a query 2.5 million times. This query generates some rows, of which I need to take AVG(column) and then use that average to filter out all values below it. I then need to INSERT these filtered results into a table.
The only way to do such a thing with reasonable efficiency, seems to be by creating a TEMPORARY TABLE for each query-postmaster python-thread. I am just hoping these TEMPORARY TABLEs will not be persisted to hard drive (at all) and will remain in memory (RAM), unless they are out of working memory, of course.
I would like to know if a TEMPORARY TABLE will incur disk writes (which would interfere with the INSERTs, i.e. slow the whole process down).
Please note that, in Postgres, the default behaviour for temporary tables is that they are not automatically dropped at the end of a transaction, and their data persists on commit; see the ON COMMIT clause.
Temporary tables are, however, dropped at the end of a database session:
Temporary tables are automatically dropped at the end of a session, or
optionally at the end of the current transaction.
There are multiple considerations you have to take into account:
If you do want to explicitly DROP a temporary table at the end of a transaction, create it with the CREATE TEMPORARY TABLE ... ON COMMIT DROP syntax.
In the presence of connection pooling, a database session may span multiple client sessions; to avoid clashes in CREATE, you should drop your temporary tables, either prior to returning a connection to the pool (e.g. by doing everything inside a transaction and using the ON COMMIT DROP creation syntax), or on an as-needed basis (by preceding any CREATE TEMPORARY TABLE statement with a corresponding DROP TABLE IF EXISTS, which has the advantage of also working outside transactions, e.g. if the connection is used in auto-commit mode). Both patterns are sketched after this list.
While the temporary table is in use, how much of it will fit in memory before overflowing on to disk? See the temp_buffers option in postgresql.conf
Anything else I should worry about when working often with temp tables? A VACUUM is recommended after you have dropped temporary tables, to clean up any dead tuples from the catalog. Postgres will vacuum automatically for you every few minutes when using the default settings (autovacuum).
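Both creation patterns from the list above, sketched with a made-up table name:
-- Pattern 1: do everything in one transaction; the table vanishes on COMMIT
BEGIN;
CREATE TEMPORARY TABLE tmp_results (id integer, value double precision) ON COMMIT DROP;
-- ... populate and use tmp_results here ...
COMMIT;
-- Pattern 2: drop-if-exists before creating; also works in auto-commit mode
DROP TABLE IF EXISTS tmp_results;
CREATE TEMPORARY TABLE tmp_results (id integer, value double precision);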
Also, unrelated to your question (but possibly related to your project): keep in mind that, if you have to run queries against a temp table after you have populated it, it is a good idea to create appropriate indices and issue an ANALYZE on the temp table in question after you're done inserting into it. By default, the cost-based optimizer will assume that a newly created temp table has ~1000 rows, and this may result in poor performance should the temp table actually contain millions of rows.
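Putting that together with your AVG-then-filter workflow, a minimal sketch (all table and column names here are hypothetical):
BEGIN;
-- materialise one iteration's rows; the table disappears on COMMIT
CREATE TEMPORARY TABLE tmp_batch ON COMMIT DROP AS
    SELECT id, value FROM source_table WHERE batch_id = 42;
ANALYZE tmp_batch; -- give the planner real row counts before the filtering query
INSERT INTO results (id, value)
    SELECT id, value FROM tmp_batch
    WHERE value >= (SELECT AVG(value) FROM tmp_batch);
COMMIT;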
Temporary tables provide only one guarantee: they are dropped at the end of the session. For a small table, you'll probably have most of your data in the backing store. For a large table, I guarantee that data will be flushed to disk periodically as the database engine needs more working space for other requests.
EDIT:
If you're absolutely in need of RAM-only temporary tables, you can create a tablespace for your database on a RAM disk (/dev/shm works). This reduces the amount of disk I/O, but beware that it is currently not possible to do this without a physical disk write; the DB engine will flush the table list to stable storage when you create the temporary table.
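A sketch of that setup (the path and tablespace name are made up; the directory must already exist and be owned by the postgres OS user):
-- as a database superuser
CREATE TABLESPACE ramspace LOCATION '/dev/shm/pg_temp';
-- then point temporary objects at it, per session or in postgresql.conf
SET temp_tablespaces = 'ramspace';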
