ORACLE Table Loading Speed

This is a new issue that I haven't run into before.
I have a table that at one point contained over 100k records; it's an event log for a dev environment.
It took up to 10 seconds to load the table (simply clicking on it to view the data in the table).
I removed all but 30 rows and it still takes 7 seconds to load.
I'm using Toad, and it gives me a dialog box that says "Statement Processing..."
Any ideas?
The following are some select statements and how long they took:
select * from log;                      -- 21 rows in 10 sec
select * from log where id = 120000;    -- 1 row in 1 msec
select * from log where user = 35000;   -- 9 rows in 7 sec
The id is the PK; there is no index on the user field.
I also have a view containing all of these fields sitting on top of this table, and it runs just as slowly.

If you issue a "select * from event_log_table", then you are scanning the entire table with a full table scan. It has to scan through all allocated blocks to see whether there are rows in them. If your table once contained over 100K rows, then it has allocated at least enough space to hold those 100K+ rows. Please see: http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/schema.htm#sthref2100
Now if you delete rows, the space remains allocated to the table, and Oracle still has to scan all of that space. It works like a high water mark.
To reduce the high water mark, you can issue a TRUNCATE TABLE command, which resets the high water mark. But then you'll lose ALL rows.
And there is an option to shrink the space in the table. You can read about it and its preconditions here:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_3001.htm#sthref5117
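A minimal sketch of both options, assuming the table is called event_log_table as in the example above:
-- Option 1: remove ALL rows and reset the high water mark
TRUNCATE TABLE event_log_table;
-- Option 2: keep the rows and release the space below the high water mark
-- (needs an ASSM tablespace, and row movement must be enabled first)
ALTER TABLE event_log_table ENABLE ROW MOVEMENT;
ALTER TABLE event_log_table SHRINK SPACE CASCADE;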
Regards,
Rob.

I would understand this better if you had started off with a 100M-record table. But just in case, try gathering Oracle statistics. If that doesn't help, drop and recreate the indexes on that table.
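A minimal sketch of the stats run, assuming the table from the question is named LOG and lives in your own schema:
BEGIN
  -- refresh optimizer statistics for the table and (cascade) its indexes
  DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'LOG', cascade => TRUE);
END;
/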

Related

MySQL Workbench shows running query but query not in process list, eventually times out at 7200 seconds

Purpose: Remove duplicate records from large table.
My Process:
Create table 2 with 9 fields. No indexes. Same data_types per field as Table 1.
Insert those 9 fields, all records, into Table 2 from the existing Table 1.
Table 1 contains 71+ Million rows and 232 columns and many duplicate records.
No joins. No Where Clause.
Table 1 contains several indexes.
8 fields are required to get unique records.
I'm trying to set up a process to de-dup large tables, using dense_rank partitioning to identify the most recently entered duplicate. Thus, those 8 required fields from Table 1 plus the auto-increment from Table 1 are loaded into Table 2.
Version: 10.5.17 MariaDB
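A rough sketch of the Table 2 load described above, with hypothetical column names (id is Table 1's auto-increment; f1..f8 are the eight de-dup fields):
INSERT INTO table_2 (t1_id, f1, f2, f3, f4, f5, f6, f7, f8, dr)
SELECT id, f1, f2, f3, f4, f5, f6, f7, f8,
       DENSE_RANK() OVER (PARTITION BY f1, f2, f3, f4, f5, f6, f7, f8
                          ORDER BY id DESC) AS dr
FROM table_1;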
The next steps would be:
Create new Table 3 identical to table 1 but with no indexes.
Load all data from Table 1 into Table 3, joining Table 1 to Table 2 on the auto-increment fields, where the Table 2 Dense_Rank value = 1. This inserts ~17 million unique records (see the sketch after these steps).
Drop any existing Foreign_Keys related to Table 1
Truncate Table 1
Insert all records from Table 3 into Table 1
Nullify columns in related tables where the foreign key values in Table 1 no longer exist
re-create Foreign Keys that had been dropped.
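A rough sketch of the Table 3 load and the Table 1 refresh described above, using the same hypothetical names as before (foreign-key handling omitted):
-- keep only the most recently entered copy of each duplicate group
INSERT INTO table_3
SELECT t1.*
FROM table_1 AS t1
JOIN table_2 AS t2 ON t2.t1_id = t1.id
WHERE t2.dr = 1;

-- swap the de-duplicated data back into Table 1
TRUNCATE TABLE table_1;
INSERT INTO table_1 SELECT * FROM table_3;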
Creating a test instance of an existing system, I can accomplish everything I need to once - only the first time. But if I then drop Table 2 (before refreshing Table 1 as outlined immediately above), re-create it, and try to reload it, Workbench shows the query running until the 7200-second timeout.
While the insert into Table 2 query is running, opening a second instance of Workbench and selecting a count of records in Table 2 after 15 minutes gives me the 71+ million records I'm looking for, but Workbench keeps running until the timeout.
The query shows up in SHOW PROCESSLIST for those 15 minutes, but disappears around the 15-minute mark - presumably once all records are loaded.
I have tried running with timeouts set to 0 as well as 86,400 seconds (no read timeout and a 24-hour timeout, respectively), but the query still times out at 7200.0xx seconds, or 2 hours, every time.
The exact error message I get is: Error Code: 2013. Lost connection to MySQL server during query 7200.125 sec
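For reference, the server-side timeouts actually in effect can be listed with the query below; the Workbench read timeout is a separate client-side setting (typically under Edit > Preferences > SQL Editor):
SHOW VARIABLES LIKE '%timeout%';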
I have tried running the insert statement with COMMIT and without.
This is being done in a Test Instance set up for this development where I am the only user, and only a single table is in use during the insert process.
Following one idea I found online, I ran the following suggested query to identify locked tables, but got an error message that the table does not exist:
SELECT TRX_ID, TRX_REQUESTED_LOCK_ID, TRX_MYSQL_THREAD_ID, TRX_QUERY
FROM INNODB_TRX
and, of course, with only a single table being called by a single user in the system nothing should be locked.
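That error most likely just means the schema qualifier is missing - the view lives in information_schema - so the same query with the prefix should run:
SELECT TRX_ID, TRX_REQUESTED_LOCK_ID, TRX_MYSQL_THREAD_ID, TRX_QUERY
FROM information_schema.INNODB_TRX;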
As noted above, I can complete the entire process a single time, but when I try to run it again (stopping just before truncating Table 1 so I can start over), I am consistently unable to succeed: Table 2 never gets released after being loaded again.
The reason it is important for me to test a second iteration is that, once this process is successful, it will be applied to several database instances that were not set up just for testing this process; if it only works on a newly created database instance that has had no other processing performed, it may not be dependable.

Insert Into query running for 7+ hrs on a table having ~450+ million records

Database: Oracle 19c Standard Edition on AWS
We are trying to insert around 450 million records from Table A into Table B using the query below:
insert into table_a (col1, col2, col3, col4, ..., col31)
select col1, col2, col3, col4, ..., col31 from table_b;
This query has been executing for 7+ hrs.
We checked V$SESSION_LONGOPS around 6 hrs ago (03:30 am IST) and it was showing a remaining time of around 288 minutes, with ~17% completed.
When we check it now, it shows 280 minutes remaining, after 7+ hrs.
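For reference, the kind of V$SESSION_LONGOPS check described above might look like this (progress and estimated time remaining per long-running operation):
SELECT sid, opname, target, sofar, totalwork,
       ROUND(sofar / totalwork * 100, 1) AS pct_complete,
       time_remaining / 60 AS mins_remaining
FROM v$session_longops
WHERE totalwork > 0
  AND sofar < totalwork;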
We have also made all indexes on the table unusable.
The DBA team is also monitoring and doesn't see any resource crunch or tablespace issues.
Is there a way to improve this?
Thanks

Delete from table is very slow in oracle standard edition

Delete on a table in Oracle Standard Edition (no partitioning) gets slow over time.
Important info: I am working on Oracle Standard Edition, so the partitioning option is not available.
Detail:
I have one table with no constraints on it (no PK or any other key, trigger, or index, or anything else).
More than a million records get inserted into this table every 15 minutes using SQL*Loader.
We need to process each 15-minute batch of records every 15 minutes and, at the end of the process, delete any records older than 30 minutes, so that at any point in time there is no more than 30-40 minutes of data in the table.
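The purge step described above would look roughly like this, assuming a load-timestamp column (hypothetical table and column names):
-- remove everything older than 30 minutes, keeping only the recent window
DELETE FROM staging_events
WHERE load_ts < SYSDATE - INTERVAL '30' MINUTE;
COMMIT;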
Problem:
As time passes, due to the frequent insertion and deletion, response from the table gets slow.
Data extraction and deletion from the table take more time with every passing run.
After a while even a simple select query takes too long.
We can't truncate the table, as the data loader runs continuously and we may lose data if we truncate, and we don't have CREATE TABLE privileges to drop and recreate the table.
We have to process the data every 15 minutes and make it available downstream for further processing; it just keeps getting slower.
Kindly help me with the aforementioned situation.

Oracle Row Access Statistics

Here is a problem I am trying to solve:
Scenario:
An Oracle production database has a table with a large number of rows (~700 million),
accumulated over a period of, say, 10 years.
Requirement:
Partition it in such a way that one partition holds rows which are accessed or updated within a defined period of time, and another holds rows which are never retrieved or updated in that defined period of time.
Now, since this table has updated-timestamp columns, it is easy to find the rows that were updated.
So I want to know: are there any built-in row-level stats available which can give me this info about row access?
SCN could help, if you want to find row modification time.
SELECT SCN_TO_TIMESTAMP(ORA_ROWSCN), t.* FROM my_table t;
Note: ORA_ROWSCN depends on the table definition - it can be tracked at row level or block level. Also, Oracle only keeps the SCN-to-timestamp mappings for a limited period.
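A small illustration with a hypothetical table; row-level (rather than block-level) ORA_ROWSCN requires the table to be created with ROWDEPENDENCIES:
-- Without ROWDEPENDENCIES, all rows in a block share the SCN of the
-- block's last change, so the timestamp is only block-accurate.
CREATE TABLE my_table (
  id      NUMBER PRIMARY KEY,
  payload VARCHAR2(100)
) ROWDEPENDENCIES;

SELECT SCN_TO_TIMESTAMP(ORA_ROWSCN) AS approx_last_change, t.*
FROM my_table t;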

Deleting very large table records where id not in another table

I have one table, values, that has 80 million records, and another table, values_history, that has 250 million records.
I want to filter the values_history table and keep only the data whose id is present in the values table.
delete from values_history where id not in (select id from values);
This query takes such a long time that I have to abort the process.
Please suggest some ideas to speed up the process.
Can I delete the records in batches, like 1,000,000 at a time?
I extracted the required records and inserted them into a temp table; this took 2 hrs. After that I dropped the table, then inserted the extracted data back into the main table. The whole process took around 4 hrs, which is fine for me. I had dropped the foreign keys and all other constraints before that.
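A rough sketch of that keep-and-swap approach (Oracle syntax; hypothetical table names values_tab and values_history; truncating rather than dropping the original, with constraints handled separately as described):
-- 1) extract only the rows whose id still exists in the parent table
CREATE TABLE values_history_keep AS
  SELECT vh.*
  FROM values_history vh
  WHERE EXISTS (SELECT 1 FROM values_tab v WHERE v.id = vh.id);

-- 2) empty the original table (this also resets the high water mark)
TRUNCATE TABLE values_history;

-- 3) put the kept rows back with a direct-path insert
INSERT /*+ APPEND */ INTO values_history
SELECT * FROM values_history_keep;
COMMIT;

DROP TABLE values_history_keep;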
