How to optimize an ALTER (drop of a column) for an Oracle table that's HUGE? - oracle

Preconditions:
Table size: HUGE, 13 million records and still growing.
System: Oracle.
Problem:
Today we faced a serious problem: when we dropped a column from that table it took almost 20 minutes, a delay that is critical for our deployment of some modifications. Without going into more detail, is partitioning the table on that column a solution for a faster DROP of the column?
P.S. The method presented in https://oracle-base.com/articles/8i/dropping-columns, where the column is first dropped logically and then physically, is not an option in this case.
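For reference, a minimal sketch of the two-step approach ruled out above, using placeholder table and column names (not the asker's schema): the column is first marked unused, which is a quick dictionary-only operation, and the physical removal is deferred to a maintenance window.

-- Hypothetical names; step 1 is a fast, metadata-only logical drop.
ALTER TABLE big_table SET UNUSED COLUMN obsolete_col;

-- Step 2: physical removal later, with CHECKPOINT to limit undo/redo
-- generation while rewriting a very large table.
ALTER TABLE big_table DROP UNUSED COLUMNS CHECKPOINT 10000;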

Related

Oracle table partition on non partitioned table

I have a legacy application currently running on Oracle 11g, with a table that holds 3 million rows of data from 2003 to 2022, and the data is continuously growing. This table has 43 columns.
Because of this huge volume of data, the entire application is running slow.
The primary key of this table is used in most of the joins. All the columns store numbers except the creation date.
I was thinking of partitioning the table by year, or every 6 months, or every quarter.
As the data is growing every day, I should be able to create partitions automatically for upcoming years.
Which would be the better way to partition this table?
How do I partition an existing non-partitioned table?
Kindly provide some examples I can refer to.
Looking for the best advice.
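A hedged sketch of one option, interval (automatically created) range partitions by year; the table and column names below are placeholders, not the asker's 43-column schema. On 11g an existing non-partitioned table is typically converted online with DBMS_REDEFINITION, or offline by loading a partitioned copy and swapping names in a change window.

-- Placeholder table: yearly interval partitioning on the creation date.
-- Oracle creates new yearly partitions automatically as data arrives.
CREATE TABLE legacy_data_part (
  id            NUMBER NOT NULL PRIMARY KEY,
  creation_date DATE   NOT NULL,
  metric_value  NUMBER            -- stand-in for the remaining columns
)
PARTITION BY RANGE (creation_date)
INTERVAL (NUMTOYMINTERVAL(1, 'YEAR'))
(
  PARTITION p_hist VALUES LESS THAN (DATE '2004-01-01')
);

-- Offline conversion sketch: copy the data, then rename the tables.
INSERT /*+ APPEND */ INTO legacy_data_part
SELECT id, creation_date, metric_value FROM legacy_data;
COMMIT;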

Inserting data into temporary tables in PostgreSQL is significantly slower compared to Oracle

Our application supports multiple databases including Oracle and PostgreSQL. In several use-cases, multiple queries are run to fetch necessary data. The data obtained from one or more queries is filtered based on business logic, and the filtered data is then inserted into a temporary table using a parameterized INSERT statement. This temporary table is then joined with other tables in a subsequent query. We have noticed that the time taken to insert data into the temporary table increases linearly with the number of rows inserted when using PostgreSQL. This temporary table has only one varchar column, 15 bytes in size. Inserting 80 rows takes 16ms, 160 rows takes 32ms, 280 rows takes 63ms, and so on. The same operations with an Oracle database take about 1 ms for these inserts.
We are using PostgreSQL 10.4 with psqlODBC driver 10.03 version. We have configured temp_buffers (256MB), shared_buffers (8GB), work_mem (128MB) and maintenance_work_mem (512MB) parameters based on the guidelines provided in PostgreSQL documentation.
Are there any other configuration options we could try to improve the performance of temp table inserts in PostgreSQL database? Please suggest.
You haven't really identified the temporary table as the problem.
For example, below is a quick test of inserts to a 15 character (not the same as bytes of course) varchar column
=> CREATE TEMP TABLE tt (vc varchar(15));
CREATE TABLE
=> \timing on
Timing is on.
=> INSERT INTO tt SELECT to_char(i, '0000000000') FROM generate_series(1,100) i;
INSERT 0 100
Time: 0.758 ms
This is on my cheap, several years old laptop. Unless you are running your PostgreSQL database on a Raspberry Pi then I don't think temporary table speed is a problem for you.
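As a follow-up check along the same lines (reusing the tt table from the test above), one way to separate per-statement overhead from temp-table cost is to insert the questioner's largest batch in a single statement and compare it with issuing one parameterized INSERT per row from the application:

-- Insert 280 rows in one statement. If this is far faster than 280
-- separate parameterized INSERTs from the application (~63 ms reported),
-- the cost is per-statement round trips through the driver, not the
-- temporary table itself.
INSERT INTO tt SELECT to_char(i, '0000000000') FROM generate_series(1, 280) i;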

Oracle Row Access Statistics

Here is a problem I am trying to solve.
Scenario:
Oracle production database having a table with a large number of rows (~700 million)
Accumulated over a period of, say, 10 years
Requirement:
Partition it in such a way that one partition holds rows which are being accessed or updated within a "defined period of time", and another holds rows which are never retrieved or updated in that "defined period of time".
Now, since this table has updated-timestamp columns, it is easy to find out which rows are updated.
So I want to know: are there any built-in row-level statistics available which can give me this info about row access?
SCN could help, if you want to find row modification time.
SELECT SCN_TO_TIMESTAMP(ORA_ROWSCN), t.* FROM my_table t;
Note: ORA_ROWSCN depends heavily on the table definition, i.e. it can be tracked at row level or at block level. Another thing: there is a limit to how long Oracle keeps the SCN-to-timestamp mapping.
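A minimal sketch of the row-level versus block-level point, using a placeholder table (ROWDEPENDENCIES must be chosen when the table is created):

-- Placeholder table; ROWDEPENDENCIES stores an SCN per row instead of per block.
CREATE TABLE access_demo (
  id  NUMBER PRIMARY KEY,
  val VARCHAR2(30)
) ROWDEPENDENCIES;

-- With ROWDEPENDENCIES, ORA_ROWSCN approximates the last change to each row;
-- without it, unmodified rows in a changed block look recently modified too.
-- SCN_TO_TIMESTAMP only works for SCNs still within the retained mapping window.
SELECT SCN_TO_TIMESTAMP(ORA_ROWSCN) AS approx_mod_time, t.*
FROM   access_demo t;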

Microstrategy / Oracle - slow performance

We have a Microstrategy / Oracle setup which has a fact table with 50+ billion rows (that is 50,000,000,000+ rows).
The system performance is very unstable; sometimes it runs OK but at other times it is very slow, i.e. simple reports will take 20 minutes to run!
The weirdest part: if we add more constraints to a report (i.e. more WHERE clauses) that result in LESS data coming back, the report actually slows down further.
We are able to pick up the SQL from Microstrategy, and we find that the SQL itself runs quite slowly as well. However, since the SQL is generated by Microstrategy, we do not have much control over the SQL.
Any thoughts as to where we should look?
Look at the SQL and see if you can add any more useful indexes. Check that the query is using the indexes you think it should be.
Check that every column that is filtered on has an index.
Remember to update the statistics for all the tables involved: with tables this big it is very important.
Look at the query plan and check that there aren't table scans on large tables (you can accept them on small lookup tables); a sketch of checking the plan follows below.
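A hedged sketch of checking the plan for the SQL captured from Microstrategy; the statement and table names below are placeholders for the captured report SQL.

-- Paste the captured report SQL after EXPLAIN PLAN FOR; this is a stand-in query.
EXPLAIN PLAN FOR
SELECT COUNT(*)
FROM   fact_sales
WHERE  region_id = 42;

-- Look for TABLE ACCESS FULL on the 50-billion-row fact table in the output.
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);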
EnableDescribeParam=1 in ODBC driver
If your environment is like mine, then what I describe may help with your request; if not, it may help others. We too have a table like that, and after weeks of trying to add this index or that index, the ultimate solution was setting parallelism on the table and at the index level.
report runtime 25 mins
alter table TABLE_NAME parallel(degree 4 instances 4);
alter index INDEX_NAME parallel(degree 4 instances 4);
report runtime 6 secs.
There are criteria for a table to have parallelism set up on it, such as being larger than 1 GB, but play with the parallel degree to get the most optimal time.
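As an alternative to changing the table and index defaults, parallelism can also be requested per statement with a hint; a minimal sketch with placeholder object names:

-- Request a degree of 4 for this query only, leaving object defaults untouched.
SELECT /*+ PARALLEL(f, 4) */ COUNT(*)
FROM   fact_sales f
WHERE  f.sale_date >= DATE '2024-01-01';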

Slow query execution on an empty table (after deleting a large number of rows)

I have a table in an Oracle database with 15 fields.
This table had 3,500,000 rows inserted. I deleted them all:
delete
from table
After that, whenever I execute a select statement
I get a very slow response (7 sec) even though the table is empty.
I get a normal response only when I search
on an indexed field.
Why?
As Gritem says, you need to understand high water marks, etc.
If you do not want to truncate the table now (because fresh data has been inserted), use ALTER TABLE xyz SHRINK SPACE, documented here for 10g; a short sketch follows.
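A minimal sketch of that shrink approach, keeping the answer's placeholder table name xyz (the segment must live in an ASSM tablespace and row movement must be enabled first):

-- Allow Oracle to relocate rows, then compact the segment and lower the
-- high water mark so full scans stop reading empty blocks.
ALTER TABLE xyz ENABLE ROW MOVEMENT;
ALTER TABLE xyz SHRINK SPACE;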
Tom Kyte has a good explanation of this issue:
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:492636200346818072
It should help you understand deletes, truncates, and high watermarks etc.
In SQL, when you want to completely clear out a table, you should use TRUNCATE instead of DELETE. Say your table has 3.5 million rows in it and there is a unique identity column of bigint that increments for each row. Truncating the table will completely clear out the table and reset the identity counter to 0. Delete will not reset the counter and will continue at 3,500,001 when the next record is inserted. Truncate is also much faster than delete. Read the articles below to understand the differences.
Read this article that explains the difference between truncate and delete; there are times to use each one. Here is another article from an Oracle point of view.
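A minimal sketch of the difference in this context, using a placeholder table name:

-- DELETE removes the rows but leaves the high water mark in place, so
-- subsequent full scans still read every formerly used block.
DELETE FROM big_table;
COMMIT;

-- TRUNCATE is DDL: it deallocates the blocks and resets the high water
-- mark, so scanning the now-empty table is fast again (cannot be rolled back).
TRUNCATE TABLE big_table;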
