How does DBA_TAB_MODIFICATIONS work?
How long is the data for a table kept? I have tables with a timestamp
of Dec 2019.
What does it mean when a table is not in
DBA_TAB_MODIFICATIONS? Does it mean the table hasn't had any deletes,
inserts, or updates in some period? If so, for how long?
I have a schema with around 1000 tables, but only around 300 appear in
DBA_TAB_MODIFICATIONS.
DBA_TAB_MODIFICATIONS is used internally by Oracle to track how many inserts, updates and deletes have been done to a table or table partition since statistics were last gathered on it with dbms_stats.
Which version of Oracle are you using? After Oracle 9, rows are inserted into DBA_TAB_MODIFICATIONS automatically; before that you have to register a table for monitoring with ALTER TABLE ... MONITORING.
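For example, to get up-to-date figures you can flush the in-memory monitoring data first and then query the view (the schema name below is a placeholder):

exec dbms_stats.flush_database_monitoring_info;

SELECT table_name, inserts, updates, deletes, timestamp
FROM   dba_tab_modifications
WHERE  table_owner = 'MY_SCHEMA'
ORDER  BY timestamp;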
Our application supports multiple databases, including Oracle and PostgreSQL. In several use cases, multiple queries are run to fetch the necessary data. The data obtained from one or more queries is filtered based on business logic, and the filtered data is then inserted into a temporary table using a parameterized INSERT statement. This temporary table is then joined with other tables in a subsequent query. We have noticed that the time taken to insert data into the temporary table increases linearly with the number of rows on PostgreSQL. The temporary table has a single varchar column of size 15 bytes. Inserting 80 rows takes 16 ms, 160 rows takes 32 ms, 280 rows takes 63 ms, and so on. The same inserts take about 1 ms on Oracle.
We are using PostgreSQL 10.4 with psqlODBC driver version 10.03. We have configured the temp_buffers (256MB), shared_buffers (8GB), work_mem (128MB) and maintenance_work_mem (512MB) parameters based on the guidelines in the PostgreSQL documentation.
Are there any other configuration options we could try to improve the performance of temp table inserts in a PostgreSQL database? Please suggest.
You haven't really identified the temporary table as the problem.
For example, below is a quick test of inserts into a 15-character (not the same as bytes, of course) varchar column:
=> CREATE TEMP TABLE tt (vc varchar(15));
CREATE TABLE
=> \timing on
Timing is on.
=> INSERT INTO tt SELECT to_char(i, '0000000000') FROM generate_series(1,100) i;
INSERT 0 100
Time: 0.758 ms
This is on my cheap, several years old laptop. Unless you are running your PostgreSQL database on a Raspberry Pi then I don't think temporary table speed is a problem for you.
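The roughly 0.2 ms per row you measured smells like per-statement overhead (one network round trip for every parameterized INSERT) rather than slow writes to the temp table. If your driver and code allow it, one thing worth trying is batching many rows into a single multi-row INSERT; a sketch against the test table above:

INSERT INTO tt (vc) VALUES
    ('0000000001'),
    ('0000000002'),
    ('0000000003');  -- one round trip for all the rows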
I need to update a column (all rows) in a table that has 150 million records.
Creating a duplicate table with the updated values and dropping the old table would be the best way, but there is no disk space available to hold the duplicate table.
So how can I perform the update in less time? The table is partitioned.
I am using Oracle 12c.
The cleanest approach is NOT to update the table, but to create a new table containing the new column values. For instance, say I needed to replace a column called old_value with the max of some value; instead of updating old_table, one does:
CREATE TABLE new_table AS
SELECT foo, bar, MAX(old_value) OVER () AS old_value
FROM   old_table;
DROP TABLE old_table;
RENAME new_table TO old_table;
If you need even more speed, you can do this creation as a parallel query with NOLOGGING, generating very little redo and almost no undo. More details here: https://asktom.oracle.com/pls/asktom/f?p=100:11:0::NO::P11_QUESTION_ID:6407993912330
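A sketch of that variant, using the same placeholder names as above; note that after a NOLOGGING load the new data is not in the redo stream, so take a fresh backup afterwards:

CREATE TABLE new_table PARALLEL 8 NOLOGGING AS
SELECT /*+ PARALLEL(old_table, 8) */
       foo, bar, MAX(old_value) OVER () AS old_value
FROM   old_table;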
I am using Oracle9i (9.2). I have a table that must be repopulated daily. Every day at midnight the table is truncated and the new data is loaded, which takes about 10-20 minutes. The issue is that this table can't be down (locked): while the new data is being inserted, the previous day's data needs to remain available to a select procedure.
Edit - I am looking into transaction isolation levels. I just need some expert opinion.
Is this possible in Oracle?
How about using two tables? Have a "current" table that holds the previous day's data, and a second table that you load. Then, when you are ready, you can "swap" the two tables using a series of rename operations.
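A minimal sketch of the swap, assuming the tables are named daily_current and daily_load (both placeholders). Each RENAME is a near-instant DDL operation, so the window where the current name doesn't exist is tiny:

RENAME daily_current TO daily_old;
RENAME daily_load    TO daily_current;
RENAME daily_old     TO daily_load;  -- becomes the next load target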
Here is the problem I am trying to solve.
Scenario:
An Oracle production database has a table with a large number of rows (~700 million),
accumulated over a period of, say, 10 years.
Requirement:
Partition it in such a way that one partition holds the rows that have been accessed or updated within a "defined period of time", and the other holds the rows that were never retrieved or updated in that period.
Since this table has an updated-timestamp column, it is easy to find the rows that were updated.
So what I want to know is: is there any built-in row-level statistic that can give me this information about row access?
ORA_ROWSCN could help if you want to find row modification times.
SELECT scn_to_timestamp(ora_rowscn), t.* FROM my_table t;
Note: the granularity of ORA_ROWSCN depends on the table definition - it is tracked per block by default, or per row if the table was created with ROWDEPENDENCIES. Another thing: Oracle only keeps the SCN-to-timestamp mapping for a limited time, so scn_to_timestamp fails for older SCNs.
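A sketch of the row-level variant (table and column names are placeholders); ROWDEPENDENCIES can only be set when the table is created, and costs roughly 6 extra bytes per row:

CREATE TABLE my_table (
    id    NUMBER,
    value VARCHAR2(100)
) ROWDEPENDENCIES;

SELECT scn_to_timestamp(ora_rowscn) AS approx_mod_time, t.*
FROM   my_table t;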
One of the tables in my db has to be updated daily. A web server actively queries this table every 5 seconds. Now, I can't simply UPDATE the table rows because of some constraints, so I need to clear all the rows and repopulate the table. How can I safely do this without affecting the functioning of the web server?
The repopulation is done by another web service, isolated from the web server. I am using the Spring framework.
The table has approx. 170k rows and 5 columns.
Truncate and re-populate the table in a single transaction. I originally wrote that the truncate isn't visible to concurrent readers, who continue to see the old data; correction per @AlexanderEmelianov and the docs:
TRUNCATE is not MVCC-safe. After truncation, the table will appear empty to concurrent transactions, if they are using a snapshot taken before the truncation occurred. See Section 13.5 for more details.
So after the truncating transaction commits, concurrent transactions that started before the TRUNCATE will see the table as empty.
Any transaction attempting to write to the table after it's truncated will wait until the transaction doing the truncate either commits or rolls back.
BEGIN;
TRUNCATE TABLE my_table;
INSERT INTO my_table (blah) VALUES (blah), (blah), (blah);
COMMIT;
You can COPY instead of INSERT too. Anything in a normal transaction.
Even better, there's an optimisation in PostgreSQL that makes populating a table right after a truncate faster than populating it any other way.
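As far as I know, the optimisation being referred to is WAL skipping: with wal_level = minimal, rows loaded into a table that was truncated (or created) earlier in the same transaction don't need to be WAL-logged; the table file is simply synced at commit. A COPY-based variant of the same pattern (the file path is a placeholder):

BEGIN;
TRUNCATE TABLE my_table;
COPY my_table FROM '/path/to/data.csv' WITH (FORMAT csv);
COMMIT;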