I found a couple of queries against AWR that show tablespace usage growth over the last 12 months, but I can't find one for the overall database size over the last 12 months.
Does anyone have a query, or a clue how to get database space usage for the last 12 months from AWR?
The EM repository is totally broken, so there's no point trying to get it from there.
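For what it's worth, here is a minimal sketch of the kind of query that can approximate whole-database growth from AWR. It sums DBA_HIST_TBSPC_SPACE_USAGE per snapshot and keeps the largest value per month; it assumes a single-instance database and a uniform db_block_size of 8192 bytes, since that view reports sizes in blocks.
select to_char(end_time, 'YYYY-MM') year_and_month,
       round(max(used_bytes)/1024/1024/1024, 1) used_gb
from
(
    -- sum used blocks across all tablespaces for each snapshot
    select s.end_interval_time end_time,
           sum(u.tablespace_usedsize) * 8192 used_bytes
    from dba_hist_tbspc_space_usage u
    join dba_hist_snapshot s
      on s.snap_id = u.snap_id
     and s.dbid = u.dbid
    group by s.snap_id, s.end_interval_time
)
group by to_char(end_time, 'YYYY-MM')
order by year_and_month;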
Hi, I created an Oracle Flashback Archive with a retention of one month and enabled this archive on a few tables.
But when I execute a versions query like the ones below, I get the error "ORA-08181: specified number is not a valid system change number. ORA-06512: at "SYS.TIMESTAMP_TO_SCN"." And I am not getting this consistently: sometimes I can query as far back as 10 days, and for some tables I cannot query past 2 days.
select versions_starttime from tbl1 versions between timestamp minvalue and maxvalue
or
select versions_starttime from tbl1 versions between timestamp sysdate-2 and sysdate
We do have automatic undo management, undo retention is 24 hours, and retention guarantee is set. Many forums mention that we get this error when we try to look too far back, and as per the link below, the limit should be max(auto-tuned undo retention period, retention times of all flashback archives in the database).
https://docs.oracle.com/database/121/SQLRF/functions175.htm#SQLRF06325
Can someone help explain why we get this error even though the FDA retention is one month?
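A diagnostic sketch that may narrow this down: TIMESTAMP_TO_SCN relies on the internal SCN-to-time mapping in SYS.SMON_SCN_TIME, so checking how far back that mapping actually reaches shows whether a given timestamp can be converted at all (querying SYS objects needs suitable privileges).
-- oldest and newest points covered by the SCN/timestamp mapping
select min(time_dp) oldest_mapped_time,
       max(time_dp) newest_mapped_time
from sys.smon_scn_time;
If the timestamp in the VERSIONS BETWEEN clause falls before oldest_mapped_time, ORA-08181 is the expected failure; a query written as VERSIONS BETWEEN SCN MINVALUE AND MAXVALUE bypasses the timestamp conversion entirely.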
I'm running queries against a Vertica table with close to 500 columns and only 100 000 rows.
A simple query (like select avg(col1) from mytable) takes 10 seconds, as reported by the Vertica vsql client with the \timing command.
But when checking the query_requests.request_duration_ms column for this query, there's no sign of the 10 seconds: it reports less than 100 milliseconds.
The query_requests.start_timestamp column indicates that processing started 10 seconds after I actually executed the command.
The resource_acquisitions table shows no delay in resource acquisition, but its queue_entry_timestamp column also shows the queue entry occurring 10 seconds after I actually executed the command.
The same query run on the same data but on a table with only one column returns immediately. And since I'm running the queries directly on a Vertica node, I'm excluding any network latency issue.
It feels like Vertica is doing something before executing the query, that this is what takes most of the time, and that it is related to the number of columns in the table. Any idea what it could be, and what I could try to fix it?
I'm using Vertica 8, in a test environment with no load.
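One way to probe this, assuming the statement and table names from above: Vertica's PROFILE keyword records per-operator timings in v_monitor.execution_engine_profiles, which should confirm whether the 10 seconds are spent before execution rather than during it.
PROFILE select avg(col1) from mytable;
-- on an otherwise idle test system, the slowest operators of the last run:
select operator_name, counter_value/1000000.0 as seconds
from v_monitor.execution_engine_profiles
where counter_name = 'execution time (us)'
order by counter_value desc
limit 5;
If the per-operator times sum to well under 10 seconds, the missing time sits in the planning phase, before execution starts.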
I was running Vertica 8.1.0-1; it seems the issue was caused by a Vertica bug in the query planning phase that degraded performance. It was solved in versions >= 8.1.1:
https://my.vertica.com/docs/ReleaseNotes/8.1./Vertica_8.1.x_Release_Notes.htm
VER-53602 - Optimizer - This fix improves complex query performance during the query planning phase.
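To confirm which build a node is actually running before and after an upgrade, the server reports its full version string:
select version();
-- returns something like: Vertica Analytic Database v8.1.1-0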
In Oracle SQL Developer, I'm looking to write a WHERE clause that gives me the last 7 days of data. I can write that part myself fine, but the extra condition I need to add is that I only want results from the past 7 days up to my current time.
For example, if I query at 14:00 today, I would want it to return results for the past 7 days with data only up until 14:00, as opposed to the full day.
Is this possible?
SYSDATE is the Oracle built-in function that supplies the current date/time, accurate to 1 second. All you have to do is:
select *
from whatever
where whatever.datecol between sysdate - 7 and sysdate;
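For contrast, using the same placeholder table: truncating to midnight would return seven full calendar days rather than a rolling 7 x 24-hour window ending now, which is exactly what the question wants to avoid.
select *
from whatever
where whatever.datecol >= trunc(sysdate) - 7;  -- full days, from midnight 7 days ago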
I created multiple posts in the forum about the performance problem that I have, but now, after running some tests and gathering all the needed information, I'm creating this post.
I have performance issues with two big tables. Those tables are located in a remote Oracle database. I'm running the query:
insert into local_postgresql_table select * from oracle_remote_table;
The first table has 45M records and its size is 23 GB. Importing the data from the remote Oracle database takes 1 hour and 38 minutes. After that I create 13 regular indexes on the table, which takes 10 minutes per index, or 2 hours and 10 minutes in total.
The second table has 29M records and its size is 26 GB. Importing the data from the remote Oracle database takes 2 hours and 30 minutes. Creating the indexes takes 1 hour and 30 minutes (single-column indexes take about 5 minutes each, multi-column indexes about 11 minutes each).
These operations are very problematic for me and I'm searching for a way to improve their performance. The parameters I assigned:
min_parallel_relation_size = 200MB
max_parallel_workers_per_gather = 5
max_worker_processes = 8
effective_cache_size = 2500MB
work_mem = 16MB
maintenance_work_mem = 1500MB
shared_buffers = 2000MB
RAM : 5G
CPU CORES : 8
- I tried running select count(*) on the table in both Oracle and PostgreSQL; the running times are almost equal.
- Before importing the data I drop the indexes and the constraints.
- I tried copying a 23 GB file from the Oracle server to the PostgreSQL server, and it took 12 minutes.
Please advise: how can I proceed? How can I improve any part of this operation?
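One way to see where the time actually goes, assuming the remote table is exposed through a foreign data wrapper such as oracle_fdw: EXPLAIN (ANALYZE, BUFFERS) executes the INSERT for real and reports the cost of the foreign scan separately from the local write.
explain (analyze, buffers)
insert into local_postgresql_table
select * from oracle_remote_table;
-- the Foreign Scan node's "actual time" is the fetch cost over the FDW;
-- the gap up to the total runtime is the local insert cost.
Knowing which side dominates tells you whether to tune the remote fetch (e.g. the FDW's fetch/prefetch size) or the local write path.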
We have an application that gets busy during one month of the year. We have set the AWR retention period to 360 days to make sure we store the performance statistics for later analysis. Recently, we got a requirement to plan for a standby database, and for that we need to determine how many archive logs were generated during the busiest month (which was 6 months ago), so that we can calculate the bandwidth required between the primary and standby locations.
We cannot get the archive log details from v$log_history, as we no longer have information from that long ago. But since we have the AWR data, we can generate AWR reports; how do we find the archive log generation rate from them?
You can use DBA_HIST_SYSMETRIC_HISTORY to find the amount of redo generated. This should be good enough, although it won't give the exact number: there will be some extra redo that hasn't been archived yet, and the result may need to be multiplied to account for multiplexing.
select
    to_char(begin_time, 'YYYY-MM') year_and_month,
    round(sum(seconds*value)/1024/1024/1024, 1) gb_per_month
from
(
    select begin_time, (end_time - begin_time) * 24 * 60 * 60 seconds, value
    from dba_hist_sysmetric_history
    where metric_name = 'Redo Generated Per Sec'
)
group by to_char(begin_time, 'YYYY-MM')
order by year_and_month;
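Since bandwidth sizing is usually driven by the peak rate rather than the monthly total, a straightforward variant of the same query rebuckets it to an hourly grain:
select
    to_char(begin_time, 'YYYY-MM-DD HH24') hour_bucket,
    round(sum(seconds*value)/1024/1024, 1) mb_per_hour
from
(
    select begin_time, (end_time - begin_time) * 24 * 60 * 60 seconds, value
    from dba_hist_sysmetric_history
    where metric_name = 'Redo Generated Per Sec'
)
group by to_char(begin_time, 'YYYY-MM-DD HH24')
order by mb_per_hour desc;
The top rows give the peak hourly redo volume, which is the number to size the network link against.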