How to get archivelog generation figures from AWR reports/repository? - oracle

We have an application that gets busy during one month of the year. We have set the AWR repository retention to 360 days to make sure we keep the performance statistics for later analysis. Recently we got a requirement to plan for a standby database, and for that we need to determine how many archive logs were generated during the busiest month (which was 6 months ago) so that we can calculate the bandwidth required between the primary and standby sites.
We cannot get the archive log details from v$log_history because we no longer have information from that far back. Since we do have the AWR data, we can generate AWR reports, but how do we find the archive log generation rate from them?

You can use DBA_HIST_SYSMETRIC_HISTORY to find the amount of redo generated. This should be good enough, although it won't give the exact number: there will be some extra redo that hasn't been archived yet, and the number may need to be multiplied to account for multiplexing.
select
to_char(begin_time, 'YYYY-MM') year_and_month,
round(sum(seconds*value)/1024/1024/1024, 1) gb_per_month
from
(
select begin_time, (end_time - begin_time) * 24 * 60 * 60 seconds, value
from dba_hist_sysmetric_history
where metric_name = 'Redo Generated Per Sec'
)
group by to_char(begin_time, 'YYYY-MM')
order by year_and_month;
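Since the goal is to size the bandwidth to the standby site, the peak rate is often more useful than the monthly total. A sketch along the same lines, reusing the same view to find the highest per-second redo rate for each day of the busiest month (the date range is an assumption to adjust):
select
    to_char(begin_time, 'YYYY-MM-DD') sample_day,
    round(max(value)/1024/1024, 2) peak_redo_mb_per_sec
from dba_hist_sysmetric_history
where metric_name = 'Redo Generated Per Sec'
and begin_time >= date '2015-12-01'   -- start of the assumed busiest month; adjust
and begin_time <  date '2016-01-01'
group by to_char(begin_time, 'YYYY-MM-DD')
order by sample_day;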

Related

Oracle Flashback Archive and ORA-08181: specified number is not a valid system change number

Hi, I created an Oracle Flashback Archive with a retention of 1 month and enabled this archive on a few of the tables.
But when I execute a versions query like the ones below, I get the error "ORA-08181: specified number is not a valid system change number. ORA-06512: at "SYS.TIMESTAMP_TO_SCN"." And I am not getting this consistently: sometimes I can query as far back as 10 days, and for some tables I cannot query past 2 days.
select versions_starttime from tbl1 versions between timestamp minvalue and maxvalue
or
select versions_starttime from tbl1 versions between timestamp sysdate-2 and sysdate
We do have automatic undo management, undo retention is 24 hours, and retention guarantee is set. Many forums mention that this error occurs when you try to look too far back, and according to the link below the limit should be max(auto-tuned undo retention period, retention times of all flashback archives in the database).
https://docs.oracle.com/database/121/SQLRF/functions175.htm#SQLRF06325
Can someone help explain why we get this error even though the FDA retention is one month?

Best way to create tables for huge data using Oracle

Functional requirement
We basically work with devices. Each device, roughly speaking, has a unique identifier, an IP address, and a type.
I have a routine that pings all devices that have an IP address.
This routine is nothing more than a C# console application, which runs every 3 minutes and tries to ping the IP address of each device.
I need to store the result of the ping in the database, along with the date of the check (regardless of the result of the ping).
Then we get to the technical side.
Technical part:
Assuming my ping and database structuring process is ready as of 01/06/2016, I need to do two things:
Daily extraction
Extraction in real time (last 24 hours)
Both should return the same thing:
Devices that have been unavailable for more than 24 hours.
Devices that have been unavailable for more than 7 days.
A device is considered unavailable if it was pinged AND did not respond.
A device is considered available if it was pinged AND responded successfully.
What I have today, and it works very badly:
A table with the following structure:
create table history (id_device number, response number, check_date date);
This table already has a large amount of data (60 million rows now, and the trend is for it to keep growing exponentially).
Here are the questions:
How do I achieve these objectives without running into slow queries?
How do I create a table structure that is prepared to receive millions / billions of records within my corporate world?
Partition the table based on the date.
For the partitioning strategy, consider performance vs. maintenance.
For easy maintenance, use automatic INTERVAL partitions by month or week.
You can even do it by day, or manually pre-define 2-day intervals.
Your query only needs 2 calendar days.
select id_device,
       min(case when response is null then 'N' else 'Y' end),
       max(case when response is not null then check_date end)
from history
where check_date > sysdate - 1
group by id_device
having min(case when response is null then 'N' else 'Y' end) = 'N'
and sysdate - max(case when response is not null then check_date end) > ?;
If you write a default value instead of NULL for missing responses, you may try building it as an index-organized table.
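A minimal sketch of that idea, assuming response gets a NOT NULL default so every ping produces a row (the constraint name and the 0 convention are made up):
create table history (
    id_device  number not null,
    check_date date   not null,
    response   number default 0 not null,  -- assumed convention: 0 = no response
    constraint history_pk primary key (id_device, check_date)
)
organization index;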
You need to read about Oracle partitioning.
This statement will create your HISTORY table partitioned by calendar day.
create table history (id_device number, response number, check_date date)
PARTITION BY RANGE (check_date)
INTERVAL (NUMTODSINTERVAL(1, 'DAY'))
( PARTITION p0 VALUES LESS THAN (TO_DATE('24-05-2016', 'DD-MM-YYYY')),
  PARTITION p1 VALUES LESS THAN (TO_DATE('25-05-2016', 'DD-MM-YYYY')) );
All your old data will be in the P0 partition.
Starting 24-05-2016, a new partition will be created automatically each day.
HISTORY is now a single logical object, but physically it is a collection of identical tables stacked on top of each other.
Because each partition's data is stored separately, when a query asks for one day's worth of data, only a single partition needs to be scanned.
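To verify that a query really touches only the recent partitions, you can check its execution plan; a rough sketch (the plan output varies by version):
explain plan for
select id_device, response
from   history
where  check_date > sysdate - 1;

select * from table(dbms_xplan.display);
-- Look for PARTITION RANGE SINGLE / ITERATOR with a narrow Pstart/Pstop range,
-- which shows that only the relevant daily partitions are scanned.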

Cassandra timing out when queried for a key that has over 10,000 rows, even after setting a timeout of 10 sec

I'm using DataStax Community v 2.1.2-1 (AMI v 2.5) with preinstalled default settings.
And I have this table:
CREATE TABLE notificationstore.note (
user_id text,
real_time timestamp,
insert_time timeuuid,
read boolean,
PRIMARY KEY (user_id, real_time, insert_time))
WITH CLUSTERING ORDER BY (real_time DESC, insert_time ASC)
AND bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND default_time_to_live = 20160;
The other configurations are:
I have 2 nodes on m3.large with 1 x 32 (SSD).
I'm facing timeouts on this particular table even with consistency set to ONE.
I increased the heap space to 3 GB [RAM size of 8 GB].
I increased the read timeout to 10 seconds.
select count (*) from note where user_id = 'xxx' limit 2; // errors={}, last_host=127.0.0.1.
I am wondering if the problem could be with the time to live, or whether there is any other configuration or tuning that matters here.
The data in the database is pretty small.
Also, this problem does not occur as soon as you insert; it happens after some time (more than 6 hours).
Thanks.
[Copying my answer from here because it's the same environment/problem: amazon ec2 - Cassandra Timing out because of TTL expiration.]
You're running into a problem where the number of tombstones (deleted values) is passing a threshold, and then timing out.
You can see this if you turn on tracing and then try your select statement, for example:
cqlsh> tracing on;
cqlsh> select count(*) from test.simple;
activity | timestamp | source | source_elapsed
---------------------------------------------------------------------------------+--------------+--------------+----------------
...snip...
Scanned over 100000 tombstones; query aborted (see tombstone_failure_threshold) | 23:36:59,324 | 172.31.0.85 | 123932
Scanned 1 rows and matched 1 | 23:36:59,325 | 172.31.0.85 | 124575
Timed out; received 0 of 1 responses for range 2 of 4 | 23:37:09,200 | 172.31.13.33 | 10002216
You're kind of running into an anti-pattern for Cassandra where data is stored for just a short time before being deleted. There are a few options for handling this better, including revisiting your data model if needed. Here are some resources:
The cassandra.yaml configuration file - See section on tombstone settings
Cassandra anti-patterns: Queues and queue-like datasets
About deletes
For your sample problem, I tried lowering the gc_grace_seconds setting to 300 (5 minutes). That causes the tombstones to be cleaned up more frequently than the default 10 days, but that may or may not be appropriate for your application. Read up on the implications of deletes and adjust as needed.
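If that trade-off suits your workload, the setting can be changed per table; for the table above it would look something like this (300 is just the value tried here, not a general recommendation):
ALTER TABLE notificationstore.note WITH gc_grace_seconds = 300;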

Send mail when transaction time is behind system time on UNIX

I have a query to run on Oracle that gives as output sid, serial#, transaction start time, sql_id, and transaction id.
What I need to do is: whenever a transaction's start time is more than 1 hour behind the system time, I need to run another query with that sql_id and send the result as an email.
How do I take this time output from Oracle SQL and compare it with the system time?
I need to automate this process and add it to the cron on UNIX.
Please help!
The function SYSDATE returns the current system date. When you subtract two dates, you get a difference measured in days. Multiply by 24 and you get a difference in terms of hours.
SELECT *
FROM v$transaction
WHERE (sysdate - start_date)*24 > 1
will give you the transactions that started more than 1 hour ago. You can also use interval arithmetic if you find that clearer.
SELECT *
FROM v$transaction
WHERE sysdate - interval '1' hour > start_date
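To pull the SQL text to include in the email, a rough sketch is to tie the long-running transaction back to its session and then to v$sql; note that v$session.sql_id is the statement the session is currently running, which may not be the one that opened the transaction:
select s.sid,
       s.serial#,
       t.start_date,
       s.sql_id,
       t.xid,
       q.sql_text
from   v$transaction t
       join v$session s on s.saddr = t.ses_addr
       left join v$sql q on q.sql_id = s.sql_id and q.child_number = 0
where  (sysdate - t.start_date) * 24 > 1;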

Oracle 10g - Determine the average number of concurrent connections

Is it possible to determine the average number of concurrent connections on a large 10g database installation?
Any ideas?
This is probably more of a ServerFault question.
On a basic level, you could do this by regularly querying v$session to count the current sessions, storing that number somewhere, and averaging it over time.
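A minimal do-it-yourself sketch of that idea using a DBMS_SCHEDULER job (available from 10g); the table and job names are made up, and the job owner needs SELECT granted directly on v_$session:
create table session_samples (
    sample_time   date default sysdate,
    session_count number
);

begin
    dbms_scheduler.create_job(
        job_name        => 'SAMPLE_SESSIONS',
        job_type        => 'PLSQL_BLOCK',
        job_action      => 'begin
                              insert into session_samples (session_count)
                              select count(*) from v$session;
                              commit;
                            end;',
        repeat_interval => 'FREQ=MINUTELY;INTERVAL=5',
        enabled         => true);
end;
/

-- Average over the collected samples, e.g. for the last 7 days:
select round(avg(session_count), 1) as avg_sessions
from session_samples
where sample_time > sysdate - 7;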
But there are already good utilities available to help with this. Look into STATSPACK. Then look at the scripts shown here to get you started.
Alternatively you could install a commercial monitoring application like Spotlight on Oracle.
If you have Oracle Enterprise Manager set up you can create a User Defined Metric which records SELECT COUNT(*) FROM V$SESSION. Select Related Links -> User Defined Metrics to set up a new User Defined Metric. Once it collects some data you can get the data out in raw form or it will do some basic graphing for you. As a bonus you can also set up alerting if you want to be e-mailed when the metric reaches a certain value.
The tricky bit is recording the connections. Oracle doesn't do this by default, so if you haven't got anything in place then you won't have a historical record.
The easiest way to start recording connections is with Oracle's built in audit functionality. It's as simple as
audit session
/
We can see the records of each connection in a view called dba_audit_session.
Now what? The following query uses a common table expression to generate a range of datetime values spanning 8th July 2009 in five-minute chunks. The output of the CTE is joined to the audit view for that date; a count is calculated for each connection that spans a five-minute increment.
with t as
( select to_date('08-JUL-2009') + ((level-1) * (300/86400)) as five_mins
from dual connect by level <= 288)
select to_char(t.five_mins, 'HH24:MI') as five_mins
, sum(case when t.five_mins between timestamp and logoff_time
then 1
else 0 end) as connections
from t
, dba_audit_session ssn
where trunc(ssn.timestamp) = to_date('08-JUL-2009')
group by to_char(t.five_mins, 'HH24:MI')
order by five_mins
/
You can then use this query as the input into a query which calculates the average number of connections.
This is a fairly crude implementation: I chose five-minute increments for display purposes, but obviously the finer-grained the increment, the more accurate the measure. Be warned: if you make the increments too fine-grained and you have a lot of connections, the resultant cross join will take a long time to run!
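For example, one way to do that wrapping, shown as a sketch that restates the query above as named subquery blocks:
with t as
 ( select to_date('08-JUL-2009') + ((level-1) * (300/86400)) as five_mins
   from dual connect by level <= 288),
per_interval as
 ( select t.five_mins
        , sum(case when t.five_mins between ssn.timestamp and ssn.logoff_time
                   then 1
                   else 0 end) as connections
   from t
      , dba_audit_session ssn
   where trunc(ssn.timestamp) = to_date('08-JUL-2009')
   group by t.five_mins )
select round(avg(connections), 1) as avg_concurrent_connections
from per_interval
/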
