We have a clustered database with two nodes, and my objective is to find out the size of the database. Could you please give me a script to estimate it?
A good script: go to the DBA, hand over a few beers, and you will get what you want. If that does not help, check the v$datafile, v$tempfile and v$log views. They will give you all the data you need, provided you have access to them, in which case you probably are the DBA.
select sum(bytes)/1024/1024 MB from
( select sum(bytes) bytes from v$datafile
  union all  -- union all, not union: plain UNION would merge component sums that happen to be equal
  select sum(bytes) from v$tempfile
  union all
  select sum(bytes * members) from v$log
)
/
I hope this helps.
select a.data_size+b.temp_size+c.redo_size+d.controlfile_size "total_size in GB"
from ( select sum(bytes)/1024/1024/1024 data_size
from dba_data_files) a,
( select nvl(sum(bytes),0)/1024/1024/1024 temp_size
from dba_temp_files ) b,
( select sum(bytes)/1024/1024/1024 redo_size
from sys.v_$log ) c,
( select sum(BLOCK_SIZE*FILE_SIZE_BLKS)/1024/1024/1024 controlfile_size
from v$controlfile) d
Use the code below to get the DB size. Yes, it's the same as above, but you can put it in a nice PL/SQL script to run against different databases.
SET SERVEROUTPUT ON
DECLARE
  ddf       NUMBER := 0;  -- data files, GB
  dtf       NUMBER := 0;  -- temp files, GB
  log_bytes NUMBER := 0;  -- redo logs, GB
  total     NUMBER := 0;
BEGIN
  SELECT SUM(bytes) / POWER(1024, 3) INTO ddf FROM dba_data_files;
  SELECT SUM(bytes) / POWER(1024, 3) INTO dtf FROM dba_temp_files;
  SELECT SUM(bytes) / POWER(1024, 3) INTO log_bytes FROM v$log;
  total := ROUND(ddf + dtf + log_bytes, 3);
  dbms_output.put_line('TOTAL DB Size is: ' || total || ' GB');
END;
/
http://techxploration.blogspot.com.au/2012/06/script-to-get-oracle-database-size.html
A slight modification to Juan's query to include members from v$log, as was pointed out. This is probably the most accurate version because it also includes the control file, which is part of the overall database size.
select a.data_size+b.temp_size+c.redo_size+d.controlfile_size "total_size in GB"
from ( select sum(bytes)/1024/1024/1024 data_size
from dba_data_files) a,
( select nvl(sum(bytes),0)/1024/1024/1024 temp_size
from dba_temp_files ) b,
( select sum(bytes*members)/1024/1024/1024 redo_size
from sys.v_$log ) c,
( select sum(BLOCK_SIZE*FILE_SIZE_BLKS)/1024/1024/1024 controlfile_size
from v$controlfile) d
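Since the question mentions a two-node cluster: V$LOG is populated from the control file and lists the redo log groups of every RAC thread, so the totals above should already cover both instances. A quick per-thread sanity check (a sketch, assuming you can query V$LOG):
select thread#,
       count(*) log_groups,
       sum(bytes * members) / 1024 / 1024 / 1024 redo_gb
from v$log
group by thread#
order by thread#
/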
I need a way in Oracle to query the size of the segments as well as the number of rows for all segments of type 'TABLE'.
Is there a way to combine the statement below (which calculates the size) with a count of the rows?
SELECT s.segment_type, (s.bytes / 1024 / 1024) mb, s.segment_name, l.*, s.*
FROM dba_segments s, dba_lobs l
where s.segment_name = l.segment_name(+)
and s.owner='TEST' order by bytes desc
If you have Oracle 12c (please always provide your database version info), you can use the new inline functions.
with function fRowCount(aOwner in varchar2, aTableName in varchar2) return number
is
lCount number;
begin
execute immediate 'select count(*) from ' || aOwner || '.' || aTableName into lCount;
return lCount;
end;
SELECT case when segment_type = 'TABLE' then fRowCount(s.owner, s.segment_name) else null end rowcount,
s.segment_type, (s.bytes / 1024 / 1024) mb, s.segment_name, l.*, s.*
FROM dba_segments s, dba_lobs l
where s.segment_name = l.segment_name(+)
and s.owner='TEST' order by bytes desc
Please be really, really careful with that. EXECUTE IMMEDIATE on concatenated identifiers, without bind variables or identifier validation, executed with DBA privileges, is a dangerous (SQL injection prone) combination.
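One way to reduce the risk is to validate and quote the identifiers before concatenating them. A minimal hardening sketch using the standard SYS.DBMS_ASSERT package (only the function body changes; the rest of the query stays the same):
with function fRowCount(aOwner in varchar2, aTableName in varchar2) return number
is
  lCount number;
begin
  -- enquote_name quotes each identifier and raises an error on malformed names
  execute immediate 'select count(*) from '
      || sys.dbms_assert.enquote_name(aOwner, false)
      || '.'
      || sys.dbms_assert.enquote_name(aTableName, false)
    into lCount;
  return lCount;
end;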
If you have gathered statistics at the schema level, the information you are seeking is already present in dba_tables.
You can gather the statistics of the schema using the following:
EXEC DBMS_STATS.GATHER_SCHEMA_STATS('TEST');
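To confirm the statistics are in place and fresh, a quick check (assuming you can read DBA_TABLES):
SELECT TABLE_NAME, NUM_ROWS, LAST_ANALYZED
FROM DBA_TABLES
WHERE OWNER = 'TEST';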
After that, you can use the following query to fetch the desired result:
SELECT
S.SEGMENT_TYPE,
( S.BYTES / 1024 / 1024 ) MB,
DT.NUM_ROWS,
S.SEGMENT_NAME,
L.*,
S.*
FROM
DBA_SEGMENTS S
LEFT JOIN DBA_LOBS L ON ( S.OWNER = L.OWNER AND S.SEGMENT_NAME = L.SEGMENT_NAME )
LEFT JOIN DBA_TABLES DT ON ( S.OWNER = DT.OWNER AND S.SEGMENT_NAME = DT.TABLE_NAME )
WHERE
S.OWNER = 'TEST'
ORDER BY
BYTES DESC
Note: Always use standard ANSI joins.
Cheers!!
I would like to understand why my pipelined function is not returning any results. Any ideas what I am doing wrong here?
Giving credit where due to the Original Author
CREATE OR REPLACE PACKAGE MANAGE_SPACE AS
--Author Tanmay
g_tblspce_threshold number := 80;
TYPE tblespaces_record IS RECORD(
tablespace_name VARCHAR2(30),
percentage_used NUMBER
);
TYPE tblespaces_table IS TABLE OF tblespaces_record;
function list_tblspcs_excd_thresld
return tblespaces_table PIPELINED ;
END MANAGE_SPACE;
Package body
CREATE OR REPLACE PACKAGE BODY MANAGE_SPACE
AS
--Author Tanmay
FUNCTION list_tblspcs_excd_thresld
RETURN tblespaces_table PIPELINED
AS
tblspaces tblespaces_record;
BEGIN
for x in (SELECT a.tablespace_name tablespace_name,
ROUND (((c.BYTES - NVL (b.BYTES, 0)) / c.BYTES) * 100) percentage_used
into tblspaces
FROM dba_tablespaces a,
(SELECT tablespace_name,
SUM (BYTES) BYTES
FROM dba_free_space
GROUP BY tablespace_name
) b,
(SELECT COUNT (1) DATAFILES,
SUM (BYTES) BYTES,
tablespace_name
FROM dba_data_files
GROUP BY tablespace_name
) c
WHERE b.tablespace_name(+) = a.tablespace_name
AND c.tablespace_name(+) = a.tablespace_name
AND ROUND (((c.BYTES - NVL (b.BYTES, 0)) / c.BYTES) * 100) > g_tblspce_threshold
ORDER BY NVL (((c.BYTES - NVL (b.BYTES, 0)) / c.BYTES), 0) DESC)
loop
PIPE ROW (tblspaces);
end loop;
return;
END list_tblspcs_excd_thresld;
END MANAGE_SPACE;
Executing this package does not return any rows:
SQL> select * from table(MANAGE_SPACE.list_tblspcs_excd_thresld())
2
SQL> /
TABLESPACE_NAME PERCENTAGE_USED
------------------------------ ---------------
What am I doing wrong?
You need to populate an array. The easiest way to do this is to use the BULK COLLECT syntax. Then loop round the array and pipe out rows.
Here is my revised version of your package.
CREATE OR REPLACE PACKAGE BODY MANAGE_SPACE
AS
FUNCTION list_tblspcs_excd_thresld
RETURN tblespaces_table PIPELINED
AS
-- collection type not record type
tblspaces tblespaces_table;
BEGIN
select *
-- populate the collection
bulk collect into tblspaces
from
(SELECT a.tablespace_name tablespace_name,
ROUND (((c.BYTES - NVL (b.BYTES, 0)) / c.BYTES) * 100) percentage_used
FROM dba_tablespaces a,
(SELECT tablespace_name,
SUM (BYTES) BYTES
FROM dba_free_space
GROUP BY tablespace_name
) b,
(SELECT COUNT (1) DATAFILES,
SUM (BYTES) BYTES,
tablespace_name
FROM dba_data_files
GROUP BY tablespace_name
) c
WHERE b.tablespace_name(+) = a.tablespace_name
AND c.tablespace_name(+) = a.tablespace_name
AND ROUND (((c.BYTES - NVL (b.BYTES, 0)) / c.BYTES) * 100) > g_tblspce_threshold
ORDER BY NVL (((c.BYTES - NVL (b.BYTES, 0)) / c.BYTES), 0) DESC);
-- loop round the collection
for i in 1..tblspaces.count()
loop
PIPE ROW (tblspaces(i));
end loop;
return;
END list_tblspcs_excd_thresld;
END MANAGE_SPACE;
Here is the output:
SQL> select * from table(MANAGE_SPACE.list_tblspcs_excd_thresld())
2 /
TABLESPACE_NAME PERCENTAGE_USED
------------------------------ ---------------
SYSTEM 100
SYSAUX 91
APEX_2614203650434107 90
EXAMPLE 87
USERS 82
SQL>
@a_horse_with_no_name makes a good point regarding memory consumption. Collections are held in the session's memory space, the PGA. In this particular case it is unlikely that a database would have enough tablespaces to blow the PGA limit. But for other queries which might have much larger result sets there is the BULK COLLECT ... LIMIT syntax, which lets us fetch the result set into our collection in manageable chunks.
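For reference, a minimal sketch of that LIMIT pattern applied to the same function (the tablespace query is condensed here, and the chunk size of 100 is arbitrary):
FUNCTION list_tblspcs_excd_thresld
RETURN tblespaces_table PIPELINED
AS
  -- the tablespace query from above, condensed; any equivalent query works here
  CURSOR c IS
    SELECT df.tablespace_name,
           ROUND((1 - NVL(fs.bytes, 0) / df.bytes) * 100) percentage_used
    FROM (SELECT tablespace_name, SUM(bytes) bytes
          FROM dba_data_files GROUP BY tablespace_name) df
    LEFT JOIN (SELECT tablespace_name, SUM(bytes) bytes
               FROM dba_free_space GROUP BY tablespace_name) fs
      ON fs.tablespace_name = df.tablespace_name
    WHERE ROUND((1 - NVL(fs.bytes, 0) / df.bytes) * 100) > g_tblspce_threshold;
  tblspaces tblespaces_table;
BEGIN
  OPEN c;
  LOOP
    -- fetch at most 100 rows at a time to cap PGA usage
    FETCH c BULK COLLECT INTO tblspaces LIMIT 100;
    FOR i IN 1 .. tblspaces.COUNT LOOP
      PIPE ROW (tblspaces(i));
    END LOOP;
    EXIT WHEN c%NOTFOUND;
  END LOOP;
  CLOSE c;
  RETURN;
END list_tblspcs_excd_thresld;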
This query will tell me whether a partition has been marked for compression:
Select *
From All_Tab_Partitions
It came up in discussion that a partition marked for compression can actually contain uncompressed data if it is loaded with FAST_LOAD.
I want to only issue compression commands for uncompressed partitions.
How can I find out if the data inside a partition is compressed or not?
Compression may be enabled on a partition but it is only used if data was created from a direct-path insert. Use sampling and the function
DBMS_COMPRESSION.GET_COMPRESSION_TYPE to estimate the percent of rows
compressed per partition.
Schema
create table compression_test
(
a number,
b varchar2(100)
) partition by range(a)
(
partition p_all_compressed values less than (2) compress,
partition p_none_compressed values less than (3) compress,
partition p_half_compressed values less than (4) compress
);
insert /*+ append */ into compression_test partition (p_all_compressed)
select 1, '0123456789' from dual connect by level <= 100000;
commit;
insert into compression_test partition (p_none_compressed)
select 2, '0123456789' from dual connect by level <= 100000;
commit;
insert /*+ append */ into compression_test partition (p_half_compressed)
select 3, '0123456789' from dual connect by level <= 50000;
commit;
insert into compression_test partition (p_half_compressed)
select 3, '0123456789' from dual connect by level <= 50000;
commit;
Estimate Code
--Find percent compressed from sampling each partition.
declare
v_percent_compressed number;
begin
--Loop through partitions.
for partitions in
(
select table_owner, table_name, partition_name
from dba_tab_partitions
--Enter owner and table_name here.
where table_owner = user
and table_name = 'COMPRESSION_TEST'
) loop
--Dynamic SQL to sample a partition and test for compression.
execute immediate '
select
round(sum(is_compressed)/count(is_compressed) * 100) percent_compressed
from
(
--Compression status of sampled rows.
--
--Numbers are based on constants from:
--docs.oracle.com/cd/E16655_01/appdev.121/e17602/d_compress.htm
--Assumption: Only basic compression is used.
--Assumption: Partitions are large enough for 0.1% sample size.
select
case dbms_compression.get_compression_type(
user,
'''||partitions.table_name||''',
rowid,
'''||partitions.partition_name||'''
)
when 4096 then 1
when 1 then 0
end is_compressed
from '||partitions.table_owner||'.'||partitions.table_name||'
partition ('||partitions.partition_name||') sample (0.1)
)
' into v_percent_compressed;
dbms_output.put_line(rpad(partitions.partition_name||':', 31, ' ')||
lpad(v_percent_compressed, 3, ' '));
end loop;
end;
/
Sample Output
P_ALL_COMPRESSED: 100
P_HALF_COMPRESSED: 55
P_NONE_COMPRESSED: 0
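Given the stated goal of only issuing compression commands for the uncompressed partitions, the follow-up action might look like this (a sketch against the test table above; MOVE rewrites the partition's blocks and marks its local index partitions UNUSABLE, so rebuild them afterwards):
-- Rewrite the partition's blocks with basic compression actually applied
alter table compression_test move partition p_none_compressed compress;
-- Then rebuild any invalidated local index partitions, e.g.:
-- alter index my_local_index rebuild partition p_none_compressed;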
I need to write a procedure that picks the records for a given range of rows, for example:
procedure test1
(
  start_ind number,
  end_ind number,
  p_out out sys_refcursor
) as
begin
  open p_out for
  select * from test where rownum between start_ind and end_ind;
end;
When we pass start_ind = 1 and end_ind = 10 it works. But when we change start_ind to 5, the query becomes
select * from test where rownum between 5 and 10;
and it fails, returning no output.
Please assist with how to fix this issue. Thanks!
ROWNUM is assigned to a row only as it passes the WHERE condition: the first candidate row is assigned ROWNUM 1, fails the BETWEEN 5 AND 10 test, so the counter never advances and no row can ever reach ROWNUM 5. You need something like this:
SELECT * FROM (
SELECT rownum AS rn, t.*
FROM (
SELECT t.*
FROM test t
ORDER BY t.whatever
)
WHERE ROWNUM <= 10
)
WHERE rn >= 5
You'll also want an order by clause in the inner select, or which rows you get will be undefined.
This article by Tom Kyte pretty much tells you everything you need to know: http://www.oracle.com/technetwork/issue-archive/2006/06-sep/o56asktom-086197.html
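As an aside, on Oracle 12c and later the row-limiting clause expresses this directly (a sketch for rows 5 through 10, assuming an ordering column named whatever):
SELECT *
FROM test
ORDER BY whatever
OFFSET 4 ROWS FETCH NEXT 6 ROWS ONLY;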
A variation on the same nested approach:
SELECT *
FROM (SELECT rownum AS rn, t.*
      FROM (SELECT *
            FROM MyTable
            ORDER BY id -- it's highly important to order by a primary or unique key of MyTable here
           ) t
      WHERE ROWNUM <= 10)
WHERE rn > 5
As a hint: typically we use stored procedures for data validation, access control, and extensive or complex processing that requires several SQL statements. Stored procedures may return result sets, i.e. the results of a SELECT statement. Such result sets can be processed using cursors, by other stored procedures, by associating a result-set locator, or by applications.
I think you are going to use the row number to fetch paged queries. Try to create a generic select query based on the idea mentioned above.
Two possibilities:
1) Your table is an index-organized table. So its data is sorted. You would select those first rows you want to avoid and based on that get the next rows you are looking for:
create or replace procedure get_records
(
vi_start_ind integer,
vi_end_ind integer,
vo_cursor out sys_refcursor
) as
begin
open vo_cursor for
select *
from test
where rownum <= vi_end_ind - vi_start_ind + 1
and rowid not in
(
select rowid
from test
where rownum < vi_start_ind
)
;
end;
2) Your table is not index-organized, which is normally the case. Then its records are not sorted. To get records m to n, you would have to tell the system what order you have in mind:
create or replace procedure get_records
(
  vi_start_ind number,
  vi_end_ind number,
  vo_cursor out sys_refcursor
) as
begin
  open vo_cursor for
  select *
  from
  (
    select *
    from test
    where rowid not in
    (
      select rid from
      (
        select rowid rid
        from test
        order by something
      )
      where rownum < vi_start_ind
    )
    order by something
  )
  -- apply the rownum cap only after the ordering, or you get arbitrary rows
  where rownum <= vi_end_ind - vi_start_ind + 1
  ;
end;
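For completeness, calling either version from SQL*Plus might look like this (a sketch):
variable rc refcursor
exec get_records(5, 10, :rc)
print rc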
All this said, think over what you want to achieve. If you want to use this procedure to read your table block by block, keep in mind that it will read the same data again and again: to know what rows 1,000,001 to 1,000,100 are, the DBMS must first read through a million rows.
Is there a way to make selecting random rows faster in Oracle with a table that has millions of rows? I tried to use sample(x) and dbms_random.value and it's taking a long time to run.
Thanks!
Using appropriate values of sample(x) is the fastest way you can. It's block-random and row-random within blocks, so if you only want one random row:
select dbms_rowid.rowid_relative_fno(rowid) as fileno,
dbms_rowid.rowid_block_number(rowid) as blockno,
dbms_rowid.rowid_row_number(rowid) as offset
from (select rowid from [my_big_table] sample (.01))
where rownum = 1
I'm using a subpartitioned table, and I'm getting pretty good randomness even grabbing multiple rows:
select dbms_rowid.rowid_relative_fno(rowid) as fileno,
dbms_rowid.rowid_block_number(rowid) as blockno,
dbms_rowid.rowid_row_number(rowid) as offset
from (select rowid from [my_big_table] sample (.01))
where rownum <= 5
FILENO BLOCKNO OFFSET
---------- ---------- ----------
152 2454936 11
152 2463140 32
152 2335208 2
152 2429207 23
152 2746125 28
I suspect you should probably tune your SAMPLE clause to use an appropriate sample size for what you're fetching.
Start with Adam's answer first, but if SAMPLE just isn't fast enough, even with the ROWNUM optimization, you can use block samples:
....FROM [table] SAMPLE BLOCK (0.01)
This applies the sampling at the block level instead of for each row. This does mean that it can skip large swathes of data from the table so the sample percent will be very rough. It's not unusual for a SAMPLE BLOCK with a low percentage to return zero rows.
Here's the same question on AskTom:
http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:6075151195522
If you know how big your table is, use sample block as described above. If you don't, you can modify the routine below to get however many rows you want.
Copied from: http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:6075151195522#56174726207861
create or replace function get_random_rowid
( table_name varchar2
) return urowid
as
sql_v varchar2(100);
urowid_t dbms_sql.urowid_table;
cursor_v integer;
status_v integer;
rows_v integer;
begin
for exp_v in -6..2 loop
exit when (urowid_t.count > 0);
if (exp_v < 2) then
sql_v := 'select rowid from ' || table_name
|| ' sample block (' || power(10, exp_v) || ')';
else
sql_v := 'select rowid from ' || table_name;
end if;
cursor_v := dbms_sql.open_cursor;
dbms_sql.parse(cursor_v, sql_v, dbms_sql.native);
dbms_sql.define_array(cursor_v, 1, urowid_t, 100, 0);
status_v := dbms_sql.execute(cursor_v);
loop
rows_v := dbms_sql.fetch_rows(cursor_v);
dbms_sql.column_value(cursor_v, 1, urowid_t);
exit when rows_v != 100;
end loop;
dbms_sql.close_cursor(cursor_v);
end loop;
if (urowid_t.count > 0) then
return urowid_t(trunc(dbms_random.value(0, urowid_t.count)));
end if;
return null;
exception when others then
if (dbms_sql.is_open(cursor_v)) then
dbms_sql.close_cursor(cursor_v);
end if;
raise;
end;
/
show errors
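Usage might look like this (a sketch with a hypothetical table; the returned UROWID compares cleanly against ROWID for ordinary heap tables):
select *
from my_big_table
where rowid = get_random_rowid('MY_BIG_TABLE');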
The solution below is not the exact answer to this question, but in many scenarios you select a row, use it for some purpose, and then update its status to "used" or "done" so that you do not select it again.
Solution:
The query below is useful, but if your table is large you will definitely face a performance problem with it (I just tried it):
SELECT * FROM
( SELECT * FROM table
ORDER BY dbms_random.value )
WHERE rownum = 1
So if you cap ROWNUM as below, you can work around the performance problem. The ROWNUM limit is a trade-off: you will always draw from the same 1000 rows. But if you get a row from those 1000 and update its status to "USED", you will almost always get a different row every time you query for "ACTIVE":
SELECT * FROM
( SELECT * FROM table
where rownum < 1000
and status = 'ACTIVE'
ORDER BY dbms_random.value )
WHERE rownum = 1
Update the row's status after selecting it. If you cannot update it, that means another transaction has already used it; then you should try to get a new row and update its status. By the way, the chance of two different transactions getting the same row is about 0.001, since ROWNUM is capped at 1000.
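A minimal sketch of that claim-a-row pattern (hypothetical table my_table with columns id and status):
declare
  v_id my_table.id%type;
begin
  -- pick a random candidate from the first 1000 active rows
  select id into v_id
  from ( select id
         from my_table
         where rownum < 1000
         and status = 'ACTIVE'
         order by dbms_random.value )
  where rownum = 1;

  -- claim it; if another session got there first, zero rows are updated
  update my_table
  set status = 'USED'
  where id = v_id
  and status = 'ACTIVE';

  if sql%rowcount = 0 then
    null; -- the row was taken by another transaction: pick a new row and retry
  end if;
  commit;
end;
/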
Someone said sample(x) is the fastest way. But for me, this method works slightly faster than the sample(x) method. It should take a fraction of a second (0.2 s in my case) no matter the size of the table. If it takes longer, the hints (--+ leading(e) use_nl(e t) rowid(t)) can help.
SELECT *
FROM My_User.My_Table
WHERE ROWID = (SELECT MAX(t.ROWID) KEEP(DENSE_RANK FIRST ORDER BY dbms_random.value)
FROM (SELECT o.Data_Object_Id,
e.Relative_Fno,
e.Block_Id + TRUNC(Dbms_Random.Value(0, e.Blocks)) AS Block_Id
FROM Dba_Extents e
JOIN Dba_Objects o ON o.Owner = e.Owner AND o.Object_Type = e.Segment_Type AND o.Object_Name = e.Segment_Name
WHERE e.Segment_Name = 'MY_TABLE'
AND(e.Segment_Type, e.Owner, e.Extent_Id) =
(SELECT MAX(e.Segment_Type) AS Segment_Type,
MAX(e.Owner) AS Owner,
MAX(e.Extent_Id) KEEP(DENSE_RANK FIRST ORDER BY Dbms_Random.Value) AS Extent_Id
FROM Dba_Extents e
WHERE e.Segment_Name = 'MY_TABLE'
AND e.Owner = 'MY_USER'
AND e.Segment_Type = 'TABLE')) e
JOIN My_User.My_Table t
ON t.Rowid BETWEEN Dbms_Rowid.Rowid_Create(1, Data_Object_Id, Relative_Fno, Block_Id, 0)
AND Dbms_Rowid.Rowid_Create(1, Data_Object_Id, Relative_Fno, Block_Id, 32767))
Version with retries when no rows returned:
WITH gen AS ((SELECT --+ inline leading(e) use_nl(e t) rowid(t)
MAX(t.ROWID) KEEP(DENSE_RANK FIRST ORDER BY dbms_random.value) Row_Id
FROM (SELECT o.Data_Object_Id,
e.Relative_Fno,
e.Block_Id + TRUNC(Dbms_Random.Value(0, e.Blocks)) AS Block_Id
FROM Dba_Extents e
JOIN Dba_Objects o ON o.Owner = e.Owner AND o.Object_Type = e.Segment_Type AND o.Object_Name = e.Segment_Name
WHERE e.Segment_Name = 'MY_TABLE'
AND(e.Segment_Type, e.Owner, e.Extent_Id) =
(SELECT MAX(e.Segment_Type) AS Segment_Type,
MAX(e.Owner) AS Owner,
MAX(e.Extent_Id) KEEP(DENSE_RANK FIRST ORDER BY Dbms_Random.Value) AS Extent_Id
FROM Dba_Extents e
WHERE e.Segment_Name = 'MY_TABLE'
AND e.Owner = 'MY_USER'
AND e.Segment_Type = 'TABLE')) e
JOIN MY_USER.MY_TABLE t ON t.ROWID BETWEEN Dbms_Rowid.Rowid_Create(1, Data_Object_Id, Relative_Fno, Block_Id, 0)
AND Dbms_Rowid.Rowid_Create(1, Data_Object_Id, Relative_Fno, Block_Id, 32767))),
Retries(Cnt, Row_Id) AS (SELECT 1, gen.Row_Id
FROM Dual
LEFT JOIN gen ON 1=1
UNION ALL
SELECT Cnt + 1, gen.Row_Id
FROM Retries
LEFT JOIN gen ON 1=1
WHERE Retries.Row_Id IS NULL AND Retries.Cnt < 10)
SELECT *
FROM MY_USER.MY_TABLE
WHERE ROWID = (SELECT Row_Id
FROM Retries
WHERE Row_Id IS NOT NULL)
Can you use pseudorandom rows?
select * from (
  select * from ... where ... order by ora_hash(rowid)
) where rownum < 100
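One caveat: ORA_HASH(rowid) is deterministic, so the query above returns the same "random" rows on every run. ORA_HASH takes an optional seed argument you can vary per execution (a sketch with a hypothetical my_table; 4294967295 is the default max_bucket):
variable seed number
exec :seed := trunc(dbms_random.value(0, 4294967295))

select * from (
  select * from my_table
  order by ora_hash(rowid, 4294967295, :seed)
) where rownum < 100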