Oracle - Insert a given number of rows with random data

I am currently doing some testing and need a large amount of data (around 1 million rows).
I am using the following table:
CREATE TABLE OrderTable(
OrderID INTEGER NOT NULL,
StaffID INTEGER,
TotalOrderValue DECIMAL(8,2),
CustomerID INTEGER);
ALTER TABLE OrderTable ADD CONSTRAINT OrderID_PK PRIMARY KEY (OrderID);
CREATE SEQUENCE seq_OrderTable
MINVALUE 1
START WITH 1
INCREMENT BY 1
CACHE 10000;
and want to randomly insert 1,000,000 rows into it with the following rules:
OrderID needs to be sequential (1, 2, 3, etc.)
StaffID needs to be a random number between 1 and 1000
CustomerID needs to be a random number between 1 and 10000
TotalOrderValue needs to be a random decimal value between 0.00 and 9999.99
Is this even possible to do? I know I could generate each of these values with an UPDATE statement like the one below, but I am not sure how to generate a million rows in one go.
Thanks for any help on this matter.
This is how I would randomly generate the number in an update:
UPDATE StaffTable SET DepartmentID = DBMS_RANDOM.value(low => 1, high => 5);

For testing purposes I created the table and populated it in one shot, with this query:
CREATE TABLE OrderTable(OrderID, StaffID, CustomerID, TotalOrderValue)
as (select level,
           ceil(dbms_random.value(0, 1000)),
           ceil(dbms_random.value(0, 10000)),
           round(dbms_random.value(0, 10000), 2)
    from dual
    connect by level <= 1000000)
/
A few notes - in Oracle it is better to use NUMBER as the data type; NUMBER(8,2) is the equivalent of DECIMAL(8,2). To populate this kind of table it is much more efficient to use the "hierarchical query without PRIOR" trick (the connect by level <= ... trick) to generate the order IDs.
If your table is created already, insert into OrderTable (select level ...) with the same subquery as in my code should work just as well, as sketched below. You may be better off adding the PK constraint only after you load the data though, so as not to slow things down.
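A minimal sketch of that INSERT, assuming the OrderTable from the question already exists (same subquery as the CTAS above):
-- Sketch: populate an existing table; level supplies the sequential OrderID.
insert into OrderTable (OrderID, StaffID, CustomerID, TotalOrderValue)
select level,
       ceil(dbms_random.value(0, 1000)),
       ceil(dbms_random.value(0, 10000)),
       round(dbms_random.value(0, 10000), 2)
from dual
connect by level <= 1000000;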
A small sample from the table created (total time to create the table on my cheap laptop - 1,000,000 rows - was 7.6 seconds):
SQL> select * from OrderTable where orderid between 500020 and 500030;
   ORDERID    STAFFID CUSTOMERID TOTALORDERVALUE
---------- ---------- ---------- ---------------
    500020        666        879         6068.63
    500021        189       6444         1323.82
    500022        533       2609         1847.21
    500023        409        895          207.88
    500024         80       2125         1314.13
    500025        247       3772         5081.62
    500026        922       9523         1160.38
    500027        818       5197         5009.02
    500028        393       6870         5067.81
    500029        358       4063          858.44
    500030        316       8134         3479.47


When to add a sequence of fields in a materialized view

Good evening,
I am trying to understand in which cases the SEQUENCE would be used, as in the example below, since the rowids would not always give me a single row with which to manage the changes.
Why consider a SEQUENCE of additional fields?
I would be grateful if you could clarify this doubt with an example.
Thank you so much,
Greetings.
CREATE MATERIALIZED VIEW LOG ON sales
WITH ROWID, SEQUENCE(amount_sold, time_id, prod_id)
INCLUDING NEW VALUES;
Now imagine that you want to create a materialized view that contains aggregates on this table. Because the materialized view log has been created with all referenced columns in the materialized view's defining query, the materialized view is fast refreshable. If DML is applied against the sales table, then the changes are reflected in the materialized view when the commit is issued.
CREATE MATERIALIZED VIEW sum_sales
REFRESH FAST ON COMMIT AS
SELECT s.time_id, COUNT(*) AS count_grp,
SUM(s.amount_sold) AS sum_dollar_sales,
COUNT(s.amount_sold) AS count_dollar_sales
FROM sales s
GROUP BY s.time_id;
Without using the sequence, you get the following error:
ORA-12033: cannot use filter columns from materialized view log on "ADMIN"."SALES"
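For reference, this is the kind of log definition that triggers the error - a sketch with the same filter columns as above, but without the SEQUENCE clause:
-- Sketch: a log with filter columns but no SEQUENCE; creating the
-- fast-refreshable aggregate MV sum_sales above then fails with ORA-12033.
CREATE MATERIALIZED VIEW LOG ON sales
WITH ROWID (amount_sold, time_id, prod_id)
INCLUDING NEW VALUES;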
So let me try to explain why, using a test case.
drop table sales;
create table sales (time_id number, prod_id number, amount_sold number);
CREATE MATERIALIZED VIEW LOG ON sales
WITH ROWID, SEQUENCE(amount_sold, time_id, prod_id)
INCLUDING NEW VALUES;
truncate table sales;
insert into sales values (1,1,23);
insert into sales values (1,2,23);
commit;
select time_id, sum(amount_sold) from sales group by time_id;
TIME_ID SUM(AMOUNT_SOLD)
------- ----------------
      1               46
Now imagine that you modify a row multiple times.
update sales set amount_sold = 55 where time_id = 1 and PROD_ID = 1;
update sales set amount_sold = 12 where time_id = 1 and PROD_ID = 1;
select time_id, sum(amount_sold) from sales group by time_id;
TIME_ID SUM(AMOUNT_SOLD)
------- ----------------
      1               35
Your new sum of amount_sold is 35. How can a fast refresh compute this without reading the value of row (1,2), given that you only modified (1,1)?
select * from MLOG$_SALES where DMLTYPE$$ != 'I' order by SEQUENCE$$;
AMOUNT_SOLD TIME_ID PROD_ID M_ROW$$            SEQUENCE$$ SNAPTIME$$           DMLTYPE$$ OLD_NEW$$ CHANGE_VECTOR$$ XID$$
----------- ------- ------- ------------------ ---------- -------------------- --------- --------- --------------- ----------------
         23       1       1 AAAwaSAAAAAAFpTAAA        105 4000-01-01T00:00:00Z U         U         CA==            4222223434942993
         55       1       1 AAAwaSAAAAAAFpTAAA        106 4000-01-01T00:00:00Z U         N         CA==            4222223434942993
         55       1       1 AAAwaSAAAAAAFpTAAA        107 4000-01-01T00:00:00Z U         U         CA==            4222223434942993
         12       1       1 AAAwaSAAAAAAFpTAAA        108 4000-01-01T00:00:00Z U         N         CA==            4222223434942993
So you can take the previous value, 46, and increment/decrement it using the old/new values, as follows:
select 46 - 23 + 55 - 55 + 12 as newval from dual;
NEWVAL
------
    35
You can also do the same thing for a delete.
With only the rowid this is not possible: to generate the new value 35 you would need to read an unmodified row, so you cannot do a fast refresh.
Hope this helps you understand in which cases the sequence is used.

Simple random sampling while pulling data from a warehouse (Oracle engine) using PROC SQL in SAS

I need to pull a humongous amount of data, say 600-700 variables, from different tables in a data warehouse. The dataset in its raw form will easily touch 150 GB (79 MM rows), and for my analysis I need only a million rows. How can I pull data using PROC SQL directly from the warehouse while doing simple random sampling on the rows?
The code below won't work, as RANUNI is not supported by Oracle:
proc sql outobs=1000000;
select * from connection to oracle(
select * from tbl1 order by ranuni(12345)
);
quit;
How do you propose I do it?
Use the DBMS_RANDOM Package to Sort Records, Then Use a Row-Limiting Clause to Restrict to the Desired Sample Size
The dbms_random.value function obtains a random number between 0 and 1 for each row in the table, and we sort in ascending order of that random value.
Here is how to produce the sample set you identified:
SELECT *
FROM
(
    SELECT *
    FROM tbl1
    ORDER BY dbms_random.value
)
FETCH FIRST 1000000 ROWS ONLY;
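Note that FETCH FIRST requires Oracle 12c or later. On an older release the same sample can be taken with ROWNUM applied outside the sorted inline view (a sketch):
-- Pre-12c variant: ROWNUM is applied after the inline view has been sorted.
SELECT *
FROM
(
    SELECT *
    FROM tbl1
    ORDER BY dbms_random.value
)
WHERE ROWNUM <= 1000000;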
To demonstrate with the sample schema table, emp, we sample 4 records:
SCOTT#DEV> SELECT
2 empno,
3 rnd_val
4 FROM
5 (
6 SELECT
7 empno,
8 dbms_random.value rnd_val
9 FROM
10 emp
11 ORDER BY rnd_val
12 )
13 FETCH FIRST 4 ROWS ONLY;
EMPNO RND_VAL
 7698 0.06857749035643605682648168347885993709
 7934 0.07529612360785920635181751566833986766
 7902 0.13618520865865754766175030040204331697
 7654 0.14056380246495282237607922497308953768
SCOTT#DEV> SELECT
2 empno,
3 rnd_val
4 FROM
5 (
6 SELECT
7 empno,
8 dbms_random.value rnd_val
9 FROM
10 emp
11 ORDER BY rnd_val
12 )
13 FETCH FIRST 4 ROWS ONLY;
EMPNO RND_VAL
 7839 0.00430658806761508024693197916281775492
 7499 0.02188116061148367312927392115186317884
 7782 0.10606515700372416131060633064729870016
 7788 0.27865276349549877512032787966777990909
With the example above, notice that the set of empno values returned changes significantly between executions of the SQL*Plus command.
The performance might be an issue with the row counts you are describing.
EDIT:
With table sizes on the order of 150 GB (79 MM rows), any sorting would be painful.
If the table had a surrogate key based on a sequence incremented by 1, we could take the approach of selecting every nth record based on the key.
e.g.
--scenario n = 3000
SELECT *
FROM
    tbl1
WHERE
    mod(table_id, 3000) = 0;
This approach would not use an index (unless a function based index is created), but at least we are not performing a sort on a data set of this size.
I performed an explain plan with a table that has close to 80 million records and it does perform a full table scan (the condition forces this without a function based index) but this looks tenable.
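If this every-nth-row sampling becomes a recurring task, the full scan can be avoided with a function-based index matching the predicate - a sketch with a hypothetical index name, assuming table_id is the surrogate key described above:
-- Sketch: a function-based index that lets the mod() predicate
-- be resolved from the index instead of a full table scan.
CREATE INDEX tbl1_mod3000_idx ON tbl1 (mod(table_id, 3000));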
None of the posted answers or comments solved my problem as such - they might have, but we have 87 MM rows.
In the end I got there with the help of SAS. Here is what I did, and it works. Thanks all!
libname dwh oracle path=<path> user=<username> password=<pwd>;
proc sql;
create table sample as
select
    <all the variables>, ranuni(<any arbitrary seed>)
from dwh.<all the tables>
<bunch of where conditions goes here>;
quit;

Dynamic column value to be set as next row's another column value in Oracle [closed]

I am new to Oracle and would like to form a query that produces the following output:
Id      CrLmt  Type Unit Price Amount Prev_bal NewBal
5-00001 100000 Sell  100   150  15000   100000  85000
               Buy    75   600  45000    85000 130000
               Buy    85   550  46750   130000 176750
               Sell   60  1000  60000   176750 116750
5-00002  90000 Sell  100   400  40000    90000  50000
               Buy   550   300 165000    50000 215000
               Sell  300  1000 300000   215000 -85000
My conditions are as follows:
ID and CrLmt form a combination, and the subsequent rows belong to that ID, CrLmt combination.
For every ID, CrLmt combination, CrLmt is assigned to the Prev_bal column of the first row; the rest of the rows are calculated.
Based on Buy/Sell in the Type column, the values in Amount and Prev_bal are added or subtracted, and the resultant value should be displayed in the (dynamic) NewBal column.
If the Type is "Sell" then the Amount should be subtracted from Prev_bal, and if the Type is "Buy" then the Amount should be added to Prev_bal; the resultant value should be displayed in the (dynamic) NewBal column of the corresponding row.
The NewBal value obtained in row 1 should be displayed in row 2's Prev_bal column for the 2nd row's calculation, and so on.
If any negative value occurs in the NewBal column, it needs to be carried forward into the next calculations.
I tried using the LAG function to get previous values, but I don't know how to get a dynamic column's (NewBal) values on the go.
Here is a little example that you will have to adapt to your current structure. You will need a date on your transactions for the ordering clause of the sum.
All you need is a running sum: adding it to the credit limit gives the new balance, and cutting it off one row earlier gives the old balance.
--TEST DATA
CREATE TABLE credit_limit ( id varchar2(10), crlmt number );
CREATE TABLE transactions (transaction_type varchar2(4), unit number, price number, amount number, crlmt_id varchar2(10), date_transaction date );
INSERT INTO credit_limit values ('5-00001',100000);
INSERT INTO credit_limit values ('5-00002',90000);
INSERT INTO transactions values ('Sell',100,150,15000,'5-00001',sysdate-4);
INSERT INTO transactions values ('Buy',75,600,45000,'5-00001',sysdate-3);
INSERT INTO transactions values ('Buy',85,550,46750,'5-00001',sysdate-2);
INSERT INTO transactions values ('Sell',60,1000,60000,'5-00001',sysdate-1);
INSERT INTO transactions values ('Sell',100,400,40000,'5-00002',sysdate-3);
INSERT INTO transactions values ('Buy',550,300,165000,'5-00002',sysdate-2);
INSERT INTO transactions values ('Sell',300,1000,300000,'5-00002',sysdate-1);
--The query
select cr.id, cr.crlmt, tr.transaction_type, tr.unit, tr.price, tr.amount,
NVL(cr.crlmt + SUM(tr.amount*decode(tr.transaction_type,'Sell',-1,'Buy',1))
OVER (partition by cr.id order by cr.id, tr.date_transaction
rows between unbounded preceding and 1 preceding ),Cr.crlmt) old_bal,
cr.crlmt + SUM(tr.amount*decode(tr.transaction_type,'Sell',-1,'Buy',1))
OVER (partition by cr.id order by cr.id, tr.date_transaction
rows between unbounded preceding and current row ) new_bal
from
credit_limit cr
JOIN
transactions tr
ON cr.id=tr.crlmt_id
order by cr.id, tr.date_transaction;
Result:
ID      CRLMT  TRAN UNI PRICE AMOUNT OLD_BAL NEW_BAL
5-00001 100000 Sell 100   150  15000  100000   85000
5-00001 100000 Buy   75   600  45000   85000  130000
5-00001 100000 Buy   85   550  46750  130000  176750
5-00001 100000 Sell  60  1000  60000  176750  116750
5-00002  90000 Sell 100   400  40000   90000   50000
5-00002  90000 Buy  550   300 165000   50000  215000
5-00002  90000 Sell 300  1000 300000  215000  -85000
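Since the question mentions LAG: once the signed running sum sits in an inline view, the old balance can equivalently be read from the previous row's new balance - a sketch reusing the tables above:
-- Sketch: LAG over the running balance; NVL supplies the credit limit
-- for the first row of each id.
select id, crlmt, transaction_type, unit, price, amount,
       NVL(LAG(new_bal) OVER (partition by id order by date_transaction), crlmt) old_bal,
       new_bal
from (
    select cr.id, cr.crlmt, tr.transaction_type, tr.unit, tr.price, tr.amount, tr.date_transaction,
           cr.crlmt + SUM(tr.amount*decode(tr.transaction_type,'Sell',-1,'Buy',1))
             OVER (partition by cr.id order by tr.date_transaction
                   rows between unbounded preceding and current row) new_bal
    from credit_limit cr
    join transactions tr on cr.id = tr.crlmt_id
)
order by id, date_transaction;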

Constant-time index for string column on Oracle database

I have an orders table. The table belongs to a multi-tenant application, so there are orders from several merchants in the same table. The table stores hundreds of millions of records. There are two relevant columns for this question:
MerchantID, an integer storing the merchant's unique ID
TransactionID, a string identifying the transaction
I want to know whether there is an efficient index to do the following:
Enforce a unique constraint on Transaction ID for each Merchant ID. The constraint should be enforced in constant time.
Do constant time queries involving exact matches on both columns (for instance, SELECT * FROM <table> WHERE TransactionID = 'ff089f89feaac87b98a' AND MerchantID = 24)
Further info:
I am using Oracle 11g. Maybe this Oracle article is relevant to my question?
I cannot change the column's data type.
Constant time means an index performing with O(1) time complexity, like a hashmap.
Hash clusters can provide O(1) access time, but not O(1) constraint enforcement time. However, in practice the constant access time of a hash cluster is worse than the O(log N) access time of a regular b-tree index. Also, clusters are more difficult to configure and do not scale well for some operations.
Create Hash Cluster
drop table orders_cluster;
drop cluster cluster1;
create cluster cluster1
(
MerchantID number,
TransactionID varchar2(20)
)
single table hashkeys 10000; --This number is important, choose wisely!
create table orders_cluster
(
id number,
MerchantID number,
TransactionID varchar2(20)
) cluster cluster1(merchantid, transactionid);
--Add 1 million rows. 20 seconds.
begin
for i in 1 .. 10 loop
insert into orders_cluster
select rownum + i * 100000, mod(level, 100)+ i * 100000, level
from dual connect by level <= 100000;
commit;
end loop;
end;
/
create unique index orders_cluster_idx on orders_cluster(merchantid, transactionid);
begin
dbms_stats.gather_table_stats(user, 'ORDERS_CLUSTER');
end;
/
Create Regular Table (For Comparison)
drop table orders_table;
create table orders_table
(
id number,
MerchantID number,
TransactionID varchar2(20)
) nologging;
--Add 1 million rows. 2 seconds.
begin
for i in 1 .. 10 loop
insert into orders_table
select rownum + i * 100000, mod(level, 100)+ i * 100000, level
from dual connect by level <= 100000;
commit;
end loop;
end;
/
create unique index orders_table_idx on orders_table(merchantid, transactionid);
begin
dbms_stats.gather_table_stats(user, 'ORDERS_TABLE');
end;
/
Trace Example
SQL*Plus Autotrace is a quick way to find the explain plan and track I/O activity per statement. The number of I/O requests is labeled as "consistent gets" and is a decent way of measuring the amount of work done. This code demonstrates how the numbers were generated for other sections. The queries often need to be run more than once to warm things up.
SQL> set autotrace on;
SQL> select * from orders_cluster where merchantid = 100001 and transactionid = '2';
no rows selected
Execution Plan
----------------------------------------------------------
Plan hash value: 621801084
------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 16 | 1 (0)| 00:00:01 |
|* 1 | TABLE ACCESS HASH| ORDERS_CLUSTER | 1 | 16 | 1 (0)| 00:00:01 |
------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("MERCHANTID"=100001 AND "TRANSACTIONID"='2')
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
31 consistent gets
0 physical reads
0 redo size
485 bytes sent via SQL*Net to client
540 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
0 rows processed
SQL>
Find Optimal Hashkeys, Trade-Offs
For optimal read performance all the hash collisions should fit in one block (all Oracle I/O is done per block, usually 8K). Getting the ideal storage right is tricky and requires knowing the hash algorithm, storage size (not the same as the block size), and number of hash keys (the buckets). Oracle has a default algorithm and size so it is possible to focus on only one attribute, the number of hash keys.
More hash keys leads to fewer collisions. This is good for TABLE ACCESS HASH performance as there is only one block to read. Below are the number of consistent gets for different hashkey sizes. For comparison an index access is also included. With enough hashkeys the number of blocks decreases to the optimal number, 1.
Method         Consistent Gets (for transactionid = 1, 20, 300, 4000, and 50000)
Index          4, 3, 3, 3, 3
Hashkeys 100   1, 31, 31, 31, 31
Hashkeys 1000  1, 3, 4, 4, 4
Hashkeys 10000 1, 1, 1, 1, 1
More hash keys also lead to more buckets, more wasted space, and a slower TABLE ACCESS FULL operation.
Table type     Space in MB
Heap table              24
Hashkeys 100            26
Hashkeys 1000           30
Hashkeys 10000          81
To reproduce my results, use a sample query like select * from orders_cluster where merchantid = 100001 and transactionid = '1'; and change the last value to 1, 20, 300, 4000, and 50000.
Performance Comparison
Consistent gets are predictable and easy to measure, but at the end of the day only the wall-clock time matters. Surprisingly, the index access, with 4 times more consistent gets, is still faster than the optimal hash cluster scenario.
--3.5 seconds for b-tree access.
declare
v_count number;
begin
for i in 1 .. 100000 loop
select count(*)
into v_count
from orders_table
where merchantid = 100000 and transactionid = '1';
end loop;
end;
/
--3.8 seconds for hash cluster access.
declare
v_count number;
begin
for i in 1 .. 100000 loop
select count(*)
into v_count
from orders_cluster
where merchantid = 100000 and transactionid = '1';
end loop;
end;
/
I also tried the test with variable predicates but the results were similar.
Does it Scale?
No, hash clusters do not scale. Despite the O(1) time complexity of TABLE ACCESS HASH, and the O(log n) time complexity of INDEX UNIQUE SCAN, hash clusters never seem to outperform b-tree indexes.
I tried the above sample code with 10 million rows. The hash cluster was painfully slow to load, and still under-performed the index on SELECT performance. I tried to scale it up to 100 million rows but the insert was going to take 11 days.
The good news is that b-trees scale well. Adding 100 million rows to the above example requires only 3 levels in the index. I looked at all DBA_INDEXES for a large database environment (hundreds of databases and a petabyte of data): the worst index had only 7 levels, and that was a pathological index on VARCHAR2(4000) columns. In most cases your b-tree indexes will stay shallow regardless of the table size.
In this case, O(log n) beats O(1).
But WHY?
Poor hash cluster performance is perhaps a victim of Oracle's attempt to simplify things and hide the kind of details necessary to make a hash cluster work well. Clusters are difficult to set up and use properly and would rarely provide a significant benefit anyway. Oracle has not put a lot of effort into them in the past few decades.
The commenters are correct that a simple b-tree index is best. But it's not obvious why that should be true, and it's good to think about the algorithms used in the database.

Oracle: trouble getting data from a partitioned table

On a new job I have to figure out how some database reporting scripts are working.
There is one table that is giving me some trouble. I see in existing scripts that it is a partitioned table.
My problem is that whatever query I run on this table returns "no rows selected".
Here are some details about my investigation in this table:
Table size estimate
SQL> select sum(bytes)/1024/1024 Megabytes from dba_segments where segment_name = 'PPREC';
MEGABYTES
----------
45.625
Partitions
There are a total of 730 partitions, ranged by date.
SQL> select min(PARTITION_NAME),max(PARTITION_NAME) from dba_segments where segment_name = 'PPREC';
MIN(PARTITION_NAME) MAX(PARTITION_NAME)
------------------------------ ------------------------------
PART20110201 PART20130130
There are several tablespaces, and the partitions are allocated across them:
SQL> select tablespace_name, count(partition_name) from dba_segments where segment_name = 'PPREC' group by tablespace_name;
TABLESPACE_NAME                COUNT(PARTITION_NAME)
------------------------------ ---------------------
REC_DATA_01                                      281
REC_DATA_02                                       48
REC_DATA_03                                       70
REC_DATA_04                                       26
REC_DATA_05                                       44
REC_DATA_06                                       51
REC_DATA_07                                       13
REC_DATA_08                                       48
REC_DATA_09                                       32
REC_DATA_10                                       52
REC_DATA_11                                       35
REC_DATA_12                                       30
Additional query:
SQL> select * from dba_segments where segment_name='PPREC' and partition_name='PART20120912';
OWNER SEGMENT_NAME PARTITION_NAME SEGMENT_TYPE TABLESPACE_NAME HEADER_FILE HEADER_BLOCK BYTES BLOCKS EXTENTS
----- ------------ -------------- --------------- --------------- ----------- ------------ ----- ------ -------
HIST PPREC PART20120912 TABLE PARTITION REC_DATA_01 13 475315 65536 8 1
INITIAL_EXTENT NEXT_EXTENT MIN_EXTENTS MAX_EXTENTS PCT_INCREASE FREELISTS FREELIST_GROUPS RELATIVE_FNO BUFFER_POOL
-------------- ----------- ----------- ----------- ------------ --------- --------------- ------------ -----------
65536 1 2147483645 13 DEFAULT
Tablespace usage
Here is a space summary (a composite of dba_tablespaces, dba_data_files, dba_segments, and dba_free_space):
TABLESPACE_NAME TOTAL_MEGABYTES USED_MEGABYTES FREE_MEGABYTES
------------------------------ --------------- -------------- --------------
REC_01_INDX 30,700 250 30,449
REC_02_INDX 7,745 7 7,737
REC_03_INDX 22,692 15 22,677
REC_04_INDX 15,768 10 15,758
REC_05_INDX 25,884 16 25,868
REC_06_INDX 27,992 16 27,975
REC_07_INDX 17,600 10 17,590
REC_08_INDX 18,864 11 18,853
REC_09_INDX 19,700 12 19,687
REC_10_INDX 28,716 16 28,699
REC_DATA_01 102,718 561 102,156
REC_DATA_02 24,544 3,140 21,403
REC_DATA_03 72,710 4 72,704
REC_DATA_04 29,191 2 29,188
REC_DATA_05 42,696 3 42,692
REC_DATA_06 52,780 323 52,456
REC_DATA_07 16,536 1 16,534
REC_DATA_08 49,247 3 49,243
REC_DATA_09 30,848 2 30,845
REC_DATA_10 49,620 3 49,616
REC_DATA_11 40,616 2 40,613
REC_DATA_12 184,922 123,435 61,486
The tablespace usage seems to confirm that this table is not empty; in fact its last tablespace (REC_DATA_12) seems pretty busy.
Existing scripts
What I find puzzling is that there are some PL/SQL stored procedures that seem to work on that table and get data out of it.
An example of such a stored procedure is as follows:
procedure FIRST_REC as
vpartition varchar2(12);
begin
select 'PART'||To_char(sysdate,'YYYYMMDD') INTO vpartition FROM DUAL;
execute immediate
'MERGE INTO FIRST_REC_temp a
USING (SELECT bno, min(trdate) mintr,max(trdate) maxtr
FROM PPREC PARTITION ('||vpartition||') WHERE route_id IS NOT NULL AND trunc(trdate) <= trunc(sysdate-1)
GROUP BY bno) b
ON (a.bno=b.bno)
when matched then
update set a.last_tr = b.maxtr
when not matched then
insert (a.bno,a.last_tr,a.first_tr)
values (b.bno,b.maxtr,b.mintr)';
commit;
end;
However if I try using the same syntax manually on the table, here is what I get:
SQL> select count(*) from PPREC PARTITION (PART20120912);
COUNT(*)
----------
0
I have tried a few random partitions and I always get the same 0 count.
Summary
- I see a table that seems to contain data (space used, tablespaces, data files)
- The table is partitioned (one partition per day over a period of 730 days, ending at the end of January 2013)
- Scripts are extracting data from that table somehow
Question
- My queries using PARTITION all return "no rows selected". What am I doing wrong? How could I find out how to extract data from this table?
I suppose it's possible that some other process might be deleting the data, but without visiting your site there's no way for anyone here to tell if that might be so.
I don't see in your post that you mentioned the name of the partitioning DATE column, but based on the SQL you posted I'll assume it's TRDATE - if this is not correct, change TRDATE in the statement below to be the partitioning column.
That said, give this a try:
SELECT COUNT(*)
FROM PPREC
WHERE TRDATE >= TO_DATE('01-SEP-2012 00:00:00', 'DD-MON-YYYY HH24:MI:SS');
This assumes you should have data in this table from September. If you find data, great. If you don't - well, Back In The Day (when men were men, women were women, and computers were water-cooled :-) we had a little saying about memory on IBM mainframes:
1. If you can see it, and it's there, it's Real.
2. If you can't see it, but it's there, it's Protected.
3. If you can see it, but it's not there, it's Virtual.
4. If you can't see it, and it's not there, it's GONE!
:-)
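Before concluding the data is GONE, one more quick check is to ask the data dictionary what it thinks each partition holds - a sketch; NUM_ROWS is only as fresh as the last statistics gathering:
-- Sketch: per-partition row counts as recorded by optimizer statistics.
SELECT partition_name, num_rows, last_analyzed
FROM dba_tab_partitions
WHERE table_name = 'PPREC'
ORDER BY partition_position;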
Use of the PARTITION clause should be reserved for situations where you are experiencing a performance problem (note: guessing about what is or is not going to be a performance problem is not allowed - until you've got a performance problem, you don't have a performance problem; over the years I've found that software spends a lot of execution time in the darndest places :-), and the usual fixes (adding indexes, deleting unnecessary data, human sacrifice, etc.) haven't worked. Basically, write your queries normally and trust the database to get it right. In the general case, always write the simplest code - and do the simplest thing - that could possibly work; 99+ percent of the time it will be fine. That allows you to spend your optimization time on the less-than-one-percent of cases where simple isn't good enough - and most of the software you write or design will be simple and easy to understand.
Share and enjoy.
