How to return a string in a particular format in Oracle

I have a requirement where I want to return output in a particular format based on one column (IDX). One column holds a date range (a comma-separated list of dates), and the output should repeat for each date in that range.
drop table t_table_test;
create table t_table_test ( ID NUMBER, NM VARCHAR2(4000), VAL VARCHAR2(4000), IDX NUMBER);
select * from t_table_test;
INSERT INTO t_table_test VALUES (1,'CNTRY', 'USA',1);
INSERT INTO t_table_test VALUES (1,'DT', '2017-01-01,2017-01-02',2);
INSERT INTO t_table_test VALUES (1,'PART', 'NA',3);
If the input is as below:
ID NM VAL IDX
1 CNTRY USA 1
1 DT 2017-01-01,2017-01-02 2
1 PART NA 3
the output should be the following, with the pair order driven by the IDX column:
CNTRY:USA,DT:2017-01-01,PART:NA?CNTRY:USA,DT:2017-01-02,PART:NA
If instead the input (I/P) is:
ID NM VAL IDX
1 DT 2017-01-01,2017-01-02 1
1 CNTRY USA 2
1 PART NA 3
DT:2017-01-01,CNTRY:USA,PART:NA?DT:2017-01-02,CNTRY:USA,PART:NA
DELETE FROM t_table_test WHERE idx=3;
commit;
O/P DT:2017-01-01,CNTRY:USA?DT:2017-01-02,CNTRY:USA
DELETE FROM t_table_test WHERE idx=1;
commit;
O/P DT:2017-01-01?DT:2017-01-02
I need a query which works in all of the above cases.

Perhaps you can try this:
select rtrim(XMLAGG(XMLELEMENT(E, val || '?')).EXTRACT('//text()'), '?')
from (
  with dates as (
    SELECT distinct NM || ':' || trim(regexp_substr(val, '[^,]+', 1, level)) val
    from   t_table_test
    where  nm = 'DT'
    CONNECT BY instr(val, ',', 1, level - 1) > 0
  ),
  parts as (
    select NM || ':' || VAL val FROM t_table_test where nm = 'PART'
  )
  SELECT t.NM || ':' || t.VAL || ',' || d.VAL || ',' || p.val val
  from   t_table_test t, dates d, parts p
  where  t.NM = 'CNTRY'
);
It does rely on the names being fixed - having only DT be a comma-separated list of values, for instance - but I think it might help you format your results. You can use this as a basis for building many similar formatting solutions.
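If you need something that does not depend on which NM values are present, the sketch below is one possible generalization, not a drop-in answer. It assumes Oracle 11gR2 or later (LISTAGG, REGEXP_COUNT) and at most one comma-separated row (the DT row) per ID; pairs are ordered by IDX and the '?' groups follow the order of the dates in the list. The idea is to expand the DT list into one row per date, build one NM:VAL string per date occurrence, then join those strings with '?'.
-- Sketch only: assumes 11gR2+ and at most one comma-separated (DT) row per ID.
with dt_rows as (
  select id, val
  from   t_table_test
  where  nm = 'DT'
),
date_list as (            -- one row per date inside the DT list
  select id,
         level as occ,
         trim(regexp_substr(val, '[^,]+', 1, level)) as dt_val
  from   dt_rows
  connect by level <= regexp_count(val, '[^,]+')
         and prior id = id
         and prior sys_guid() is not null
),
occurrences as (           -- IDs without a DT row still get exactly one group
  select id, occ from date_list
  union
  select id, 1 from t_table_test
),
pairs as (                 -- NM:VAL pairs, with DT replaced by the date of each group
  select t.id,
         o.occ,
         t.idx,
         t.nm || ':' || case when t.nm = 'DT' then d.dt_val else t.val end as pair
  from   t_table_test t
  join   occurrences  o on o.id = t.id
  left   join date_list d on d.id = t.id and d.occ = o.occ
)
select id,
       listagg(grp, '?') within group (order by occ) as result
from  (select id, occ,
              listagg(pair, ',') within group (order by idx) as grp
       from   pairs
       group  by id, occ)
group by id;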

Related

Cursor to Query statement

I am inserting into TABLE_A as given below.
INSERT
INTO Table_A (house_id,
house_key_nbr,
mnty_code,
split)
SELECT wcp.id,
ld.ld_ln_id,
ld.ld_mnty,
ROUND((ld.ld_ln_bal/wla.LOAN_AMT) * 100,2) split
FROM table_B ld,
table_C cc,
TABLE_D wcp,
TABLE_E wla
WHERE cc.conv_id = I_conv_id
AND cc.ev_id = wcp.ev_id
AND cc.client_plan_nbr = ld.plan_id
AND wcp.ssn = ld.ssn
AND wla.house_id = wcp.id
AND wla.house_key_nbr = ld.ld_ln_id
AND ld.status_code in ('V','W');
Once I have loaded the data into TABLE_A, I create a cursor to find the records where the sum of split is not equal to 100. For those cases I find the difference and then update a record, as given below.
CURSOR max_percent IS
SELECT house_id,
house_key_nbr,
sum(split) percent_sum
FROM TABLE_A s1,
TABLE_D p1,
table_C c1
WHERE s1.house_id = p1.id
AND p1.ev_id = c1.ev_id
AND c1.conv_id = I_conv_id
GROUP BY house_id, house_key_nbr
HAVING SUM(split) != 100;
OPEN max_percent;
l_debug_msg:='Cursor Opened';
FETCH max_percent BULK COLLECT INTO mnty_rec;
l_debug_msg:='Fetching the values from cursor';
FOR i IN 1..mnty_rec.COUNT
LOOP
v_diff := 100.00 - mnty_rec(i).percent_sum;
l_debug_msg:='The difference is '||v_diff||' for the house_id : '||mnty_rec(i).house_id;
UPDATE TABLE_A wcplms
SET split = split + v_diff
WHERE wcplms.house_id = mnty_rec(i).house_id
AND wcplms.house_key_nbr = mnty_rec(i).house_key_nbr
AND rownum = 1;
l_debug_msg:='Updated the percentage value for the house_id'||mnty_rec(i).house_id ;
END LOOP;
CLOSE max_percent;
The question here is: I achieved this simple process using a cursor. Is there any way I can achieve it at insertion time itself, instead of writing the cursor?
I'm simplifying your setup a bit with two tables: table_a accumulating the data and table_b containing the new data.
-- TABLE_A: Primary Key HOUSE_ID, HOUSE_KEY_NBR
create table table_a as
select 1 house_id, 1 house_key_nbr, 90 split from dual union all
select 1 house_id, 2 house_key_nbr, 30 split from dual union all
select 1 house_id, 3 house_key_nbr, 100 split from dual;
-- TABLE_B: new data
create table table_b as
select 1 house_id, 1 house_key_nbr, 5 split from dual union all
select 1 house_id, 1 house_key_nbr, 5 split from dual union all
select 1 house_id, 4 house_key_nbr, 50 split from dual union all
select 1 house_id, 4 house_key_nbr, 40 split from dual union all
select 1 house_id, 5 house_key_nbr, 100 split from dual;
The important point is that table_a has a primary key defined, so you only need to update one row to correct the SPLIT.
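Note that the CREATE TABLE ... AS SELECT above does not actually declare that key, so as a small addition (not part of the original script) you might create it explicitly:
-- not in the original script: declare the primary key the example relies on
alter table table_a add constraint table_a_pk primary key (house_id, house_key_nbr);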
The first step is simply to MERGE the new data:
MERGE INTO table_a a
USING (select HOUSE_ID, HOUSE_KEY_NBR, sum(SPLIT) SPLIT
from table_b
group by HOUSE_ID, HOUSE_KEY_NBR) b
ON (a.HOUSE_ID = b.HOUSE_ID and a.HOUSE_KEY_NBR = b.HOUSE_KEY_NBR)
WHEN MATCHED THEN
update SET a.SPLIT = a.SPLIT + b.SPLIT
WHEN NOT MATCHED THEN
insert (HOUSE_ID, HOUSE_KEY_NBR, SPLIT)
values (b.HOUSE_ID, b.HOUSE_KEY_NBR, b.SPLIT)
So basically you first aggregate the new data to the level of the PK and then, using the MERGE, either insert into or update table_a.
In the second step, perform the correction using the same MERGE approach, only with a different source query that returns the difference of the SPLIT sum from 100.
MERGE INTO table_a a
USING (select HOUSE_ID, HOUSE_KEY_NBR, 100 - sum(SPLIT) SPLIT
from table_a
group by HOUSE_ID, HOUSE_KEY_NBR
having sum(SPLIT) != 100) b
ON (a.HOUSE_ID = b.HOUSE_ID and a.HOUSE_KEY_NBR = b.HOUSE_KEY_NBR)
WHEN MATCHED THEN
update SET a.SPLIT = a.SPLIT + b.SPLIT
WHEN NOT MATCHED THEN
insert (HOUSE_ID, HOUSE_KEY_NBR, SPLIT)
values (b.HOUSE_ID, b.HOUSE_KEY_NBR, b.SPLIT)
After this step all SPLIT sums equal 100:
select HOUSE_ID, HOUSE_KEY_NBR, sum(SPLIT)
from table_a
group by HOUSE_ID, HOUSE_KEY_NBR
order by 1,2;
HOUSE_ID HOUSE_KEY_NBR SUM(SPLIT)
---------- ------------- ----------
1 1 100
1 2 100
1 3 100
1 4 100
1 5 100
If you do not want to MERGE into table_a and you use INSERT only, I'd challenge this design, because it is not clear which of the many records with the same key you want to update.
I'd recommend not UPDATEing but INSERTing additional rows with the calculated SPLIT difference.
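A rough sketch of that alternative, written against the original TABLE_A layout from the question; mnty_code is left NULL here purely as a placeholder (the question doesn't show its datatype or a suitable marker value), and you would scope the SELECT to the current conversion the same way the cursor does:
-- Sketch only: insert one adjustment row per key instead of updating an arbitrary row.
-- mnty_code is NULL here as a placeholder marker value.
INSERT INTO table_a (house_id, house_key_nbr, mnty_code, split)
SELECT house_id,
       house_key_nbr,
       NULL,
       100 - SUM(split)
FROM   table_a
GROUP  BY house_id, house_key_nbr
HAVING SUM(split) != 100;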
If mnty_code is unique for each house_id, house_key_nbr pair, then you can use window functions in your insert. Try using this for inserting into the split column:
CASE
  WHEN 1 = ROW_NUMBER() OVER ( PARTITION BY wcp.id, ld.ld_ln_id ORDER BY mnty_code DESC ) THEN
    -- This is the last row for the given house_id / house_key_nbr, so do the special split calculation
    100 - SUM(ROUND((ld.ld_ln_bal/wla.LOAN_AMT) * 100,2))
            OVER ( PARTITION BY wcp.id, ld.ld_ln_id ORDER BY mnty_code ASC
                   ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING )
  ELSE
    -- Normal split calculation
    ROUND((ld.ld_ln_bal/wla.LOAN_AMT) * 100,2)
END split
The idea is that, if you are inserting the last row for a given house_id, house_key_nbr, then set the split value to 100 minus the sum of all the previous values.
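For context, here is a hedged sketch of how that CASE expression might slot into the original INSERT ... SELECT from the question. mnty_code in the ORDER BY corresponds to ld.ld_mnty in the source select list, and the NVL is an addition to cover keys with only a single row, where the preceding-rows sum would otherwise be NULL:
INSERT INTO Table_A (house_id, house_key_nbr, mnty_code, split)
SELECT wcp.id,
       ld.ld_ln_id,
       ld.ld_mnty,
       CASE
         WHEN 1 = ROW_NUMBER() OVER (PARTITION BY wcp.id, ld.ld_ln_id
                                     ORDER BY ld.ld_mnty DESC)
         -- last row of the key: 100 minus everything inserted before it
         THEN 100 - NVL(SUM(ROUND((ld.ld_ln_bal / wla.LOAN_AMT) * 100, 2))
                          OVER (PARTITION BY wcp.id, ld.ld_ln_id
                                ORDER BY ld.ld_mnty ASC
                                ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING), 0)
         -- normal split calculation
         ELSE ROUND((ld.ld_ln_bal / wla.LOAN_AMT) * 100, 2)
       END split
FROM   table_B ld,
       table_C cc,
       TABLE_D wcp,
       TABLE_E wla
WHERE  cc.conv_id = I_conv_id
AND    cc.ev_id = wcp.ev_id
AND    cc.client_plan_nbr = ld.plan_id
AND    wcp.ssn = ld.ssn
AND    wla.house_id = wcp.id
AND    wla.house_key_nbr = ld.ld_ln_id
AND    ld.status_code IN ('V', 'W');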
If mnty_code is not unique within each house_id, house_key_nbr pair, it gets problematic, because there is no way to identify the "last" row in each group.

How to select the second split of column data from an Oracle database

I want to select data from an Oracle table where a column contains comma-separated values (e.g. key,value); I want to select the second part of the split, i.e. the value.
The table column data is as below:
column_data
++++++++++++++
asper,worse
tincher,good
golder
null -- null values need to be eliminated during selection
www,ewe
From the above data, the desired output is like below:
column_data
+++++++++++++
worse
good
golder
ewe
Please help me with the query
According to the data you provided, here are two options:
result1: regular expressions one (get the 2nd word if it exists; otherwise, get the 1st one)
result2: SUBSTR + INSTR combination
SQL> with test (col) as
2 (select 'asper,worse' from dual union all
3 select 'tincher,good' from dual union all
4 select 'golder' from dual union all
5 select null from dual union all
6 select 'www,ewe' from dual
7 )
8 select col,
9 nvl(regexp_substr(col, '\w+', 1, 2), regexp_substr(col, '\w+', 1,1 )) result1,
10 --
11 nvl(substr(col, instr(col, ',') + 1), col) result2
12 from test
13 where col is not null;
COL RESULT1 RESULT2
------------ -------------------- --------------------
asper,worse worse worse
tincher,good good good
golder golder golder
www,ewe ewe ewe
SQL>

Oracle Query: Get distinct names having count greater than a threshold

I have a table with the schema given below:
EmpID,MachineID,Timestamp
1, A,01-Nov-13
2, A,02-Nov-13
3, C,03-Nov-13
1, B,02-Nov-13
1, C,04-Nov-13
2, B,03-Nov-13
3, A,02-Nov-13
Desired Output:
EmpID,MachineID
1, A
1, B
1, C
2, A
2, B
3, A
3, C
So basically, I want to find the employees who have used more than one machine in the given time period.
The query I am using is:
select EmpID,count(distinct(MachineID)) from table
where Timestamp between '01-NOV-13' AND '07-NOV-13'
group by EmpID having count(distinct(MachineID)) > 1
order by count(distinct(MachineID)) desc;
This query gives me output like this:
EmpID,count(distinct(MachineID))
1, 3
2, 2
3, 2
Can anyone help with making the changes needed to get the output described above in my question?
One possible solution:
CREATE TABLE emp_mach (
empid NUMBER,
machineid VARCHAR2(1),
timestamp_val DATE
);
INSERT INTO emp_mach VALUES (1,'A', DATE '2013-11-01');
INSERT INTO emp_mach VALUES (2,'A', DATE '2013-11-02');
INSERT INTO emp_mach VALUES (3,'C', DATE '2013-11-03');
INSERT INTO emp_mach VALUES (1,'B', DATE '2013-11-02');
INSERT INTO emp_mach VALUES (1,'C', DATE '2013-11-04');
INSERT INTO emp_mach VALUES (2,'B', DATE '2013-11-03');
INSERT INTO emp_mach VALUES (3,'A', DATE '2013-11-02');
COMMIT;
SELECT DISTINCT empid, machineid
FROM emp_mach
WHERE empid IN (
SELECT empid
FROM emp_mach
WHERE timestamp_val BETWEEN DATE '2013-11-01' AND DATE '2013-11-07'
GROUP BY empid
HAVING COUNT(DISTINCT machineid) > 1
)
ORDER BY empid, machineid;
(I've changed the name of the timestamp column to timestamp_val)
Output:
EMPID MACHINEID
---------- ---------
1 A
1 B
1 C
2 A
2 B
3 A
3 C
You did the hardest part. Your query just has to be used to filter the results:
SELECT t1.empid, t1.machineid
FROM table t1
WHERE EXISTS (
    SELECT empid
    FROM table t2
    WHERE timestamp BETWEEN '01-NOV-13' AND '07-NOV-13'
      AND t2.empid = t1.empid
    GROUP BY empid
    HAVING COUNT(DISTINCT machineid) > 1
)
ORDER BY empid, machineid;
Edit: posted a few seconds after Przemyslaw Kruglej. I'll leave it here since it is just another alternative (using EXISTS instead of IN).
SELECT * FROM
  (SELECT EmpID, COUNT(DISTINCT MachineID) AS NumEmp
   FROM TableA
   WHERE Timestamp BETWEEN '01-NOV-13' AND '07-NOV-13'
   GROUP BY EmpID
   ORDER BY EmpID
  )
WHERE NumEmp > 1

Replacing Text which does not match a pattern in Oracle

I have the below text in a CLOB column in a table:
Table Name: tbl1
Columns
col1 - number (Primary Key)
col2 - clob (as below)
Row#1
-----
Col1 = 1
Col2 =
1331882981,ab123456,Some text here
which can run multiple lines and have a lot of text...
~1331890329,pqr123223,Some more text...
Row#2
-----
Col1 = 2
Col2 =
1331882981,abc333,Some text here
which can run multiple lines and have a lot of text...
~1331890329,pqrs23,Some more text...
Now I need to know how we can get the below output:
Col1 Value
---- ---------------------
1 1331882981,ab123456
1 1331890329,pqr123223
2 1331882981,abc333
2 1331890329,pqrs23
([0-9]{10},[a-z 0-9]+) is the regular expression to match "1331890329,pqrs23", and I need to know how to replace the text which does not match this regex and then split the matches into multiple rows.
EDIT#1
I am on Oracle 10.2.0.5.0 and hence cannot use the REGEXP_COUNT function :-( Also, col2 is a CLOB which is massive.
EDIT#2
I've tried the below query and it works fine for some records (i.e. if I add a "where" clause). But when I remove the "where", it never returns any result. I've tried to put this into a view and insert into a table and left it running overnight, but it still had not completed :(
with t as (select col1, col2 from temp_table)
select col1,
cast(substr(regexp_substr(col2, '[^~]+', 1, level), 1, 50) as
varchar2(50)) data
from t
connect by level <= length(col2) - length(replace(col2, '~')) + 1
EDIT#3
# of Chars in Clob Total
----------- -----
0 - 1k 3196
1k - 5k 2865
5k - 25k 661
25k - 100k 36
> 100k 2
----------- -----
Grand Total 6760
I have ~7k rows of clobs which have the distribution as shown above...
Well, you could try something like:
with v as
(
select 1 col1, '1331882981,ab123456,Some text here
which can run multiple lines and have a lot of text...
~1331890329,pqr123223,Some more text...' col2 from dual
union all
select 2 col1, '133188298777,abc333,Some text here
which can run multiple lines and have a lot of text...
~1331890329,pqrs23,Some more text...' col2 from dual
)
select distinct col1, regexp_substr(col2, '([0-9]{10},[a-z 0-9]+)', 1, level) split
from v
connect by level <= REGEXP_COUNT(col2, '([0-9]{10},[a-z0-9]+)')
order by col1
;
This gives:
1 1331882981,ab123456
1 1331890329,pqr123223
2 1331890329,pqrs23
2 3188298777,abc333
EDIT: for 10g, REGEXP_COUNT does not exist, but you have workarounds. Here I replace the pattern found with something I hope I won't find in the text (here XYZXYZ, but you can choose something much more complex to be confident), take the length difference against the same replacement done with the empty string, then divide by the marker length (here, 6):
with v as
(
select 1 col1, '1331882981,ab123456,Some text here
which can run multiple lines and have a lot of text...
~1331890329,pqr123223,Some more text...' col2 from dual
union all
select 2 col1, '133188298777,abc333,Some text here
which can run multiple lines and have a lot of text...
~1331890329,pqrs23,Some more text...' col2 from dual
)
select distinct col1, regexp_substr(col2, '([0-9]{10},[a-z 0-9]+)', 1, level) split
from v
connect by level <= (length(REGEXP_REPLACE(col2, '([0-9]{10},[a-z 0-9]+)', 'XYZXYZ')) - length(REGEXP_REPLACE(col2, '([0-9]{10},[a-z 0-9]+)', ''))) / 6
order by col1
;
EDIT 2 : CLOBs (and LOBs in general) and regexp don't seem to fit well together:
ORA-00932: inconsistent datatypes: expected - got CLOB
Converting the CLOB to a string (regexp_substr(to_char(col2), ...)) seems to fix the issue.
EDIT 3: CLOBs don't like DISTINCT either, so converting the split result to char in an inner query and then applying the DISTINCT in the outer query succeeds!
select distinct col1, split from
(
select col1, to_char(regexp_substr(col2, '([0-9]{10},[a-z 0-9]+)', 1, level)) split
from temp_epn
connect by level <= (length(REGEXP_REPLACE(col2, '([0-9]{10},[a-z 0-9]+)', 'XYZXYZ')) - length(REGEXP_REPLACE(col2, '([0-9]{10},[a-z 0-9]+)', ''))) / 6
order by col1
);
The above solutions didn't work for me, so below is what I did.
update temp_table set col2=regexp_replace(col2,'([0-9]{10},[a-z0-9]+)','(\1)') ;
update temp_table set col2=regexp_replace(col2,'\),[\s\S]*~\(','(\1)$');
update temp_table set col2=regexp_replace(col2,'\).*?\(','$');
update temp_table set col2=replace(regexp_replace(col2,'\).*',''),'(','');
After these 4 update commands, the col2 will have something like
1 1331882981,ab123456$1331890329,pqr123223
2 1331882981,abc333$1331890329,pqrs23
Then I wrote a function to split this. The reason I went for a function is the need to split by "$" and the fact that col2 can still have more than 10k characters:
create or replace function parse( p_clob in clob ) return sys.odciVarchar2List
  pipelined
as
  l_offset number := 1;
  l_clob   clob   := translate( p_clob, chr(13) || chr(10) || chr(9), ' ' ) || '$';
  l_hit    number;
begin
  loop
    -- Find the next occurrence of "$" from l_offset
    l_hit := instr( l_clob, '$', l_offset );
    exit when nvl(l_hit, 0) = 0;
    -- Extract the string from l_offset to l_hit
    pipe row ( substr(l_clob, l_offset, l_hit - l_offset) );
    -- Move the offset
    l_offset := l_hit + 1;
  end loop;
  return;
end;
/
I then called
select col1,
REGEXP_SUBSTR(column_value, '[^,]+', 1, 1) col3,
REGEXP_SUBSTR(column_value, '[^,]+', 1, 2) col4
from temp_table, table(parse(temp_table.col2));

How can I return multiple identical rows based on a quantity field in the row itself?

I'm using Oracle to output line items from a shopping app. Each item has a quantity field that may be greater than 1, and if it is, I'd like to return that row N times.
Here's what I'm talking about for a table:
product_id, quantity
1, 3
2, 5
And I'm looking for a query that would return:
1,3
1,3
1,3
2,5
2,5
2,5
2,5
2,5
Is this possible? I saw this answer for SQL Server 2005 and I'm looking for almost the exact same thing in Oracle. Building a dedicated numbers table is unfortunately not an option.
I've used 15 as a maximum for the example, but you should set it to 9999 or whatever the maximum quantity you will support.
create table t (product_id number, quantity number);
insert into t values (1,3);
insert into t values (2,5);
select t.*
from t
join (select rownum rn from dual connect by level <= 15) a
on a.rn <= t.quantity
order by 1;
First create sample data:
create table my_table (product_id number , quantity number);
insert into my_table(product_id, quantity) values(1,3);
insert into my_table(product_id, quantity) values(2,5);
And now run this SQL:
SELECT product_id, quantity
FROM my_table tproducts
,( SELECT LEVEL AS lvl
FROM dual
CONNECT BY LEVEL <= (SELECT MAX(quantity) FROM my_table)) tbl_sub
WHERE tbl_sub.lvl BETWEEN 1 AND tproducts.quantity
ORDER BY product_id, lvl;
PRODUCT_ID QUANTITY
---------- ----------
1 3
1 3
1 3
2 5
2 5
2 5
2 5
2 5
This question is probably the same as this one: how to calc ranges in oracle
Updated solution, for Oracle 9i:
You can use a pipelined function like this:
CREATE TYPE SampleType AS OBJECT
(
product_id number,
quantity varchar2(2000)
)
/
CREATE TYPE SampleTypeSet AS TABLE OF SampleType
/
CREATE OR REPLACE FUNCTION GET_DATA RETURN SampleTypeSet
PIPELINED
IS
l_one_row SampleType := SampleType(NULL, NULL);
BEGIN
FOR cur_data IN (SELECT product_id, quantity FROM my_table ORDER BY product_id) LOOP
FOR i IN 1..cur_data.quantity LOOP
l_one_row.product_id := cur_data.product_id;
l_one_row.quantity := cur_data.quantity;
PIPE ROW(l_one_row);
END LOOP;
END LOOP;
RETURN;
END GET_DATA;
/
Now you can do this:
SELECT * FROM TABLE(GET_DATA());
Or this:
CREATE OR REPLACE VIEW VIEW_ALL_DATA AS SELECT * FROM TABLE(GET_DATA());
SELECT * FROM VIEW_ALL_DATA;
Both with same results.
(Based on my article pipelined function)
