How to create an id with AUTO_INCREMENT on a view in Oracle

Can anyone help me create an AUTO_INCREMENT column on a view in Oracle 11g?
Thanks

While it's not possible to return a single unique identity column for a view whose underlying data has no single unique identifier, it is possible to return composite values that uniquely identify the data. For example, given a table of CSV data with a unique ID on each row:
create table sample (id number primary key, csv varchar2(4000));
where the CSV column contains a string of comma separated values:
insert into sample
select 1, 'a' from dual union all
select 2, 'b,c' from dual union all
select 3, 'd,"e",f' from dual union all
select 4, ',h,' from dual union all
select 5, 'j,"",l' from dual union all
select 6, 'm,,o' from dual;
The following query unpivots the CSV data; the composite values (ID, SEQ) uniquely identify each VAL. The ID column identifies the record the data came from, and SEQ identifies the position within the CSV. (Note that recursive subquery factoring requires Oracle 11g Release 2 or later.)
WITH pvt(id, seq, csv, val, nxt) as (
  SELECT id                                              -- Parse out individual list items
       , 1                                               -- separated by commas and
       , csv                                             -- optionally enclosed by quotes
       , REGEXP_SUBSTR(csv,'(["]?)([^,]*)\1',1,1,null,2)
       , REGEXP_INSTR(csv, ',', 1, 1)
    FROM sample
  UNION ALL
  SELECT id
       , seq+1
       , csv
       , REGEXP_SUBSTR(csv,'(["]?)([^,]*)\1',nxt+1,1,null,2)
       , REGEXP_INSTR(csv, ',', nxt+1, 1)
    FROM pvt
   WHERE nxt > 0
)
select * from pvt order by id, seq;
ID  SEQ  CSV      VAL     NXT
--  ---  -------  ------  ---
 1    1  a        a         0
 2    1  b,c      b         2
 2    2  b,c      c         0
 3    1  d,"e",f  d         2
 3    2  d,"e",f  e         6
 3    3  d,"e",f  f         0
 4    1  ,h,      [NULL]    1
 4    2  ,h,      h         3
 4    3  ,h,      [NULL]    0
 5    1  j,"",l   j         2
 5    2  j,"",l   [NULL]    5
 5    3  j,"",l   l         0
 6    1  m,,o     m         2
 6    2  m,,o     [NULL]    3
 6    3  m,,o     o         0
15 rows selected.
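If all you actually need is a synthetic row number rather than a stable identity, a view can expose ROW_NUMBER() over a deterministic ordering. A minimal sketch against the SAMPLE table above (the view name is an assumption, and the numbering is regenerated on every query, so it is not a persistent identity):
create or replace view sample_v as
select row_number() over (order by id) as auto_id -- per-query pseudo id, not stored
     , id
     , csv
from sample;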

Related

How to correlate data and counts from columns to rows in Oracle (19c)?

I believe there is probably an easy way to solve these problems with pivots or partitions, but I can't seem to find the proper solutions. I have a roundabout solution for Problem 1 using a long list of SELECT SUM()s, and a long solution for Problem 2 where I select the COUNT(*) from Table B where the id matches an id from Table A, repeated in multiple (SELECT) blocks. With a large number of IDs, both of those approaches produce very long, tedious SQL, and I'm sure there is a better way; it is just eluding me.
I would really like solutions that would allow me to include a large set of multiple IDs or supply the solution with a table of IDs to evaluate.
Problem 1:
Table:
------------------
ID  DESC  YEAR
 1  A     2021
 1  B     2021
 1  C     2021
 2  A     2021
 2  B     2021
 2  C     2021
 3  A     2019
 3  B     2019
I would like the count of the IDs for each DESC, by year.
Expected Result:
------------------
Year  CountA  CountB  CountC
2019       1       1       0
2021       2       2       2
Problem 2:
Table A:
------------------
ID  DESC
 1  A
 2  B
 3  C
Table B:
------------------
SET  ID
 10   1
 10   1
 12   1
 13   2
 14   3
I would like to see (1) how many times each ID from Table A appears in each SET in Table B, and (2) how many times each ID from Table A appears in each SET of Table B and in no other SET (unique matches).
Expected Result 1:
------------------
ID  Count10  Count12  Count13  Count14
 1        2        1        0        0
 2        0        0        1        0
 3        0        0        0        1
Expected Result 2:
------------------
ID  UniqueCount10  UniqueCount12  UniqueCount13  UniqueCount14
 1              0              0              0              0
 2              0              0              1              0
 3              0              0              0              1
Thank you for any and all assistance.
All three problems can be solved with pivoting (counting Problem 2 as two different problems), although it is not clear what purpose Result 2 would serve in the second problem.
Note that desc and set are reserved keywords, and year is a keyword, so they shouldn't be used as column names. I changed them to descr, set_ (with an underscore) and yr. Also, I do not use double-quotes for column names in the output; all-caps column names are just fine.
In the second problem it is not clear why you need Table A. Could you have some id values that don't appear at all in Table B, but that you still want in the final output? If so, you will need to change my semi-joins to outer joins; a sketch of that variant is shown after the Part 1 result below.
In the first problem, you must pivot the result of a subquery that selects only the relevant columns from the base table (see the note after the Problem 1 output below for why). There is no such need for the second problem (unless your tables have other columns that should not be considered - left for you to figure out).
Problem 1
Data:
create table tbl (id, descr, yr) as
select 1, 'A', 2021 from dual union all
select 1, 'B', 2021 from dual union all
select 1, 'C', 2021 from dual union all
select 2, 'A', 2021 from dual union all
select 2, 'B', 2021 from dual union all
select 2, 'C', 2021 from dual union all
select 3, 'A', 2019 from dual union all
select 3, 'B', 2019 from dual
;
Query and output:
select *
from (select descr, yr from tbl)
pivot (count(*) for descr in ('A' as count_a, 'B' as count_b, 'C' as count_c))
order by yr
;
  YR  COUNT_A  COUNT_B  COUNT_C
----  -------  -------  -------
2019        1        1        0
2021        2        2        2
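To see why the subquery matters here: PIVOT implicitly groups by every column not referenced in the pivot clause, so pivoting tbl directly would keep id as a grouping column and return one row per (id, yr) instead of one per year. A quick check (a sketch against the same tbl):
select *
from tbl -- no subquery: id remains an implicit grouping column
pivot (count(*) for descr in ('A' as count_a, 'B' as count_b, 'C' as count_c))
order by yr;
-- returns one row per (ID, YR) combination rather than one per year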
Problem 2
Data:
create table table_a (id, descr) as
select 1, 'A' from dual union all
select 2, 'B' from dual union all
select 3, 'C' from dual
;
create table table_b (set_, id) as
select 10, 1 from dual union all
select 10, 1 from dual union all
select 12, 1 from dual union all
select 13, 2 from dual union all
select 14, 3 from dual
;
Part 1 - Query and result:
select *
from table_b
pivot (count(*) for set_ in (10 as count_10, 12 as count_12,
13 as count_13, 14 as count_14))
where id in (select id from table_a) -- is this needed?
order by id -- if needed
;
ID  COUNT_10  COUNT_12  COUNT_13  COUNT_14
--  --------  --------  --------  --------
 1         2         1         0         0
 2         0         0         1         0
 3         0         0         0         1
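For completeness, here is the outer-join variant mentioned earlier (a sketch, assuming ids missing from table_b should still appear, with zero counts):
select *
from (
  select a.id, b.set_
  from table_a a
  left join table_b b on b.id = a.id -- keep ids with no rows in table_b
)
pivot (count(*) for set_ in (10 as count_10, 12 as count_12,
                             13 as count_13, 14 as count_14))
order by id;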
Part 2 - Query and result:
select *
from (
select id, case count(distinct set_) when 1 then max(set_) end as set_
from table_b
where id in (select id from table_a) -- is this needed?
group by id
)
pivot (count(*) for set_ in (10 as unique_ct_10, 12 as unique_ct_12,
13 as unique_ct_13, 14 as unique_ct_14))
order by id -- if needed
;
ID  UNIQUE_CT_10  UNIQUE_CT_12  UNIQUE_CT_13  UNIQUE_CT_14
--  ------------  ------------  ------------  ------------
 1             0             0             0             0
 2             0             0             1             0
 3             0             0             0             1
In this last part of the second problem, you might as well just take the subquery and run it separately - what's the purpose of pivoting its output?

Use SYS_CONNECT_BY_PATH to aggregate values

I would like to generate a data hierarchy.
This query:
select connect_by_root(parent_id) as root_id
,ID, NAME
,SYS_CONNECT_BY_PATH(PARENT_ID,'/') PATH
,level
,line
,LINE*power(10,-level+1) CALC
,ltrim(SYS_CONNECT_BY_PATH(lpad(LINE,3,'0'), '.'),'.') SORT
from (
select 3 ID, 1 LINE, 2 PARENT_ID FROM DUAL
union all
select 4 ID, 2 LINE, 2 PARENT_ID FROM DUAL
union all
select 5 ID, 3 LINE, 2 PARENT_ID FROM DUAL
union all
select 6 ID, 1 LINE, 5 PARENT_ID FROM DUAL
union all
select 7 ID, 1 LINE, 6 PARENT_ID FROM DUAL
) v
start with v.parent_id = 2
connect by nocycle prior id=parent_id
Generates:
ROOT_ID  ID  PATH    LEVEL  LINE  CALC  SORT
      2   3  /2          1     1  1     001
      2   4  /2          1     2  2     002
      2   5  /2          1     3  3     003
      2   6  /2/5        2     1  0.1   003.001
      2   7  /2/5/6      3     1  0.01  003.001.001
What I would like:
ROOT_ID  ID  PATH    LEVEL  LINE  CALC
      2   3  /2          1     1  1
      2   4  /2          1     2  2
      2   5  /2          1     3  3
      2   6  /2/5        2     1  3.1
      2   7  /2/5/6      3     1  3.11
Is there a way to get sys_connect_by_path (or another function) to tally the CALC column and its parents?
Currently, I'm using the SORT field for ordering the rows; I'd rather sort on a proper numerical value (CALC field).
Try this:
select connect_by_root(parent_id) as root_id
,ID
,SYS_CONNECT_BY_PATH(PARENT_ID,'/') PATH
,level
,line
,LINE*power(10,-level+1) CALC
,XMLCAST(XMLQUERY(ltrim(SYS_CONNECT_BY_PATH(LINE*power(10,-level+1), '+'),'+') RETURNING CONTENT) AS NUMBER) SORT
from (
select 3 ID, 1 LINE, 2 PARENT_ID FROM DUAL
union all
select 4 ID, 2 LINE, 2 PARENT_ID FROM DUAL
union all
select 5 ID, 3 LINE, 2 PARENT_ID FROM DUAL
union all
select 6 ID, 1 LINE, 5 PARENT_ID FROM DUAL
union all
select 7 ID, 1 LINE, 6 PARENT_ID FROM DUAL
) v
start with v.parent_id = 2
connect by nocycle prior id=parent_id
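This works because XMLQUERY evaluates the concatenated path as an XQuery arithmetic expression: for ID 7 the path string is '3+0.1+0.01', which yields 3.11. The trick in isolation:
select xmlcast(xmlquery('3+0.1+0.01' returning content) as number) as calc
from dual;
-- CALC = 3.11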
You may take your SORT column and, after some fiddling (changing the first dot to a comma and removing the other dots), convert the result to a number.
The key part is here:
to_number(regexp_replace(regexp_replace(SORT,'\.',',',1,1),'\.',null),
'99D9999' , ' NLS_NUMERIC_CHARACTERS = '',.'' ') sort2
Example: 3.1.1 -> 3,1.1 -> 3,11, then convert to a number.
The complete query:
with v as (
select 3 ID, 1 LINE, 2 PARENT_ID FROM DUAL
union all
select 4 ID, 2 LINE, 2 PARENT_ID FROM DUAL
union all
select 5 ID, 3 LINE, 2 PARENT_ID FROM DUAL
union all
select 6 ID, 1 LINE, 5 PARENT_ID FROM DUAL
union all
select 7 ID, 1 LINE, 6 PARENT_ID FROM DUAL
), v2 as (
select connect_by_root(parent_id) as root_id
,ID
,SYS_CONNECT_BY_PATH(PARENT_ID,'/') PATH
,level my_level
,line
,LINE*power(10,-level+1) CALC
,ltrim(SYS_CONNECT_BY_PATH( LINE , '.'),'.') SORT
from v
start with v.parent_id = 2
connect by nocycle prior id=parent_id
)
select ROOT_ID, ID, PATH, my_level, LINE, CALC, SORT,
to_number(regexp_replace(regexp_replace(SORT,'\.',',',1,1),'\.',null),'99D9999' , ' NLS_NUMERIC_CHARACTERS = '',.'' ') sort2
from v2

How do you shift values down in a column in an Oracle table?

Given the following Oracle database table:
group  revision  comment
    1         1        1
    1         2        2
    1      null     null
    2         1        1
    2         2        2
    2         3        3
    2         4        4
    2      null     null
    3         1        1
    3         2        2
    3         3        3
    3      null     null
I want to shift the comment column one step down in relation to revision, within its group, so that I get the following table:
group  revision  comment
    1         1     null
    1         2        1
    1      null        2
    2         1     null
    2         2        1
    2         3        2
    2         4        3
    2      null        4
    3         1     null
    3         2        1
    3         3        2
    3      null        3
I have the following query:
MERGE INTO example_table t1
USING example_table t2
ON (
(t1.revision = t2.revision+1 OR
(t2.revision = (
SELECT MAX(t3.revision)
FROM example_table t3
WHERE t3.group = t1.group
) AND t1.revision IS NULL)
)
AND t1.group = t2.group)
WHEN MATCHED THEN UPDATE SET t1.comment = t2.comment;
That does most of this (it still needs a separate query to cover revision = 1), but it is very slow.
So my question is: how do I use MAX here as efficiently as possible to pull out the highest revision for each group?
I would use lag, not max:
create table example_table(group_id number, revision number, comments varchar2(40));
insert into example_table values (1,1,1);
insert into example_table values (1,2,2);
insert into example_table values (1,3,null);
insert into example_table values (2,1,1);
insert into example_table values (2,2,2);
insert into example_table values (2,3,3);
insert into example_table values (2,4,null);
select * from example_table;
merge into example_table e
using (select group_id, revision, comments, lag(comments, 1) over (partition by group_id order by revision nulls last) comments1 from example_table) u
on (u.group_id = e.group_id and nvl(u.revision,0) = nvl(e.revision,0))
when matched then update set comments = u.comments1;
select * from example_table;
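To preview the shift before merging, the lag subquery can be run on its own (the nulls last ordering places the null-revision row after the highest revision, so it picks up the last comment):
select group_id, revision, comments
     , lag(comments) over (partition by group_id
                           order by revision nulls last) as shifted
from example_table
order by group_id, revision nulls last;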

Can I update a particular attribute of a tuple with the same attribute of another tuple of the same table? If possible, what should the algorithm be?

Suppose I have a table with 10 records/tuples. Now I want to update an attribute of the 6th record with the same attribute of the 1st record, the 7th with the 2nd's, the 8th with the 3rd's, the 9th with the 4th's, and the 10th with the 5th's, all in one go, i.e. without using a cursor/loop. Use of any number of temporary tables is allowed. What is the strategy to do so?
PostgreSQL (and probably other RDBMSes) let you use self-joins in UPDATE statements just as you can in SELECT statements:
UPDATE tbl
SET attr = t2.attr
FROM tbl t2
WHERE tbl.id = t2.id + 5
AND tbl.id >= 6
This would be easy with an update-with-join but Oracle doesn't do that and the closest substitute can be very tricky to get to work. Here is the easiest way. It involves a subquery to get the new value and a correlated subquery in the where clause. It looks complicated but the set subquery should be self-explanatory.
The where subquery really only has one purpose: it connects the two tables, much as the on clause would do if we could do a join. Except that the field used from the main table (the one being updated) must be a key field. As it turns out, with the self "join" being performed below, they are both the same field, but it is required.
Add to the where clause other restraining criteria, as shown.
update Tuples t1
set t1.Attr =(
select t2.Attr
from Tuples t2
where t2.Attr = t1.Attr - 5 )
where exists(
select t2.KeyVal
from Tuples t2
where t1.KeyVal = t2.KeyVal)
and t1.Attr > 5;
SqlFiddle is pulling a hissy fit right now, so here is the data used:
create table Tuples(
KeyVal int not null primary key,
Attr int
);
insert into Tuples
select 1, 1 from dual union all
select 2, 2 from dual union all
select 3, 3 from dual union all
select 4, 4 from dual union all
select 5, 5 from dual union all
select 6, 6 from dual union all
select 7, 7 from dual union all
select 8, 8 from dual union all
select 9, 9 from dual union all
select 10, 10 from dual;
The table starts out looking like this:
KEYVAL  ATTR
------  ----
     1     1
     2     2
     3     3
     4     4
     5     5
     6     6
     7     7
     8     8
     9     9
    10    10
with this result:
KEYVAL  ATTR
------  ----
     1     1
     2     2
     3     3
     4     4
     5     5
     6     1
     7     2
     8     3
     9     4
    10     5
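As an aside, the same shift can be written as a self-MERGE keyed on KeyVal instead of Attr, avoiding the correlated subqueries (a sketch, assuming the offset is always 5):
merge into Tuples t1
using Tuples t2
on (t1.KeyVal = t2.KeyVal + 5) -- row 6 gets row 1's value, 7 gets 2's, and so on
when matched then update set t1.Attr = t2.Attr;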

How to match an integer with a varchar containing digits separated by commas in Oracle

I am facing a strange scenario where I need to match an integer with a varchar containing digits separated by commas in Oracle.
Example:
Table t1:
id integer
key integer
Table t2
id integer,
keys varchar2
T1 values are:
1, 111
2, 201
3, 301
T2 values are:
1, "111,301"
2, "111,201"
3, "201,301"
PROBLEM: Is there any way I can match (or regular-expression match) the key of T1 with the keys of T2?
You can do a regular join without regex for this:
select *
from t1
inner join t2
on ','||t2.keys||',' like '%,'||to_char(t1.key)||',%';
eg:
SQL> create table t1(id, key)
2 as
3 select 1, 111 from dual union all
4 select 2, 201 from dual union all
5 select 3, 301 from dual;
Table created.
SQL> create table t2(id, keys)
2 as
3 select 1, '111,301' from dual union all
4 select 2, '111,201' from dual union all
5 select 3, '201,301' from dual;
Table created.
SQL> select *
2 from t1
3 inner join t2
4 on ','||t2.keys||',' like '%,'||to_char(t1.key)||',%';
ID  KEY  ID  KEYS
--  ---  --  -------
 1  111   1  111,301
 1  111   2  111,201
 2  201   2  111,201
 2  201   3  201,301
 3  301   1  111,301
 3  301   3  201,301
6 rows selected.
It's not regex, just concatenation. For example, let's say we wanted to compare:
KEY  KEYS
111  111,301
we could say
where keys like '%'||key||'%'
i.e. expanded, this is
where '111,301' like '%111%'
which matches fine. But I added some commas there too, i.e. I did this:
where ',111,301,' like '%,111,%'
Why? Imagine instead you had this data:
KEY  KEYS
111  1111,301
If we did the simple join:
where '1111,301' like '%111%'
it would incorrectly match. By injecting leading and trailing commas on both sides:
where ',1111,301,' like '%,111,%'
it no longer erroneously matches, as ',1111,' is not like ',111,'.
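If a regex match is wanted anyway, REGEXP_LIKE can express the same bounded comparison (a sketch; the (^|,) and (,|$) anchors play the role of the injected commas):
select *
from t1
inner join t2
on regexp_like(t2.keys, '(^|,)' || t1.key || '(,|$)');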
