We are trying to execute a SELECT query that uses a column in a CASE expression; if the same column is also used in the WHERE clause, the query appears to go into an infinite loop.
eg
select empId,empName,
(case when empDept in ('A','B','C') then empAge
when empDept in ('E','F','G') then empExp
else 'Dept-Not found' end)
from employee where empDept in ('A','B','C','D','E','F','G')
The same thing happens even if we use OR conditions in the WHERE clause instead of IN.
EDIT: Edited the query.
Clearly there is something else going on. The WHERE clause is evaluated before the SELECT (column list) clause, so there is no way it could produce an infinite loop.
Consider (10.2.0.1):
SQL> CREATE TABLE employee AS
2 SELECT 1 empId, 'e1' empName, 1 empAge, 10 empExp, 'A' empDept FROM dual
3 UNION ALL SELECT 2, 'e2', 2, 9, 'B' FROM dual
4 UNION ALL SELECT 3, 'e3', 3, 8, 'C' FROM dual
5 UNION ALL SELECT 4, 'e4', 4, 7, 'D' FROM dual
6 UNION ALL SELECT 5, 'e5', 5, 6, 'E' FROM dual
7 UNION ALL SELECT 6, 'e6', 6, 5, 'F' FROM dual;
Table created
SQL> select empId,empName,
2 (case when empDept in ('A','B','C') then to_char(empAge)
3 when empDept in ('E','F','G') then to_char(empExp)
4 else 'Dept-Not found' end)
5 from employee where empDept in ('A','B','C','D','E','F','G');
EMPID EMPNAME (CASEWHENEMPDEPTIN('A','B','C'
---------- ------- ----------------------------------------
1 e1 1
2 e2 2
3 e3 3
4 e4 Dept-Not found
5 e5 6
6 e6 5
As you can see, in my example I had to add a TO_CHAR to your CASE branches because all results of a CASE expression must have the same data type. Without the TO_CHAR, in my case I got an ORA-00932. Maybe your tool hangs when a query returns an error?
As a pretty wild guess based on the little information you've given, I would say that without the WHERE clause the query does a full table scan, but when you add the WHERE clause it switches to an index range scan. If the records for the departments you are interested in are spread throughout the table, it is quite possible that an index range scan is much less efficient than a full table scan.
One possible cause of this is that statistics on the table and index are stale.
In any case, your first step when diagnosing a SQL performance problem should be to look at the execution plan.
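For instance, in SQL*Plus you could do something like the following (a minimal sketch reusing the query from the question; refreshing statistics first is optional, and the schema is assumed to be the current user):

-- gather fresh optimizer statistics on the table and its indexes (optional)
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'EMPLOYEE', cascade => TRUE);

-- produce and display the execution plan for the problematic query
EXPLAIN PLAN FOR
select empId, empName,
       (case when empDept in ('A','B','C') then to_char(empAge)
             when empDept in ('E','F','G') then to_char(empExp)
             else 'Dept-Not found' end)
from employee
where empDept in ('A','B','C','D','E','F','G');

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

Comparing the plan with and without the WHERE clause should show whether an index range scan is the culprit.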
I have a table which has data as:
id payor_name
---------------
1 AETNA
2 UMR
3 CIGNA
4 METLIFE
4 AETNAU
5 ktm
6 ktm
Id and payor_name are the two columns.
My expected output is:
id payor_name
---------------
1 AETNA
2 UMR
3 CIGNA
4 METLIFE
4 AETNAU
6 ktm ...> I want the id of this row to be changed from 5 to 6.
6 ktm
I want a one-to-one mapping between id and payor_name. So this is what I tried:
MERGE INTO offc.payor_collec A
USING (select id from offc.payor_collec where payor_name in(
select payor_name from offc.payor_collec group by payor_name having count(distinct id)>=2)) B
ON (A.id=B.id)
WHEN MATCHED THEN
UPDATE SET A.id=B.id
But when I ran it I got this error:
Error at line 1
ORA-38104: Columns referenced in the ON Clause cannot be updated: "A"."ID"
Id is a NUMBER whereas payor_name is a VARCHAR2.
How can I achieve this result?
MERGE works, but slightly differently from your code.
SQL> select * from test;
ID PAYOR
---------- -----
1 aetna
2 umr
5 ktm
6 ktm
SQL> merge into test t
2 using (select max(t1.id) id,
3 t1.payor_name
4 from test t1
5 group by t1.payor_name
6 ) x
7 on (x.payor_name = t.payor_name)
8 when matched then update set
9 t.id = x.id;
4 rows merged.
SQL> select * from test;
ID PAYOR
---------- -----
1 aetna
2 umr
6 ktm
6 ktm
SQL>
Use a correlated subquery:
UPDATE PAYOR_COLLEC pc
SET pc.ID = (SELECT MAX(pc2.ID)
FROM PAYOR_COLLEC pc2
WHERE pc2.PAYOR_NAME = pc.PAYOR_NAME)
dbfiddle here
You can use a MERGE statement, as you tried and as Littlefoot has shown.
You can also use a correlated subquery as Bob Jarvis has shown, but that will be quite inefficient.
Many Oracle developers are unaware that you can also update through a join. Worse, there are many who say "there is no such thing in Oracle."
In your problem, you need to join your table to an aggregate query (picking just the max id for each payor_name) and the join is on the group by column in the aggregate. This already guarantees that the join column will be unique in the right-hand table; that is all Oracle needs to allow the update through join.
Here is a complete example, starting with the create table statement, then the update and then showing the table after the update. Note that I don't need any constraints (like primary key, not null, unique, etc.) or indexes on the base table. If they do exist, so much the better, but the solution works in the most general case.
create table t (id, payor_name) as
select 1, 'AETNA' from dual union all
select 2, 'UMR' from dual union all
select 3, 'CIGNA' from dual union all
select 4, 'METLIFE' from dual union all
select 4, 'AETNAU' from dual union all
select 5, 'ktm' from dual union all
select 6, 'ktm' from dual;
Table T created.
update
(
select id, payor_name, max_id
from t inner join
(select max(id) as max_id, payor_name from t group by payor_name)
using (payor_name)
)
set id = max_id where id != max_id
;
1 row updated.
select * from t;
ID PAYOR_NAME
----- ----------
1 AETNA
2 UMR
3 CIGNA
4 METLIFE
4 AETNAU
6 ktm
6 ktm
Notice the where clause at the end of the update statement, too. You don't want to update rows to their pre-existing value; that will still generate undo and redo data (although I understand that Oracle has changed that in more recent versions - it now doesn't generate undo and redo unless a row did indeed change). I assume ID is NOT NULL - otherwise you should rewrite the where clause as
where decode(id, max_id, 0) is null
or equivalent
I want to select data from an Oracle table where a column contains comma-separated values (e.g. key,value); I want to select the second part of the split, i.e. the value.
The column data is as below:
column_data
++++++++++++++
asper,worse
tincher,good
golder
null -- null values need to be eliminated in the selection
www,ewe
From the above data, the desired output is as below:
column_data
+++++++++++++
worse
good
golder
ewe
Please help me with the query
According to the data you provided, here are two options:
result1: a regular expression option (get the 2nd word if it exists; otherwise, get the 1st one)
result2: a SUBSTR + INSTR combination
SQL> with test (col) as
2 (select 'asper,worse' from dual union all
3 select 'tincher,good' from dual union all
4 select 'golder' from dual union all
5 select null from dual union all
6 select 'www,ewe' from dual
7 )
8 select col,
9 nvl(regexp_substr(col, '\w+', 1, 2), regexp_substr(col, '\w+', 1,1 )) result1,
10 --
11 nvl(substr(col, instr(col, ',') + 1), col) result2
12 from test
13 where col is not null;
COL RESULT1 RESULT2
------------ -------------------- --------------------
asper,worse worse worse
tincher,good good good
golder golder golder
www,ewe ewe ewe
SQL>
Good day!
Folks, I have an issue when I want to compare multiple values.
For example, I have the log data for today, 23rd April. It is related to yesterday's job (22nd April) with a specific name.
In my comparison, I should select all logs whose dates are later than the start date of the job with this specific name. There are also earlier jobs with the same name.
How can I choose only the nearest (most recent) preceding job?
Could you please advise?
For example:
Table: Job
id
name
job start date
Table: Log
id
type
Log date
We have 3 different jobs with the name "coding"
and 4 different logs.
Jobs:
1, coding, 21-Apr-10
2, coding, 21-Apr-14
3, coding, 21-Apr-18
Logs:
1. 1, error, 23-Apr-10
2. 2, error, 23-Apr-15
3. 3, error, 20-Apr-18
4. 4, error, 23-Apr-18
When I match logs and jobs, I must look at their dates. For example, the error with id 1 is related to the job with id 1, because the log's date is later than that job's start date.
Likewise, the log with id 3 should be related to the job with id 2.
Generally speaking, something like this might do the job:
select *
from some_table t
where t.date_column = (select max(t1.date_column)
from some_table t1
where t1.id = t.id
)
If it doesn't, please provide a test case (CREATE TABLE and INSERT INTO sample data, which would be the input) and, based on that, tell us what you expect as the output.
[EDIT]
This is a somewhat strange data model; jobs and logs aren't related by anything but dates ... anyway, here's code that returns what you described (at least, as I understood it).
SQL> with tjob (id, name, job_start) as
2 (select 1, 'coding', to_date('21.04.2010', 'dd.mm.yyyy') from dual union
3 select 2, 'coding', to_date('21.04.2014', 'dd.mm.yyyy') from dual union
4 select 3, 'coding', to_date('21.04.2018', 'dd.mm.yyyy') from dual
5 ),
6 logs (id, type, log_date) as
7 (select 1, 'error', to_date('23.04.2010', 'dd.mm.yyyy') from dual union
8 select 2, 'error', to_date('23.04.2015', 'dd.mm.yyyy') from dual union
9 select 3, 'error', to_date('20.04.2018', 'dd.mm.yyyy') from dual union
10 select 4, 'error', to_date('23.04.2018', 'dd.mm.yyyy') from dual
11 )
12 select j.id job_id
13 from tjob j
14 where j.job_start = (select max(j1.job_start)
15 from tjob j1
16 where j1.job_start < (select l.log_date
17 from logs l
18 where l.id = &log_id));
Enter value for log_id: 1
JOB_ID
----------
1
SQL> /
Enter value for log_id: 3
JOB_ID
----------
2
SQL>
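If you need the matching job for every log in a single query, a set-based sketch over the same sample data (using MAX ... KEEP (DENSE_RANK LAST) to pick the latest job that starts before each log date) could look like this:

with tjob (id, name, job_start) as
  (select 1, 'coding', to_date('21.04.2010', 'dd.mm.yyyy') from dual union all
   select 2, 'coding', to_date('21.04.2014', 'dd.mm.yyyy') from dual union all
   select 3, 'coding', to_date('21.04.2018', 'dd.mm.yyyy') from dual
  ),
logs (id, type, log_date) as
  (select 1, 'error', to_date('23.04.2010', 'dd.mm.yyyy') from dual union all
   select 2, 'error', to_date('23.04.2015', 'dd.mm.yyyy') from dual union all
   select 3, 'error', to_date('20.04.2018', 'dd.mm.yyyy') from dual union all
   select 4, 'error', to_date('23.04.2018', 'dd.mm.yyyy') from dual
  )
select l.id log_id,
       -- id of the job with the latest start date that still precedes this log
       (select max(j.id) keep (dense_rank last order by j.job_start)
          from tjob j
         where j.job_start < l.log_date) job_id
from logs l
order by l.id;

With the data above this returns job 1 for log 1, job 2 for logs 2 and 3, and job 3 for log 4.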
Suppose I have a table with 10 records/tuples. Now I want to update an attribute of the 6th record with the same attribute of the 1st record, the 7th with the 2nd, the 8th with the 3rd, the 9th with the 4th, and the 10th with the 5th, all in one go, i.e. without using a cursor/loop. Use of any number of temporary tables is allowed. What is the strategy to do so?
PostgreSQL (and probably other RDBMSes) let you use self-joins in UPDATE statements just as you can in SELECT statements:
UPDATE tbl
SET attr = t2.attr
FROM tbl t2
WHERE tbl.id = t2.id + 5
AND tbl.id >= 6
This would be easy with an update-with-join, but Oracle doesn't support that, and the closest substitute can be very tricky to get to work. Here is the easiest way. It involves a subquery to get the new value and a correlated subquery in the where clause. It looks complicated, but the set subquery should be self-explanatory.
The where subquery really only has one purpose: it connects the two tables, much as the on clause would do if we could do a join. Except that the field used from the main table (the one being updated) must be a key field. As it turns out, with the self "join" being performed below, they are both the same field, but it is required.
Add to the where clause other restraining criteria, as shown.
update Tuples t1
set t1.Attr =(
select t2.Attr
from Tuples t2
where t2.Attr = t1.Attr - 5 )
where exists(
select t2.KeyVal
from Tuples t2
where t1.KeyVal = t2.KeyVal)
and t1.Attr > 5;
SQLFiddle is throwing a hissy fit right now, so here is the data that was used:
create table Tuples(
KeyVal int not null primary key,
Attr int
);
insert into Tuples
select 1, 1 from dual union all
select 2, 2 from dual union all
select 3, 3 from dual union all
select 4, 4 from dual union all
select 5, 5 from dual union all
select 6, 6 from dual union all
select 7, 7 from dual union all
select 8, 8 from dual union all
select 9, 9 from dual union all
select 10, 10 from dual;
The table starts out looking like this:
KEYVAL ATTR
------ ----
1 1
2 2
3 3
4 4
5 5
6 6
7 7
8 8
9 9
10 10
After running the update shown above, the table ends up with this result:
KEYVAL ATTR
------ ----
1 1
2 2
3 3
4 4
5 5
6 1
7 2
8 3
9 4
10 5
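For comparison, a MERGE-based sketch achieves the same shift, assuming (as in the table above) that KeyVal is the key that defines the record positions:

merge into Tuples t
using (select KeyVal, Attr from Tuples) s
on (t.KeyVal = s.KeyVal + 5)   -- row 6 pairs with row 1, row 7 with row 2, and so on
when matched then update set t.Attr = s.Attr;

Only rows 6 through 10 find a match in the USING subquery, so only those rows are updated.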
Id Field_id Field_value
------------------------------
1 10 'A'
1 11 'B'
1 12 'C'
I want to make rows like
Id Field_id Field_value data_1 data_2
--------------------------------------
1 10 'A' 'B' 'C'
Please help.
Take a look at this:
with t (Id, Field_id, Field_value) as (
select 1, 10, 'A' from dual union all
select 1, 11, 'B' from dual union all
select 1, 12, 'C' from dual
)
select FIELD_ID1, "10_FIELD_ID", "11_FIELD_ID","12_FIELD_ID"
from (select id, field_id, min(field_id) over() field_id1, field_value from t)
pivot (
max(field_value) field_id
for field_id in (10, 11, 12)
)
FIELD_ID1 10_FIELD_ID 11_FIELD_ID 12_FIELD_ID
---------------------------------------------------
10 A B C
Read more about pivot here
Normally, people look for a PIVOT query; however, your input data is not duplicated at all, as already mentioned in the comments.
What you could do is, using subquery factoring, select id, 10 as field_id, field_value, thus making field_id a static value of 10, and then use LISTAGG. But the grouped rows would end up as a single column of delimiter-separated values.
I hope your requirement is exactly what you stated and not a typo or a mistake while posting.
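A minimal sketch of that LISTAGG idea, reusing the sample data from the answer above (the values_csv alias is just for illustration):

with t (Id, Field_id, Field_value) as (
  select 1, 10, 'A' from dual union all
  select 1, 11, 'B' from dual union all
  select 1, 12, 'C' from dual
)
select id,
       10 as field_id,
       -- concatenate all values for the id into one comma-separated string
       listagg(field_value, ',') within group (order by field_id) as values_csv
from t
group by id;

This returns a single row: 1, 10, 'A,B,C', i.e. the values land in one delimiter-separated column rather than in separate columns.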