What is the SQL code to print 'Query' if the data in the field = 'Q'?
Welcome to SO!
An example out of the box:
select decode(dummy, 'X', 'Y') from dual;
For your scenario, something like:
select decode(mycol, 'Q', 'Query') mycol from mytable;
Best of luck!
The preferred option is CASE, because of readability; although, as @Bjarte suggested, DECODE can also be used (which is what I do, especially for simple cases). Also, note that tables have columns, not fields.
Anyway, CASE:
SQL> with test (field) as
2 -- sample data; you already have it, so don't type this part
3 (select 'A' from dual union all
4 select 'Q' from dual union all
5 select 'B' from dual
6 )
7 -- query you need
8 select field,
9 case when field = 'A' then 'Answer'
10 when field = 'Q' then 'Query'
11 else 'Unknown'
12 end as result
13 from test;
F RESULT
- -------
A Answer
Q Query
B Unknown
SQL>
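For comparison, the same three-way mapping written with DECODE gives an equivalent result in denser syntax (the default value goes last):

select field,
       decode(field, 'A', 'Answer',
                     'Q', 'Query',
                          'Unknown') as result
from test;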
Is it possible to keep order from an 'IN' conditional clause?
I found this question on SO, but in his example the OP already has a sorted 'IN' clause.
My case is different: the values in my 'IN' clause are in arbitrary order.
Something like this:
SELECT SomeField,OtherField
FROM TestResult
WHERE TestResult.SomeField IN (45,2,445,12,789)
I would like to retrieve the results in (45,2,445,12,789) order. I'm using an Oracle database. Is there perhaps something I can add to the conditional clause to tell it to keep the order of the list?
There will be no reliable ordering unless you use an ORDER BY clause:
SELECT SomeField,OtherField
FROM TestResult
WHERE TestResult.SomeField IN (45,2,445,12,789)
order by case TestResult.SomeField
when 45 then 1
when 2 then 2
when 445 then 3
...
end
You could split the query into 5 queries UNION ALL'd together, though...
SELECT SomeField,OtherField
FROM TestResult
WHERE TestResult.SomeField = 45
union all
SELECT SomeField,OtherField
FROM TestResult
WHERE TestResult.SomeField = 2
union all
...
I'd trust the former method more, and it would probably perform much better.
The DECODE function comes in handy here, instead of a CASE expression:
SELECT SomeField,OtherField
FROM TestResult
WHERE TestResult.SomeField IN (45,2,445,12,789)
ORDER BY DECODE(SomeField, 45,1, 2,2, 445,3, 12,4, 789,5)
Note that the value/position pairs (e.g. 445,3) are kept together for readability.
Try this:
SELECT T.SomeField,T.OtherField
FROM TestResult T
JOIN
(
SELECT 1 as Id, 45 as Val FROM dual UNION ALL
SELECT 2, 2 FROM dual UNION ALL
SELECT 3, 445 FROM dual UNION ALL
SELECT 4, 12 FROM dual UNION ALL
SELECT 5, 789 FROM dual
) I
ON T.SomeField = I.Val
ORDER BY I.Id
There is an alternative that uses string functions:
with const as (select ',45,2,445,12,789,' as vals)
select tr.*
from TestResult tr cross join const
where instr(const.vals, ','||cast(tr.somefield as varchar(255))||',') > 0
order by instr(const.vals, ','||cast(tr.somefield as varchar(255))||',')
I offer this because you might find it easier to maintain a string of values rather than an intermediate table.
I was able to do this in my application (using SQL Server 2016):
select ItemID, iName
from Items
where ItemID in (13,11,12,1)
order by CHARINDEX(' ' + CONVERT(varchar, ItemID) + ' ', ' 13 , 11 , 12 , 1 ')
I used a code-side regex to replace \b (word boundary) with a space. Something like...
var mylist = "13,11,12,1";
var spacedlist = mylist.replace(/\b/g, " ");
Importantly, because I can in my scenario, I cache the result until the next time the related items are updated, so the query runs only at item creation/modification rather than on every item view, minimizing any performance hit.
Pass the values in via a collection (SYS.ODCINUMBERLIST is an example of a built-in collection) and then order the rows by the collection's order:
SELECT t.SomeField,
t.OtherField
FROM TestResult t
INNER JOIN (
SELECT ROWNUM AS rn,
COLUMN_VALUE AS value
FROM TABLE(SYS.ODCINUMBERLIST(45,2,445,12,789))
) i
ON t.somefield = i.value
ORDER BY rn
Then, for the sample data:
CREATE TABLE TestResult ( somefield, otherfield ) AS
SELECT 2, 'A' FROM DUAL UNION ALL
SELECT 5, 'B' FROM DUAL UNION ALL
SELECT 12, 'C' FROM DUAL UNION ALL
SELECT 37, 'D' FROM DUAL UNION ALL
SELECT 45, 'E' FROM DUAL UNION ALL
SELECT 100, 'F' FROM DUAL UNION ALL
SELECT 445, 'G' FROM DUAL UNION ALL
SELECT 789, 'H' FROM DUAL UNION ALL
SELECT 999, 'I' FROM DUAL;
The output is:
SOMEFIELD OTHERFIELD
--------- ----------
       45 E
        2 A
      445 G
       12 C
      789 H
I have a table with data in the below format:
COURSE
[]
["12345"]
["12345","7890"
I want to extract the data between the brackets, but without the double quotes.
So my output would be in the below format:
COURSE
12345
12345, 7890
I tried the below code, which works fine for the first 3 rows:
select REGEXP_SUBSTR (COURSE,
'"([^"]+)"',
1,
1,
NULL,
1) from TEST;
But the 4th row only returns 12345.
Why not a simple TRANSLATE? It strips the brackets and double quotes in one pass (the dummy 'a' character is needed only because TRANSLATE requires a non-empty replacement string):
SQL> with test (course) as
2 (select '[]' from dual union
3 select null from dual union
4 select '["12345"]' from dual union
5 select '["12345","7890"]' from dual
6 )
7 select course,
8 translate(course, 'a[]"', 'a') result
9 from test;
COURSE RESULT
---------------- -----------------------------------------
["12345","7890"] 12345,7890
["12345"] 12345
[]
SQL>
Data in the file_name field of the generation table should be an assigned number, then _01, _02, or _03, etc. and then .pdf (example 82617_01.pdf).
Somewhere, the program is putting a state name and sometimes a date/time stamp, between the assigned number and the 01, 02, etc. (82617_ALABAMA_01.pdf or 19998_MAINE_07-31-2010_11-05-59_AM.pdf or 5485325_OREGON_01.pdf for example).
We would like to develop a SQL statement to find the bad file names and fix them. In theory it seems simple enough to find the file names containing the extra text and remove it, but putting the statement together is beyond me.
Any help or suggestions appreciated.
Something like:
UPDATE GENERATION
SET FILE_NAME (?)
WHERE FILE_NAME (?...LIKE '%STRING%');?
You can find the problem rows like this:
select *
from Files
where length(FILE_NAME) - length(replace(FILE_NAME, '_', '')) > 1
You can fix them like this:
update Files
set FILE_NAME = SUBSTR(FILE_NAME, 1, instr(FILE_NAME, '_') -1) ||
SUBSTR(FILE_NAME, instr(FILE_NAME, '_', 1, 2))
where length(FILE_NAME) - length(replace(FILE_NAME, '_', '')) > 1
You can also use the REGEXP_REPLACE function:
SQL> with t1(col) as(
2 select '82617_mm_01.pdf' from dual union all
3 select '456546_khkjh_89kjh_67_01.pdf' from dual union all
4 select '19998_MAINE_07-31-2010_11-05-59_AM.pdf' from dual union all
5 select '5485325_OREGON_01.pdf' from dual
6 )
7 select col
8 , regexp_replace(col, '^([0-9]+)_(.*)_(\d{2}\.pdf)$', '\1_\3') res
9 from t1;
COL RES
-------------------------------------- -----------------------------------------
82617_mm_01.pdf 82617_01.pdf
456546_khkjh_89kjh_67_01.pdf 456546_01.pdf
19998_MAINE_07-31-2010_11-05-59_AM.pdf 19998_MAINE_07-31-2010_11-05-59_AM.pdf
5485325_OREGON_01.pdf 5485325_01.pdf
To display the good or bad data, the REGEXP_LIKE function will come in handy:
SQL> with t1(col) as(
2 select '826170_01.pdf' from dual union all
3 select '456546_01.pdf' from dual union all
4 select '19998_MAINE_07-31-2010_11-05-59_AM.pdf' from dual union all
5 select '5485325_OREGON_01.pdf' from dual
6 )
7 select col bad_data
8 from t1
9 where not regexp_like(col, '^[0-9]+_\d{2}\.pdf$');
BAD_DATA
--------------------------------------
19998_MAINE_07-31-2010_11-05-59_AM.pdf
5485325_OREGON_01.pdf
SQL> with t1(col) as(
2 select '826170_01.pdf' from dual union all
3 select '456546_01.pdf' from dual union all
4 select '19998_MAINE_07-31-2010_11-05-59_AM.pdf' from dual union all
5 select '5485325_OREGON_01.pdf' from dual
6 )
7 select col good_data
8 from t1
9 where regexp_like(col, '^[0-9]+_\d{2}\.pdf$');
GOOD_DATA
--------------------------------------
826170_01.pdf
456546_01.pdf
To that end, your UPDATE statement might look like this:
update your_table
set col = regexp_replace(col, '^([0-9]+)_(.*)_(\d{2}\.pdf)$', '\1_\3');
--where clause if needed
I have to process a large table (2.5B records) row by row in order to keep track of two variables. As one can imagine, this is quite slow. I am looking for ideas on how to tune this procedure. Thank you.
declare
cursor c_data is select /*+ index(data data_pk) */ * from data order by data_id;
r_data c_data%ROWTYPE;
lst_b_prc number(15,8);
lst_a_prc number(15,8);
begin
open c_data;
loop
fetch c_data into r_data;
exit when c_data%NOTFOUND;
if r_data.BATS = 'B' then
lst_b_prc := r_data.PRC;
end if;
if r_data.BATS = 'A' then
lst_a_prc := r_data.PRC;
end if;
if r_data.BATS = 'T' then
insert into trans .... lst_a_prc , lst_b_prc
end if;
end loop;
close c_data;
end;
The issue really comes down to finding efficient sql to track the latest PRC value when BATS='A' and BATS='B' for each BATS='T' record.
If I understand your problem correctly, with a table of data like this:
create table data as
select 1 data_id, 'T' bats, 1 prc from dual union all
select 2 data_id, 'A' bats, 2 prc from dual union all
select 3 data_id, 'B' bats, 3 prc from dual union all
select 4 data_id, 'T' bats, 4 prc from dual union all
select 5 data_id, 'A' bats, 5 prc from dual union all
select 6 data_id, 'T' bats, 6 prc from dual union all
select 7 data_id, 'B' bats, 7 prc from dual union all
select 8 data_id, 'T' bats, 8 prc from dual union all
select 9 data_id, 'T' bats, 9 prc from dual;
You want to insert one row for each T, using the last PRC value for A and B, which would look something like this:
T data_id  Last A  Last B
---------  ------  ------
        1    null    null
        4       2       3
        6       5       3
        8       5       7
        9       5       7
This query should work:
select data_id, last_A, last_B
from
(
select data_id, bats, prc
-- last_value ... ignore nulls carries the most recent A/B price forward;
-- max() would only be correct while prices happen to keep increasing
,last_value(case when bats = 'A' then prc end ignore nulls) over
     (order by data_id
      rows between unbounded preceding and current row) last_A
,last_value(case when bats = 'B' then prc end ignore nulls) over
     (order by data_id
      rows between unbounded preceding and current row) last_B
from data
)
where bats = 'T';
With so much data, you'll probably want to use direct path writes and parallelism.
The performance will largely depend on whether the sorting for the analytic functions can be done in memory or on disk. Optimizing memory can be very difficult, you'll probably need to work with a DBA to allow your process to use as much memory as possible without causing problems for other processes.
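For instance, a direct-path, parallel version of the load might look like the following sketch (the trans column names here are assumed, since the original INSERT is elided):

alter session enable parallel dml;

insert /*+ append parallel(trans 4) */ into trans (data_id, lst_a_prc, lst_b_prc)
select data_id, last_A, last_B
from (
  select data_id, bats,
         last_value(case when bats = 'A' then prc end ignore nulls) over
             (order by data_id
              rows between unbounded preceding and current row) last_A,
         last_value(case when bats = 'B' then prc end ignore nulls) over
             (order by data_id
              rows between unbounded preceding and current row) last_B
  from data
)
where bats = 'T';

commit;  -- a direct-path insert must be committed before the table can be re-read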
There are several options. Most importantly, you're probably generating a huge amount of UNDO/REDO for all your inserts. You could commit your work periodically, say every 1000 inserts.
Another option is to use a SQL MERGE statement (or a simpler INSERT ... SELECT), which lets your Oracle instance operate on sets rather than single records. The execution plan of the SELECT can then be optimised for INSERT performance.
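A sketch of the periodic-commit variant of the original loop (the trans column names are assumed; be aware that committing across an open cursor can expose a long-running query to ORA-01555):

declare
  lst_b_prc data.prc%type;
  lst_a_prc data.prc%type;
  v_rows    pls_integer := 0;
begin
  for r_data in (select /*+ index(data data_pk) */ * from data order by data_id)
  loop
    if r_data.bats = 'B' then
      lst_b_prc := r_data.prc;
    elsif r_data.bats = 'A' then
      lst_a_prc := r_data.prc;
    elsif r_data.bats = 'T' then
      insert into trans (data_id, lst_a_prc, lst_b_prc)  -- assumed columns
      values (r_data.data_id, lst_a_prc, lst_b_prc);
      v_rows := v_rows + 1;
      if mod(v_rows, 1000) = 0 then
        commit;  -- cap undo by committing every 1000 inserts
      end if;
    end if;
  end loop;
  commit;  -- final partial batch
end;
/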
I wanted to see what some suggested approaches would be to validate a field that is stored as a CSV against a table containing the appropriate values. Although it would be desirable, it is NOT an option to split the CSV list into a separate related table. In the example data below, I would be trying to capture the code 99 for widget D.
Below is an example data representation.
Table: Widgets
WidgetName WidgetCodeList
A 1, 2, 3
B 1
C 2, 3
D 99
Table: WidgetCodes
WidgetCode
1
2
3
An earlier approach was to query the CSV column as rows using various string manipulations and CONNECT_BY_LEVEL; however, the performance was not acceptable.
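For reference, that CONNECT_BY_LEVEL split typically looks something like the following (a reconstruction, not the original code; REGEXP_COUNT requires 11g or later):

select w.WidgetName,
       trim(regexp_substr(w.WidgetCodeList, '[^,]+', 1, level)) as WidgetCode
from   Widgets w
connect by level <= regexp_count(w.WidgetCodeList, '[^,]+')
       and prior w.WidgetName = w.WidgetName
       and prior sys_guid() is not null;  -- prevents the rows from cross-multiplying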
You could try a pipelined function (here unnested via CROSS JOIN TABLE(...)):
SQL> WITH widgets AS (
2 SELECT 'A' WidgetName, '1, 2, 3' WidgetCodeList FROM dual
3 UNION ALL SELECT 'B', '1' FROM DUAL
4 UNION ALL SELECT 'C', '2, 3' FROM DUAL
5 UNION ALL SELECT 'D', '99' FROM DUAL
6 ), widgetcodes AS (
7 SELECT ROWNUM widgetcode from dual CONNECT BY LEVEL <= 3
8 )
9 SELECT w.widgetname,
10 to_number(s.column_value) missing_widget
11 FROM widgets w
12 CROSS JOIN TABLE(demo_pkg.string_to_tab(w.WidgetCodeList)) s
13 WHERE NOT EXISTS (SELECT NULL
14 FROM widgetcodes ws
15 WHERE ws.widgetcode = to_number(s.column_value));
WIDGETNAME MISSING_WIDGET
---------- --------------
D 99
See this other SO for an example of a pipelined function that converts a character string to a table.
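Since that example is external, a minimal sketch of such a function is shown below (the name matches the demo_pkg.string_to_tab call above; it assumes 12c+, where a package-level collection type can be queried with TABLE(), and 11g+ for REGEXP_COUNT; on older versions, declare the type at schema level):

create or replace package demo_pkg as
  type string_tab is table of varchar2(4000);
  function string_to_tab (p_list in varchar2) return string_tab pipelined;
end demo_pkg;
/
create or replace package body demo_pkg as
  function string_to_tab (p_list in varchar2) return string_tab pipelined
  is
  begin
    -- emit one trimmed token per comma-separated element
    for i in 1 .. regexp_count(p_list, '[^,]+') loop
      pipe row (trim(regexp_substr(p_list, '[^,]+', 1, i)));
    end loop;
    return;
  end string_to_tab;
end demo_pkg;
/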
In PL/SQL, you could make use of the Apex utility for converting a delimited string to a PL/SQL collection like this:
procedure validate_csv (p_csv varchar2)
is
v_array apex_application_global.vc_arr2;
v_dummy varchar2(1);
begin
v_array := apex_util.string_to_table(p_csv, ', ');
for i in 1..v_array.count
loop
begin
select null
into v_dummy
from widgetcodes
where widgetcode = v_array(i);
exception
when no_data_found then
raise_application_error(-20001, 'Invalid widget code: '||v_array(i));
end;
end loop;
end;
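A quick smoke test (hypothetical values; 99 is absent from WidgetCodes, so the second call should raise the error):

begin
  validate_csv('1, 2, 3');  -- every code exists: completes silently
  validate_csv('1, 99');    -- raises ORA-20001: Invalid widget code: 99
end;
/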