I am trying to update a column whose datatype is NUMBER. Per the requirement, valid records should have this column set to 000. I have put this in the ELSE branch of the CASE expression, but when the table is updated it stores only 0, not 000. Please suggest: how can I make it 000?
MERGE INTO mem_src_extn t USING
(
  SELECT mse.rowid row_id,
         CASE WHEN mse.type_value IS NULL OR mse."TYPE" IS NULL OR mse.VALUE_1 IS NULL or mse.VALUE_2 IS NULL THEN 100
              WHEN ( SELECT count(*) FROM cmc_mem_src cms WHERE cms.tn_id = mse.type_value ) = 0 THEN 222
              WHEN count(mse.value_1) over ( partition by type_value ) > 1 THEN 333
              ELSE 000 -- <-- here
         END int_value_1
  FROM mem_src_extn mse
) u
ON ( t.rowid = u.row_id )
WHEN MATCHED THEN UPDATE SET t.int_value_1 = u.int_value_1
If your column int_value_1 is VARCHAR2, use quotes:
MERGE INTO mem_src_extn t USING
(
  SELECT mse.rowid row_id,
         CASE WHEN mse.type_value IS NULL OR mse."TYPE" IS NULL OR mse.VALUE_1 IS NULL or mse.VALUE_2 IS NULL THEN '100'
              WHEN ( SELECT count(*) FROM cmc_mem_src cms WHERE cms.tn_id = mse.type_value ) = 0 THEN '222'
              WHEN count(mse.value_1) over ( partition by type_value ) > 1 THEN '333'
              ELSE '000'
         END int_value_1
  FROM mem_src_extn mse
) u
ON ( t.rowid = u.row_id )
WHEN MATCHED THEN UPDATE SET t.int_value_1 = u.int_value_1
But if you have a NUMBER, as you say, and just want to see 000 instead of 0, you can use TO_CHAR with a format mask:
select to_char(int_value_1,'099') from mem_src_extn;
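For example, a quick check against DUAL (note that the mask reserves a leading position for the sign, which the FM modifier suppresses):
SELECT TO_CHAR(0, '099')   FROM DUAL;  -- ' 000' (leading blank for the sign)
SELECT TO_CHAR(0, 'FM099') FROM DUAL;  -- '000'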
I need to pass a substring of a database column into a variable.
Originally I did this, before realising that it could return more than one row. In the case of it returning more than one row, I need to make sure I only return the most recent value (the row with the most recent date).
SELECT SUBSTR ( description, 1, INSTR ( description, ' ' ) - 1 )
  INTO v_value
  FROM sdok s
 WHERE s.type = 2
   AND s.case_no = in_object.case_no;
Which is why I attempted this:
SELECT *
  FROM (SELECT SUBSTR ( description, 1, INSTR ( description, ' ' ) - 1 )
             , s.case_no
             , s.date
          FROM sdok s
         WHERE s.type = 2
           AND NVL ( s.deleted, 'N' ) <> 'J'
         ORDER BY s.date)
 WHERE ROWNUM = 1
   AND s.case_no = in_object.case_no;
This returns 3 columns of data, which lets me check that I have the right case and date, but I really only need the value of the first column (the substring) to pass into my variable. I tried putting INTO after the SUBSTR, before s.case_no and s.date, but that doesn't work. Yet I need the date ordering (to get the most recent date) and the right case_no before I can take the first row. I'm guessing there is another way to filter on case_no and order by date, so that I only return the value of the substring?
Help please?
This is how I understood the question:
SELECT x.a_substring
INTO local_variable
FROM (SELECT SUBSTR (description
,1
,INSTR (description, ' ' ) - 1
) a_substring
--
,s.case_no
,s.date
FROM sdok s
WHERE s.type = 2
AND NVL ( s.deleted, 'N' ) <> 'J'
AND s.case_no = 'xxxxxxxxx'
         ORDER BY s.date DESC
       ) x
 WHERE ROWNUM = 1;
You can do this much more easily using FETCH FIRST 1 ROWS ONLY (Oracle 12c and later) or the aggregate function max(str) keep (dense_rank first order by s.date desc):
SELECT
SUBSTR (description
,1
,INSTR (description, ' ' ) - 1
) a_substring INTO local_variable
FROM sdok s
WHERE s.type = 2
AND NVL ( s.deleted, 'N' ) <> 'J'
AND s.case_no = 'xxxxxxxxx'
ORDER BY s.date DESC
fetch first 1 rows only
/
SELECT max(
SUBSTR (description
,1
,INSTR (description, ' ' ) - 1
)
) keep (dense_rank first order by s.date desc) a_substring
INTO local_variable
FROM sdok s
WHERE s.type = 2
AND NVL ( s.deleted, 'N' ) <> 'J'
AND s.case_no = 'xxxxxxxxx'
/
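For completeness, a minimal PL/SQL harness around the aggregate form might look like this (the variable's type is an assumption; table and column names are taken from the question):
DECLARE
  local_variable VARCHAR2(4000);  -- assumed type, adjust to the real column
BEGIN
  SELECT max(SUBSTR (description, 1, INSTR (description, ' ') - 1))
           keep (dense_rank first order by s.date desc)
    INTO local_variable
    FROM sdok s
   WHERE s.type = 2
     AND NVL ( s.deleted, 'N' ) <> 'J'
     AND s.case_no = 'xxxxxxxxx';
END;
/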
For every row of my data set, there is data for only one of the two calculation options; the other columns are NULL.
My goal is to find the simplest way to select the non-null calculation result for each row. Expected result:
ROW_NUM result
-------- -------
1 4.5
2 4.56
My code:
With DATASET AS (
-- column 1 is just the row number,
-- columns 2 and 3 are for calculation option 1,
-- columns 4~6 are for calculation option 2
SELECT 1 ROW_NUM, NULL time1, NULL qty1, 2 time2_1, 2.5 time2_2, 1 qty2
FROM DUAL
UNION
SELECT 2 ROW_NUM, 4.56 time1, 1 qty1, NULL time2_1, NULL time2_2, NULL qty2
FROM DUAL
)
SELECT ROW_NUM, time1/qty1 OPTION1, (time2_1+time2_2)/qty2 OPTION2
FROM DATASET;
Result:
ROW_NUM OPTION1 OPTION2
-------- ------- ---------
1 4.5
2 4.56
You can use DECODE to substitute the other calculation when the first one is null:
SELECT ROW_NUM, decode(time1/qty1,null,(time2_1+time2_2)/qty2,time1/qty1) result FROM DATASET;
Or NVL:
SELECT ROW_NUM, nvl(time1/qty1,(time2_1+time2_2)/qty2) result FROM DATASET;
NVL lets you replace a null (returned as a blank) with another value in the results of a query.
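Note that NVL takes exactly two arguments. If you want an explicit not-null/null branch instead, NVL2 is the Oracle built-in for that (an alternative sketch, not required here):
-- NVL2(expr, value_if_expr_not_null, value_if_expr_null)
SELECT ROW_NUM, nvl2(time1, time1/qty1, (time2_1+time2_2)/qty2) result FROM DATASET;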
Or use the COALESCE function, as follows:
With DATASET AS (
--each row contain information for either option1 or 2
SELECT *
FROM
(
--column 1 is just the row number, columns 2 and 3 are for calculation option 1, columns 4~6 are for calculation option 2
SELECT 1 ROW_NUM, NULL time1, NULL qty1 , 2 time2_1 , 2.5 time2_2, 1 qty2 FROM DUAL
UNION
SELECT 2 ROW_NUM, 4.56 time1 , 1 qty1 , NULL time2_1 , NULL time2_2 , NULL qty2 FROM DUAL
)
)SELECT ROW_NUM, coalesce(time1/qty1,(time2_1+time2_2)/qty2) as result FROM DATASET;
db<>fiddle demo
Cheers!!
I have the following query and it takes 12 hours to execute in HUE. I would like to improve its performance. Please let me know what changes I can make to the query to speed it up in the HUE environment.
SELECT ordernum,
Min(distance) mindist,
Min(CASE
WHEN type_name = 'T'
OR ( type_name = 'I'
AND item LIKE '%D%' ) THEN distance
ELSE 9999999
END) min_t,
Min(CASE
WHEN type_name = 'A' THEN distance
ELSE 9999999
END) min_a
FROM (SELECT a.ordernum,
b.id,
b.type_name,
b.item,
Round(Least(Sqrt(Pow(b.sty-a.nrthng, 2)
+ Pow(b.stx-a.estng, 2)),
Sqrt(Pow(b.endy-a.nrthng, 2)
+ Pow(b.endx-a.estng, 2))))
distance
FROM temp_b a,
min_b1 b
WHERE ( ( b.stx BETWEEN ( a.estng - 1000 ) AND ( a.estng + 1000 )
          AND b.sty BETWEEN ( a.nrthng - 1000 ) AND
              ( a.nrthng + 1000 ) )
        OR ( b.endx BETWEEN ( a.estng - 1000 ) AND ( a.estng + 1000 )
             AND b.endy BETWEEN ( a.nrthng - 1000 ) AND
                 ( a.nrthng + 1000 ) ) )) a
GROUP BY ordernum
My concerns are about your query's join condition.
As I see it, you have tables a and b. Are there any key fields on which the tables could be matched? That is, a field f1 from table a with the same meaning as a field f2 from table b, so the two could be joined on equality.
You could also create a temporary table containing the needed information from both tables, to remove the overhead of network communication and data transfer, as I assume your Hadoop cluster contains more than a single node.
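As a rough sketch of that staging idea (the staged table names are hypothetical; the columns are the ones the join actually uses):
-- materialize only the columns the range join needs, so each node scans less data
CREATE TABLE temp_b_stage STORED AS PARQUET AS
SELECT ordernum, nrthng, estng FROM temp_b;
CREATE TABLE min_b1_stage STORED AS PARQUET AS
SELECT id, type_name, item, stx, sty, endx, endy FROM min_b1;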
I have a requirement to validate records present in a table. First we load the records into the table, and then we validate them using a SQL query. I am using the query below to update the status code, but processing 114000 records took around 7 hours. Is that acceptable? I am not sure why it's taking so much time. Please suggest a better approach so that I can minimize the time.
Query:
MERGE INTO mem_src_extn t USING
(
SELECT mse.rowid row_id,
CASE WHEN mse.type_value IS NULL OR mse."TYPE" IS NULL OR mse.VALUE_1 IS NULL or mse.VALUE_2 IS NULL THEN '100'
WHEN ( SELECT count(*) FROM cmc_mem_src cms WHERE cms.tn_id = mse.type_value ) = 0 THEN '222'
WHEN count(mse.value_1) over ( partition by type_value ) > 1 THEN '333'
ELSE '000' END int_value_1
FROM mem_src_extn mse
) u
ON ( t.rowid = u.row_id )
WHEN MATCHED THEN UPDATE SET t.int_value_1 = u.int_value_1
The performance problem is probably caused by the SELECT count(*) subquery, not the MERGE.
The MERGE uses a ROWID join, which should be about as fast as any join can get. But it's possible that Oracle is incorrectly optimizing the subquery. Try re-writing the statement with a LEFT JOIN instead of a correlated subquery:
MERGE INTO mem_src_extn t USING
(
SELECT DISTINCT
mse.rowid row_id,
CASE WHEN mse.type_value IS NULL OR mse."TYPE" IS NULL OR mse.VALUE_1 IS NULL or mse.VALUE_2 IS NULL THEN '100'
WHEN cms.tn_id IS NULL THEN '222'
WHEN count(mse.value_1) over ( partition by type_value ) > 1 THEN '333'
ELSE '000' END int_value_1
FROM mem_src_extn mse
LEFT JOIN cmc_mem_src cms
ON mse.type_value = cms.tn_id
) u
ON ( t.rowid = u.row_id )
WHEN MATCHED THEN UPDATE SET t.int_value_1 = u.int_value_1;
But this is only a guess. If I'm wrong, the next step would be to get a query plan. Run explain plan for merge ... and then select * from table(dbms_xplan.display); and post the entire output of that statement in the question.
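For example, a minimal sketch of that diagnostic step (substitute the full statement from above for the elided subquery):
EXPLAIN PLAN FOR
MERGE INTO mem_src_extn t USING
( /* the SELECT ... LEFT JOIN subquery shown above */ ) u
ON ( t.rowid = u.row_id )
WHEN MATCHED THEN UPDATE SET t.int_value_1 = u.int_value_1;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);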
I have one table: Questionmaster. It stores DisciplineId, QuestionId, QuestionText, etc.
Now my question is:
I need 10 records for one particular DisciplineId, 20 records for another DisciplineId, and 30 records for some other DisciplineId. What should I do for that? How can I combine the statements and get just 60 (10+20+30) rows selected?
For one discipline, it works as shown below:
create or replace function fun_trial (
    discipline1         in Questionmaster.DisciplineId%type,
    disc1_NoOfQuestions in number
) return sys_refcursor
is
    cur_out sys_refcursor;
begin
    open cur_out for
        select getguid() tmp,
               QuestionNo, QuestionText,
               Option1, Option2,
               Option3, Option4,
               Correctanswer, DisciplineId
          from Questionmaster
         where DisciplineId = discipline1
           and rownum <= disc1_NoOfQuestions
         order by tmp;
    return cur_out;
end;
The following query uses the analytic function RANK() to order the questions within each discipline. The outer query then selects the first ten, first twenty and first thirty questions for disciplines 1, 2 and 3 respectively.
select * from (
select getguid() tmp
, QuestionNo
, QuestionText
, Option1
, Option2
, Option3
, Option4
, Correctanswer
, Disciplineid
, rank () over (partition by Disciplineid order by QuestionNo ) as rn
from Questionmaster
where DisciplineId in (1, 2, 3)
)
where ( DisciplineId = 1 and rn <= 10 )
or ( DisciplineId = 2 and rn <= 20 )
or ( DisciplineId = 3 and rn <= 30 )
/
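One caveat: RANK() assigns equal ranks to ties on QuestionNo, so a discipline could return more rows than requested. If exact counts matter, a variant with ROW_NUMBER(), which breaks ties arbitrarily, guarantees them (same query shape, only the analytic function changes):
select * from (
select q.*
     , row_number () over (partition by Disciplineid order by QuestionNo) as rn
  from Questionmaster q
 where DisciplineId in (1, 2, 3)
)
where ( DisciplineId = 1 and rn <= 10 )
   or ( DisciplineId = 2 and rn <= 20 )
   or ( DisciplineId = 3 and rn <= 30 )
/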