I have written a query to identify the duplicate records, as below.
WITH DUPS AS
(SELECT A_SURVEYID,
CAST(e_responsedate AS DATE) AS e_responsedate,
E_LG_VM_SURVEY_TYPE_ENUM
FROM TRANSIENT..INTERIM_NPS_SURVEY_MOBILE_RESULTS_20170909 a
GROUP BY A_SURVEYID,
CAST(e_responsedate AS DATE),
E_LG_VM_SURVEY_TYPE_ENUM
HAVING COUNT(*) > 1
),
RANKED AS
(SELECT R.DRS_RECORD_ID,
R.A_SURVEYID,
R.e_responsedate ,
ROW_NUMBER() OVER ( PARTITION BY R.A_SURVEYID, CAST(R.e_responsedate AS DATE),
R.E_LG_VM_SURVEY_TYPE_ENUM ORDER BY SUBSTR(R.DRS_RECORD_ID, INSTR(':', R.DRS_RECORD_ID, 37) + 1, 14) DESC,
SUBSTR(R.DRS_RECORD_ID, INSTR(':', R.DRS_RECORD_ID, 32) + 1, 4) ASC ) AS DR
FROM TRANSIENT..INTERIM_NPS_SURVEY_MOBILE_RESULTS_20170909 R
INNER JOIN DUPS
ON R.A_SURVEYID = DUPS.A_SURVEYID
AND CAST(R.e_responsedate AS DATE) = DUPS.e_responsedate
AND R.E_LG_VM_SURVEY_TYPE_ENUM = DUPS.E_LG_VM_SURVEY_TYPE_ENUM
)
SELECT *
FROM TRANSIENT..INTERIM_NPS_SURVEY_MOBILE_RESULTS_20170909 F
INNER JOIN RANKED
ON F.DRS_RECORD_ID = RANKED.DRS_RECORD_ID
WHERE RANKED.DR > 1
--
Using the above query I am able to retrieve the records (some 6,000+), but I am unable to delete them.
Can you please help me with this?
Regards,
Krish
You are very close. In the last 5 lines, do this instead:
DELETE FROM TRANSIENT..INTERIM_NPS_SURVEY_MOBILE_RESULTS_20170909
WHERE DRS_RECORD_ID IN (
    SELECT DRS_RECORD_ID FROM RANKED WHERE DR > 1)
Should work :)
BTW: I'm pretty sure this can be done with less code, but that is not important...
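For what it's worth, a sketch of that shorter form (assuming your database accepts a WITH clause on a DELETE, as the fix above already implies): the DUPS CTE can be dropped, because a group with only one row never produces DR > 1 anyway. The ORDER BY inside ROW_NUMBER() decides which duplicate survives, so keep your original SUBSTR/INSTR expression there if that tie-break matters; a plain ORDER BY is used here only to keep the sketch short.
WITH RANKED AS (
    SELECT DRS_RECORD_ID,
           ROW_NUMBER() OVER (
               PARTITION BY A_SURVEYID,
                            CAST(e_responsedate AS DATE),
                            E_LG_VM_SURVEY_TYPE_ENUM
               ORDER BY DRS_RECORD_ID   -- replace with your SUBSTR/INSTR tie-break if needed
           ) AS DR
    FROM TRANSIENT..INTERIM_NPS_SURVEY_MOBILE_RESULTS_20170909
)
DELETE FROM TRANSIENT..INTERIM_NPS_SURVEY_MOBILE_RESULTS_20170909
WHERE DRS_RECORD_ID IN (SELECT DRS_RECORD_ID FROM RANKED WHERE DR > 1);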
I want a running balance in my query. I have written the query below; maybe I am making a mistake somewhere, please let me know.
SELECT
ACC_VMAST.VM_DATE,
ACC_VDET.CHEQUE,
ACC_VMAST.NARRATION,
ACC_VDET.DEBIT,
ACC_VDET.CREDIT,
sum(nvl(ACC_VDET.DEBIT,0) - nvl(ACC_VDET.CREDIT,0) )
over (order by ACC_VMAST.VM_DATE , ACC_VDET.DEBIT ) running_bal
FROM ACC_VMAST,
ACC_VDET,
ACC_COA
WHERE ACC_VMAST.VM_PK=ACC_VDET.VM_PK
AND ACC_COA.COA_PK=ACC_VDET.COA_PK
AND ACC_VMAST.POST_BY IS NOT NULL
AND ACC_VMAST.CANCEL_STATUS IS NULL
AND ACC_VMAST.VM_DATE BETWEEN '07/06/2021' AND '07/07/2021'
AND ACC_VDET.COA_PK= '303'
ORDER BY ACC_VMAST.VM_DATE , ACC_VDET.DEBIT;
If you have rows with the same values for the ORDER BY clause, then when you SUM the values all the rows with the same ORDER BY value will be grouped together and totalled.
To prevent that, you can add the ROWNUM pseudo-column to the ORDER BY clause of the analytic function so that there will not be any ties:
SELECT m.VM_DATE,
d.CHEQUE,
m.NARRATION,
d.DEBIT,
d.CREDIT,
SUM( COALESCE(d.DEBIT,0) - COALESCE(d.CREDIT,0) )
OVER ( ORDER BY m.VM_DATE, d.DEBIT, ROWNUM ) AS running_bal
FROM ACC_VMAST m
INNER JOIN ACC_VDET d
ON (m.VM_PK = d.VM_PK)
INNER JOIN ACC_COA c
ON (c.COA_PK = d.COA_PK)
WHERE m.POST_BY IS NOT NULL
AND m.CANCEL_STATUS IS NULL
AND m.VM_DATE BETWEEN DATE '2021-07-06' AND DATE '2021-07-07'
AND d.COA_PK = '303'
ORDER BY
m.VM_DATE,
d.DEBIT;
You need to add a window frame of "rows between unbounded preceding and current row".
sum(nvl(ACC_VDET.DEBIT,0) - nvl(ACC_VDET.CREDIT,0) )
over (order by ACC_VMAST.VM_DATE , ACC_VDET.DEBIT rows between unbounded preceding and current row ) running_bal
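The reason the frame matters: the default frame for an ordered analytic SUM is RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW, which includes every row that ties on the ORDER BY keys, whereas a ROWS frame stops at the current row. A self-contained illustration with made-up values (not your tables):
-- Two rows tie on ord = 1. With the default RANGE frame both tied rows show 30;
-- with the ROWS frame the running total advances row by row
-- (e.g. 10, 30, 60, depending on which tied row happens to come first).
WITH t AS (
    SELECT 1 AS ord, 10 AS amt FROM dual UNION ALL
    SELECT 1, 20 FROM dual UNION ALL
    SELECT 2, 30 FROM dual
)
SELECT ord, amt,
       SUM(amt) OVER (ORDER BY ord) AS range_default,
       SUM(amt) OVER (ORDER BY ord
                      ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS rows_frame
FROM t;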
I am using a CTX (Oracle Text) index on a column.
I have two tables:
xyz has 6 million records.
abc has 100k records.
What I want to achieve is this.
Scenario - I have to UNION them, then I need the data ordered by date, and then I need to paginate the records.
select page_loc, blurb_id, article_id, hit_date,val_rank from (
SELECT REGEXP_REPLACE(regexp_replace(page_loc, '^.*mercer\.com'),'\?.*$') AS page_loc,
blurb_id, article_id, hit_date,RANK() OVER (ORDER BY hit_date DESC) AS val_rank
from (
SELECT page_loc, blurb_id, article_id, hit_date
FROM abc u INNER JOIN person p ON u.person_id=p.person_id
WHERE (CONTAINS(u.page_loc,v_pattern) > 0 )
UNION ALL
SELECT page_loc, blurb_id, article_id, hit_date
FROM xyz u INNER JOIN person p ON u.person_id=p.person_id
WHERE ( CONTAINS(u.page_loc,v_pattern) > 0)
) p
left join
company c on c.company_id = p.company_id
) a
where val_rank between (v_page_no -1) * 10 and v_page_no * 10
v_pattern --> search text
v_page_no --> page number
It takes a long time when a v_pattern matches many records, e.g. 100k; for example, a word that matches 2 million records takes more than 20 minutes.
Any idea what I can do, or is it not possible with Oracle?
NOTE: when I analysed it, I found the ORDER BY and pagination are making it slow. How can I overcome that?
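One commonly suggested shape, offered only as a sketch (Oracle 12c+ for the OFFSET/FETCH clause; v_pattern, v_page_no and the column names are taken from the question, and the company join is omitted because nothing is selected from it): push the pagination into an ORDER BY ... OFFSET/FETCH so the optimizer can attempt a top-N sort instead of ranking every match with RANK() and filtering afterwards. The CONTAINS scans themselves still take time, so this may only help the sort/pagination part.
SELECT REGEXP_REPLACE(REGEXP_REPLACE(page_loc, '^.*mercer\.com'), '\?.*$') AS page_loc,
       blurb_id, article_id, hit_date
FROM (
    SELECT page_loc, blurb_id, article_id, hit_date
    FROM abc u INNER JOIN person p ON u.person_id = p.person_id
    WHERE CONTAINS(u.page_loc, v_pattern) > 0
    UNION ALL
    SELECT page_loc, blurb_id, article_id, hit_date
    FROM xyz u INNER JOIN person p ON u.person_id = p.person_id
    WHERE CONTAINS(u.page_loc, v_pattern) > 0
)
ORDER BY hit_date DESC
OFFSET (v_page_no - 1) * 10 ROWS FETCH NEXT 10 ROWS ONLY;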
Refer to table WORKQUEUELOG with columns (ID, QueueTypeId, CreateDateTime, WorkId, InQ, OutQ).
There is an index available on (QueueTypeId, CreateDateTime, Id).
When trying to read the first X records with the following query, it goes for a full table scan.
OPEN a_CursorHandle FOR
SELECT * FROM (
SELECT * FROM WORKQUEUELOG tbl
WHERE EX_CLS_WQLOG.FILTER(
tbl.QueueTypeId,
tbl.CreateDateTime,
NULL) = 1
AND EX_CLS_WQLOG.VALIDATION(
tbl.ID ,
tbl.WorkId ,
tbl.InQ ) = 1
ORDER BY tbl.QueueTypeId ASC,tbl.CreateDateTime ASC,tbl.Id ASC)
WHERE ROWNUM <= a_MaxCount;
We changed the query as follows and it started using the index for reading,
i.e. the VALIDATION function first and the FILTER function second.
OPEN a_CursorHandle FOR
SELECT * FROM (
SELECT * FROM WORKQUEUELOG tbl
WHERE EX_CLS_WQLOG.VALIDATION(
tbl.ID ,
tbl.WorkId,
tbl.InQ ) = 1
AND EX_CLS_WQLOG.FILTER(
tbl.QueueTypeId,
tbl.CreateDateTime,
NULL) = 1
ORDER BY tbl.QueueTypeId ASC,tbl.CreateDateTime ASC,tbl.Id ASC)
WHERE ROWNUM <= a_MaxCount;
What is the order in which Oracle processes the predicates of a SQL statement: left to right or right to left?
From the above example it seems to be right to left. Is this consistent behaviour? Is there any other efficient way to write this query?
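For what it's worth, Oracle does not guarantee any evaluation order for WHERE predicates; the optimizer reorders them by estimated cost, so relying on their textual order is fragile. One option worth testing (a sketch, not a guaranteed fix) is to ask explicitly for a first-rows plan over the index instead of depending on predicate order; the FIRST_ROWS(100) value is an arbitrary placeholder:
OPEN a_CursorHandle FOR
SELECT * FROM (
    SELECT /*+ FIRST_ROWS(100) INDEX(tbl) */ tbl.*
    FROM WORKQUEUELOG tbl
    WHERE EX_CLS_WQLOG.VALIDATION(tbl.ID, tbl.WorkId, tbl.InQ) = 1
      AND EX_CLS_WQLOG.FILTER(tbl.QueueTypeId, tbl.CreateDateTime, NULL) = 1
    ORDER BY tbl.QueueTypeId ASC, tbl.CreateDateTime ASC, tbl.Id ASC)
WHERE ROWNUM <= a_MaxCount;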
I have a query that should count the total number of rows returned depending on a column value. For example:
As you can see, the M field should display the total number of rows returned, which should be 5 because the FT_LOT values are all the same. Here is the query that I have so far:
SELECT DISTINCT
VBATCH_ID, MAXIM_PN, BAGNUMBER, FT_LOT
, m
, level as n
FROM
(
SELECT
VBATCH_ID, MAXIM_PN, BAGNUMBER, FT_LOT, QTY, DC, PRINTDATE, WS_GREEN, WS_PNR, WS_PCN, MSL, BAKETIME, EXPTIME
, una
, dulo
, (dulo - una) + 1 AS m
FROM
(
SELECT c.containername VBATCH_ID
,pb.productname MAXIM_PN
,bn.wipdatavalue BAGNUMBER
,ln.wipdatavalue FT_LOT
,aw.wipdatavalue QTY
,DECODE(ln.wipdatavalue,la.attr_081,la.attr_083
,la.attr_085,la.attr_087
,la.attr_089,la.attr_091
,la.attr_093,la.attr_095
,la.attr_097,la.attr_099
,la.attr_101,la.attr_103
,la.attr_105,la.attr_107
,la.attr_109,la.attr_111
,la.attr_113,la.attr_116
,la.attr_117,la.attr_119
) DC
,TO_CHAR(SYSDATE,'MM/DD/YYYY HH:MI:SS PM') PRINTDATE
,DECODE(UPPER(la.attr_158),'GREEN','HF',NULL) WS_GREEN
,DECODE(la.attr_140,NULL,NULL,'PNR') WS_PNR
,DECODE(la.Attr_080,NULL,NULL,'PCN') WS_PCN
,p.attr_011 MSL
,P.attr_013 BAKETIME
,p.attr_014 EXPTIME
, CASE
WHEN INSTR(bn.wipdatavalue, '-') = 0 THEN
bn.wipdatavalue
ELSE
SUBSTR(bn.wipdatavalue, 1, INSTR(bn.wipdatavalue, '-')-1)
END AS una
, CASE
WHEN INSTR(bn.wipdatavalue, '-') = 0 THEN
bn.wipdatavalue
ELSE
SUBSTR(bn.wipdatavalue, INSTR(bn.wipdatavalue, '-') + 1)
END AS dulo
FROM Container C
JOIN a_lotattributes la ON c.lotattributesid = la.lotattributesid
JOIN product p ON c.productid=p.productid
JOIN productbase pb ON p.productbaseid=pb.productbaseid
JOIN a_adhocwipdatarecord a ON a.objectrefid=c.containerid
JOIN a_adhocwipdatarecorddetails bn ON a.adhocwipdatarecordid=bn.adhocwipdatarecordid AND bn.wipdatanamename ='TR_BAG_NUMBER'
LEFT JOIN a_adhocwipdatarecorddetails ln ON a.adhocwipdatarecordid=ln.adhocwipdatarecordid AND ln.wipdatanamename ='TR_FT_LOT NUMBER'
LEFT JOIN a_adhocwipdatarecorddetails aw ON A.adhocwipdatarecordid=aw.adhocwipdatarecordid AND aw.wipdatanamename ='TR_FT LOT QTY'
WHERE ln.wipdatavalue = :ftlot AND bn.wipdatavalue LIKE :wip
)
) WHERE level LIKE :n
CONNECT BY LEVEL <= m
ORDER BY BAGNUMBER
Thanks for helping out guys.
Actually GROUP BY is not the solution. Having looked again at your desired output I have realised that what you want is an analytic count.
Your posted query is a bit of a mess and, sorry, but I'm not prepared to invest time in it. This is the sort of structure you need:
select vbatch_id, maxim_pn, bagnumber, ft_lot
, count(*) over (partition by ft_lot) m
from whatever ...
Find out more.
Not sure why you need the DISTINCT. DISTINCT almost always indicates a failure to get the WHERE clause right.
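A tiny self-contained illustration of the analytic count, using made-up rows rather than your tables: every row stays in the output and each one carries the size of its FT_LOT group, which is exactly what an aggregate GROUP BY count would not give you.
WITH sample_rows AS (
    SELECT 'B1' AS vbatch_id, 'LOT-A' AS ft_lot FROM dual UNION ALL
    SELECT 'B2', 'LOT-A' FROM dual UNION ALL
    SELECT 'B3', 'LOT-B' FROM dual
)
SELECT vbatch_id,
       ft_lot,
       COUNT(*) OVER (PARTITION BY ft_lot) AS m   -- 2 for the LOT-A rows, 1 for LOT-B
FROM sample_rows;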
chaps and chapettes
Just a quick question. I need to return only one row from a stored proc., but no matter where I place the WHERE clause, I get errors. Can somebody take a look at the (cut-down due to sheer length) code and let me know where it should go, please?
SELECT **values**
INTO **variables**
FROM **table**
_WHERE ROWNUM = 1_
INNER JOIN **other table**
ON **join target**
ORDER BY **sort criteria**;
_WHERE ROWNUM = 1_
Thanks
I believe this is the way to structure rownum queries
SELECT *
INTO **variables**
FROM
( SELECT * FROM X
  WHERE Y
  ORDER BY Z
)
WHERE ROWNUM = 1;
You were almost correct. Put the WHERE clause after the JOINs, but before the ORDER BY.
SELECT **values**
INTO **variables**
FROM **table**
INNER JOIN **other table**
ON **join target**
_WHERE ROWNUM = 1_
ORDER BY **sort criteria**;
However, this won't do what you might think - the ORDER BY is evaluated AFTER the where clause; which means this will just pick the first record it finds (that satisfies the join criteria), and will then sort that row (which obviously is a no-op).
The other answers (e.g. IvoTops') give ideas of how to get the first record according to the sort criteria.
SELECT **values**
INTO **variables**
FROM
( SELECT **values**
, ROW_NUMBER() OVER (ORDER BY **sort criteria**) AS rn
FROM **table**
INNER JOIN **other table**
ON **join target**
) tmp
WHERE rn = 1 ;
Check also this blog post: Oracle: ROW_NUMBER() vs ROWNUM
A little bit late, but I had a similar problem and I solved it like this:
SELECT **values**
INTO **variables**
FROM **table**
WHERE **condition**
ORDER BY **sort criteria**
FETCH FIRST 1 ROW ONLY;
Regards