When I try to run the query below I'm getting this error:
ORA-00917: missing comma
00917. 00000 - "missing comma"
*Cause:
*Action: Error at Line: 26 Column: 22
Query:
select * from
(select 'india',po.team,pa.NAME,pa.userid,me.amount as RN
from
port po,
switch pa,
merchant me
where po.SEQNO=pa.SEQNO
and (pa.SEQNO=me.SEQNO and RN is not null))
PIVOT (
count(*) for RN in(RN<1,RN-20.01)
)
I want to display the percentage columns as <=1, between 10 and 20, and >20 by using the pivot logic.
Use a CASE expression in your initial sub-query:
select *
from (
select *
from (
select 'india',
po.team,
pa.NAME,
pa.userid,
CASE
WHEN me.amount <= 1 THEN '<=1'
WHEN me.amount BETWEEN 10 AND 20 THEN '10-20'
WHEN me.amount > 20 THEN '>20'
END as RN
from port po
INNER JOIN switch pa
ON ( po.SEQNO=pa.SEQNO )
INNER JOIN merchant me
ON ( pa.SEQNO=me.SEQNO )
)
where RN is not null
) PIVOT (
count(*) for RN in( '<=1', '10-20', '>20' )
);
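Note that pivoting on string values like these typically produces column names that include the quotes (so you'd have to reference them as "'<=1'" and so on). If you want cleaner headings you can alias each value in the IN list; a small variation of the PIVOT clause above (the alias names le_1, b10_20 and gt_20 are just examples):
PIVOT (
    count(*) for RN in ( '<=1' as le_1, '10-20' as b10_20, '>20' as gt_20 )
);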
I have this query where a user-defined function is used in the SELECT and GROUP BY clauses.
The inner select query without the WITH clause runs fine and doesn't give any error, but after adding the WITH clause it gives the following error -
ORA-00979: not a GROUP BY expression
00979. 00000 - "not a GROUP BY expression"
*Cause:
*Action: Error at Line: 3 Column: 29
I need the WITH clause to return only a subset of the entire result set based on input ranges.
Query is as follows:
WITH INFO AS (
SELECT
GET_EVAULATED_VALUE(T.C_IMP, T.IMP) AS IMPORTANCE,
count(*) AS NO_OF_PC_AFFECTED
FROM TABLE_NAME T
WHERE T.ACNT_REL_ID = 16
GROUP BY
(GET_EVAULATED_VALUE(T.C_IMP, T.IMP))
ORDER BY IMPORTANCE desc
)
SELECT * FROM
(
SELECT ROWNUM AS RN,
(SELECT COUNT(*) FROM INFO) COUNTS,
IMPORTANCE
FROM INFO
)
WHERE RN > 0 AND RN <= 10;
I am not sure how to use a CTE with GROUP BY on a user-defined function, but I realized that I can rewrite the query to remove the sub-query and CTE and make it simpler as follows (and it works):
select * from (
select a.*, ROWNUM rnum from
(SELECT
count(*) over() as COUNTS,
GET_EVAULATED_VALUE(T.C_IMP, T.IMP) AS IMPORTANCE,
count(*) AS NO_OF_PC_AFFECTED
FROM TABLE_NAME T
WHERE T.ACNT_RELATION_ID = 16
GROUP BY
(GET_EVAULATED_VALUE(T.C_IMP, T.IMP))
ORDER BY importance desc) a
where ROWNUM <= 10 )
where rnum >= 0;
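For what it's worth, another common way around ORA-00979 is to evaluate the function once in an inline view with a column alias, and group by that alias inside the CTE; a sketch along those lines, using the same table and function as above (untested against your data):
WITH INFO AS (
    SELECT IMPORTANCE,
           COUNT(*) AS NO_OF_PC_AFFECTED
    FROM (
        SELECT GET_EVAULATED_VALUE(T.C_IMP, T.IMP) AS IMPORTANCE
        FROM TABLE_NAME T
        WHERE T.ACNT_REL_ID = 16
    )
    GROUP BY IMPORTANCE
)
SELECT * FROM INFO
ORDER BY IMPORTANCE DESC;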
Same issue here, I created a table "TABLE_CTE" instead of using a CTE and it worked.
CREATE TABLE TABLE_CTE
AS
SELECT
USER_DEFINED_FUNCTION(date_1) AS evaluated_value,  -- expressions in a CTAS need column aliases
COUNT(*) AS cnt
FROM
TABLE_NAME
GROUP BY
USER_DEFINED_FUNCTION(date_1)
;
SELECT * FROM TABLE_CTE;
I have the following setup, which works fine and generates output as expected.
I'm trying to add the locations subquery into the CTE so my output will have a random location_id for each row.
The subquery is straightforward and should work, but I am getting syntax errors when I try to place it into the 'data' CTE. I was hoping someone could help me out.
CREATE TABLE employees(
employee_id NUMBER(6),
emp_name VARCHAR2(30)
);
INSERT INTO employees(
employee_id,
emp_name
) VALUES
(1, 'John Doe');
INSERT INTO employees(
employee_id,
emp_name
) VALUES
(2, 'Jane Smith');
INSERT INTO employees(
employee_id,
emp_name
) VALUES
(3, 'Mike Jones');
CREATE TABLE locations AS
SELECT level AS location_id,
'Door ' || level AS location_name
FROM dual
CONNECT BY level <=
with rws as (
  select level rn from dual connect by level <= 5
),
data as (
  select e.*, round (dbms_random.value(1,5)) n from employees e
)
select employee_id,
emp_name,
trunc (sysdate) + dbms_random.value (0, 5) AS random_date
from rws
join data d on rn <= n
order by employee_id;
-- trying to make this work
with rws as ( select level rn from dual connect by level <= 5 ),
data as ( select e.*, loc.location_id = (
select location_id
from locations order by dbms_random.value()
fetch first 1 row only
),
round (dbms_random.value(1,5)
) n from employees e )
select employee_id,
emp_name,
trunc (sysdate) + dbms_random.value (0, 5) AS random_date
from rws
join data d on rn <= n
order by employee_id;
You need to alias the subquery column expression, rather than trying to assign it to a variable name. So instead of this:
with rws as ( select level rn from dual connect by level <= 5 ),
data as ( select e.*, loc.location_id = (
select location_id
from locations order by dbms_random.value()
fetch first 1 row only
),
round (dbms_random.value(1,5)
) n from employees e )
you would do this:
with rws as (
select level rn
from dual
connect by level <= 5
),
data as (
select e.*,
(
select location_id
from locations
order by dbms_random.value()
fetch first 1 row only
) as location_id,
round (dbms_random.value(1,5)) as n
from employees e
)
db<>fiddle
But yes, you'll get the same location_id for each row, which probably isn't what you want.
There are probably better ways to avoid it (or to approach whatever you're actually trying to achieve) but one option is to force the subquery to be correlated by adding something like:
where location_id != -1 * e.employee_id
db<>fiddle
although that might be expensive. It's probably worth asking a new question about that specific aspect.
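To show where that predicate would go, here is the data CTE from above with the correlated subquery in place (a sketch only; the != -1 * e.employee_id condition is just a device to stop the scalar subquery result being reused for every row):
with rws as (
  select level rn
  from dual
  connect by level <= 5
),
data as (
  select e.*,
         (
           select location_id
           from locations
           -- correlation on the outer employee forces re-evaluation per employee
           where location_id != -1 * e.employee_id
           order by dbms_random.value()
           fetch first 1 row only
         ) as location_id,
         round (dbms_random.value(1,5)) as n
  from employees e
)
select employee_id,
       emp_name,
       location_id,
       trunc (sysdate) + dbms_random.value (0, 5) AS random_date
from rws
join data d on rn <= n
order by employee_id;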
I am getting the same location_id for every employee_id, which I don't want either.
The subquery is in the wrong place then; move it to the main query, and correlate against both ID and n:
with rws as (
select level rn
from dual
connect by level <= 5
),
data as (
select e.*,
round (dbms_random.value(1,5)) as n
from employees e
)
select d.employee_id,
d.emp_name,
(
select location_id
from locations
where location_id != -1 * d.employee_id * d.n
order by dbms_random.value()
fetch first 1 row only
) as location_id,
trunc (sysdate) + dbms_random.value (0, 5) AS random_date
from rws r
join data d on r.rn <= d.n
order by d.employee_id;
db<>fiddle
Or move the location part to a new CTE, I suppose, with its own row number; and join that on one of your other generated values.
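A rough sketch of that last idea, assuming the locations table has at least as many rows as the generated rn values: give each location a random row number in its own CTE and join it on rn.
with rws as (
  select level rn
  from dual
  connect by level <= 5
),
locs as (
  select location_id,
         row_number() over (order by dbms_random.value()) as loc_rn
  from locations
),
data as (
  select e.*,
         round (dbms_random.value(1,5)) as n
  from employees e
)
select d.employee_id,
       d.emp_name,
       l.location_id,
       trunc (sysdate) + dbms_random.value (0, 5) AS random_date
from rws r
join data d on r.rn <= d.n
join locs l on l.loc_rn = r.rn
order by d.employee_id;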
I have been running the below query without issue:
with Nums (NN) as
(
select 0 as NN
from dual
union all
select NN+1 -- (1)
from Nums
where NN < 30
)
select null as errormsg, trunc(sysdate)-NN as the_date, count(id) as the_count
from Nums
left join
(
SELECT c1.id, trunc(c1.c_date) as c_date
FROM table1 c1
where c1.c_date > trunc(sysdate) - 30
UNION
SELECT c2.id, trunc(c2.c_date)
FROM table2 c2
where c2.c_date > trunc(sysdate) -30
) x1
on x1.c_date = trunc(sysdate)-Nums.NN
group by trunc(sysdate)-Nums.NN
However, when I try to pop this in a proc for SSRS use:
procedure pr_do_the_thing (RefCur out sys_refcursor)
is
oops varchar2(100);
begin
open RefCur for
-- see above query --
;
end pr_do_the_thing;
I get
Error(): PL/SQL: ORA-00932: inconsistent datatypes: expected NUMBER got -
Any thoughts? Like I said above, as a query there is no issue. As a proc, the error appears at note (1) in the query.
This seems to be bug 18139621 (see MOS Doc ID 2003626.1). There is a patch available, but if this is the only place you encounter this, it might be simpler to switch to a hierarchical query:
with Nums (NN) as
(
select level - 1
from dual
connect by level <= 31
)
...
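Inside the procedure that would look something like this (a sketch only; the body of the query is unchanged from your original):
procedure pr_do_the_thing (RefCur out sys_refcursor)
is
begin
  open RefCur for
    with Nums (NN) as
    (
      select level - 1
      from dual
      connect by level <= 31
    )
    select null as errormsg, trunc(sysdate)-NN as the_date, count(id) as the_count
    from Nums
    left join
    (
      SELECT c1.id, trunc(c1.c_date) as c_date
      FROM table1 c1
      where c1.c_date > trunc(sysdate) - 30
      UNION
      SELECT c2.id, trunc(c2.c_date)
      FROM table2 c2
      where c2.c_date > trunc(sysdate) - 30
    ) x1
    on x1.c_date = trunc(sysdate)-Nums.NN
    group by trunc(sysdate)-Nums.NN;
end pr_do_the_thing;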
You could also calculate the dates inside the CTE (which also fails with a recursive CTE):
with Dates (DD) as
(
select trunc(sysdate) - level + 1
from dual
connect by level <= 31
)
select null as errormsg, DD as the_date, count(id) as the_count
from Dates
left join
(
SELECT c1.id, trunc(c1.c_date) as c_date
FROM table1 c1
where c1.c_date > trunc(sysdate) - 30
UNION
SELECT c2.id, trunc(c2.c_date)
FROM table2 c2
where c2.c_date > trunc(sysdate) -30
) x1
on x1.c_date = DD
group by DD;
I'd probably organise it slightly differently, so the subquery doesn't limit the date range directly:
with dates (dd) as
(
select trunc(sysdate) - level + 1
from dual
connect by level <= 31
)
select errormsg, the_date, count(id) as the_count
from (
select null as errormsg, d.dd as the_date, c1.id
from dates d
left join table1 c1 on c1.c_date >= d.dd and c1.c_date < d.dd + 1
union all
select null as errormsg, d.dd as the_date, c2.id
from dates d
left join table2 c2 on c2.c_date >= d.dd and c2.c_date < d.dd + 1
)
group by errormsg, the_date;
but as always with these things, check the performance of each approach...
Also notice that I've switched from union to union all. If an ID could appear more than once on the same day, in the same table or across both tables, then the counts will be different - you need to decide whether you want to count them once or as many times as they appear. That applies to your original query too.
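If you want to see the effect in isolation, a tiny example against dual (made-up values): the first statement returns one row because UNION removes the duplicate (id, day) pair, the second returns two.
select 1 as id, trunc(sysdate) as c_date from dual
union
select 1, trunc(sysdate) from dual;

select 1 as id, trunc(sysdate) as c_date from dual
union all
select 1, trunc(sysdate) from dual;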
I'm trying to get a total count by using the UNION operator, but it gives the wrong count.
select count(*) as companyRatings from (
select count(*) hrs from (
select distinct hrs from companyA
)
union
select count(*) financehrs from (
select distinct finance_hrs from companyB
)
union
select count(*) hrids from (
select regexp_substr(hr_id,'[^/]+',1,3) hrid from companyZ
)
union
select count(*) cities from (
select regexp_substr(city,'[^/]+',1,3) city from companyY
)
);
The individual queries work fine, but the total count doesn't match.
Individual results: 12, 19, 3, 6
Present total count: 31
Actual total count: 40
So is there any alternative solution without the UNION operator?
To add values you'd use +. UNION is to add data sets.
select
(select count(distinct hrs) from companyA)
+
(select count(distinct finance_hrs) from companyB)
+
(select count(regexp_substr(hr_id,'[^/]+',1,3)) from companyZ)
+
(select count(regexp_substr(city,'[^/]+',1,3)) from companyY)
as total
from dual;
But I agree with juergen d; you should not have separate tables per company in the first place.
Edit: updated query using SUM
select sum(cnt) as companyRatings from
(
select count(*) as cnt from (select distinct hrs from companyA)
union all
select count(*) as cnt from (select distinct finance_hrs from companyB)
union all
select count(*) as cnt from (select regexp_substr(hr_id,'[^/]+',1,3) hrid from companyZ)
union all
select count(*) as cnt from (select regexp_substr(city,'[^/]+',1,3) city from companyY)
)
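The UNION ALL matters here: with plain UNION, two companies that happened to have the same count would be collapsed into a single row before the SUM. A contrived example against dual (made-up counts):
-- plain UNION collapses the duplicate value: returns 12
select sum(cnt) as total from (
  select 12 as cnt from dual
  union
  select 12 from dual
);

-- UNION ALL keeps both rows: returns 24
select sum(cnt) as total from (
  select 12 as cnt from dual
  union all
  select 12 from dual
);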
Previous answer:
Try this
SELECT (
SELECT count(*) hrs
FROM (
SELECT DISTINCT hrs
FROM companyA
)
)
+
(
SELECT count(*) financehrs
FROM (
SELECT DISTINCT finance_hrs
FROM companyB
)
)
+
(
SELECT count(*) hrids
FROM (
SELECT regexp_substr(hr_id, '[^/]+', 1, 3) hrid
FROM companyZ
)
)
+
(
SELECT count(*) cities
FROM (
SELECT regexp_substr(city, '[^/]+', 1, 3) city
FROM companyY
)
)
AS total_count
FROM dual
I've been stuck for hours trying to display the data as a weekly result, but I keep failing. I'm new to pivot.
Please help me.
Thank you!
Sample data I need to produce:
Data,03/10/2014,200,150,186,172
Data,03/16/2014,144,115,181,157
Data,03/22/2014,130,198,136,393
Here's the script:
select 'Data,'''
||(
select listagg(SIM_id,''',''') within group (order by 1)
from (
select distinct substr(dst_channel,1,14) as SIM_id
from table1
where dst_channel like 'SIP/item__-%'
)
)
from dual;
select 'Data'
||','||dtime_day
||','||g1_cnt
||','||g2_cnt
||','||g3_cnt
||','||g4_cnt
from (
select 'Data'
,to_char(d.dtime_day,'MM/dd/yyyy') as dtime_day
,substr(c.dst_channel,1,14) as dst_channel
from table2 d
left join table1 c
on c.call_date = d.dtime_day
where c.dst_channel like 'SIP/item%'
and c.status like 'ANSWERED%'
and d.dtime_day between trunc(sysdate, 'IW')-(12*7) and trunc(sysdate) -1
)
pivot(
count(dst_channel) as cnt
for dst_channel in (
'SIP/item01' as g1,'SIP/item02' as g2,'SIP/item03' as g3,'SIP/item04' as g4
)
)
order by dtime_day
Result of the query above:
Data,03/10/2014,147,103,86,76
Data,03/11/2014,144,115,81,57
Data,03/12/2014,130,98,59,39
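Since the output above is per day rather than per week, one option (a sketch only, assuming "weekly" means calendar weeks starting Monday, which may not match the exact week boundaries in your sample data) is to truncate the date to the ISO week in the inner query, so the pivot's implicit grouping rolls the counts up per week:
select 'Data'
    ||','||to_char(week_start,'MM/dd/yyyy')
    ||','||g1_cnt
    ||','||g2_cnt
    ||','||g3_cnt
    ||','||g4_cnt
from (
    select trunc(d.dtime_day,'IW') as week_start
    ,substr(c.dst_channel,1,14) as dst_channel
    from table2 d
    left join table1 c
    on c.call_date = d.dtime_day
    where c.dst_channel like 'SIP/item%'
    and c.status like 'ANSWERED%'
    and d.dtime_day between trunc(sysdate, 'IW')-(12*7) and trunc(sysdate) -1
)
pivot(
    count(dst_channel) as cnt
    for dst_channel in (
        'SIP/item01' as g1,'SIP/item02' as g2,'SIP/item03' as g3,'SIP/item04' as g4
    )
)
order by week_start;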