How to use if clauses on queries in iReport

I have the following query:
SELECT COLUMN1, COLUMN2, COUNT(*)
FROM TABLE
WHERE COLUMN3 IS NOT NULL
AND COLUMN4 = 1
AND COLUMN5 = 4
AND COLUMN6 = 43
AND COLUMN7 = $P{YEAR}
AND COLUMN8 = $P{IT}
GROUP BY COLUMN1, COLUMN2
ORDER BY COLUMN1 ASC
I wonder if it is possible to do something like:
SELECT COLUMN1, COLUMN2, COUNT(*)
FROM TABLE
WHERE COLUMN3 IS NOT NULL
AND COLUMN4 = 1
AND COLUMN5 = 4
AND COLUMN6 = 43
AND COLUMN7 = $P{YEAR}
IF ($P{IT} != 'ALL') { AND COLUMN8 = $P{IT} }
GROUP BY COLUMN1, COLUMN2
ORDER BY COLUMN1 ASC
In other words, I want to add "AND COLUMN8 = $P{IT}" to the WHERE clause only if the value of $P{IT} is not "ALL"; that value determines whether the report filters by COLUMN8 or not.
Does anyone know if this is possible? Is there another approach that accomplishes the same thing?
I tried to execute the query above, but I got a compilation error.
Thanks in advance.

Yes, you can do that in iReport. But you need to look at it slightly differently. You need a second parameter. Keep $P{IT}, and add $P{IT_SQL}. Give $P{IT_SQL} a default value like this:
$P{IT}.equals("ALL") ? "" : " AND COLUMN8 = '" + $P{IT} + "'"
Then your query should look like this:
SELECT COLUMN1, COLUMN2, COUNT(*)
FROM TABLE
WHERE COLUMN3 IS NOT NULL
AND COLUMN4 = 1
AND COLUMN5 = 4
AND COLUMN6 = 43
AND COLUMN7 = $P{YEAR}
$P!{IT_SQL}
GROUP BY COLUMN1, COLUMN2
ORDER BY COLUMN1 ASC
That will give you the desired SQL (nothing at all) when $P{IT} has the value "ALL", and the desired SQL (AND COLUMN8 = 'abc') when $P{IT} has the value "abc".
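For illustration, here is a sketch of what the report engine ends up running when $P{IT} is, say, 'abc' (the $P!{...} syntax pastes the parameter's text into the query before it is sent to the database, whereas $P{...} binds a value; when $P{IT} is 'ALL' the extra line simply disappears):
SELECT COLUMN1, COLUMN2, COUNT(*)
FROM TABLE
WHERE COLUMN3 IS NOT NULL
AND COLUMN4 = 1
AND COLUMN5 = 4
AND COLUMN6 = 43
AND COLUMN7 = $P{YEAR}
AND COLUMN8 = 'abc'
GROUP BY COLUMN1, COLUMN2
ORDER BY COLUMN1 ASC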

Related

Oracle CASE WHEN - ORA-00936: missing expression

I am new to Oracle and below is my SQL:
SELECT * FROM TABLE1 WHERE COLUMN1 = 'YES'
AND COLUMN2 IN (
CASE WHEN EXISTS(SELECT * FROM TABLE1 WHERE COLUMN1 = 'YES' AND COLUMN2 NOT LIKE '%NO%')
THEN
SELECT COLUMN2 FROM TABLE1 WHERE COLUMN1 = 'YES' AND COLUMN2 NOT LIKE '%YES%'
ELSE
SELECT COLUMN2 FROM TABLE1 WHERE COLUMN1 = 'YES' AND COLUMN2 NOT LIKE '%YES%' END)
It gives "ORA-00936: missing expression" at the THEN. What am I doing wrong?
The subqueries after THEN and ELSE must be enclosed inside parentheses:
SELECT * FROM TABLE1 WHERE COLUMN1 = 'YES'
AND COLUMN2 IN (
CASE
WHEN EXISTS (SELECT * FROM TABLE1 WHERE COLUMN1 = 'YES' AND COLUMN2 NOT LIKE '%NO%')
THEN (SELECT COLUMN2 FROM TABLE1 WHERE COLUMN1 = 'YES' AND COLUMN2 NOT LIKE '%YES%')
ELSE (SELECT COLUMN2 FROM TABLE1 WHERE COLUMN1 = 'YES' AND COLUMN2 NOT LIKE '%YES%')
END
)
This will work only if these subqueries don't return more than 1 row.
Also, both subqueries are the same. Is this a typo?
And IN can be changed to = since CASE returns only 1 value.
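For instance, since both branches return a single value, the same statement can be written with = (a sketch reusing the question's table and column names, still assuming each subquery returns at most one row):
SELECT * FROM TABLE1 WHERE COLUMN1 = 'YES'
AND COLUMN2 = (
CASE
WHEN EXISTS (SELECT * FROM TABLE1 WHERE COLUMN1 = 'YES' AND COLUMN2 NOT LIKE '%NO%')
THEN (SELECT COLUMN2 FROM TABLE1 WHERE COLUMN1 = 'YES' AND COLUMN2 NOT LIKE '%YES%')
ELSE (SELECT COLUMN2 FROM TABLE1 WHERE COLUMN1 = 'YES' AND COLUMN2 NOT LIKE '%YES%')
END
)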

Check for != values when column contains records with NULLs

I have a number of tables that mix 'real' values with NULLs in their columns. From experience, issuing a SELECT against these that looks like:
SELECT column1, column2, column3 FROM mytable WHERE column1 != 'a value';
...doesn't return the records I expect. In the current table I am working on, this returns an empty recordset, even though I know the table has records with NULLs in column1, and other records whose column1 holds the value I am "!="-ing against. In this case I expect to see the records with NULLs in column1 (and, of course, anything else whose column1 is not 'a value').
Experimenting with NVL in the WHERE clause doesn't seem to give me anything different:
SELECT column1, column2, column3 FROM mytable WHERE NVL(column1, '') != 'a value';
...is also returning an empty recordset.
Using 'IS NULL' will technically give me the correct recordset in my current example, but of course if any records change to something like 'another value' in column1, then IS NULL will exclude those.
NULL can't be compared in the same way that other values can be. You must use IS NULL. If you want to include NULL values, add an OR to the WHERE clause:
SELECT column1, column2, column3 FROM mytable WHERE column1 != 'a value' OR column1 IS NULL
The query doesn't work because SQL handles equality checks (!=) differently from checks for null (IS NULL).
What you could do here is something like:
SELECT column1, column2, column3
FROM mytable
WHERE column1 != 'a value' OR column1 is null;
See Not equal <> != operator on NULL.
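If it helps, a quick illustrative snippet (not from the question's table) shows the three-valued logic at work:
SELECT CASE WHEN NULL != 'a value' THEN 'kept' ELSE 'filtered out' END AS result
FROM dual;
-- NULL != 'a value' evaluates to UNKNOWN rather than TRUE, so the ELSE branch fires,
-- which is exactly why rows with NULL in column1 never survive the original WHERE clause.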
What you were trying was correct; you just need to change it a little, see below:
SELECT column1, column2, column3 FROM mytable WHERE NVL(column1, '0') != 'a value';
Instead of an empty string, pass any other character as NVL's second argument.
"Experimenting with NVL in the WHERE clause doesn't seem to give me anything different"
That's true:
SQL> select * from mytable;
COLUMN1                 COLUMN2 COLUMN3
-------------------- ---------- ---------
a value                       1 25-JUL-17
not a value                   2 25-JUL-17
whatever                      3 25-JUL-17
                              4 26-JUL-17
SQL> SELECT column1, column2, column3 FROM mytable WHERE NVL(column1, '') != 'a value';
COLUMN1                 COLUMN2 COLUMN3
-------------------- ---------- ---------
not a value                   2 25-JUL-17
whatever                      3 25-JUL-17
SQL>
This is because your experiment didn't go far enough. For historical reasons Oracle treats an empty string as NULL, so your NVL() call effectively just substitutes one null for another. But if you had used a proper value in your call you would have got the result you wanted:
SQL> SELECT column1, column2, column3 FROM mytable WHERE NVL(column1, 'meh') != 'a value';
COLUMN1                 COLUMN2 COLUMN3
-------------------- ---------- ---------
not a value                   2 25-JUL-17
whatever                      3 25-JUL-17
                              4 26-JUL-17
SQL>
The alternative approach is to explicitly test for NULL and test for the excluding value...
SQL> SELECT column1, column2, column3 FROM mytable
  2  where column1 is null or column1 != 'a value';
COLUMN1                 COLUMN2 COLUMN3
-------------------- ---------- ---------
not a value                   2 25-JUL-17
whatever                      3 25-JUL-17
                              4 26-JUL-17
SQL>
The second approach is probably more orthodox.
1) The undocumented Oracle function SYS_OP_MAP_NONNULL (it has existed since Oracle 10):
with abc as ( select 'a value' as col1 from dual
union all
select '' as col1 from dual)
select * from abc
where SYS_OP_MAP_NONNULL(col1) != SYS_OP_MAP_NONNULL('a value')
;
2) LNNVL (check the table in the documentation for clarification):
with abc as ( select 'a value' as col1 from dual
union all
select '' as col1 from dual)
select * from abc
where lnnvl( col1 = 'a value');
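For comparison, the explicit IS NULL approach from the earlier answers gives the same result on this test data; the sketch below should also return only the row whose col1 is NULL:
with abc as ( select 'a value' as col1 from dual
union all
select '' as col1 from dual)
select * from abc
where col1 != 'a value' or col1 is null;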

Self Join Oracle

I have a table, table1; below is what the data looks like.
Column1 is my foreign key to another table.
Column1  Column2  Column3
1        A        06/MAY/14
1        A        05/MAY/14
1        B        06/MAY/14
1        B        01/JAN/00
1        A        01/JAN/00
Now I want to find the distinct column1 values that meet the following conditions:
1. at least one record where column2 is A and column3 is (sysdate - 1)
AND
2. at least one record where column2 is B and column3 is (sysdate - 1)
Meaning at least one A record and at least one B record should have column3 populated with (sysdate - 1).
I have written the query below; please tell me if I'm doing anything wrong.
I also want to know whether I'm joining the right way. The table contains around 50K records, so I guess performance should be fine.
SELECT DISTINCT COLUMN1 FROM
TABLE1 A
JOIN
TABLE1 B ON (A.COLUMN1 = B.COLUMN1)
WHERE
((TRUNC(A.COLUMN3) - TRUNC(A.COLUMN3) = 0)
AND TRUNC(A.COLUMN3) = TRUNC(SYSDATE - 1)
AND TRUNC(B.COLUMN3) = TRUNC(SYSDATE - 1)
AND A.COLUMN2 = 'A'
AND B.COLUMN2 = 'B'
AND TO_CHAR(A.COLUMN3, 'DD-MON-YY') != '01-JAN-00'
AND TO_CHAR(B.COLUMN3, 'DD-MON-YY') != '01-JAN-00'
);
For performance comparison, one with subselects and GROUP BY:
SELECT COLUMN1 FROM (
SELECT
COLUMN1,
COUNT(COLUMN2) CNT
FROM (
SELECT DISTINCT
COLUMN1,
COLUMN2
FROM TABLE1
WHERE TRUNC(COLUMN3) = TRUNC(SYSDATE) - 1 AND
(COLUMN2 = 'A' OR COLUMN2 = 'B'))
GROUP BY COLUMN1)
WHERE CNT = 2
This should work
SELECT DISTINCT A.column1 -- Obtain distinct from A
FROM table1 A -- TableA
join table1 B -- TableB
ON A.column1 = B.column1 -- Joining them on Column1
WHERE TRUNC(A.column3) = TRUNC(SYSDATE) - 1 -- Yesterday's data on table A
AND A.column2 = 'A' -- A values
AND B.column2 = 'B'; -- B values
Note: there is no distinctness in your test case, so try it with a unique key.
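Another pattern worth considering for this kind of "at least one A and at least one B" requirement is conditional aggregation, which avoids the self join entirely. A minimal sketch, assuming the same table and column names and the same "yesterday" filter (untested against the original data):
SELECT COLUMN1
FROM TABLE1
WHERE TRUNC(COLUMN3) = TRUNC(SYSDATE) - 1
AND COLUMN2 IN ('A', 'B')
GROUP BY COLUMN1
HAVING COUNT(CASE WHEN COLUMN2 = 'A' THEN 1 END) > 0
AND COUNT(CASE WHEN COLUMN2 = 'B' THEN 1 END) > 0;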

change the value of a query when inserting

tab1
column1 = 4
column2 = (empty column)
column3 = also empty
What I want is: when I insert data into column2, for example
column2 = test
I want column2 to also get the data of column1, so the result would be
column2 = 4test
I know I can achieve that with a trigger or a procedure; I am asking if there is another way to do it when I insert.
Try
update tab1 set column2 = column1 || 'test' where condition;
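Applied to the example values from the question (a sketch, assuming the row with column1 = 4 already exists and column2 is empty):
UPDATE tab1
SET column2 = column1 || 'test'
WHERE column1 = 4;
-- column2 now holds '4test'; Oracle implicitly converts the number in column1 to a string for ||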

reorder columns based on value

Is there any way to reorder columns in a table (or query) based on value?
For example, on every row, FacName should be first, NPI second, TIN third and Address fourth.
Thank you!
select col1, col2, col3, col4 from (
  select * from (
    select
      n, str,
      row_number() over (partition by n
        order by decode(substr(str,1,3),'Fac',1,'NPI',2,'TIN',3,'Add',4)) cn
    from (
      select * from (select t.*, rownum n from t)
      unpivot (str for cn in (col1 as 0, col2 as 0, col3 as 0, col4 as 0))
    )
  )
  pivot (min(str) for cn in (1 as col1, 2 as col2, 3 as col3, 4 as col4))
)
order by n
fiddle
The fact that you want to move data between columns strongly implies that the underlying data model is broken and needs to be normalized. Fixing the data model would be much more appropriate than adding another layer of complexity to the system.
That being said, you should be able to do something like
SELECT (CASE WHEN column1 LIKE 'FacilityName%' THEN column1
WHEN column2 LIKE 'FacilityName%' THEN column2
WHEN column3 LIKE 'FacilityName%' THEN column3
WHEN column4 LIKE 'FacilityName%' THEN column4
ELSE null
END) column1,
(CASE WHEN column1 LIKE 'NPI%' THEN column1
WHEN column2 LIKE 'NPI%' THEN column2
WHEN column3 LIKE 'NPI%' THEN column3
WHEN column4 LIKE 'NPI%' THEN column4
ELSE null
END) column2,
(CASE WHEN column1 LIKE 'TIN%' THEN column1
WHEN column2 LIKE 'TIN%' THEN column2
WHEN column3 LIKE 'TIN%' THEN column3
WHEN column4 LIKE 'TIN%' THEN column4
ELSE null
END) column3,
(CASE WHEN column1 LIKE 'Address%' THEN column1
WHEN column2 LIKE 'Address%' THEN column2
WHEN column3 LIKE 'Address%' THEN column3
WHEN column4 LIKE 'Address%' THEN column4
ELSE null
END) column4
FROM( <<your query>> )
