I have a query that doesn't work; can you help me with the transformation?
This is the original Informix query that I want to transform to Oracle:
SELECT DISTINCT table3.no_cev,
table1.literal,
table1.colid,
table2.repid,
table2.valor,
table2.indicador,
'',
'',
table2.origen,
table2.codi,
table2.no_cia,
table2.num_dcca,
table2.no_aprof,
table2.no_compta
FROM table1,
OUTER table2,
table3
WHERE ( table1.colid = table2.colid) and
( table1.grupid = table2.grupid) and
( table3.no_cev = table2.no_cev) and
( ( table1.grupid = 2) AND
( table2.cod_exp = 99609 ) AND
( table2.indicador = 'S' ) ) AND
( table3.num_dcca = 1);
( table3.codest = 76695);
Here is my transformation of the query from Informix to Oracle, but it doesn't seem to work:
SELECT DISTINCT table3.no_cev,
table1.literal,
table1.colid,
table2.repid,
table2.valor,
table2.indicador,
'',
'',
table2.origen,
table2.codi,
table2.no_cia,
table2.num_dcca,
table2.no_aprof,
table2.no_compta
FROM table1
LEFT OUTER JOIN (table2
RIGHT OUTER JOIN table3
ON table3.no_cev = table2.no_cev)
ON (( table1.colid = table2.colid)
AND ( table1.grupid = table2.grupid))
WHERE ( ( table1.grupid = '2' )
AND ( table2.cod_exp = '99609' )
AND ( table2.indicador = 'S' ) )
AND ( table3.num_dcca = '1')
AND ( table3.codest = '76695');
You have placed the ON clause of the join in the wrong place in the code.
Here is your code, corrected:
SELECT DISTINCT
TABLE3.NO_CEV,
TABLE1.LITERAL,
TABLE1.COLID,
TABLE2.REPID,
TABLE2.VALOR,
TABLE2.INDICADOR,
'',
'',
TABLE2.ORIGEN,
TABLE2.CODI,
TABLE2.NO_CIA,
TABLE2.NUM_DCCA,
TABLE2.NO_APROF,
TABLE2.NO_COMPTA
FROM
TABLE1
LEFT OUTER JOIN
-- ( -- removed this bracket
TABLE2 ON ( ( TABLE1.COLID = TABLE2.COLID )
AND ( TABLE1.GRUPID = TABLE2.GRUPID ) ) -- added this ON here
RIGHT OUTER JOIN TABLE3 ON TABLE3.NO_CEV = TABLE2.NO_CEV
-- ) -- removed this bracket
WHERE
TABLE1.GRUPID = '2'
AND TABLE2.COD_EXP = '99609'
AND TABLE2.INDICADOR = 'S'
AND TABLE3.NUM_DCCA = '1'
AND TABLE3.CODEST = '76695' ; -- no need of extra brackets
Cheers!!
It makes life unnecessarily difficult for people who would like to help you when you don't include a more or less minimal outline schema for the tables used in your query, and some sample data, and the expected results. Further, you seem to have converted numbers (integers) in the original Informix query into strings in the Oracle query. It is not clear why. Again, the schema would help explain what's going on.
As I noted in the comments, you should omit the two empty/null fields in the select-list; you could also drop a number of the columns from table2 — candidates for being dropped include all the columns not otherwise named in the query, such as repid, valor, origen, codi, no_cia, no_aprof, no_compta. Keep one or two of them; you don't really need more. However, I've preserved all the named columns in the sample data.
Schema and data
Here is some Informix SQL that appears to match the tables and columns in the query shown in the question. In case of doubt, the column was made into an INTEGER column. All the columns are qualified with NOT NULL.
DROP TABLE IF EXISTS table1;
DROP TABLE IF EXISTS table2;
DROP TABLE IF EXISTS table3;
CREATE TABLE table1
(
grupid INTEGER NOT NULL, -- 2
literal VARCHAR(32) NOT NULL,
colid INTEGER NOT NULL
);
CREATE TABLE table2
(
grupid INTEGER NOT NULL,
no_cev INTEGER NOT NULL,
colid INTEGER NOT NULL,
repid INTEGER NOT NULL,
valor INTEGER NOT NULL,
indicador CHAR(1) NOT NULL, -- 'S'
origen INTEGER NOT NULL,
codi INTEGER NOT NULL,
no_cia INTEGER NOT NULL,
num_dcca INTEGER NOT NULL,
no_aprof INTEGER NOT NULL,
no_compta INTEGER NOT NULL,
cod_exp INTEGER NOT NULL -- 99609
);
CREATE TABLE table3
(
no_cev INTEGER NOT NULL,
num_dcca INTEGER NOT NULL, -- 1
codest INTEGER NOT NULL -- 76695
);
LOAD FROM "table1.unl" INSERT INTO table1;
LOAD FROM "table2.unl" INSERT INTO table2;
LOAD FROM "table3.unl" INSERT INTO table3;
The annotations indicate the value specified in the query for that column; they helped guide the construction of the sample data.
Three sample data files in the Informix (pipe-separated values) UNLOAD format are:
table1.unl
2|Literal value 1|100
2|Literal value 2|123
2|Literal value 3|134
2|Literal value 4|145
table2.unl
2|2345|100|222|333|S|444|555|666|777|888|999|99609
2|2346|123|223|333|S|444|555|666|776|888|999|99609
2|2347|134|224|333|S|444|555|666|775|888|999|99609
2|2348|145|225|333|S|444|555|666|774|888|999|99609
1|2345|100|225|333|S|444|555|666|773|888|999|99609
2|2340|123|226|333|S|444|555|666|772|888|999|99609
3|2347|134|227|333|S|444|555|666|771|888|999|99609
2|2350|145|228|333|S|444|555|666|770|888|999|99609
table3.unl
2345|1|76695
2346|1|88776
2347|2|76695
2348|1|76695
Result of query using Informix-style OUTER join
Assuming that the stray early semicolon in the original query should be an AND (that matches what is written in the proposed Oracle query), removing the two empty string result columns, and removing the superfluous level of parentheses, then the original query looks like:
SELECT DISTINCT
table3.no_cev,
table1.literal,
table1.colid,
table2.repid,
table2.valor,
table2.indicador,
table2.origen,
table2.codi,
table2.no_cia,
table2.num_dcca,
table2.no_aprof,
table2.no_compta
FROM table1,
OUTER table2,
table3
WHERE (table1.colid = table2.colid) AND
(table1.grupid = table2.grupid) AND
(table3.no_cev = table2.no_cev) AND
(table1.grupid = 2) AND
(table2.cod_exp = 99609) AND
(table2.indicador = 'S') AND
(table3.num_dcca = 1) AND
(table3.codest = 76695);
On the sample data shown, using Informix 12.10.FC6 running on a MacBook Pro with macOS 10.14.6 Mojave (not that the o/s is likely to be a factor in the results), this produces:
2345|Literal value 1|100|222|333|S|444|555|666|777|888|999
2345|Literal value 2|123|||||||||
2345|Literal value 3|134|||||||||
2345|Literal value 4|145|||||||||
2348|Literal value 1|100|||||||||
2348|Literal value 2|123|||||||||
2348|Literal value 3|134|||||||||
2348|Literal value 4|145|225|333|S|444|555|666|774|888|999
Why, you ask? Good question! The Informix old-style OUTER join is a complex critter, and doesn't necessarily have a simple translation to modern standard SQL (and hence to Oracle, etc). You can find some description of the way it works at Complex Outer Joins.
There are two groups of tables: table1 and table3 are the dominant tables, and table2 is the only OUTER table here. This means that Informix processes table1 and table3 using an inner join, and then outer joins the result with table2. Since there is no direct join between table1 and table3, the result is a cartesian product of the two tables: each of the 4 rows in table1 is joined with each of the 4 rows in table3, yielding 16 rows. However, the filter conditions eliminate the rows from table3 where no_cev is 2346 and 2347. All the remaining 8 rows will be preserved, regardless of the results of the outer join operation. Now the rows are outer joined with table2. The rows with (no_cev, colid) of (2345, 100) and (2348, 145) have matching rows in table2 where the data satisfies the conditions in the WHERE clause. The other rows don't have such matching rows, so the columns from table2 for those rows are 'all NULL'. As I said, it is weird and contorted, and explaining it is hard work!
A first approximation using standard SQL
This query is a moderate approximation to a direct translation of the Informix query:
SELECT DISTINCT
t3.no_cev,
t1.literal,
t1.colid,
t2.repid,
t2.valor,
t2.indicador,
t2.origen,
t2.codi,
t2.no_cia,
t2.num_dcca,
t2.no_aprof,
t2.no_compta
FROM table1 AS t1
INNER JOIN table3 AS t3 ON 1 = 1
LEFT JOIN table2 AS t2 ON t3.no_cev = t2.no_cev
AND t1.colid = t2.colid
AND t1.grupid = t2.grupid
WHERE t1.grupid = 2
AND t2.cod_exp = 99609
AND t2.indicador = 'S'
AND t3.num_dcca = 1
AND t3.codest = 76695;
The output is:
2345|Literal value 1|100|222|333|S|444|555|666|777|888|999
2348|Literal value 4|145|225|333|S|444|555|666|774|888|999
This is missing the rows with 'null values'.
Achieving the same result using standard INNER and OUTER joins
We can collect those rows by looking for rows where one of the columns in table2 is null (because they're either all null or none null — because the columns are qualified NOT NULL):
SELECT DISTINCT
t3.no_cev,
t1.literal,
t1.colid,
t2.repid,
t2.valor,
t2.indicador,
t2.origen,
t2.codi,
t2.no_cia,
t2.num_dcca,
t2.no_aprof,
t2.no_compta
FROM table1 AS t1
INNER JOIN table3 AS t3 ON 1 = 1
LEFT JOIN table2 AS t2 ON t3.no_cev = t2.no_cev
AND t1.colid = t2.colid
AND t1.grupid = t2.grupid
WHERE t1.grupid = 2
AND ((t2.cod_exp = 99609 AND t2.indicador = 'S') OR t2.cod_exp IS NULL)
AND t3.num_dcca = 1
AND t3.codest = 76695;
This yields the output:
2345|Literal value 1|100|222|333|S|444|555|666|777|888|999
2345|Literal value 2|123|||||||||
2345|Literal value 3|134|||||||||
2345|Literal value 4|145|||||||||
2348|Literal value 1|100|||||||||
2348|Literal value 2|123|||||||||
2348|Literal value 3|134|||||||||
2348|Literal value 4|145|225|333|S|444|555|666|774|888|999
This is the same as the original old-style Informix OUTER join query.
Tejash's proposed solution
The SQL in Tejash's answer (revision 1) yields, on the same data:
2345|Literal value 1|100|222|333|S|\ |\ |444|555|666|777|888|999
2348|Literal value 4|145|225|333|S|\ |\ |444|555|666|774|888|999
The backslash-space values correspond to the empty strings — it's Informix's slightly peculiar way of encoding a zero-length non-null string. It's an area where Oracle may well behave slightly differently, but it is tangential to the problem with the query.
Clearly, this is not the same result as the Informix query. It's probably more reasonable; it works out of the box (I simply did copy'n'paste, quoted numbers and all, and it worked with no editing needed).
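Regarding the two empty-string columns mentioned above: Oracle treats a zero-length VARCHAR2 literal as NULL, so in Oracle those two select-list items come back as NULL rather than as empty strings. A minimal sketch of the quirk (it does not depend on the sample tables):
SELECT CASE WHEN '' IS NULL THEN 'empty string is NULL in Oracle'
            ELSE 'empty string is not NULL'
       END AS empty_string_check
FROM dual;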
I don't know the Informix OUTER syntax, so my answer may be wrong. However, since the WHERE clause has no join condition between table1 and table3, this appears to be just a cross join of table1 and table3, followed by an outer join to table2.
One way to write this:
select t3.no_cev, t1.literal, t1.colid, t2.*
from table1 t1
cross join table3 t3
left join table2 t2 on t2.colid = t1.colid
and t2.grupid = t1.grupid
and t2.no_cev = t3.no_cev
and t2.cod_exp = 99609
and t2.indicador = 'S'
where t1.grupid = 2
and t3.num_dcca = 1
and t3.codest = 76695;
Another is:
with t1 as (select * from table1 where grupid = 2)
, t2 as (select * from table2 where grupid = 2 and cod_exp = 99609 and indicador = 'S')
, t3 as (select * from table3 where num_dcca = 1 and codest = 76695)
select t3.no_cev, t1.literal, t1.colid, t2.*
from t1
cross join t3
left join t2 on t2.colid = t1.colid and t2.no_cev = t3.no_cev;
The queries above are standard SQL and have been supported by Oracle since version 9i, I think.
For ISBN '9780495809135', if the CATEGORY_EXISTS column comes back as '1234,3454' then the query throws the error below; if it returns a single value, it does not throw the error.
In the topmost query I want to say: if CATEGORY_EXISTS = 'Category Not Found' then the FILE_NAME column should display 'files not found'; otherwise pass the comma-separated CATEGORY_EXISTS values up to the topmost query.
Please note that this is just a pseudo query; the actual query has a lot of other tables and joins.
ORA-01722: invalid number
01722. 00000 - "invalid number"
*Cause: The specified number was invalid.
*Action: Specify a valid number.
SELECT ISBN ,
(SELECT LISTAGG(ANP.FILE_NAME, ',') WITHIN GROUP (
ORDER BY ANP.FILE_NAME)
FROM TABLE1 T
WHERE T.NODE_ID IN( CATEGORY_EXISTS)
)FILE_NAME
FROM
(SELECT ISBN,
(SELECT (
CASE
WHEN COUNT(DISTINCT AN.ID) > 0
THEN LISTAGG(AN.ID, ',') WITHIN GROUP (
ORDER BY AN.ID)
ELSE 'Category Not Found'
END )
FROM TABLE1 aca
JOIN TABLE2 AN
ON ACA.CHILD_NODE_ID=AN.ID
WHERE PARENT_NODE_ID=GT_CHILD_NODE_ID
) CATEGORY_EXISTS
FROM
(SELECT ISBN,
(SELECT ID FROM TEMP_CHILD_ASSOC ac WHERE CHILD_NODE_NAME=GT.ISBN
) GT_CHILD_NODE_ID
FROM MAIN_TABLE GT
WHERE ISBN='9780495809135'
)
);
The listagg() function generates a string of comma-separated values (if there is more than one ID). The case expression gives you either that generated string, or the fixed text literal (if there are no IDs). You are then trying to compare that string to a number; effectively one of these:
WHERE T.NODE_ID IN ('4321')
WHERE T.NODE_ID IN ('1234,3454')
WHERE T.NODE_ID IN ('Category Not Found')
You are implicitly converting the string to a number to compare it with NODE_ID. The first one will work as the implicit conversion is valid. The second will give you ORA-01722 (unless you have exactly two values, and your NLS decimal separator is a comma; but still won't give a match), and the third will also give that error - because those strings cannot be converted to numbers.
It's possible you are expecting the second one to be magically treated as two numbers inside the IN() clause, but that isn't how it works; it's getting a single string literal, not an actual list of numbers it can understand.
The IN condition does accept a list of multiple comma-separated expressions, but you are passing in a single string. The fact that string happens to consist of comma-separated values is irrelevant: it is itself still just a single expression. And that cannot be converted implicitly to a number.
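To make the distinction concrete, a hedged sketch (the node_id values are made up): the first predicate is a genuine list of two numeric expressions and matches either value, while the second is one string expression and raises ORA-01722 when Oracle tries to convert it to a number.
WHERE T.NODE_ID IN (1234, 3454)    -- a real list: matches NODE_ID 1234 or 3454
WHERE T.NODE_ID IN ('1234,3454')   -- one string: implicit TO_NUMBER fails with ORA-01722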
If you have, or can create, a schema-level table type like:
create type my_number_tab as table of number
/
then you could use the collect() function to convert the IDs into a collection instead of a string, and then use member of to find matches; something like (with a bit of interpretation of your pseudocode):
SELECT ISBN ,
(SELECT LISTAGG(ANP.FILE_NAME, ',') WITHIN GROUP (
ORDER BY ANP.FILE_NAME)
FROM TABLE3 ANP
WHERE ANP.NODE_ID MEMBER OF CATEGORIES -- use collection
)FILE_NAME
FROM
(SELECT ISBN,
(SELECT CAST(COLLECT(AN.ID) AS my_number_tab) -- create collection not string
FROM TABLE1 aca
JOIN TABLE2 AN
ON ACA.CHILD_NODE_ID=AN.ID
WHERE PARENT_NODE_ID=GT_CHILD_NODE_ID
) CATEGORIES
FROM
(SELECT ISBN,
(SELECT ID FROM TEMP_CHILD_ASSOC ac WHERE CHILD_NODE_NAME=GT.ISBN
) GT_CHILD_NODE_ID
FROM MAIN_TABLE GT
WHERE ISBN='9780495809135'
)
);
It looks like you could also join to ANP inside the inner query instead, so that you generate the string list of file names there rather than (or as well as) the string list of IDs. It's hard to tell from the pseudocode, though; perhaps something like:
SELECT ISBN,
(SELECT (
CASE
WHEN COUNT(DISTINCT AN.ID) > 0
THEN LISTAGG(ANP.FILE_NAME, ',') WITHIN GROUP (
ORDER BY ANP.FILE_NAME)
ELSE 'Category Not Found'
END )
FROM TABLE1 aca
JOIN TABLE2 AN
ON ACA.CHILD_NODE_ID=AN.ID
JOIN TABLE3 ANP
ON ANP.NODE_ID=AN.ID
WHERE ACA.PARENT_NODE_ID=GT_CHILD_NODE_ID
) FILE_NAME
FROM
(SELECT ISBN,
(SELECT ID FROM TEMP_CHILD_ASSOC ac WHERE CHILD_NODE_NAME=GT.ISBN
) GT_CHILD_NODE_ID
FROM MAIN_TABLE GT
WHERE ISBN='9780495809135'
);
You could probably also do the same thing with left outer joins (though perhaps they don't all need to be), although your comment suggests you have a reason for using subqueries instead:
SELECT GT.ISBN,
CASE WHEN COUNT(AN.ID) = 0 THEN 'files not found'
ELSE LISTAGG(ANP.FILE_NAME, ',') WITHIN GROUP (ORDER BY ANP.FILE_NAME)
END AS file_name
FROM MAIN_TABLE GT
LEFT JOIN TEMP_CHILD_ASSOC ac ON CHILD_NODE_NAME=GT.ISBN
LEFT JOIN table1 aca ON aca.parent_node_id = ac.id
LEFT JOIN table2 an on an.id = ACA.CHILD_NODE_ID
LEFT JOIN table3 anp on anp.node_id = an.id
WHERE GT.ISBN = '9780495809135'
GROUP BY GT.ISBN;
or something like that; again hard to tell from the pseudocode...
I am new to Oracle. I am trying to update the values in a table with the values from a SELECT DISTINCT statement using MERGE INTO. I want to update the table conditionally, based on what is in the USING subquery.
A quick diagram of what I am essentially going for is:
MERGE
INTO update_table ut
USING
(SELECT DISTINCT
t1.column_1
,t2.column_2
FROM table_1 t1
INNER JOIN table_2 t2
ON t1.foreign_key = t2.primary_key) st
ON (ut.pk = st.column_1)
WHEN MATCH UPATE
SET(ut.update_column = st.column_2
WHERE st.column_1 = 1
AND st.column_2 = 1
,ut.update_column = st.column_2
WHERE st.column_1 = 2
AND st.column_2 = 2);
However, when I do so I get the INVALID COLUMN SPECIFICATION error on the line where I use SET. How can I work around this to successfully update the table, preferably in ANSI-standard SQL?
You can move the conditions that you added in the WHERE clause into the select list of the USING clause itself, like this (not tested; your conditions in the WHERE clause were not appropriate):
MERGE
INTO update_table ut
USING (SELECT DISTINCT
t1.column_1 ,
CASE
WHEN t1.column_1 = 1
AND t2.column_2 = 1
THEN t2.column_2
WHEN t1.column_1 = 2
AND t2.column_2 = 2
THEN t2.column_2
END column_2
FROM
table_1 t1
INNER JOIN table_2 t2 ON t1.foreign_key = t2.primary_key
) st
ON (ut.pk = st.column_1)
WHEN MATCHED THEN
UPDATE SET ut.update_column = st.column_2 ;
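For comparison only, here is a hedged, untested sketch that keeps the conditions but moves the CASE into the SET clause instead (same hypothetical tables and columns as above); rows matching neither condition keep their current value:
MERGE
INTO update_table ut
USING (SELECT DISTINCT
              t1.column_1,
              t2.column_2
       FROM table_1 t1
       INNER JOIN table_2 t2 ON t1.foreign_key = t2.primary_key
      ) st
ON (ut.pk = st.column_1)
WHEN MATCHED THEN
UPDATE SET ut.update_column = CASE
                                WHEN st.column_1 = 1 AND st.column_2 = 1 THEN st.column_2
                                WHEN st.column_1 = 2 AND st.column_2 = 2 THEN st.column_2
                                ELSE ut.update_column
                              END;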
In Oracle I'm using a DECODE statement as below. For security reasons I don't want the code to contain hardcoded values, and I plan to create a new lookup table.
Select CONCAT( Decode(A.COUNTRY_INITIAL,
'A','America',
'B','Brazil',
'F','FINLAND',
NULL),
Decode(A.ADD_VALUE,
'M','YES',
NULL))
from (
Select SUBSTR(COUNTRY_BASE, -1,1) as COUNTRY_INITIAL,
SUBSTR(IS_VALUED, -1,1) as ADD_VALUE
from TBL1
)A
Reference Table
*******************
Clmn1 Clmn2 Clmn3
--------------------------
cntry1 A America
cntry2 B Brazil
cntry3 F Finland
Value1 M YES
Could you please let me know how I can incorporate this in the DECODE logic? Also, FYI, I'm using this code snippet in an Oracle function.
If you're going to store the lookup information in a table, you wouldn't use a DECODE. You'd join the two tables:
SELECT ref.clmn3
FROM tbl1 t1
LEFT OUTER JOIN <<reference table>> ref
ON( substr(t1.country_base, -1, 1) = ref.clmn2 )
Since your DECODE has a NULL default, I'm guessing that you expect some rows of tbl1 to have no matching row in the reference table, so you probably want a LEFT OUTER JOIN rather than an INNER JOIN.
If you have these details in a table, you can simply use a join to get the desired output:
select t1.COUNTRY_BASE, ref.Clmn3, ref1.Clmn3
from TBL1 t1
left outer join reftable ref
on SUBSTR(t1.COUNTRY_BASE, -1, 1) = ref.Clmn2
left outer join reftable ref1
on SUBSTR(t1.IS_VALUED, -1, 1) = ref1.Clmn2;
If you are using a separate table to store the values, you don't need the DECODE function. You can simply join the two tables:
select a.country_base,
a.is_valued,
b.clmn3,
c.clmn3
from tbl1 a left outer join reference_table b
on (substr(a.country_base, -1, 1) = b.clmn2
and b.clmn1 like 'cntry%' --extra clause needed if there are same country and value codes
)
left outer join reference_table c
on (substr(a.is_valued, -1, 1) = c.clmn2
and c.clmn1 like 'Value%' --extra clause needed if there are same country and value codes
);
I've run into a weird problem. The query itself is huge, so I'm not going to post it here (I could post it, however, in case someone needs to see it). I have a table, TAB1, with a CHAR(1) column, COL1. This column is queried as part of my query. When I filter the result set on this column I say:
WHERE TAB1.COL1=1
This way the query runs and returns a very big result set. I recently updated one of the subqueries to speed up the query. But after this, when I write WHERE TAB1.COL1=1 it does not return anything, while if I change it to WHERE TAB1.COL1='1' it gives me the records I need. Notice the WHERE clause with quotes and without them. To make it clearer: before updating one of the subqueries I did not have to put quotes around the value compared against COL1, but after updating I have to. What feature of Oracle am I not aware of?
EDIT: I'm posting the two versions of the query in case someone finds them useful.
Version 1:
SELECT p.ssn,
pss.pin,
pd.doc_number,
p.surname,
p.name,
p.patronymic,
to_number(p.sex, '9') as sex,
citiz_c.short_name citizenship,
p.birth_place,
p.birth_day as birth_date,
coun_c.short_name as country,
di.name as leg_city,
trim( pa.settlement
|| ' '
|| pa.street) AS leg_street,
pd.issue_date,
pd.issuing_body,
irs.irn,
irs.tpn,
irs.reg_office,
to_number(irs.insurer_type, '9') as insurer_type,
TO_CHAR(sa.REG_CODE)
||CONVERT_INT_TO_DOUBLE_LETTER(TO_NUMBER(SUBSTR(TO_CHAR(sa.DOSSIER_NR, '0999999'), 2, 3)))
||SUBSTR(TO_CHAR(sa.DOSSIER_NR, '0999999'), 5, 4) CONVERTED_SSN_DOSSIER_NR,
fa.snr
FROM
(SELECT pss_t.pin,
pss_t.ssn
FROM EHDIS_INSURANCE.pin_ssn_status pss_t
WHERE pss_t.difference_status < 5
) pss
INNER JOIN SSPF_CENTRE.file_archive fa
ON fa.ssn = pss.ssn
INNER JOIN SSPF_CENTRE.persons p
ON p.ssn = fa.ssn
INNER JOIN
(SELECT pd_2.ssn,
pd_2.type,
pd_2.series,
pd_2.doc_number,
pd_2.issue_date,
pd_2.issuing_body
FROM
--The changed subquery starts here
(SELECT ssn,
MIN(type) AS type
FROM SSPF_CENTRE.person_documents
GROUP BY ssn
) pd_1
INNER JOIN SSPF_CENTRE.person_documents pd_2
ON pd_2.type = pd_1.type
AND pd_2.ssn = pd_1.ssn
) pd
--The changed subquery ends here
ON pd.ssn = p.ssn
INNER JOIN SSPF_CENTRE.ssn_archive sa
ON p.ssn = sa.ssn
INNER JOIN SSPF_CENTRE.person_addresses pa
ON p.ssn = pa.ssn
INNER JOIN
(SELECT i_t.irn,
irs_t.ssn,
i_t.tpn,
i_t.reg_office,
(
CASE i_t.insurer_type
WHEN '4'
THEN '1'
ELSE i_t.insurer_type
END) AS insurer_type
FROM sspf_centre.irn_registered_ssn irs_t
INNER JOIN SSPF_CENTRE.insurers i_t
ON i_t.irn = irs_t.new_irn
OR i_t.old_irn = irs_t.old_irn
WHERE irs_t.is_registration IS NOT NULL
AND i_t.is_real IS NOT NULL
) irs ON irs.ssn = p.ssn
LEFT OUTER JOIN SSPF_CENTRE.districts di
ON di.code = pa.city
LEFT OUTER JOIN SSPF_CENTRE.countries citiz_c
ON p.citizenship = citiz_c.numeric_code
LEFT OUTER JOIN SSPF_CENTRE.countries coun_c
ON pa.country_code = coun_c.numeric_code
WHERE pa.address_flag = '1' -- Here's the column value with quotes
AND fa.form_type = 'Q3';
And Version 2:
SELECT p.ssn,
pss.pin,
pd.doc_number,
p.surname,
p.name,
p.patronymic,
to_number(p.sex, '9') as sex,
citiz_c.short_name citizenship,
p.birth_place,
p.birth_day as birth_date,
coun_c.short_name as country,
di.name as leg_city,
trim( pa.settlement
|| ' '
|| pa.street) AS leg_street,
pd.issue_date,
pd.issuing_body,
irs.irn,
irs.tpn,
irs.reg_office,
to_number(irs.insurer_type, '9') as insurer_type,
TO_CHAR(sa.REG_CODE)
||CONVERT_INT_TO_DOUBLE_LETTER(TO_NUMBER(SUBSTR(TO_CHAR(sa.DOSSIER_NR, '0999999'), 2, 3)))
||SUBSTR(TO_CHAR(sa.DOSSIER_NR, '0999999'), 5, 4) CONVERTED_SSN_DOSSIER_NR,
fa.snr
FROM
(SELECT pss_t.pin,
pss_t.ssn
FROM EHDIS_INSURANCE.pin_ssn_status pss_t
WHERE pss_t.difference_status < 5
) pss
INNER JOIN SSPF_CENTRE.file_archive fa
ON fa.ssn = pss.ssn
INNER JOIN SSPF_CENTRE.persons p
ON p.ssn = fa.ssn
INNER JOIN
--The changed subquery starts here
(SELECT ssn,
type,
series,
doc_number,
issue_date,
issuing_body
FROM
(SELECT ssn,
type,
series,
doc_number,
issue_date,
issuing_body,
ROW_NUMBER() OVER (partition BY ssn order by type) rn
FROM SSPF_CENTRE.person_documents
)
WHERE rn = 1
) pd
--The changed subquery ends here
ON pd.ssn = p.ssn
INNER JOIN SSPF_CENTRE.ssn_archive sa
ON p.ssn = sa.ssn
INNER JOIN SSPF_CENTRE.person_addresses pa
ON p.ssn = pa.ssn
INNER JOIN
(SELECT i_t.irn,
irs_t.ssn,
i_t.tpn,
i_t.reg_office,
(
CASE i_t.insurer_type
WHEN '4'
THEN '1'
ELSE i_t.insurer_type
END) AS insurer_type
FROM sspf_centre.irn_registered_ssn irs_t
INNER JOIN SSPF_CENTRE.insurers i_t
ON i_t.irn = irs_t.new_irn
OR i_t.old_irn = irs_t.old_irn
WHERE irs_t.is_registration IS NOT NULL
AND i_t.is_real IS NOT NULL
) irs ON irs.ssn = p.ssn
LEFT OUTER JOIN SSPF_CENTRE.districts di
ON di.code = pa.city
LEFT OUTER JOIN SSPF_CENTRE.countries citiz_c
ON p.citizenship = citiz_c.numeric_code
LEFT OUTER JOIN SSPF_CENTRE.countries coun_c
ON pa.country_code = coun_c.numeric_code
WHERE pa.address_flag = 1 -- Here's the column value without quotes
AND fa.form_type = 'Q3';
I've put separating comments around the changed subqueries and on the WHERE clause in both queries. Both versions of the subqueries return the same result; one of them is just slower, which is why I decided to update it.
With the simplest possible example I can't reproduce your problem on 11.2.0.3.0 or 11.2.0.1.0.
SQL> create table tmp_test ( a char(1) );
Table created.
SQL> insert into tmp_test values ('1');
1 row created.
SQL> select *
2 from tmp_test
3 where a = 1;
A
-
1
If I then insert a non-numeric value into the table I can confirm Chris' comment "that Oracle will rewrite tab1.col1 = 1 to to_number(tab1.col1) = 1", which implies that you only have numeric characters in the column.
SQL> insert into tmp_test values ('a');
1 row created.
SQL> select *
2 from tmp_test
3 where a = 1;
ERROR:
ORA-01722: invalid number
no rows selected
If you're interested in tracking this down, you should gradually reduce the complexity of the query until you have a minimal, reproducible example. Oracle can pre-compute a conversion to be used in a JOIN, which, since your query is complex, seems like a possible explanation of what's happening.
Oracle explicitly recommends against relying on implicit conversion, so it's wiser not to use it at all, as you're finding out. For a start, there's no guarantee that your indexes will be used correctly.
Oracle recommends that you specify explicit conversions, rather than rely on implicit or automatic conversions, for these reasons:
SQL statements are easier to understand when you use explicit data type conversion functions.
Implicit data type conversion can have a negative impact on performance, especially if the data type of a column value is converted to that of a constant rather than the other way around.
Implicit conversion depends on the context in which it occurs and may not work the same way in every case. For example, implicit conversion from a datetime value to a VARCHAR2 value may return an unexpected year depending on the value of the NLS_DATE_FORMAT parameter.
Algorithms for implicit conversion are subject to change across software releases and among Oracle products. Behavior of explicit conversions is more predictable.
If you do only have numeric characters in the column I would highly recommend changing this to a NUMBER(1) column and I would always recommend explicit conversion to avoid a lot of pain in the longer run.
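For example, a minimal sketch of the explicit forms, using the tmp_test table from above (pick whichever matches the intent):
SELECT * FROM tmp_test WHERE a = '1';             -- compare as strings; a plain index on a stays usable
SELECT * FROM tmp_test WHERE TO_NUMBER(a) = 1;    -- compare as numbers; can raise ORA-01722 if any row holds a non-numeric value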
It's hard to tell without the actual query. What I would expect is that TAB1.COL1 is in some way different before and after the refactoring.
Candidate differences are NUMBER vs. CHAR(1) vs. CHAR(x>1) vs. VARCHAR2.
It is easy to introduce differences like this with subqueries where you join two tables which have different types in the join column and you return different columns in your subquery.
To hunt this issue down you might want to check the exact datatypes in your query. I'm not sure how to do that directly, but one idea would be to put the subquery in a view and use SQL*Plus DESC on it.
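A hedged sketch of that idea, using the changed subquery from Version 2 (tmp_pd_check is a made-up name; adjust the column list to the real one):
CREATE OR REPLACE VIEW tmp_pd_check AS
SELECT ssn, type, series, doc_number, issue_date, issuing_body
FROM (SELECT pd.*,
             ROW_NUMBER() OVER (PARTITION BY ssn ORDER BY type) rn
      FROM SSPF_CENTRE.person_documents pd)
WHERE rn = 1;
DESC tmp_pd_check
The DESC output shows the exact datatype of each column that the outer query ends up comparing against.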
I am attempting to figure out if a column in Oracle is populated from a sequence. My impression of how Oracle handles sequencing is that the sequence and column are separate entities and one needs to either manually insert the next sequence value like:
insert into tbl1 values(someseq.nextval, 'test')
or put it into a table trigger, meaning that it is non-trivial to tell whether a column is populated from a sequence. Is that correct? Any ideas about how I might go about figuring out whether a column is populated from a sequence?
You are correct; the sequence is separate from the table, and a single sequence can be used to populate any table, and the values in a column in some table may mostly come from a sequence (or set of sequences), except for the values manually generated.
In other words, there is no mandatory connection between a column and a sequence - and therefore no way to discover such a relationship from the schema.
Ultimately, the analysis will be of the source code of all applications that insert or update data in the table. Nothing else is guaranteed. You can reduce the scope of the search if there is a stored procedure that is the only way to make modifications to the table, or if there is a trigger that sets the value, or other such things. But the general solution is the 'non-solution' of 'analyze the source'.
If the sequence is used in a trigger, it is possible to find which tables it populates:
SQL> select t.table_name, d.referenced_name as sequence_name
2 from user_triggers t
3 join user_dependencies d
4 on d.name = t.trigger_name
5 where d.referenced_type = 'SEQUENCE'
6 and d.type = 'TRIGGER'
7 /
TABLE_NAME SEQUENCE_NAME
------------------------------ ------------------------------
EMP EMPNO_SEQ
SQL>
You can vary this query to find stored procedures, etc that make use of the sequence.
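For example, a hedged sketch of one such variation, listing PL/SQL units that reference any sequence (same USER_DEPENDENCIES view, different TYPE filter):
SELECT d.name, d.type, d.referenced_name AS sequence_name
FROM user_dependencies d
WHERE d.referenced_type = 'SEQUENCE'
AND d.type IN ('PROCEDURE', 'FUNCTION', 'PACKAGE BODY');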
There are no direct metadata links between Oracle sequences and any use of them in the database. You could make an intelligent guess as to whether a column's values are related to a sequence by querying the USER_SEQUENCES metadata and comparing the LAST_NUMBER column to the data in the column.
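A hedged sketch of that LAST_NUMBER comparison, reusing the EMP/EMPNO_SEQ names from the earlier example (substitute your own table, column and sequence):
SELECT s.sequence_name, s.last_number, e.max_empno
FROM user_sequences s
CROSS JOIN (SELECT MAX(empno) AS max_empno FROM emp) e
WHERE s.sequence_name = 'EMPNO_SEQ';
-- If last_number is at or just above max_empno, the column is probably populated from the sequence.
The trigger-based approach below gives a firmer answer when a trigger populates the column: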
select t.table_name,
d.referenced_name as sequence_name,
d.REFERENCED_OWNER as "OWNER",
c.COLUMN_NAME
from user_trigger_cols t, user_dependencies d, user_tab_cols c
where d.name = t.trigger_name
and t.TABLE_NAME = c.TABLE_NAME
and t.COLUMN_NAME = c.COLUMN_NAME
and d.referenced_type = 'SEQUENCE'
and d.type = 'TRIGGER'
As Jonathan pointed out, there is no direct way to relate the two objects. However, if you "keep a standard" for primary keys and sequences/triggers, you could find out by finding the primary key and then associating the constraint with the table's sequence.
I needed something similar, since we are building a multi-DB product and I was trying to replicate some classes with properties found in the .NET DataTable object, which has AutoIncrement, IncrementSeed and IncrementStep; those values can only be found in the sequences.
So, as I said, if your tables use a PK and always have a sequence associated with a trigger for inserts, then this may come in handy:
select tc.table_name,
case tc.nullable
when 'Y' then 1
else 0
end as is_nullable,
case ac.constraint_type
when 'P' then 1
else 0
end as is_identity,
ac.constraint_type,
seq.min_value as auto_increment_seed,
seq.increment_by as auto_increment_step,
com.comments as caption,
tc.column_name,
tc.data_type,
tc.data_default as default_value,
tc.data_length as max_length,
tc.column_id,
tc.data_precision as precision,
tc.data_scale as scale
from SYS.all_tab_columns tc
left outer join SYS.all_col_comments com
on (tc.column_name = com.column_name and tc.table_name = com.table_name)
LEFT OUTER JOIN SYS.ALL_CONS_COLUMNS CC
on (tc.table_name = cc.table_name and tc.column_name = cc.column_name and tc.owner = cc.owner)
LEFT OUTER JOIN SYS.ALL_CONSTRAINTS AC
ON (ac.constraint_name = cc.constraint_name and ac.owner = cc.owner)
LEFT outer join user_triggers trg
on (ac.table_name = trg.table_name and ac.owner = trg.table_owner)
LEFT outer join user_dependencies dep
on (trg.trigger_name = dep.name and dep.referenced_type='SEQUENCE' and dep.type='TRIGGER')
LEFT outer join user_sequences seq
on (seq.sequence_name = dep.referenced_name)
where tc.table_name = 'TABLE_NAME'
and tc.owner = 'SCHEMA_NAME'
AND AC.CONSTRAINT_TYPE = 'P'
union all
select tc.table_name,
case tc.nullable
when 'Y' then 1
else 0
end as is_nullable,
case ac.constraint_type
when 'P' then 1
else 0
end as is_identity,
ac.constraint_type,
seq.min_value as auto_increment_seed,
seq.increment_by as auto_increment_step,
com.comments as caption,
tc.column_name,
tc.data_type,
tc.data_default as default_value,
tc.data_length as max_length,
tc.column_id,
tc.data_precision as precision,
tc.data_scale as scale
from SYS.all_tab_columns tc
left outer join SYS.all_col_comments com
on (tc.column_name = com.column_name and tc.table_name = com.table_name)
LEFT OUTER JOIN SYS.ALL_CONS_COLUMNS CC
on (tc.table_name = cc.table_name and tc.column_name = cc.column_name and tc.owner = cc.owner)
LEFT OUTER JOIN SYS.ALL_CONSTRAINTS AC
ON (ac.constraint_name = cc.constraint_name and ac.owner = cc.owner)
LEFT outer join user_triggers trg
on (ac.table_name = trg.table_name and ac.owner = trg.table_owner)
LEFT outer join user_dependencies dep
on (trg.trigger_name = dep.name and dep.referenced_type='SEQUENCE' and dep.type='TRIGGER')
LEFT outer join user_sequences seq
on (seq.sequence_name = dep.referenced_name)
where tc.table_name = 'TABLE_NAME'
and tc.owner = 'SCHEMA_NAME'
AND AC.CONSTRAINT_TYPE is null;
That would give you the list of columns for a schema/table with:
Table name
If column is nullable
Constraint type (only for PK's)
Increment seed (from the sequence)
Increment step (from the sequence)
Column comments
Column name, of course :)
Data type
Default value, if any
Length of column
Index (column id)
Precision (for numbers)
Scale (for numbers)
I'm pretty sure that code can be optimized, but it works for me; I use it to "load metadata" for tables and then represent that metadata as entities on my frontend.
Note that I'm filtering only primary keys and not retrieving compound key constraints, since I don't care about those. If you do, you'll have to modify the code to do so, and make sure that you filter duplicates, since you could get one column twice (one for the PK constraint, another for the compound key).