Does Oracle implicit conversion depend on joined tables or views?

I've run into a weird problem. The query itself is huge, so I'm not going to post it here (I could post it, however, if someone needs to see it). I have a table, TABLE1, with a CHAR(1) column, COL1. This column is queried as part of my query. When I filter the result set on this column I write:
WHERE TAB1.COL1=1
The query runs this way and returns a very big result set. I recently rewrote one of the subqueries to speed the query up. But after that change, WHERE TAB1.COL1=1 returns nothing, while WHERE TAB1.COL1='1' gives me the records I need. Note the WHERE clause with and without quotes. To make it clearer: before updating the subquery I did not have to quote the value compared against COL1, but after updating I do. What Oracle feature am I not aware of?
EDIT: I'm posting the two versions of the query in case someone finds them useful.
Version 1:
SELECT p.ssn,
pss.pin,
pd.doc_number,
p.surname,
p.name,
p.patronymic,
to_number(p.sex, '9') as sex,
citiz_c.short_name citizenship,
p.birth_place,
p.birth_day as birth_date,
coun_c.short_name as country,
di.name as leg_city,
trim( pa.settlement
|| ' '
|| pa.street) AS leg_street,
pd.issue_date,
pd.issuing_body,
irs.irn,
irs.tpn,
irs.reg_office,
to_number(irs.insurer_type, '9') as insurer_type,
TO_CHAR(sa.REG_CODE)
||CONVERT_INT_TO_DOUBLE_LETTER(TO_NUMBER(SUBSTR(TO_CHAR(sa.DOSSIER_NR, '0999999'), 2, 3)))
||SUBSTR(TO_CHAR(sa.DOSSIER_NR, '0999999'), 5, 4) CONVERTED_SSN_DOSSIER_NR,
fa.snr
FROM
(SELECT pss_t.pin,
pss_t.ssn
FROM EHDIS_INSURANCE.pin_ssn_status pss_t
WHERE pss_t.difference_status < 5
) pss
INNER JOIN SSPF_CENTRE.file_archive fa
ON fa.ssn = pss.ssn
INNER JOIN SSPF_CENTRE.persons p
ON p.ssn = fa.ssn
INNER JOIN
(SELECT pd_2.ssn,
pd_2.type,
pd_2.series,
pd_2.doc_number,
pd_2.issue_date,
pd_2.issuing_body
FROM
--The changed subquery starts here
(SELECT ssn,
MIN(type) AS type
FROM SSPF_CENTRE.person_documents
GROUP BY ssn
) pd_1
INNER JOIN SSPF_CENTRE.person_documents pd_2
ON pd_2.type = pd_1.type
AND pd_2.ssn = pd_1.ssn
) pd
--The changed subquery ends here
ON pd.ssn = p.ssn
INNER JOIN SSPF_CENTRE.ssn_archive sa
ON p.ssn = sa.ssn
INNER JOIN SSPF_CENTRE.person_addresses pa
ON p.ssn = pa.ssn
INNER JOIN
(SELECT i_t.irn,
irs_t.ssn,
i_t.tpn,
i_t.reg_office,
(
CASE i_t.insurer_type
WHEN '4'
THEN '1'
ELSE i_t.insurer_type
END) AS insurer_type
FROM sspf_centre.irn_registered_ssn irs_t
INNER JOIN SSPF_CENTRE.insurers i_t
ON i_t.irn = irs_t.new_irn
OR i_t.old_irn = irs_t.old_irn
WHERE irs_t.is_registration IS NOT NULL
AND i_t.is_real IS NOT NULL
) irs ON irs.ssn = p.ssn
LEFT OUTER JOIN SSPF_CENTRE.districts di
ON di.code = pa.city
LEFT OUTER JOIN SSPF_CENTRE.countries citiz_c
ON p.citizenship = citiz_c.numeric_code
LEFT OUTER JOIN SSPF_CENTRE.countries coun_c
ON pa.country_code = coun_c.numeric_code
WHERE pa.address_flag = '1'--Here's the column value with quotes
AND fa.form_type = 'Q3';
And Version 2:
SELECT p.ssn,
pss.pin,
pd.doc_number,
p.surname,
p.name,
p.patronymic,
to_number(p.sex, '9') as sex,
citiz_c.short_name citizenship,
p.birth_place,
p.birth_day as birth_date,
coun_c.short_name as country,
di.name as leg_city,
trim( pa.settlement
|| ' '
|| pa.street) AS leg_street,
pd.issue_date,
pd.issuing_body,
irs.irn,
irs.tpn,
irs.reg_office,
to_number(irs.insurer_type, '9') as insurer_type,
TO_CHAR(sa.REG_CODE)
||CONVERT_INT_TO_DOUBLE_LETTER(TO_NUMBER(SUBSTR(TO_CHAR(sa.DOSSIER_NR, '0999999'), 2, 3)))
||SUBSTR(TO_CHAR(sa.DOSSIER_NR, '0999999'), 5, 4) CONVERTED_SSN_DOSSIER_NR,
fa.snr
FROM
(SELECT pss_t.pin,
pss_t.ssn
FROM EHDIS_INSURANCE.pin_ssn_status pss_t
WHERE pss_t.difference_status < 5
) pss
INNER JOIN SSPF_CENTRE.file_archive fa
ON fa.ssn = pss.ssn
INNER JOIN SSPF_CENTRE.persons p
ON p.ssn = fa.ssn
INNER JOIN
--The changed subquery starts here
(SELECT ssn,
type,
series,
doc_number,
issue_date,
issuing_body
FROM
(SELECT ssn,
type,
series,
doc_number,
issue_date,
issuing_body,
ROW_NUMBER() OVER (partition BY ssn order by type) rn
FROM SSPF_CENTRE.person_documents
)
WHERE rn = 1
) pd --
--The changed subquery ends here
ON pd.ssn = p.ssn
INNER JOIN SSPF_CENTRE.ssn_archive sa
ON p.ssn = sa.ssn
INNER JOIN SSPF_CENTRE.person_addresses pa
ON p.ssn = pa.ssn
INNER JOIN
(SELECT i_t.irn,
irs_t.ssn,
i_t.tpn,
i_t.reg_office,
(
CASE i_t.insurer_type
WHEN '4'
THEN '1'
ELSE i_t.insurer_type
END) AS insurer_type
FROM sspf_centre.irn_registered_ssn irs_t
INNER JOIN SSPF_CENTRE.insurers i_t
ON i_t.irn = irs_t.new_irn
OR i_t.old_irn = irs_t.old_irn
WHERE irs_t.is_registration IS NOT NULL
AND i_t.is_real IS NOT NULL
) irs ON irs.ssn = p.ssn
LEFT OUTER JOIN SSPF_CENTRE.districts di
ON di.code = pa.city
LEFT OUTER JOIN SSPF_CENTRE.countries citiz_c
ON p.citizenship = citiz_c.numeric_code
LEFT OUTER JOIN SSPF_CENTRE.countries coun_c
ON pa.country_code = coun_c.numeric_code
WHERE pa.address_flag = 1--Here's the column value without quotes
AND fa.form_type = 'Q3';
I've put separating comments around the changed subqueries and the WHERE clause in both queries. Both versions of the subquery return the same result; one of them is just slower, which is why I decided to update it.

With the most simplistic example I can't reproduce your problem on 11.2.0.3.0 or 11.2.0.1.0.
SQL> create table tmp_test ( a char(1) );
Table created.
SQL> insert into tmp_test values ('1');
1 row created.
SQL> select *
2 from tmp_test
3 where a = 1;
A
-
1
If I then insert a non-numeric value into the table I can confirm Chris' comment "that Oracle will rewrite tab1.col1 = 1 to to_number(tab1.col1) = 1", which implies that you only have numeric characters in the column.
SQL> insert into tmp_test values ('a');
1 row created.
SQL> select *
2 from tmp_test
3 where a = 1;
ERROR:
ORA-01722: invalid number
no rows selected
If you're interested in tracking this down you should gradually reduce the complexity of the query until you have a minimal, reproducible example. Oracle can pre-compute a conversion to be used in a JOIN, which, as your query is complex, seems like a possible explanation of what's happening.
Oracle explicitly recommends against relying on implicit conversion, so it's wiser not to use it at all, as you're finding out. For a start, there's no guarantee that your indexes will be used correctly.
Oracle recommends that you specify explicit conversions, rather than rely on implicit or automatic conversions, for these reasons:
SQL statements are easier to understand when you use explicit data type conversion functions.
Implicit data type conversion can have a negative impact on performance, especially if the data type of a column value is converted to that of a constant rather than the other way around.
Implicit conversion depends on the context in which it occurs and may not work the same way in every case. For example, implicit conversion from a datetime value to a VARCHAR2 value may return an unexpected year depending on the value of the NLS_DATE_FORMAT parameter.
Algorithms for implicit conversion are subject to change across software releases and among Oracle products. Behavior of explicit conversions is more predictable.
If you only have numeric characters in the column I would highly recommend changing it to a NUMBER(1) column, and I would always recommend explicit conversion to avoid a lot of pain in the longer run.
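To make that concrete, here is a minimal, hedged sketch against the tmp_test toy table from above. Note that Oracle only lets you change a column's datatype in place while the column is empty, so a populated table needs the copy-and-swap route shown here, and it assumes every existing value really is numeric:
-- Explicit comparison: keep the literal on the character side, so the
-- optimizer never has to wrap TO_NUMBER() around the (possibly indexed) column.
SELECT * FROM tmp_test WHERE a = '1';

-- Changing the type of a populated column: add a NUMBER(1) column,
-- copy the data with an explicit conversion, then swap the columns over.
-- TO_NUMBER raises ORA-01722 if any non-numeric value is present.
ALTER TABLE tmp_test ADD (a_num NUMBER(1));
UPDATE tmp_test SET a_num = TO_NUMBER(a);
ALTER TABLE tmp_test DROP COLUMN a;
ALTER TABLE tmp_test RENAME COLUMN a_num TO a;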

It's hard to tell without the actual query. What I would expect is that TAB1.COL1 is in some way different before and after the refactoring.
Candidate differences are NUMBER vs. CHAR(1) vs. CHAR(x>1) vs. VARCHAR2.
It is easy to introduce differences like this with subqueries where you join two tables which have different types in the join column and you return different columns in your subquery.
To hunt that issue down you might want to check the exact datatypes your query returns. I'm not sure of the best way to do that off-hand, but one idea would be to put the query in a view and use SQL*Plus DESCRIBE on it.
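A hedged sketch of that idea, reusing the tmp_test table from the answer above (the view name is made up for illustration):
-- Wrap the query, or just the suspect subquery, in a view...
CREATE OR REPLACE VIEW v_type_check AS
SELECT t.a AS col1
FROM tmp_test t;

-- ...then describe it to see the datatype Oracle has resolved for each column:
-- SQL> DESC v_type_check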

Related

Transforming an Informix query to Oracle?

I have a query that doesn't work; can you help me with the transformation?
Here is the original Informix query that I want to transform to Oracle:
SELECT DISTINCT table3.no_cev,
table1.literal,
table1.colid,
table2.repid,
table2.valor,
table2.indicador,
'',
'',
table2.origen,
table2.codi,
table2.no_cia,
table2.num_dcca,
table2.no_aprof,
table2.no_compta
FROM table1,
OUTER table2,
table3
WHERE ( table1.colid = table2.colid) and
( table1.grupid = table2.grupid) and
( table3.no_cev = table2.no_cev) and
( ( table1.grupid = 2) AND
( table2.cod_exp = 99609 ) AND
( table2.indicador = 'S' ) ) AND
( table3.num_dcca = 1);
( table3.codest = 76695);
Here is my transformation of the query from Informix to Oracle, but it looks like it doesn't work:
SELECT DISTINCT table3.no_cev,
table1.literal,
table1.colid,
table2.repid,
table2.valor,
table2.indicador,
'',
'',
table2.origen,
table2.codi,
table2.no_cia,
table2.num_dcca,
table2.no_aprof,
table2.no_compta
FROM table1
LEFT OUTER JOIN (table2
RIGHT OUTER JOIN table3
ON table3.no_cev = table2.no_cev)
ON (( table1.colid = table2.colid)
AND ( table1.grupid = table2.grupid))
WHERE ( ( table1.grupid = '2' )
AND ( table2.cod_exp = '99609' )
AND ( table2.indicador = 'S' ) )
AND ( table3.num_dcca = '1')
AND ( table3.codest = '76695');
You have joined the tables with the ON clause in the wrong place in the code.
I've corrected your code as follows:
SELECT DISTINCT
TABLE3.NO_CEV,
TABLE1.LITERAL,
TABLE1.COLID,
TABLE2.REPID,
TABLE2.VALOR,
TABLE2.INDICADOR,
'',
'',
TABLE2.ORIGEN,
TABLE2.CODI,
TABLE2.NO_CIA,
TABLE2.NUM_DCCA,
TABLE2.NO_APROF,
TABLE2.NO_COMPTA
FROM
TABLE1
LEFT OUTER JOIN
-- ( -- removed this bracket
TABLE2 ON ( ( TABLE1.COLID = TABLE2.COLID )
AND ( TABLE1.GRUPID = TABLE2.GRUPID ) ) -- added this ON here
RIGHT OUTER JOIN TABLE3 ON TABLE3.NO_CEV = TABLE2.NO_CEV
-- ) -- removed this bracket
WHERE
TABLE1.GRUPID = '2'
AND TABLE2.COD_EXP = '99609'
AND TABLE2.INDICADOR = 'S'
AND TABLE3.NUM_DCCA = '1'
AND TABLE3.CODEST = '76695' ; -- no need of extra brackets
Cheers!!
It makes life unnecessarily difficult for people who would like to help you when you don't include a more or less minimal outline schema for the tables used in your query, and some sample data, and the expected results. Further, you seem to have converted numbers (integers) in the original Informix query into strings in the Oracle query. It is not clear why. Again, the schema would help explain what's going on.
As I noted in the comments, you should omit the two empty/null fields in the select-list; you could also drop a number of the columns from table2 — candidates for being dropped include all the columns not otherwise named in the query, such as repid, valor, origen, codi, no_cia, no_aprof, no_compta. Keep one or two of them; you don't really need more. However, I've preserved all the named columns in the sample data.
Schema and data
Here is some Informix SQL that appears to match the tables and columns in the query shown in the question. In case of doubt, the column was made into an INTEGER column. All the columns are qualified with NOT NULL.
DROP TABLE IF EXISTS table1;
DROP TABLE IF EXISTS table2;
DROP TABLE IF EXISTS table3;
CREATE TABLE table1
(
grupid INTEGER NOT NULL, -- 2
literal VARCHAR(32) NOT NULL,
colid INTEGER NOT NULL
);
CREATE TABLE table2
(
grupid INTEGER NOT NULL,
no_cev INTEGER NOT NULL,
colid INTEGER NOT NULL,
repid INTEGER NOT NULL,
valor INTEGER NOT NULL,
indicador CHAR(1) NOT NULL, -- 'S'
origen INTEGER NOT NULL,
codi INTEGER NOT NULL,
no_cia INTEGER NOT NULL,
num_dcca INTEGER NOT NULL,
no_aprof INTEGER NOT NULL,
no_compta INTEGER NOT NULL,
cod_exp INTEGER NOT NULL -- 99609
);
CREATE TABLE table3
(
no_cev INTEGER NOT NULL,
num_dcca INTEGER NOT NULL, -- 1
codest INTEGER NOT NULL -- 76695
);
LOAD FROM "table1.unl" INSERT INTO table1;
LOAD FROM "table2.unl" INSERT INTO table2;
LOAD FROM "table3.unl" INSERT INTO table3;
The annotations indicate the value specified in the query for that column; they helped guide the construction of the sample data.
Three sample data files in the Informix (pipe-separated values) UNLOAD format are:
table1.unl
2|Literal value 1|100
2|Literal value 2|123
2|Literal value 3|134
2|Literal value 4|145
table2.unl
2|2345|100|222|333|S|444|555|666|777|888|999|99609
2|2346|123|223|333|S|444|555|666|776|888|999|99609
2|2347|134|224|333|S|444|555|666|775|888|999|99609
2|2348|145|225|333|S|444|555|666|774|888|999|99609
1|2345|100|225|333|S|444|555|666|773|888|999|99609
2|2340|123|226|333|S|444|555|666|772|888|999|99609
3|2347|134|227|333|S|444|555|666|771|888|999|99609
2|2350|145|228|333|S|444|555|666|770|888|999|99609
table3.unl
2345|1|76695
2346|1|88776
2347|2|76695
2348|1|76695
Result of query using Informix-style OUTER join
Assuming that the stray early semicolon in the original query should be an AND (that matches what is written in the proposed Oracle query), removing the two empty string result columns, and removing the superfluous level of parentheses, then the original query looks like:
SELECT DISTINCT
table3.no_cev,
table1.literal,
table1.colid,
table2.repid,
table2.valor,
table2.indicador,
table2.origen,
table2.codi,
table2.no_cia,
table2.num_dcca,
table2.no_aprof,
table2.no_compta
FROM table1,
OUTER table2,
table3
WHERE (table1.colid = table2.colid) AND
(table1.grupid = table2.grupid) AND
(table3.no_cev = table2.no_cev) AND
(table1.grupid = 2) AND
(table2.cod_exp = 99609) AND
(table2.indicador = 'S') AND
(table3.num_dcca = 1) AND
(table3.codest = 76695);
On the sample data shown, using Informix 12.10.FC6 running on a MacBook Pro with macOS 10.14.6 Mojave (not that the o/s is likely to be a factor in the results), this produces:
2345|Literal value 1|100|222|333|S|444|555|666|777|888|999
2345|Literal value 2|123|||||||||
2345|Literal value 3|134|||||||||
2345|Literal value 4|145|||||||||
2348|Literal value 1|100|||||||||
2348|Literal value 2|123|||||||||
2348|Literal value 3|134|||||||||
2348|Literal value 4|145|225|333|S|444|555|666|774|888|999
Why, you ask? Good question! The Informix old-style OUTER join is a complex critter, and doesn't necessarily have a simple translation to modern standard SQL (and hence to Oracle, etc). You can find some description of the way it works at Complex Outer Joins.
There are two groups of tables — table1 and table3 are the dominant tables, and table2 is the only OUTER table here. This means that Informix processes table1 and table3 using inner join, and then outer joins the result with table2. Since there is no direct join between table1 and table3, the result is a cartesian product of the two tables — each of the 4 rows in table1 is joined with each of the 4 rows in table3, yielding 16 rows. However, the filter conditions eliminate the rows from table3 where no_cev is 2346 and 2347. All the remaining 8 rows will be preserved, regardless of the results of the outer join operation. Now the rows are outer joined with table2. The rows with (no_cev, colid) of (2345, 100) and (2348, 145) have matching rows in table2 where the data satisfies the conditions in the WHERE clause. The other rows don't have such matching rows so the columns from table2 for those rows are 'all NULL'. As I said, it is weird — contorted. And explaining is hard work!
A first approximation using standard SQL
This query is a moderate approximation to a direct translation of the Informix query:
SELECT DISTINCT
t3.no_cev,
t1.literal,
t1.colid,
t2.repid,
t2.valor,
t2.indicador,
t2.origen,
t2.codi,
t2.no_cia,
t2.num_dcca,
t2.no_aprof,
t2.no_compta
FROM table1 AS t1
INNER JOIN table3 AS t3 ON 1 = 1
LEFT JOIN table2 AS t2 ON t3.no_cev = t2.no_cev
AND t1.colid = t2.colid
AND t1.grupid = t2.grupid
WHERE t1.grupid = 2
AND t2.cod_exp = 99609
AND t2.indicador = 'S'
AND t3.num_dcca = 1
AND t3.codest = 76695;
The output is:
2345|Literal value 1|100|222|333|S|444|555|666|777|888|999
2348|Literal value 4|145|225|333|S|444|555|666|774|888|999
This is missing the rows with 'null values'.
Achieving the same result using standard INNER and OUTER joins
We can collect those rows by looking for rows where one of the columns in table2 is null (because they're either all null or none null — because the columns are qualified NOT NULL):
SELECT DISTINCT
t3.no_cev,
t1.literal,
t1.colid,
t2.repid,
t2.valor,
t2.indicador,
t2.origen,
t2.codi,
t2.no_cia,
t2.num_dcca,
t2.no_aprof,
t2.no_compta
FROM table1 AS t1
INNER JOIN table3 AS t3 ON 1 = 1
LEFT JOIN table2 AS t2 ON t3.no_cev = t2.no_cev
AND t1.colid = t2.colid
AND t1.grupid = t2.grupid
WHERE t1.grupid = 2
AND ((t2.cod_exp = 99609 AND t2.indicador = 'S') OR t2.cod_exp IS NULL)
AND t3.num_dcca = 1
AND t3.codest = 76695;
This yields the output:
2345|Literal value 1|100|222|333|S|444|555|666|777|888|999
2345|Literal value 2|123|||||||||
2345|Literal value 3|134|||||||||
2345|Literal value 4|145|||||||||
2348|Literal value 1|100|||||||||
2348|Literal value 2|123|||||||||
2348|Literal value 3|134|||||||||
2348|Literal value 4|145|225|333|S|444|555|666|774|888|999
This is the same as the original old-style Informix OUTER join query.
Tejash's proposed solution
The SQL in Tejash's answer (revision 1) yields, on the same data:
2345|Literal value 1|100|222|333|S|\ |\ |444|555|666|777|888|999
2348|Literal value 4|145|225|333|S|\ |\ |444|555|666|774|888|999
The backslash-space values correspond to the empty strings — it's Informix's slightly peculiar way of encoding a zero-length non-null string. It's an area where Oracle may well behave slightly differently, but it is tangential to the problem with the query.
Clearly, this is not the same result as the Informix query. It's probably more reasonable; it works out of the box (I simply did copy'n'paste, quoted numbers and all, and it worked with no editing needed).
I don't know the Informix OUTER syntax, so my answer may be wrong. However, since the WHERE clause contains no join condition between table1 and table3, this looks like a cross join of table1 and table3 followed by an outer join to table2.
One way to write this:
select t3.no_cev, t1.literal, t1.colid, t2.*
from table1 t1
cross join table3 t3
left join table2 t2 on t2.colid = t1.colid
and t2.grupid = t1.grupid
and t2.no_cev = t3.no_cev
and t2.cod_exp = 99609
and t2.indicador = 'S'
where t1.grupid = 2
and t3.num_dcca = 1
and t3.codest = 76695;
Another is:
with t1 as (select * from table1 where grupid = 2)
, t2 as (select * from table2 where grupid = 2 and cod_exp = 99609 and indicador = 'S')
, t3 as (select * from table3 where num_dcca = 1 and codest = 76695)
select t3.no_cev, t1.literal, t1.colid, t2.*
from t1
cross join t3
left join t2 on t2.colid = t1.colid and t2.no_cev = t3.no_cev;
The queries above are standard SQL and have been supported by Oracle since version 9i, I think.

ORA-00947 not enough values with function returning table of records

So I'm trying to build a function that returns the records of items that are included in some client subscription.
So I've been building up the following:
2 types:
CREATE OR REPLACE TYPE PGM_ROW AS OBJECT
(
pID NUMBER(10),
pName VARCHAR2(300)
);
CREATE OR REPLACE TYPE PGM_TAB AS TABLE OF PGM_ROW;
1 function:
CREATE OR REPLACE FUNCTION FLOGIN (USER_ID NUMBER) RETURN PGM_TAB
AS
SELECTED_PGM PGM_TAB;
BEGIN
FOR RESTRICTION
IN ( SELECT (SELECT LISTAGG (ID_CHANNEL, ',')
WITHIN GROUP (ORDER BY ID_CHANNEL)
FROM (SELECT DISTINCT CHA2.ID_CHANNEL
FROM CHANNELS_ACCESSES CHA2
JOIN CHANNELS CH2
ON CH2.ID = CHA2.ID_CHANNEL
WHERE CHA2.ID_ACCESS = CMPA.ID_ACCESS
AND CH2.ID_CHANNELS_GROUP = CG.ID))
AS channels,
(SELECT LISTAGG (ID_SUBGENRE, ',')
WITHIN GROUP (ORDER BY ID_SUBGENRE)
FROM (SELECT DISTINCT SGA2.ID_SUBGENRE
FROM SUBGENRES_ACCESSES SGA2
JOIN CHANNELS_ACCESSES CHA2
ON CHA2.ID_ACCESS = SGA2.ID_ACCESS
JOIN CHANNELS CH2
ON CH2.ID = CHA2.ID_CHANNEL
WHERE SGA2.ID_ACCESS = CMPA.ID_ACCESS
AND CH2.ID_CHANNELS_GROUP = CG.ID))
AS subgenres,
CG.NAME,
A.BEGIN_DATE,
A.END_DATE,
CMP.PREVIEW_ACCESS
FROM USERS U
JOIN COMPANIES_ACCESSES CMPA
ON U.ID_COMPANY = CMPA.ID_COMPANY
JOIN COMPANIES CMP ON CMP.ID = CMPA.ID_COMPANY
JOIN ACCESSES A ON A.ID = CMPA.ID_ACCESS
JOIN CHANNELS_ACCESSES CHA
ON CHA.ID_ACCESS = CMPA.ID_ACCESS
JOIN SUBGENRES_ACCESSES SGA
ON SGA.ID_ACCESS = CMPA.ID_ACCESS
JOIN CHANNELS CH ON CH.ID = CHA.ID_CHANNEL
JOIN CHANNELS_GROUPS CG ON CG.ID = CH.ID_CHANNELS_GROUP
WHERE U.ID = USER_ID
GROUP BY CG.NAME,
A.BEGIN_DATE,
A.END_DATE,
CMPA.ID_ACCESS,
CG.ID,
CMP.PREVIEW_ACCESS)
LOOP
SELECT PFT.ID_PROGRAM, PFT.LOCAL_TITLE
BULK COLLECT INTO SELECTED_PGM
FROM PROGRAMS_FT PFT
WHERE PFT.ID_CHANNEL IN
( SELECT TO_NUMBER (
REGEXP_SUBSTR (RESTRICTION.CHANNELS,
'[^,]+',
1,
ROWNUM))
FROM DUAL
CONNECT BY LEVEL <=
TO_NUMBER (
REGEXP_COUNT (RESTRICTION.CHANNELS,
'[^,]+')))
AND PFT.ID_SUBGENRE IN
( SELECT TO_NUMBER (
REGEXP_SUBSTR (RESTRICTION.SUBGENRES,
'[^,]+',
1,
ROWNUM))
FROM DUAL
CONNECT BY LEVEL <=
TO_NUMBER (
REGEXP_COUNT (RESTRICTION.SUBGENRES,
'[^,]+')))
AND (PFT.LAUNCH_DATE BETWEEN RESTRICTION.BEGIN_DATE
AND RESTRICTION.END_DATE);
END LOOP;
RETURN SELECTED_PGM;
END FLOGIN;
I expect the function to return a table with 2 columns containing all the records from table PROGRAMS_FT that are included in the user's access.
For some reason, I'm getting compilation warning ORA-00947.
My understanding of the error code is that it occurs when the values being inserted do not match the type of the object receiving them, and I can't see how that can be the case here.
You're selecting two scalar values and trying to put them into an object. That doesn't happen automatically; you need to convert them to an object:
...
LOOP
SELECT PGM_ROW(PFT.ID_PROGRAM, PFT.LOCAL_TITLE)
BULK COLLECT INTO SELECTED_PGM
FROM PROGRAMS_FT PFT
...
(It's an unhelpful quirk of PL/SQL that it says 'not enough values' rather than 'too many values', as you might expect when you try to put two things into one; I'm sure I came up with a fairly convincing explanation/excuse for that once but it escapes me at the moment...)
I'm not sure your loop makes sense though. Assuming your cursor query returns multiple rows, each time around the loop you're replacing the contents of the SELECTED_PGM collection - you might think you are appending to it, but that's not how it works. So you will end up returning a collection based only on the final iteration of the loop.
Aggregating and then splitting the data seems like a lot of work too. You could maybe use collections for those; but you can probably get rid of the cursor and loop and combine the cursor query with the inner query, which would be more efficient and would allow you to do a single bulk-collect for all the combined data.
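To illustrate the appending point, here is a rough, hedged sketch (the loop and filter are placeholders for your RESTRICTION cursor and its conditions): BULK COLLECT each iteration's rows into a scratch collection and append it, rather than overwriting SELECTED_PGM.
DECLARE
  SELECTED_PGM PGM_TAB := PGM_TAB();   -- running result
  BATCH        PGM_TAB;                -- rows from one iteration
BEGIN
  FOR i IN 1 .. 3 LOOP                 -- stands in for the RESTRICTION cursor loop
    SELECT PGM_ROW(PFT.ID_PROGRAM, PFT.LOCAL_TITLE)
      BULK COLLECT INTO BATCH
      FROM PROGRAMS_FT PFT
     WHERE ROWNUM <= 10;               -- placeholder predicate
    SELECTED_PGM := SELECTED_PGM MULTISET UNION ALL BATCH;  -- append, don't replace
  END LOOP;
  DBMS_OUTPUT.PUT_LINE(SELECTED_PGM.COUNT || ' rows collected');
END;
/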

INSERT ALL statement incredibly slow, even when no records to insert

This is on 11g. I have an INSERT ALL statement which uses a SELECT to build up the values to insert. The select has some subqueries that check that the record doesn't already exist. The problem is that the insert is taking over 30 minutes, even when there are zero rows to insert. The select statement on its own runs instantly, so the problem seems to be when it is used in conjunction with the INSERT ALL. I rewrote the statement to use MERGE but it was just as bad.
The table has no triggers. There is a primary key index, and a unique constraint on two of the columns, but nothing else that looks like it might be causing an issue. It currently has about 15000 rows, so definitely not big.
Does anyone have a suggestion for what might be causing this, or how to go about debugging it?
Here's the INSERT ALL statement:
insert all
into template_letter_merge_fields (merge_field_id, letter_type_id,table_name,field_name,pretty_name, tcl_proc)
values (template_let_mrg_fld_sequence.nextval,letter_type_id,table_name,field_name, pretty_name, tcl_proc)
select lt.letter_type_id,
i.object_type as table_name,
i.interface_key as field_name,
i.pretty_name as pretty_name,
case
when w.widget = 'dynamic_select' then
'dbi::'||i.interface_key||'::get_name'
when w.widget = 'category_tree' and
i.interface_key not like '%_name' and
i.interface_key not like '%_desc' then
'dbi::'||i.interface_key||'::get_name'
else
'dbi::'||i.interface_key||'::get_value'
end as tcl_proc
from template_letter_types lt,
dbi_interfaces i
left outer join acs_attributes aa on (aa.object_type||'_'||aa.attribute_name = i.interface_key
and decode(aa.object_type,'person','party','aims_organisation','party',aa.object_type) = i.object_type)
left outer join flexbase_attributes fa on fa.acs_attribute_id = aa.attribute_id
left outer join flexbase_widgets w on w.widget_name = fa.widget_name
where i.object_type IN (select linked_object_type
from template_letter_object_map lom
where lom.interface_object_type = lt.interface_object_type
union select lt.interface_object_type from dual
union select 'template_letter' from dual)
and lt.interface_object_type = lt.interface_object_type
and not exists (select 1
from template_letter_merge_fields m
where m.sql_code is null
and m.field_name = i.interface_key
and m.letter_type_id = lt.letter_type_id)
and not exists (select 1
from template_letter_merge_fields m2
where m2.pretty_name = i.pretty_name
and m2.letter_type_id = lt.letter_type_id)

Oracle Table Variables

Using Oracle PL/SQL, is there a simple equivalent of the following set of T-SQL statements? Everything I am finding is either hopelessly outdated or populates a table data type with no explanation of how to use the result other than writing the values to stdout.
declare #tempSites table (siteid int)
insert into #tempSites select siteid from site where state = 'TX'
if 10 > (select COUNT(*) from #tempSites)
begin
insert into #tempSites select siteid from site where state = 'OK'
end
select * from #tempSites ts inner join site on site.siteId = ts.siteId
As @AlexPoole points out in his comment, this is a fairly contrived example.
What I am attempting to do is get all sites that meet a certain set of criteria, and if there are not enough matches, then I am looking to use a different set of criteria.
Oracle doesn't have local temporary tables, and global temporary tables don't look appropriate here.
You could use a common table expression (subquery factoring):
with tempSites (siteId) as (
select siteid
from site
where state = 'TX'
union all
select siteid
from site
where state = 'OK'
and (select count(*) from site where state = 'TX') < 10
)
select s.*
from tempSites ts
join site s on s.siteid = ts.siteid;
That isn't quite the same thing, but gets all the TX IDs, and only includes the OK ones if the count of TX ones - which has to be repeated - is less than 10. The CTE is then joined back to the original table, which all seems a bit wasteful; you're hitting the same table three times.
You could use a subquery directly in a filter instead:
select *
from site
where state = 'TX'
or (state = 'OK'
and (select count(*) from site where state = 'TX') < 10);
but again the TX sites have to be retrieved (or at least counted) a second time.
You can do this with a single hit of the table using an inline view (or CTE if you prefer) with an analytic count - which adds the count of TX rows to the columns in the actual table, so you'd probably want to exclude that dummy column from the final result set (but using * is bad practice anyway):
select * -- but list columns, excluding tx_count
from (
select s.*,
count(case when state = 'TX' then state end) over (partition by null) as tx_count
from site s
where s.state in ('TX', 'OK')
)
where state = 'TX'
or (state = 'OK' and tx_count < 10);
From your description of your research it sounds like what you've been looking at involved PL/SQL code populating a collection, which you could still do, but it's probably overkill unless your real situation is much more complicated.
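For completeness, a rough sketch of what that collection-based approach might look like (purely illustrative, using the site table from the question; the single-query versions above are usually the better choice):
DECLARE
  TYPE t_site_ids IS TABLE OF site.siteid%TYPE;   -- nested table instead of a temp table
  l_site_ids t_site_ids;
  l_ok_ids   t_site_ids;
BEGIN
  SELECT siteid BULK COLLECT INTO l_site_ids
  FROM   site
  WHERE  state = 'TX';

  IF l_site_ids.COUNT < 10 THEN
    SELECT siteid BULK COLLECT INTO l_ok_ids
    FROM   site
    WHERE  state = 'OK';
    l_site_ids := l_site_ids MULTISET UNION ALL l_ok_ids;  -- append the fallback set
  END IF;

  -- Joining the collection back to the table from SQL needs a SQL-level
  -- collection type (or 12c+) and TABLE(), which is part of why this is
  -- overkill compared with the single-query versions above.
  DBMS_OUTPUT.put_line(l_site_ids.COUNT || ' site IDs collected');
END;
/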

Find if a column in Oracle has a sequence

I am attempting to figure out if a column in Oracle is populated from a sequence. My impression of how Oracle handles sequencing is that the sequence and column are separate entities and one needs to either manually insert the next sequence value like:
insert into tbl1 values(someseq.nextval, 'test')
or put it into a table trigger, meaning that it is non-trivial to tell whether a column is populated from a sequence. Is that correct? Any ideas about how I might go about figuring out whether a column is populated from a sequence?
You are correct: the sequence is separate from the table, a single sequence can be used to populate any table, and the values in a column may mostly come from a sequence (or set of sequences) except for any values generated manually.
In other words, there is no mandatory connection between a column and a sequence - and therefore no way to discover such a relationship from the schema.
Ultimately, the analysis will be of the source code of all applications that insert or update data in the table. Nothing else is guaranteed. You can reduce the scope of the search if there is a stored procedure that is the only way to make modifications to the table, or if there is a trigger that sets the value, or other such things. But the general solution is the 'non-solution' of 'analyze the source'.
If the sequence is used in a trigger, it is possible to find which tables it populates:
SQL> select t.table_name, d.referenced_name as sequence_name
2 from user_triggers t
3 join user_dependencies d
4 on d.name = t.trigger_name
5 where d.referenced_type = 'SEQUENCE'
6 and d.type = 'TRIGGER'
7 /
TABLE_NAME SEQUENCE_NAME
------------------------------ ------------------------------
EMP EMPNO_SEQ
SQL>
You can vary this query to find stored procedures, etc that make use of the sequence.
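For example, one hedged variation that lists PL/SQL units referencing a sequence might look like this:
select d.name, d.type, d.referenced_name as sequence_name
  from user_dependencies d
 where d.referenced_type = 'SEQUENCE'
   and d.type in ('PROCEDURE', 'FUNCTION', 'PACKAGE BODY');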
There are no direct metadata links between Oracle sequences and any use of them in the database. You could make an intelligent guess as to whether a column's values are related to a sequence by querying the USER_SEQUENCES metadata and comparing the LAST_NUMBER column to the data in the column.
select t.table_name,
d.referenced_name as sequence_name,
d.REFERENCED_OWNER as "OWNER",
c.COLUMN_NAME
from user_trigger_cols t, user_dependencies d, user_tab_cols c
where d.name = t.trigger_name
and t.TABLE_NAME = c.TABLE_NAME
and t.COLUMN_NAME = c.COLUMN_NAME
and d.referenced_type = 'SEQUENCE'
and d.type = 'TRIGGER'
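As a rough illustration of the LAST_NUMBER comparison (reusing the EMP table and EMPNO_SEQ sequence from the trigger example above; sequence caching means LAST_NUMBER is only approximate, so treat this as a heuristic, not proof):
select s.sequence_name,
       s.last_number,
       (select max(e.empno) from emp e) as current_max_empno
  from user_sequences s
 where s.sequence_name = 'EMPNO_SEQ';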
As Jonathan pointed out, there is no direct way to relate the two objects. However, if you "keep a standard" for primary keys and sequences/triggers, you can find out by locating the primary key and then associating the constraint with the table's sequence.
I needed something similar since we are building a multi-db product, and I tried to replicate some classes with properties found in the .NET DataTable object, which has AutoIncrement, IncrementSeed and IncrementStep properties whose values can only be found in the sequences.
So, as I said, if your tables use a PK and always have a sequence associated with an insert trigger, then this may come in handy:
select tc.table_name,
case tc.nullable
when 'Y' then 1
else 0
end as is_nullable,
case ac.constraint_type
when 'P' then 1
else 0
end as is_identity,
ac.constraint_type,
seq.increment_by as auto_increment_seed,
seq.min_value as auto_increment_step,
com.comments as caption,
tc.column_name,
tc.data_type,
tc.data_default as default_value,
tc.data_length as max_length,
tc.column_id,
tc.data_precision as precision,
tc.data_scale as scale
from SYS.all_tab_columns tc
left outer join SYS.all_col_comments com
on (tc.column_name = com.column_name and tc.table_name = com.table_name)
LEFT OUTER JOIN SYS.ALL_CONS_COLUMNS CC
on (tc.table_name = cc.table_name and tc.column_name = cc.column_name and tc.owner = cc.owner)
LEFT OUTER JOIN SYS.ALL_CONSTRAINTS AC
ON (ac.constraint_name = cc.constraint_name and ac.owner = cc.owner)
LEFT outer join user_triggers trg
on (ac.table_name = trg.table_name and ac.owner = trg.table_owner)
LEFT outer join user_dependencies dep
on (trg.trigger_name = dep.name and dep.referenced_type='SEQUENCE' and dep.type='TRIGGER')
LEFT outer join user_sequences seq
on (seq.sequence_name = dep.referenced_name)
where tc.table_name = 'TABLE_NAME'
and tc.owner = 'SCHEMA_NAME'
AND AC.CONSTRAINT_TYPE = 'P'
union all
select tc.table_name,
case tc.nullable
when 'Y' then 1
else 0
end as is_nullable,
case ac.constraint_type
when 'P' then 1
else 0
end as is_identity,
ac.constraint_type,
seq.increment_by as auto_increment_seed,
seq.min_value as auto_increment_step,
com.comments as caption,
tc.column_name,
tc.data_type,
tc.data_default as default_value,
tc.data_length as max_length,
tc.column_id,
tc.data_precision as precision,
tc.data_scale as scale
from SYS.all_tab_columns tc
left outer join SYS.all_col_comments com
on (tc.column_name = com.column_name and tc.table_name = com.table_name)
LEFT OUTER JOIN SYS.ALL_CONS_COLUMNS CC
on (tc.table_name = cc.table_name and tc.column_name = cc.column_name and tc.owner = cc.owner)
LEFT OUTER JOIN SYS.ALL_CONSTRAINTS AC
ON (ac.constraint_name = cc.constraint_name and ac.owner = cc.owner)
LEFT outer join user_triggers trg
on (ac.table_name = trg.table_name and ac.owner = trg.table_owner)
LEFT outer join user_dependencies dep
on (trg.trigger_name = dep.name and dep.referenced_type='SEQUENCE' and dep.type='TRIGGER')
LEFT outer join user_sequences seq
on (seq.sequence_name = dep.referenced_name)
where tc.table_name = 'TABLE_NAME'
and tc.owner = 'SCHEMA_NAME'
AND AC.CONSTRAINT_TYPE is null;
That would give you the list of columns for a schema/table with:
Table name
If column is nullable
Constraint type (only for PK's)
Increment seed (from the sequence)
Increment step (from the sequence)
Column comments
Column name, of course :)
Data type
Default value, if any
Length of column
Index (column id)
Precision (for numbers)
Scale (for numbers)
I'm pretty sure that code can be optimized, but it works for me; I use it to "load metadata" for tables and then represent that metadata as entities on my frontend.
Note that I'm filtering only primary keys and not retrieving compound key constraints, since I don't care about those. If you do, you'll have to modify the code accordingly and make sure you filter duplicates, since you could get one column twice (once for the PK constraint and once for the compound key).
