I'm trying to execute the following statement:
INSERT INTO mySchema.ODI_PRICELIST_THREAD_TABLE
(
src_table,
thread_id,
creation_date
)
SELECT DISTINCT
source_table AS src_table,
num_thread_seq.nextval AS THREAD_ID,
create_date AS CREATION_DATE
FROM mySchema.nb_pricelist_ctrl
I need the THREAD_ID field to be a number from 1 to X, where X is defined at runtime, so I've used a sequence that runs from 1 to X (I'm using ODI).
However, I keep getting the error ORA-02287: sequence number not allowed here...
I've read this question and I still can't figure out how to fix my problem.
I've been searching but have had no luck finding a solution. Please help.
The DISTINCT keyword cannot be combined with a sequence in the same query block. If you really need it, move the DISTINCT into an inline view and apply NEXTVAL outside of it, something like:
INSERT INTO mySchema.ODI_PRICELIST_THREAD_TABLE (
  src_table,
  thread_id,
  creation_date)
SELECT
  a.src_table,
  num_thread_seq.nextval,
  a.create_date
FROM
  (SELECT DISTINCT source_table AS src_table, create_date
   FROM mySchema.nb_pricelist_ctrl) a
From OraFaq:
The following are the cases where you can't use a sequence:
For a SELECT Statement:
In a WHERE clause
In a GROUP BY or ORDER BY clause
In a DISTINCT clause
Along with a UNION or INTERSECT or MINUS
In a sub-query
http://www.orafaq.com/wiki/ORA-02287
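For completeness: the fix above assumes the sequence num_thread_seq already exists. If it does not, here is a minimal sketch of a definition that starts numbering at 1 (only the name comes from the question; the rest is an assumption about your setup):
-- Hypothetical definition; adjust INCREMENT BY, MAXVALUE and caching as needed.
CREATE SEQUENCE num_thread_seq
  START WITH 1
  INCREMENT BY 1
  NOCACHE;
Keep in mind that a sequence only hands out increasing values, so if THREAD_ID has to restart at 1 for every load, the sequence needs to be dropped and recreated (or otherwise reset) between runs.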
Try this - the DISTINCT has to be resolved in an inline view before NEXTVAL is applied:
INSERT INTO mySchema.ODI_PRICELIST_THREAD_TABLE
(
  src_table,
  thread_id,
  creation_date
)
SELECT
  t.src_table,
  num_thread_seq.nextval AS THREAD_ID,
  t.creation_date
FROM (
  SELECT DISTINCT
    source_table AS src_table,
    create_date AS creation_date
  FROM mySchema.nb_pricelist_ctrl
) t
So I'm attempting to place the results of a distinct, single-column query into a JSON array so it can be used on my web server. I have it set up something like this:
SELECT JSON_OBJECT(
'ArrayKey' VALUE JSON_ARRAYAGG( col )
) AS jsonResult
FROM(SELECT DISTINCT column_name AS col
FROM tbl_name);
However, when this query returns results, the JSON array it generates contains every value from my column and somehow ignores the DISTINCT in the subquery. When I remove the JSON_ARRAYAGG call and output the results directly, the results are unique, but the DISTINCT is ignored again as soon as I add it back in. I've also attempted to place the DISTINCT keyword inside the JSON_ARRAYAGG, like so:
SELECT JSON_OBJECT(
'ArrayKey' VALUE JSON_ARRAYAGG( DISTINCT col )
) AS jsonResult
FROM(SELECT DISTINCT column_name AS col
FROM tbl_name);
to no avail. Does anyone know what's going wrong in my code that's causing the array to output all values instead of distinct ones?
Interesting... Looks like a bug to me. The optimizer seems to merge the subquery too eagerly, and the DISTINCT gets lost along the way.
As a workaround you can use the NO_MERGE hint on the subquery.
SELECT /*+NO_MERGE(x)*/
json_object('ArrayKey'
VALUE json_arrayagg(column_name)) jsonresult
FROM (SELECT DISTINCT
column_name
FROM tbl_name) x;
A CTE and a MATERIALIZE hint seem to work too.
WITH cte
AS
(
SELECT /*+MATERIALIZE*/
DISTINCT
column_name
FROM tbl_name
)
SELECT json_object('ArrayKey'
VALUE json_arrayagg(column_name)) jsonresult
FROM cte;
db<>fiddle
This was a bug; we fixed it. You can try it out on Live SQL:
create table tbl_name (column_name number);
insert into tbl_name values(1);
insert into tbl_name values(1);
insert into tbl_name values(2);
SELECT JSON_OBJECT(
'ArrayKey' VALUE JSON_ARRAYAGG( col )
) AS jsonResult
FROM(SELECT DISTINCT column_name AS col
FROM tbl_name);
{"ArrayKey" : [1,2]}
The bug is
Bug 27757725 - JSON GENERATION AGGREGATION FUNCTIONS IGNORE DISTINCT
You can request a backport from Oracle Support Services.
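If you are not sure which release you are running before contacting Oracle Support, a quick check (this only assumes you can query v$version):
SELECT banner FROM v$version;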
I've found this hack to work:
SELECT JSON_OBJECT(
'ArrayKey' VALUE JSON_ARRAYAGG( col )
) AS jsonResult
FROM(SELECT DISTINCT column_name AS col
FROM tbl_name)
HAVING COUNT(*) = COUNT(*);
See also: Oracle bug produces duplicate aggregate values in JSON_ARRAYAGG
A query, like so:
SELECT SUM(col1 * col3) AS total, col2
FROM table1
GROUP BY col2
works as expected when run individually.
For reference:
table1.col1 -- float
table1.col2 -- varchar2
table1.col3 -- float
When this query is moved into a subquery, I get an ORA-01722 (invalid number) error, pointing at the "col2" position in the select clause. The larger query looks like this:
SELECT col3, subquery1.total
FROM table3
LEFT JOIN (
SELECT SUM(table1.col1 * table1.col3) AS total, table1.col2
FROM table1
GROUP BY table1.col2
) subquery1 ON table3.col3 = subquery1.col2
For reference:
table3.col3 -- varchar2
It may also be worth noting that I have another query, against table2, which has the same structure as table1. If I use the subquery built on table2, it works. It never works when using table1.
There is no concatenation, the data types match, the query works by itself... I'm at a loss here. What else should I be looking for? What painfully obvious problem is staring me in the face?
(I didn't choose or make the table structures and can't change them, so answers to that end will unfortunately not be helpful.)
Try using an explicit TO_CHAR so the join compares character values rather than relying on an implicit conversion:
SELECT col3, subquery1.total
FROM table3
LEFT JOIN (
SELECT SUM(table1.col1 * table1.col3) AS total, table1.col2
FROM table1
GROUP BY table1.col2
) subquery1 ON to_char(table3.col3) = subquery1.col2
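As a diagnostic (not part of the original answer): ORA-01722 usually means Oracle is implicitly converting a character value to a number somewhere in the comparison. If you are on Oracle 12.2 or later, VALIDATE_CONVERSION can list the values that would break such a conversion; the table and column names below are taken from the question:
-- Any row returned here holds a col2 value that cannot be converted
-- to a number, which is exactly what raises ORA-01722.
SELECT DISTINCT col2
FROM table1
WHERE VALIDATE_CONVERSION(col2 AS NUMBER) = 0;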
Consider the generic query:
SELECT * FROM (
SELECT COL_1, COL_2, COL_3 FROM TABLE_1
WHERE COL_1 IN ('item1', 'item2')
AND COL_2 = 100
ORDER BY COL_3
) subquery1
INNER JOIN
(
SELECT COL_A, MAX(COL_B) FROM TABLE_2
GROUP BY COL_A
HAVING COUNT(COL_B) > 2
) subquery2
ON subquery1.COL_1 = subquery2.COL_A
Assume that the query itself is optimized. If I wanted to create 'optimized' indexes for a query like this, what indexes should I be creating? In particular, in what order should the indexes' columns be?
From my understanding, the leading columns should be the ones used in the WHERE clause, then the ORDER BY columns, and lastly the SELECT columns. Is this true? What about the others, such as GROUP BY, HAVING, and JOIN clauses - when should they be considered?
Also if necessary, assume that this is an Oracle database. (But I imagine column ordering would be the same for other platforms.)
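Purely as an illustration of that rule of thumb (the index names are invented, and the optimizer may still choose a different plan), candidate indexes for the query above could look like this: equality and IN-list predicates leading, then the sort column, and the aggregated column appended so the second index can cover the GROUP BY:
-- TABLE_1: COL_2 (equality filter), COL_1 (IN-list and join column),
-- COL_3 (ORDER BY and selected column).
CREATE INDEX ix_table_1_q ON TABLE_1 (COL_2, COL_1, COL_3);
-- TABLE_2: COL_A (GROUP BY and join column), COL_B (feeds MAX and COUNT).
CREATE INDEX ix_table_2_q ON TABLE_2 (COL_A, COL_B);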
I'm getting the classic error:
ORA-00918: column ambiguously defined
Usually I know how to solve it, but my problem now is that I'm working with a 700-line query.
Is there a way to identify the column?
Have you tried doing a binary search?
e.g.
If your original query looks like
Select col1
,col2
,col3
,col4
from MyTable
you can start by commenting out the second half:
Select col1
,col2
/*,col3
,col4 */
from MyTable
If you still get the error, run the query again, this time commenting out a column from the remaining half:
Select col1
/*,col2 */
,col3
,col4
from MyTable
If you still get an error, the problem is with col1; otherwise col2 is the one that needs to be qualified.
The ambiguous column error indicates that your query references a column name that exists in two (or more) of the joined tables without qualifying which one you mean.
The proper way to solve this is to give each table in the query an alias and then prefix all column references with the appropriate alias. I agree that won't be fun for such a large query but I'm afraid you will have to pay the price of your predecessor's laxness.
In Oracle, you can use all_tab_cols to query the column names of your tables. The following query returns the column names that TABLE1 and TABLE2 have in common. Then you only need to prefix those common columns rather than every column reference in the query.
select column_name from all_tab_cols
where table_name='TABLE1' and owner ='OWNER1'
and column_name in (
select column_name from all_tab_cols
where table_name='TABLE2' and owner ='OWNER2')
For posterity's sake:
I had this issue when I selected the columns TABLE1.DES and TABLE2.DES in a query without aliasing the results. When I ran the query on its own, my SQL editor quietly renamed them to DES and DES_1 and didn't complain.
However when I turned the same query into a subquery
SELECT a.col1, a.col2, a.col3, b.*
from TABLE3 a
INNER JOIN (
--that query as a subquery
) b
on a.PK = b.FK
it threw the same ORA-00918 error message you described. Changing the SELECT in my subquery to
SELECT TABLE1.DES AS T1_DES, TABLE2.DES AS T2_DES ...
fixed the issue.
You can check for common columns by using:
select COLUMN_NAME from ALL_TAB_COLS where TABLE_NAME = 'tablenamefirst'
intersect
select COLUMN_NAME from ALL_TAB_COLS where TABLE_NAME = 'tablenamesecond';
Oracle:
I have around 850 records in a table that need to be assigned UUIDs.
I am using the following query:
select substr(sys_guid(),1,3)||'-'||
substr(sys_guid(),4,4)||'-'||
substr(sys_guid(),8,4)||'-'||
substr(sys_guid(),13)
from (select sys_guid() as mygid from dual)
I need to generate multiple (850) records in one go.
Any suggestions?
Should I loop?
If you really need a SELECT, use a hierarchical query:
SELECT Substr(mygid,1,3)||'-'||
Substr(mygid,4,4)||'-'||
Substr(mygid,8,4)||'-'||
Substr(mygid,12)
FROM (
SELECT Sys_GUID() AS mygid FROM dual
CONNECT BY Level <= :desired_number_of_records
)
But what's wrong with a plain UPDATE?
UPDATE your_tab
SET gid_col = (
SELECT Substr(mygid,1,3)||'-'||
Substr(mygid,4,4)||'-'||
Substr(mygid,8,4)||'-'||
Substr(mygid,12)
FROM( SELECT Sys_Guid() AS mygid FROM dual )
)
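A quick sanity check after the update, just to confirm every row received its own value (same placeholder table and column names as above):
SELECT COUNT(*) AS total_rows,
       COUNT(DISTINCT gid_col) AS distinct_guids
FROM your_tab;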
Not sure that format is really what you want, as you are missing 9 of the 32 characters, but you could modify the format as needed. Here is an example that formats the value like XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX:
UPDATE MY_TABLE
SET GUID_COL = (
select regexp_replace(rawtohex(sys_guid()),
         '([A-F0-9]{8})([A-F0-9]{4})([A-F0-9]{4})([A-F0-9]{4})([A-F0-9]{12})',
         '\1-\2-\3-\4-\5') as FORMATTED_GUID
  from dual
)
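To preview the format before touching the table, the same expression can be run on its own (illustrative only; each execution returns a fresh GUID):
SELECT regexp_replace(rawtohex(sys_guid()),
         '([A-F0-9]{8})([A-F0-9]{4})([A-F0-9]{4})([A-F0-9]{4})([A-F0-9]{12})',
         '\1-\2-\3-\4-\5') AS formatted_guid
FROM dual;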