ORA-01722: invalid number when executing query through Pro*C - Oracle

I keep hitting ORA-01722: invalid number when the query is run from a Pro*C file. The same query works fine when executed directly in PL/SQL Developer.
SELECT queue_entry_id,
       queue_urgency,
       TO_CHAR(chg_dt, 'MM-DD-YY'),
       TO_CHAR(queue_after_dt, 'MM-DD-YY'),
       chg_who,
       upd_cnt
FROM BSD_QUEUE_WORK
WHERE queue_entry_id IN (
    SELECT queue_entry_id FROM (
        SELECT /*+ INDEX (bsd_queue_work bsd_q_wrk_dt_idx) */
               BSD_QUEUE_WORK.queue_entry_id
        FROM BSD_QUEUE_WORK
        WHERE BSD_QUEUE_WORK.queue_after_dt <= SYSDATE
        ORDER BY BSD_QUEUE_WORK.queue_urgency DESC, BSD_QUEUE_WORK.queue_after_dt)
    WHERE rownum <= 20)
FOR UPDATE;
The old query looked like this:
SELECT /*+ INDEX (bsd_queue_work bsd_q_wrk_dt_idx) */
       BSD_QUEUE_WORK.queue_entry_id,
       TO_CHAR(BSD_QUEUE_WORK.chg_dt, :b0),
       TO_CHAR(BSD_QUEUE_WORK.queue_after_dt, :b0),
       BSD_QUEUE_WORK.chg_who,
       BSD_QUEUE_WORK.upd_cnt
FROM BSD_QUEUE_WORK
WHERE (BSD_QUEUE_WORK.queue_after_dt <= SYSDATE
       AND ROWNUM <= :b2)
ORDER BY BSD_QUEUE_WORK.queue_urgency DESC, BSD_QUEUE_WORK.queue_after_dt
FOR UPDATE
This query first selected 20 rows and only then sorted them on queue_urgency, so high-urgency rows could wait a long time for their turn when there were 10k rows in total. The user wanted to find the highest queue_urgency rows first and process them in chunks of 20 rows. Hence I created the new query, which fails with the error above.
The table structure is as follows:
SQL> desc bsd_queue_work
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 QUEUE_ENTRY_ID                            NOT NULL NUMBER(18)
 QUEUE_AFTER_DT                            NOT NULL DATE
 UPD_CNT                                   NOT NULL NUMBER(6)
 CHG_WHO                                   NOT NULL VARCHAR2(32)
 CHG_DT                                    NOT NULL DATE
 QUEUE_URGENCY                             NOT NULL NUMBER(38)
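One Pro*C-specific thing worth checking (an editor's hedged guess; the post does not show the host code): the new query returns six columns where the old one returned five, because queue_urgency is now selected second. If the FETCH ... INTO list in the .pc file still has the old five host variables, the first five columns are mapped positionally, so the VARCHAR2 chg_who lands in the numeric upd_cnt host variable, and that failed conversion is exactly an ORA-01722. A minimal sketch, with hypothetical cursor and host variable names:

/* The INTO list must mirror the new six-column select list,
   including the newly added queue_urgency. */
EXEC SQL FETCH queue_cur
    INTO :h_queue_entry_id, :h_queue_urgency,
         :h_chg_dt, :h_queue_after_dt,
         :h_chg_who, :h_upd_cnt;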

Related

Different Output with same Input for ORACLE MD5 Function

At a given time I stored the result of the following Oracle SQL query:
SELECT col, TO_CHAR( LOWER( STANDARD_HASH( col, 'MD5' ) ) ) AS hash_col FROM MyTable;
A week later, I executed the same query on the same data (same values for column col).
I thought the resulting hash_col column would have the same values as in the former execution, but that was not the case.
Is it possible for the Oracle STANDARD_HASH function to deliver the same result over time for identical input data?
It does if the function is called twice on the same day.
All we have about the data changing (or not) and the hash changing (or not) is your assertion.
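STANDARD_HASH itself is deterministic: hashing the same bytes must give the same digest regardless of when it runs. A quick sanity check (my own addition, not part of the original exchange) that should always return two identical values:

SELECT STANDARD_HASH('some fixed value', 'MD5') AS run_1,
       STANDARD_HASH('some fixed value', 'MD5') AS run_2
FROM dual;

If the logged hashes drift between days for the "same" value, the bytes themselves changed (trailing spaces, invisible characters, character-set conversion), which is what the DUMP column in the log table below is meant to expose.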
You could create and populate a log table:
create table hash_log (
sample_time timestamp,
hashed_string varchar2(200),
hashed_string_dump varchar2(200),
hash_value varchar2(200)
);
Then on a daily basis:
insert into hash_log
select systimestamp,
       source_column,
       dump(source_column),
       STANDARD_HASH(source_column, 'MD5')
from source_table;
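To actually run that daily, one option is a scheduler job; a minimal sketch (the job name is made up here):

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'HASH_LOG_DAILY',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'begin
                          insert into hash_log
                          select systimestamp, source_column,
                                 dump(source_column),
                                 standard_hash(source_column, ''MD5'')
                          from source_table;
                          commit;
                        end;',
    repeat_interval => 'FREQ=DAILY',
    enabled         => TRUE);
END;
/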
Then, to spot changes:
select distinct hashed_string ||
hashed_string_dump ||
hash_value
from hash_log;
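A slightly more direct way to read the same log (my variation, not part of the original answer): any source string that has accumulated more than one distinct hash over time is a string whose bytes changed between snapshots.

SELECT hashed_string,
       COUNT(DISTINCT hash_value) AS distinct_hashes
FROM hash_log
GROUP BY hashed_string
HAVING COUNT(DISTINCT hash_value) > 1;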

MERGE with SELECT with multiple rows

I have a query which, every time it runs, selects the rows of user_triggers that are related to a table (p_table_name_in). I want to run this procedure every day and insert only the new rows, not all rows again. But when I install this package, I get this error:
ORA-00932 (130:21): PL/SQL: ORA-00932: inconsistent datatypes: expected CLOB got LONG (line 31)
and when I try to change TRIGGER_BODY AS BODY_TRIGGER to TO_LOB(TRIGGER_BODY) AS BODY_TRIGGER, I get this error:
ORA-00932 (111:29): PL/SQL: ORA-00932: inconsistent datatypes: expected - got LONG (line 12)
The procedure:
PROCEDURE save_trigger_definitions ( p_table_name_in IN VARCHAR2 ) IS
BEGIN
    MERGE INTO hot_utils_reload_triggers t1
    USING (
        SELECT TRIGGER_NAME,
               TABLE_NAME,
               STATUS,
               DESCRIPTION,
               TRIGGER_BODY AS BODY_TRIGGER,
               WHEN_CLAUSE
        FROM user_triggers
    ) t2
    ON (t2.TABLE_NAME LIKE UPPER(p_table_name_in))
    WHEN MATCHED THEN UPDATE SET
        t1.DESCRIPTION = t2.DESCRIPTION,
        t1.WHEN_CLAUSE = t2.WHEN_CLAUSE
    WHEN NOT MATCHED THEN
        INSERT (TRIGGER_NAME,
                TABLE_NAME,
                STATUS,
                DESCRIPTION,
                BODY_TRIGGER,
                WHEN_CLAUSE)
        VALUES (t2.TRIGGER_NAME,
                t2.TABLE_NAME,
                t2.STATUS,
                t2.DESCRIPTION,
                t2.BODY_TRIGGER,
                t2.WHEN_CLAUSE);
    COMMIT;
END save_trigger_definitions;
It's also interesting that Oracle does not allow TO_LOB within the SELECT of a MERGE statement, while it does allow it in an INSERT ... SELECT. So you can split the MERGE into a plain INSERT plus an UPDATE covering the MATCHED part, such as:
CREATE OR REPLACE PROCEDURE save_trigger_definitions ( p_table_name_in IN VARCHAR2 ) IS
BEGIN
    INSERT INTO hot_utils_reload_triggers
           (trigger_name,
            table_name,
            status,
            description,
            body_trigger,
            when_clause)
    SELECT u.trigger_name,
           u.table_name,
           u.status,
           u.description,
           TO_LOB(u.trigger_body),
           u.when_clause
    FROM user_triggers u
    WHERE u.table_name LIKE UPPER(p_table_name_in)
    AND NOT EXISTS ( SELECT 1
                     FROM hot_utils_reload_triggers h
                     WHERE h.trigger_name = u.trigger_name
                       AND h.table_name   = u.table_name
                       AND h.status       = u.status );

    UPDATE hot_utils_reload_triggers h
    SET (h.description, h.when_clause) =
        ( SELECT u.description, u.when_clause
          FROM user_triggers u
          WHERE u.trigger_name = h.trigger_name
            AND u.table_name   = h.table_name )
    WHERE h.table_name LIKE UPPER(p_table_name_in)
    AND EXISTS ( SELECT 1
                 FROM user_triggers u
                 WHERE u.trigger_name = h.trigger_name
                   AND u.table_name   = h.table_name );

    COMMIT;
END;
/
Assuming you don't want duplicated rows for some columns such as trigger_name, table_name, and status, I have added a subquery for them in the NOT EXISTS clause.
Using DBMS_REDEFINITION.START_REDEF_TABLE might be another alternative for LONG-to-LOB conversion cases.
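A minimal sketch of that route, with made-up table names; per the package documentation, the col_mapping parameter accepts TO_LOB for LONG-to-LOB conversion (both tables need a primary key for the default redefinition method):

-- interim table: same shape as the source, with the LONG column as CLOB
CREATE TABLE t_interim (
  id       NUMBER PRIMARY KEY,
  long_col CLOB
);

BEGIN
  DBMS_REDEFINITION.START_REDEF_TABLE(
    uname       => USER,
    orig_table  => 'T',
    int_table   => 'T_INTERIM',
    col_mapping => 'id id, TO_LOB(long_col) long_col');
  DBMS_REDEFINITION.FINISH_REDEF_TABLE(USER, 'T', 'T_INTERIM');
END;
/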

Oracle CLOB column and LAG

I'm facing a problem when I try to use the LAG function on a CLOB column.
So let's assume we have a table:
create table test (
id number primary key,
not_clob varchar2(255),
this_is_clob clob
);
insert into test values (1, 'test1', to_clob('clob1'));
insert into test values (2, 'test2', to_clob('clob2'));
DECLARE
  x CLOB := 'C';
BEGIN
  FOR i IN 1..32767
  LOOP
    x := x || 'C';
  END LOOP;
  INSERT INTO test(id, not_clob, this_is_clob) VALUES (3, 'test3', x);
END;
/
commit;
Now let's do a select using the non-CLOB columns:
select id, lag(not_clob) over (order by id) from test;
It works fine, as expected, but when I try the same with the CLOB column
select id, lag(this_is_clob) over (order by id) from test;
I get
ORA-00932: inconsistent datatypes: expected - got CLOB
00932. 00000 - "inconsistent datatypes: expected %s got %s"
*Cause:
*Action:
Error at Line: 1 Column: 16
Can you tell me the solution to this problem? I couldn't find anything on it.
The documentation says the argument of an analytic function can be any datatype, but it seems an unrestricted CLOB is not supported.
However, there is a workaround:
select id, lag(dbms_lob.substr(this_is_clob, 4000, 1)) over (order by id)
from test;
This is not the whole CLOB, but 4k should be good enough in many cases.
I'm still wondering what the proper way to overcome this problem is.
Is upgrading to 12c an option? The problem has nothing to do with CLOBs as such; it's that Oracle has a hard limit of 4000 characters for strings in SQL. In 12c we have the option to use extended data types (providing we can persuade our DBAs to turn it on!).
Some SQL features may not work properly with CLOBs (DISTINCT, ORDER BY, GROUP BY, etc.). It looks like LAG is one of them, but I couldn't find that anywhere in the docs.
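Both are easy to reproduce with the test table above; each raises the same ORA-00932 (my quick check, not from the original answer):

SELECT DISTINCT this_is_clob FROM test;     -- ORA-00932
SELECT id FROM test ORDER BY this_is_clob;  -- ORA-00932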
If the values in your CLOB column are always less than 4000 characters, you may use TO_CHAR:
select id, lag( TO_CHAR(this_is_clob)) over (order by id) from test;
Or convert it into an equivalent self join, where the analytic function runs only on the numeric id column, so the CLOB itself is never an argument to it (this may not be as efficient as LAG):
SELECT a.id,
       b.this_is_clob AS lagging
FROM test a
LEFT JOIN ( SELECT id,
                   this_is_clob,
                   LEAD(id) OVER (ORDER BY id) AS next_id
            FROM test ) b
       ON b.next_id = a.id;
I know this is an old question, but I think I found an answer that eliminates the need to restrict the CLOB length and wanted to share it. Utilizing a CTE and recursive subqueries, we can replicate the LAG functionality with CLOB columns.
First, let's take a look at my "original" query:
WITH TEST_TABLE AS
(
SELECT LEVEL ORDER_BY_COL,
TO_CLOB(LEVEL) AS CLOB_COL
FROM DUAL
CONNECT BY LEVEL <= 10
)
SELECT tt.order_by_col,
tt.clob_col,
LAG(tt.clob_col) OVER (ORDER BY tt.order_by_col)
FROM test_table tt;
As expected, I get the following error:
ORA-00932: inconsistent datatypes: expected - got CLOB
Now, let's look at the modified query:
WITH TEST_TABLE AS
(
SELECT LEVEL ORDER_BY_COL,
TO_CLOB(LEVEL) AS CLOB_COL
FROM DUAL
CONNECT BY LEVEL <= 10
),
initial_pull AS
(
SELECT tt.order_by_col,
LAG(tt.order_by_col) OVER (ORDER BY tt.order_by_col) AS PREV_ROW,
tt.clob_col
FROM test_table tt
),
recursive_subquery (order_by_col, prev_row, clob_col, prev_clob_col) AS
(
SELECT ip.order_by_col, ip.prev_row, ip.clob_col, NULL
FROM initial_pull ip
WHERE ip.prev_row IS NULL
UNION ALL
SELECT ip.order_by_col, ip.prev_row, ip.clob_col, rs.clob_col
FROM initial_pull ip
INNER JOIN recursive_subquery rs ON ip.prev_row = rs.order_by_col
)
SELECT rs.order_by_col, rs.clob_col, rs.prev_clob_col
FROM recursive_subquery rs;
So here is how it works:
1. Create TEST_TABLE. This is only for the example; you should already have this table somewhere in your schema.
2. Build a CTE of the data you want to pull, plus a LAG over the primary key (or a unique column) of the table, partitioned and ordered the same way as in the original query.
3. Build a recursive subquery that uses the initial row as the root and descends row by row, joining on the lagged column, returning both the CLOB column of the current row and the CLOB column of its parent row.

Create a table with an additional NULL column in Oracle

I am trying to create a table and wrote the following code:
create table trial as(
SELECT l2_group AS Customer
, null AS Contact
FROM ACCT_MASKED_sep17_V1) ;
It gives an error when I run it as CREATE TABLE, whereas the SELECT query on its own runs fine.
How can I get this to work?
You need to specify a data type for that NULL column. For example:
create table t1 as
select 1 as c1
, cast(null as number) as c2
from dual
Table created.
If you choose VARCHAR2(length) as the datatype for the NULL column, the length needs to be greater than 0.
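Applied to the original statement, that would look something like this (VARCHAR2(100) is an arbitrary guess at what Contact should eventually hold):

create table trial as
select l2_group AS Customer,
       cast(null as varchar2(100)) AS Contact
from ACCT_MASKED_sep17_V1;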

Oracle JSON document select query performance tuning

Table description:
COLUMN        DATA_TYPE     NULLABLE  DEFAULT_VALUE
ID            VARCHAR2(16)  No
UPDATED_DATE  TIMESTAMP(6)  Yes
DETAILS       CLOB          Yes
TX_STATUS     VARCHAR2(10)  Yes
TX_USER       VARCHAR2(16)  Yes
PREMIUM       NUMBER(10,2)  Yes       JSON_VALUE("DETAILS" FORMAT JSON, '$.policy.premium' RETURNING NUMBER(10,2) NULL ON ERROR)
where DETAILS holds the JSON document and PREMIUM is a virtual column.
If I select the virtual column with an ORDER BY clause, the query takes far too long to run.
The query below takes 32.23 seconds; PREMIUM is the virtual column:
select id, tx_status, updated_date, tx_user, PREMIUM from J_MARINE_CERT j order by j.UPDATED_DATE desc
After removing PREMIUM, it takes 0.009 seconds:
select id, tx_status, updated_date, tx_user from J_MARINE_CERT j order by j.UPDATED_DATE desc
Even after indexing PREMIUM and updated_date, it takes the same 32.23 seconds to execute.
I had the same issue, and the only good solution was creating a materialized view for the values extracted from the JSON.
CREATE MATERIALIZED VIEW mv_for_query_rewrite
BUILD IMMEDIATE
REFRESH FAST ON STATEMENT WITH PRIMARY KEY
AS SELECT tbl.id, jt.*
FROM jour_table tbl,
json_table(tbl.json_document, '$' ERROR ON ERROR NULL ON EMPTY
COLUMNS (
some_number NUMBER PATH '$.PONumber',
userid VARCHAR2(10) PATH '$.User'
)) jt;
The reason for the performance drop is that Oracle reads the whole JSON document into memory to extract a single value from it; see the Oracle documentation.
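With the materialized view in place, the expensive query reads pre-extracted scalars instead of parsing the CLOB per row; a sketch against the answer's made-up names:

-- the JSON parsing happened once, at refresh time
SELECT mv.id, mv.some_number, mv.userid
FROM mv_for_query_rewrite mv
ORDER BY mv.id DESC;

If the view is additionally created with ENABLE QUERY REWRITE, the optimizer may also substitute it transparently for matching json_table / json_value queries.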
