Error when selecting timestamp from XMLType column in Oracle 11g - oracle

I have two Oracle 11g databases, each with a table containing an XMLType column and some test data that differs only in the separator (. or ,) for the fractional seconds of the timestamp:
create table TEST_TIMESTAMP (
  ID  number(19,0) constraint "NN_TEST_TIMESTAMP_ID" not null,
  DOC xmltype constraint "NN_TEST_TIMESTAMP_DOC" not null
);
insert into TEST_TIMESTAMP values ( 1, xmltype('<?xml version="1.0" encoding="utf-8"?><test><ts>2015-04-08T04:55:33.11</ts></test>'));
insert into TEST_TIMESTAMP values ( 2, xmltype('<?xml version="1.0" encoding="utf-8"?><test><ts>2015-04-08T04:55:33,11</ts></test>'));
When I try to extract the timestamp with the following statements, it fails either with the first document on one database or with the second document on the other database.
select x.*
from TEST_TIMESTAMP t,
     xmltable(
       '/test'
       passing t.DOC
       columns
         ORIGINAL varchar2(50) path 'ts',
         RESULT   timestamp with time zone path 'ts'
     ) x
where t.ID = 1;

select x.*
from TEST_TIMESTAMP t,
     xmltable(
       '/test'
       passing t.DOC
       columns
         ORIGINAL varchar2(50) path 'ts',
         RESULT   timestamp with time zone path 'ts'
     ) x
where t.ID = 2;
The error I get:
ORA-01858: a non-numeric character was found where a numeric was expected
01858. 00000 - "a non-numeric character was found where a numeric was expected"
*Cause: The input data to be converted using a date format model was
incorrect. The input data did not contain a number where a number was
required by the format model.
*Action: Fix the input data or the date format model to make sure the
elements match in number and type. Then retry the operation.
The only differences between those databases I've found are:
DB1: version=11.2.0.1.0, NLS_CHARACTERSET=AL32UTF8 -> fails on document 2
DB2: version=11.2.0.2.0, NLS_CHARACTERSET=WE8MSWIN1252 -> fails on document 1
DB1 has the behaviour that I would expect. Does anybody know why those databases behave differently and how to fix the issue in DB2?

My guess is that the nls_timestamp_format is different between the two databases.
However, rather than forcing the implicit conversion down at the XMLTABLE level, I would do an explicit conversion in the select list:
with test_timestamp as (
  select 1 id, xmltype('<?xml version="1.0" encoding="utf-8"?><test><ts>2015-04-08T04:55:33.11</ts></test>') doc from dual
  union all
  select 2 id, xmltype('<?xml version="1.0" encoding="utf-8"?><test><ts>2015-04-08T04:55:33,11</ts></test>') doc from dual
)
select x.original,
       to_timestamp(x.original, 'yyyy-mm-dd"T"hh24:mi:ss,ff2') result
from   test_timestamp t,
       xmltable('/test' passing t.doc
                columns original varchar2(50) path 'ts') x;
ORIGINAL                     RESULT
---------------------------- -----------------------------
2015-04-08T04:55:33.11       08/04/2015 04:55:33.110000000
2015-04-08T04:55:33,11       08/04/2015 04:55:33.110000000
N.B. I found that using "ss.ff2" errored, but "ss,ff2" handled both cases just fine. I'm not sure if that's reliant on some other nls setting or not, though.
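If you want to compare the two databases, a quick check is to look at the relevant session parameters (a sketch; including NLS_NUMERIC_CHARACTERS here is my assumption, on the theory that it supplies the decimal separator used when the string is cast to a timestamp):
-- Run on both databases and compare:
select parameter, value
from   nls_session_parameters
where  parameter in ('NLS_TIMESTAMP_FORMAT', 'NLS_NUMERIC_CHARACTERS');

-- To test the theory, swap the decimal and group separators for the
-- session and re-run the failing query:
alter session set NLS_NUMERIC_CHARACTERS = ',.';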

Related

Getting error while running session as "not a valid month"

1. I have an Oracle view with a column called dayofset, which is defined as the subtraction of two date columns, like (to_date(date_column1) - to_date(date_column2)), and is stored as a number(38) datatype.
2. So, when I run a session in Informatica to load data from the Oracle view into Redshift, I get an error like "not a valid month".
3. The input values for those date columns look like (25-JAN-21, 10-APR-13).
4. The output values for the column are integers like 1, 2, 3, 4... (it just does a datediff operation and returns the difference between the two dates).
Could you please help with this?
Never use TO_DATE on a column that is already a DATE data type. Just use:
CREATE VIEW your_view (dayofset) AS
SELECT date_column1 - date_column2
FROM your_table;
TO_DATE takes a string as its first argument, so you are effectively performing an implicit conversion of the date to a string only to convert it straight back to a date; your code is the equivalent of:
CREATE VIEW your_view (dayofset) AS
SELECT TO_DATE(
TO_CHAR(
date_column1,
(SELECT value FROM NLS_SESSION_PARAMETERS WHERE parameter = 'NLS_DATE_FORMAT')
),
(SELECT value FROM NLS_SESSION_PARAMETERS WHERE parameter = 'NLS_DATE_FORMAT')
)
-
TO_DATE(
TO_CHAR(
date_column2,
(SELECT value FROM NLS_SESSION_PARAMETERS WHERE parameter = 'NLS_DATE_FORMAT')
),
(SELECT value FROM NLS_SESSION_PARAMETERS WHERE parameter = 'NLS_DATE_FORMAT')
)
FROM your_table;
Depending on your NLS_DATE_FORMAT session parameter, this could just be a waste of time, or it could truncate the date and give you an unexpected result. However, any user can change their session parameters at any time, so you may get different results for different users; you should NEVER rely on implicit conversions.
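For example, a minimal sketch of the truncation risk, assuming the columns in your_table carry a time component (the DD-MON-RR mask drops the time of day during the round trip):
alter session set NLS_DATE_FORMAT = 'DD-MON-RR';

-- The double conversion silently truncates the time component, so the
-- two expressions below can disagree by a fraction of a day:
SELECT TO_DATE(date_column1) - TO_DATE(date_column2) AS with_conversion,
       date_column1 - date_column2 AS without_conversion
FROM   your_table;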
If your columns are not a DATE data-type but are strings then use an explicit format model (and, if required, language) in the conversion:
CREATE VIEW your_view (dayofset) AS
SELECT TO_DATE(string_column1, 'DD-MON-RR', 'NLS_DATE_LANGUAGE=English')
- TO_DATE(string_column2, 'DD-MON-RR', 'NLS_DATE_LANGUAGE=English')
FROM your_table;

Converting Big Clob into Table using XMLTable

I'm trying to convert an input CLOB variable from a stored procedure into an XMLType and then, using XMLTable, join it with other tables in my DB. Here is my code:
with clientes_data as (SELECT clientes.identificacion,clientes.tipoDoc,clientes.cuentas
from xmltable('/Clientes/Cliente'
passing xmltype(to_clob('<Clientes>
<Cliente><NumeroIdentificacion>94406495</NumeroIdentificacion><TipoIdentificacion>CC</TipoIdentificacion></Cliente>
<Cliente><NumeroIdentificacion>1136881480</NumeroIdentificacion><TipoIdentificacion>CC</TipoIdentificacion></Cliente>
</Clientes>'))
columns
identificacion varchar2(10) path 'NumeroIdentificacion',
tipoDoc varchar2(2) path 'TipoIdentificacion',
cuentas xmltype path 'Cuentas') clientes)
,cuentas_Data as (
SELECT cl.identificacion,cl.tipoDoc,cuentasT.*
from clientes_data cl
LEFT JOIN
xmltable('/Cuentas/Cuenta'
passing cl.cuentas
columns
numCta varchar2(10) path 'Numero',
tipoCta varchar2(3) path 'Tipo') cuentasT ON 1=1)
select * from cuentas_Data;
--select count(*) from cuentas_Data;
But I'm getting this error: "string literal too long... The string literal is longer than 4000 characters" when the input (the passing section) is longer than 4000 characters. So I'm a little bit confused, because XMLTable is supposed to take an XMLType parameter (with a CLOB-like size), but I'm assuming it is actually treated as a varchar2(4000)?
Thanks for any "light" on this.
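For what it's worth, the 4000-character cap applies to string literals in the SQL text, not to XMLTable itself, so one common workaround is to bind the CLOB instead of inlining it as a literal. A sketch, with hypothetical procedure and parameter names:
CREATE OR REPLACE PROCEDURE process_clientes (p_xml IN CLOB) AS
BEGIN
  -- xmltype() accepts the CLOB bind directly, so no string literal is
  -- involved and the 4000-character limit never comes into play.
  FOR r IN (SELECT x.identificacion, x.tipoDoc
              FROM xmltable('/Clientes/Cliente'
                     passing xmltype(p_xml)
                     columns
                       identificacion varchar2(10) path 'NumeroIdentificacion',
                       tipoDoc        varchar2(2)  path 'TipoIdentificacion') x)
  LOOP
    NULL; -- join with the other tables / process each row here
  END LOOP;
END;
/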

Oracle CLOB column and LAG

I'm facing a problem when I try to use the LAG function on a CLOB column.
So let's assume we have a table
create table test (
  id           number primary key,
  not_clob     varchar2(255),
  this_is_clob clob
);
insert into test values (1, 'test1', to_clob('clob1'));
insert into test values (2, 'test2', to_clob('clob2'));
DECLARE
  x CLOB := 'C';
BEGIN
  FOR i IN 1..32767 LOOP
    x := x || 'C';
  END LOOP;
  INSERT INTO test(id, not_clob, this_is_clob) VALUES (3, 'test3', x);
END;
/
commit;
Now let's do a select using non-clob columns
select id, lag(not_clob) over (order by id) from test;
It works fine as expected, but when I try the same with the CLOB column
select id, lag(this_is_clob) over (order by id) from test;
I get
ORA-00932: inconsistent datatypes: expected - got CLOB
00932. 00000 - "inconsistent datatypes: expected %s got %s"
*Cause:
*Action:
Error at Line: 1 Column: 16
Can you tell me what the solution to this problem is? I couldn't find anything on it.
The documentation says the argument for any analytic function can be any datatype, but it seems unrestricted CLOBs are not supported.
However, there is a workaround:
select id, lag(dbms_lob.substr(this_is_clob, 4000, 1)) over (order by id)
from test;
This is not the whole CLOB but 4k should be good enough in many cases.
I'm still wondering what the proper way to overcome the problem is, though.
Is upgrading to 12c an option? The problem has nothing to do with CLOBs as such; it's the fact that Oracle has a hard limit of 4000 characters for strings in SQL. In 12c we have the option to use extended data types (provided we can persuade our DBAs to turn it on!).
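For reference, a minimal sketch of the parameter behind extended data types; note that switching it on is a one-way, multi-step operation carried out in UPGRADE mode, so check the documentation before trying it on a real system:
-- 12c+ only: once enabled, varchar2 values in SQL may be up to 32767
-- bytes, which lifts the 4000-character ceiling that the TO_CHAR and
-- dbms_lob.substr workarounds run into.
ALTER SYSTEM SET max_string_size = EXTENDED SCOPE = SPFILE;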
Some features may not work properly in SQL when using CLOBs (like DISTINCT, ORDER BY, GROUP BY, etc.). It looks like LAG is one of them too, but I couldn't find that anywhere in the docs.
If the values in your CLOB column are always less than 4000 characters, you may use TO_CHAR:
select id, lag( TO_CHAR(this_is_clob)) over (order by id) from test;
OR
convert it into an equivalent correlated subquery (may not be as efficient as LAG), taking for each row the CLOB from the row with the highest id below it:
SELECT a.id,
       (SELECT b.this_is_clob
          FROM test b
         WHERE b.id = (SELECT MAX(c.id) FROM test c WHERE c.id < a.id)) AS lagging
  FROM test a;
I know this is an old question, but I think I found an answer that eliminates the need to restrict the CLOB length and wanted to share it. Using CTEs and recursive subqueries, we can replicate the LAG functionality with CLOB columns.
First, let's take a look at my "original" query:
WITH TEST_TABLE AS
(
SELECT LEVEL ORDER_BY_COL,
TO_CLOB(LEVEL) AS CLOB_COL
FROM DUAL
CONNECT BY LEVEL <= 10
)
SELECT tt.order_by_col,
tt.clob_col,
LAG(tt.clob_col) OVER (ORDER BY tt.order_by_col)
FROM test_table tt;
As expected, I get the following error:
ORA-00932: inconsistent datatypes: expected - got CLOB
Now, let's look at the modified query:
WITH TEST_TABLE AS
(
SELECT LEVEL ORDER_BY_COL,
TO_CLOB(LEVEL) AS CLOB_COL
FROM DUAL
CONNECT BY LEVEL <= 10
),
initial_pull AS
(
SELECT tt.order_by_col,
LAG(tt.order_by_col) OVER (ORDER BY tt.order_by_col) AS PREV_ROW,
tt.clob_col
FROM test_table tt
),
recursive_subquery (order_by_col, prev_row, clob_col, prev_clob_col) AS
(
SELECT ip.order_by_col, ip.prev_row, ip.clob_col, NULL
FROM initial_pull ip
WHERE ip.prev_row IS NULL
UNION ALL
SELECT ip.order_by_col, ip.prev_row, ip.clob_col, rs.clob_col
FROM initial_pull ip
INNER JOIN recursive_subquery rs ON ip.prev_row = rs.order_by_col
)
SELECT rs.order_by_col, rs.clob_col, rs.prev_clob_col
FROM recursive_subquery rs;
So here is how it works:
1. I create TEST_TABLE. This is really only for the example, as you should already have this table somewhere in your schema.
2. I create a CTE of the data I want to pull, plus a LAG function on the primary key (or a unique column) in the table, partitioned and ordered in the same way as in my original query.
3. I create a recursive subquery using the initial row as the root and descending row by row, joining on the lagged column, and returning both the CLOB column from the current row and the CLOB column from its parent row.

Oracle query looking for particular string format

I'm working with an Oracle 11g DB in which I need to run a query that returns all rows with values in a specific format.
I.e., I need to find all rows with a value like 'abc1234'. The actual values aren't important... what I need is to find all rows where the first 3 characters are alpha and the subsequent 4 characters are all numeric.
Any input would be much appreciated.
Might not be exact since I don't have an Oracle server handy to test, but something like this should get you started in the right direction:
SELECT * FROM your_table WHERE REGEXP_LIKE(your_column, '^[a-z]{3}[0-9]{4}', 'i')
This will return all rows where the specified column starts with three alphabet characters (A to Z) followed by four numbers. The 'i' at the end just says to ignore case on the alphabet part; you can drop that option if you wish.
Here are a couple of links with more information on the REGEXP_LIKE syntax:
Oracle SQL - REGEXP_LIKE contains characters other than a-z or A-Z
https://docs.oracle.com/cd/B28359_01/server.111/b28286/conditions007.htm#SQLRF00501
Try this; it will definitely work.
Let's take a sample example. First, create a sample table:
CREATE TABLE bob (
empno VARCHAR2(10) NOT NULL,
ename VARCHAR2(15) NOT NULL,
ssn VARCHAR2(10) NOT NULL);
Insert data into that table:
Insert into bob ( empno,ename,ssn) values ('abC0123458','zXe5378023','0pl4783202');
Insert into bob ( empno,ename,ssn) values ('0123abcdef','a12dldfddd','Rns0101010');
Query to select all the matching rows (TRANSLATE maps the first character of the search list to a space and deletes the rest, so the TRIM result is NULL only when every character of the substring is in the list):
select * from bob
where length(trim(translate(substr(empno,1,3),'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ',' '))) IS NULL
AND
length(trim(translate(substr(empno,4,4),'0123456789',' '))) IS NULL;
To do the same check on multiple columns, use the union operator:
select * from bob
where length(trim(translate(substr(empno,1,3),'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ',' '))) IS NULL
AND
length(trim(translate(substr(empno,4,4),'0123456789',' '))) IS NULL
union
select * from bob
where length(trim(translate(substr(ename,1,3),'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ',' '))) IS NULL
AND
length(trim(translate(substr(ename,4,4),'0123456789',' '))) IS NULL;

Oracle PL/SQL Use Merge command on data from XML Table

I have a PL/SQL procedure that currently gets data from an XML service and only does inserts.
xml_data := xmltype(GET_XML_F('http://test.example.com/mywebservice'));
--GET_XML_F gets the XML text from the site
INSERT INTO TEST_READINGS (TEST, READING_DATE, CREATE_DATE, LOCATION_ID)
SELECT round(avg(readings.reading_val), 2),
to_date(substr(readings.reading_dt, 1, 10),'YYYY-MM-DD'), SYSDATE,
p_location_id
FROM XMLTable(
XMLNamespaces('http://www.example.com' as "ns1"),
'/ns1:test1/ns1:series1/ns1:values1/ns1:value'
PASSING xml_data
COLUMNS reading_val VARCHAR2(50) PATH '.',
reading_dt VARCHAR2(50) PATH '@dateTime') readings
GROUP BY substr(readings.reading_dt,1,10), p_location_id;
I would like to be able to insert or update the data using a merge statement in the event that it needs to be re-run on the same day to find added records. I'm doing this in other procedures using the code below.
MERGE INTO TEST_READINGS USING DUAL
ON (LOCATION_ID = p_location_id AND READING_DATE = p_date)
WHEN NOT MATCHED THEN INSERT
(TEST_reading_id, site_id, test, reading_date, create_date)
VALUES (TEST_readings_seq.nextval, p_location_id,
p_value, p_date, SYSDATE)
WHEN MATCHED THEN UPDATE
SET TEST = p_value;
The fact that I'm pulling it from an XMLTable is throwing me off. Is there a way to get the data from the XMLTable while still using the (much cleaner) MERGE syntax? I would just delete the data beforehand and re-import, or use lots of conditional statements, but I would like to avoid doing so if possible.
Can't you simply put your SELECT into the MERGE statement?
I believe this should look more or less like this:
MERGE INTO TEST_READINGS tr USING (
SELECT
ROUND(AVG(readings.reading_val), 2) AS test
,TO_DATE(SUBSTR(readings.reading_dt, 1, 10),'YYYY-MM-DD') AS reading_date
,SYSDATE AS create_date
,p_location_id AS location_id
FROM
XMLTable(
XMLNamespaces('http://www.example.com' as "ns1")
,'/ns1:test1/ns1:series1/ns1:values1/ns1:value'
PASSING xml_data
COLUMNS
reading_val VARCHAR2(50) PATH '.',
reading_dt VARCHAR2(50) PATH '@dateTime'
) readings
GROUP BY
SUBSTR(readings.reading_dt,1,10)
,p_location_id
) readings ON (
    tr.LOCATION_ID = readings.location_id
    AND tr.READING_DATE = readings.reading_date
)
WHEN NOT MATCHED THEN
...
WHEN MATCHED THEN
...
;
