Inserting a record with a timestamp(6) field from Oracle to PostgreSQL via dblink - losing timestamp precision?

I am having trouble inserting a new record into PostgreSQL from an Oracle database server. After inserting a new record into PostgreSQL, I lose the precision of the timestamp fields: all of the microsecond digits are lost. Here is my sample code:
declare
    v_date timestamp(6) := to_timestamp('2013-06-04 12:03:01.123456', 'YYYY-MM-DD HH24:MI:SS.FF6');
begin
    dbms_output.put_line(v_date);
    insert into "public"."DAS_ITEM"#PG_LINK ("DOCKID", "CANDY_ITM_NBR", "MODIFIED_ON")
    values (1, 3, v_date);
    commit;
end;
After running the PL/SQL block, I query the data directly from PostgreSQL:
select "DOCKID","CANDY_ITM_NBR", to_char("MODIFIED_ON", 'YYYY-MM-DD HH24:MI:SS.US') from "DAS_ITEM";
and here is the result:
 DOCKID | CANDY_ITM_NBR |         MODIFIED_ON
--------+---------------+----------------------------
      1 |             3 | 2013-06-04 12:03:01.000000
Currently the value of the MODIFIED_ON field is '2013-06-04 12:03:01.000000', but I expected it to be '2013-06-04 12:03:01.123456'.
Please help me; I have been stuck on this for 36 hours.
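A possible workaround (untested here, and an assumption rather than a confirmed fix) is to bypass the gateway's timestamp bind mapping entirely and send the INSERT to PostgreSQL as literal SQL via DBMS_HS_PASSTHROUGH; the sketch below assumes the database link is named PG_LINK and is a heterogeneous services / Database Gateway link:
declare
    v_rows integer;
begin
    -- ship the statement verbatim to PostgreSQL; the timestamp is written as a
    -- literal with full microsecond precision instead of being bound through the gateway
    v_rows := dbms_hs_passthrough.execute_immediate@PG_LINK(
        'INSERT INTO "public"."DAS_ITEM" ("DOCKID","CANDY_ITM_NBR","MODIFIED_ON")
         VALUES (1, 3, TIMESTAMP ''2013-06-04 12:03:01.123456'')');
    commit;
end;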

Related

View Performance

I have a requirement to perform some calculation on a column of a table with a large data set (300 GB) and return that value.
Basically I need to create a view on that table. The table holds 21 years of data and is partitioned on a date column (daily). We cannot put a date condition in the view's query; the user will apply the filter at runtime when querying the view.
For example:
Create view v_view as
select * from table;
Now I want to query the view like:
select * from v_view where ts_date between '1-Jan-19' and '1-Jan-20'
How does Oracle execute the above statement internally? Will it run the view's query first and then apply the date filter to the result?
If so, won't there be a performance issue, and how can it be resolved?
Oracle first generates the view and then applies the filter. You can create a function whose input is supplied by the user; the function returns a CREATE VIEW statement, and if you run that statement the view is created. Just run:
create or replace function fnc_x(where_condition in varchar2)
return varchar2
as
begin
return ' CREATE OR REPLACE VIEW sup_orders AS
SELECT suppliers.supplier_id, orders.quantity, orders.price
FROM suppliers
INNER JOIN orders
ON suppliers.supplier_id = orders.supplier_id
'||where_condition||' ';
end fnc_x;
This function should then be executed. Its input is a string like this:
'WHERE suppliers.supplier_name = ''Microsoft'''
(note that Microsoft has to be quoted as a string literal inside the WHERE clause, or the generated view will treat it as a column name). Then run a block like this to execute the function's result:
cl scr
set SERVEROUTPUT ON
declare
szSql varchar2(3000);
crte_vw varchar2(3000);
begin
szSql := 'select fnc_x(''WHERE suppliers.supplier_name = ''''Microsoft'''''') from dual';
dbms_output.put_line(szSql);
execute immediate szSql into crte_vw; -- generate the CREATE VIEW command that depends on the user's where_condition
dbms_output.put_line(crte_vw);
execute immediate crte_vw ; -- create the view
end;
In this manner, you only need to receive the where_condition from the user.
Oracle can "push" the predicates inside simple views and can then use those predicates to enable partition pruning for optimal performance. You almost never need to worry about what Oracle will run first - it will figure out the optimal order for you. Oracle does not need to mindlessly build the first step of a query, and then send all of the results to the second step. The below sample schema and queries demonstrate how only the minimal amount of partitions are used when a view on a partitioned table is queried.
--drop table table1;
--Create a daily-partitioned table.
create table table1(id number, ts_date date)
partition by range(ts_date)
interval (numtodsinterval(1, 'day'))
(
partition p1 values less than (date '2000-01-01')
);
--Insert 1000 values, each in a separate day and partition.
insert into table1
select level, date '2000-01-01' + level
from dual
connect by level <= 1000;
--Create a simple view on the partitioned table.
create or replace view v_view as select * from table1;
The following explain plan shows "Pstart" and "Pstop" set to 3 and 4, which means that only 2 of the many partitions are used for this query.
--Generate an explain plan for a simple query on the view.
explain plan for
select * from v_view where ts_date between date '2000-01-02' and date '2000-01-03';
--Show the explain plan.
select * from table(dbms_xplan.display(format => 'basic +partition'));
Plan hash value: 434062308
-----------------------------------------------------------
| Id | Operation | Name | Pstart| Pstop |
-----------------------------------------------------------
| 0 | SELECT STATEMENT | | | |
| 1 | PARTITION RANGE ITERATOR| | 3 | 4 |
| 2 | TABLE ACCESS FULL | TABLE1 | 3 | 4 |
-----------------------------------------------------------
However, partition pruning and predicate pushing do not always work when we think they should. One thing we can do to help the optimizer is to use date literals instead of strings that look like dates. For example, replace '1-Jan-19' with date '2019-01-01'. When we use ANSI date literals there is no ambiguity, and Oracle is more likely to use partition pruning.
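For example, the filter from the question rewritten with ANSI date literals (using the question's v_view and ts_date names) would look like this:
select *
from v_view
where ts_date between date '2019-01-01' and date '2020-01-01';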

Insert a string into a timestamp(6) column

I have timestamps looking like this: 2019-06-13 13:22:30.521000000
I am using Spark/Scala scripts to insert them into an Oracle table. The column in Oracle is TIMESTAMP(6) and should stay that way.
This is what I do:
What I have in Spark is a DataFrame (df) containing a column with my timestamps:
+-----------------------------+
| time |
+-----------------------------+
|2019-06-13 13:22:30.521000000|
+-----------------------------+
I do the following:
df.withColumn("time", (unix_timestamp(substring(col("time"), 1, 23), "yyyy-MM-dd HH:mm:ss.SSS") + substring(col("time"), -6, 6).cast("float") / 1000000).cast(TimestampType))
and I insert using a connection to Oracle (the insert script was tested and works fine).
But in Oracle I only see the following in my table:
+--------------------------+
| time |
+--------------------------+
|2019-06-13 13:22:30.000000|
+--------------------------+
The milliseconds aren't included. Any help please? Thank you!
If your time column is a timestamp type, you can try date_format:
https://sparkbyexamples.com/spark/spark-sql-how-to-convert-date-to-string-format/
I thank everyone who tried to help me.
This is what I did to get the desired output:
df.withColumn("time", (unix_timestamp(substring(col("time"), 1, 23), "yyyy-MM-dd HH:mm:ss.SSS") + substring(col("time"), -9, 9).cast("float") / 1000000000).cast(TimestampType))
All other solutions kept returning null or timestamps without milliseconds.
Hope it helps someone.
I don't know the tools you use, but if it were only Oracle, then to_timestamp with an appropriate format mask does the job. See if it helps.
SQL> create table test (col timestamp(6));
Table created.
SQL> insert into test (col) values
2 (to_timestamp('2019-06-13 13:22:30.521000000', 'yyyy-mm-dd hh24:mi:ss.ff'));
1 row created.
SQL> select * From test;
COL
---------------------------------------------------------------------------
13.06.19 13:22:30,521000
SQL>
[EDIT, as you can't read my mind (at least, I hope so)]
As you (AbderrahmenM) said, you have a string but still want to insert a timestamp; perhaps you could use a stored procedure. Here's an example:
SQL> create or replace procedure p_test (par_time in varchar2)
2 is
3 begin
4 insert into test (col) values
5 (to_timestamp(par_time, 'yyyy-mm-dd hh24:mi:ss.ff'));
6 end;
7 /
Procedure created.
SQL> exec p_test('2019-06-13 13:22:30.521000000');
PL/SQL procedure successfully completed.
SQL> select * from test;
COL
-------------------------------------------------------------------
13.06.19 13:22:30,521000
SQL>
Now, the only thing I can't help with is how to call a procedure from Spark. If you know how, then simply pass the string you have and it should be properly inserted into the database; pay attention to the correct format mask!

Trigger to update week of the year

I want to write a trigger so that when decom_date is inserted or updated, the week of the year is updated to the corresponding value.
This is what I have so far, but after inserting a date the week is still null.
create or replace trigger test_trigger
before insert on check_decom
for each row
begin
if inserting then
update check_decom set decom_week = (select to_char(to_date(decom_date,'DD-MON-YY'),'WW') as week from check_decom);
end if;
end;
/
SQL> select * from check_decom;
DECOM_DATE DECOM_WEEK
------------------------------ ----------
23-JUN-17
What am I doing wrong?
Example for Week of a year
SQL> select to_char(to_date(sysdate,'DD-MON-YY'),'WW') as week from dual;
WE
--
28
You're doing a couple of things wrong, starting with date handling. Your decom_date column should be defined as a DATE column - it looks like it might be a string in your sample output. But your handling with sysdate is also wrong, as you're implicitly converting to a string in order to convert it back to a date, which is both pointless and prone to error as this might happen in a session which has different NLS settings. If your column is actually a DATE then you should not be calling to_date() against that either; and if it is a string then that conversion is valid but it should be a DATE.
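For example, the week number can be read straight from a DATE value with a single to_char() call, no to_date() round trip needed:
-- to_char() accepts a DATE directly; no implicit date-to-string-to-date conversion
select to_char(sysdate, 'WW') as week from dual;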
Then your trigger is querying, and trying to update, the table that the trigger is against. With no data that doesn't error, but it doesn't do anything either, as there is no existing row to update - the one you are inserting doesn't exist yet. If there were data you would get a mutating-table error - if you didn't get a too-many-rows exception from the select part first.
Row-level triggers can access NEW and OLD pseudorecords to see and manipulate the affected row; you don't need to (and generally can't) use DML queries to access the data in the row you're manipulating.
If your table was defined with a date column and a number column:
create table check_decom(decom_date date, decom_week number);
then your trigger might look something like:
create or replace trigger test_trigger
before insert on check_decom
for each row
begin
if inserting then
:new.decom_week := to_number(to_char(:new.decom_date, 'WW'));
end if;
end;
/
although the if inserting check is a bit pointless as the trigger will only fire on insert anyway. That in itself might be an issue; you perhaps want it to fire on update as well, but the logic is the same, so it would be:
create or replace trigger test_trigger
before insert or update on check_decom
for each row
begin
:new.decom_week := to_number(to_char(:new.decom_date, 'WW'));
end;
/
which does what you want:
insert into check_decom (decom_date) values (date '2017-06-23');
1 row inserted.
select * from check_decom;
DECOM_DAT DECOM_WEEK
--------- ----------
23-JUN-17 25
But I wouldn't do this with a trigger at all. From Oracle 11g you can use a virtual column instead:
create table check_decom (
decom_date date,
decom_week generated always as (to_number(to_char(decom_date, 'WW')))
);
Table CHECK_DECOM created.
insert into check_decom (decom_date) values (date '2017-06-23');
1 row inserted.
select * from check_decom;
DECOM_DAT DECOM_WEEK
--------- ----------
23-JUN-17 25

Does storing date without time use less bytes?

I saw this link and this:
If you specify a date value without a time component, then the default
time is midnight. If you specify a date value without a date, then the
default date is the first day of the current month.
Oracle DATE columns always contain fields for both date and time. If
your queries use a date format without a time portion, then you must
ensure that the time fields in the DATE column are set to midnight.
The solution is to put a constraint on the column with the DATE data type and create a trigger (with TRUNC()) when inserting or updating a row in the table.
If I use this solution, do I have a guarantee that Oracle stores fewer bytes for a date without the time?
With this standard datetime type Oracle creates ambiguity. Is it so hard to create a date type (containing only a date)? This is my opinion (I came from MSSQL).
No, you do not have any guarantee whatsoever; whatever happens, Oracle is going to store the fact that it's midnight. You cannot store a date without a time.
If you create the following table:
create table a ( dt date);
insert into a values(sysdate);
insert into a values(trunc(sysdate));
and then run this query:
select dt, dump(dt) from a
SQL Fiddle
The values returned are:
+-----------------------------+------------------------------------+
| DT | DUMP(DT) |
+-----------------------------+------------------------------------+
| June, 12 2013 18:03:15+0000 | Typ=12 Len=7: 120,113,6,12,19,4,16 |
| June, 12 2013 00:00:00+0000 | Typ=12 Len=7: 120,113,6,12,1,1,1 |
+-----------------------------+------------------------------------+
DUMP() returns the datatype, the length in bytes and the internal representation of the data.
In other words, a date with a time and a date that has been truncated both take 7 bytes. They're the same length.
As a little aside I would recommend against destroying potentially useful data because you're worried about space.
Storing only dates may save space if you use table compression.
Here's an example showing that storing only dates can reduce the segment size:
create table a (dt date) compress;
create table b (dt date) compress;
--Insert 20 million rows, with time
begin
for i in 1 .. 20 loop
insert /*+ append */ into a
select sysdate + numToDSInterval(level, 'second')
from dual connect by level <= 1000000;
commit;
end loop;
end;
/
--Insert 20 million rows, date only
begin
for i in 1 .. 20 loop
insert /*+ append */ into b
select trunc(sysdate + numToDSInterval(level, 'second'))
from dual connect by level <= 1000000;
commit;
end loop;
end;
/
select segment_name, bytes/1024/1024 MB
from dba_segments
where segment_name in ('A', 'B')
order by segment_name;
SEGMENT_NAME MB
------------ --
A 256
B 224
Oracle basic table compression only compresses entire values, and if there are fewer distinct values then compression can work better. But never fully believe any compression demo - you need to try it on your own data to be sure. This may be a best-case scenario; it is possible that compression will not help your data at all.
Table compression has many downsides - it requires enterprise edition, DML is slower, you cannot add a column to the table, etc.
Also, as Ben suggested, you should enforce the date-only rule with a check constraint instead of a trigger. It will be simpler, faster, and will not prevent direct-path writes, which are necessary to use basic table compression.
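For example, a minimal sketch of such a check constraint, reusing the date-only table b from the compression demo above (the constraint name is illustrative):
-- reject any value whose time portion is not midnight
alter table b add constraint b_date_only_chk check (dt = trunc(dt));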

Trying to export an Oracle table via PL/SQL gives a date of 0000-00-00

I have inherited an Oracle .dmp file which I'm trying to get into CSV so that I can load it into MySQL.
The general approach I'm using is described here. I'm having a problem with one row though. It contains a date of 5544-09-14 like so:
alter session set nls_date_format = 'dd-MON-yyyy';
select OID, REF, TRADING_DATE From LOAN WHERE REF = 'XXXX';
OID REF TRADING_DATE
--- -------------------- ------------
1523 XXXX 14-SEP-5544
This is garbage data from the legacy system, which didn't validate the input dates. I'm wondering, though, why my PL/SQL function to export the data chokes on this value.
It exports that row with a TRADING_DATE value of '0000-00-00T00:00:00' and I'm not sure why.
SELECT dump(TRADING_DATE) FROM LOAN WHERE REF = 'XXXX';
DUMP(TRADING_DATE)
--------------------------------------------------------------------------------
Typ=12 Len=7: 44,156,9,14,1,1,1
and
SELECT to_char(trading_date, 'YYYYMMDDHH24MISS') FROM LOAN WHERE REF = 'XXXX';
TO_CHAR(TRADIN
--------------
00000000000000
The value stored in that column is not a valid date. The first byte of the dump should be the century, which according to Oracle support note 69028.1 is stored in 'excess-100' notation, which means it should have a value of 100 + the actual century; so 1900 would be 119, 2000 would be 120, and 5500 would be 155. So 44 would represent -5600; the date you have stored appears to actually represent 5544-09-14 BC. As Oracle only supports dates with years between -4713 and +9999, this isn't recognised.
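For comparison, dumping a valid date shows the excess-100 century and year bytes:
-- 2013 is stored as century byte 120 (100 + 20) and year byte 113 (100 + 13)
select dump(date '2013-06-12') from dual;
-- Typ=12 Len=7: 120,113,6,12,1,1,1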
You can recreate this fairly easily; the trickiest bit is getting the invalid date into the database in the first place:
create table t42(dt date);
Table created.
declare
d date;
begin
dbms_stats.convert_raw_value('2c9c090e010101', d);
insert into t42 (dt) values (d);
end;
/
PL/SQL procedure successfully completed.
select dump(dt), dump(dt, 1016) from t42;
DUMP(DT)
--------------------------------------------------------------------------------
DUMP(DT,1016)
--------------------------------------------------------------------------------
Typ=12 Len=7: 45,56,9,14,1,1,1
Typ=12 Len=7: 2d,38,9,e,1,1,1
So this has a single row with the same data you do. Using alter session I can see what looks like a valid date:
alter session set nls_date_format = 'DD-Mon-YYYY';
select dt from t42;
DT
-----------
14-Sep-5544
alter session set nls_date_format = 'YYYYMMDDHH24MISS';
select dt from t42;
DT
--------------
55440914000000
But if I use an explicit date mask it just gets zeros:
select to_char(dt, 'DD-Mon-YYYY'), to_char(dt, 'YYYYMMDDHH24MISS') from t42;
TO_CHAR(DT,'DD-MON-Y TO_CHAR(DT,'YY
-------------------- --------------
00-000-0000 00000000000000
And if I run your procedure:
exec dump_table_to_csv('T42');
The resultant CSV has:
"DT"
"0000-00-00T00:00:00"
I think the difference is that those that attempt to show the date are sticking with internal date data type 12, while those that show zeros are using external data type 13, as mentioned in note 69028.1.
So in short, your procedure isn't doing anything wrong, the date it's trying to export is invalid internally. Unless you know what date it was supposed to be, which seems unlikely given your starting point, I don't think there's much you can do about it other than guess or ignore it. Unless, perhaps, you know how the data was inserted and can work out how it got corrupted.
I think it's more likely to be from an OCI program than what I did here; this 'raw' trick was originally from here. You might also want to look at note 331831.1. And this previous question is somewhat related.
