I came across this link and the following:
If you specify a date value without a time component, then the default
time is midnight. If you specify a date value without a date, then the
default date is the first day of the current month.
Oracle DATE columns always contain fields for both date and time. If
your queries use a date format without a time portion, then you must
ensure that the time fields in the DATE column are set to midnight.
The suggested solution is to put a constraint on the column with the DATE data type, and to create a trigger (using TRUNC()) that fires when inserting or updating a row in the table.
If I use this solution, do I have a guarantee that Oracle stores fewer bytes for a date without the time?
With this standard datetime type, Oracle creates ambiguity. Is it so hard to provide a date-only type (containing only a date)? That is my opinion (I come from MSSQL).
No, you do not have any guarantee whatsoever... whatever happens, Oracle is going to store the fact that it's midnight. You cannot store a date without a time.
If you create the following table:
create table a ( dt date);
insert into a values(sysdate);
insert into a values(trunc(sysdate));
and then run this query:
select dt, dump(dt) from a
The values returned are:
+-----------------------------+------------------------------------+
| DT | DUMP(DT) |
+-----------------------------+------------------------------------+
| June, 12 2013 18:03:15+0000 | Typ=12 Len=7: 120,113,6,12,19,4,16 |
| June, 12 2013 00:00:00+0000 | Typ=12 Len=7: 120,113,6,12,1,1,1 |
+-----------------------------+------------------------------------+
DUMP() returns the datatype, the length in bytes and the internal representation of the data.
In other words, a date with a time and a date that has been truncated both take 7 bytes. They're the same length.
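You can also confirm this without reading it out of the dump; VSIZE reports the stored length in bytes:

-- VSIZE returns the number of bytes in the internal representation;
-- it reports 7 for both rows.
select dt, vsize(dt) from a;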
As a little aside, I would recommend against destroying potentially useful data because you're worried about space.
Storing only dates may save space if you use table compression.
Here's an example showing that storing only dates can reduce the segment size:
create table a (dt date) compress;
create table b (dt date) compress;
--Insert 20 million rows, with time
begin
for i in 1 .. 20 loop
insert /*+ append */ into a
select sysdate + numToDSInterval(level, 'second')
from dual connect by level <= 1000000;
commit;
end loop;
end;
/
--Insert 20 million rows, date only
begin
for i in 1 .. 20 loop
insert /*+ append */ into b
select trunc(sysdate + numToDSInterval(level, 'second'))
from dual connect by level <= 1000000;
commit;
end loop;
end;
/
select segment_name, bytes/1024/1024 MB
from dba_segments
where segment_name in ('A', 'B')
order by segment_name;
SEGMENT_NAME MB
------------ --
A 256
B 224
Oracle basic table compression only compresses entire values, and if there are fewer distinct values then compression can work better. But never fully believe any compression demo: you need to try it on your own data to be sure. This may be a best-case scenario; it is possible that compression will not help your data at all.
Table compression has many downsides: it requires Enterprise Edition, DML is slower, you cannot add a column to the table, etc.
Also, as Ben suggested, you should enforce the date-only rule with a check constraint instead of a trigger. It will be simpler, faster, and will not prevent direct-path writes, which are necessary to use basic table compression.
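A minimal sketch of that check-constraint approach (the table, column and constraint names here are placeholders):

-- Rejects any row whose time portion is not midnight.
alter table your_table add constraint date_only_chk
    check (date_col = trunc(date_col));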
Related
We need to store a value "13:45" in the column "Start_Time" of an Oracle table.
The value can be read as 45 minutes past 13:00.
Which datatype should be used when creating the table? Also, once queried, we would like to see only the value "13:45".
I would make it easier:
create table t_time_only (
    time_col varchar2(5),
    time_as_interval INTERVAL DAY TO SECOND invisible
        generated always as (to_dsinterval('0 '||time_col||':0')),
    constraint check_time
        check ( VALIDATE_CONVERSION(time_col as date, 'hh24:mi') = 1 )
);
The check constraint lets you validate input strings:
SQL> insert into t_time_only values('25:00');
insert into t_time_only values('25:00')
*
ERROR at line 1:
ORA-02290: check constraint (CHECK_TIME) violated
And the invisible virtual generated column lets you perform simple arithmetic operations:
SQL> insert into t_time_only values('15:30');
1 row created.
SQL> select trunc(sysdate) + time_as_interval as res from t_time_only;
RES
-------------------
2020-09-21 15:30:00
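Since the interval column is a real column, it can also be used for comparisons and ordering, for example:

-- List the stored times later than noon, in ascending order.
select time_col
from t_time_only
where time_as_interval > to_dsinterval('0 12:00:0')
order by time_as_interval;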
Your best option is to store the data in a DATE type column. If you are going to be doing any comparisons against the times (querying, sorting, etc.), you will want to make sure that all of the times use the same day. It doesn't matter which day, as long as they are all the same.
CREATE TABLE test_time
(
time_col DATE
);
INSERT INTO test_time
VALUES (TO_DATE ('13:45', 'HH24:MI'));
INSERT INTO test_time
VALUES (TO_DATE ('8:45', 'HH24:MI'));
Test Query
SELECT time_col,
TO_CHAR (time_col, 'HH24:MI') AS just_time,
24 * (time_col - LAG (time_col) OVER (ORDER BY time_col)) AS difference_in_hours
FROM test_time
ORDER BY time_col;
Test Results
TIME_COL     JUST_TIME    DIFFERENCE_IN_HOURS
____________ ____________ ______________________
01-SEP-20    08:45
01-SEP-20    13:45        5
Table Definition using INTERVAL
create table tab
(tm INTERVAL DAY(1) to SECOND(0));
Input value as literal
insert into tab (tm) values (INTERVAL '13:25' HOUR TO MINUTE );
Input value dynamically
insert into tab (tm) values ( (NUMTODSINTERVAL(13, 'hour') + NUMTODSINTERVAL(26, 'minute')) );
Output
You may either EXTRACT the hour and minute:
EXTRACT(HOUR FROM tm) int_hour,
EXTRACT(MINUTE FROM tm) int_minute
or use formatted output, with the trick of adding some fixed DATE:
to_char(DATE'2000-01-01'+tm,'hh24:mi') int_format
which gives
13:25
13:26
Please see this answer for other formatting options for HH24:MI.
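If you would rather avoid the fixed-date trick, a zero-padded result can also be built from the extracted fields (a sketch):

-- LPAD pads the extracted numbers to two digits, giving e.g. '13:25'.
select lpad(extract(hour from tm), 2, '0') || ':' ||
       lpad(extract(minute from tm), 2, '0') as int_format
from tab;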
The INTERVAL definition used may store seconds as well; if this is not acceptable, add a CHECK constraint, e.g. as follows (adjust as required):
tm INTERVAL DAY(1) to SECOND(0)
constraint "wrong interval" check (tm <= INTERVAL '23:59' HOUR TO MINUTE and EXTRACT(SECOND FROM tm) = 0 )
This rejects the following as invalid input
insert into tab (tm) values (INTERVAL '13:25:30' HOUR TO SECOND );
-- ORA-02290: check constraint (X.wrong interval) violated
Related
Oracle 12c, non-partitioned, no ASM.
This is the background. I have a table with multiple columns, 3 of them being -
TRAN_DATE DATE
TRAN_TIME TIMESTAMP(6)
FINAL_DATETIME NOT NULL TIMESTAMP(6)
The table has around 70 million records. What I want to do is concatenate the tran_date and tran_time fields and update the final_datetime field with that output, for all 70 million records.
This is the query I have -
update MYSCHEMA.MYTAB
set FINAL_DATETIME = to_timestamp(
    to_char(tran_date, 'YYYY-MM-DD') || ' ' || to_char(tran_time, 'HH24:MI:SS.FF'),
    'YYYY-MM-DD HH24:MI:SS.FF')
Eg:
At present (for one record)
TRAN_DATE=01-DEC-16
TRAN_TIME=01-JAN-70 12.35.58.000000 AM /*I need only the time part from this*/
FINAL_DATETIME=22-OCT-18 04.37.18.870000 PM
After the update, FINAL_DATETIME needs to be
01-DEC-16 12.35.58.000000 AM
The to_timestamp approach does require two to_char conversions, and I fear this will slow down the update a lot. Any suggestions?
What more can I do to increase performance? No one else will be using the table at this point, so I do have the option to
Drop indices
Turn off logging
and anything more anyone can suggest.
Any help is appreciated.
I would prefer the CTAS (create table as select) method, and your job will be simpler if you don't have indexes, triggers and constraints on your table.
Create a new table for the column to be modified.
CREATE TABLE mytab_new
NOLOGGING
AS
SELECT /*+ FULL(mytab) PARALLEL(mytab, 10) */ --hint to speed up the select.
CAST(tran_date AS TIMESTAMP) + ( tran_time - trunc(tran_time) ) AS final_datetime
FROM mytab;
I have included only one column (the final one) in the new table, because storing the other two there is a waste of resources. You may include other columns in the select, apart from the two now-redundant ones.
Read logging/nologging to learn about the NOLOGGING option in the statement above.
The next step is to rebuild the indexes, triggers and constraints for the new table mytab_new, using the definitions from mytab for the other columns if they exist.
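For example (the index name and definitions below are hypothetical; use the actual ones from mytab):

-- Hypothetical rebuild steps; adjust to the original table's definitions.
create index mytab_new_ix on mytab_new (final_datetime) nologging;
alter table mytab_new modify (final_datetime not null);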
Then rename the tables
rename mytab to mytab_bkp;
rename mytab_new to mytab;
You may drop the table mytab_bkp after validating the new table or later when you feel you no longer need it.
Related
I have a big table in Vertica which has time_stamp (int) as a Unix timestamp. I want to partition this table on a weekly basis (weeks starting on Monday).
Is there a better way to do this in one step, rather than converting time_stamp from Unix to a Vertica TIMESTAMP and then partitioning?
Optimally, you should be using the date/time type. You won't be able to use non-deterministic functions such as TO_TIMESTAMP in the PARTITION BY expression. The alternative is to use math to logically create the partitions:
Using a Unix timestamp, divide by the following to partition by each unit:

UNIT     DIVIDE BY
-------  -------------------------
Minutes  60
Hours    60 * 60 (3600)
Days     60 * 60 * 24 (86400)
Weeks    60 * 60 * 24 * 7 (604800)
If we use 604800, this will give you the week number counted from January 1, 1970 00:00:00 UTC. Note that the Unix epoch fell on a Thursday, so if your weeks must start on Monday you need to shift the timestamp back to the first Monday (January 5, 1970), e.g. FLOOR((time_stamp - 345600) / 604800).
Let's set up a test table:
CREATE TABLE public.test (
time_stamp int NOT NULL
);
INSERT INTO public.test (time_stamp) VALUES (1404305559);
INSERT INTO public.test (time_stamp) VALUES (1404305633);
INSERT INTO public.test (time_stamp) VALUES (1404305705);
INSERT INTO public.test (time_stamp) VALUES (1404305740);
INSERT INTO public.test (time_stamp) VALUES (1404305778);
COMMIT;
Let's create the partition:
ALTER TABLE public.test PARTITION BY FLOOR(time_stamp/604800) REORGANIZE;
We then get:
NOTICE 4954: The new partitioning scheme will produce 1 partitions
WARNING 6100: Using PARTITION expression that returns a Numeric value
HINT: This PARTITION expression may cause too many data partitions. Use of an expression that returns a more accurate value, such as a regular VARCHAR or INT, is encouraged
NOTICE 4785: Started background repartition table task
ALTER TABLE
You'll also want to be mindful of how many partitions this creates; Vertica recommends keeping the number of partitions between 10 and 20.
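One way to sanity-check that before reorganizing is to count how many distinct week numbers your data would produce:

-- Each distinct week_no becomes one partition under this scheme.
SELECT FLOOR(time_stamp / 604800) AS week_no, COUNT(*) AS row_count
FROM public.test
GROUP BY 1
ORDER BY 1;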
Related
I have a table that has StartDate and EndDate fields, and also a ton of other fields. I need to break out each record by all the days between and including StartDate and EndDate into another table that looks exactly like the original, except it has a CurrentDate field and two calculated fields. The CurrentDate field is the current date between StartDate and EndDate that I'm iterating on.
My question is: since there are a ton of fields in this, is there any easy way, from within my stored proc, to insert the entire row the cursor is currently on AND this one new column, without having to list out every single column in the insert statement? It's so tedious.
If your source and destination tables fit this profile:
Destination table columns are the same as your source table's columns, and
The new destination column is at the end
... then you could do something like this:
INSERT INTO dest_table
SELECT Source_Table.*, new_value
FROM Source_Table
WHERE Source_Table.PKValue = cursor.PKValue
If it's a case of your cursor resembling the destination table, something like this may work, but note I haven't tested it:
CREATE PROCEDURE whatever IS
  destRow dest_table%ROWTYPE;
  CURSOR fromSourceTable IS
    SELECT <your existing select list>, NULL AS new_value
    FROM <the rest of your cursor query>;
BEGIN
  FOR destRow IN fromSourceTable LOOP
    destRow.new_value := <the split date>;
    INSERT INTO dest_table VALUES destRow;
  END LOOP;
END whatever;
I'm going out on a limb with the NULL AS new_value. If you have trouble try CAST(NULL AS DATE) AS new_value instead, and if you still have trouble try something like SYSDATE AS new_value. Again, this isn't tested but if you think it's promising and have trouble implementing I'd be happy to test it.
It's easy enough to densify the data in a single SQL statement, assuming that you know a reasonable minimum and maximum range for your begin_date and end_date (I'll assume Jan 1, 2000 - Dec 31, 2020 for the moment, but you can obviously adjust that):
WITH all_days AS (
  SELECT date '2000-01-01' + level - 1 dt
  FROM dual
  CONNECT BY level <= date '2020-12-31' - date '2000-01-01' + 1
)
SELECT <<list of columns from your table>>,
       all_days.dt current_dt
FROM your_table actual
JOIN all_days ON (actual.begin_date <= all_days.dt AND
                  actual.end_date >= all_days.dt)
If you don't want to hard-code the starting and ending dates, you can fetch them from your table as well. That just requires that you hit the table a second time, which will generally be less efficient.
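A sketch of that variant, deriving the bounds from the table itself in a second CTE (untested; same column names assumed):

WITH bounds AS (
  -- Hit the table once up front to find the overall date range.
  SELECT MIN(begin_date) min_dt, MAX(end_date) max_dt
  FROM your_table
),
all_days AS (
  SELECT min_dt + level - 1 dt
  FROM bounds
  CONNECT BY level <= max_dt - min_dt + 1
)
SELECT actual.*, all_days.dt
FROM your_table actual
JOIN all_days ON (actual.begin_date <= all_days.dt AND
                  actual.end_date >= all_days.dt);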
Related
I have inherited an Oracle .dmp file which I'm trying to get into CSV so that I can load it into MySQL.
The general approach I'm using is described here. I'm having a problem with one row, though: it contains a date of 5544-09-14, like so:
alter session set nls_date_format = 'dd-MON-yyyy';
select OID, REF, TRADING_DATE From LOAN WHERE REF = 'XXXX';
OID REF TRADING_DATE
--- -------------------- ------------
1523 XXXX 14-SEP-5544
This is garbage data from the legacy system, which didn't validate the input dates. But I'm wondering why my PL/SQL function to export the data chokes on this value?
It exports that row with a TRADING_DATE value of '0000-00-00T00:00:00', and I'm not sure why.
SELECT dump(TRADING_DATE) FROM LOAN WHERE REF = 'XXXX';
DUMP(TRADING_DATE)
--------------------------------------------------------------------------------
Typ=12 Len=7: 44,156,9,14,1,1,1
and
SELECT to_char(trading_date, 'YYYYMMDDHH24MISS') FROM LOAN WHERE REF = 'XXXX';
TO_CHAR(TRADIN
--------------
00000000000000
The value stored in that column is not a valid date. The first byte of the dump should be the century, which according to Oracle support note 69028.1 is stored in 'excess-100' notation; that means it should hold 100 plus the actual century, so 1900 would be 119, 2000 would be 120, and 5500 would be 155. A value of 44 therefore represents -5600, and the date you have stored appears to actually represent 5544-09-14 BC. As Oracle only supports dates with years between -4713 and +9999, this isn't recognised.
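As a quick sanity check on that arithmetic, decoding the first two bytes of your dump (both in excess-100 notation) gives the same year:

-- Century byte 44 -> 44 - 100 = -56; year byte 156 -> 156 - 100 = 56.
select (44 - 100) * 100 + (156 - 100) as decoded_year from dual;
-- => -5544, i.e. 5544 BC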
You can recreate this fairly easily; the trickiest bit is getting the invalid date into the database in the first place:
create table t42(dt date);
Table created.
declare
d date;
begin
dbms_stats.convert_raw_value('2c9c090e010101', d);
insert into t42 (dt) values (d);
end;
/
PL/SQL procedure successfully completed.
select dump(dt), dump(dt, 1016) from t42;
DUMP(DT)
--------------------------------------------------------------------------------
DUMP(DT,1016)
--------------------------------------------------------------------------------
Typ=12 Len=7: 45,56,9,14,1,1,1
Typ=12 Len=7: 2d,38,9,e,1,1,1
So this has a single row with the same data you do. Using alter session I can see what looks like a valid date:
alter session set nls_date_format = 'DD-Mon-YYYY';
select dt from t42;
DT
-----------
14-Sep-5544
alter session set nls_date_format = 'YYYYMMDDHH24MISS';
select dt from t42;
DT
--------------
55440914000000
But if I use an explicit date mask it just gets zeros:
select to_char(dt, 'DD-Mon-YYYY'), to_char(dt, 'YYYYMMDDHH24MISS') from t42;
TO_CHAR(DT,'DD-MON-Y TO_CHAR(DT,'YY
-------------------- --------------
00-000-0000 00000000000000
And if I run your procedure:
exec dump_table_to_csv('T42');
The resultant CSV has:
"DT"
"0000-00-00T00:00:00"
I think the difference is that those that attempt to show the date are sticking with internal date data type 12, while those that show zeros are using external data type 13, as mentioned in note 69028.1.
So in short, your procedure isn't doing anything wrong, the date it's trying to export is invalid internally. Unless you know what date it was supposed to be, which seems unlikely given your starting point, I don't think there's much you can do about it other than guess or ignore it. Unless, perhaps, you know how the data was inserted and can work out how it got corrupted.
I think it's more likely to be from an OCI program than what I did here; this 'raw' trick was originally from here. You might also want to look at note 331831.1. And this previous question is somewhat related.