Storing unix timestamp versus string with UTC - time

I had a conversation today with my engineering coworkers about application and database design in fintech.
How do we handle time? We all know that we need to store datetime information normalized to UTC. But our debate is about whether we should store it as 1) a unix epoch timestamp, which is an integer in our database, for example 1596507157, which is 08/04/2020 at 2:12am UTC, or 2) an ISO 8601 string such as 2020-08-04T02:12:37+00:00 or 2020-08-04T02:12:37.123456789Z.
The obvious downside of a unix timestamp is that it is not immediately human-readable.
I am looking for design advice on which of the two approaches we should adopt.

In my opinion, you should store it as an integer rather than a string. One of many reasons is obvious: the integer needs only 4 bytes (8 if you want the range to extend past 2038), while an ISO 8601 string takes far more.
What you see as a downside doesn't impact the backend; human-readable formatting is just presentation, only needed in the user-facing view.
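To make that concrete, here is a minimal java.time sketch (assuming Java 8+; the epoch value is the one from the question) showing that the integer and the ISO 8601 string describe the same instant, and that the string form is purely presentation:

```java
import java.time.Instant;
import java.time.format.DateTimeFormatter;

public class EpochVsIso {
    public static void main(String[] args) {
        // The stored value: a plain count of seconds since 1970-01-01T00:00:00Z.
        long epochSeconds = 1596507157L;

        // Presentation only: render the same instant as an ISO 8601 string in UTC.
        Instant instant = Instant.ofEpochSecond(epochSeconds);
        String iso = DateTimeFormatter.ISO_INSTANT.format(instant); // 2020-08-04T02:12:37Z

        // And back again: parsing the ISO string recovers exactly the same integer.
        long roundTripped = Instant.parse(iso).getEpochSecond();    // 1596507157

        System.out.println(iso + " <-> " + roundTripped);
    }
}
```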

Most databases have a native datetime data type; I guess your 'epoch timestamp' effectively means this datetime or timestamp data type. A native datetime type is easier (and faster) for a database engineer to work with, and databases also provide datetime formatting functions
(MS SQL Server: CONVERT, MySQL: DATE_FORMAT, Oracle and PostgreSQL: TO_CHAR, ...).
In my opinion, store and handle the value as a datetime type and use a formatting function when a human needs to read it.
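As a rough sketch of that approach in Java (assuming a JDBC 4.2 driver and a hypothetical events table with a TIMESTAMP WITH TIME ZONE column named created_at), the value stays a native datetime end to end and is only formatted at the display edge:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.time.OffsetDateTime;
import java.time.format.DateTimeFormatter;

public class ReadNativeTimestamp {
    // Formatting happens only here, for display; storage, comparison and
    // date arithmetic all use the native datetime type inside the database.
    private static final DateTimeFormatter DISPLAY =
            DateTimeFormatter.ofPattern("MM/dd/yyyy HH:mm:ss");

    static void printCreatedAt(Connection conn, long id) throws Exception {
        String sql = "SELECT created_at FROM events WHERE id = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setLong(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                if (rs.next()) {
                    // JDBC 4.2 maps TIMESTAMP WITH TIME ZONE to OffsetDateTime.
                    OffsetDateTime createdAt = rs.getObject(1, OffsetDateTime.class);
                    System.out.println(DISPLAY.format(createdAt));
                }
            }
        }
    }
}
```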

Related

Changing format of date without using to_char - Oracle

I have to get the max payment date on an invoice and I am having trouble with the date format. I do not need the MAX in this formula itself, because the reporting tool I am using pulls the max from what the formula returns.
Using "to_char({datefield},'mm/dd/yyyy')" works for displaying the date the way we would like, BUT when the summary function MAX is applied it does not pull the correct date, because it is comparing a string and not a date (it will think 12/3/21 is larger than 3/2/22).
Another thing I have tried is "trunc({datefield})", which gives us the correct max date but changes the formatting. For example, if the date before the formula is applied is "8/12/21 12:00:00:000", the trunc formula displays it as 12-08-21, which is horribly wrong.
Long story short: I need a way to change a date/time to a date with the format 'mmmm/dd/yyyy' WITHOUT converting it to a string with something like to_char. Thank you!
A DATE is a binary data type consisting of 7 bytes representing: century, year-of-century, month, day, hour, minute and second. It ALWAYS has all of those components and it is NEVER stored with any (human-readable) format.
What you are seeing when a date is displayed is the client application you are using to access the database making a decision to be helpful to you, the user, and display the binary DATE provided by the database in a human-readable format.
If you want to change how the DATE is displayed then you either need to:
Change the settings on the client application that controls how it formats dates when it displays them to you; or
Convert the value so that it is no longer a DATE (which does not have a format) but a data type whose values can be formatted, such as a string; you can do this using TO_CHAR.
If you want to find the maximum then do it BEFORE applying the formatting:
SELECT TO_CHAR(MAX({datefield}),'mm/dd/yyyy')
FROM your_table;
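The pitfall the question describes is easy to reproduce outside the database as well; here is a small Java sketch using zero-padded versions of the dates from the question:

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.List;

public class StringMaxPitfall {
    public static void main(String[] args) {
        List<String> formatted = List.of("12/03/21", "03/02/22");

        // Taking the "max" of the formatted strings compares them lexicographically,
        // so "12/03/21" beats "03/02/22" even though it is three months earlier.
        String wrongMax = formatted.stream().max(String::compareTo).orElseThrow();

        // Comparing real date values (and formatting afterwards) gives the right answer.
        DateTimeFormatter fmt = DateTimeFormatter.ofPattern("MM/dd/yy");
        String rightMax = formatted.stream()
                .map(s -> LocalDate.parse(s, fmt))
                .max(LocalDate::compareTo)
                .map(fmt::format)
                .orElseThrow();

        System.out.println("max on strings: " + wrongMax); // 12/03/21
        System.out.println("max on dates:   " + rightMax); // 03/02/22
    }
}
```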

TO_TIMESTAMP_TZ Functionality

Does anyone have any idea how the TO_TIMESTAMP_TZ function works internally in Oracle?
I want to know how it converts a timestamp to a timestamp with time zone.
The documentation is very handy for questions like this.
TO_TIMESTAMP_TZ converts a string into a timestamp with timezone information. It doesn't convert something that's already a timestamp into a timestamp with timezone information, without first having a conversion back into a string - which, as I'm sure you're aware, you should always do explicitly.
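For what it is worth, the same shape exists in java.time: the explicit conversion starts from text plus a format and produces a value that carries an offset (this is only an analogy, not a claim about how Oracle implements the function internally):

```java
import java.time.OffsetDateTime;
import java.time.format.DateTimeFormatter;

public class ParseWithZone {
    public static void main(String[] args) {
        // Explicit conversion: the input is a string plus a format model,
        // and the output is a date-time that carries time zone information.
        DateTimeFormatter fmt = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss XXX");
        OffsetDateTime odt = OffsetDateTime.parse("2020-08-04 02:12:37 +02:00", fmt);

        System.out.println(odt); // 2020-08-04T02:12:37+02:00
    }
}
```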

How to convert a cassandra date object into epoch timestamp in java

I am creating a Java plugin for moving data from a Cassandra database to Elasticsearch. I am getting all the data, but the date I get from the database is in human-readable form, i.e. Row[Fri Jul 25 11:36:10 IST 2014]. I want this converted to an epoch timestamp such as 1414386721.
I do not know Cassandra, but according to the driver documentation the driver should be translating the date-time value in Cassandra to a java.util.Date object in Java.
You may be confused about how a java.util.Date object works. The j.u.Date class is confusing and difficult in many ways, one of which is that while a Date has no time zone its toString implementation on-the-fly applies the JVM’s current default time zone as it generates the string.
You may also be new to date-time work and therefore confusing a date-time object with its String representation. Consider that 1.4 is a number and should not be confused with its representation as a String in the format of a price, €1.40. Likewise, a date-time object is not a String, but it can be represented as a String generated in any number of formats.
Lastly, if you are indeed getting a java.util.Date object, learn to convert that to either the Joda-Time library or the java.time library. The java.util.Date and .Calendar classes are notoriously troublesome.
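Assuming the driver really does hand you a java.util.Date, a minimal sketch of producing an epoch value like the one in the question (getTime() returns milliseconds since 1970-01-01T00:00:00Z, so divide by 1,000 for whole seconds), with the java.time equivalent alongside:

```java
import java.time.Instant;
import java.util.Date;

public class DateToEpoch {
    public static void main(String[] args) {
        Date fromDriver = new Date(); // stand-in for the Date the Cassandra driver returns

        // Milliseconds since the epoch, independent of any time zone.
        long epochMillis = fromDriver.getTime();

        // Whole seconds, matching values like 1414386721 in the question.
        long epochSeconds = epochMillis / 1000L;

        // The same thing via java.time, the API worth migrating to.
        long viaInstant = fromDriver.toInstant().getEpochSecond();

        System.out.println(epochSeconds + " " + viaInstant);
    }
}
```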

hibernate JDBC type not found

Does Hibernate have any mapping for this Oracle (10g) data type:
TIMESTAMP(6) WITH TIME ZONE
I am getting:
No Dialect mapping for JDBC type: -101
My manager does not want to add the registerHibernateType(-101, Hibernate.getText().getname()) call.
He thinks it is too much. :)
What alternative can I have?
The answer you provide to yourself is more like a workaround than a proper solution. For the sake of the visitors looking for an answer, I'll provide my view on this:
1) Database date-based fields should always be stored in UTC, never with a specific time zone. Date calculation with time zone information is unneeded complexity. Remember that offsets usually change twice a year for a lot of countries in the world ("daylight saving time"). There is a reason why only a few RDBMSs support this data type, and there is a reason why the Hibernate developers refuse to support it. The patch for Hibernate is simple enough (one line of code); the implications are not.
2) Converting your "timestamp with time zone" to a String will only cause problems later. Once you retrieve it as a String, you will need to convert it back to a Date/Calendar object, an unneeded overhead, not to mention the risks associated with that conversion.
3) If you need to know which time zone a user is in, just store the String identifying the time zone (like "Europe/Prague"). You can use it in Java to build a Calendar with the date/time and the time zone, and it will take care of DST for you.
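A short sketch of point 3, assuming a hypothetical utcWhen instant loaded from the database and a separately stored zone string; the java.time version is shown next to the Calendar approach mentioned above:

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.util.Calendar;
import java.util.TimeZone;

public class UserLocalTime {
    // utcWhen is the UTC instant loaded from the database;
    // userZoneId is the separately stored zone string, e.g. "Europe/Prague".
    static ZonedDateTime inUserZone(Instant utcWhen, String userZoneId) {
        return utcWhen.atZone(ZoneId.of(userZoneId)); // DST is handled by the zone rules
    }

    // The same idea with the Calendar API referred to in point 3.
    static Calendar asCalendar(Instant utcWhen, String userZoneId) {
        Calendar cal = Calendar.getInstance(TimeZone.getTimeZone(userZoneId));
        cal.setTimeInMillis(utcWhen.toEpochMilli());
        return cal;
    }
}
```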
For now, I solved the problem by:
`select TO_CHAR(TRUNC(field)) from table` // field is the column whose type is TIMESTAMP WITH TIME ZONE
This ensures that when the query returns, the field has the datatype String.

Significance of date 02/31/2157?

I work in a large scale IT support environment. Twice now we have seen an invalid date of 02/31/2157 being inserted in an Oracle DATE column. So far I have not been able to reproduce this problem, but it appears to be happening occasionally when a user attempts to save '00/00/0000' into the column. I believe the value is originating from a PowerBuilder DataWindow update.
The application uses myriad libraries for all sorts of technologies, so this question may be a bit vague, but...
Has anyone seen the date 02/31/2157 in some established library that Oracle could be defaulting to when some other invalid date is entered? Perhaps an end-of-time concept analogous to the beginning-of-time date of 1/1/1970?
From http://download.oracle.com/docs/cd/B19306_01/server.102/b14220/datatype.htm#i1847:
"Oracle uses its own internal format to store dates. Date data is stored in fixed-length fields of seven bytes each, corresponding to century, year, month, day, hour, minute, and second."
2157-256 = 1901, which seems suspiciously close to a possible epoch of 1/1/1900 (or 12/13/1901 - which is the rollover date for the Year 2038 Problem)
I'd guess that it is storing either 0x00 or 0xFF in the date bytes, then getting confused when it decodes it. (How does it deal with month 255?)
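As a hedged illustration (the excess-100 encoding of the century and year bytes and the +1 offsets on the time bytes are how DUMP output is commonly read, not something the quoted page spells out), here is a small sketch of how seven such bytes map back to a date, and which bytes would yield 02/31/2157:

```java
public class OracleDateBytes {
    // Decode the 7 internal bytes of an Oracle DATE as commonly seen via DUMP:
    // century+100, year-of-century+100, month, day, hour+1, minute+1, second+1.
    static String decode(int[] b) {
        int year = (b[0] - 100) * 100 + (b[1] - 100);
        return String.format("%02d/%02d/%04d %02d:%02d:%02d",
                b[2], b[3], year, b[4] - 1, b[5] - 1, b[6] - 1);
    }

    public static void main(String[] args) {
        // 02/31/2157 00:00:00 would correspond to these bytes; nothing in the
        // encoding itself stops the day byte from holding an impossible 31 for February.
        System.out.println(decode(new int[]{121, 157, 2, 31, 1, 1, 1}));
    }
}
```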
Turns out this was a PowerBuilder issue. The field was created in the DataWindow as required, but was programmatically changed to be non-required before saving. So a null value was being saved to a non-nullable database column, and PowerBuilder inserted a dummy date instead of just throwing an error.
I remember getting a weird value when saving an invalid date. IIRC it was in PB 9 and we had to get an EBF for it. It was a problem with Date Editmasks and entering an invalid date that wasn't rejected. Sorry I don't have more details.
