Store date and time in Tarantool

Suppose I need a field in a tuple that should hold a date with time. Tarantool doesn't support date and time types out of the box.
I see two solutions:
Store the date and time as a string and parse it.
Store the date and time as epoch seconds and convert when needed.
What is the best solution to work with dates and times in Tarantool?

You should use the UNIX time format (seconds since the UNIX epoch) for two reasons:
- it's compact
- the desired ordering can be achieved with a TREE index on the 'unsigned' type.
If you deal with multiple time zones, it's best to convert to UNIX time before inserting into the database, and store the time zone in a separate field.
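For illustration, here is a minimal Go sketch of that preparation step (the sample zone and variable names are mine, not from the original answer): convert to epoch seconds before insertion and keep the zone name in a separate field.

package main

import (
	"fmt"
	"time"
)

func main() {
	// The event time in its original zone.
	loc, _ := time.LoadLocation("Europe/Moscow")
	t := time.Date(2021, 6, 29, 14, 30, 0, 0, loc)

	// Store epoch seconds (zone-independent, sorts correctly in a TREE index)
	// plus the zone name in a separate field.
	epoch := t.Unix()    // 1624966200
	zone := loc.String() // "Europe/Moscow"
	fmt.Println(epoch, zone)

	// Convert back for display in the original zone when needed.
	fmt.Println(time.Unix(epoch, 0).In(loc).Format(time.RFC3339))
}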

Support for datetime is an upcoming feature in Tarantool 2.10.0.

Related

What field type would give better index performance in Oracle DB?

I have a field that contains the time of order creation (order_time). Naturally, the best data type for that field is TIMESTAMP, but I want to create an index and I'm not sure that a TIMESTAMP index would be better than a numerical index. What's the best practice here?
I'm using an Oracle database.
Always use the most appropriate data-type for the data:
- If the data has date and time components and has a time-zone then use TIMESTAMP WITH TIME ZONE;
- If the data has date and time components with fractional seconds and no time-zone then use TIMESTAMP;
- If the data has date and time components with no fractional seconds and no time-zone then use DATE; and
- If your data is an instant measured, for example, as the number of milliseconds (or seconds) since 1970-01-01 00:00:00 UTC and you almost entirely use it in its numeric form (i.e. you never, or very rarely, convert it to a human readable format such as YYYY-MM-DD HH:MI:SS.FF) then you may want to store it as a number. However, if you want to format it so it is readable or compare it to dates then you should prefer the TIMESTAMP (or DATE) data type.
Never use an inappropriate data-type for your column. The index performance between the different data-types should be mostly irrelevant and the overheads of converting from an inappropriate data-type to an appropriate one are likely to be a much more significant cost.
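To make the last bullet above concrete: if you do store the instant as a number, the conversion typically happens in the application rather than in SQL. A hedged Go sketch (illustrative only, not Oracle-specific) of moving between epoch milliseconds and a readable timestamp:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Numeric form: milliseconds since 1970-01-01 00:00:00 UTC.
	ms := time.Date(2023, 5, 1, 9, 15, 0, 0, time.UTC).UnixMilli()
	fmt.Println(ms) // 1682932500000

	// Convert back to a human-readable timestamp only when needed.
	fmt.Println(time.UnixMilli(ms).UTC().Format("2006-01-02 15:04:05.000"))
}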

ClickHouse DateTime with milliseconds

ClickHouse doesn't yet support DateTime with milliseconds.
I saw two possible suggestions regarding fields like 2019-03-17T14:00:32.296Z:
Multiply by 100 and store it in UInt32/64. How do I multiply by 100 and store it as UInt32?
Store the milliseconds separately. Is there a way to remove the milliseconds, i.e. 2019-03-17T14:00:32.296Z => 2019-03-17 14:00:32?
Thanks for your help!
You should use the DateTime64 type: https://clickhouse.com/docs/en/sql-reference/data-types/datetime64/
In my mind, the main reason ClickHouse does not support milliseconds in DateTime is worse compression.
Long story short: use DateTime with a precision of seconds. If you want to store milliseconds, you can go one of two ways:
Store the milliseconds separately, so you will have a DateTime column with your date that you can use in all DateTime functions as well as in primary keys, and put the milliseconds part in a separate column of type UInt16. You have to prepare the data before storing it; how you do that depends on the language you use for preprocessing. In Go it could be done as:
time.Now().UnixNano() / 1e6 % 1e3 // millisecond component (0-999)
The other way is to store the whole value as a timestamp. This means you convert your date to a UNIX timestamp with milliseconds on your own and put it into ClickHouse as a UInt64. Again, it depends on what you use to prepare the inserts. In Go it could look like:
time.Now().UnixNano() / 1e6 // milliseconds since the UNIX epoch
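Putting both options together, a self-contained Go sketch of the preprocessing (column names and the insert step itself are left out; this only shows how the values are derived):

package main

import (
	"fmt"
	"time"
)

func main() {
	t := time.Now().UTC()

	// Option 1: DateTime column (whole seconds) plus a UInt16 column for the millisecond part.
	seconds := t.Unix()
	millisPart := uint16(t.UnixNano() / 1e6 % 1e3)

	// Option 2: a single UInt64 column holding milliseconds since the UNIX epoch.
	wholeMillis := uint64(t.UnixNano() / 1e6)

	fmt.Println(seconds, millisPart, wholeMillis)
}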

MongoDB date import via Talend from Oracle is out by 1 day

I am actually using the MongoDB API via a tLibraryLoad component, as I find it easier to build complex multi-level documents using tJavaRow and tJava components than using the MongoDB palette components.
I am reading in data from Oracle consisting of date values with a zero time component, for example:
29-JUN-08 00.00.00
The import works via Talend; however, the records in the mongo shell appear to be on the previous day. You can see the record is inserted as 28-JUN-2008.
Extract from the JSON document in MongoDB:
"status_date" : ISODate("2008-06-28T23:00:00Z")
It is almost as if MongoDB (or Talend?) sees a midnight date as the end of the previous day, rather than the start of the 29th of June 2008.
In my Talend schema I have specified the Oracle columns as Date types with a DB Type of Date also.
Any advice appreciated.
---UPDATE 1------
As only some dates are affected, it seems this is a DST adjustment in MongoDB, perhaps adjusting the display to my local timezone, since the dates that are impacted fall in the back half of the year, during daylight saving time.
Is it adjusting the date due to the location of the mongo shell?
Is it adjusting the date due to the location of the mongo server, so that all people using the mongo shell would get the same answers to date queries?
Would different people running mongo queries on dates get different results based on their location and their DST kick-in dates? I.e. you could imagine dates from the 1st of November 2015 being counted as contributing to October 31st figures (at 23:00).
I feel your pain - this is derived from MongoDB itself.
At issue is that MongoDB stores dates in UTC format by default.
https://docs.mongodb.org/manual/tutorial/model-time-data/
You can use Mongo's suggestion above, but in this case you are storing just the date and not the time. I've used two solutions:
Don't bother storing dates as DATEs. Convert all your dates to %Y%m%d format and store them as integers. You can easily compare dates using $gt and $lte just using integers; just be sure to bring in your dates using the same format and convert them back in your program later.
... or ...
Since in your case the date seems to be off by an hour, add an hour to it before you make your insert. It all depends on how large the timezone offset of your local machine is.
On linux you can see what the utc value is using:
date -u
I suppose you could change your local timezone on your machine to be UTC time and see what happens.
Personally, I've never had an issue with using the first approach. It's fast and ensures that I have what I want in there.
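If you go with the integer approach, the conversion is a simple formatting step. A hedged Go sketch (helper names are mine, purely illustrative):

package main

import (
	"fmt"
	"time"
)

// toYMD converts a date to its %Y%m%d integer form, e.g. 2008-06-29 -> 20080629.
func toYMD(t time.Time) int {
	y, m, d := t.Date()
	return y*10000 + int(m)*100 + d
}

// fromYMD converts the integer form back to a time.Time at midnight UTC.
func fromYMD(n int) time.Time {
	return time.Date(n/10000, time.Month(n/100%100), n%100, 0, 0, 0, 0, time.UTC)
}

func main() {
	d := time.Date(2008, 6, 29, 0, 0, 0, 0, time.UTC)
	n := toYMD(d)
	fmt.Println(n)          // 20080629
	fmt.Println(fromYMD(n)) // 2008-06-29 00:00:00 +0000 UTC
}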
I think it is because of the timezone of your server. Try to remove the timezone from your date object and then insert it into MongoDB.
With moment.js you can create a date without the timezone this way:
var date = moment.utc("29-JUN-08 00.00.00", 'DD-MMM-YY HH.mm.ss').format()

Best date format for timestamps in a MongoDB document?

I am developing an API using Codeigniter and MongoDB.
I am not sure which date format is the most "flexible" for the timestamp of each document in the database.
Currently I am using: yyyy-mm-dd hh:mm:ss
Is it better (for MongoDB searching and internationalization) to use another format?
My question is just this, what is the best format for a timestamp in a MongoDB document?
Over the years, I have been forced into a very strong personal commitment to always storing date/time as the 10-digit Linux/Unix timestamp, which gives the current (add: local) time as seconds since the Epoch. Just a few moments ago, the time was 1329126719. To me, this is the most flexible format possible. When it comes time to display a date/time, it's simple to convert the 10-digit timestamp to any string you care to show.
Edit: Perhaps a better choice for me would be milliseconds from the Epoch, since that seems to be increasingly favored as the art evolves.
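As an illustration of how cheap those conversions are, a small Go sketch (using the timestamp quoted above; purely illustrative):

package main

import (
	"fmt"
	"time"
)

func main() {
	t := time.Date(2012, 2, 13, 9, 51, 59, 0, time.UTC)

	// 10-digit seconds-since-epoch form, and the millisecond variant.
	secs := t.Unix()        // 1329126719
	millis := t.UnixMilli() // 1329126719000
	fmt.Println(secs, millis)

	// Converting back to any display string is straightforward.
	fmt.Println(time.Unix(secs, 0).UTC().Format("2006-01-02 15:04:05"))
}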
MongoDB has a built-in datetime type which interacts with the datetime types in your application's language (in PHP they become MongoDate instances). MongoDB datetimes are always stored in UTC, and are internally stored as milliseconds since the UNIX epoch. This means that they are compact (they are always 8 bytes, as opposed to string formats which are longer depending on how much precision you choose to store). Additionally, the MongoDB tools all "understand" datetime objects -- you can manipulate them easily from javascript in a Map-Reduce, or using the new aggregation framework.
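A hedged Go sketch of relying on that built-in type, assuming the official v1 Go driver (go.mongodb.org/mongo-driver); the database, collection, and field names are arbitrary:

package main

import (
	"context"
	"fmt"
	"time"

	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
	ctx := context.Background()
	client, err := mongo.Connect(ctx, options.Client().ApplyURI("mongodb://localhost:27017"))
	if err != nil {
		panic(err)
	}
	defer client.Disconnect(ctx)

	// time.Time values are stored as BSON datetimes: UTC, millisecond precision, 8 bytes.
	coll := client.Database("test").Collection("events")
	if _, err := coll.InsertOne(ctx, bson.M{"created_at": time.Now().UTC()}); err != nil {
		panic(err)
	}
	fmt.Println("inserted")
}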

Oracle date

How is the Oracle DATE type implemented? Is it stored as milliseconds or something like that?
An Oracle DATE stores the date and time to the second. An Oracle TIMESTAMP stores the date and time to up to 9 digits of subsecond precision, depending on the available hardware.
Both are implemented by storing the various components of the date and the time in a packed binary format. From the Oracle Concepts Guide section on dates:
Oracle uses its own internal format to store dates. Date data is stored in fixed-length fields of seven bytes each, corresponding to century, year, month, day, hour, minute, and second.
You can use the DUMP() function to see the internal representation of any particular date (or any other value for that matter), but that's probably more than you need (or want) to know.
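As a rough illustration of what DUMP() typically shows for a DATE, here is a hedged Go sketch of the commonly documented byte layout (excess-100 century and year, plus-one hour, minute and second); treat the exact offsets as an assumption rather than official documentation:

package main

import (
	"fmt"
	"time"
)

// packOracleDate mimics the commonly documented 7-byte layout of an Oracle DATE:
// century+100, year-of-century+100, month, day, hour+1, minute+1, second+1.
func packOracleDate(t time.Time) [7]byte {
	y := t.Year()
	return [7]byte{
		byte(y/100 + 100),
		byte(y%100 + 100),
		byte(t.Month()),
		byte(t.Day()),
		byte(t.Hour() + 1),
		byte(t.Minute() + 1),
		byte(t.Second() + 1),
	}
}

func main() {
	t := time.Date(2008, 6, 29, 0, 0, 0, 0, time.UTC)
	fmt.Println(packOracleDate(t)) // [120 108 6 29 1 1 1]
}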
Apparently, not in the form of milliseconds.
Which actually makes sense, since they do not have any running operations on current date/time:
http://www.ixora.com.au/notes/date_representation.htm
http://infolab.stanford.edu/~ullman/fcdb/oracle/or-time.html
http://www.akadia.com/services/ora_date_time.html
No. DATE is a timestamp value with seconds precision. You need TIMESTAMP(3) to store milliseconds.
