Oracle date calculation issue

There is a requirement like the following:
The string format is dd hh:mm:ss (days hours:minutes:seconds), where the day part is optional.
The string is to be added to the base value "1/1/4000". So if the incoming value is "00:15:00", the resulting value would be 1/1/4000 00:15:00 (add 15 minutes to 1/1/4000). If the incoming value is "2 00:15:00", the resulting value would be 1/3/4000 00:15:00 (add 2 days and 15 minutes to 1/1/4000). If the incoming value is "32 00:15:00", the resulting value would be 2/1/4000 00:15:00.
Is there a simple way to implement this requirement?

You can convert your input string to the INTERVAL DAY TO SECOND datatype using TO_DSINTERVAL and then add it to your default date. The result will be a date.
date'4000-01-01' + TO_DSINTERVAL('2 23:23:12');
But this requires your input string to be in DD HH:MI:SS format. Since the day is optional in your input, you should prepend "0 " to the string whenever the day part isn't present.
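For illustration, a minimal sketch of that normalization (assuming the input arrives in a hypothetical bind variable :input_str and that the only space in the string separates the optional day count from the time):
-- Prepend '0 ' when no day part is present, then add the interval to the base date.
-- :input_str is a hypothetical bind variable holding e.g. '00:15:00' or '2 00:15:00'.
SELECT DATE '4000-01-01'
       + TO_DSINTERVAL(
           CASE
             WHEN INSTR(:input_str, ' ') > 0 THEN :input_str  -- already 'D HH:MI:SS'
             ELSE '0 ' || :input_str                          -- time only: default to 0 days
           END
         ) AS result_date
FROM dual;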

Related

How do I convert 24 hours format to time hh:mm:tt in SSRS?

How do I convert 24-hour format into a time format in SSRS? For example, 120 becomes 1:20 AM and 1420 becomes 2:20 PM.
You can convert your integer field into an SSRS datetime and then set the display to represent "H:MM tt" like you want.
Parse into date/time with an expression like
=CDate("2/28/2017 " + LEFT("0230",2) + ":" + RIGHT("0230",2))
Note that I picked a random date and appended "0230", representing the time, to it. In your case, replace "0230" with Fields!YourFieldName.Value, where YourFieldName is the column in your dataset.
Right click on your expression, go to "Text Box Properties", "Number" pane, and set your expression format in the "time" menu.
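Putting that together for a field, a rough sketch might look like this (Fields!YourFieldName is a placeholder; the Format call, which pads three-digit values such as 120 to four digits before splitting, is an assumption about your data):
=CDate("2/28/2017 "
    & LEFT(FORMAT(Fields!YourFieldName.Value, "0000"), 2)
    & ":"
    & RIGHT(FORMAT(Fields!YourFieldName.Value, "0000"), 2))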

DATES with awk in UNIX [duplicate]

I want to take two dates as arguments from the user with
$ ./tool.sh --born-since <dateA> --born-until <dateB>
and, from a file, print the lines that fall between those two dates. For example:
933|Mahinda|Perera|male|1989-12-03|2010-03-17T13:32:10.447+0000|192.248.2.123|Firefox
1129|Carmen|Lepland|female|1984-02-18|2010-02-28T04:39:58.781+0000|81.25.252.111|Internet Explorer
4194|Hồ Chí|Do|male|1988-10-14|2010-03-17T22:46:17.657+0000|103.10.89.118|Internet Explorer
So I use an awk command like this:
awk -F'|' '{print $4}' [ file ... ]
to take the dates. How can I use awk to convert the dates from the text file into seconds?
If the date variables are in the same format, you can convert everything to numbers and use a comparison.
awk -F'|' -v from="$dateA" -v to="$dateB" '{gsub("-","",$5);
gsub("-","",from); gsub("-","",to)}
from <= $5 && $5 <= to' file
Note that the birth date is the fifth field in your file, not the fourth.
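To wire this into tool.sh, a minimal sketch (assuming the options always arrive in the order shown and the file name comes last; copying field 5 into a local variable keeps the original | separators in the printed lines):
#!/bin/sh
# Hypothetical tool.sh: ./tool.sh --born-since dateA --born-until dateB file
dateA=$2
dateB=$4
file=$5

awk -F'|' -v from="$dateA" -v to="$dateB" '
  { d = $5; gsub("-", "", d); gsub("-", "", from); gsub("-", "", to) }
  from <= d && d <= to
' "$file"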
You can either call /bin/date +"%s" --date="DATESTRING" through system(), if DATESTRING matches a format /bin/date understands, or you can use the internal mktime() function (a gawk built-in). But then you need to split your date; according to awk(1):
mktime(datespec)
Turn datespec into a time stamp of the same form as returned by systime(), and return the result. The datespec is a string of
the form YYYY MM DD HH MM SS[ DST]. The contents of the string are six or seven numbers representing respectively the full year
including century, the month from 1 to 12, the day of the month from 1 to 31, the hour of the day from 0 to 23, the minute from 0
to 59, the second from 0 to 60, and an optional daylight saving flag. The values of these numbers need not be within the ranges
specified; for example, an hour of -1 means 1 hour before midnight. The origin-zero Gregorian calendar is assumed, with year 0
preceding year 1 and year -1 preceding year 0. The time is assumed to be in the local timezone. If the daylight saving flag is
positive, the time is assumed to be daylight saving time; if zero, the time is assumed to be standard time; and if negative (the
default), mktime() attempts to determine whether daylight saving time is in effect for the specified time. If datespec does not
contain enough elements or if the resulting time is out of range, mktime() returns -1.
So you need to prepare your date fields to use the form given in the documentation.
split($5, D, "-");
DS = sprintf("%4d %2d %2d 00 00 00", D[1], D[2], D[3]);
T = mktime(DS);
should do the job.
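A complete sketch combining this with the range test might look as follows (mktime() requires gawk; fromEpoch and toEpoch are assumed to already hold epoch seconds, e.g. produced beforehand with GNU date: fromEpoch=$(date -d "$dateA" +%s)):
# Field 5 holds the birth date in YYYY-MM-DD form.
gawk -F'|' -v from_ts="$fromEpoch" -v to_ts="$toEpoch" '
{
    split($5, D, "-")                               # "1989-12-03" -> D[1], D[2], D[3]
    DS = sprintf("%4d %2d %2d 00 00 00", D[1], D[2], D[3])
    T = mktime(DS)                                  # seconds since the epoch
    if (from_ts <= T && T <= to_ts) print
}' file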

Elasticsearch fails to index long custom date format

For timestamps I am using the ISO format with zone ID included, defined as:
yyyy-MM-dd'T'HH:mm:ss.SSS'Z['z']'
This format for example matches these two timestamps:
2015-02-20T09:46:56.336Z[UTC]
2015-02-20T10:46:55.221+01:00[Europe/Berlin]
For indexing data into elasticsearch I also defined this date format in a mapping like this (using elastic4s DSL):
create index indexName mappings {
  "/exampleType" as (
    "exampleField" typed DateType
  ) dynamicDateFormats "yyyy-MM-dd'T'HH:mm:ss.SSS'Z['z']'"
}
Basically this mapping works as expected, but I experience problems when the formatted date string gets too long due to the zone ID. E.g. the example above,
2015-02-20T09:46:56.336Z[UTC]
having 30 chars works fine, whereas
2015-02-20T10:46:55.221+01:00[Europe/Berlin]
having 44 chars fails to index with the following error:
...
Caused by: java.io.IOException: Cannot read numeric data larger than 32 chars
at org.elasticsearch.index.analysis.NumericTokenizer.incrementToken(NumericTokenizer.java:78) ~[elasticsearch-1.4.2.jar:na]
at org.apache.lucene.index.DefaultIndexingChain$PerField.invert(DefaultIndexingChain.java:618) ~[lucene-core-4.10.2.jar:4.10.2 1634293 - mike - 2014-10-26 05:51:56]
at org.apache.lucene.index.DefaultIndexingChain.processField(DefaultIndexingChain.java:359) ~[lucene-core-4.10.2.jar:4.10.2 1634293 - mike - 2014-10-26 05:51:56]
...
My question is whether there is a way to get around this problem, e.g. by means of configuration, or whether I am forced to change my date format to ensure that formatted dates do not exceed 32 characters.

Ruby local_to_utc returns invalid year

I have the following date string ('US/Eastern'), which I need to convert to UTC:
date_src = '2014-07-07T23:10:00+0'
First I convert it to a "valid" format so I can operate on it in later processes. I use the following to get an ISO version of the date:
date = DateTime.parse(date_src).iso8601
At this point date is a nice '2014-07-07T23:10:00+00:00'. The last step on my process is to translate this date to UTC. I'm using the following:
TZInfo::Timezone.get('US/Eastern').local_to_utc(date)
The problem is that this gives me 20014 as output, instead of the UTC version of the original date. If I try:
TZInfo::Timezone.get('UTC').local_to_utc(date)
I get 2014, which is the correct year but still unexpected output.
Any ideas about what I'm doing wrong, and what I could use to solve the problem?
local_to_utc actually expects a Time or a DateTime instance:
TZInfo::Timezone.get('US/Eastern').local_to_utc(DateTime.parse(date_src))
# => #<DateTime: 2014-07-08T03:10:00+00:00 ((2456847j,11400s,0n),+0s,2299161j)>
From the documentation, you can have a hint on what actually happened:
All methods in TZInfo that operate on a time can be used with either Time or DateTime instances or with Integer timestamps (i.e. as returned by Time#to_i). The type of the values returned will match the type passed in.
What actually happens is that local_to_utc calls to_i on the input parameter, which on a String returns the integer parsed from the beginning of the string (2014 in your case, since date is the string 2014-07-07T23:10:00+00:00), and adds the time difference to it: 18000 for "US/Eastern" (a 5-hour difference) and 0 for UTC:
date.to_i
# => 2014
TZInfo::Timezone.get('US/Eastern').local_to_utc(date) - date.to_i
# => 18000
TZInfo::Timezone.get('UTC').local_to_utc(date) - date.to_i
# => 0
So the bottom line is: somewhat serendipitously, you ran into this weird behavior, which stems from the combination of some surprising quirks of the APIs you used.
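For reference, a minimal sketch of the full conversion (it simply restates the fix above; local_to_utc treats the wall-clock time as US/Eastern local time and ignores the offset carried by the parsed value):
require 'tzinfo'
require 'date'

date_src = '2014-07-07T23:10:00+0'

# Parse to a DateTime first, then convert the local wall-clock time to UTC.
local = DateTime.parse(date_src)
utc   = TZInfo::Timezone.get('US/Eastern').local_to_utc(local)
utc.iso8601
# => "2014-07-08T03:10:00+00:00"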

Extract date and time from text using SAS

I have something like this, which is in .txt format.
'random title'
random things , 00:00 AM, 1 January
2005, 555 words, (English)
'random long title'
random things , 00:00 AM, 1 January 2005, 111 words,
(English)
The time and date need to be extracted in the format yyyymmdd and hhmm.
I tried to use comma as the delimiter.
DATA News;
INFILE 'C:xxxx/xxxx/xxxx' DLM=',';
INPUT Title $75. Time $10. Date $20. Words $15. Lang $10.;
RUN;
PROC PRINT DATA=News;
TITLE 'Time and Date';
VAR Time Date;
RUN;
But it failed; the entries span multiple lines and are not well formatted.
Are there any solutions?
If your dates are always formatted like so:
00:00 AM, 1 January 2005
Then you can use a Perl regular expression to find them.
data test;
    input;                                   /* null INPUT: loads the record into _infile_ */
    _prx = prxparse('/\d\d:\d\d (?:AM|PM), \d{1,2} (?:January|February|March) \d{4}/');
    start = 1;
    stop = length(_infile_);
    call prxnext(_prx, start, stop, _infile_, position, length);
    do while (position > 0);
        found = substr(_infile_, position, length);
        put found= position= length=;
        call prxnext(_prx, start, stop, _infile_, position, length);
    end;
datalines;
'random title'
random things , 00:00 AM, 1 January
2005, 555 words, (English)
'random long title'
random things , 00:00 AM, 1 January 2005, 111 words,
(English)
;;;;
run;
Then use the FOUND value as you would normally with a SAS character variable to obtain the date and time, or datetime, information. Obviously, extend my short list of months to cover all twelve.
That finds the second example but not the first (which is not reasonably findable using datalines in an example). If you are reading a text file rather than datalines, though, you could manipulate the record format to strip the line feeds and carriage returns and thus see both as a single record (and thus match); look into RECFM=N for more details.
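As a rough sketch of that last parsing step (assuming FOUND always looks like "00:00 AM, 1 January 2005"; the dataset and variable names here are purely illustrative):
data parsed;
    set test;                                  /* dataset holding FOUND from the step above */
    length time_str date_str $40 mon $3;

    time_str = strip(scan(found, 1, ','));     /* "00:00 AM"       */
    date_str = strip(scan(found, 2, ','));     /* "1 January 2005" */

    /* time -> hhmm */
    hour = input(scan(time_str, 1, ': '), 8.);
    min  = input(scan(time_str, 2, ': '), 8.);
    ampm = upcase(scan(time_str, 3, ': '));
    if ampm = 'PM' and hour < 12 then hour = hour + 12;
    else if ampm = 'AM' and hour = 12 then hour = 0;
    hhmm = put(hour, z2.) || put(min, z2.);

    /* date -> yyyymmdd, via a DATE9.-style string such as 01JAN2005 */
    mon      = upcase(substr(scan(date_str, 2, ' '), 1, 3));
    date_val = input(cats(put(input(scan(date_str, 1, ' '), 8.), z2.), mon,
                          scan(date_str, 3, ' ')), date9.);
    yyyymmdd = put(date_val, yymmddn8.);
run;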
