I work in a large-scale IT support environment. Twice now we have seen the invalid date 02/31/2157 inserted into an Oracle DATE column. So far I have not been able to reproduce the problem, but it appears to happen occasionally when a user attempts to save '00/00/0000' into the column. I believe the value originates from a PowerBuilder DataWindow update.
The application uses myriad libraries for all sorts of technologies, so this question may be a bit vague, but...
Has anyone seen the date 02/31/2157 in some established library that Oracle could be defaulting to when some other invalid date is entered? Perhaps an end-of-time concept analogous to the beginning-of-time date of 1/1/1970?
From http://download.oracle.com/docs/cd/B19306_01/server.102/b14220/datatype.htm#i1847:
Oracle uses its own internal format to store dates. Date data is stored in fixed-length fields of seven bytes each, corresponding to century, year, month, day, hour, minute, and second.
2157-256 = 1901, which seems suspiciously close to a possible epoch of 1/1/1900 (or 12/13/1901 - which is the rollover date for the Year 2038 Problem)
I'd guess that it is storing either 0x00 or 0xFF in the date bytes, then getting confused when it decodes it. (How does it deal with month 255?)
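You can inspect those seven bytes yourself with Oracle's built-in DUMP function. A minimal sketch (date_probe is a hypothetical scratch table): century and year are stored in excess-100 notation, so 2157 shows up as bytes 121 and 157.

create table date_probe (d date);
insert into date_probe values (to_date('2157-01-01', 'yyyy-mm-dd'));

-- The seven bytes are: century+100, year-of-century+100, month, day,
-- hour+1, minute+1, second+1
select dump(d) from date_probe;
-- Typ=12 Len=7: 121,157,1,1,1,1,1  => century 21, year 57: 2157-01-01 00:00:00

A zeroed or 0xFF byte in any of those positions would decode to a similarly nonsensical component, which fits the theory.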
It turns out this was a PowerBuilder issue. The field was created in the DataWindow as required, but was programmatically changed to be non-required before saving. So a null value was being saved to a non-null database column, and PowerBuilder inserted a dummy date instead of just throwing an error.
I remember getting a weird value when saving an invalid date. IIRC it was in PB 9 and we had to get an EBF for it. It was a problem with Date Editmasks and entering an invalid date that wasn't rejected. Sorry I don't have more details.
I have to get the max payment date on an invoice, and I am having trouble with the date format. I do not need the max in this formula, as I am using the format in a reporting tool that pulls the max from what it finds for me.
Using "to_char({datefield},'mm/dd/yyyy')" displays the date the way we would like, BUT when you use the summary function MAX it does not pull the correct date, because it is comparing strings rather than dates (it will think 12/3/21 is larger than 3/2/22).
Another thing I have tried is trunc - "trunc({datefield})" - which gives us the correct max date but changes the formatting. For example, if the date prior to the formula being applied is "8/12/21 12:00:00:000", the trunc formula will display it as 12-08-21, which is horribly wrong.
Long story short: I need a way to change a date/time to a date with the format 'mm/dd/yyyy' WITHOUT converting it to a string with something like to_char. Thank you!!!!
A DATE is a binary data type consisting of 7 bytes representing: century, year-of-century, month, day, hour, minute and second. It ALWAYS has all of those components and it is NEVER stored with any (human-readable) format.
What you are seeing when a date is displayed is the client application you use to access the database deciding to be helpful and rendering the binary DATE provided by the database in a human-readable format.
If you want to change how the DATE is displayed then you either need to:
Change the settings on the client application that control how it formats dates when it displays them to you (see the sketch after this list); or
Change the data type so that it is no longer a DATE (which does not have a format) but a data type whose values can be formatted (such as a string). You can do this using TO_CHAR.
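As a sketch of the first option, assuming the client is SQL*Plus or SQL Developer (both honor this session parameter), the display format is controlled by NLS_DATE_FORMAT. The table name is a placeholder, and {datefield} is the reporting tool's notation from the question:

alter session set nls_date_format = 'mm/dd/yyyy';
-- From now on this session displays DATE values as 08/12/2021 etc.,
-- and MAX still compares real dates, not strings
select max({datefield}) from your_table;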
If you want to find the maximum then do it BEFORE applying the formatting:
SELECT TO_CHAR(MAX({datefield}),'mm/dd/yyyy')
FROM your_table;
I am trying to run a query, but in the part of the code that calculates the difference between two timestamps, the error below is shown.
00000 - "specified field not found in datetime or interval"
*Cause: The specified field was not found in the datetime or interval.
*Action: Make sure that the specified field is in the datetime or interval
Part of code:
SYSTIMESTAMP - DT_PROP
The type of DT_PROP is TIMESTAMP(6).
I would be thankful if someone could send me a suggestion to solve this.
Here is what's happening.
SYSTIMESTAMP is a timestamp WITH TIME ZONE. DT_PROP is a timestamp (without time zone). You are taking the difference between two values of different data types.
Oracle will not throw an error; it will make an implicit conversion. Converting from timestamp with time zone to a simple timestamp will lose information; so, Oracle won't do that. Instead, Oracle does the opposite: it up-casts the timestamp (DT_PROP) to a timestamp with time zone. For this, it must make an assumption: it assumes the time zone is the same as your system time zone.
And then it runs into trouble if your system time zone is DST-aware (observes daylight saving time) and the pure timestamp (without time zone) is invalid in that DST-aware time zone.
For example, in Los Angeles (U.S.A), daylight saving time in 2020 began on March 8 - the clock was moved forward one hour at 2 A.M. - meaning that, in one-second intervals, the time right after 01:59:59 was 03:00:00. A time-of-day of 02:30:00, for example, on the date 2020-03-08, simply did not exist.
This is one of the cases when Oracle will throw the exact error you reported, and it is entirely possible that this is why it was thrown from your code.
Solutions? There are some; but you have a DATA PROBLEM. If the timestamps stored in DT_PROP are supposed to represent times in your DST-aware time zone, and you have an invalid value stored in the column, what's up with that? It's easy to write code around it, but shouldn't your business user be alerted to this first? I believe they should. Then see what they tell you - HOW they want this to be handled; otherwise you would be making a business decision for them, not just a "programming" decision.
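If, once the data question is settled, you still need the subtraction itself to work, here is one possible sketch (your_table is a placeholder): subtract plain timestamps, so no implicit time-zone conversion of DT_PROP ever happens.

-- LOCALTIMESTAMP is a plain TIMESTAMP (no time zone), so both operands match
select localtimestamp - dt_prop as elapsed
from your_table;

-- Equivalent alternative: strip the zone from SYSTIMESTAMP explicitly
select cast(systimestamp as timestamp) - dt_prop as elapsed
from your_table;

Either way the result is an INTERVAL DAY TO SECOND, and the DST-gap conversion never takes place.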
We have an ApEx data entry site that is translated into both Mexican Spanish and Canadian French. One of the critical columns on most data tables is the date associated with the data. As such, there are date picker fields for each time this value is needed.
The translations automatically display the month code based on the current language (a date picked in January on the Spanish site will display 'Ene' for enero). Before the date is recorded to the DB, the application format mask 'DD-MON-RR' is applied; this understands the current language and records the value on the DB in English.
The issue is that the month of December (diciembre) is showing the abbreviation of 'Dec' rather than 'Dic'. As a result, error ORA-01843 (not a valid month) is generated and the data is not saved. However, if the entry is manually changed to ##-Dic-##, the value is recorded correctly without error.
This makes it appear that the automatically-generated month abbreviation for this language is incorrect. Is this a known error with a solution?
I don't know anything about globalization; my Apex applications are in Croatian only. However, as a workaround, perhaps you could switch to another date format mask, such as DD.MM.YYYY; that way you wouldn't depend on language differences.
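To see why this sidesteps the problem, compare a language-sensitive mask with a purely numeric one (a sketch; the literals are invented):

-- MON is language-sensitive: this only parses when the date language is Spanish
select to_date('24-DIC-21', 'DD-MON-RR', 'nls_date_language = SPANISH') from dual;

-- A numeric mask never consults NLS_DATE_LANGUAGE at all
select to_date('24.12.2021', 'DD.MM.YYYY') from dual;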
I have a staging table which contains dates as strings with format 'mm/dd/yy'. I have an Oracle 11g procedure to convert the string to date format before loading into the main table. I'm using to_date('03/20/34','mm/dd/rr') to convert into date format, which is giving the wrong output of 03/20/2034, whereas the correct date is 03/20/1934. Please help me get the correct output, given that my table contains dates from both centuries.
"I'm using to_date('03/20/34','mm/dd/rr') to convert into date format which is giving wrong output as 03/20/2034 whereas the correct date is 03/20/1934. "
RR was a hack Oracle introduced in the last millennium as part of the fight to resolve the Y2K bug. The standard date mask YY defaults the century to the current century. But in 1999 it was more likely that 01/01/00 meant 01/01/2000 rather than 01/01/1900. So the RR hack derives the century for dates using fixed windows pivoting on 00: values 00-49 are given century 20, and values 50-99 are given century 19. Clearly this guess is sometimes wrong, but the data corruption introduced was of a lower level than defaulting all dates to century 19.
The key point is, the windows are fixed. It was intended to be a temporary solution, because there wasn't time to switch all the legacy systems to use four-digit years before 2000 arrived. But the vision was always that all systems would be fixed in the long term, even if only through retirement or replacement. Certainly nobody expected that new systems would be built supporting two-digit years.
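You can watch the pivot in action (a sketch, assuming the current year falls in 2000-2049, where the windows behave as described):

select to_char(to_date('01/01/49', 'mm/dd/rr'), 'yyyy') from dual;  -- 2049
select to_char(to_date('01/01/50', 'mm/dd/rr'), 'yyyy') from dual;  -- 1950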
It is now 2017 and there is no excuse for systems to still be using two-digit years. Back in the old days storage was expensive, and shaving two digits from a date was a valuable space saving. Now it is just sloppiness.
Which obviously doesn't help you solve your problem. The short answer is there is no way to change the pivot used by RR. The best solution would be to enforce stricter validation on the data input aspect of your system, and insist on four-digit years. Whether that's feasible depends on your office politics. The other solution is to write your own conversion function:
create or replace function my_to_date (p_str varchar2) return date as
begin
  -- p_str is expected as 'mm/dd/yy'; positions 7-8 hold the two-digit year
  if to_number(substr(p_str, 7)) <= 35 then
    return to_date(substr(p_str, 1, 6) || '19' || substr(p_str, 7), 'mm/dd/yyyy');
  else
    return to_date(substr(p_str, 1, 6) || '20' || substr(p_str, 7), 'mm/dd/yyyy');
  end if;
end my_to_date;
/
Obviously you'll need to define the actual rules for deciding whether to use 19 or 20.
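A hypothetical call, using the value from the question:

select my_to_date('03/20/34') from dual;  -- 20-MAR-1934, because 34 <= 35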
I also encountered an issue like this, when inserting date values from the late 90s. The format in the script I was given read DD-MON-YY, so the database read that as 20YY, instead of 19YY.
My very inelegant solution was to open the raw data file and simply add a "19" before the YY year values.
I'm using Mongoid to store DateTime values, but now I'm confused about the actual date.
In mongodb , the date is stored as:
{"2013-01-14T12:50:00.000Z"}
But when i print that value, it says:
2013-01-14T19:50:00+07:00
I don't really understand whether those two values represent the same date, and which one is "right" in my current timezone.
Thank you for your help.
The date is stored in GMT; when "printed", it is displayed in your local timezone (GMT+7?).
The default Ruby date object should be able to handle offsets in time:
http://ruby-doc.org/stdlib-1.9.3/libdoc/date/rdoc/Date.html
Some way down the page, it even talks about how to manipulate it, I believe:
An optional argument the offset indicates the difference between the local time and UTC.
I do believe that Mongoid is already converting the time for you, as can be seen from the time value within the ISO date being 7 hours ahead:
2013-01-14T19:50:00+07:00
If you were to print just the date and/or time instead of the full output with the offset included, I have no doubt you would get the real date.
I believe Mongoid most likely prints the offset because that offset IS there (the time is offset by 7 hours from UTC); it is just not applied further.