Issue with a mapping variable in Informatica - etl

We have an incremental Informatica job in which around 30 sessions run in a workflow. We use a mapping variable in all of them because we only want to pick up records created since the last run of the workflow; the variable holds the date of the last run. At run time the variable resolves to the correct last run date in every session except one, where it picks up a wrong date: CREATE_DATE>TO_DATE ('01/01/1753 00:00:00', 'MM-DD-YYYY HH24:MI:SS').
Can someone please give me some pointers to find out why that one session is getting the wrong date (01/01/1753)?
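For reference, this is the shape of the filter the other sessions generate, shown before the variable is expanded (the variable name $$LAST_RUN_DATE and the format mask are assumptions for illustration only):
-- Sketch of the expected incremental filter; $$LAST_RUN_DATE is replaced with
-- the persisted last-run date before the SQL is issued:
CREATE_DATE > TO_DATE('$$LAST_RUN_DATE', 'MM/DD/YYYY HH24:MI:SS')
-- The failing session instead expands the variable to 01/01/1753 00:00:00, which
-- would be consistent with it falling back to the variable's initial/default
-- value rather than the persisted last-run date.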

Related

Oracle Date to_char Returns Different Results

I have a database on my local development machine and there is a database on our test server. Basically, the tables on my dev machine were copied over from the test machine.
However, I have found a difference in how the same date is treated by the to_char function. On my development machine if I run the following query:
select test_date, to_char(test_date, 'YYYY-MM-DD')
from test.table
where id = 'C0007784'
I get the following results:
31-DEC-99 1999-12-31
On the test server running the same query against the same schema and data I get the following:
31-DEC-99 1899-12-31
Could this difference in behaviour of to_char be due to a setting being different in the two Oracle instances?
If I run SELECT value FROM v$nls_parameters WHERE parameter ='NLS_DATE_FORMAT'; I get DD-MON-RR for both instances.
So you exported the contents of the table to a CSV file using the DD-MON-YY format. YY obviously causes ambiguity. My guess is that when you imported the file, 99 was interpreted as 1999 instead of 1899. I don't know the exact mechanism the database uses to guess the full year, but in any case Oracle strongly recommends YYYY in date formats:
Note: Oracle recommends that you use the 4-digit year element (YYYY) instead of the shorter year elements for these reasons: the 4-digit year element eliminates ambiguity, and the shorter year elements may affect query optimization because the year is not known at query compile time and can only be determined at run time.
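A quick way to see the ambiguity for yourself (just a sketch run against DUAL; the comments assume the current date falls in the 2000s):
select to_char(to_date('31-DEC-99', 'DD-MON-YY'), 'YYYY-MM-DD') as yy_year,       -- 2099-12-31: YY assumes the current century
       to_char(to_date('31-DEC-99', 'DD-MON-RR'), 'YYYY-MM-DD') as rr_year,       -- 1999-12-31: RR maps 50-99 to the previous century here
       to_char(to_date('31-DEC-1999', 'DD-MON-YYYY'), 'YYYY-MM-DD') as yyyy_year  -- 1999-12-31: unambiguous
from dual;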

Why in the Oracle jobs log is the last start date sometimes later than the next run date

Why is the last start date sometimes later than the next run date in the Oracle jobs log?
When you select values from the ALL_SCHEDULER_JOBS view:
select * from ALL_SCHEDULER_JOBS
Sometimes the last start date is later than the next run date. For example, the last start date is:
2015/08/11 16:20:00.155707 +08:00
and next run date is:
2015/08/11 16:20:00.000000 +08:00
Why is this the case?
It means that the job is currently running; the next run date will be updated once the current run finishes.
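As a quick sanity check (a sketch, not part of the original answer), you can cross-reference ALL_SCHEDULER_RUNNING_JOBS, which lists the jobs that are executing right now:
select job_name, last_start_date, next_run_date
from   all_scheduler_jobs
where  last_start_date >= next_run_date;   -- entries that look "out of order"
select job_name, session_id, elapsed_time
from   all_scheduler_running_jobs;         -- jobs actually running at this moment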

Oracle change dbtimezone

My database was configured with dbtimezone = +02:00.
When my application sends a date which has a timezone, does Oracle automatically translate the date to its dbtimezone and store it in the column?
When my application asks for a field date, does Oracle automatically translate it to the application timezone?
In order to be consistent with business rules, I wanted to change this dbtimezone to UTC. So I ran the alter database set time_zone='UTC' command, restarted the Oracle server, and now the select dbtimezone from dual; command returns "UTC".
But none of the DATE fields in the database have changed (no -2 hour shift from GMT+2 to UTC), and when I query sysdate it still returns the GMT+2 date. I tried changing my SQL Developer time zone configuration to UTC, but that didn't change anything. Is there an Oracle session parameter that converts my DB data to GMT+2 before displaying it?
Finally, does anyone have a good practice for making this kind of change (changing the database time zone and converting the existing dates to the new time zone)?
If all you're doing is changing the database time zone setting, then you are only going to notice any change in output if your data is stored with the TIMESTAMP WITH LOCAL TIME ZONE type.
I don't recommend that, though. It would be much better if your data were simply stored in a regular TIMESTAMP column and already set to UTC.
You should read the documentation about all of the different date and time datatypes, so you understand how each of these types works and differs from the other.
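To see which setting affects what, here is a small sketch (runnable from any client session; the table name is made up for illustration):
select dbtimezone, sessiontimezone from dual;              -- database time zone vs. your client session time zone
-- SYSDATE and SYSTIMESTAMP follow the operating-system clock of the database host,
-- not DBTIMEZONE, which is why sysdate still shows GMT+2 after the change.
select sysdate, systimestamp, current_timestamp from dual;
-- Only TIMESTAMP WITH LOCAL TIME ZONE is normalized to the database time zone on
-- storage and converted to the session time zone on retrieval.
create table tz_demo (
  plain_ts TIMESTAMP,
  local_ts TIMESTAMP WITH LOCAL TIME ZONE
);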

Unable to insert date and time when using date datatype

I am hitting a bit of a problem when using the DATE datatype. When trying to save a row to the table, it throws an ORA-01830 error ("date format picture ends before converting entire input string"). When I do the insert, I use the to_date function with the format of "dd-mon-yyyy hh24:mi:ss". Of course, when I remove the time element, everything is perfect.
Checking sysdate, I noticed that the time element wasn't being displayed, and I used alter session set nls_date_format to set the date and time format I want to save to the table, which worked!
I then used alter system set nls_date_format ="dd-mon-yyyy hh24:mi:ss" scope=spfile; This showed that it was altered, and I can see the setting in the Enterprise Manager console. In sqlplus I shut down the database and restarted it with startup mount; alter database open; but selecting sysdate still shows the date as dd-mon-yy, and still no time. Checking Enterprise Manager and looking up nls_date_format, the setting is still shown as "dd-mon-yyyy hh24:mi:ss".
So, my question is this: what am I doing wrong? Why can't I save a date and time using DATE in Oracle 11g?
Thanks
Dates are stored with "second" granularity in Oracle.
Display formats are dependent on the system and session. In your case, since you are connecting with sqlplus, you are using a default session format from the client that does not include the time. You need to execute an:
ALTER SESSION SET nls_date_format = 'dd-mon-yyyy hh24:mi:ss';
when you start up your sqlplus client in order to change the default display. There is a client-side file (glogin.sql?) that sqlplus runs on startup; you can place this kind of command in there if you want it executed each time you start that client. I'm pretty sure the sqlplus client sends an "alter session set nls_date..." on start up.
In general, when outputting dates, I think it is better to be explicit about the format by doing a TO_CHAR(myDateColumn, 'dd-mon-yyyy hh24:mi:ss'). If you are reading dates programmatically, you don't need to worry about it, since you are dealing with the internal format, not a display format.
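As a sketch of that approach (the table and column names here are invented for illustration), using an explicit mask on both the insert and the select removes any dependence on NLS_DATE_FORMAT:
create table demo_dates (created_at DATE);
insert into demo_dates (created_at)
values (to_date('15-jun-2010 14:30:00', 'dd-mon-yyyy hh24:mi:ss'));
select to_char(created_at, 'dd-mon-yyyy hh24:mi:ss') as created_at
from   demo_dates;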
I've seen this error when the input data did not match the date format used. Check your data would be my suggestion.

Insert a datetime value with GetDate() function to a SQL server (2005) table?

I am working on (fixing bugs in) an application that was developed in VS 2005 C#. The application saves data to a SQL Server 2005 database. One of the INSERT SQL statements tries to insert a timestamp value into a field, using the GetDate() T-SQL function as the datetime value.
Insert into table1 (field1, ... fieldDt) values ('value1', ... GetDate());
The reason for using the GetDate() function is that the SQL Server may be at a remote site, and the date and time may be in a different time zone; GetDate() will always get the date from the server. The function can be verified in SQL Management Studio, and this is what I get:
SELECT GetDate(), LEN(GetDate());
-- 2010-06-10 14:04:48.293 19
One thing I realize is that the length does not extend to the milliseconds, i.e., 19 is actually the length of '2010-06-10 14:04:48'. Anyway, the issue I have right now is that after the insert, fieldDt actually holds a datetime value only up to the minute, for example '2010-06-10 14:04:00'. I am not sure why. I don't have permission to alter the table or add a trigger to update the field.
My question is: how can I use an INSERT T-SQL statement to add a new row with a datetime value (the SQL Server's local date and time) with precision up to milliseconds?
Check your table. My guess is that the fieldDt column has a data type of SMALLDATETIME, which stores date and time, but with a precision to the nearest minute. If my guess is correct, you will not be able to store seconds or milliseconds unless you change the data type of the column.
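A quick way to confirm that guess (a sketch; table1 and fieldDt are the names from the question) is to look the column up in the catalog views:
select c.name as column_name, t.name as data_type
from   sys.columns c
join   sys.types   t on t.user_type_id = c.user_type_id
where  c.object_id = object_id('dbo.table1')
  and  c.name = 'fieldDt';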
I would guess that you are not storing the GetDate() value in a DATETIME field. If you store the value in a DATETIME field you will get the full precision the type allows (rounded to roughly 3 milliseconds). Also note that the 19 is the length of the string that GetDate() is implicitly converted to by LEN(), not a property of how the value is stored; a DATETIME occupies 8 bytes internally.
Try to create a simple table with a Datetime field like this
CREATE TABLE [dbo].[DateTable](
[DateField] [datetime] NOT NULL
)
And add a date with
insert into datetable (datefield) values(getdate())
When you execute a select you will get back a value including milliseconds. The following query
select * from datetable
returns
2010-06-11 00:38:46.660
Maybe this would work instead of getdate():
SYSDATETIME()
(Note, though, that SYSDATETIME() was introduced in SQL Server 2008, so it is not available on 2005.) Look here to see if you can find what you need:
http://msdn.microsoft.com/en-us/library/ms188383.aspx
As you're on SQL 2005, don't forget the getutcdate() function to ensure that, regardless of where your servers are actually located, you have a constant time reference.
Imagine you have the server in the UK in winter (i.e. GMT+0), and you save a record at 10:30am. You then cut over to a SQL server hosted in California (GMT-8) and, 8 hours later, save another record.
Using getdate(), both saves record the same local time, "10:30:00". Using getutcdate(), the first save records "10:30:00" and the second records "18:30:00".
Not really answering the question, but important in your circumstances.
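A minimal illustration of the difference (just a sketch; the example values assume a server running at GMT-8):
select getdate()    as server_local_time,   -- e.g. 2010-06-10 10:30:00.000
       getutcdate() as utc_time;            -- e.g. 2010-06-10 18:30:00.000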
You can use it like this inside a procedure; if there is no procedure, just use getdate() directly.
insert into [dbo].[Tbl_User] (UserId, Uvendoremail, UAddress, Ddob, DMobile,
DEmail, DPassword, DAddress, CreatedDate) values (#userid, #vendoremail, #address, #dob, #mobile, #email, #dpassword, #daddress, getdate())
