I am trying to use the Postgres import functionality, but it gives me an error on the date datatype.
I have dates in the format 17-MAY-90 12.00.00.000000000 AM in my Oracle db, but I need to import the data into my Postgres db. I tried timestamp with and without time zone and it still gives me an error message.
What datatype can I use?
PS: Error I am getting
Thanks.
I was able to fix the problem in SQL Developer by going to
Tools > Preferences > Database > Advanced > NLS
and changing the date format to the format I wanted. That way, when I export the file, it comes out in the format I want and can be imported into my Postgres db.
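For reference, the same thing can be done per session instead of through the preferences dialog. This is only a sketch; the exact mask is an assumption based on what a Postgres timestamp column accepts by default:

-- Sketch: an ISO-style mask so exported values look like 1990-05-17 00:00:00.000000,
-- which a Postgres "timestamp" column accepts directly
ALTER SESSION SET NLS_TIMESTAMP_FORMAT = 'YYYY-MM-DD HH24:MI:SS.FF6';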
I'm trying to import some CSV files into an Oracle database. The import wizard can't recognize a timestamp like this: "2022-12-14 20:48:45.596206". I found some answers that only cover milliseconds, but here it contains microseconds. What should I type in the Format field of the Data Import Wizard so it recognizes this kind of timestamp correctly?
Thanks
Never mind. I had missed the "." between the seconds and the fractional seconds. It should be YYYY-MM-DD HH24:MI:SS.FF6
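For anyone hitting the same thing, the mask can be sanity-checked directly in SQL; the literal below is copied from the question:

-- FF6 keeps all six fractional digits (microseconds)
SELECT TO_TIMESTAMP('2022-12-14 20:48:45.596206', 'YYYY-MM-DD HH24:MI:SS.FF6')
FROM   dual;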
I'm trying to import a huge table from Oracle 10g to HDFS (GCS, since I'm using Sqoop with Google Cloud Dataproc) as Avro. Everything works fine when the table doesn't have any date columns, but when it does, some dates are imported very wrong.
For example: Oracle data -> 30/07/76 and HDFS data -> 14976-07-30 20:02:00.0
For example: Oracle data -> 26/03/84 and HDFS data -> 10384-03-26 20:32:34.0
I'm already mapping the date fields as String to bring them over like that. I originally imported them the default Sqoop way, which brings the date fields over as epoch ints, but that conversion was incorrect too.
For example: Oracle data -> 01/01/01 and HDFS data -> -62135769600000, when it should be 978314400000
Please, I hope someone can help me fix this issue.
Thanks
Additional information:
Sqoop command that I'm running:
import -Dmapreduce.job.user.classpath.first=true -Dorg.apache.sqoop.splitter.allow_text_splitter=true --connect=$JDBC_STR --username=$USER --password=$PASS --target-dir=gs://sqoop-dev-out-files/new/$TABLE --num-mappers=10 --fields-terminated-by="\t" --lines-terminated-by="\n" --null-string='null' --null-non-string='null' --table=$SCHEMA.$TABLE --as-avrodatafile --map-column-java="DATACADASTRO=String,DATAINICIAL=String,DATAFINAL=String"
Sqoop version: 1.4.7
JDBC version: 6
I think your date in Oracle really is 01/01/0001; try to_char(COLUMN, 'DD/MM/YYYY').
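If the dates really need to arrive as plain strings, one option is to do the formatting in Oracle before Sqoop ever sees a DATE, by switching from --table to a free-form --query (with $CONDITIONS and --split-by). This is only a sketch: the date column names are copied from the command above, while the key column and table name are placeholders:

-- Could be passed to Sqoop via --query '... WHERE $CONDITIONS'
SELECT t.ID,                                                   -- placeholder key column
       TO_CHAR(t.DATACADASTRO, 'YYYY-MM-DD HH24:MI:SS') AS DATACADASTRO,
       TO_CHAR(t.DATAINICIAL,  'YYYY-MM-DD HH24:MI:SS') AS DATAINICIAL,
       TO_CHAR(t.DATAFINAL,    'YYYY-MM-DD HH24:MI:SS') AS DATAFINAL
FROM   MY_SCHEMA.MY_TABLE t                                    -- placeholder table name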
My issue is that my date really is 01/01/0001, because of user mistyping, and I can't update the column in the origin Oracle database.
My issue is that converting to Unix time should give -62135596800000, but instead it comes out as -62135769600000 (30/12/0000).
At first I thought it was a timezone issue, but the gap is 172,800,000 ms, exactly two days, which looks more like the offset between the Julian and proleptic Gregorian calendars around year 1 than a timezone shift.
I am using SSIS for ETL. Source and destination databases are Oracle.
When I run the job through SQL Agent, it prompts me with the following error:
This table contains 5 date columns, which are creating this issue.
I have tried every possible solution, but nothing worked. It does not seem to be a data issue, since I reran the job for those selective dates and it worked perfectly; it only failed on the full load.
The bottom error message is:
Data Flow: Task:Error: SQLSTATE 22007, Message: [Microsoft][ODBC Oracle Wire Protocol driver]Invalid datetime format. Error in parameter 17.
You have an invalid datetime format. You need to fix it by correcting either the data or the format model you are using, but since you haven't included any code, we can't help further.
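Since no package code was posted, here is only a generic sketch of how suspicious source rows could be located; every name is made up, and the year range is something you would adjust to whatever your data should contain:

-- Look for DATE values whose year is implausible; out-of-range dates are a
-- common cause of "Invalid datetime format" errors during parameter binding
SELECT *
FROM   source_table
WHERE  EXTRACT(YEAR FROM date_col_1) NOT BETWEEN 1900 AND 2100
   OR  EXTRACT(YEAR FROM date_col_2) NOT BETWEEN 1900 AND 2100;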
I had a similar issue; the difference is that my source was a SQL Server database and the destination was an Oracle database.
I converted the source DateTime columns to String first, and then they loaded into the destination date columns successfully.
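In case it helps someone going the same SQL Server -> Oracle direction, this is roughly what that conversion looks like; table and column names are placeholders:

-- SQL Server side: emit the datetime as text in a fixed, unambiguous format
SELECT CONVERT(VARCHAR(23), order_date, 121) AS order_date_txt  -- yyyy-mm-dd hh:mi:ss.mmm
FROM   dbo.orders;

-- Oracle side: parse the text back into a TIMESTAMP on insert
INSERT INTO orders (order_date)
VALUES (TO_TIMESTAMP(:order_date_txt, 'YYYY-MM-DD HH24:MI:SS.FF3'));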
I am trying to import data from a CSV file into an Oracle GroupSpace table using the SQL Developer tool. I am getting errors for the Date column. My Date column has dates in the format below.
5/6/2016
4/11/2018
11/6/2017...
I get an error that the date column has invalid or null date formats.
Any pointers on what date format to use when importing the Date column would be greatly appreciated.
Thank you so much!
JH
If you aren't sure that dates are valid (for example, nothing prevents you from entering 5/55/2016 into a CSV file, and that certainly isn't a valid DATE value), you can create a staging table whose columns are of VARCHAR2 datatype - it accepts everything, even garbage like 5/55/2016.
Then, after you load data, write some SQL to find errors, fix them, and then move data into the target table.
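A rough sketch of that approach (all names are invented, and VALIDATE_CONVERSION assumes Oracle 12.2 or later; on older versions a small PL/SQL function wrapping TO_DATE does the same job):

-- Staging table: everything is text, so the CSV always loads
CREATE TABLE stage_orders (
  order_id   VARCHAR2(20),
  order_date VARCHAR2(20)   -- raw CSV value such as '5/6/2016'
);

-- Find the rows that will not convert with the chosen mask
SELECT order_id, order_date
FROM   stage_orders
WHERE  VALIDATE_CONVERSION(order_date AS DATE, 'MM/DD/YYYY') = 0;

-- Move the clean rows into the real table
INSERT INTO orders (order_id, order_date)
SELECT order_id, TO_DATE(order_date, 'MM/DD/YYYY')
FROM   stage_orders
WHERE  VALIDATE_CONVERSION(order_date AS DATE, 'MM/DD/YYYY') = 1;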
Check the CSV data in a text editor and look for which part represents the month (the month value will be in the range 1..12). If you are using US dates then use MM/DD/YYYY, otherwise you should probably use DD/MM/YYYY as the date format. If the data has a mixture of both, then you must separate those files and use a different format for each, or you are likely to get invalid date values in your database.
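The ambiguity is easy to see with a value like 5/6/2016, where both masks parse successfully but give different days (purely illustrative):

SELECT TO_DATE('5/6/2016', 'MM/DD/YYYY') AS us_style,  -- 06-May-2016
       TO_DATE('5/6/2016', 'DD/MM/YYYY') AS eu_style   -- 05-Jun-2016
FROM   dual;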
SQL Developer can help you.
You can try the date format masks in the drop-down. If we can guess it, we'll default to one. For some reason your data...fools us, but you can type your own.
If you get something that 'works' the warnings go away.
If you get it wrong, we'll let you know before you even get to the next step.
You can find all the data format masks here.
The company I work for is in the process of switching from Oracle to EnterpriseDB, and I'm trying to update a query that uses a timestamp from a table, but whenever I try to pull that timestamp it gives me:
[Devart][ODBC][PostgreSQL]Invalid TIMESTAMP string {HY000}
I've tried casting it as varchar2, date, timestamp, using to_date, and nothing has worked.
The query is:
select "ship_date" from "promotion"#pgProd
In Postgres, ship_date is just a timestamp.
Any information about how this can be accomplished would be appreciated.
EDIT: To clarify, this query is being run in Oracle, pulling data from Postgres.
Oracle version is 11g
The relevant line of the creation script is:
ship_date timestamp without time zone NOT NULL