How to import CSV file data to PostgreSQL in OmniDB on Windows

I am using this command to import CSV file data to PostgreSQL in OmniDB on Windows:
COPY owner."order"(id,type,name)
FROM 'C:\Users\Desktop\omnidb_exported.csv' DELIMITER ';' CSV HEADER;
I am getting this error, although the file exists:
could not open file "C:\Users\Desktop\omnidb_exported.csv" for
reading: No such file or directory
I have also given Everyone read and execute permissions on the CSV file and its folder, but the problem still exists.
The CSV file uses ";" as the delimiter and has a header row.
The owner schema has 3 tables, which are connected by the "id" column.
How do I import the CSV file data correctly? What is the problem with these commands?

OK, as below:
\copy owner."order"(id,type,name) FROM 'C:\Users\Desktop\omnidb_exported.csv' DELIMITER ';' CSV HEADER;
Just replace COPY with \copy and the data loads successfully. COPY is executed by the database server and reads the file with the server's own filesystem access, whereas \copy is a client-side psql command that reads the file from the client machine, which is why it avoids the "No such file or directory" error.
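Since \copy is a psql meta-command, it has to be run from a psql session rather than from a plain SQL editor. A minimal sketch, assuming placeholder connection details (your_user and your_db are not from the original question):
# connect with psql first (your_user and your_db are placeholders)
psql -U your_user -d your_db
# then run the client-side copy at the psql prompt
\copy owner."order"(id,type,name) FROM 'C:\Users\Desktop\omnidb_exported.csv' DELIMITER ';' CSV HEADER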

Related

Is there a way to skip the header line when using Sqoop to export a CSV file to Vertica DB?

I need to export a CSV file to Vertica with Sqoop, but since the CSV has a header in it, that line gets exported as well.
Is there an efficient way to skip the header?
I don't know Sqoop thoroughly, but if Sqoop can control the generation of the Vertica COPY command, make sure that, with a CSV file like this:
id|name
1|Arthur
2|Ford
3|Zaphod
to generate this command:
COPY public.foo FROM LOCAL 'foo.csv' DELIMITER '|' SKIP 1
SKIP 1 is the clause that makes COPY skip the first line.

Import a CSV file into Neo4j on a Mac: which path to use for my file?

I am a new user of Neo4j. I would like to import a simple CSV file into Neo4j on my Mac, but it seems I am doing something wrong with the path to my file. I have tried many different ways, but it is not working; the only workaround I found is to upload it to Dropbox.
Please see below the code I am using:
LOAD CSV WITH HEADERS FROM "file://Users/Cam/Documents/Neo4j/default.graphdb/import/node_attributes.csv" as line
RETURN count(*)
The error message is:
Cannot load from URL
'file://Users/Cam/Documents/Neo4j/default.graphdb/import/node_attributes.csv':
file URL may not contain an authority section (i.e. it should be
'file:///')
I have already tried adding /// to the path, but it is not working.
If the CSV file is in your default.graphdb/import folder, then you don't need to provide the absolute path, just give the path relative to the import folder:
LOAD CSV WITH HEADERS FROM "file:///node_attributes.csv" as line
RETURN count(*)
I'd use neo4j-import from the terminal:
Example from https://neo4j.com/developer/guide-import-csv/
neo4j-import --into retail.db --id-type string \
--nodes:Customer customers.csv --nodes products.csv \
--nodes orders_header.csv,orders1.csv,orders2.csv \
--relationships:CONTAINS order_details.csv \
--relationships:ORDERED customer_orders_header.csv,orders1.csv,orders2.csv
What is not working when you try the following?
LOAD CSV WITH HEADERS FROM "file:///Users/Cam/Documents/Neo4j/default.graphdb/import/node_attributes.csv" as line RETURN count(*)

SQLLDR file path argument

I have more than 30 files from which to load data.
The path in those files changes at every run, so the INFILE path becomes
INFILE "/home/dmf/Cycle7Data/ITEM_IMAGE.csv"
INFILE "/home/dmf/Cycle8Data/ITEM_IMAGE.csv"
The file names also change from one control file to the next (e.g. SUPPLIER.csv).
Is there any way to pass the file path in a variable or set an environment variable, so that the control file does not have to be edited every time?
You can pass the data file name on the command line; from the documentation:
DATA specifies the name of the data file containing the data to be loaded. If you do not specify a file extension or file type, then the default is .dat.
If you specify a data file on the command line and also specify data files in the control file with INFILE, then the data specified on the command line is processed first. The first data file specified in the control file is ignored. All other data files specified in the control file are processed.
So pass the relevant file name with each invocation, e.g.
sqlldr user/passwd control=myfile.ctl data=/home/dmf/Cycle7Data/ITEM_IMAGE.csv
If you have lots of files to load from a directory you could have a shell script that loops over the directory contents and passes each file name in turn to an SQL*Loader session.
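As a rough sketch of that loop (the directory, credentials, and control file name below are placeholders, not taken from the question):
# Hypothetical wrapper: pass each CSV in one cycle directory to its own SQL*Loader run
for f in /home/dmf/Cycle7Data/*.csv; do
  sqlldr user/passwd control=myfile.ctl data="$f" log="${f%.csv}.log"
done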

How do I import a file of SQL commands to PostgreSQL?

I'm running this command from PostgreSQL 9.4 on Windows 8.1:
psql -d dbname -f filenameincurrentdirectory.sql
The sql file has, for example, these commands:
INSERT INTO general_lookups ("name", "old_id") VALUES ('Open', 1);
INSERT INTO general_lookups ("name", "old_id") VALUES ('Closed', 2);`
When I run the psql command, I get this error message:
psql:filenameincurrentdirectory.sql:1: ERROR: syntax error at or near "ÿ_I0811a2h1"
LINE 1: ÿ_I0811a2h1 ru
How do I import a file of SQL commands using psql?
I have no problems utilizing pgAdmin in executing these sql files.
If your issue is a BOM (byte order mark), another option is sed. It is also non-destructive to your data if the BOM turns out not to be your issue. Download and install sed for Windows:
http://gnuwin32.sourceforge.net/packages/sed.htm
The package called "Complete package, except sources" contains additional required libraries that the "Binaries" package doesn't.
Once sed is installed run this command to remove the BOM from your file:
sed -i '1 s/^\xef\xbb\xbf//' filenameincurrentdirectory.sql
This is particularly useful if your file is too large for Notepad++.
Okay, the problem does have to do with the BOM (byte order mark). The file was generated by Microsoft Access. I opened the file in Notepad and saved it as UTF-8 instead of Unicode, since Windows saves UTF-16 by default. That produced this error message:
psql:filenameincurrentdirectory.sql:1: ERROR: syntax error at or near "INSERT"
LINE 1: INSERT INTO general_lookups ("name", "old_id" ) VAL...
I then learned from another website that Postgres doesn't utilize the BOM and that Notepad doesn't allow users to save without a BOM. So I had to download Notepad++, set the encoding to UTF-8 without BOM, save the file, and then import it. Voila!
An alternative to using Notepad++ is this little Python script I wrote. Simply pass in the file name to convert.
import sys

if len(sys.argv) == 2:
    # Read the raw UTF-16 bytes written by Windows/Access...
    with open(sys.argv[1], 'rb') as source_file:
        contents = source_file.read()
    # ...and overwrite the file as UTF-8 (without a BOM) so psql can read it.
    with open(sys.argv[1], 'wb') as dest_file:
        dest_file.write(contents.decode('utf-16').encode('utf-8'))
else:
    print("Please pass in a single file name to convert.")

SQL*Loader-522: lfiopn failed for file

I am getting below error in my script which is running a SQLLDR :
SQL*Loader-522: lfiopn failed for file (/home/abc/test_loader/load/badfiles/TBLLOAD20150520.bad)
As far as I know this error is related to permissions, but I am wondering about the path: there is no "badfiles" folder present in the "/load" folder. I have already defined the badfiles folder outside the load folder, so why is the error pointing to this location?
Is it that my input file has some problem and SQLLDR is trying to create a bad file in the mentioned location?
Below is the SQLLDR command:
$SQLLDR $LOADER_USER/$USER_PWD@$LOADER_HOSTNAME control=$CTLFDIR/CTL_FILE.ctl BAD=$BADFDIR/$BADFILE$TABLE_NAME ERRORS=0 DIRECT=TRUE PARALLEL=TRUE LOG=$LOGDIR/$TABLE_NAME$LOGFILE &
Below is the control file template:
LOAD DATA
INFILE '/home/abc/test_loader/load/FILENAME_20150417_001.csv' "STR '\n'"
APPEND
INTO TABLE STAGING.TAB_NAME
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(
COBDATE,
--
--
--
FUTUSE30 TERMINATED BY WHITESPACE
)
Yes, your input file has a problem, so sqlldr wants to create a file containing the rejected rows (the BAD file). Creating the BAD file fails due to insufficient privileges: the user who runs sqlldr does not have rights to create a file in the folder you defined for BAD files.
Add write privileges on the BAD folder for the user who runs sqlldr, or place the BAD folder elsewhere.
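A minimal sketch of the first option, assuming $BADFDIR is the directory passed in the BAD= argument of your sqlldr call:
# create the BAD directory if it is missing and grant write access to its owner;
# adjust ownership/permissions as needed for the OS user that actually runs sqlldr
mkdir -p "$BADFDIR"
chmod u+w "$BADFDIR"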
This is likely some kind of permissions issue on writing the log file, perhaps after moving services to a different server.
I ran into the same error. The problem was resolved by renaming the existing log file in the filesystem and rerunning the process. Upon rerunning, the SQLLDR process was able to recreate the log file, and subsequent executions were able to rewrite the log.
