I am really new to SQLite.
I want to update BLOBs in the column "data" in my database, and I got it working:
UPDATE genp SET data= X'MyHexData' WHERE rowid=510849
As I want to update multiple BLOBs in the column "data", I decided to write a .sh script:
sqlite3 my.db 'UPDATE genp SET data= X'MyHexData' WHERE rowid=510849'
When I execute this script, I get the error message:
SQL error: no such column: XMyHexData
Why does SQLite think that my hex data is supposed to be the column? Where is my mistake? It works if I run this in the command-line shell of SQLite.
EDIT:
I got it working, like this:
sqlite3 my.db "UPDATE genp SET data= X'MyHexData' WHERE rowid= '510849'"
Thanks for all your help
You've already used single quotes to quote the shell argument, so the single quotes around the hex literal end the argument early and SQLite sees XMyHexData as a column name. Either escape the inner quotes or switch the outer quotes.
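A minimal sketch of both styles in Bash (the double-quoted form matches the edit above; the second form closes the single-quoted string, splices in an escaped quote, and reopens it):
# Outer double quotes leave the inner single quotes intact:
sqlite3 my.db "UPDATE genp SET data= X'MyHexData' WHERE rowid=510849"
# Or keep outer single quotes and splice in escaped quotes:
sqlite3 my.db 'UPDATE genp SET data= X'\''MyHexData'\'' WHERE rowid=510849'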
I am having trouble importing the TPC-H benchmark data (generated with dbgen) into my MonetDB database.
I've already created all the tables and I'm trying to import using the following command:
COPY RECORDS INTO region FROM "PATH\region.tbl" DELIMITERS tuple_seperator '|' record_seperator '\r\n';
And I get the following error message:
syntax error, unexpected RECORDS, expecting BINARY or INTO in: "copy records"
I also found this one on the internet:
COPY INTO sys.region 'PATH/region.tbl' using delimiters '|','\n';
But I get the following error message:
syntax error, unexpected IDENT, expecting FROM in: "copy into sys.region "C:\ProgramData\MySQL\MySQL Server 5.7\Uploads\region."
Because I'm a new MonetDB user, I'm not getting what I'm doing wrong.
Any help will be appreciated :)
The RECORDS construct expects a number, specifically how many records you are to load. I usually do this:
COPY 5 RECORDS INTO region FROM '/path/to/region.tbl' USING DELIMITERS '|', '|\n' LOCKED;
Also, in the second attempt you are missing a FROM before the path to the file, like:
COPY INTO sys.region FROM '/path/to/region.tbl' USING DELIMITERS '|', '\n';
See here for more information: https://www.monetdb.org/Documentation/Manuals/SQLreference/CopyInto
Basically I want to execute an SQL file from an SQL file in Postgres.
Similar question for mysql: is it possible to call a sql script from a stored procedure in another sql script?
Why?
Because I have 2 data files in a project and I want a single line that can be commented/uncommented to load the second file.
Clarification:
I want to call B.SQL from A.SQL
Clarification2:
This is for a Spring Project that uses hibernate to create the database from the initial SQL file (A.SQL).
On further reflection, it seems I may have to handle this from Java/Spring/Hibernate.
Below is the configuration file:
spring.datasource.url=jdbc:postgresql://localhost:5432/dbname
spring.datasource.username=postgres
spring.datasource.password=root
spring.datasource.driver-class-name=org.postgresql.Driver
spring.datasource.data=classpath:db/migration/postgres/data.sql
spring.jpa.hibernate.ddl-auto=create
Importing other files is not supported in SQL itself, but if you execute the script with psql you can use the \i syntax:
SELECT * FROM table_1;
\i other_script.sql
SELECT * FROM table_2;
This will probably not work if you execute the SQL with clients other than psql.
Hibernate is just:
reading all your SQL files line by line
stripping any comments (lines starting with --, // or /*)
removing any ; at the end
executing the result as a single statement
(see SchemaExport.importScript and SingleLineSqlCommandExtractor)
There is no support for an include here.
What you can do:
Define your own ImportSqlCommandExtractor which knows how to include a file - you can set that extractor with hibernate.hbm2ddl.import_files_sql_extractor=(fully qualified class name)
Define your optional file as an additional import file with hibernate.hbm2ddl.import_files=prefix.sql,optional.sql,postfix.sql. You can add and remove the file reference as you like, or even exclude the file from your artifact; a missing file only produces a debug message (see the configuration sketch after this list).
Create an Integrator which sets the hibernate.hbm2ddl.import_files property dynamically - depending on some environment property
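A minimal configuration sketch for the second option, using the application.properties format from the question (this assumes Spring Boot forwards spring.jpa.properties.* to the JPA provider; the file names are the illustrative ones from the list above):
spring.jpa.properties.hibernate.hbm2ddl.import_files=prefix.sql,optional.sql,postfix.sql
Dropping optional.sql from this list, or deleting the file itself, skips the second data file without any other change.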
I'm running this command from PostgreSQL 9.4 on Windows 8.1:
psql -d dbname -f filenameincurrentdirectory.sql
The sql file has, for example, these commands:
INSERT INTO general_lookups ("name", "old_id") VALUES ('Open', 1);
INSERT INTO general_lookups ("name", "old_id") VALUES ('Closed', 2);`
When I run the psql command, I get this error message:
psql:filenameincurrentdirectory.sql:1: ERROR: syntax error at or near "ÿ_I0811a2h1"
LINE 1: ÿ_I0811a2h1 ru
How do I import a file of SQL commands using psql?
I have no problems executing these SQL files with pgAdmin.
If your issue is the BOM (byte order mark), another option is sed. It is also kind of nice because, if the BOM is not your issue, it is non-destructive to your data. Download and install sed for Windows:
http://gnuwin32.sourceforge.net/packages/sed.htm
The package called "Complete package, except sources" contains additional required libraries that the "Binaries" package doesn't include.
Once sed is installed run this command to remove the BOM from your file:
sed -i '1 s/^\xef\xbb\xbf//' filenameincurrentdirectory.sql
This is particularly useful if your file is too large for Notepad++.
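To check whether a BOM is actually there before (and after) running sed, you can hex-dump the first few bytes, assuming xxd or a similar hex viewer is available (for example via Git for Windows). A UTF-8 BOM shows up as ef bb bf, while a UTF-16 file starts with ff fe or fe ff:
head -c 3 filenameincurrentdirectory.sql | xxd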
Okay, the problem does have to do with the BOM (byte order mark). The file was generated by Microsoft Access. I opened the file in Notepad and saved it as UTF-8 instead of "Unicode", since Notepad's "Unicode" is UTF-16. That got this error message:
psql:filenameincurrentdirectory.sql:1: ERROR: syntax error at or near "INSERT"
LINE 1: INSERT INTO general_lookups ("name", "old_id" ) VAL...
I then learned from another website that Postgres doesn't utilize the BOM and that Notepad doesn't allow users to save without a BOM. So I had to download Notepad++, set the encoding to UTF-8 without BOM, save the file, and then import it. Voila!
An alternative to using Notepad++ is this little Python script I wrote. Simply pass in the name of the file to convert.
import sys

if len(sys.argv) == 2:
    # Read the UTF-16 file (as saved by Notepad/Access) as raw bytes ...
    with open(sys.argv[1], 'rb') as source_file:
        contents = source_file.read()
    # ... and write it back in place as UTF-8 (without a BOM).
    with open(sys.argv[1], 'wb') as dest_file:
        dest_file.write(contents.decode('utf-16').encode('utf-8'))
else:
    print("Please pass in a single file name to convert.")
I am trying to run a Hive script in pseudo-distributed mode. The commands in the script run absolutely fine when I run them in interactive mode. However, when I add all those commands to a script and run it, I get an error.
The script:
add jar /path/to/jar/file;
create table flights(year int, month int,code string) row format serde 'com.bizo.hive.serde.csv.CSVSerde';
load data inpath '/tmp/hive-user/On_Time_On_Time_Performance_2013_1.csv' overwrite into table flights;
The file 'On_Time_On_Time_Performance_2013_1.csv' does exist in HDFS. The error I get is:
FAILED: SemanticException Line 3:17 Invalid path ''/tmp/hive-user/On_Time_On_Time_Performance_2013_1.csv'': No files matching path hdfs://localhost:54310/tmp/hive-user/On_Time_On_Time_Performance_2013_1.csv
fs.default.name=hdfs://localhost:54310
My hadoop is running fine.
Can someone give any pointers?
Thanks.
This is not really an answer, but it is a more detailed and repeatable formulation of your question.
a) One needs to download the csv-serde from here: git clone https://github.com/ogrodnek/csv-serde
b) Build it using mvn package.
c) Create a text file containing three comma-separated fields corresponding to the three fields of the given table.
d) If the path is, say, "/shared", then the following is the correct sequence to load:
add jar /shared/csv-serde/target/csv-serde-1.1.2-0.11.0-all.jar;
drop table if exists flights;
create table flights(year int, month int,code string) row format serde 'com.bizo.hive.serde.csv.CSVSerde' stored as textfile;
load data inpath '/tmp/hive-user/On_Time_On_Time_Performance_2013_1.csv' overwrite into table flights;
I do see the same error as in the OP: FAILED: SemanticException Line 2:17 Invalid path ''/tmp/hive-user/On_Time_On_Time_Performance_2013_1.csv'': No files matching path hdfs://localhost:9000/tmp/hive-user/On_Time_On_Time_Performance_2013_1.csv
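One detail worth adding, since it is easy to hit when re-running such a script: LOAD DATA ... INPATH moves the source file into the table's warehouse directory rather than copying it, so a second run of the same script will no longer find it under /tmp/hive-user/. Checking and, if needed, re-staging the file from the shell with the standard Hadoop CLI (adjust local and HDFS paths as needed):
hdfs dfs -ls /tmp/hive-user/On_Time_On_Time_Performance_2013_1.csv
hdfs dfs -put On_Time_On_Time_Performance_2013_1.csv /tmp/hive-user/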
I am using a CTL file to load data stored in a file to a specific table in my Oracle database.
Currently, I launch the loader file using the following command line:
sqlldr user/pwd@db data=my_data_file control=my_loader.ctl
I would like to know if it is possible to specify parameters to be retrieved in the CTL file.
Also, is it possible to retrieve the name of the data file used by the CTL to fill the table? I would also like to insert it for each row. I currently have to call a procedure to update previously inserted records.
Any help would be appreciated!
As far as I know, there is no way to pass a parameter as a variable in the CTL file. But you can use a constant in the CTL file and modify the file to change that constant value (in the CTL file content) for each load.
Edit: more specific.
my_loader.ctl:
--options
load data
infile 'c:\$datfilename$' --this is optional; you can specify it here or on the command line
into table mytable
fields....
(
datafilename constant '$datfilename$', -- will be replaced by the real data file name on each load
datacol1 char(1),
....
)
dataload.bat: assume that $datfilename$ is the text that will be replaced by the data file's name.
::sample copy
copy my_loader.ctl my_loader_temp.ctl
::replace the placeholder with the data file's name (mainly for the value loaded into the table's datafilename column)
findandreplace my_loader_temp.ctl "$datfilename$" "%1"
::load
sqlldr user/pwd@db data=%1 control=my_loader_temp.ctl
::or with DATA omitted, if you specified INFILE in the control file
sqlldr user/pwd@db control=my_loader_temp.ctl
Usage: dataload.bat mydatafile_2010_10_10.txt
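For the placeholder substitution itself, one concrete option on Windows (instead of a separate find-and-replace tool, which the findandreplace line above stands in for) is a one-line PowerShell call from the batch file. This is only a sketch and assumes the data file name passed in %1 contains no spaces:
powershell -Command "(Get-Content my_loader.ctl) -replace '\$datfilename\$', '%1' | Set-Content my_loader_temp.ctl"
It reads the template control file, replaces every $datfilename$ placeholder with the data file name, and writes the result to my_loader_temp.ctl, so the separate copy step is no longer needed.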