I'm trying to write an UPDATE SQL statement in PostgreSQL (PG Commander) that will update a user profile image column.
I've tried this:
update mytable set avatarImg = pg_read_file('/Users/myUser/profile.png')::bytea where userid=5;
but got: ERROR: absolute path not allowed
Read the file in the client.
Escape the contents as bytea.
Insert into database as normal.
(Elaborating on Richard's correct but terse answer; his should be marked as correct):
pg_read_file is really only intended as an administrative tool, and per the manual:
The functions shown in Table 9-72 provide native access to files on the machine hosting the server. Only files within the database cluster directory and the log_directory can be accessed.
Even if that restriction didn't apply, using pg_read_file would be incorrect; you'd have to use pg_read_binary_file. You can't just read text and cast to bytea like that.
The path restrictions mean that you must read the file using the client application, as Richard says. Read the file from the client, bind it as a bytea parameter in your SQL, and send the query.
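For example, if your client is psql itself, a minimal client-side sketch (table and column names taken from the question) could be:

\lo_import '/Users/myUser/profile.png'
UPDATE mytable SET avatarImg = lo_get(:LASTOID) WHERE userid = 5;
SELECT lo_unlink(:LASTOID);

Here \lo_import reads the file on the client machine and psql sets :LASTOID to the new large object's OID; lo_get (PostgreSQL 9.4+) then turns it into bytea, and lo_unlink cleans up. In application code you would instead read the bytes yourself and bind them as a bytea parameter through your driver.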
Alternatively, you could use lo_import to read a server-side file in as a binary large object, then read that as bytea and delete the large object.
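A sketch of that route (lo_import here runs on the server, so it needs superuser rights and a server-side path; the OID shown is just an example of what the call returns):

SELECT lo_import('/var/lib/postgresql/profile.png'); -- returns an OID, say 16404
UPDATE mytable SET avatarImg = lo_get(16404) WHERE userid = 5; -- lo_get needs 9.4+
SELECT lo_unlink(16404); -- delete the temporary large object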
pg_read_file can read files only from the data directory path. If you would like to know your data directory path, use:
SHOW data_directory;
For example, it might show:
/var/lib/postgresql/data
Copy your file to that directory.
After that, you can use just the file name in your query.
UPDATE student_card SET student_image = pg_read_file('up.jpg')::bytea;
or, better, use the pg_read_binary_file function, which returns bytea directly and is the right choice for binary data such as images (note that both statements update every row; add a WHERE clause to target a single record):
UPDATE student_card SET student_image = pg_read_binary_file('up.jpg');
I have a filegroup named Year2020 which contains three different .ndf files, for example Summer.ndf, Winter.ndf, and Fall.ndf.
Now I want to create a Fall table, and I want my table to be saved in the Fall.ndf file, not in Summer.ndf or Winter.ndf. Is there a way to do things like this? I am using SQL Server.
The problem is that all of them are in the same filegroup named Year2020. How can we save it exactly where we want?
When I save the Fall table it goes into Summer.ndf, not Fall.ndf.
I created a database and connected to it. But when I execute
select optimizer;
it returns
SELECT: identifier 'optimizer' unknown
What's the problem here? Also, I can't find the sys tables in the database using \d.
If I want to add an optimizer myopt, are the steps below enough?
write the opt_myopt.h and opt_myopt.c in /monetdb5/optimizer/
Add the code into codes in /monetdb5/optimizer/opt_wrapper.c
Add the function into optimizer_init_funcs in /monetdb5/optimizer/optimizer.c
Add a new pipe in /monetdb5/optimizer/opt_pipes.c
Since Oct2020, variables have a schema (to keep them consistent with other SQL objects). In your session, 'sys' is not the session's schema, which is why it cannot find the 'optimizer' variable; the same goes for the tables.
In the default branch (which will be available in the next release) I added a "schema path" property on the user to search SQL objects beyond the current session's schema. By default it includes the 'sys' schema.
For your first question: if your current_schema is not sys, you need to use select sys.optimizer;.
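In other words (a sketch in MonetDB SQL):

SELECT sys.optimizer; -- qualify the variable with its schema
SET SCHEMA sys; -- or make sys the session schema first
SELECT optimizer;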
For your second question: the best existing example is probably in monetdb5/extras/mal_optimizer_template. Beyond that, it's basically a matter of checking the source code to see how other optimizers have been implemented. NB: although it doesn't happen often, the internals of MonetDB can change between (major) versions. I'd recommend using Oct2020 or newer.
Concerning your second question,
You also have to create and add an optimizer pipeline to opt_pipes.c. Look for the default_pipe and then copy/paste that one to a new pipeline and add your optimizer to it.
There are some more places where you might need to add your optimizer, such as the codes[] array in opt_wrapper.c. Just mimic one of the standard optimizers like "reorder".
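After rebuilding, you can verify from SQL that the new pipeline is registered and select it for your session (a sketch, assuming you named the pipe myopt_pipe in opt_pipes.c):

SELECT * FROM sys.optimizers; -- the new pipe should be listed here
SET optimizer = 'myopt_pipe'; -- use it for the current session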
I have a large dataset where I do data validation using syntax. For each validation, a variable is created and set to 1 if there is a problem with the data that I need to check.
For each validation I then create a subset of the data holding only the relevant variables for the relevant cases. Still using syntax, I save these data files to Excel in order to do the checks and correct the data (in a database).
Problem is that not all of my 50+ validations detect any problematic data every time I run the check, but 50+ files are saved because I save a file for each validation. I'd like to save the files only if there is data in them.
Current syntax for saving the files is:
DATASET ACTIVATE DataSet1.
DATASET COPY error1.
DATASET ACTIVATE error1.
FILTER OFF.
USE ALL.
SELECT IF (var_error1 = 1).
EXECUTE.
SAVE TRANSLATE OUTFILE='path_error1.xlsx'
/TYPE=XLS
/VERSION=12
/MAP
/REPLACE
/FIELDNAMES
/CELLS=VALUES
/KEEP=var1 var2 var3 var4.
This is repeated for each validation. If no case violates the validation for "error1" I will still get an output file (which is empty).
Any way to alter the syntax to only save the data if there are in fact cases that violate the validation?
The following syntax will write a new syntax file containing the command to save the file to Excel, but only if there are actual cases in the file. You will run the new syntax every time, but the Excel file will be created only in relevant cases:
DATASET ACTIVATE DataSet1.
DATASET COPY error1.
DATASET ACTIVATE error1.
FILTER OFF.
USE ALL.
SELECT IF (var_error1 = 1).
EXECUTE.
do if $casenum=1.
write outfile='path\tmp\run error1.sps' /"SAVE TRANSLATE OUTFILE='path\var_error1.xlsx'"
/" /TYPE=XLS /VERSION=12 /MAP /REPLACE /FIELDNAMES /CELLS=VALUES /KEEP=var1 var2 var3 var4.".
end if.
exe.
insert file='path\tmp\run error1.sps'.
Please edit the "path" according to your needs.
Note that the new syntax file will be written in all cases, but when there is no data in the file the syntax will be empty, and so no empty file will be written to Excel.
I have a hive table with ip_address column. How can I find country, city and Zip code from that ip_address column?
I found a UDF written for this:
https://github.com/edwardcapriolo/hive-geoip
How do I use a UDF in Hive? Can I choose the function name myself?
The UDF says we need a separate database:
http://geolite.maxmind.com/download/geoip/database/GeoLiteCountry/GeoIP.dat.gz
How do I make that database available to Hive?
Any feedback will be appreciated.
Thanks,
Rio
You utilize UDFs in Hive by adding the jars and creating temporary functions as described by your first link.
add file GeoIP.dat;
add jar geo-ip-java.jar;
add jar hive-udf-geo-ip-jtg.jar;
create temporary function geoip as 'com.jointhegrid.hive.udf.GenericUDFGeoIP';
You may change the function name to whatever you prefer; simply replace the word after "temporary function", here "geoip", with whatever you want.
Adding the database you linked to is a matter of downloading it to your Unix server and decompressing it with gzip. Once it is in the GeoIP.dat format, move it and the jars you've downloaded into your home directory, /users/(your username)/, and then run the code as instructed above. The files must be in your top directory or else explicitly targeted in your add file and add jar statements. By that I mean instead of add file GeoIP.dat; it must be, for example, add file /users/wertz/downloads/GeoIP.dat;
Finally, looking at the code, the UDF needs three arguments. The first argument is the IP address; the second argument is what you're looking for (choices appear to be COUNTRY_NAME, COUNTRY_CODE, AREA_CODE, CITY, DMA_CODE, LATITUDE, LONGITUDE, METRO_CODE, POSTAL_CODE, REGION, ORG, or ID); and the final argument is the filename of the GeoIP database, which hopefully you have not changed from GeoIP.dat.
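Putting it together, a query along these lines should then work (the weblogs table name is an assumption for illustration; ip_address is the column from the question):

SELECT geoip(ip_address, 'COUNTRY_NAME', './GeoIP.dat') AS country,
       geoip(ip_address, 'CITY', './GeoIP.dat') AS city,
       geoip(ip_address, 'POSTAL_CODE', './GeoIP.dat') AS zip
FROM weblogs;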
Some questions arise when using the Magmi generic SQL datasource. Magmi 0.7.18 displays the following input information when using that plugin:
I tried several times with two approaches.
- The first one, as described in the image, was a direct query against the MySQL database containing all the rows to feed the Magento database using Magmi (no files in genericsql/requests).
- The second one used a .sql file exported from my database, placing that file into genericsql/requests.
In both cases I received the warning "No Records returned by datasource".
I read that some folks suggest using the input DB Initial Statement: SELECT COUNT(*) AS cnt FROM tablename, but in my case the result was the same.
Question one: using MySQL, can I query the database directly using the input DB information (type, host, name, user, password), or do I have to place the .sql file in genericsql/requests too? Is that my error?
Question two: given that MySQL cannot attach files the way MS SQL can, what information do I have to supply when Magmi requests a user/password for that .sql file?
Any help appreciated; I'm stuck on this issue and CSV is not suitable for my needs.
by dweeves:
Your SELECT has to be put in a .sql file in the directory listed in red (name it as you want, as long as it ends with the .sql extension).
The "Initial Statements" is a field that is meant to hold the "connection" time statements (like SET NAMES 'UTF8').
For the "quick count" , you might also add a .sql.count file in the same dir with the same name that the request you want to achieve.
By default magmi will find the count using a
SELECT COUNT(*) FROM (your request here)
See the Generic SQL Datasource plugin documentation.
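As an illustration, a minimal pair of request files might look like this (file, table, and column names are hypothetical):

-- genericsql/requests/products.sql
SELECT sku, name, price FROM my_source_products;

-- genericsql/requests/products.sql.count (optional; without it Magmi wraps the request in SELECT COUNT(*) FROM (...))
SELECT COUNT(*) FROM my_source_products;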