Basically I want to execute an SQL file from an SQL file in Postgres.
Similar question for MySQL: is it possible to call an SQL script from a stored procedure in another SQL script?
Why?
Because I have 2 data files in a project and I want one line that can be commented/uncommented to load the second file.
Clarification:
I want to call B.SQL from A.SQL
Clarification2:
This is for a Spring project that uses Hibernate to create the database from the initial SQL file (A.SQL).
On further reflection, it seems I may have to handle this from Java/Spring/Hibernate.
Below is the configuration file:
spring.datasource.url=jdbc:postgresql://localhost:5432/dbname
spring.datasource.username=postgres
spring.datasource.password=root
spring.datasource.driver-class-name=org.postgresql.Driver
spring.datasource.data=classpath:db/migration/postgres/data.sql
spring.jpa.hibernate.ddl-auto=create
Importing other files is not supported in SQL itself, but if you execute the script with psql you can use the \i syntax:
SELECT * FROM table_1;
\i other_script.sql
SELECT * FROM table_2;
This will probably not work if you execute the SQL with clients other than psql.
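For example, if A.SQL contains the \i line shown above and B.SQL sits in the same directory, you could run it with psql like this (connection details borrowed from the question's configuration; adjust as needed):
psql -U postgres -d dbname -f A.SQL
Note that \i resolves relative paths against psql's current working directory; \ir resolves them against the directory of the script that contains the command.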
Hibernate is just:
reading all your SQL files line by line
stripping any comments (lines starting with --, // or /*)
removing any ; at the end
executing the result as a single statement
(see SchemaExport.importScript and SingleLineSqlCommandExtractor)
There is no support for an include here.
What you can do:
Define your own ImportSqlCommandExtractor which knows how to include a file; you can set that extractor with hibernate.hbm2ddl.import_files_sql_extractor=(fully qualified class name).
Define your optional file as an additional import file with hibernate.hbm2ddl.import_files=prefix.sql,optional.sql,postfix.sql. You can add and remove the file reference as you like, or even exclude the file from your artifact; a missing file will only produce a debug message. (A sketch of this option follows after this list.)
Create an Integrator which sets the hibernate.hbm2ddl.import_files property dynamically, depending on some environment property.
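For example, the second option could be wired into the Spring configuration from the question. This is only a sketch: spring.jpa.properties.* is the standard Spring Boot passthrough to the JPA provider and hibernate.hbm2ddl.import_files is a standard Hibernate key, but the optional file name here is a placeholder for your own B.SQL:
spring.jpa.hibernate.ddl-auto=create
# comment the optional file in or out of this list as needed
spring.jpa.properties.hibernate.hbm2ddl.import_files=db/migration/postgres/data.sql,db/migration/postgres/optional-data.sql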
Hey all, I am wondering if it's possible to place different .properties inside just one .properties file, so that I do not have to make a separate .properties file for each of my databases.
This site here is doing what I would like to do, but it doesn't explain how to go about sending those contexts to the .properties file. It also seems to be using separate .properties files; that is, it doesn't show me how it should look inside the .properties file. Another good example is here, but again, it doesn't do it in the .properties file.
Let's say I have the following inside my .properties file:
#liquibase.properties file content
url: jdbc:oracle:thin:@xxxxxxxx.str3.xxxxx.xxxxx:1511/xxxxxx.xxxxx.xxxxx.xxxxx
username: SEPRATE_1PEGA_BASEDA
password: XXXXXXXXXXXXX
referenceUrl: jdbc:oracle:thin:@xxxxxxxx.str2.xxxxx.xxxxx:1511/xxxxxx.xxxxx.xxxxx.xxxxx
referenceUsername: SEPRATE_1PEGA_BASEDB
referencePassword: YYYYYYYYYYYYYY
changeLogFile: diff_change.txt
diffTypes: tables, views, columns, indexes, foreignkeys, primarykeys, uniqueconstraints
And I want to send a parameter to replace the "str2" and "SEPRATE_1PEGA_BASEDB" values currently hard-coded inside the .properties file. So I write the .properties file like so:
#liquibase.properties file content
url: jdbc:oracle:thin:@xxxxxxxx.str3.xxxxx.xxxxx:1511/xxxxxx.xxxxx.xxxxx.xxxxx
username: SEPRATE_1PEGA_BASEDA
password: XXXXXXXXXXXXX
referenceUrl: jdbc:oracle:thin:@xxxxxxxx.${liquibase.properties.str}.xxxxx.xxxxx:1511/xxxxxx.xxxxx.xxxxx.xxxxx
referenceUsername: ${liquibase.properties.un}
referencePassword: YYYYYYYYYYYYYY
changeLogFile: diff_change.txt
diffTypes: tables, views, columns, indexes, foreignkeys, primarykeys, uniqueconstraints
So would the CLI for this look like:
liquibase --str=str5 --un=BobBarker diff
My liquibase version is Liquibase-3.6.2.
So if I understand your need properly, you would like to replace placeholders inside your property files.
There are 2 things you should think of:
changelog parameter substitution - I haven't tested this, but if you define the property parameter.testproperty=originalvalue and then pass -Dtestproperty=overridenvalue on the command line, it should replace your value and you will be able to use it in changelogs
liquibase configuration parameter substitution - Liquibase does not do this, because it reads the properties exactly as they appear in the file (method parsePropertiesFile, last if/else) and tries to fill the corresponding Java fields with those values. So for this you will need to replace the placeholders yourself before calling the liquibase command, or use a different property file (a sketch of that pre-processing follows below).
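A minimal sketch of the pre-processing approach, assuming a Unix shell with sed available, the placeholder names and values from the question, and a template file named liquibase.properties.template (the template file name is made up for illustration):
# fill in the placeholders, then run the diff against the generated file
sed -e 's/\${liquibase\.properties\.str}/str5/g' \
    -e 's/\${liquibase\.properties\.un}/BobBarker/g' \
    liquibase.properties.template > liquibase.properties
liquibase --defaultsFile=liquibase.properties diff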
I can't seem to find a way to specify a relative path for my infile when using SQL*Loader.
I'm running it through a command line and this is what it looks like:
C:\app\...in\sqlldr.exe userid=user/pass
control="C:\User...DATA_DATA_TABLE.ctl" log="C:\User...DATA_DATA_TABLE.log"
bad = "C:\User...DATA_DATA_TABLE.bad" discard = "C:\User...DATA_DATA_TABLE.dsc"
(I've added carriage returns just for readability here; the command I use is one line.)
And this works; it will start inserting stuff into the table IF the path to my infile in the .ctl is absolute, like "C:\Usertemp\example.ldr".
My .ctl was generated automatically by SQL Developer, and I just changed the path to this:
OPTIONS (ERRORS=50)
LOAD DATA
INFILE 'AI_SLA_DATA_DATA_TABLE.ldr' "str '{EOL}'" <-- I'm trying to use a relative path here, but it doesn't work
APPEND
CONTINUEIF NEXT(1:1) = '#'
INTO TABLE "USER"."DATA"
...other sqldeveloper generated stuff
The .ldr file is in the same directory as the .ctl file. Is it possible to get the path of the .ctl? I'm pretty sure it searches for the .ldr file next to sqlldr.exe instead of next to the .ctl.
Any tips on how to do this? I can't find answers in the Oracle docs.
Thanks.
I've never tried adding a relative path to the .ctl file, but for me it works fine as a command-line argument, e.g.
C:\app\...in\sqlldr.exe userid=user/pass
control="DATA_DATA_TABLE.ctl" log="DATA_DATA_TABLE.log"
bad = "DATA_DATA_TABLE.bad" discard ="DATA_DATA_TABLE.dsc"
data="AI_SLA_DATA_DATA_TABLE.ldr"
I am getting the below error in my script, which runs SQL*Loader:
SQL*Loader-522: lfiopn failed for file (/home/abc/test_loader/load/badfiles/TBLLOAD20150520.bad)
As far as I know this error is related to permissions, but I am wondering why: there is no "badfiles" folder present in the "/load" folder, and I have already defined the badfiles folder outside the load folder, so why is the error pointing at this location?
Is it that my input file has some problem and SQL*Loader is trying to create a bad file in the mentioned location?
below is the SQLLDR command :
$SQLLDR $LOADER_USER/$USER_PWD@$LOADER_HOSTNAME control=$CTLFDIR/CTL_FILE.ctl BAD=$BADFDIR/$BADFILE$TABLE_NAME ERRORS=0 DIRECT=TRUE PARALLEL=TRUE LOG=$LOGDIR/$TABLE_NAME$LOGFILE &
Below is the control file template:
LOAD DATA
INFILE '/home/abc/test_loader/load/FILENAME_20150417_001.csv' "STR '\n'"
APPEND
INTO TABLE STAGING.TAB_NAME
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(
COBDATE,
--
--
--
FUTUSE30 TERMINATED BY WHITESPACE
)
Yes, your input file has a problem, so sqlldr wants to create a file containing the rejected rows (the BAD file). The BAD file creation fails due to insufficient privileges: the user who runs sqlldr does not have rights to create a file in the folder you defined to contain BAD files.
Add write privileges on the BAD folder for the user who runs sqlldr, or place the BAD folder elsewhere.
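A minimal sketch of the first fix, assuming a Unix shell, the path from the error message, and that the user running sqlldr owns the directory:
# create the BAD folder if it is missing and make it writable
mkdir -p /home/abc/test_loader/load/badfiles
chmod u+w /home/abc/test_loader/load/badfiles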
This is likely some kind of permissions issue on writing the log file, maybe after moving services to a different server.
I ran into the same error. The problem was resolved by renaming the existing log file in the filesystem and rerunning the process. Upon rerunning, the SQLLDR process was able to recreate the log file, and subsequent executions were able to rewrite the log.
I can add stuff to the distributed cache via
add file largelookuptable
and then run a bunch of HQL.
Now, when I have a series of commands like the following:
add file largelookuptable1;
select blah from blahness using somehow largelookuptable1;
add file largelookuptable2;
select newblah from otherblah using largelookuptable2;
In this case largelookuptable1 is unnecessarily available for the second query. Is there a way I can get rid of it before the second query runs?
On the Hive CLI, type:
delete file largelookuptable1;
Same thing applies to jars added to distributed cache.
Syntax (from Hive CLI):
Usage: delete [FILE|JAR|ARCHIVE] <value> [<value>]*
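For example, sticking with the names from the question (list file shows what is currently in the cache):
hive> add file largelookuptable1;
hive> list file;
hive> delete file largelookuptable1;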
I am using a CTL file to load data stored in a file to a specific table in my Oracle database.
Currently, I launch the loader file using the following command line:
sqlldr user/pwd@db data=my_data_file control=my_loader.ctl
I would like to know if it is possible to specify parameters to be retrieved in the CTL file.
Also, is it possible to retrieve the name of the data file used by the CTL to fill the table? I would also like to insert it for each row; I currently have to call a procedure to update previously inserted records.
Any help would be appreciated !
As far as I know, there is no way to pass a parameter as a variable into the ctl file. But you can use a constant in the ctl file and modify the ctl file to change that constant's value (in the ctl file content) for every load.
Edit: more specific.
my_loader.ctl:
--options
load data
infile 'c:\$datfilename$' -- this is optional; you can specify the file here or on the command line
into table mytable
fields....
(
datafilename constant '$datfilename$', -- will be replaced by the real data file name on each load
datacol1 char(1),
....
)
dataload.bat: assuming $datfilename$ is the placeholder text that will be replaced by the data file's name.
::sample copy
copy my_loader.ctl my_loader_temp.ctl
::replace the placeholder with the data file's name (the original pseudo-command
::"findandreplace" is shown here concretely via PowerShell)
powershell -Command "(Get-Content my_loader_temp.ctl) -replace '\$datfilename\$','%1' | Set-Content my_loader_temp.ctl"
::load
sqlldr user/pwd@db data=%1 control=my_loader_temp.ctl
::or, with data omitted, if you specified infile in the control file
sqlldr user/pwd@db control=my_loader_temp.ctl
Usage: dataload.bat mydatafile_2010_10_10.txt