TestContainers - OracleContainer - how to pass parameters to my SQL files when using withCopyFileToContainer

I'm copying my SQL directory, which contains multiple SQL files, to the container; the container will execute them in alphabetical order. But I need to pass a parameter (schema_name) to my SQL files as $1.
oracleContainer.withCopyFileToContainer(MountableFile.forClasspathResource("database/scripts/"), "/container-entrypoint-startdb.d/")
How can I pass a parameter to the container so that the SQL files are executed correctly, something like @ddl.sql &1?
Any ideas?

withCopyFileToContainer does nothing more than copy the files to the location you define; it is roughly equivalent to using docker cp. The gvenzl/oracle-xe image supports executing shell scripts as well as SQL scripts, so you can use a shell script if you need more sophisticated behavior.
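For example, you could keep the SQL files outside the entrypoint directory (say /opt/scripts/, a made-up path) and copy only a small wrapper script into /container-entrypoint-startdb.d/, passing the schema name to the container as an environment variable, e.g. with oracleContainer.withEnv("SCHEMA_NAME", "my_schema") on the Java side. This is only a minimal sketch, assuming sqlplus with OS authentication works inside the gvenzl image; the script name and variable name are my own, not something Testcontainers provides:

#!/bin/bash
# 01_run_ddl.sh - hypothetical wrapper executed by the image after the database is up.
# SCHEMA_NAME is assumed to be set on the container via withEnv on the Java side.
sqlplus -s / as sysdba @/opt/scripts/ddl.sql "${SCHEMA_NAME}"

Inside ddl.sql the schema name is then available as &1, just as it would be when invoking the script manually.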

Related

How to export files from Oracle Apex workspace using command line?

One simpler way is:
Go to https://apex.oraclecorp.com, log in to the workspace with your credentials, then click on SQL Scripts and download or export the SQL files.
But is there any way to directly export/download all the SQL files via the command line?
You can quickly export Apex applications through the command line, avoiding the Apex UI completely. To do this, Oracle has provided two Java classes, which are included in the Apex download:
APEXExport.class
APEXExportSplitter.class
These classes are located in the apex/utilities/oracle/apex directory.
Before using these classes, you have to make sure that the following environment variables are set up correctly:
ORACLE_HOME should point to just that. Something like:
/u01/app/oracle11g/product/11.2.0/db1
APEX_HOME should point to the directory where you unzipped the Apex download. For example
/u01/downloads/apex
PATH needs to include $ORACLE_HOME/jdk/bin
CLASSPATH needs to include:
$ORACLE_HOME/jdbc/lib/ojdbc5.jar
$APEX_HOME/utilities
Note: You may also have to include the . path.
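For example, on Linux the setup could look like this (the paths are just the examples from above; adjust them to your installation):
export ORACLE_HOME=/u01/app/oracle11g/product/11.2.0/db1
export APEX_HOME=/u01/downloads/apex
export PATH=$PATH:$ORACLE_HOME/jdk/bin
export CLASSPATH=.:$ORACLE_HOME/jdbc/lib/ojdbc5.jar:$APEX_HOME/utilities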
APEXExport
To execute the export, use:
$ java oracle.apex.APEXExport
You can get a list of the available arguments by executing the above command without arguments. For a basic application export use:
$ java oracle.apex.APEXExport -db localhost:1521:db1 -user scott -password tiger -applicationid 100
You’ll get a message saying: Exporting application 100.
Once the export is done you’ll see a .sql file in your current directory (e.g. f100.sql).
APEXExportSplitter
You can now take the export file and split it up into its various components. This is handy if, for example, you want to examine the SQL of an application page or two.
$ java oracle.apex.APEXExportSplitter f100.sql
This program creates a directory named after your application (e.g. f100) and multiple subdirectories containing the various components of the application.

Can a hive script be run from another hive script?

I have created two hive scripts script1.hql and script2.hql.
Is it possible to run the script script2.hql from script1.hql?
I read about using the source command, but could not figure out how to use it.
Any pointers/reference docs would be appreciated.
Use the source <filepath> command:
source /tmp/script2.hql; --inside script1
The docs are here: https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Cli
Hive will include the text of /tmp/script2.hql and execute it in the same context, so all variables defined in the main script will be accessible to the commands in script2.
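For example (the variable name here is made up), script1.hql might contain:
set hivevar:db_name=mydb;
source /tmp/script2.hql;
and script2.hql can then refer to ${hivevar:db_name} directly.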
The source command looks for a local path (not HDFS), so copy the file to a local directory before executing.
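For example (the HDFS path is just a placeholder):
hdfs dfs -get /user/hive/scripts/script2.hql /tmp/script2.hql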
Alternatively, try running the script from the command line and see if it executes:
hive -f /home/user/sample.sql

How can I run a series of queries/commands at once in Vertica?

Currently I am using Vertica on Ubuntu from the terminal as dbadmin. I am using admintools to connect to a database and then executing queries like CREATE TABLE, SELECT, and INSERT in the terminal.
Is there any way I can write the commands in an external text file and execute all the queries at once? For Oracle, we can create a SQL file in Notepad++ and then run all the queries against the database.
Not only can you use scripts, but it's a good practice for any operation that you might need to repeat.
From the vsql prompt, use the \i command to run a script:
vsql> \i create_tables.sql
From outside the vsql prompt, you can invoke vsql with -f filename.
File paths, if not absolute, are relative to the current working directory.
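For example (user, database, and file name are placeholders):
vsql -U dbadmin -d mydb -f create_tables.sql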

Saving hive queries

I need to know how to save a query I have written at the command line, just like we do in SQL (we use Ctrl+S in SQL Server).
I heard HiveQL queries use the .q or .hql extension. Is there any way I can save the list of commands I am executing to such a file?
Sure. Whatever IDE you use, you can just save your file as myfile.q and then run it from the command line as:
hive -f myfile.q
You can also do
hive -f myfile.q > myfileResults.log
if you want to redirect your results into a log file.
Create a new file using the cat command (you can also use an editor), and write all the queries you want to perform inside the file:
$cat > MyQueries.hql
query1
query2
.
.
Ctrl+D
Note: the .hql or .q extension is not required; it is just for our reference, to identify the file as a Hive query file.
You can execute all the queries in the file at once using:
$hive -f MyQueries.hql
You can use Hue or the Hive web interface to access Hive instead of the terminal. They provide a UI where you can write and execute queries, which solves the copy problem too:
http://gethue.com/
https://cwiki.apache.org/confluence/display/Hive/HiveWebInterface

How can I execute commands from a CQL file within Neo4j's shell?

I have a file ImportData.cql which populates the database. Is there a way while within Neo4j's shell to execute all the commands within this file by simply calling on the file rather than pasting in each command?
Since the file is a .cql file, I am assuming it contains a bunch of Cypher queries.
If so, you can use:
./bin/neo4j-shell -file <name of file> -path <DB path>
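For example, with the file from the question (the database path is just a placeholder):
./bin/neo4j-shell -file ImportData.cql -path data/graph.db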
Thanks,
Sumit
