How can I run a series of queries/commands at once in Vertica?

Currently I am using Vertica on Ubuntu from the terminal as dbadmin. I am using admintools to connect to a database and then executing queries like CREATE TABLE, SELECT, and INSERT in the terminal.
Is there any way I can write the commands in an external text file and execute all the queries at once? For example, with Oracle we can create a SQL file in Notepad++ and then run all the queries against the database.

Not only can you use scripts, but it's a good practice for any operation that you might need to repeat.
From the vsql prompt, use the \i command to run a script:
vsql> \i create_tables.sql
From outside the vsql prompt, you can invoke vsql with -f filename.
File paths, if not absolute, are relative to the current working directory.
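For example, a one-shot invocation might look like this (the dbadmin user comes from the question's setup; the file name is illustrative):
# Run the whole script non-interactively and exit
vsql -U dbadmin -f create_tables.sql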

Related

TestContainers - OracleContainer - how to pass parameters to my sql files by using withCopyFileToContainer

I'm copying my SQL directory, which contains multiple SQL files, to the container; the container will execute them in alphabetical order. But I need to pass a parameter (schema_name) to my SQL files as $1:
oracleContainer.withCopyFileToContainer(MountableFile.forClasspathResource("database/scripts/"), "/container-entrypoint-startdb.d/")
How can I pass a parameter to the container so that it executes the SQL files correctly, like #ddl.sql &1?
Any ideas?
withCopyFileToContainer does nothing more than copy the files to the location you define; it is nearly equivalent to using docker cp. The gvenzl/oracle-xe image supports executing shell scripts as well as SQL scripts, so you can use a shell script if you need more sophisticated behavior.
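For instance, a startup shell script placed in the same directory could resolve the parameter itself and hand it to SQL*Plus as a positional argument (the script name, connect string, and SCHEMA_NAME variable below are illustrative assumptions, not part of the image's contract):
#!/bin/sh
# 01_run_ddl.sh -- placed in /container-entrypoint-startdb.d/ next to ddl.sql
# SQL*Plus forwards arguments given after the script name as &1, &2, ...
SCHEMA_NAME="${SCHEMA_NAME:-MYSCHEMA}"
sqlplus -s system/"$ORACLE_PASSWORD"@//localhost:1521/XEPDB1 \
  @/container-entrypoint-startdb.d/ddl.sql "$SCHEMA_NAME"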

Can a hive script be run from another hive script?

I have created two Hive scripts, script1.hql and script2.hql.
Is it possible to run script2.hql from script1.hql?
I read about using the source command, but could not figure out how to use it.
Any pointers/reference docs would be appreciated.
Use the source <filepath> command:
source /tmp/script2.hql; --inside script1
The docs are here: https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Cli
Hive will include the text of /tmp/script2.hql and execute it in the same context, so all variables defined in the main script will be accessible to script2's commands.
The source command expects a local path (not HDFS). Copy the file to a local directory before executing.
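As a small illustration of that shared context (the paths and the db variable are made up for the example):
-- script1.hql
SET hivevar:db=demo;
source /tmp/script2.hql;

-- script2.hql runs in the same session, so ${db} is visible here
USE ${db};
SHOW TABLES;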
Try the following command and see if you can execute it:
hive -f /home/user/sample.sql

How to create BiqQuery view from SQL source in a file (Windows command line)

I created a number of BigQuery views and all works well. I need to move the SQL source for the queries into my source control and manage changes from there. Is there a way to create/update a view from the command line using the source from a file? The bq mk command seems to only allow the SQL code to be given inline on the command line with the --view flag. Some of my views are quite lengthy, and I'm sure there are characters that would need to be escaped, which I obviously don't want to get into. I'm running on Windows. Thanks
Simply use the --flagfile parameter:
bq mk --help:
--flagfile: Insert flag definitions from the given file into the command line.
bq mk --view --flagfile=<path_to_your_file> dataset.newview
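Note that a flag file holds one flag definition per line, so the view SQL would have to sit on a single line as the value of a --view flag. A hypothetical flag file (the dataset and table names are placeholders) might look like:
# myview.flags -- each line is one flag definition
--view=SELECT id, name FROM mydataset.mytable WHERE active = true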
Let us assume that the file MyQuery.sql contains the view definition.
Create a script file script.sh with the contents below:
query=`cat MyQuery.sql`
bq mk --use_legacy_sql=false --view "$query" dataset.myview
Run it using the command sh script.sh.
This worked for me in a shell; you can make the necessary changes for Windows.
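A rough PowerShell equivalent for Windows (an untested sketch using the same placeholder names) would be:
# script.ps1 -- read the view SQL from the file and pass it to bq
$query = Get-Content -Raw MyQuery.sql
bq mk --use_legacy_sql=false --view $query dataset.myview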

Saving hive queries

I need to know how we can save a query I have written at the command line, just like we do in SQL (we use Ctrl+S in SQL Server).
I heard HiveQL queries use the .q or .hql extension. Is there any way I can save the list of commands I am executing to such a file?
Sure, whatever IDE you use, you can just save your file as myfile.q and then run it from the command line as
hive -f myfile.q
You can also do
hive -f myfile.q > myfileResults.log
if you want to redirect your results into a log file.
Create a new file using the cat command (you can even use an editor), and write all the queries you want to run inside the file:
$cat > MyQueries.hql
query1
query2
.
.
Ctrl+D
Note: the .hql or .q extension is not necessary. It is just for our own reference, to identify the file as a Hive query file.
You can execute all the queries in the file at once using
$hive -f MyQueries.hql
You can use Hue or the Hive web interface to access Hive instead of the terminal. It will give you a UI from which you can write and execute queries. This solves the saving problem too.
http://gethue.com/
https://cwiki.apache.org/confluence/display/Hive/HiveWebInterface

Strange issue running hiveql using -e option from .sh file

I have checked Stack Overflow but could not find any help, which is why I'm posting a new question.
The issue is related to executing HiveQL using the -e option from a .sh file.
If I run hive as $ bin/hive, everything works fine and all databases and tables are displayed properly.
If I run hive as $ ./hive OR $ hive (as set in the path variable) OR $HIVE_HOME/bin/hive, only the default database is displayed, and without any table information.
I'm learning Hive and trying to execute a hive command using $HIVE_HOME/bin/hive -e from a .sh file, but it always gives "database not found".
So I understand that it is something related to reading the metadata, but I'm not able to understand why it behaves this way.
However, hadoop commands work fine from anywhere.
Below is one command I'm trying to execute from the .sh file:
$HIVE_HOME/bin/hive -e 'LOAD DATA INPATH hdfs://myhost:8040/user/hduser/sample_table INTO TABLE rajen.sample_table'
Information:
I'm using hive-0.13.0 and hadoop-1.2.1.
Can anybody please explain how to solve this or how to work around the issue?
Can you correct the query first? Hive expects the path in a LOAD statement to be enclosed in quotes.
Try this first from the shell: $HIVE_HOME/bin/hive -e "LOAD DATA INPATH '/user/hduser/sample_table' INTO TABLE rajen.sample_table"
Or put your command in a test.hql file and test it with $ hive -f test.hql
--test.hql
LOAD DATA INPATH '/user/hduser/sample_table' INTO TABLE rajen.sample_table
I was finally able to fix the issue.
The problem was that I had kept the default Derby setup for the Hive metastore, so wherever I triggered the hive -e command, it created a fresh copy of the metastore_db directory.
So I created the metastore in MySQL instead, which made it global; now the same metastore database is used no matter where I trigger hive -e from.
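For reference, pointing the metastore at MySQL is done in hive-site.xml; a minimal sketch (the host, database name, and credentials are placeholders):
<!-- hive-site.xml: use a shared MySQL metastore instead of local Derby -->
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://dbhost:3306/hive_metastore?createDatabaseIfNotExist=true</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hiveuser</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hivepass</value>
</property>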
