How can I execute commands from a CQL file within Neo4j's shell? - shell

I have a file ImportData.cql which populates the database. Is there a way while within Neo4j's shell to execute all the commands within this file by simply calling on the file rather than pasting in each command?

Since the file has a .cql extension, I am assuming it contains a bunch of Cypher queries. If so, you can run them all with:
./bin/neo4j-shell -file "name of file" -path "DB-path"
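A minimal sketch of the invocation, assuming the CQL file sits in the current directory and the database lives at data/graph.db (both paths are illustrative):

```shell
# Hypothetical paths; adjust for your installation.
CQL_FILE="ImportData.cql"
DB_PATH="data/graph.db"

# neo4j-shell (bundled with Neo4j 2.x/3.x) reads the file and executes
# every statement in it against the database at DB_PATH.
if command -v neo4j-shell >/dev/null 2>&1; then
    neo4j-shell -file "$CQL_FILE" -path "$DB_PATH"
else
    # Newer Neo4j installs ship cypher-shell instead, which reads from stdin.
    echo "neo4j-shell not on PATH; try: cypher-shell < $CQL_FILE"
fi
```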

Related

TestContainers - OracleContainer - how to pass parameters to my sql files by using withCopyFileToContainer

I'm copying my SQL directory, which contains multiple SQL files, into the container; the container will execute them in alphabetical order. But I need to pass a parameter (schema_name) to my SQL files as $1:
oracleContainer.withCopyFileToContainer(MountableFile.forClasspathResource("database/scripts/"), "/container-entrypoint-startdb.d/")
How can I pass a parameter to the container so that it executes the SQL files correctly, like @ddl.sql &1?
Any ideas?
withCopyFileToContainer does nothing more than copy the files to the location you define; it is nearly equivalent to using docker cp. The gvenzl/oracle-xe image also supports executing shell scripts in addition to SQL scripts, so you can use a shell script if you need more sophisticated behavior.
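A sketch of what such a startup shell script could do (all names here are hypothetical): placed in /container-entrypoint-startdb.d/, it can substitute the schema name into a SQL template before the real script would hand it to sqlplus, which a plain .sql file cannot do on its own.

```shell
# Hypothetical startup script (e.g. 00_run_ddl.sh) for /container-entrypoint-startdb.d/.
# Could be parameterized from the test via oracleContainer.withEnv("SCHEMA_NAME", ...).
SCHEMA_NAME="${SCHEMA_NAME:-MYAPP}"

# Stand-in template; in practice this is your copied ddl.sql with &1 placeholders.
printf 'CREATE USER &1 IDENTIFIED BY secret;\n' > /tmp/ddl.sql.tpl

# Resolve the &1 placeholder; the real script would then run the resolved file,
# e.g.: sqlplus -s system/pass@//localhost:1521/XEPDB1 @/tmp/ddl.sql
sed "s/&1/$SCHEMA_NAME/g" /tmp/ddl.sql.tpl > /tmp/ddl.sql
```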

Getting the output of multiple commands and storing it in a single file using PowerShell

I have a .txt file which contains multiple commands. I want to read the file, run those commands line by line, and collect the output of each command in a single file using a PowerShell script. Can somebody help me with writing this script?
The issue was solved using the solution above.
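The referenced solution isn't shown, but the idea can be sketched in POSIX shell; the PowerShell equivalent would be along the lines of Get-Content commands.txt | ForEach-Object { Invoke-Expression $_ >> results.txt }. File names here are illustrative.

```shell
# Stand-in for the .txt command file from the question.
printf 'echo first\necho second\n' > /tmp/commands.txt

# Run each line as a command and append all output to a single results file.
: > /tmp/results.txt
while IFS= read -r cmd; do
    eval "$cmd" >> /tmp/results.txt 2>&1
done < /tmp/commands.txt
```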

Unable to Pass option using getopts to oozie shell Action

I created a shell script and pass arguments to it using getopts, like this:
sh my_code.sh -F"file_name"
where my_code.sh is my Unix script name and file_name is the file I am passing to my script via getopts.
This is working fine when I am invoking my script from the command line.
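The parsing inside my_code.sh presumably looks something like the following; the script itself is not shown, so the variable names are assumptions.

```shell
# Sketch of the getopts parsing the question describes.
parse_args() {
    file_name=""
    while getopts "F:" opt; do
        case "$opt" in
            F) file_name="$OPTARG" ;;
        esac
    done
}

# Matches the command-line invocation: sh my_code.sh -F"file_name"
parse_args -F"input.dat"
echo "file passed in: $file_name"
```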
I want to invoke the same script by using oozie, but I am not sure how can I do it.
I tried passing the argument in both the "exec" and the "file" tag in the XML.
When I tried passing the argument in the exec tag, it gave a NullPointerException.
exec tag:
<exec>my_code.sh -F file_name</exec>
file tag:
<file>$/user/oozie/my_code.sh#$my_code.sh -F file_name</file>
When I tried passing the argument in the file tag, I got the error "No such file or directory"; it was searching for file_name in the /yarn/hadoop directory.
Can anyone please suggest how can I achieve this by using oozie?
You need to create a lib/ folder as part of your workflow; Oozie will upload the script from there as part of its process. This directory should also be uploaded to the oozie.wf.application.path location.
The reason this is required is that Oozie runs on an arbitrary YARN node. Imagine you had a hundred-node cluster: you would otherwise have to ensure that every single server had the /user/oozie/my_code.sh file available (which is of course hard to maintain). When the file is placed on HDFS, every node can download it locally.
So if you put the script in the lib directory next to the workflow XML that you submit, you can reference the script by name directly rather than using the # syntax.
Then you'll want to use the <argument> XML tags for the options:
https://oozie.apache.org/docs/4.3.1/DG_ShellActionExtension.html
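Putting the pieces together, a shell action of this shape should work; the action name, transitions, and schema version below are illustrative, not taken from the question.

```xml
<!-- Sketch of a shell action passing -F file_name via <argument> tags. -->
<action name="run-script">
    <shell xmlns="uri:oozie:shell-action:0.3">
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <exec>my_code.sh</exec>
        <argument>-F</argument>
        <argument>file_name</argument>
        <!-- Resolved from the workflow's lib/ directory; no # alias needed. -->
        <file>my_code.sh</file>
    </shell>
    <ok to="end"/>
    <error to="fail"/>
</action>
```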
I have created the lib/ folder and uploaded it to the oozie.wf.application.path location.
I am now able to pass files to my shell action.

Can a hive script be run from another hive script?

I have created two hive scripts script1.hql and script2.hql.
Is it possible to run the script script2.hql from script1.hql?
I read about using the source command, but could not figure out how to use it.
Any pointers/reference docs will be appreciated.
Use the source <filepath> command:
source /tmp/script2.hql; --inside script1
The docs are here: https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Cli
Hive will include the text of /tmp/script2.hql and execute it in the same context, so all variables defined in the main script will be accessible to script2's commands.
The source command expects a local path (not HDFS), so copy the file to a local directory before executing.
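Concretely, script1.hql might look something like this; the file contents and variable name are illustrative, not from the question.

```sql
-- script1.hql (illustrative)
SET hivevar:src=/tmp/staging;   -- a variable defined here...
source /tmp/script2.hql;        -- ...remains visible to the statements in script2.hql
```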
Alternatively, try running the script from outside the Hive shell:
hive -f /home/user/sample.sql

Saving hive queries

I need to know how to save a query I have written at the command line, just like we do in SQL (Ctrl+S in SQL Server).
I heard Hive QL queries use a .q or .hql extension. Is there any way to save the list of commands I am executing into such a file?
Sure: whatever IDE you use, you can just save your file as myfile.q and then run it from the command line as
hive -f myfile.q
You can also do
hive -f myfile.q > myfileResults.log
if you want to pipe your results into a log file.
Create a new file using the cat command (you can even use an editor) and write all the queries you want to run inside the file:
$cat > MyQueries.hql
query1
query2
.
.
Ctrl+D
Note: the .hql or .q extension is not required; it is just a convention to mark the file as a Hive query file.
You can execute all the queries in the file at once using
$hive -f MyQueries.hql
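The steps above can be sketched as a script; the file path and the queries in it are illustrative, and the hive invocation is guarded since the CLI may not be installed.

```shell
# Create the query file non-interactively (equivalent to the cat > ... session above).
cat > /tmp/MyQueries.hql <<'EOF'
SHOW DATABASES;
SHOW TABLES;
EOF

# Run every query in the file, if the hive CLI is available.
if command -v hive >/dev/null 2>&1; then
    hive -f /tmp/MyQueries.hql
else
    echo "hive not on PATH; query file written to /tmp/MyQueries.hql"
fi
```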
You can use Hue or the Hive web interface instead of the terminal. It provides a UI from which you can write, save, and execute queries, which also solves the saving problem.
http://gethue.com/
https://cwiki.apache.org/confluence/display/Hive/HiveWebInterface
