I have a requirement where I need to run an SQL script containing multiple SELECT statements and then store the results in an XML file using Pentaho. I need to do this from the command line, as a cron job will initiate the execution of the SQL script. I tried the Execute SQL Script step, but with no success. Can anyone please suggest how to do this with Pentaho, or with any other open-source ETL tool? Thanks
I am trying to unload a large Oracle table as a JSON file. Is there a BCP utility similar to what we have in SQL Server?
Thanks
Using SQL*Plus or SQLcl to spool to a file is one option (the methods used to generate CSV work similarly for JSON):
https://blogs.oracle.com/opal/fast-generation-of-csv-and-json-from-oracle-database
https://oracle-base.com/articles/misc/sqlcl-format-query-results-with-the-set-sqlformat-command#json
https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:9536328100346697722
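As a concrete illustration of the spool approach, here is a minimal shell sketch. The connection string, table name (big_table), and output file are placeholders, not from the links above; the SQLcl invocation itself is commented out since it requires SQLcl and a reachable database.

```shell
#!/bin/sh
# Sketch of the SQLcl spool approach: generate a script that formats query
# results as JSON and spools them to a file. Table and connection details
# are placeholders -- adjust them for your environment.
cat > export.sql <<'EOF'
set feedback off
set sqlformat json
spool big_table.json
select * from big_table;
spool off
exit
EOF
# With SQLcl installed, the export would then run as:
#   sql user/password@//dbhost:1521/service @export.sql
```

A cron job could invoke this wrapper directly, so the whole export is driven from the command line.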
You could also use PL/SQL functions and write to a file using UTL_FILE:
https://docs.oracle.com/en/database/oracle/oracle-database/12.2/adjsn/generation.html#GUID-6C3441E8-4F02-4E95-969C-BBCA6BDBBD9A
https://oracle-base.com/articles/9i/generating-csv-files
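For the UTL_FILE route, a sketch of what the generated script might look like is below. The directory object EXPORT_DIR, the table big_table, its columns, and the use of JSON_OBJECT (12.2+) are all assumptions for illustration; the sqlplus call is shown as a comment since it needs a live database.

```shell
#!/bin/sh
# Sketch of the PL/SQL + UTL_FILE option: emit a script that builds one JSON
# document per row (12.2+ JSON_OBJECT) and writes it server-side through
# UTL_FILE. EXPORT_DIR and big_table are assumed names for illustration.
cat > gen_json.sql <<'EOF'
DECLARE
  f UTL_FILE.FILE_TYPE;
BEGIN
  f := UTL_FILE.FOPEN('EXPORT_DIR', 'big_table.json', 'w', 32767);
  FOR r IN (SELECT JSON_OBJECT('id' VALUE id, 'name' VALUE name) AS j
              FROM big_table) LOOP
    UTL_FILE.PUT_LINE(f, r.j);
  END LOOP;
  UTL_FILE.FCLOSE(f);
END;
/
EOF
# Run it server-side with: sqlplus user/password@db @gen_json.sql
```

Note that UTL_FILE writes on the database server's filesystem, not the client's, which is the main difference from the spool approach above.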
Or use ROracle:
https://oralytics.com/2015/05/14/extracting-oracle-data-generating-json-data-file-using-roracle/
There are most likely several other ways, too.
In Oracle and other databases, we have the concept of a PL/SQL package, where we can bundle multiple queries/procedures and call them from a UNIX script. For Hive queries, what is the process used to package and automate query processing in real production environments?
If you are looking to automate the execution of numerous Hive queries, the hive or beeline CLI (think SQL*Plus for Oracle) allows you to pass a file containing one or more commands, such as multiple inserts, selects, and create tables. The contents of that file can be created programmatically using your favorite scripting language, such as Python or shell.
See the "-i" (initialization file) and "-f" (execute a file of commands) options in this documentation: https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Cli
In terms of a procedural language, please see:
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=59690156
HPL/SQL does have a CREATE PACKAGE option, but if whatever you are trying to achieve is scripted outside of HPL/SQL (e.g. Python, shell), you can 'package' your application according to the scripting best practices of your chosen language.
To run multiple queries, simply write them one after another in a file (say 'hivescript.hql'); it can then be run from bash by passing it to beeline or the hive shell:
beeline -u "jdbc:hive2://HOST_NAME:10000/DB" -f hivescript.hql
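Putting it together, a cron-friendly wrapper might look like the sketch below. The statements in the HQL file and the JDBC URL placeholders (HOST_NAME, DB) are illustrative only, and the beeline call is guarded so the script degrades gracefully where beeline is not installed.

```shell
#!/bin/sh
# Sketch: build an HQL file programmatically, then hand it to beeline -f.
# HOST_NAME, DB, and the statements are placeholders for illustration.
cat > hivescript.hql <<'EOF'
CREATE TABLE IF NOT EXISTS staging_events (id INT, payload STRING);
INSERT INTO staging_events VALUES (1, 'demo');
SELECT count(*) FROM staging_events;
EOF
if command -v beeline >/dev/null 2>&1; then
  beeline -u "jdbc:hive2://HOST_NAME:10000/DB" -f hivescript.hql
else
  echo "beeline not on PATH; generated hivescript.hql only" >&2
fi
```

Scheduling this script in cron gives you the Hive equivalent of calling a packaged set of statements from a UNIX script.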
I am using JRuby to connect to Hive, and the connection works. Now I want to create tables, but instead of passing the CREATE statement as a parameter to the execute() method, I want to run a DDL file that holds the table definition.
I cannot simply pass the file contents, because they usually contain more than one statement before the actual table creation (i.e. CREATE DATABASE IF NOT EXISTS, CREATE TABLE IF NOT EXISTS, ...).
Is there a command I can use through my JDBC connection that takes the DDL file and executes it?
To my knowledge, there is no direct way with the JDBC API to do the equivalent of hive -f.
Option 1) Parse your SQL file and write a method that executes the commands sequentially, or use third-party code. Here is one reference: http://www.codeproject.com/Articles/802383/Run-SQL-Script-sql-containing-DDL-DML-SELECT-state
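A minimal version of that parsing idea, sketched in shell: split the DDL file on semicolons and run each statement in turn (echoed here; in JRuby each piece would go to Statement#execute). This naive split breaks on semicolons inside string literals or comments, which is exactly why real parsers, like the one in the linked article, are more involved.

```shell
#!/bin/sh
# Naive sketch of option 1: split a DDL file on ';' and run each statement
# sequentially. A real parser must also handle semicolons inside strings
# and comments; this is illustration only.
cat > schema.ddl <<'EOF'
CREATE DATABASE IF NOT EXISTS demo;
CREATE TABLE IF NOT EXISTS demo.t (id INT);
EOF
# Join lines, split on ';', and "execute" (echo) each non-empty statement.
tr '\n' ' ' < schema.ddl | tr ';' '\n' | while read -r stmt; do
  [ -n "$stmt" ] && echo "executing: $stmt"
done > executed.log
cat executed.log
```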
Option 2) If the client environment where your JRuby code runs also has Hive installed, write a shell script that connects to the remote HiveServer2 over JDBC and runs the SQL with beeline, which makes the remote Thrift calls.
Reference : https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients
I'm developing an application which runs on an Oracle database. I'm now in the process of creating an installation package which will contain some SQL scripts containing the default data that comes with the program.
Some of the tables have BLOB columns which need to contain MS Word documents. I'm not sure how to get these BLOBs into the SQL scripts. I know I could do it through Data Pump, but it is a requirement that all database artifacts be delivered as plain-text SQL files.
Does anyone know how to get these BLOBs into an SQL script which the client can just run?
Thanks!
I solved this problem by creating a PHP script that runs as part of the installation process: it loops through all my Word documents and inserts them into the database. I would still rather have SQL scripts or something similar, but this works for now.
I'm using Oracle 11g and loading data into the database with SQL*Loader, invoked through UNIX scripts. I now want to select some rows of data and write them to a file using a shell script. Is it possible to write a shell script for this?
Here is an excellent tutorial which clearly explains how to execute a query from UNIX.
Ultimately, what it does is log into Oracle (i.e. set up an Oracle session), read the query that needs to be executed, execute it, and perform whatever operation is needed on the result.
Instead of reading the query from the user, we could read it from a file, or even hardcode it in the script as needed.
Blog which explains the usage of Oracle queries in shell scripts
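The pattern described above can be sketched as a small wrapper: write the query and spool settings to a script file, then hand it to SQL*Plus. The credentials (scott/tiger@orcl), table, and columns are placeholders for illustration, and the sqlplus call is guarded so the sketch degrades gracefully where the client is not installed.

```shell
#!/bin/sh
# Sketch: select rows and spool them to a flat file with SQL*Plus from a
# shell script. Credentials, table, and columns are placeholders.
cat > extract.sql <<'EOF'
set pagesize 0 feedback off heading off trimspool on
spool rows_out.txt
select emp_id || '|' || emp_name from employees;
spool off
exit
EOF
if command -v sqlplus >/dev/null 2>&1; then
  sqlplus -s scott/tiger@orcl @extract.sql
else
  echo "sqlplus not on PATH; generated extract.sql only" >&2
fi
```

The `-s` (silent) flag plus the `set` commands suppress banners, headings, and feedback, so rows_out.txt contains only the selected data.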