Load multiple CSV files into different tables using SQL*Loader at once - Oracle

Just started playing around with SQL*Loader.
I have multiple CSV files which I want to load into their respective tables.
I used SQL*Loader and created multiple .ctl files, one for each CSV, and I was able to run them one at a time to load the data into my tables.
But instead of running multiple commands, I want to create a script that will run all of these commands at once. Is there a way to do this in a shell script?
Edit: I will be using Linux (Red Hat 7).
Thanks.

Sure, why not ... sqlldr is an operating system executable, so you can call it from a batch / shell script. What the script looks like depends on the operating system you use (which you didn't mention).
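Since you mention Red Hat 7, a minimal bash sketch might look like the following; the connect string, directory layout, and log/bad file naming are assumptions, and each .ctl file is assumed to name its own CSV via an INFILE clause:

    #!/bin/bash
    # Loop over the control files and run sqlldr once for each.
    # sqlldr exit codes on Linux: 0 = success, 1 = fail, 2 = warning, 3 = fatal.
    for ctl in /home/oracle/loads/*.ctl; do
        base=$(basename "$ctl" .ctl)
        sqlldr userid=scott/tiger@ORCL control="$ctl" log="${base}.log" bad="${base}.bad"
        rc=$?
        if [ "$rc" -ne 0 ] && [ "$rc" -ne 2 ]; then
            echo "Load failed for $ctl (exit code $rc)" >&2
            exit "$rc"
        fi
    done

If the loads are independent, you could also start each sqlldr call in the background with & and wait for them all, but running them serially keeps the log handling simple.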

Related

Packaging or automating execution of Hive queries

In Oracle and other databases, we have the concept of a PL/SQL package, in which we can bundle multiple queries/procedures and call them from inside a UNIX script. In the case of Hive queries, what is the process used to package and automate the query processing in actual production environments?
If you are looking to automate the execution of numerous Hive queries, the hive or beeline CLI (think sqlplus with Oracle) allows you to pass a file containing one or more commands, such as multiple inserts, selects, create tables, etc. The contents of that file can be generated programmatically using your favorite scripting language, like Python or shell.
See the "-i" option in this documentation: https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Cli
In terms of a procedural language, please see:
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=59690156
HPL/SQL does have a Create Package option, but if whatever you are trying to achieve is scripted outside of HPL/SQL (e.g. Python, shell), you can 'package' your application in accordance with the scripting best practices of your chosen language.
To run multiple queries, simply write them one after another in a file (say 'hivescript.hql'); it can then be run from bash by calling it through the beeline or hive shell:
beeline -u "jdbc:hive2://HOST_NAME:10000/DB" -f hivescript.hql
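If it helps, here is a hedged end-to-end sketch from bash; the host, database, and table names are made up:

    #!/bin/sh
    # Write several statements into one .hql file, then run them all
    # in a single beeline session.
    cat > hivescript.hql <<'EOF'
    CREATE TABLE IF NOT EXISTS staging_orders (id INT, amount DOUBLE);
    INSERT INTO TABLE staging_orders SELECT id, amount FROM raw_orders;
    SELECT COUNT(*) FROM staging_orders;
    EOF

    beeline -u "jdbc:hive2://HOST_NAME:10000/DB" -f hivescript.hql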

Extract data from oracle hyperion essbase

How can I extract data from Oracle Hyperion Essbase from the command line, over a VPN client?
There are several options to consider. The fact that it's over a VPN doesn't really matter, but obviously your machine needs to be able to communicate with the Essbase server. The simplest approach would be to use the MaxL interpreter. MaxL is the scripting language that ships with Essbase; it is installed on Essbase servers but would need to be installed locally on your machine in order for you to extract data. The MaxL scripting language would give you the ability to:
1) run an Essbase report script (a scripting language that outputs data in a defined format),
2) run an MDX script (MDX is like Essbase's equivalent of SQL),
3) run a calc script that exports data, or
4) just export all of the data in the cube using the appropriate MaxL command (though this can end up generating very large files).
You will need to do some amount of work to properly define a report script or MDX script and execute it, but the Essbase technical reference will help immensely.
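As a hedged illustration of the last option, a shell wrapper around the MaxL shell (essmsh) might look like this; the server name, credentials, application/database names, and file paths are all assumptions:

    #!/bin/sh
    # Run an inline MaxL script that exports level-0 data from a cube.
    # essmsh is the MaxL shell; it must be installed on this machine.
    essmsh <<'EOF'
    login admin password on essbase-server.example.com;
    spool on to '/tmp/export.log';
    export database Sample.Basic level0 data to data_file '/tmp/sample_basic.txt';
    spool off;
    logout;
    EOF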

Accessing Created date of a CSV file using an Oracle External table

Situation
I have a CSV file called inventory.csv located on an Oracle database server (Windows Server 2008 R2 Enterprise Edition). This CSV file is used as an Oracle external table.
Every hour, a scheduled task (Windows Task Scheduler) executes a .bat file that copies over an updated version of inventory.csv, overwriting the original.
The data is then used by a reporting application.
Problem
The application that uses the data in inventory.csv has no way of knowing when the data was last updated.
Ideally, I'd like the "last updated date" to be accessible as a column in the table.
One possible solution is to trigger logging of the current date/time to a separate file, and then reference that as an external table as well. However, this solution has too many moving parts, and I'd prefer something simpler, if possible.
I know that the CSV file itself knows when it was created...I'm wondering if there is any way for the Oracle external table to read the "Created" date from the CSV file properties?
Or any other ideas?
What version of Oracle?
If you are using 11.2 or later, you can use the preprocessor feature of external tables to run a shell script / batch file on the file before it is loaded. My bias would be to go for simplicity: have the preprocessing script grab the date, store it in a separate file, and have a separate external table that loads and exposes that data. That's likely easier than adding the date to every row.
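As a sketch of that idea (all names and paths are assumptions, and since the server in the question is Windows, the real thing would be a .bat equivalent): Oracle invokes the preprocessor with the data file's path as its first argument and reads the table's data from the script's standard output, so the script can record the timestamp as a side effect:

    #!/bin/sh
    # Hypothetical preprocessor attached to the inventory external table.
    # Use full paths to executables; the script runs without your login environment.
    DATA_FILE="$1"
    STAMP_FILE=/oracle/ext_data/last_updated.csv

    # Record the CSV's last-modified time for a second external table to expose
    /usr/bin/stat -c '%y' "$DATA_FILE" > "$STAMP_FILE"

    # Emit the CSV unchanged so the inventory table loads as before
    /bin/cat "$DATA_FILE"

A second external table pointed at last_updated.csv then gives the reporting application a one-row "last updated" value.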

Scheduling Oracle SQL files using a Unix-based SAS environment

I have a bunch of SQL queries that run against an Oracle database. Is there a way to schedule these .sql files using UNIX-based SAS, so they execute one after another at a certain time of day?
If they are .sql files, why do you want to schedule them using SAS? Are they SAS programs? If not, I would do one of three things, depending on my constraints:
1) Convert the .sql files to stored procedures and call them from DBMS_SCHEDULER within Oracle, since Oracle has a fantastic job scheduling subsystem (actually multiple variants) that protects against duplicate jobs among other issues, and you get transactional control, auditing and logging. http://docs.oracle.com/cd/B19306_01/appdev.102/b14258/d_sched.htm
2) If converting them to stored procs is too much, then call the .sql scripts directly from DBMS_SCHEDULER with DBMS_SCHEDULER.CREATE_PROGRAM() and then schedule that program with DBMS_SCHEDULER.CREATE_JOB.
3) Use cron or atrun to schedule batch / shell script wrappers that call sqlplus to run .sql files.
If the question is specifically how to do this with SAS, then DBMS_SCHEDULER can still execute external SAS programs using option (2) above.
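For option 3, here is a minimal sketch; the Oracle home, paths, and credentials are assumptions, and the matching crontab entry is shown in the comments:

    #!/bin/sh
    # run_sql.sh -- run each .sql file in order; stop on the first failure.
    # crontab entry to run daily at 02:00:
    #   0 2 * * * /home/oracle/scripts/run_sql.sh >> /home/oracle/logs/run_sql.log 2>&1
    export ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1
    export PATH="$ORACLE_HOME/bin:$PATH"

    for f in /home/oracle/sql/*.sql; do
        sqlplus -s scott/tiger@ORCL <<EOF
    WHENEVER SQLERROR EXIT SQL.SQLCODE
    @$f
    EOF
        rc=$?
        if [ "$rc" -ne 0 ]; then
            echo "Failed: $f (exit $rc)" >&2
            exit "$rc"
        fi
    done

The WHENEVER SQLERROR directive makes sqlplus return a nonzero exit code on a failed statement, so the wrapper stops instead of silently running the remaining files.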

Getting output in flat file using oracle on UNIX

How do you get the output of a query into a flat file using Oracle on UNIX?
For example:
I have a TEST table; I want to get the content of the TEST table into a flat file and then store the output in some other folder in .txt format.
See Creating a Flat File in the SQL*Plus User's Guide and Reference.
In the Oracle SQL*Plus terminal you could type
spool <filename>;
run your query
spool off;
Now <filename> would contain the results of the query.
In fact, it would contain all the output sent to the terminal from the execution of the spool command until spool off.
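To make that concrete from a UNIX shell, here is a hedged sketch; the credentials, connect string, and output path are assumptions:

    #!/bin/sh
    # Spool the TEST table to a .txt file in another folder via sqlplus.
    sqlplus -s scott/tiger@ORCL <<'EOF'
    SET PAGESIZE 0 LINESIZE 32767 FEEDBACK OFF TRIMSPOOL ON
    SPOOL /some/other/folder/test_output.txt
    SELECT * FROM test;
    SPOOL OFF
    EXIT
    EOF

The SET options suppress headers and feedback and trim trailing blanks, so the spooled file contains just the query results.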
If you have access to directories on the database server, and authority to create "Directory" objects in Oracle, then you have lots of options.
For example, you can use the UTL_FILE package (part of the PL/SQL built-ins) to read or write files at the operating system level.
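For instance, a hedged sketch of UTL_FILE driven from a shell session; the DATA_DIR directory object, credentials, and column name are assumptions:

    sqlplus -s scott/tiger@ORCL <<'EOF'
    -- Write each row of TEST to test.txt via the DATA_DIR directory object
    DECLARE
      f UTL_FILE.FILE_TYPE;
    BEGIN
      f := UTL_FILE.FOPEN('DATA_DIR', 'test.txt', 'w');
      FOR r IN (SELECT col1 FROM test) LOOP
        UTL_FILE.PUT_LINE(f, r.col1);
      END LOOP;
      UTL_FILE.FCLOSE(f);
    END;
    /
    EXIT
    EOF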
Or use the "external table" functionality to define objects that look like single tables to Oracle but are actually flat files at the OS level. Well documented in the Oracle docs.
Also, for one-time tasks, most of the tools for working with SQL and PL/SQL provide facilities for moving data to and from the database. In the Windows environment, Toad is good at that. So is Oracle's free SQL Developer, which runs on many platforms. You wouldn't want to use those for a process that runs every day, but they're fine for single moves. I've generally found these easier to use than SQL*Plus spooling, which is a primitive version of the same functionality.
