Writing to an Oracle log file from a Unix shell script? - oracle

I have an Oracle concurrent program that calls a UNIX shell script, which in turn runs SQL*Loader. This is used to load flat files from a legacy system into Oracle base tables.
My question here is: how do I capture my custom messages, validation error messages, and so on in the concurrent program's Oracle log file?
Any help in this regard is much appreciated.

It looks like you are trying to launch SQL*Loader from Oracle Apps. The simplest way would be to use the SQL*Loader executable type; that way you will get the output and log files right in the concurrent requests window.
If you want to write to the log file and the output file from a Unix script, you can find their paths in the FND_CONCURRENT_REQUESTS table (columns LOGFILE_NAME and OUTFILE_NAME). You should have the REQUEST_ID passed as a parameter to your script.
These files should be in $XX_TOP/log and should be called l{REQUEST_ID}.req and o{REQUEST_ID}.out (Apps 11.5.10).
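For example, a minimal sketch, assuming the request id is passed as the first argument to the script and that the credentials shown are placeholders:

#!/bin/sh
# Sketch only: look up the concurrent request's log file and append custom messages to it.
REQUEST_ID=$1   # request id passed in by the concurrent manager

LOGFILE=$(sqlplus -s apps/apps_password <<EOF
set heading off feedback off pagesize 0
select logfile_name from fnd_concurrent_requests where request_id = ${REQUEST_ID};
exit
EOF
)

echo "Custom message: starting SQL*Loader run" >> "$LOGFILE"
# ... run sqlldr and your validations here, appending messages as you go ...
echo "Validation errors: none" >> "$LOGFILE"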

How is your concurrent program defined? If it uses the "Host" execution method, then the script's output should go into the concurrent request log file. If it's being executed from a stored procedure, I'm not sure where it goes.
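If it is a Host program, even a plain echo is usually enough, since standard output is captured into the request log. A minimal sketch, with placeholder file names:

#!/bin/sh
# With the Host execution method, anything written to stdout ends up in the request log.
echo "Starting legacy flat file load..."
sqlldr userid="$1" control=legacy_load.ctl data=/tmp/legacy.dat
RC=$?
echo "SQL*Loader finished with exit code $RC"
exit $RC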

Have your script use SQL*Plus to sign into Oracle and insert/update the information you need.
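For example, a sketch that writes a message into a custom log table (the table XX_LOAD_LOG and the credentials are placeholders):

#!/bin/sh
# Sketch only: record a custom message in your own logging table via SQL*Plus.
sqlplus -s apps/apps_password <<EOF
insert into xx_load_log (request_id, message, created_on)
values (${1}, 'Validation failed for 3 records', sysdate);
commit;
exit
EOF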

Related

Append DataSets (.ds) using UNIX

I'm currently working on IBM DataStage and here's my problem:
I have to take n datasets that will be in a folder and append them into one Data Set (.ds).
Since I don't know how many datasets I will have, nor their full names, I can't use a DataStage job to deal with them. All I know is that they will have the same metadata (because they are generated by the same job).
I think I have to use a shell command to append them, but I'm not a UNIX guy.
Thank you to everyone who has read this far.
You can use the same job. Specify Append mode (rather than Overwrite) for the target Data Set; each time you run the job, data will be added to the same Data Set. Be careful not to inadvertently create duplicates by processing the same source data twice. Use parameters to specify the source.

How to test Hive CRUD queries from Shell scripting

I am creating a shell script which should execute basic Hive queries and assert the results against expected values.
Where should I start with the shell scripting?
Thanks in advance.
I have found an answer.
One thing we can do is create an .hql file containing the basic queries to be tested, and trigger that .hql file through Beeline (which is what I was using) from a bash script.
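A minimal sketch of that approach (the JDBC URL, file name, and expected value are placeholders):

#!/bin/bash
# queries.hql holds the Hive statements under test, e.g. a single SELECT COUNT(*).
ACTUAL=$(beeline -u "jdbc:hive2://hiveserver:10000/default" \
    --silent=true --showHeader=false --outputformat=csv2 \
    -f queries.hql)

EXPECTED="42"
if [ "$ACTUAL" = "$EXPECTED" ]; then
    echo "PASS: got $ACTUAL"
else
    echo "FAIL: expected $EXPECTED, got $ACTUAL"
    exit 1
fi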

Writing autosys job information to Oracle DB

Here's my situation: we have no access to the autosys server other than using the autorep command. We need to keep detailed statistics on each of our jobs. I have written some Oracle database tables that will store start/end times, exit codes, JIL, etc.
What I need to know is: what is the easiest way to output the data we require (which is all available in the autosys tables that we do not have access to) to an Oracle database?
Here are the technical details of our system:
autosys version - I cannot figure out how to get this information
Oracle version - 11g
We have two separate environments - one for UAT/QA/IT and several PROD servers
Do something like the below:
Create a table with the columns you want to populate. Add a key column that is auto-generated. The JIL column should be able to handle large amounts of data (e.g. a CLOB). Also add a column for SYSDATE.
Create a shell script. Inside it, do as follows:
Run "autorep -j <job_pattern> -l0" to get all the jobs you want and put them in a file. The -l0 option is there to avoid duplicates: if a box contains a job, then without -l0 you will get the job twice.
Create a loop and read all the job names one by one.
In the loop, set variables for job name/start time/end time/status (all of which you can get from autorep -j <job_name>). Then use a variable to hold the JIL, obtained from autorep -q -j <job_name>.
Append all these variable values to a flat file.
End the loop. After exiting the loop you will end up with a file containing all the job details.
Then use SQL*Loader to put the data into your Oracle table. You can hard-code a control file and reuse it for every run; only the contents of the data file will change from run to run (see the sketch below).
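Putting those steps together, a rough sketch, assuming placeholder job patterns, file paths, and credentials (the awk field positions also depend on your autorep output):

#!/bin/sh
DATAFILE=/tmp/autosys_stats.dat
> "$DATAFILE"

# -l0 keeps each job listed only once, even when it sits inside a box.
autorep -j APP_% -l0 | awk 'NR > 3 {print $1}' > /tmp/job_list.txt

while read JOB; do
    # Pull last start, last end and status from the job's summary line.
    LINE=$(autorep -j "$JOB" | awk 'NR > 3')
    START=$(echo "$LINE"  | awk '{print $2, $3}')
    END=$(echo "$LINE"    | awk '{print $4, $5}')
    STATUS=$(echo "$LINE" | awk '{print $6}')

    # The JIL can span many lines, so flatten it before writing one record per job.
    JIL=$(autorep -q -j "$JOB" | tr '\n' ' ')

    echo "${JOB}|${START}|${END}|${STATUS}|${JIL}" >> "$DATAFILE"
done < /tmp/job_list.txt

# Fixed control file, data file regenerated on every run.
sqlldr userid=scott/tiger control=autosys_stats.ctl data="$DATAFILE" log=autosys_stats.log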
Let me know if any part is not clear.

Running SQLLDR in DataStage

I was wondering, for folks familiar with DataStage, whether Oracle SQLLDR can be used in DataStage. I have some sets of control files that I would like to incorporate into DataStage. A step-by-step way of accomplishing this would be greatly appreciated. Thanks
My guess is that you can run it with an external stage in DataStage.
You simply put the SQLLDR command in the external stage and it will be executed.
Try it and tell me what happens.
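For illustration, the kind of command you would drop into such a stage (or into an ExecSH before/after-job routine) might look like this; the connection string, control file, and paths are placeholders:

sqlldr userid=scott/tiger@ORCL \
    control=/data/ctl/customers.ctl \
    data=/data/in/customers.dat \
    log=/data/log/customers.log \
    bad=/data/log/customers.bad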
We can use Oracle SQL*Loader in DataStage.
If you check the Oracle docs, there are two load paths under SQL*Loader:
1) Direct Path Load - less validation on the database side
2) Conventional Path Load
There is less validation in a Direct Path Load compared to a Conventional Path Load.
In the SQL*Loader process we have to specify things like the following (a sketch follows this list):
Direct path or not
Parallel or not
Constraint and index options
Control, discard, and log files
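As a rough illustration of those choices, here is a sketch that writes a control file and runs a direct, parallel load (table, columns, delimiter, and paths are placeholders):

#!/bin/sh
# Write a minimal control file (placeholder table and columns).
cat > customers.ctl <<'EOF'
LOAD DATA
INFILE '/data/in/customers.dat'
APPEND
INTO TABLE customers
FIELDS TERMINATED BY '|'
(cust_id, cust_name, created_dt DATE 'YYYY-MM-DD')
EOF

# Direct path, parallel load; log, bad, and discard files made explicit.
sqlldr userid=scott/tiger control=customers.ctl \
    direct=true parallel=true \
    log=customers.log bad=customers.bad discard=customers.dsc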
In DataStage, we have the Oracle Enterprise and Oracle Connector stages.
Oracle Enterprise -
This stage has a Load option for loading data in fast (bulk) mode, and we can set the SQL*Loader OPTIONS string for Oracle through an environment variable; an example is below:
OPTIONS(DIRECT=FALSE,PARALLEL=TRUE)
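If I remember correctly, the environment variable in question is APT_ORACLE_LOAD_OPTIONS (treat the name as an assumption and check it against your DataStage version), e.g.:

# Assumed variable name; usually set at project or job level in the Administrator.
export APT_ORACLE_LOAD_OPTIONS='OPTIONS(DIRECT=TRUE, PARALLEL=TRUE)'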
Oracle Connector -
It has a Bulk load option, and other SQL*Loader-related properties are available in the properties tab.
For example, the control and discard file values are all set by DataStage by default, but you can set these and other properties manually.
As you know, SQLLDR basically loads data from files into the database, and DataStage lets you do the same thing: take the input from any data file (for example through a Sequential File stage), give it the format and the schema of the table, and DataStage will create an in-memory template of the table; then you can use a database connector (ODBC, DB2, etc.) to load the data into your table. Simple as that.
NOTE: if your table does not already exist at the backend, then for the first execution set the connector to create the table, and afterwards set it to append or truncate.
Steps:
Read the data from the file (Sequential File stage).
Load it using the Oracle Connector. (You could use the Bulk load option so that a direct load via SQL*Loader is used; the data file and control file settings can be configured manually.) Bulk load operation: it receives records from the input link and passes them to the Oracle database, which formats them into blocks and appends the blocks to the target table, as opposed to storing them in the available free space in existing blocks.
You can refer to the IBM documentation for more details.
Remember, there might be some restrictions around handling rejects, triggers, or constraints when you use bulk load. It all depends on your requirements.

How to pump data to txt file using Oracle datapump?

I'm hoping you can help.
I need to export a huge table (900 columns, 1,000,000 rows) into an ANSI text file.
UTL_FILE takes a lot of time; it is not suitable for this task.
I'm trying to use Oracle Data Pump, but I can't get a text file with readable ANSI characters in it (only garbage such as 2TTЁ©QRўҐEJЉ•).
Can anybody advise me on anything?
Thank you in advance.
Oracle Data Pump can only export in its proprietary binary format.
If you want to export data to text you have only a few options:
A PL/SQL or Java (stored) procedure which writes a file using UTL_FILE or the Java equivalent API.
A program running outside the database that writes to a file. Use whichever language you're comfortable with.
Pro*C might be a good choice as it is apparently much faster than the UTL_FILE approach; see http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:459020243348
Use a special SQL script and run it in SQL*Plus using spooling. This is the "SQL Unloader" approach, see http://www.orafaq.com/wiki/SQL*Loader_FAQ#Is_there_a_SQL.2AUnloader_to_download_data_to_a_flat_file.3F
Googling "SQL Unloader" comes up with a few ready-made solutions that you might be able to use directly or modify for your needs.
