Oracle OCI API RMAN Commands

When I try to execute the "delete noprompt archivelog until time 'SYSDATE-1';" command using the "OCIStmtExecute" function,
I get an error: "ORA-00933: SQL command not properly ended"
Is this the correct command to delete archive logs? Is it possible to execute RMAN commands using the OCI API? Are there any OCI functions to delete archive logs (log truncation)?

Related

How to use an "alter system.." SQL statement in AWS RDS Oracle?

"ALTER SYSTEM FLUSH BUFFER_CACHE;"
I used this command to flush the data buffer cache of an Oracle DB.
But then this error occurred:
SQL Error [1031] [42000]: ORA-01031
So I used another command:
"EXEC rdsadmin.rdsadmin_util.flush_buffer_cache;"
The following error occurred:
SQL Error [900] [42000]: ORA-00900
According to the documentation, the command "exec admin.admin_util.flush_buffer_cache;" can be used, but can you tell me why it doesn't work?
And is there any other way to do flush_buffer_cache?
The first error indicates that your user doesn't have privileges to use alter system.
The second error indicates your command is incorrect. You seem to have lost the "rds" from "rdsadmin". Have you tried this:
EXEC rdsadmin.rdsadmin_util.flush_buffer_cache;
or
begin
rdsadmin.rdsadmin_util.flush_buffer_cache;
end;
/
Per the AWS documentation, here: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.Oracle.CommonDBATasks.System.html

How to use Big SQL commands to automate synchronization with Hive through a shell script?

I have written a small shell script to automate Big SQL and Hive synchronization.
The code is as below:
echo "Login to BigSql"
<path to>/jsqsh bigsql --user=abc --password=pwd
echo "login succesfull"
echo "Syncing hive table <tbl_name> to Big SQL"
call syshadoop.hcat_sync_objects('DB_name','tbl_name','a','REPLACE','CONTINUE');
echo "Syncing hive table TRAINING_TRACKER to Big SQL Successfully"
Unfortunately, I am getting the message:
Login to BigSql
Welcome to JSqsh 4.8
Type "\help" for help topics. Using JLine.
And then it enters the Big SQL command prompt. Now when I type "quit" and hit enter, it gives me the following messages:
login succesful
Syncing hive table <tbl_name> to Big SQL
./script.sh: line 10: call syshadoop.hcat_sync_objects(DB_name,tbl_name,a,REPLACE,CONTINUE): command not found
What am I doing wrong?
You need to redirect your SQL commands into the jsqsh command as input. E.g. see this example:
You can start JSqsh and run the script at the same time with this command:
/usr/ibmpacks/common-utils/current/jsqsh/bin/jsqsh bigsql < /home/bigsql/mySQL.sql
from here https://www.ibm.com/support/knowledgecenter/en/SSCRJT_5.0.2/com.ibm.swg.im.bigsql.doc/doc/bsql_jsqsh.html
There already is an auto-hcat sync job in Big SQL that does exactly what you're trying to do.
Check whether the job is running:
su - bigsql (or whatever instance owner)
db2 connect to bigsql
db2 "select NAME, BEGIN_TIME, END_TIME, INVOCATION, STATUS from
SYSTOOLS.ADMIN_TASK_STATUS where BEGIN_TIME > (CURRENT TIMESTAMP - 60 minutes)
and name ='Synchronise MetaData Changes from Hive' "
If you don't see any output, simply enable it through Ambari:
Enable Automatic Metadata Sync

Calling an Oracle stored procedure in Informatica Cloud postprocessing command

I have an Informatica data synchronization task that creates a table in Oracle. I am trying to include a call to an Oracle stored procedure in the postprocessing command of Informatica Cloud that will update a variety of tables at the completion of the task. The procedure that I am trying to call is in the same schema as the target of the synchronization task. The procedure runs correctly when I run it directly in Oracle SQL Developer, but I can't get it to run via Informatica Cloud. I know I'm not using the right syntax to make the call, but these are some examples that I have tried so far:
BEGIN
(PROCEDURE_NAME)
END;
CALL(PROCEDURE_NAME);
EXEC(PROCEDURE_NAME);
PROCEDURE_NAME;
Would designing a mapping in Informatica Cloud help me with this? Or is there a prefix that I should be appending to the stored procedure call, even though the procedure is in the same schema as the target of the task?
None of these options will work in a Synchronisation task. Call (procedure_name); will work in Mappings, which is the easiest way to do it.
In a Synchronisation task you need to create a batch file to run a SQL script. The script has to contain the connection details.
Example:
connect user/password#
exec (procedure_name);
disconnect;
exit SQL.SQLCODE
exit SQL.SQLCODE is used so that SQL*Plus commits any pending transactions on exit and returns the status of the last SQL statement to the caller.
After that, it will be necessary to use a .bat file to run the above SQL script using the sqlplus client.
Example:
cd \bin
call cmd.exe /C "sqlplus /nolog < {path to sql script created in step 2}\filename.sql"
exit /b 0;

Executing an SQL script via an OCI8 connection object

I have a locally running Oracle thin client and have successfully created a Ruby script that connects to a remote Oracle database. I can successfully make a call (select * from table_name) to the database to get the contents of a table:
begin
con = OCI8.new('<user>', '<password>', '<host>:<port>/XE')
con.exec('select name from actor') do |records|
puts records
end
rescue OCIError
puts "Database Connection Error"
end
I also want to run an sql script that resides in the oracle directory on the remote host.
Usually I perform the following:
su - oracle
sqlplus <user>/<password>
SQL> @<script_name>
and this will run the script
In the ruby script I try the following:
con.exec('@<script_name>')
Yet, I get the following error:
stmt.c:230:in oci8lib_200.bundle: ORA-00900: invalid SQL statement (OCIError)
@<script_name> is a sqlplus command.
When sqlplus finds @<script_name>, it opens <script_name>, divides its contents into SQL statements and executes them.
If you want to run the SQL statements in a script from Ruby, you need to write code which opens the script, divides its contents and passes the SQL statements to con.exec one by one.
I also want to run an sql script that resides in the oracle directory on the remote host. Usually I perform the following:
No, it can't: sqlplus reads the SQL script from the local host, not the remote one.
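A minimal sketch of that approach, assuming the script contains plain SQL statements separated by semicolons (no PL/SQL blocks, no string literals containing ';', and no SQL*Plus-only directives such as SET or SPOOL, all of which would need smarter parsing). The connection placeholders are the ones from the question:

```ruby
# Run a local SQL script through ruby-oci8 by splitting it into
# individual statements: con.exec takes one statement at a time,
# without a trailing semicolon.
begin
  require 'oci8' # ruby-oci8 gem; the connection part is skipped if absent
rescue LoadError
end

# Naive splitter: breaks the script on semicolons, trims whitespace,
# and drops empty fragments.
def split_sql_statements(text)
  text.split(';').map(&:strip).reject(&:empty?)
end

if defined?(OCI8)
  con = OCI8.new('<user>', '<password>', '<host>:<port>/XE')
  script = File.read('<script_name>')
  split_sql_statements(script).each { |stmt| con.exec(stmt) }
  con.logoff
end
```

This only covers simple scripts; anything that relies on sqlplus behaviour (substitution variables, @nested scripts) still has to be handled separately.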
You can put this script in a function (function_name) and execute
con.exec("select function_name from dual")

Does SQLPlus exit after running a script?

We are using CA Workload Control Center (Autosys) to run a windows batch file which calls another batch file which calls SQLPlus with the following:
sqlplus username/password %1 %2 %3
Since this is all being done remotely I cannot see what is actually happening on the server.
I am wondering: if the SQL file exists but has invalid syntax (like referring to a non-existent table), will the sqlplus prompt exit and return to the batch file?
The reason I ask is that the log file from Autosys shows that the SQLPlus call was made and prints out ERROR AT LINE 2: Invalid Table, but then it does not show any further activity from the batch script, even though there are multiple echoes, file copies, etc. after that point. It seems as though it is not exiting SQLPlus, perhaps?
Is there a parameter I need to pass to SQLPlus to tell it to exit and return to the calling script if the SQL script it runs fails?
Edit: We are using WHENEVER SQLERROR inside our SQL files as well, and the log file does show this:
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
But I still expect it to continue with the rest of the batch script, and it's not; the above is the last thing Autosys shows in the log.
See the SQL*Plus directive WHENEVER SQLERROR; it is described in the Oracle docs:
http://docs.oracle.com/cd/B19306_01/server.102/b14357/ch12052.htm
Found the solution in this SO post:
You can do sqlplus -l test/Test@mydatabase @script.sql; the -l flag means it will only try to connect once, and if it fails for any reason it will exit instead of prompting. Look at the output of sqlplus -?, or see the documentation.
