Is there any "best practice" on how to call a stored procedure from APEX instead of just using a simple DB-Link?
Calling PL/SQL Stored Procedures Within Oracle APEX
There's a great deal of detail involved in developing robust, maintainable and efficient solutions by pairing PL/SQL stored procs with the Application Express web-based framework. Not knowing the current skill level of the OP, I assume that the most helpful explanation is a simple example developed from scratch.
Getting Started: The APEX environment has a lot of tools that can get you started. If you don't already have an Oracle database environment with APEX installed, consider signing up for a free, hosted trial account through the APEX home page on Oracle.com.
Application Design Decisions
PL/SQL stored procedures are not necessary for developing applications in Oracle APEX; however, they do provide greater flexibility and customization during the development process.
This example will use the popular EMP table, typically available as an optional part of each Oracle database installation. In case you don't have it, here is the DDL source for building the example table (sample data follows a little further down):
The EMP Table (A Copy Aliased as: LAB01_SAMPLE_EMP)
CREATE TABLE "EMP"
( "EMPNO" NUMBER(4,0) NOT NULL ENABLE,
"ENAME" VARCHAR2(10),
"JOB" VARCHAR2(9),
"MGR" NUMBER(4,0),
"HIREDATE" DATE,
"SAL" NUMBER(7,2),
"COMM" NUMBER(7,2),
"DEPTNO" NUMBER(2,0),
PRIMARY KEY ("EMPNO") ENABLE
)
/
ALTER TABLE "EMP" ADD FOREIGN KEY ("MGR")
REFERENCES "EMP" ("EMPNO") ENABLE
/
If you would like some test data, the classic EMP demo rows are what I had to work with.
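A few representative INSERTs are sketched below; they are the classic Oracle demo rows, targeted at the LAB01_SAMPLE_EMP copy used throughout this example (adjust the table name if you built it as EMP, and the date format to suit your environment):
INSERT INTO LAB01_SAMPLE_EMP (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO)
  VALUES (7839, 'KING', 'PRESIDENT', NULL, TO_DATE('1981-11-17','YYYY-MM-DD'), 5000, NULL, 10);
INSERT INTO LAB01_SAMPLE_EMP (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO)
  VALUES (7698, 'BLAKE', 'MANAGER', 7839, TO_DATE('1981-05-01','YYYY-MM-DD'), 2850, NULL, 30);
INSERT INTO LAB01_SAMPLE_EMP (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO)
  VALUES (7782, 'CLARK', 'MANAGER', 7839, TO_DATE('1981-06-09','YYYY-MM-DD'), 2450, NULL, 10);
INSERT INTO LAB01_SAMPLE_EMP (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO)
  VALUES (7499, 'ALLEN', 'SALESMAN', 7698, TO_DATE('1981-02-20','YYYY-MM-DD'), 1600, 300, 30);
COMMIT;
With the table and some data in place, the overall plan for the example is: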
BUILD a SQL-based DML update statement that will change the SAL (Salary) value for one employee at a time based on their ENAME.
WRAP the UPDATE DML statement into a reusable, compiled PL/SQL stored procedure. Include parameter arguments for the two data input values for "Name" and "Amount of Increase".
TEST the stored procedure and verify it works as required.
DESIGN and CODE an APEX Application Page which will:
(a) Show the current contents of the employee entity table.
(b) Accept input values to pass into the PL/SQL Stored Procedure.
(c) Utilize native APEX Error and Success message settings to provide more feedback on the outcome of the procedure call.
TEST the APEX page and verify it works as specified.
REVIEW the discussion and conclusions section at the end for additional comments and more tips to keep you moving on your own path to developing Oracle skills.
Programming a SQL DML Process
This is the initial example created to accomplish this task.
UPDATE LAB01_SAMPLE_EMP
SET SAL = SAL + 1250
WHERE ENAME = 'KING';
A quick revision shows how we can parameterize the approach so the statement can be reused:
UPDATE LAB01_SAMPLE_EMP
SET SAL = SAL + p_salary_increase
WHERE ENAME = p_ename;
If you are not sure where to go next, this is where a lesson on "best practices" is available thanks to APEX. Navigate to the OBJECT BROWSER and CREATE a Procedure Object. The application will walk you through every step of setting up a PL/SQL stored proc.
The Working PL/SQL Procedure Source Code:
After walking through the wizard setup, this is the cleaned-up stored procedure. There were some additional changes after debugging a few compile-time errors and warnings:
create or replace procedure "PROC_UPDATE_SALARY"
(p_ename IN VARCHAR2, p_salary_increase IN VARCHAR2)
is
v_new_salary lab01_sample_emp.sal%TYPE;
begin
UPDATE LAB01_SAMPLE_EMP
SET SAL = SAL + p_salary_increase
WHERE ENAME = p_ename
RETURNING SAL INTO v_new_salary;
commit;
dbms_output.put_line('INCREASED SALARY FOR: ' || p_ename ||
' TO THE NEW AMOUNT OF: $ ' || to_char(v_new_salary));
end;
Best Practices, a Quick Aside and Discussion: You just have to keep doing it... coding, that is. There is just no way around it. Look for examples at work or in life and try your hand at developing schemas and designs to satisfy made-up but realistic requirements. For beginning developers, the PL/SQL stored procedure above may already show some unfamiliar or odd coding syntax and commands.
That is only the tip of what is possible out there. Coding style is also only a part of it; as you get deeper into things, you may notice a few things:
ORGANIZATION is important. Quickly learn some conventions for naming and notation to use in your code. This will keep things organized and easy to find or reference elsewhere.
RECYCLE and REUSE means your code should be developed with reuse in mind. Common routines and processes should be bundled together to avoid redundancy.
ALWAYS TEST your work. You will run into far less frustration when the initial, fundamental steps of your process or application have been carefully tested before you build on them.
TESTING the Oracle PL/SQL Procedure
I used the built-in scripting tools found in the SQL WORKSHOP section of the environment to run a quick test of the procedure and check its output.
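A minimal sketch of that kind of test script, assuming the LAB01_SAMPLE_EMP copy and the KING row from the sample data above (the SET line applies when running from SQL*Plus or SQL Developer):
set serveroutput on

-- Check the starting salary.
select ename, sal from lab01_sample_emp where ename = 'KING';

-- Give KING a 1250 raise through the procedure.
begin
  proc_update_salary(p_ename => 'KING', p_salary_increase => 1250);
end;
/

-- Confirm the new salary was applied.
select ename, sal from lab01_sample_emp where ename = 'KING';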
Bringing it Together: Designing an APEX Application Page with a Procedure Call
Create or open up an APEX Application and start out by making a new page. There are system wizard processes that will help you get started if you haven't done this before.
Select the option to BUILD a FORM on top of a STORED PROCEDURE. There will be prompts for permission to build the page items needed for input parameters. Follow the wizard to completion to make sure all the dependent page design elements are included.
My near-finalized design added a few extras, such as a REPORT region to provide immediate visibility into the table and any changes applied to its data.
The Form in Action: Testing Data Input and Results
The SUCCESS alert message is a feature available for certain page elements to inform the user of any significant events conducted on the database.
Closing Comments and Discussion
The immediate answer to the OP's question is YES, there are best practices. It is such a huge subject that the only realistic way of handling it is by walking through different examples to see the ways these practices are "commonly" applied.
There were a few shortcuts involved in this solution (based on several assumptions), and it is worth bringing them up as a parting discussion; they are good candidates for revisiting this walk-through as an improved, EMP-2.0 project.
The procedure works on the EMP table based on searches by ENAME. What is the REAL key of this table? Are there any risks involved with this approach, possibly with respect to larger data sets?
Most Oracle PL/SQL objects used in production-ready environments have some level of error trapping or exception handling through a PL/SQL EXCEPTION block. What kinds of errors would you want to trap? How? (A sketch follows this list.)
Don't underestimate the resources available within the APEX tool. There are lots of wizards that walk developers through the process of creating different, functioning modules of code. These automated guides produce solutions that can also be reverse-engineered to understand how the code was generated and what general design approaches make for sound patterns.
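Picking up on the exception-handling bullet above, here is a minimal sketch of one way the procedure could grow an EXCEPTION block; the specific checks (rejecting a zero-row update, re-raising anything unexpected) are illustrative choices, not the only reasonable ones:
create or replace procedure proc_update_salary
  (p_ename IN VARCHAR2, p_salary_increase IN NUMBER)
is
  v_new_salary lab01_sample_emp.sal%TYPE;
begin
  UPDATE LAB01_SAMPLE_EMP
     SET SAL = SAL + p_salary_increase
   WHERE ENAME = p_ename
  RETURNING SAL INTO v_new_salary;

  -- Treat "no matching employee" as an error instead of silently doing nothing.
  if sql%rowcount = 0 then
     raise_application_error(-20001, 'No employee found with ENAME = ' || p_ename);
  end if;

  commit;

  dbms_output.put_line('INCREASED SALARY FOR: ' || p_ename ||
    ' TO THE NEW AMOUNT OF: $ ' || to_char(v_new_salary));
exception
  when others then
     rollback;
     raise;  -- re-raise so APEX can surface the failure to the user
end;
/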
Related
I have several Oracle databases where my in-house applications are running. Those applications use both dba_jobs and dba_scheduler_jobs.
I want to write a monitoring function, check_my_jobs, which will be called periodically by Nagios to check whether everything is OK with my jobs. (Are they running? Are any broken? Is next_run_date delayed? And so on.)
Solutions: Since I have to monitor jobs on different databases, there are two ways of implementing a solution:
Create a monitoring function and configuration tables only on one database which will check jobs on every database using database links.
pros: Centralized functionality, easy to maintain.
cons: I have to do the checks using database links.
Create a monitoring function and configuration tables on every database where I want to check jobs.
pros: I don't have to use DB links
cons: Duplicated monitoring code on every database
Which solution is better?
I'd go with option #1 - centralized functionality that uses database links.
Database links have an undeserved bad reputation. One of the main reasons is that too many people use public database links, where anyone connecting to the database can use the link. That's obviously a security nightmare, but it's not the default setting and it's easy to avoid that trap.
Some other issues with database links:
They don't perform well for huge inserts of millions of rows. On the other hand they're great at many small SELECTs or INSERTs. I frequently have hundreds of links open and fetching data concurrently, on 10 year-old hardware, and it works great.
They make execution plans more difficult to troubleshoot.
Not all data types are natively supported. This is better in 12.2, but in earlier versions you will need to use an INSERT to move data types like CLOB into tables, and then read from those tables (a sketch follows the code example below).
For DDL you'll need to use DBMS_UTILITY.EXEC_DDL_STATEMENT@LINK_NAME('create ...'); Make sure to only use DDL in there. Other types of commands will silently fail.
Links may hang indefinitely in a few rare situations, like if the database has an archiver error or a guaranteed restore point that's full. (This one is really a blessing in disguise - many tools like Oracle Enterprise Manager will not catch those issues. You may want to have a background job checking for database link queries that have been running longer than X minutes.)
Links should not be hard-coded or else they could invalidate the package. But this may not matter - you'll probably want to loop through the list of databases and use dynamic SQL anyway. And if the link doesn't exist it's pretty easy to create a new one. Here's an example:
declare
v_result varchar2(4000);
begin
--Loop through a configuration table of links.
for links in
(
select database_name, db_link
from dbs_to_monitor
left join user_db_links
on dbs_to_monitor.database_name = user_db_links.db_link
order by database_name
) loop
--Run the query if the link exists.
if links.db_link is not null then
begin
--Note the use of REPLACE and the alternative quoting mechanism, q'[...]'.
--This looks a bit silly with this small example, but in a real-life query
--it avoids concatenation hell and makes the query much easier to read.
execute immediate replace(q'[
select dummy from dual@#DB_LINK#
]',
'#DB_LINK#', links.db_link)
into v_result;
dbms_output.put_line('Result: '||v_result);
--Catch errors if the links are broken or some other error happens.
exception when others then
dbms_output.put_line('Error with '||links.db_link||': '||sqlerrm);
end;
--Error if the link was not created.
--You will have to run:
--create database link LINK_NAME connect to USERNAME identified by "PASSWORD" using 'TNS_STRING';
else
dbms_output.put_line('ERROR - '||links.db_link||' does not exist!');
end if;
end loop;
end;
/
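Picking up on the data type bullet from the list above, here is a minimal sketch of the CLOB workaround, using hypothetical table and link names:
-- ORA-22992 stops you from selecting a LOB locator from a remote table directly,
-- but an INSERT ... SELECT into a local staging table works.
insert into local_clob_stage (id, big_text)
select id, big_text
  from remote_clob_table@some_db_link;
commit;

-- The CLOBs can now be read locally like any other column.
select id, dbms_lob.getlength(big_text) as clob_length
  from local_clob_stage;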
Despite all of that, database links are great because you can do everything in PL/SQL, on one database. In a single language you can create an agentless monitoring solution and don't have to worry about installing and fixing agents.
As an example, I built the open source program Method5 to do everything using database links. With that program installed you could gather results from hundreds of databases as simply as running select * from table(m5('select * from dba_jobs'));. That program is probably overkill for your scenario but it shows that database links are all you need for a full monitoring solution.
I occasionally encounter examples where SELECT...INTO...FROM DUAL is used to call a function - e.g.:
SELECT some_function INTO a_variable FROM DUAL;
is used, instead of
a_variable := some_function;
My take on this is that it's not good practice because A) it makes it unclear that a function is being invoked, and B) it's inefficient in that it forces a transition from the PL/SQL engine to the SQL engine (perhaps less of an issue today).
Can anyone explain why this might have been done, e.g. was this necessary in early PL/SQL coding in order to invoke a function? The code I'm looking at may date from as early as Oracle 8.
Any insights appreciated.
This practice dates from before PL/SQL and Oracle 7. As already mentioned, assignment was possible (and of course best practice) in Oracle 7.
Before Oracle 7 there were two widely used tools that required the use of SELECT ... INTO var FROM dual;
On the one hand there used to be an Oracle tool called RPT, some kind of report generator. RPT could be used to create batch processes. It had two kinds of macros that could be combined to achieve what we use PL/SQL for today. My first Oracle job involved debugging PL/SQL that was generated by a program that took RPT batches and converted them automatically to PL/SQL. I threw away my only RPT handbook sometime shortly after 2000.
On the other hand there was Oracle Forms 2.x and its Menu component. Context switching in Oracle Menu was often done with a SELECT ... FROM dual; I still remember how proud I was when I discovered that an intractable bug was caused by a total of 6 records in the DUAL table.
I am sorry to say that I cannot prove any of this, but it is the time of year to think back to the old days, and it is really fun to have the answer.
With T-SQL I'm used to putting some repeatable tests in for my stored procs. Typically this may include putting the db in a particular state, running the sproc, validating the state and rolling back. A contrived example might look something like this:
BEGIN TRAN
--input for test case
DECLARE @TestName VARCHAR(10) = 'bob'
--insert test row
INSERT INTO tbl (data) values (@TestName)
--display initial state of target row
SELECT * FROM tbl WHERE data = @TestName
--do some useful test
EXEC MyProc
--display the final state of the target row
SELECT * FROM tbl WHERE data = @TestName
--put the db back where it started
ROLLBACK TRAN
Now I'm working with Oracle and PL/SQL, and I'm trying to use a similar pattern to test my work, but it isn't obvious to me quite how to do that. I believe there are a few different ways I might accomplish it but haven't gotten anything to actually work. Ideally I would have a single script in which I could run multiple test cases and inspect the results.
I am trying to work in PL/SQL Developer at this point and understand that might have some differences from how it might work in Oracle SQL Developer or elsewhere.
In Oracle, using tools like SQL*Plus and GUI tools like SQL Developer, you have many options:
To execute the statements and procedures in a single session, in order (i.e. the procedural method of PL/SQL), write an anonymous PL/SQL block and execute it as a script; a minimal sketch of this pattern is given at the end of this answer.
Most of the GUI based tools have an option like Execute as script or Test Window to execute your scripts individually or embedded in an anonymous block.
You could also achieve the same task using DBMS_SCHEDULER.
As you are interested in the PL/SQL Developer tool, a product of Allround Automations, you could simply use the Test window to test individual objects.
I have documented a few useful features of the PL/SQL Developer tool on my blog; please read http://lalitkumarb.wordpress.com/2014/08/14/plsql-developer-settings/
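For comparison with the T-SQL example in the question, here is a minimal sketch of the same pattern as a single anonymous PL/SQL block, assuming a table TBL(DATA VARCHAR2(10)) and a procedure MYPROC along the lines of the question; note that the final ROLLBACK only restores the starting state if MYPROC does not COMMIT internally:
declare
  v_test_name tbl.data%type := 'bob';   --input for the test case
  v_row       tbl%rowtype;
begin
  --insert a test row
  insert into tbl (data) values (v_test_name);

  --display the initial state of the target row (run with SERVEROUTPUT on)
  select * into v_row from tbl where data = v_test_name;
  dbms_output.put_line('Before: ' || v_row.data);

  --do some useful test
  myproc;

  --display the final state of the target row
  select * into v_row from tbl where data = v_test_name;
  dbms_output.put_line('After: ' || v_row.data);

  --put the db back where it started
  rollback;
end;
/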
I need to capture Oracle stored procedure calls (with parameters) to trace an application (which uses JDBC to connect to the DB). I need something like sp_trace_setevent for the Rpc:Completed event in MS SQL Server.
I do not have access to this application, but I have almost all rights in the database. I would like to stay in PL/SQL (and Oracle SQL Developer 3.2.20).
I have tried:
Oracle SQL Developer UI "Tools" / "Real Time SQL Monitoring" and "Tools" / "Sessions" instruments, but I can't understand how to enable accumulating information instead of capturing a momentary snapshot.
exploring v$sql - it seems there are no stored procedure calls in it.
v$sqlarea differences (Oracle: is there a tool to trace queries, like Profiler for sql server?, mdj3884's reply) - there I am able to find my test call, but without parameters...
the suggestion from Tom's article: http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:767025833873. In particular, it loops through v$sqltext_with_newlines, but I can't understand what the script's result is. It looks more like a momentary snapshot, doesn't it? But then why do they call it tracking?
using DBMS_APPLICATION_INFO - as I understand it, this lets me add custom info to V$SESSION and V$SESSION_LONGOPS. It can be useful for monitoring tasks, but I can't see how it could be used to accumulate information about stored procedure calls and their parameters.
using DBMS_MONITOR to enable tracing to a file, but I can't find an option to trace only stored procedure call events, and it also requires access to the server's files.
DBMS_PROFILER - as I understand it, by default it collects only statistics (min, max time); there should be a possibility of adding custom information to plsql_profiler_runs, but I can't find this table (when DBMS_PROFILER is in place).
What should I look at next? What have I missed?
P.S. If the only way is to change the SP body (of those SPs which need to be traced), then what is the quickest and safest way to log SP parameters from the SP body in Oracle? It could be logging to a custom table, but maybe I could choose between generating other types of events (ones that are not rolled back, something like SQL Server custom trace events)?
It would be easy to add some custom functionality to do this (see below for most of what is required), or you could use SQL Trace or the Enterprise Manager reports and search through them:
create or replace package p_audit as
  -- Simple collection type used to pass parameter values to the audit call.
  type t_param_type is table of varchar2(50) index by binary_integer;
  procedure p_audit (p_procedure varchar2, l_param_type t_param_type);
end;
/

create table audit_table (procedure_name varchar2(50), parameters varchar2(500));

create or replace package body p_audit is
  procedure p_audit (p_procedure varchar2, l_param_type t_param_type) is
    -- Autonomous transaction so the audit row is kept even if the caller rolls back.
    pragma autonomous_transaction;
  begin
    insert into audit_table values (p_procedure, l_param_type(1));
    commit;
  end;
end p_audit;
/

-- Example call: record one parameter value for a traced procedure.
declare
  l_param_type p_audit.t_param_type;
begin
  l_param_type(1) := 'parameter 1';
  p_audit.p_audit('test procedure', l_param_type);
end;
/
I used the database audit trail to determine what kinds of procedures/functions/packages were used when a client-side application executed. I would really recommend you try it once; it really helps. The problem is that you cannot analyse deeper into a package hierarchy, i.e. which of its functions/procedures were called, only that the package itself was used. If you want to know the dependencies of the package, you can use ALL_DEPENDENCIES. It can be helpful.
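To make the audit-trail suggestion concrete, here is a minimal sketch assuming traditional (pre-unified) auditing is enabled with AUDIT_TRAIL=DB and using a hypothetical procedure name; it records the fact of each call, not the parameter values:
-- Record every execution of a specific procedure.
audit execute on myschema.proc_update_salary by access;

-- Later, review what was captured.
select username, obj_name, action_name, timestamp
  from dba_audit_trail
 where obj_name = 'PROC_UPDATE_SALARY'
 order by timestamp desc;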
We have an application where the database contains large parts of the business logic in triggers, with an update subsequently firing triggers on several other tables. I want to refactor the mess and wanted to start by extracting procedures from the triggers, but I can't find any reliable tool to do this. Using "Extract procedure" in both SQL Developer and Toad failed to properly handle the :new and :old trigger variables.
If you had similar problem with triggers, did you find a way around it?
EDIT: Ideally, only columns that are referenced by extracted code would be sent as in/out parameters, like:
Example of original code to be extracted from trigger:
.....
if :new.col1 = some_var then
   :new.col1 := :old.col1;
end if;
.....
would become:
procedure proc(old_col1 in varchar2, new_col1 in out varchar2, some_var in varchar2) is
begin
   if new_col1 = some_var then
      new_col1 := old_col1;
   end if;
end;
......
proc(:old.col1,:new.col1, some_var);
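For context, a minimal sketch of what the trigger body might look like after the extraction, using a hypothetical table T; the :new value is passed through a local variable and copied back, and how SOME_VAR is obtained is just a stand-in:
create or replace trigger t_col1_trg
before insert or update of col1 on t
for each row
declare
  some_var   varchar2(100) := 'SOME VALUE';  -- stand-in for however the original trigger derived this
  l_new_col1 t.col1%type   := :new.col1;
begin
  -- business logic now lives in the extracted procedure;
  -- only the referenced columns are passed as parameters
  proc(:old.col1, l_new_col1, some_var);
  :new.col1 := l_new_col1;
end;
/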
It sounds like you want to carry out transformations on PL/SQL source. To do this reliably, you need a tool that can parse PL/SQL to some kind of compiler data structure, pattern-match against that structure and make directed changes, and then regenerate the modified PL/SQL code.
The DMS Software Reengineering Toolkit is such a tool. It is parameterized by the programming language being translated; it has off-the-shelf front ends for many languages, including C, C++, C#, Java, COBOL and ... PL/SQL.
This is not exactly an answer; I don't have enough reputation to edit the original question, obviously.
The issue is that this is not a "refactoring" as we usually think of it. Even once you create a bunch of procedures from the triggers, you'll need to build a proper framework to run them in order to achieve the original functionality. I suspect that will be a challenge as well.
As a solution proposal, I'd go with a single Python script based on a state machine (see http://www.ibm.com/developerworks/library/l-python-state.html for example). If you put together a strict definition of what should be translated and how, it will be easy to implement.