Oracle PL/SQL data access package code generator - oracle

As many of you whose job is PL/SQL development on Oracle will have experienced, it is common to create packages that handle the data access layer for a specific table. Given a table 'employee', for example, it is very common practice to create a package 'da_employee' ('da' stands for 'data access') that implements routines such as ins() to insert a row into employee, del() to delete a row, upd() to update one, lock() to lock one, and so on.
The content of the package may vary with needs and personal preferences, but it is fair to say that once the structure of a data access package has been designed for one table, the hundreds of other tables I plan to create in my schema will need a package based on the same design.
At this point it seems clear that such a package could be auto-generated from the metadata stored in the database plus a template of the package itself.
I guess I'm not the first to have come to this conclusion, so I'm wondering whether such code generation solutions already exist, either commercial or free.

The CodeGen utility is no longer available at Toadworld. I am now looking into offering alternatives for TAPI (and more generally data access layer) generation on the PL/SQL Challenge site (plqlchallenge.com). Rick, I would be interested to talk to you about yours - feel free to contact me at steven#stevenfeuerstein.com.
Regarding the question of whether to use a TAPI or not: I believe it is most important to focus on the fundamental principles first and then seek out the optimal solution.
The key principle, for me, is to avoid repetition of SQL statements in my app, and consequently to make it easier to optimize, maintain and enhance those statements. For this reason, a data access layer is critical. Some of us build apps that perform DML on individual tables, and so we find TAPIs useful. Others do not and prefer "XAPIs" (transaction APIs).
These days, I write packages that contain parts of both - and generate as much of it as I can.
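For anyone who wants to roll their own generator, here is a minimal sketch of the idea: it simply walks USER_TAB_COLUMNS and prints a package spec with an ins() procedure. The da_ prefix and the procedure signature are my own assumptions for illustration, not any particular product's output.
-- Minimal sketch: print a data access package spec for one table.
CREATE OR REPLACE PROCEDURE print_da_spec (p_table IN VARCHAR2)
IS
BEGIN
   DBMS_OUTPUT.put_line ('CREATE OR REPLACE PACKAGE da_' || LOWER (p_table) || ' AS');
   DBMS_OUTPUT.put_line ('   PROCEDURE ins (');
   FOR c IN (SELECT column_name,
                    ROW_NUMBER () OVER (ORDER BY column_id) AS rn,
                    COUNT (*) OVER () AS cnt
               FROM user_tab_columns
              WHERE table_name = UPPER (p_table)
              ORDER BY column_id)
   LOOP
      -- one IN parameter per column, anchored to the table with %TYPE
      DBMS_OUTPUT.put_line ('      p_' || LOWER (c.column_name) || ' IN '
                            || UPPER (p_table) || '.' || c.column_name || '%TYPE'
                            || CASE WHEN c.rn < c.cnt THEN ',' END);
   END LOOP;
   DBMS_OUTPUT.put_line ('   );');
   DBMS_OUTPUT.put_line ('END da_' || LOWER (p_table) || ';');
END print_da_spec;
/
-- e.g. SET SERVEROUTPUT ON, then EXEC print_da_spec('EMPLOYEE')
The same loop can obviously be extended to emit upd(), del(), lock() and the package body; the point is only that everything needed is already in the data dictionary.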

You could try
https://code.google.com/p/tapig/
I looked at it briefly, but it had an issue because I have table names with _ in them.

Related

Using EntityFramework with SAP Business One without losing warranty?

I'd like to know if anyone is using EntityFramework with SAP Business One.
If so, how do you handle the warranty? SAP only allows Insert/Update/Delete through their DI Server API; otherwise you lose the warranty. So if I am only allowed to select, I can only use Entity Framework for reading data - is that correct?
Anyway, would you recommend using EntityFramework with SAP Business One, or are there performance issues with a high amount of data?
Greetings.
You absolutely cannot use anything other than the DI to insert, update or delete data from the SAP Business One database! As someone who has spent the last 9 years working with SAP as a partner, my honest advice is do not even try it. As soon as you break the database, SAP will not support it, and you'll end up paying someone a lot of money to fix it...
Leaving aside the warranty issues, even the simplest operation in SBO (say, adding a single invoice with one line) causes an object model update encompassing at least 10 or 11 "major" tables and their own related sets of "minor" tables. Fire up SQL Profiler and create an invoice in the SBO client, and watch how much SQL is generated - not just the inserts but the selects as well. Plus, the business logic of what SAP is doing with all this data is totally hidden from you, the caller. You have very little chance - in fact, zero chance - of modeling this correctly yourself.
As regards using EF to read data from the database, again I would not bother - much of the data that you see in the SAP client is not retrieved via properly defined relationships, which means your models will never be quite right. Better to stick with plain old SQL; by all means map this data into your own in-memory models.
In this respect SQL Profiler is your friend; nothing will show you 100% exactly how SBO does it, but executing operations in the client and watching the resulting queries will at least give you access to the same data it uses.
Also, just to correct one point - there are two ways to do this. One is the DI Server, which is an XML-based service, and the other is the DIAPI, a COM-based library you can link with your project that lets you work in a more object-oriented way (for certain, extremely limited values of "object oriented"!).
Update October 2019:
With respect to my previous advice about not using EF to read data from the SAP tables: what I've found myself doing more and more - especially with EF Core query types - is creating views against the various tables that make it simpler to read the data my applications need. A big advantage of this is that you can join multiple SBO tables and clean up column names, etc. And, being views, they are read-only and thus safe to use.
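For example, a view along these lines can then be mapped as a keyless/query type in EF Core. The OINV/INV1/OCRD names follow the usual SBO schema, but treat the exact tables and columns as assumptions to verify against your own database.
-- Read-only reporting view over the A/R invoice header, lines and business partner master.
CREATE VIEW v_invoice_lines AS
SELECT h.DocEntry  AS invoice_id,
       h.DocNum    AS invoice_number,
       c.CardName  AS customer_name,
       h.DocDate   AS invoice_date,
       l.LineNum   AS line_number,
       l.ItemCode  AS item_code,
       l.Quantity  AS quantity,
       l.LineTotal AS line_total
FROM   OINV h
       JOIN INV1 l ON l.DocEntry = h.DocEntry
       JOIN OCRD c ON c.CardCode = h.CardCode;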

How can I log trigger change events?

My task is to make a trigger which will fire when our programmers create, alter, replace or delete triggers in the database. It must log their changes to two tables which I modelled on the SYS.TRIGGER$ table, with some extra info about the user who made the changes. I copied the logging approach from the existing audit capability in an ERP system named Galaktika (or simply Galaxy). However, I ran into the well-known problem ORA-04089: no one can create triggers on system tables, and I am stuck with it.
Now I'm looking for a way to gently modify my trigger according to database rules. Here is the original code:
CREATE OR REPLACE TRIGGER MRK_AlTrigger$
BEFORE DELETE OR INSERT OR UPDATE
ON SYS.TRIGGER$
REFERENCING NEW AS New OLD AS Old
FOR EACH ROW
DECLARE
Log_Rec MRK_TRIGGERS_LOG_HEADER.NREC%TYPE;
BEGIN
INSERT INTO MRK_TRIGGERS_LOG_HEADER (DATEOFCHANGE,
USERCODE,
OPERATION,
OBJ#)
VALUES (
SYSDATE,
UID,
CASE
WHEN INSERTING THEN 0
WHEN UPDATING THEN 1
WHEN DELETING THEN 2
END,
CASE
WHEN INSERTING OR UPDATING THEN :new.OBJ#
ELSE :old.OBJ#
END)
RETURNING NRec
INTO Log_Rec;
IF INSERTING OR UPDATING
THEN
INSERT INTO MRK_TRIGGERS_LOG_SPECIF (LOGLINK,
OBJ#,
TYPE#,
UPDATE$,
INSERT$,
DELETE$,
BASEOBJECT,
REFOLDNAME,
REFNEWNAME,
DEFINITION,
WHENCLAUSE,
ACTION#,
ACTIONSIZE,
ENABLED,
PROPERTY,
SYS_EVTS,
NTTRIGCOL,
NTTRIGATT,
REFPRTNAME,
ACTIONLINENO)
VALUES (Log_Rec,
:new.OBJ#,
:new.TYPE#,
:new.UPDATE$,
:new.INSERT$,
:new.DELETE$,
:new.BASEOBJECT,
:new.REFOLDNAME,
:new.REFNEWNAME,
:new.DEFINITION,
:new.WHENCLAUSE,
:new.ACTION#,
:new.ACTIONSIZE,
:new.ENABLED,
:new.PROPERTY,
:new.SYS_EVTS,
:new.NTTRIGCOL,
:new.NTTRIGATT,
:new.REFPRTNAME,
:new.ACTIONLINENO);
END IF;
EXCEPTION
WHEN OTHERS
THEN
-- Consider logging the error and then re-raise
RAISE;
END MRK_AlTrigger$;
/
I can also provide the MRK_TRIGGERS_LOG_HEADER and MRK_TRIGGERS_LOG_SPECIF DDL, but I don't think it is necessary. To summarize, here are my questions:
How do I modify the above source to use the CREATE OR REPLACE TRIGGER ... ON DATABASE syntax?
Am I reinventing the wheel here? Is there a standard way to do such things? (I noticed that some tables have a logging option, but I believe that is for debugging purposes.)
Any help will be appreciated!
UPD: I have come to the conclusion (thanks to APC) that it is better to keep the different versions of the code in source control and record only the revision number in the DB, though I still dream of doing this automatically.
"We despaired to appeal to our programmers' neatness so my boss
requires that there must be strong and automatic way to log changes.
And to revert them quickly if we need."
In other words, you want a technical fix for what is a political problem. This does not work. However, if you have your boss's support you can sort it out. But it will get messy.
I have been on both sides of this fence, having worked as a developer and as a development DBA. I know from bitter experience how bad it can be if the development database - schemas, configuration parameters, reference data, etc - is not kept under control. Your developers will feel like they are flying right now, but I guarantee you they are not tracking all the changes they make in script form. So their changes are not reversible or repeatable, and when the project reaches UAT the deployment will most likely be a fiasco (buy me a beer and I'll tell you some stories).
So what to do?
Privileged access
Revoke access to SYSDBA accounts and application schema accounts from developers. Apart from anything else you may find parts of the application start to rely on privileged accesses and/or hard-coded passwords, and those are Bad Things; you don't want to include those breaches in Production.
As your developers have got accustomed to having such access this will be highly unpopular. Which is why you need your boss's support. You also must have a replacement approach in place, so leave this action until last. But make no mistake, this is the endgame.
Source control
Database schemas are software too. They are built out of programs, just like the rest of the application, only the source code is DDL and DML scripts not C# or Java. These scripts can be controlled in SVN as with any other source code.
How to organise it in source control? That can be tricky. So recognise that you have three categories of scripts:
Schema scripts which deploy objects
Configuration scripts which insert reference data, manage system parameters, etc
Build scripts which call the other scripts in the right order
Managing the schema scripts is the hardest thing to get right. I suggest you use separate scripts for each object. Also, have separate scripts for tables, indexes and constraints. This means you can build all the tables without needing to arrange them in dependency order.
Handling change
The temptation will be to just control a CREATE TABLE statement (or whatever). This is a mistake. In actuality changes to the schema are just as likely to add, drop or modify columns as to introduce totally new objects. Store a CREATE TABLE statement as a baseline, then manage subsequent changes as ALTER TABLE statements.
One file for the CREATE TABLE and subsequent ALTER TABLE commands, or separate ones? I'm comfortable having one script: I don't mind if a CREATE TABLE statement fails when I'm expecting the table to already be there. But this can be confusing if others will be running the scripts in, say, Production. So have a baseline script and then separate scripts for applying changes. One alter script per object per time-box is a good compromise.
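For illustration, a table's history under that convention might look like this (the file names, table and columns are invented):
-- 010_customer_table.sql : baseline
CREATE TABLE customer (
   customer_id   NUMBER        NOT NULL,
   customer_name VARCHAR2(100) NOT NULL
);
-- 011_customer_add_email.sql : later change (with a mirror script to drop the column again)
ALTER TABLE customer ADD (email VARCHAR2(254));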
Changes from developers consist of
alter table script(s) to apply the change
mirrored alter table script(s) to reverse the change
other scripts, e.g. DML
change reference number (which they will use in SVN)
Because you're introducing this late in the day, you'll need to be diplomatic. So make the change process light and easy to use. Also make sure you check and run the scripts as soon as possible. If you're responsive and do things quickly enough the developers won't chafe under the restricted access.
Getting to there
First of all you need to establish a baseline. Something like DBMS_METADATA will give you CREATE statements for all current objects. You need to organise them in SVN and write the build scripts. Create a toy database and get this right.
This may take some time, so remember to refresh the DDL scripts so they reflect the latest state of the schema. If you have access to a schema comparison tool that would be very handy right now.
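As a rough sketch of pulling that baseline with DBMS_METADATA (the SQL*Plus formatting settings are just a suggestion):
SET LONG 1000000 PAGESIZE 0 LINESIZE 200
-- one CREATE TABLE statement per table in the current schema
SELECT DBMS_METADATA.GET_DDL ('TABLE', table_name)
FROM   user_tables
ORDER  BY table_name;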
Next, sort out the configuration. Hopefully you already know which tables contain reference data; otherwise, ask the developers.
In your toy database, practice zapping the database and building it from scratch. You can use something like Ant or Hudson to automate this if you're feeling adventurous, but at the very least you need some shell scripts to get a build out of SVN.
Making the transition
This is the big one. Announce the new regime to the developers. Get your boss to attend the meeting. Remind the developers to inform you of any changes they make to the database.
That night:
Take a full export with Data Pump
Drop all the application schemas.
Build the application from SVN
Reload the data - but not the data structures - with Data Pump
Hopefully you won't have any structural issues; but if the developers have made changes without telling you, you'll know - and they won't have any data in the table.
Make sure you revoke the SYSDBA access as soon as possible.
The developers will need access to a set of schemas so they can write the ALTER scripts. If the developers don't have local personal databases or private schemas to test things in, I suggest you let them have access to that toy database to test change scripts. Alternatively you can let them keep the application owner access, because you'll be repeating the Trash'n'Rebuild exercise on a regular basis. Once they get used to the idea that they will lose any changes they don't tell you about, they will knuckle down and start Doing The Right Thing.
Last word
Obviously this is a lot of vague windbaggery, lacking in solid detail. But that's politics for you.
Postscript
I was at a UKOUG event yesterday, and attended a session by a couple of smart chaps from Redgate. They have a product, Source Control for Oracle, which provides an interface between (say) SVN and the database. It takes a rather different approach from what I outlined above, but their approach is a sound one. Their tool automates a lot of things, and I think it might help you a lot in your current situation. I must stress that I haven't actually used this product, but I think you should check it out - there's a 28 day free trial. Of course, if you don't have any money to spend then this won't help you.
You can find the desired info in the following trigger event attributes:
dictionary_obj_name
dictionary_obj_owner
ora_sysevent
Here is a simple ON DATABASE trigger:
CREATE OR REPLACE TRIGGER trigger_name
   AFTER CREATE OR ALTER OR DROP ON DATABASE
BEGIN
   -- Only log DDL that touches triggers (CREATE OR REPLACE fires the CREATE event).
   IF dictionary_obj_type = 'TRIGGER'
   THEN
      INSERT INTO log_table (trg_name, trg_owner, trg_action)
      VALUES (dictionary_obj_name, dictionary_obj_owner, ora_sysevent);
   END IF;
END;
/

How widely used are Oracle objects?

I'm writing an assignment for a databases class, and we're required to migrate our existing relational schema to Oracle objects. This whole debacle has got me wondering, just how widely used are these things? The data model is wonky, the syntax is horrendous, and the object orientation is only about three quarters of the way implemented.
Does anyone actually use this?
For starters some standard Oracle functionality uses Types, for instance XMLDB and Spatial (which includes declaring columns of Nested Table data types).
Also, many PL/SQL developers use types all the time, for declaring PL/SQL collections or pipelined functions.
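A small, everyday example of that kind of use - a SQL collection type driving a pipelined function (the names are purely illustrative):
CREATE OR REPLACE TYPE num_tab AS TABLE OF NUMBER;
/
CREATE OR REPLACE FUNCTION first_n (p_n IN PLS_INTEGER)
   RETURN num_tab PIPELINED
IS
BEGIN
   FOR i IN 1 .. p_n
   LOOP
      PIPE ROW (i);   -- stream each value back to the caller
   END LOOP;
   RETURN;
END first_n;
/
-- SELECT COLUMN_VALUE FROM TABLE (first_n (5));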
But I agree few places use Types extensively and build PL/SQL APIs out of them. There are several reasons for this.
Oracle has implemented Objects very slowly. Although they were introduced in version 8.0 it wasn't until 9.2 that they fully supported Inheritance, Polymorphism and user-defined constructors. Proper Object-Oriented Programming is impossible without those features. We didn't get SUPER() until version 11g. Even now there are features missing, most notably the private declarations in the TYPE BODY.
The syntax is often clunky or frustratingly obscure. The documentation doesn't help.
Most people working with Oracle tend to come from the relational/procedural school of programming. This means they tend not to understand OOP, or they fail to understand where it can be useful in database programming. Even when people do come up with a neat idea they find it hard or impossible to implement using Oracle's syntax.
That last point is the key one. We can learn new syntax, we can persuade Oracle to complete the feature set, but it is only worthwhile if we can come up with a use for Types. That means we need problems which can be solved using Inheritance and Polymorphism.
I have worked on one system which used types extensively. It was a data warehouse system, and the data loading sub-system was built out of Types. The underlying rationale was simple:
we need to apply the same business rule template for every table we load, so the process is generic;
every table has its own projection, so the SQL statements are unique for each one.
The Type implementation is clean: the generic process is defined in a Type; the implementation for each table is defined in a Type which inherits from that generic Type. The specific types can be generated from metadata. I presented on this topic at the UKOUG a few years ago, and I have written it up in more detail on my blog.
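As a stripped-down sketch of that pattern (names invented, type bodies omitted):
-- The generic loading process lives in a non-instantiable supertype...
CREATE OR REPLACE TYPE t_loader AS OBJECT (
   run_id NUMBER,
   NOT INSTANTIABLE MEMBER PROCEDURE load_table,
   MEMBER PROCEDURE run_load
) NOT INSTANTIABLE NOT FINAL;
/
-- ...and each table gets a subtype that supplies only its own SQL.
CREATE OR REPLACE TYPE t_emp_loader UNDER t_loader (
   OVERRIDING MEMBER PROCEDURE load_table
);
/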
By the way, Relational Theory includes the concept of Domains, which are user-defined data-types, including constraints, etc. No flavour of RDBMS actually supports Domains but Oracle's Type Implementation is definitely a step along the way.
I've never seen the benefit to it, mostly because when I last examined it, your object definitions were immutable once they were used by a table.
So if you had an Address object you used in a Customer table definition, you could never ever change the Address object definition without dropping the Customer table, or having to go through a very wonky conversion.
Objects are fine for data instantiation - like what an application does - but for data storage and set-based manipulation, well, I simply don't see the point.
Many of the other answers have given good examples of where using objects does make sense; in general these are to handle some particular, perhaps complex, types of data. Oracle itself uses them for geospatial data.
What is not commonly done - except, it would sadly appear, in some college courses - is using object tables instead of regular relational tables to hold ordinary data like employees and departments, something like this:
create type emp_t as object (empno number, ename varchar2(10), ...);
create table emp of emp_t;
While these may be nice simple examples to teach the concepts, I fear they may lead to a new generation of database developers who think this approach is suitable, more modern and therefore better than "old-fashioned" relational tables. It emphatically is not.
I've only heard of it being used in one place, and the developers involved were converting to get away from it. I've thought of using it purely in PL/SQL, but as our DBAs won't let us install any Types for fear that we might use them in a table, this is unlikely to happen.
Share and enjoy.
It's not too uncommon to see them play a small role somewhere in a system - for example, if you're doing something with an Oracle data cartridge. Sometimes, when you need to do something really unusual, they are necessary.
It is uncommon to see them used extensively in a system. I've seen two different systems use a lot of objects and it was a disaster both times: difficult to use, very slow, and full of bugs.
"Simple" relational methods that use basic tables, rows, and columns are almost always good enough. Every programmer (and program) can understand these concepts, and they are powerful enough for almost any task. Yet you can spend many years trying to fully understand and optimize these methods. Object relational technology adds a huge amount of complexity on top of that for very little gain.
I've used simple types with constructors and a few methods to wrap some functionality for interacting with an existing TCP server. I needed to pass x bytes (raw object) and receive back x bytes (clean object). I could have written a procedure that was particular to my task, but using an object type allowed this to be a bit more generic for others. Nothing fancy, very basic OO stuff: create the raw object, populate a handful of its 100 or so properties, call its clean function, and assign the result to a new "clean" object. Anyone else who wanted to call the TCP server could follow the same basic steps, populating whatever raw values with their own data.
Still, in my experience I wouldn't say Oracle is object oriented, but rather that it has some basic object functionality. And as others said, companies don't buy Oracle for its OO capabilities. Don't get too caught up in it with Oracle, in my opinion.
I would have to say it's not why people buy Oracle. It's very non-portable/non-standard, and as Adam pointed out it has some usage pitfalls as well. I've not personally seen the benefit of it. I don't know how widespread its usage is, but I can't imagine it's very big. Take a look around this site to see how many questions are asked about it. That may give you some insight.
Well, I have never used them in my practice, and I have never heard of anyone using them either, so not widely used, I guess. They matter when you have an object-oriented database; Oracle supports OO but is not an OO database. I think people who migrate from OO databases to Oracle use them widely.

Database design: Same table structure but different table

My latest project deals with a lot of "staging" data.
For example, when a customer registers, the data is stored in a "customer_temp" table, and when the customer is verified, the data is moved to the "customer" table.
Before I start firing off e-mails and go on a rampage about how I think this is wrong and you should just put a flag on the row, there is always a chance that I'm the idiot.
Can anybody explain to me why this is desirable?
Creating 2 tables with the same structure, populating a table (table 1), then moving the whole row to a different table (table 2) when certain events occur.
I could understand it if table 2 stored archival, seldom-used data.
But I can't understand it if table 2 stores live data that changes constantly.
To recap:
Can anyone explain how wrong (or right) this seemingly counter-productive approach is?
If there is a significant difference between a "customer" and a "potential customer" in the business logic, separating them out in the database can make sense (you don't need to always remember to query by the flag, for example). In particular if the data stored for the two may diverge in the future.
It makes reporting somewhat easier and reduces the chances of treating both types of entities as the same one.
As you say, however, this does look redundant and would probably not be the way most people design the database.
There seems to be several explanations about why would you want "customer_temp".
As you noted, it could be for archival purposes - to allow analyzing the data - but in that case the historical data should be aggregated according to some interesting query. Given that it holds live data, however, this does not sound plausible.
As Oded noted, there could be business logic that differentiates between a customer and a potential customer.
Or it could be a security feature which requires logging all attempts to register a customer in addition to storing approved customers.
Any time I see a permanent table named "customer_temp", I see a red flag. It typically means that someone was working through a problem as they went along and didn't think ahead.
As for the structure you describe, there are some advantages. For example, the tables could be indexed differently or placed in different file locations for performance.
But typically these advantages aren't worth the cost of keeping the structures in sync through changes (adding a column to both tables, searching for two sets of dependencies, etc.).
If you really need them to be treated differently, then it's better to handle that by adding a layer of abstraction with a view rather than creating two separate models.
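A hedged sketch of what that could look like (column names invented): one physical table plus a read-only view for each state.
CREATE TABLE customer (
   customer_id   NUMBER        PRIMARY KEY,
   customer_name VARCHAR2(100) NOT NULL,
   status        VARCHAR2(10)  DEFAULT 'PENDING' NOT NULL
                 CHECK (status IN ('PENDING', 'VERIFIED'))
);

CREATE VIEW customer_pending  AS SELECT * FROM customer WHERE status = 'PENDING';
CREATE VIEW customer_verified AS SELECT * FROM customer WHERE status = 'VERIFIED';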
I would have used a single table design, as you suggest. But I only know what you posted about the case. Before deciding that the designer was an idiot, I would want to know what other consequences, intended or unintended, may have followed from the two table design.
For example, it may reduce contention between processes that are storing new potential customers and processes accessing the existing customer base. Or it may permit certain columns to be constrained to be not null in the customer table that are permitted to be null in the potential customer table. Or it may permit write access to the customer table to be tightly controlled, and unavailable to operations that originate from the web.
Or the original designer may simply not have seen the benefits you and I see in a single table design.

ORM for Oracle pl/sql

I am developing enterprise software for a big company using Oracle. The major processing unit is planned to be developed in PL/SQL. I wonder if there is any ORM for PL/SQL, like Hibernate for Java. I have some ideas about how to build such a framework using PL/SQL and the Oracle system tables, but it is interesting that no one seems to have done this before - why? Do you think it would be effective in terms of speed and memory consumption? Why?
ORMs exist to provide an interface between a database-agnostic language like Java and a DBMS like Oracle. PL/SQL in contrast knows the Oracle DBMS intimately and is designed to work with it (and a lot more efficiently than Java + ORM can). So an ORM between PL/SQL and the Oracle DBMS would be both superfluous and unhelpful!
Take a read through these two articles - they contain some interesting points
Ask Tom - Relational VS Object Oriented Database Design
Ask Tom - Object relational impedance mismatch
As Tony pointed out, ORMs really serve as a helper across the app/DB boundary.
If you are looking for an additional level of abstraction at the database layer you might want to look into table encapsulation. This was a big trend back in the early 2000s. If you search you will find a ton of whitepapers on this subject.
Plsqlintgen still seems to be around at http://sourceforge.net/projects/plsqlintgen/
This answer has some relevant thoughts on the pros and cons of wrapping your tables in pl/sql TAPIs (Table APIs) for CRUD operations.
Understanding the differences between Table and Transaction API's
There was also a good panel discussion on this at last year's UK Oracle User Group - the overall conclusion was against using table APIs and for transaction APIs, for much the same reason: the strength of PL/SQL is the procedural control of SQL statements, while TAPIs push you away from writing set-based SQL operations and towards row-by-row processing.
The argument for TAPIs is that you may want to enforce some kind of access policy, but Oracle offers a lot of other ways to do this: fine-grained access control, constraints, and triggers on insert/update/etc. can be used to populate defaults and ensure that the calling code is passing a valid request.
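For instance, a minimal sketch of the trigger-based defaulting mentioned above (the emp table and its audit columns are assumptions for illustration):
CREATE OR REPLACE TRIGGER emp_bi
   BEFORE INSERT ON emp
   FOR EACH ROW
BEGIN
   -- fill in audit defaults so every caller gets them, TAPI or not
   :new.created_on := NVL (:new.created_on, SYSDATE);
   :new.created_by := NVL (:new.created_by, USER);
END;
/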
I would definitely advise against wrapping tables in PL/SQL object types.
A lot of the productivity of PL/SQL comes from the fact that you can easily define things in terms of the underlying database structure - a row record type can simply be declared as %ROWTYPE, and it is automatically updated when the table structure changes.
myRec myTable%ROWTYPE;
INSERT INTO myTable VALUES myRec;
This also applies to collections based over these types, and there are powerful bulk operations that can be used to fetch & insert whole collections.
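For instance, a quick sketch of those bulk operations (the emp_archive table is an assumption, with the same structure as emp):
DECLARE
   TYPE emp_tab IS TABLE OF emp%ROWTYPE;
   l_emps emp_tab;
BEGIN
   -- fetch a whole set of rows in one round trip...
   SELECT * BULK COLLECT INTO l_emps
     FROM emp
    WHERE deptno = 10;

   -- ...and insert the whole collection back in a single bulk statement
   FORALL i IN 1 .. l_emps.COUNT
      INSERT INTO emp_archive VALUES l_emps (i);
END;
/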
On the other hand, object types must be explicitly changed each time you want to alter them - every table change would require the object type to be modified and released as well, doubling your work.
It can also be difficult to release changes if you are using inheritance and collections of types (you can 'replace' a package, but cannot replace a type once it is used by another type).
This isn't putting OO PL/SQL down - there are places where it definitely simplifies code (e.g. avoiding code duplication, or anywhere you would clearly benefit from polymorphism) - but it is best to understand and play to the strengths of the language, and the main strength is that the language is tightly coupled to the underlying DB.
That said, I do often find myself creating procedures to construct a default record, insert a record, etc - often enough to have editor macros for it - but I've never found a good argument for automatically generating this code for all tables (a good way to create a lot of unused code??)
Oracle is a relational database and also has the ability to work as an object-oriented database. It does this by building an abstraction layer (fairly automatically) on top of the relational structure. This would seemingly eliminate the need for any "tool", as it is already built in.
