Large number of Oracle Packages - oracle

I am generating code for Oracle stored procedures (SPs) based on a dependency graph. In order to reduce the recompilation unit size I have organised them into Oracle packages, but this results in a large number of packages (250+). The number of procedures is 1000+.
My question: will this large number of packages create any performance issues with Oracle 11gR2+? Or will there be any deployment/management-related issues? Can somebody share their experience of working with a large number of Oracle packages?

In one of the products that I've worked on, the schema had many thousands of stored procedures, functions and packages, totalling almost half a million lines of code. Oracle shouldn't have any issues with this at all. The biggest headache was maintenance and version control of the objects.
We stored each package header and body in separate files so that we could version them independently (the header typically changes much less frequently than the body), and we used an editor that supported ctags to make navigation within a package more manageable. When you have a hundred or more procedures and functions within a package, finding the right place to make changes can take as long as actually doing the work! Another great tool was OpenGrok, which indexes the entire code base and makes searching for things super quick.
Deployment-wise, we just used a simple script that wrapped SQL*Plus to load the files and log any issues with compilation or connectivity. There are more advanced tools that sit on top of your source control system and "manage" deployment and dependencies, but we never found them necessary.
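For illustration, a minimal sketch of what such a wrapper's SQL*Plus driver script might look like (the file names and package name here are hypothetical, not from the product described above):

    -- deploy.sql: load specs before bodies, spool a log, report broken objects
    WHENEVER SQLERROR CONTINUE
    SPOOL deploy.log

    @my_package.pks              -- package spec (header)
    @my_package.pkb              -- package body
    SHOW ERRORS PACKAGE BODY my_package

    -- anything that failed to compile shows up here
    SELECT object_name, object_type, status
      FROM user_objects
     WHERE status = 'INVALID';

    SPOOL OFF
    EXIT

You would run it as something like sqlplus user/password@db @deploy.sql from whatever outer script handles connectivity errors and archives the log.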

The purpose of writing packages in Oracle is to implement a modular methodology, which can be summarised as follows:
Consolidate logically related procedures and functions under one package
Package-level variables can be declared globally and accessed from within the package or from outside it
The program units defined in a package are loaded into memory together, which reduces context-switching time
More details are provided at this link:
https://oracle-concepts-learning.blogspot.com/
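A minimal sketch illustrating these points (the package and its contents are invented for illustration):

    CREATE OR REPLACE PACKAGE order_pkg AS
      -- package-level ("global") variable, visible to callers
      g_default_status VARCHAR2(10) := 'NEW';

      -- logically related program units consolidated in one package
      PROCEDURE create_order(p_customer_id IN NUMBER);
      FUNCTION order_count RETURN NUMBER;
    END order_pkg;
    /

    CREATE OR REPLACE PACKAGE BODY order_pkg AS
      -- private variable, accessible only within the package
      g_orders_created NUMBER := 0;

      PROCEDURE create_order(p_customer_id IN NUMBER) IS
      BEGIN
        -- the real INSERT would go here
        g_orders_created := g_orders_created + 1;
      END create_order;

      FUNCTION order_count RETURN NUMBER IS
      BEGIN
        RETURN g_orders_created;
      END order_count;
    END order_pkg;
    /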

Related

SSIS Get List of all OLE DB Destinations in Data Flow

I have several SSIS packages that we use to load data from multiple different OLE DB data sources into our DB. Inside each package we have several Data Flow tasks that hold a large number of OLE DB Sources and Destinations. What I'm looking for is a way to get a text output that holds all of the Destination flow configurations (Sources would be good too, but not top of my list).
I'm trying to make sure that all my OLE DB Destination flows point at the right table, as I've found a few hiccups, without having to double-click each Flow task and check that way; that becomes tedious and is still prone to missing things.
I'm viewing the packages in Visual Studio 2013. Any help is appreciated!
I am not aware of any programmatic ways to discover this data, other than building an application to read the XML within the *.dtsx package. Best advice: pack a lunch and have at it. I know for sure that there is nothing with respect to viewing and setting database tables (only server connections).
Though, one solution I may add once you have determined the list: create a variable (or variables) to store the unique connection strings, and then set those connection strings inside the source/destination components. This will make things easier to manage going forward. In fact, you can take it one step further by setting the same values as parameters, as opposed to variables; parameters have the added benefit of being exposed on the server. This allows either you or the DBA to set the values as you promote through environments or change server nodes.
Also, I recommend rationalizing that solution into smaller solutions if possible. In my opinion, there is nothing worse than one giant solution that tries to do it all. I am not sure if any of this is helpful, but for what it's worth, I do hope it helps.
You can use the SSIS Object Model for your needs. An example can be found here; look in the method IterateAllDestinationComponentnsInPackage for the exact details. To start understanding the code, start in the Start method and follow the path.
Caveats: make sure you use the appropriate Monikers and Class IDs for the Data Flow tasks and your Destination components. You can also use this for other Control Flow tasks and Data Flow components (for example, Source components, as your other need seems to be). Just keep the appropriate Monikers and Class IDs in mind.

Converting Oracle Forms Package as an Oracle DB Package

My team is migrating an Oracle Forms and Reports-based system to a Java/Grails system. Since the destination programming language is very different from the source language, we cannot change the fact that there is 100% recoding to be done. Some stored procedures are short; converting them is easy.
I stumbled across a very important module named "End of Day (EOD) Batch Process". Inside this Oracle form is a huge package with 4369 lines of pure PL/SQL code. I am becoming anxious about converting this batch process. Is there any way to move this package to our database? Aside from the actual batch process underlying this package, there are also form-specific functions inside that need to be considered in the conversion (set_application_property(), :b_main.ctrl_something := 'X' and ADD_GROUP_COLUMN()).
Is removing these form-specific functions enough for the conversion? Or is it really not possible to convert this form package into a database package?

Splitting client/server code

I'm developing a client/server application in Go, and there are certain logical entities that exist both on the client and the server (the list is limited).
I would like to ensure that certain code for these entities is included ONLY in the server part but NOT in the client (vice versa would be nice, but is not so important).
The naive thought would be to rely on dead code elimination, but from my brief research that's not a reliable way to handle the task... go build simply won't eliminate dead code that could have been reached via reflection (it doesn't matter that it wasn't, and there is no option to tune this).
A more solid approach seems to be splitting the code into different packages and importing appropriately. This seems reliable, but it over-complicates the code, forcing you to physically split certain entities between different packages and constantly keep this in mind...
And finally there are build tags, which allow multiple files under the same package to be built conditionally for client and server.
The motivation for using build tags is that I want to keep the code as clean as possible, without introducing any synthetic entities.
Use case:
there are certain cryptography routines; the client works with the public key, the server operates with the private one... the code logically belongs to the same entity
What option would you choose and why?
This "dead code elimination" is already done–partly–by the go tool. The go tool does not include everything from imported packages, only what is needed (or more precisely: it excludes things that it can prove unreachable).
For example, this application:

    package main

    import _ "fmt"

    func main() {}
results in an almost 300 KB smaller executable binary (on Windows amd64) than the following:

    package main

    import "fmt"

    func main() { fmt.Println() }
Excludable things include functions, types and even unexported and exported variables. This is possible because even with reflection you can't call a function or "instantiate" types or refer to package variables just by having their names as a string value. So maybe you shouldn't worry about it that much.
Edit: With Go 1.7 released, it is even better: read the blog post Smaller Go 1.7 binaries.
So if you design your types and functions well, and you don't create "giant" registries where you enumerate functions and types (which explicitly generates references to them and thus renders them unexcludable), compiled binaries will only contain what is actually used from imported packages.
I would not suggest using build tags for this kind of problem. By using them, you take on an extra responsibility: maintaining package/file dependencies yourself, which is otherwise done by the go tool.
You should not design and separate code into packages to make your output executables smaller. You should design and separate code into packages based on logic.
I would go with separating things into packages where it is really needed, and importing appropriately. Because this is really what you want: some code intended only for the client, some only for the server. You may have to think a little more during your design and coding phase, but at least you will see the result (what actually belongs to / gets compiled into the client and into the server).
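For example, here is a sketch of what that separation might look like for the crypto use case in the question (the module path, package names and function bodies are invented for illustration; each snippet is a separate file):

    // entity/entity.go - shared types, imported by both binaries
    package entity

    type Message struct {
        Payload []byte
    }

    // entityclient/seal.go - public-key side, imported only by the client binary
    package entityclient

    import (
        "crypto/rand"
        "crypto/rsa"

        "example.com/app/entity"
    )

    // Seal encrypts a message with the server's public key.
    func Seal(pub *rsa.PublicKey, m entity.Message) ([]byte, error) {
        return rsa.EncryptPKCS1v15(rand.Reader, pub, m.Payload)
    }

    // entityserver/open.go - private-key side, imported only by the server binary
    package entityserver

    import (
        "crypto/rand"
        "crypto/rsa"

        "example.com/app/entity"
    )

    // Open decrypts a sealed message with the server's private key.
    func Open(priv *rsa.PrivateKey, ciphertext []byte) (entity.Message, error) {
        b, err := rsa.DecryptPKCS1v15(rand.Reader, priv, ciphertext)
        return entity.Message{Payload: b}, err
    }

Since the client binary never imports entityserver, none of the private-key code can end up compiled into it.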

Is modular approach in Drupal good for performance?

Suppose I have to create functionalities A, B and C through custom coding in Drupal using hooks.
Either I can club all three of them into custom1.module, or I can create three separate modules for them, say custom1.module, custom2.module and custom3.module.
Benefits of creating three modules:
Clean code
Easily searchable
Mutually independent
Easy to commit in multi-developer projects
Cons:
Every module entry gets stored in the database and requires a query.
To what extent does it mar the performance of the site?
Is it better to create a single large custom module file for the sake of reducing database queries or break it into different smaller ones?
This issue might be negligible for small-scale websites, so consider the case of large-scale, performance-oriented sites.
I would code it based on how often I need functions A, B and C.
Actual example:
I made a module which had three needs:
1) Send periodic emails based on user preference. Let's call this function A.
2) Custom content made in a module. Let's call this function B.
3) Social integration. Let's call this function C.
Since function A is only called once a week, I made a separate module for it.
As for functions B and C, I put them together, as they would always be called together.
If you have problems with performance then check out this link. It's a good resource for performance improvement.
http://www.vmirgorod.name/10/11/5/tuning-drupal-performance
It lists a nice module called Boost. I have not used it, but I have heard good things about it.
cheers,
Vishal
Drupal .module files are all loaded on every page load, so there is very little performance-related gain or loss simply from separating functions into different .module files. If you are not using an opcode cache, you can get improved performance by creating .inc files and referencing those files from the menu items in hook_menu, so that those files only get loaded when the menu items are accessed. Then infrequently called functions do not take up memory space.
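A sketch of that pattern, assuming Drupal 7 and a hypothetical module named mymodule:

    <?php
    // In mymodule.module: hook_menu() points the menu item at an include
    // file, so the page callback is loaded only when this path is hit.
    function mymodule_menu() {
      $items['reports/weekly'] = array(
        'title' => 'Weekly report',
        'page callback' => 'mymodule_weekly_report_page',
        'access arguments' => array('access content'),
        'file' => 'mymodule.reports.inc', // loaded on demand
      );
      return $items;
    }

    // In mymodule.reports.inc: the infrequently called code lives here and
    // stays out of memory on every other page load.
    function mymodule_weekly_report_page() {
      return t('Report output goes here.');
    }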
File separation in general is a very small performance issue compared to how the module is designed with respect to caching, memory use, and/or database access and structure. Let the code maintenance and dependency issues drive when you do or do not create separate modules.
I'm actually interested in:
Is it better to create a single large custom module file for the sake of reducing database queries or break it into different smaller ones?
I was poking around and found a few things regarding benchmarking the database. Maybe the suggestion here is to fire up the dev version and test; check out db benchmarking.
Now, I understand that doesn't answer the question specifically, but I'd have to say it's unique to each environment. I hate to give that type of answer, but I truly believe it is. It depends on the modules installed, versions used, hardware and OS tunables, among many other things.

Overhead for calling a procedure/function in another Oracle package

We're discussing the performance impact of putting a common function/procedure in a separate package or using a local copy in each package.
My thinking is that it would be cleaner to have the common code in a package, but others worry about the performance overhead.
Thoughts/experiences?
Put it in one place and call it from many - that's basic code re-use. Any overhead in calling one package from another will be minuscule. If they still doubt it, get them to demonstrate the performance difference.
The worriers are perfectly at liberty to prove the validity of their concerns by demonstrating a performance overhead. That ought to be trivial.
Meanwhile, they should consider the memory usage and maintenance overhead of repeating code in multiple places.
Common code goes in one package.
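If they want numbers, a rough sketch of such a demonstration could look like this (the names are invented; DBMS_UTILITY.GET_TIME returns elapsed hundredths of a second):

    CREATE OR REPLACE PACKAGE util_pkg AS
      FUNCTION add_one(p IN NUMBER) RETURN NUMBER;
    END util_pkg;
    /

    CREATE OR REPLACE PACKAGE BODY util_pkg AS
      FUNCTION add_one(p IN NUMBER) RETURN NUMBER IS
      BEGIN
        RETURN p + 1;
      END add_one;
    END util_pkg;
    /

    DECLARE
      l_start PLS_INTEGER;
      l_x     NUMBER := 0;

      -- local copy of the same function for comparison
      FUNCTION add_one(p IN NUMBER) RETURN NUMBER IS
      BEGIN
        RETURN p + 1;
      END add_one;
    BEGIN
      l_start := DBMS_UTILITY.GET_TIME;
      FOR i IN 1 .. 1000000 LOOP
        l_x := util_pkg.add_one(l_x);
      END LOOP;
      DBMS_OUTPUT.PUT_LINE('cross-package: ' || (DBMS_UTILITY.GET_TIME - l_start));

      l_x := 0;
      l_start := DBMS_UTILITY.GET_TIME;
      FOR i IN 1 .. 1000000 LOOP
        l_x := add_one(l_x);
      END LOOP;
      DBMS_OUTPUT.PUT_LINE('local copy:    ' || (DBMS_UTILITY.GET_TIME - l_start));
    END;
    /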
Unless you are calling a procedure in a package situated on a different database over a DB link, the overhead of calling a procedure in another package is negligible.
There are some performance concerns, as well as memory concerns, but they are few and far between. Besides, they fall into the "Oracle black magic" category. For example, check this link. If you can clearly understand what it is about, consider yourself an accomplished Oracle professional. If not, don't worry, because it's really hardcore stuff.
What you should consider, however, is the question of dependencies.
An Oracle package consists of two parts, a spec and a body:
The spec is the header, where public procedures and functions (that is, those visible outside the package) are declared.
The body is their implementation.
Although closely connected, they are 2 separate database objects.
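For example, for a hypothetical package p:

    CREATE OR REPLACE PACKAGE p AS        -- the spec
      PROCEDURE run;
    END p;
    /

    CREATE OR REPLACE PACKAGE BODY p AS   -- the body
      PROCEDURE run IS
      BEGIN
        NULL;
      END run;
    END p;
    /

    -- the data dictionary tracks them as two distinct objects:
    SELECT object_type, status
      FROM user_objects
     WHERE object_name = 'P';
    -- PACKAGE        VALID
    -- PACKAGE BODY   VALID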
Oracle uses a package status to indicate whether the package is VALID or INVALID. If a package becomes invalid, then all the other packages that depend on it become invalid too.
For example, if your program calls a procedure in package A, which calls a procedure in package B, then your program depends on package A, and package A depends on package B. In Oracle this relation is transitive, which means that your program also depends on package B. Hence, if package B is broken, your program also breaks (terminates with an error).
That should be obvious. What is less obvious is that Oracle also tracks dependencies at compile time via package specs.
Let's assume that the specs and bodies of both package A and package B are successfully compiled and valid. Then you go and make a change to the package body of package B. Because you only changed the body, not the spec, Oracle assumes that the way package B is called has not changed and doesn't do anything.
But if, along with the body, you change package B's spec, then Oracle suspects that you might have changed some procedure's parameters or something like that, and marks the whole chain as invalid (that is, packages B and A and your program).
Please note that Oracle doesn't check whether the spec has really changed; it just checks the timestamp. So it's enough just to recompile the spec to invalidate everything.
If invalidation happens, the next time you run your program it will fail. But if you run it one more time after that, Oracle will recompile everything automatically and execute it successfully.
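You can observe the invalidation described above with hypothetical packages A and B, where A depends on B:

    -- recompiling only B's spec...
    ALTER PACKAGE b COMPILE SPECIFICATION;

    -- ...and checking what got invalidated:
    SELECT object_name, object_type, status
      FROM user_objects
     WHERE status = 'INVALID';

    -- instead of letting the first run fail, you can revalidate explicitly:
    ALTER PACKAGE a COMPILE;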
I know it's confusing. That's Oracle. Don't try to wrap your brains too much around it.
You only need to remember a couple of things:
Avoid complex inter-package dependencies if possible. If one thing depends on another thing, which depends on one more thing, and so on, then the probability of invalidating everything by recompiling just one database object is extremely high.
One of the worst cases is "circular" dependencies, where package A calls a procedure in package B, and package B calls a procedure in package A. In that case it is almost impossible to compile one without breaking the other.
Keep the package spec and package body in separate source files. And if you only need to change the body, don't touch the spec!