Deploying a huge PL/SQL package via JDBC is incredibly slow

I have to deploy very large PL/SQL packages via JDBC and am experiencing extremely long deployment times. I know that it isn't a good idea to use packages with up to 25,000 lines of code, but I have no choice about that right now. Deploying such a package takes about 2.5 hours via JDBC.
I read the package from the filesystem, wrapping a FileReader in a BufferedReader. I parse it line by line, checking for a delimiter, and append each line to a StringBuilder until the statement is complete. Then I call toString() on the StringBuilder and hand the resulting String over to my Statement's execute().
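A minimal sketch of that load loop, assuming a "/" delimiter line and an illustrative connection URL (simplified, not my exact code):

import java.io.BufferedReader;
import java.io.FileReader;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class PackageLoader {
    // Sketch of the load loop described above; the "/" delimiter and the
    // connection URL are illustrative assumptions.
    public static void deploy(String file, String url) throws Exception {
        try (Connection con = DriverManager.getConnection(url);
             Statement stmt = con.createStatement();
             BufferedReader reader = new BufferedReader(new FileReader(file))) {
            StringBuilder sb = new StringBuilder();
            String line;
            while ((line = reader.readLine()) != null) {
                if (line.trim().equals("/")) {      // delimiter: statement is complete
                    stmt.execute(sb.toString());    // ship the CREATE PACKAGE statement
                    sb.setLength(0);
                } else {
                    sb.append(line).append('\n');
                }
            }
        }
    }
}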
Thank you for any advice!

Oracle PL/SQL packages consist of:
a package header - the specification, similar to a C .h file
a package body - the equivalent of a .c file
Uploading a package header will invalidate all PL/SQL packages that utilize functions/procedures/records/constants/objects/types from that package - generally anything that references or uses something from the package.
Specify PACKAGE to recompile both the package specification and the package body if one exists, regardless of whether they are invalid. This is the default. The recompilation of the package specification and body lead to the invalidation and recompilation of dependent objects as described for SPECIFICATION and BODY.
The database also invalidates all objects that depend upon emp_mgmt. If you subsequently reference one of these objects without explicitly recompiling it first, then the database recompiles it implicitly at run time.
source: http://docs.oracle.com/cd/B28359_01/appdev.111/b28370/alter_package.htm
Uploading a package body has less impact on the state of the database. Packages dependent on the uploaded unit will not be invalidated.
Recompiling a package body does not invalidate objects that depend upon the package specification.
When you recompile a package body, the database first recompiles the objects on which the body depends, if any of those objects are invalid. If the database recompiles the body successfully, then the body becomes valid.
source: http://docs.oracle.com/cd/B28359_01/appdev.111/b28370/alter_package.htm
Your extremely long compile times are probably caused by a cascade of recompilations performed by the database. This is typically the result of keeping the package header and the package body in a single file and then uploading both the header and the body of each package sequentially, as in the following example:
#pkg_a.sql - dependencies invalidated and recompilation attempt occurs
#pkg_b.sql - same as above
#pkg_c.sql - same as above
In this scenario, the database might attempt to recompile some packages several times without success because further dependencies were not yet uploaded. That's time wasted for dependency resolution and compilation.
This scenario can be greatly improved by splitting each package into a .pks file (containing only the header) and a .pkb file (containing only the body), then uploading all the header files first and the bodies afterwards:
#pkg_a.pks - dependencies invalidated but not recompiled
#pkg_b.pks - same as above
#pkg_c.pks - same as above
#pkg_a.pkb - pkg_a recompiled successfully because all headers are up to date
#pkg_b.pkb - same as above
#pkg_c.pkb - same as above
This is possible because only the package headers of the dependencies need to be valid in order to compile a dependent package. In this scenario, recompilation of each package body occurs only once.
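A sketch of that two-pass upload from JDBC, reusing the hypothetical deploy() helper from the question above (file names and URL are illustrative):

String url = "jdbc:oracle:thin:scott/tiger@dbhost:1521:ORCL"; // illustrative
String[] packages = {"pkg_a", "pkg_b", "pkg_c"};
for (String pkg : packages) {
    PackageLoader.deploy(pkg + ".pks", url);  // headers: dependents invalidated, not recompiled
}
for (String pkg : packages) {
    PackageLoader.deploy(pkg + ".pkb", url);  // bodies: each compiles exactly once
}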
Splitting the packages into header and body files also allows you to skip uploading header files that did not change. This happens quite often, as most changes are made to the body (the actual code) of a library. Skipping an unnecessary upload of a package header results in fewer packages being invalidated and hence less work to revalidate the whole database.
This approach should help you vastly reduce the time required to upload changes to the database.

I don't think your long package deployment times have much to do with the Java/JDBC load process, but rather with the Oracle database's package management.
When you execute a CREATE PACKAGE BODY FOO..., the RDBMS software first checks (via database locks) that no other sessions in the system are using the package. If they are, your process hangs until all the other users are finished with the package. Then it commits your package into the database. One way to test this is to rename the package (in your original source file) and try to load it. If that doesn't take 2.5 hours, locking may be a contributing factor.
The other thing the RDBMS does when you run the statement is compile the package, which involves verifying all the references in your code (e.g. tables, views, other packages) and generating an encoded version. For a large package the compile time may be significant. The way to test this is to run the statement ALTER PACKAGE FOO COMPILE; and time it.
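From JDBC, that timing test could look like the following fragment (con is an open Connection; the package name is illustrative). Note that the trailing semicolon must be dropped when executing DDL through Oracle's JDBC driver:

try (Statement stmt = con.createStatement()) {
    long start = System.currentTimeMillis();
    stmt.execute("ALTER PACKAGE foo COMPILE"); // compile only, no upload
    System.out.println("Compile took " + (System.currentTimeMillis() - start) + " ms");
}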

Related

Avoiding Go panic when Oracle Database client libraries not found

I have a server written in Go that accesses an Oracle database. It works fine. However, there will be multiple instances running at different (currently 2) locations, some of which do not need to access the database. (They get the same info passed on to them from their peer servers.)
I want the same executable running in all places, but some will be configured not to use the database since they don't need it. The problem is that once I import the OCI package, its init() function is called, and it panics when it can't talk to the database.
Running Go 1.12.5 on Windows Server 2019.
I tried adding OCI.DLL to the same directory as the .EXE but it still panics.
import _ "github.com/mattn/go-oci8"
When I run on the server (without DB drivers) I get the error:
panic: OCIEnvCreate error
goroutine 1 [running]:
github.com/mattn/go-oci8.init.0()
D:/Golang/src/github.com/mattn/go-oci8/globals.go:160 +0x130
I want to avoid this panic when I don't need database access. I would prefer one .EXE without the mess of conditional builds.
Switch to the goracle driver, which delays Oracle client library initialization until a connection is actually opened, precisely to handle your situation, where not every instance of the app connects to the Oracle DB.
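A minimal sketch of that pattern, assuming goracle v2 and an illustrative DSN; neither the import nor sql.Open loads the Oracle client libraries, only the first real connection does:

package main

import (
    "database/sql"
    "log"

    _ "gopkg.in/goracle.v2" // registers the "goracle" driver; no OCI setup in init()
)

func main() {
    useDB := false // e.g. read from this instance's configuration

    if useDB {
        db, err := sql.Open("goracle", "user/pass@dbhost:1521/service") // DSN is illustrative
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()
        // The Oracle client libraries are loaded here, on first use,
        // not at import time as with go-oci8.
        if err := db.Ping(); err != nil {
            log.Fatal(err)
        }
    }
    // Instances configured without DB access never touch the client libraries.
}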
As you said, adding the DLLs to the same directory as the exe solves your problem, so if you still want to distribute a single file, you can have the exe copy the DLLs out when it starts, and even delete them when it is done. This way you can transfer one file to each location, but there is most likely no way to keep it a single file while running.

SSIS Package Deployment -DTS_E_CANNOTACQUIRECONNECTIONFROMCONNECTIONMANAGER

> [OLE DB Source [113]] Error: SSIS Error Code
DTS_E_CANNOTACQUIRECONNECTIONFROMCONNECTIONMANAGER. The
AcquireConnection method call to the connection manager "msAccess"
failed with error code 0xC0202009. There may be error messages posted
before this with more information on why the AcquireConnection method
call failed. .
I am new to SSIS package design; I am designing .dtsx packages for learning purposes. I have a .mdb file that I am trying to import into my SQL Server 2016. When I run the package in the designer it runs successfully, but when I deploy the package and try to run it through the package execution utility, it shows me the above error.
I searched a lot and changed package properties such as DelayValidation = True and ValidateExternalMetadata = False.
But even after deploying the package, the data is not loading into the target table.
The most likely answer is that you don't have the .mdb file in the same local folder on the SQL Server:
C:\Users\Administrator\Desktop\msAccess.mdb
When you deploy the package and run it through the package utility, the package is running ON the SQL Server, and not on your local box anymore. So any local path in the package will be interpreted as a local path on the SQL Server.
Sorry for the late reply. I gave the SQL Server Agent account access to my packages, and they now run through my stored procedure; I also gave the SQL Server Agent account access to the .mdb file.
I solved this issue by adjusting which elements of my connection managers are parameterized.
I learned that you should parameterize either the entire connection string or the individual elements of the connection string. Not both.
As soon as I removed the ConnectionString parameter, it worked fine. I now have just the Password and DataSource parameterized. I used to have those plus the ConnectionString parameterized, but that is not necessary and just trips up SSIS. Only parameterize what you absolutely need to.

Golang dependencies registering sql driver in init() causing clash

I have some Go tests that depend on some external code which has init() methods that registers mysql drivers. My code also needs to register mysql drivers and I therefore end up with a panic and the error "Register called twice for driver mysql" when running my tests.
It seems that the repo I'm dependent on has a vendor directory with the driver in it ("vendor/github.com/go-sql-driver/mysql"). It seems that when I run my tests, the init() functions in the vendored driver are called, registering the mysql driver and causing the clash.
The best option I can see would be to copy the dependency into my own vendor directory and remove the mysql driver from the dependency's vendor directory. However, I'm not keen on this, as it involves duplicating my dependency and then modifying it by deleting files. Also, I'm only dependent on it for running tests, so I'm not sure I should be moving it into my vendor directory anyway.
Is there a way to prevent init() being called on a dependency's vendor dependencies?
First of all, I would ditch the dependency. If it's registering a database driver - something that dependencies really should never do - I predict there will be more problems with it. Also I suggest opening an issue.
As a workaround, you can import the library conditionally, depending on whether you are in a test build or not, using a build tag. So you will have a file called e.g. mysqlimport.go containing nothing but:
// +build !test

package mypkg // the package name is illustrative; use your own

import _ "github.com/go-sql-driver/mysql" // registers the "mysql" driver in its init()
This way the driver is only registered when you're not testing, although you'll have to run your tests with go test -tags test. Note the blank line after the build constraint; it is required for the constraint to take effect.

JSON file as config

I have a JSON file as config. One problem I can see is that the JSON cannot be compiled into the Go binary, and I'm worried it might also affect the application's performance, since the JSON is parsed for every request. Would I be better off using a struct and initialising it in a separate Go file?
If you can store the configuration in Go code, then I assume the configuration does not change during the execution of the application. So load the configuration once when the application starts and store the parsed representation in memory, possibly referenced from a package-level variable.
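A minimal sketch of that pattern (the struct fields and file path are illustrative):

package config

import (
    "encoding/json"
    "os"
)

// Config mirrors the JSON file; the fields here are illustrative.
type Config struct {
    ListenAddr string `json:"listen_addr"`
    DebugMode  bool   `json:"debug_mode"`
}

// C holds the parsed configuration: loaded once at startup,
// then only read (never re-parsed) while serving requests.
var C Config

// Load parses the JSON file into C. Call it once from main()
// before handling any requests.
func Load(path string) error {
    f, err := os.Open(path)
    if err != nil {
        return err
    }
    defer f.Close()
    return json.NewDecoder(f).Decode(&C)
}

Handlers then reference config.C directly, so the file is read and parsed exactly once per process.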

Track/trace a used (invoked) package function/procedure or fire event on a particular package sub-program call

I am a newbie to Oracle. I have been assigned to find all packages, functions and procedures used by the (client) legacy system. I found a solution for that using AUDIT. The problem is that AUDIT does not allow us to trace/track package functions or procedures: I now have the list of packages and functions/procedures used by our client system, but what I need to do next is find which functions and procedures inside those packages are used (or not used) by the client system. In other words, how can one track or log which function/procedure of a package is invoked?
I tried DBMS_TRACE and DBMS_PROFILER to determine which function/procedure is invoked, but they do not give me information about a package and its corresponding sub-program(s). The solution should tell me which sub-program unit is called by the client side: not only the package (I already have the list of packages used), but also which of the package's sub-programs are run (called/invoked).
Or is there any way to fire a kind of trigger/event when a package or package sub-program is called?
Could anyone help me, please? It would be appreciated.
