There is an Oracle DB to which I have access. I can view its packages using Aqua Data Studio and make queries to it. I don't have any access to the filesystem of the server.
There is also a binary that uses that database by calling stored procedures from it.
I want to know which stored procedures this binary calls, and with what parameters. It seems to be impossible to do with "Statement monitor for Oracle" - it only logs direct query calls, not stored procedure calls.
Can it be done with built-in trace if I don't have access to the filesystem?
Is there some other tool?
You can use the DBMS_PROFILER package: http://download.oracle.com/docs/cd/B10501_01/appdev.920/a96624/12_tune.htm#45936
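If you can reproduce the relevant calls in your own session, a minimal run looks roughly like this (it assumes the PLSQL_PROFILER_* tables created by proftab.sql already exist; note the profiler only records PL/SQL executed in your own session, so it will not see what the binary does in its session):

declare
  l_result binary_integer;
begin
  l_result := dbms_profiler.start_profiler(run_comment => 'trace run');
  -- call the stored procedures you want to inspect here
  l_result := dbms_profiler.stop_profiler;  -- stops and flushes the collected data
end;
/

-- afterwards, see which program units and lines were executed
select u.unit_owner, u.unit_name, d.line#, d.total_occur
from plsql_profiler_units u
join plsql_profiler_data d
  on d.runid = u.runid and d.unit_number = u.unit_number
where d.total_occur > 0
order by u.unit_owner, u.unit_name, d.line#;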
You can try PL/SQL Developer; it supports debugging your procedure step by step.
I'm trying to export an Oracle DB using Oracle SQL Developer, containing tables, sequences, views, packages, etc. with dependencies on each other.
When I use Tools -> Database Export and select all DDL options, unfortunately the exported SQL file does not preserve the order, that is, some DB objects should be created before others.
Is there a way to make the DB export utility preserve object dependencies/order? Or is there any other tool you use for this task?
Thank you
Normally expdp does a pretty good job. Problems arise when there are dependencies on objects/users that are not part of the dump. This is because the counterpart, impdp, does not add grants on objects that are not created by impdp. I call that the 'not created by me' syndrome that impdp has.
If you have no external dependencies (external meaning to schemas that are not part of the dump), expdp/impdp do a good job for you. You might not be able to use them if you don't have access to the database server, since expdp writes its files on the database server.
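As an illustration only (credentials, connect string and schema name are placeholders), a schema-level export and import look roughly like this; DATA_PUMP_DIR points to a directory on the database server, which is why server access matters:

expdp scott/tiger@orcl schemas=SCOTT directory=DATA_PUMP_DIR dumpfile=scott.dmp logfile=scott_exp.log
impdp scott/tiger@orcl schemas=SCOTT directory=DATA_PUMP_DIR dumpfile=scott.dmp logfile=scott_imp.log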
If you happen to have access to a database server that is able to connect to the original database, you could pull the data over into your local database using a database link.
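A rough sketch of that approach, run in the local database (the link name, credentials, TNS alias and table names are placeholders):

create database link src_link
  connect to scott identified by tiger
  using 'ORCL_REMOTE';

-- pull a table across the link into a local copy
create table local_copy as
  select * from some_table@src_link;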
I have a CSV file which has to be bulk imported into an Oracle DB. I was working with a Sybase DB engine before, so I had a sample script with the environment setup for it. Right now I have to do the process in an Oracle DB, so what should the first line be? I know about the rest of the parameters, but I want to know the path which has to be defined when I write
path/bcp dbtable in data.txt
If anyone could help: what should the path be for an Oracle DB?
The primary tools for bulk or flat file loading are:
SQL*Loader (see the sketch after this list)
External Tables
GUI tools like Oracle SQL Developer
It is much more cumbersome, but if necessary you can roll your own solution with the UTL_FILE PL/SQL package.
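As a sketch of the SQL*Loader route, assuming a comma-delimited data.txt and the table name dbtable from the question (the column names col1/col2 are made up), a control file could look like this:

-- load.ctl (column names are placeholders)
LOAD DATA
INFILE 'data.txt'
APPEND
INTO TABLE dbtable
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(col1, col2)

and would be invoked from the client with something like:

sqlldr userid=scott/tiger@orcl control=load.ctl log=load.log

Unlike bcp against Sybase, sqlldr runs on the client and reads the local file, so no server-side path is needed.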
We are migrating data from Oracle to Hadoop, and there is a requirement to continue using the existing reporting tool (Crystal Reports) to generate reports from Hadoop (instead of Oracle).
In the current scenario we are using an Oracle stored proc to do a few aggregations and some logic.
Now, with the above requirement and the migrated data, the options that we considered are:
Use HPLSQL (instead of Oracle PL/SQL) and call it from Crystal Reports. However, it appears that there are challenges with this approach because, unlike Oracle stored procs, HPLSQL stored procs are not registered in the DB catalogue, and hence Crystal Reports may be unable to find / access those HPLSQL stored procs.
Create the custom aggregation/logic in Java and expose it as a web service which can be invoked/consumed from Crystal Reports.
The help/guidance needed here is:
a) Has anyone successfully called HPLSQL stored procs from an external tool / reporting tool like Crystal over ODBC/JDBC? If yes, can you share the details?
b) Is there a better option than the two mentioned above to achieve the requirement?
c) Are there any challenges in using web services to fetch data and run reports?
Thanks in advance and any help is really appreciated.
I have a GIS Oracle database that I need to reference in an SSIS data flow task. Ideally I would do something like this (which works perfectly in Oracle SQL Developer):
execute sde.version_util.set_current_version('SAFE.mvedits')
SELECT CAD_EVENTID
FROM SAFE.INCIDENT_POINT_MV
However, when I try to use that as the SQL command of my OLE DB data source, it throws an "Invalid SQL" error.
How do I set the SDE version in an SSIS data flow task data source?
Knowing next to nothing about Oracle, here is what you might try:
In your Oracle Connection Manager, change the property RetainSameConnection to True. This means that all tasks talking to Oracle will reuse the same connection.
Add an Execute SQL Task before your Data Flow that talks to Oracle. Use your query there to set the current version. That setting should then persist on the connection.
In your OLE DB Datasource, start with the SELECT statement.
You might need to set DelayValidation to true as well.
If that's not working, let me know and I'll see if I can come up with anything else.
As it turns out, this is a shortfall of interacting with GIS Oracle databases through third-party applications. In my situation we addressed it by bundling the change into a stored procedure that lives on the Oracle server and invoking that stored procedure from inside SSIS.
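A rough sketch of what such a wrapper could look like (the procedure name and the ref-cursor output are assumptions; the version and query come from the question):

create or replace procedure get_incident_points (p_cur out sys_refcursor) as
begin
  -- set the SDE version first, then hand the rows back in one call
  sde.version_util.set_current_version('SAFE.mvedits');
  open p_cur for
    select cad_eventid
    from safe.incident_point_mv;
end;
/

SSIS then only has to invoke this one procedure instead of issuing the two statements itself.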
Is there any easy way to save a BLOB as a binary file to the client-side file system using only standard Oracle utilities (such as sqlplus or sqlldr, for example)?
I've already looked into the UTL_FILE package, but I have two problems with it:
I have doubts that it can work with the client-side file system.
I have no privilege to CREATE DIRECTORY in the schema where the BLOBs are stored, so I can't work with UTL_FILE at all.
Also, I know that I can just write some home-brewed utility in any language (Java, for example): connect to Oracle, select my BLOB and save it in binary format. But I'd like to find an easier way before doing that.
Would you really want a database writing a BLOB, for example winword.exe, to your local PC? This is the sort of thing that is intentionally quite protected.
It is very client driven, so the best place to start is with whatever is running on your local machine. I'd go with a Java routine, or, if you've got APEX running, a simple procedure that pushes the BLOB out through the browser and lets the browser prompt you for where to save it.
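A minimal sketch of that browser-push idea, assuming a hypothetical table MY_DOCS(ID, CONTENT BLOB) and an environment (APEX, ORDS or mod_plsql) that serves OWA output; the browser, not the database, then writes the file to the client machine:

create or replace procedure download_doc (p_id in number) as
  l_blob blob;
begin
  -- table and column names are placeholders
  select content into l_blob from my_docs where id = p_id;
  owa_util.mime_header('application/octet-stream', false);
  htp.p('Content-Length: ' || dbms_lob.getlength(l_blob));
  htp.p('Content-Disposition: attachment; filename="doc_' || p_id || '.bin"');
  owa_util.http_header_close;
  -- stream the BLOB to the browser, which prompts for a save location
  wpg_docload.download_file(l_blob);
end;
/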