I use Oracle 11 on my local server, and want to export my data using the Oracle exp tool:
http://docs.oracle.com/cd/B28359_01/server.111/b28319/exp_imp.htm#i1023725
I don't have any views, triggers or stored procedures, just ordinary tables and some image BLOBs in one table. It should be really simple to export this.
But I really didn't understand how to do it.
First of all, it says I should run catexp.sql or catalog.sql before I run the exp tool. OK, but where the heck are these scripts? I searched my computer but no such thing exists.
Second, it is still not clear what needs to be done, i.e. which .exe exactly I need to run. And then it says:
exp PARAMETER=value
What the heck is PARAMETER and what is value? Is there any better documentation, or can anyone explain in simple terms the steps I need to take?
You only need to run catexp/catalog if they haven't been run already for some reason; they would normally exist and be run as part of database creation, so you probably don't need to worry about those.
PARAMETER is a placeholder for any of the supported parameters, as shown under 'invoking export and import'.
You need to specify an export (dump) file; the default is to create a file called EXPDAT.DMP in the current directory. If you don't have permission to write to that directory, you need to specify the full path to where you want the file to be created, including its name.
There are [several export examples], including table mode and user mode. When you run interactively and don't specify OWNER or TABLES on the command line or in a parameter file you're prompted to choose the mode, which is the 'users or tables' prompt you saw. You might want something like this example:
exp blake/paper FILE=blake.dmp TABLES=(dept, manager) ROWS=y COMPRESS=y
... but with your own user/password, file name (and path), and table names.
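If the command line gets unwieldy, the same parameters can also go into a parameter file passed with PARFILE; a minimal sketch, where the file name, user and table names are just placeholders. Contents of export.par (one PARAMETER=value per line):
FILE=blake.dmp
TABLES=(dept, manager)
ROWS=y
COMPRESS=y
Then run:
exp blake/paper PARFILE=export.par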
I have a requirement to set the read-only setting of 300+ IG (Interactive Grid) columns in my application to null. I am able to query the columns from the APEX metadata views. I am wondering if it is OK to update the underlying APEX tables directly?
Or is it OK to update the application export file and import it back again?
Will it have any negative implications, or be considered malicious?
Or is it not recommended at all?
Personally, I wouldn't touch Oracle metadata; that would be the last option, if nothing else works and I'm very desperate.
I've edited the export file quite a few times (in older APEX versions), as the export used to create an invalid file. For example, a closing single quote was moved onto a new line and import complained about it, e.g. the second line here - see that lonely single quote?
p_button_redirect_url => 'javascript...tree.collapse_all(''tree124124124124');
',
p_button_execute_validations => 'Y', ...
So, there was nothing to do but edit the file and move that quote back to the end of the first line.
As the export is a plain SQL text file, there's no problem editing it. Just make sure to save the original so that you can revert to it if necessary.
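For example, a quick way to keep a safety copy and find the offending line before editing (the file name f100.sql is just an example of an APEX application export file, and the search string is whatever fragment the import error complains about):
cp f100.sql f100.sql.orig
grep -n "collapse_all" f100.sql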
You can do it, and I do sometimes. However it is very much "at your own risk". If you get it wrong you could update data that belongs to the APEX Builder itself and stop it working. Good luck contacting Oracle support when you do that!
Have you considered doing the operation at the database level?
Is it possible to script the schema of an entire database (SQL Server or Postgres) using DataGrip?
I know I can get the DDL for each table and view, and the source for each stored procedure/function on its own.
Can I get one script for all objects in database at once?
Alternatively, is there a way to search through the code of all routines at once? Say I need to find which ones use a #table temp table.
Since 2018.2 there is a feature called SQL Generator. It will generate the whole DDL for the database/schema, with several available options.
But if you just want to understand where a table is used, use the dedicated functionality called Find Usages (Alt+F7, or the context menu on a table name).
I was looking for this today and just found it. If you right-click the schema you want to copy and choose "Copy DDL", this will copy the create script to the clipboard.
To answer the second part of your question: a quick and easy way to search for #table in all of your procedures is the following query:
SELECT *
FROM information_schema.routines
WHERE routine_definition LIKE '%#table%'
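One caveat, assuming SQL Server: ROUTINE_DEFINITION in INFORMATION_SCHEMA is truncated at 4000 characters, so matches deep inside long procedures can be missed. A rough equivalent against sys.sql_modules, which holds the full text, avoids that:
SELECT OBJECT_SCHEMA_NAME(object_id) AS schema_name,
       OBJECT_NAME(object_id)        AS routine_name
FROM   sys.sql_modules
WHERE  definition LIKE '%#table%';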
For now, only dumping tables works. In the 2016.3 EAP, which will be available at the end of August, there will be integration with mysqldump and pg_dump.
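Outside the IDE, the native dump tools can already script a whole database; for example, a rough pg_dump invocation for Postgres (the database name is a placeholder):
pg_dump --schema-only --no-owner --file=schema.sql mydatabase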
OK, the question title probably isn't the best, but I'm looking for a good way to implement an extensible set of parameters for Oracle database applications that "stay with" the host/instance. By "stay with", I mean that I'd like to rule out just having an Oracle table of name/value pairs that would have to be modified if I create a test/QA instance by cloning the production instance. (For example, imagine a parameter called email_error_address that should be set to prod_support#abc.com in production and qa_support#abc.com in testing).
These parameters need to be accessed from both PL/SQL code running in the database as well as client-side code. I started out doing this by overloading the plsql_cc_flags init parameter (not a solution I'm proud of), but this is getting messy to maintain and parse.
[Edit]
Ideally, the implementation would allow changes to the list without restarting the instance, similar to the dynamically-modifiable init parameters.
You want to have a separate set of values for each environment. You want these values to be independent of the data, so that they don't get overridden if you import data from another instance.
The solution is to use an external table (providing you are on 9i or higher). Because external tables hold the data in an OS file they are independent of the database. To apply changed values all you need to do is overwrite the OS file.
All you need to do is ensure that the files for each environment are kept separate. This is easy enough if Test, QA, Production, etc. are on their own servers. If they are on the same server then you will need to distinguish them by file name or directory path; in either case you may need to issue a bit of DDL to correct the location in the event of a database refresh.
The drawback to using external tables is that they can be a bit of a performance overhead - they are really intended for bulk loading. If this is likely to be a problem you could use caching, with a user-defined namespace or CONTEXT. Load the values into memory using DBMS_SESSION.SET_CONTEXT(), either on demand or with an ON LOGON trigger. Retrieve the values with wrapper calls to SYS_CONTEXT(). Because the namespace is in session memory, retrieval is quite fast. René Nyffenegger has a simple example of working with CONTEXT: check it out.
While I've been writing this up I see you have added a requirement to change things on the fly. As I have said already, this is easy with an OS file, but the use of caching makes things slightly more difficult. The solution would be to use a globally accessible CONTEXT. Have a routine which loads all the values at startup, which you can also call whenever you refresh the OS file.
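A minimal sketch of the external-table-plus-context approach; all object names, the file name and the column layout are assumptions, so adjust to taste:
-- the OS file (e.g. /app/config/app_params.txt) holds one name,value pair per line
CREATE DIRECTORY app_config_dir AS '/app/config';

CREATE TABLE app_parameters_ext (
  param_name  VARCHAR2(100),
  param_value VARCHAR2(4000)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY app_config_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('app_params.txt')
);

-- globally accessible context so all sessions see the cached values
CREATE CONTEXT app_params USING app_params_pkg ACCESSED GLOBALLY;

CREATE OR REPLACE PACKAGE app_params_pkg AS
  PROCEDURE load_params;  -- call at startup and again whenever the file changes
END app_params_pkg;
/
CREATE OR REPLACE PACKAGE BODY app_params_pkg AS
  PROCEDURE load_params IS
  BEGIN
    FOR r IN (SELECT param_name, param_value FROM app_parameters_ext) LOOP
      DBMS_SESSION.SET_CONTEXT('APP_PARAMS', r.param_name, r.param_value);
    END LOOP;
  END load_params;
END app_params_pkg;
/

-- readers just call SYS_CONTEXT
SELECT SYS_CONTEXT('APP_PARAMS', 'email_error_address') FROM dual;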
You could use environment variables that you can set per oracle user (the account that starts up the Oracle database) or per server. The environment variables can be read with the DBMS_SYSTEM.GET_ENV procedure.
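A small sketch of reading an environment variable this way; the variable name is just an example, and note that DBMS_SYSTEM is an undocumented package, so you may need an explicit EXECUTE grant on it:
DECLARE
  l_value VARCHAR2(4000);
BEGIN
  DBMS_SYSTEM.GET_ENV('APP_ENVIRONMENT', l_value);  -- e.g. PROD or QA
  DBMS_OUTPUT.PUT_LINE('Running in: ' || l_value);
END;
/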
I tend to use a system_parameters table. If you're concerned with it being overwritten, put it in its own schema and make a public synonym.
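For example, something along these lines (the schema and object names are illustrative):
CREATE TABLE config_owner.system_parameters (
  param_name  VARCHAR2(100) PRIMARY KEY,
  param_value VARCHAR2(4000)
);

GRANT SELECT ON config_owner.system_parameters TO PUBLIC;
CREATE PUBLIC SYNONYM system_parameters FOR config_owner.system_parameters;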
@APC's answer is clever.
You could solve the performance overhead by adding a materialized view on top of the external table(s). You would refresh it after RMAN-cloning, and after each update of the config files.
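A rough sketch of that idea, assuming the external table from the earlier answer (names are placeholders):
CREATE MATERIALIZED VIEW app_parameters_mv
  BUILD IMMEDIATE
  REFRESH COMPLETE ON DEMAND
  AS SELECT param_name, param_value FROM app_parameters_ext;

-- after cloning or after the config file changes:
EXEC DBMS_MVIEW.REFRESH('APP_PARAMETERS_MV', 'C');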
I want to load data into a text file that is generated after executing "views" in Oracle. How can I achieve this in Oracle using UNIX? For example -
I want the same in Oracle on a UNIX box. Please help me out, as it already consumes a lot of time.
Your early response is highly appreciated!!
As Thomas asked, we need to know what you are doing with the "flat file". For example, if you're loading it into a spreadsheet or doing some other processing that expects a defined format, then you need to use SQL*Plus and spool to a file. If you're looking to save a table (data + table definition) for moving it to another Oracle database, then EXP/IMP is the tool to use.
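A rough SQL*Plus spool sketch; the view name and output path are placeholders:
SET HEADING OFF
SET FEEDBACK OFF
SET PAGESIZE 0
SET LINESIZE 32767
SET TRIMSPOOL ON
SPOOL /tmp/my_view.txt
SELECT col1 || '|' || col2 FROM my_view;
SPOOL OFF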
We generally describe the data retrieval process as "selecting" from a table/view, not "executing" a table/view.
If you have access to directories on the database server, and authority to create "Directory" objects in Oracle, then you have lots of options.
For example, you can use the UTL_FILE package (part of the PL/SQL built-ins) to read or write files at the operating system level.
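A minimal UTL_FILE sketch, assuming a directory object DATA_DIR already exists and MY_VIEW is the view in question (both names are placeholders):
DECLARE
  f UTL_FILE.FILE_TYPE;
BEGIN
  f := UTL_FILE.FOPEN('DATA_DIR', 'my_view.txt', 'w');
  FOR r IN (SELECT col1, col2 FROM my_view) LOOP
    UTL_FILE.PUT_LINE(f, r.col1 || '|' || r.col2);
  END LOOP;
  UTL_FILE.FCLOSE(f);
END;
/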
Or use the "external table" functionality to define objects that look like single tables to Oracle but are actually flat files at the OS level. Well documented in the Oracle docs.
Also, for one-time tasks, most of the tools for working with SQL and PL/SQL provide facilities for moving data to and from the database. In the Windows environment, Toad is good at that. So is Oracle's free SQL Developer, which runs on many platforms. You wouldn't want to use those for a process that runs every day, but they're fine for single moves. I've generally found these easier to use than SQL*Plus spooling, but that's a primitive version of the same functionality.
As stated by others, we need to know a bit more about what you're trying to do.
I find it hard to generate the DB scripts from TOAD. I get errors when executing the scripts, things like a looping chain of synonyms or certain statements failing to execute, etc.
Is there any seamless way, like connecting to a remote Oracle schema and just duplicating it to my local environment?
And also do synchronization along the way?
Syncing an entire schema, data and all, is fairly easily done with exp and imp:
$ exp username/password@source-sid CONSISTENT=Y DIRECT=Y OWNER=schema FILE=schema.exp
$ ⋮ # some command(s) to nuke objects, see below
$ imp username/password@dest-sid FROMUSER=schema FILE=schema.exp
You can import into a different schema if you want by using TOUSER in the imp command.
You'll need to get rid of all the objects if they already exist before running imp. You can either write a quick script to drop them all (look at the user_objects view), or just drop the user with cascade and re-create the user.
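For the "nuke" step, a rough sketch of the drop-and-recreate approach (user name, password and grants are placeholders; CASCADE drops everything the user owns, so be sure you're connected to the right database):
DROP USER schema_owner CASCADE;
CREATE USER schema_owner IDENTIFIED BY changeme;
GRANT CONNECT, RESOURCE TO schema_owner;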
There is probably a better way to do this, but this is quick to implement and it works.
If you are doing a one-off copy, exp/imp (or expdp/impdp in newer versions) is best.
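For the Data Pump route, a rough equivalent of the exp/imp commands above (schema, dump file and SID are placeholders; DATA_PUMP_DIR must exist as a directory object on the server):
$ expdp username/password@source-sid SCHEMAS=schema DIRECTORY=DATA_PUMP_DIR DUMPFILE=schema.dmp LOGFILE=schema_exp.log
$ impdp username/password@dest-sid SCHEMAS=schema DIRECTORY=DATA_PUMP_DIR DUMPFILE=schema.dmp REMAP_SCHEMA=schema:other_schema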
If you are progressing changes from dev to test to prod, then you should be using formal source control, with SQL or SQL*Plus scripts.
Schema Compare for Oracle should be able to achieve this, as it's a tool dedicated specifically to solving this task.
If you want it to happen in the background, there's a command line that lets you achieve this.