I want to have a backup of a specific table because I want to change one of its fields; if the changes don't work, I can apply the backup and restore the initial state. I'm using PL/SQL Developer.
The simplest option is CTAS (Create Table As Select), i.e.
create table my_table_backup as select * from my_table;
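A minimal backup-and-restore round trip with CTAS might look like this (a sketch using the table names above; note that CTAS copies the data only, not indexes, constraints or triggers):

```sql
-- Back up the current contents
create table my_table_backup as select * from my_table;

-- ... make your changes to my_table ...

-- If the changes don't work, restore the initial state:
truncate table my_table;
insert into my_table select * from my_table_backup;
commit;

-- Once you're happy with the result, clean up:
drop table my_table_backup purge;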
Or, use Data Pump Export / Import utilities. Or, as it is just a single table, the original EXP / IMP utilities might also work.
Or, spool data into a CSV file and load it back using SQL*Loader (or external tables feature).
Quite a few options; I'd start with option 1 (CTAS).
Related
The goal is to export an existing test Oracle database to SQL so that it can be run in a test pipeline using a Docker image's /container-entrypoint-initdb.d bootstrap. Currently we mount an oradata export, but it's opaque what the actual data is, hence the desire to use SQL so that it can be maintained via git.
I'm aware you can use the GUI Oracle SQL Developer Export Wizard to export a database to SQL.
Though how to do it via command line tools? I've tried many things like:
impdp test/test@XEPDB1 dumpfile=export.dmp directory=DUMP_DIR sqlfile=ddl.sql
And the best I can achieve is exporting the schema, however it is missing the actual data, e.g.
INSERT INTO currencies_countries (currency_id, country_id) VALUES ('EUR', 'CYP');
You have the exp utility, which exports to a dump file that can be imported into another Oracle DB using imp. More recent are the Data Pump equivalents, expdp and impdp.
See: https://oracle-base.com/articles/10g/oracle-data-pump-10g for instance.
What do you mean by "export to SQL"?
The SQL Developer client tool can export a table as SQL inserts, for instance. But exporting table by table that way would be quite an effort. Moreover, if you have millions of rows, those row-by-row inserts won't perform well.
Better to write to tab-separated text files, which, in case of need, may be imported into an Oracle DB using sqlldr, or may be imported into some other kind of database. Still, tab-separated files won't work well for tables having CLOB, XMLTYPE or object-type columns.
You could write to a text file by spooling sqlplus output to a file, having set linesize long enough to fit the column lengths and set pagesize 0 (no page headings).
Or you could write to a file in an Oracle directory via a PL/SQL program using utl_file.
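For the command-line route asked about in the question, SQLcl (Oracle's command-line sibling of SQL Developer) can produce the same INSERT-style output as the GUI export wizard — a minimal sketch, assuming the currencies_countries table from the question:

```sql
-- Run in SQLcl (not classic SQL*Plus):
set sqlformat insert
spool currencies_countries.sql
select * from currencies_countries;
spool off
```

The spooled file then contains one INSERT statement per row, which can be committed to git and replayed from /container-entrypoint-initdb.d. The row-by-row performance caveat above still applies for large tables.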
How do I import data into Oracle when there is a primary key conflict, overwriting old data with new data, using just the command line (imp or impdp)? Why doesn't Oracle supply an overwrite option?
Try IMPDP from the command line. If you don't really care about existing data (because newly inserted values should replace existing ones anyway), see whether its TABLE_EXISTS_ACTION parameter helps; it can be set to one of SKIP | APPEND | TRUNCATE | REPLACE. See the documentation for more info; I presume you might be interested in the TRUNCATE or REPLACE option.
On the other hand, if target table contains data that doesn't exist in the .dmp file and you'd want to keep it, then you certainly don't want to use what I suggested earlier as you'd lose everything.
In that case, impdp can't do what you want on its own. You'll have to import the data into a temporary table and then write SQL statement(s) which do what you want: insert new rows, update existing ones.
Alternatively, save the existing data into a temporary table, use the TRUNCATE or REPLACE impdp option, and then fix what's wrong.
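The insert-new/update-existing step can be a single MERGE — a sketch, assuming the imported rows landed in a hypothetical staging table MY_TABLE_STG with example columns id (the primary key) and val:

```sql
merge into my_table t
using my_table_stg s
  on (t.id = s.id)              -- match on the primary key
when matched then
  update set t.val = s.val      -- overwrite old data with new
when not matched then
  insert (id, val)
  values (s.id, s.val);         -- keep rows that only exist in the dump
commit;
```

This preserves target rows that don't exist in the .dmp file, which is exactly the case the plain TABLE_EXISTS_ACTION options can't handle.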
I need to create a table from another table, along with its indexes and constraints, in another schema in Oracle. I know about CTAS syntax, but it doesn't take the indexes and constraints with it. Is there any way to do it?
Also, is there any way to flash back a procedure, trigger or package after dropping it?
The simplest approach is to treat DDL statements like any other piece of application code, and keep them as scripts in a source control repository.
However, it's easy to be wise after the event. If you're working in an environment where the schema is a bit of a free fire zone there are various options.
The best thing is to use DBMS_METADATA to re-create the DDL statements. These can be saved as scripts, run in other schemas and - crucially - stored somewhere which gets backed-up, ideally source control.
Generating all the DDL for a table and its dependent objects is reasonably straightforward. The DBMS_METADATA functions return CLOBs, which is not ideal but simple enough to spool out in SQL*Plus:
SQL> set long 10000
SQL> set heading off
SQL> spool create_tab_t23.sql
SQL> select dbms_metadata.get_ddl('TABLE', 'T23') from dual;
SQL> select dbms_metadata.get_dependent_ddl('INDEX', 'T23') from dual;
SQL> select dbms_metadata.get_dependent_ddl('TRIGGER', 'T23') from dual;
SQL> select dbms_metadata.get_dependent_ddl('CONSTRAINT', 'T23') from dual;
SQL> spool off
Having to specify the individual object types one by one is a bit of a nuisance. Fortunately most IDEs (Oracle SQL Developer, PL/SQL Developer, TOAD, etc.) provide handy right-click menu options to handle all this for us.
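For the cross-schema case in the question, Data Pump can carry the table's indexes, constraints and triggers along in one go — a sketch, assuming example schema names A and B, an existing DUMP_DIR directory object, and placeholder credentials:

```shell
# Export one table from schema A, then import it into schema B
expdp system/password tables=A.T23 directory=DUMP_DIR dumpfile=t23.dmp
impdp system/password directory=DUMP_DIR dumpfile=t23.dmp remap_schema=A:B
```

REMAP_SCHEMA rewrites the ownership during import, so the dependent objects are recreated in the target schema without hand-editing any DDL.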
The easiest way to copy an entire Oracle table (structure, contents, indexes, constraints, triggers, etc.) is to use Oracle's export and import utilities (expdp and impdp). These are command-line utilities that you run on the database server using parameters that you provide. Or, you can use OEM (Oracle Enterprise Manager) to run these for you. Note that they depend on having at least one "logical directory" defined where the "dump" file can get written to by export and read from by import.
This method works well when you want to copy a table from one schema to another, or from one database to another, and keep the same table name. If, however, your goal is to create a copy in the same schema but with a different name, the process gets more complex. You can still use export, but then, instead of doing the actual import directly, have import create a text file containing all of the SQL commands it finds in the export file. The steps are roughly:
1. Edit that text file: change the index, constraint and trigger names that need to be changed, and change the table name in those commands to the new table name (but do not change the table name in the "create table ..." command).
2. Rename the existing table to something else and run just the "create table ..." command (with the original table name) from the script file.
3. Run import to get just the data.
4. Rename the new table to the name you want it to have, and rename the original table back to its original name.
5. Manually run the other SQL scripts from the script file.
You don't want those triggers, constraints and indexes in place when you do the actual data import.
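On recent Oracle versions (11g and later), Data Pump itself can shortcut much of the above — a sketch with placeholder credentials and example table names, not taken from the answer above:

```shell
# Write the DDL from the dump to an editable script instead of executing it:
impdp scott/password directory=DUMP_DIR dumpfile=t.dmp sqlfile=ddl.sql

# Or let Data Pump rename the table during import (11g+):
impdp scott/password directory=DUMP_DIR dumpfile=t.dmp remap_table=my_table:my_table_copy
```

One caveat with REMAP_TABLE: dependent object names (indexes, constraints, triggers) are not remapped, so creating the copy in the same schema can still hit name conflicts you'd need to resolve by hand.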
If I have 1 table in a database, and I want to export it, then import it into new table in a different database?
Should I set up a table with the same fields in database two, or is there a way to create an empty table so the import will work?
If you have a dblink established, a quick way to copy a table without intermediate files would be to execute this from the target database (the one where you want the new table to be copied):
create table my_new_table as
select *
from my_original_table@my_original_database;
This presupposes the dblink, of course, and also that there is sufficient redo space to allow that much data to be copied in one fell swoop.
If not, you could also build the table this way and then move the data in chunks with a series of insert into ... select transactions.
If you only want the structure (your question sort of implied that, but I wasn't sure), you can always add a where 1 = 3 to copy only the structure.
This won't bring over constraints or indexes, but I'm not sure if that matters for what you seek.
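Putting the structure-only copy and the chunked load together — a sketch using the table and dblink names above, with a hypothetical numeric key id as the chunking column:

```sql
-- Copy only the structure: the always-false predicate returns no rows
create table my_new_table as
select *
from my_original_table@my_original_database
where 1 = 3;

-- Then move the data in committed chunks to limit undo/redo pressure:
insert into my_new_table
select *
from my_original_table@my_original_database
where id between 1 and 100000;
commit;
-- ...repeat for subsequent id ranges...
```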
I have a question: I have a table (say tableA) in a database (say dbA) and I need to mirror tableA as another table (say tableB) in another database (say dbB).
I know this can be done via a (materialized) view or via Informatica. But my problem is that I need to sync DDL as well. For example, if a column is added to tableA, the column should automatically be reflected in tableB.
Can this be done directly via Oracle or Informatica?
(Or will I have to write a procedure to sync the tables based on all_tab_cols?)
Yes, you could:
1. create another database as a logical standby database with Data Guard
2. use Oracle Streams
I would use (2) if you just need a single table in the other database or (1) if you need an entire schema (or more).