PL/SQL script to export an Oracle database

I want to export all the objects from a database (functions, procedures, views, triggers) for which a user has owner privileges. I know that SQL Developer has an option for exporting a database to a SQL file, but I want to do this from code. When I run the code I want it to create a file with a .sql extension which contains everything from the database. First of all, I want to know whether that is possible, and if it is, can anyone give me some hints on how to do it?
I started by creating a file:
CREATE DIRECTORY test_dir AS 'H:\';

DECLARE
  out_file UTL_FILE.FILE_TYPE;
BEGIN
  -- the directory object name must be passed in upper case
  out_file := UTL_FILE.FOPEN('TEST_DIR', 'test.sql', 'W');
  UTL_FILE.PUT_LINE(out_file, 'here will be the database export');
  UTL_FILE.FCLOSE(out_file);
END;
/

The Oracle export/import utilities will export/import database objects in user mode (use the OWNER parameter), using a binary file format:
http://docs.oracle.com/cd/B28359_01/server.111/b28319/exp_imp.htm
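For example, a classic user-mode export of everything one schema owns looks roughly like this (the schema name, password and file names are placeholders):
exp scott/password OWNER=scott FILE=scott_owner.dmp LOG=scott_owner.log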
However, if you want to export the metadata for your objects and your data into SQL files, you can query Oracle's catalog views: USER_OBJECTS for a user's database objects, USER_TABLES for tables, USER_CONSTRAINTS for constraints, and even USER_SOURCE to get the text of compiled objects. If you intend your .sql files to re-create objects and data, you are in for some heavy lifting, especially if you want your scripts to be portable. In that case I would look at 3rd-party tools.
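As a starting point, the catalog views can be queried directly, for example (MY_PROC is just a hypothetical procedure name):
-- list every object the current user owns
SELECT object_type, object_name
  FROM user_objects
 ORDER BY object_type, object_name;

-- pull the source of one compiled object, line by line
SELECT text
  FROM user_source
 WHERE type = 'PROCEDURE'
   AND name = 'MY_PROC'
 ORDER BY line;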

I guess this is the answer to what you are looking for. Hope it helps. This is just an example of what you are asking for; other object types can be incorporated as required.
spool <target path>/filename
SET SERVEROUTPUT ON SIZE UNLIMITED;
BEGIN
  -- dump the source of all PL/SQL objects owned by AVROY
  FOR i IN (SELECT al.*
              FROM all_source al
             WHERE owner = 'AVROY'
             ORDER BY name, line)
  LOOP
    dbms_output.put_line(i.text);
  END LOOP;

  -- dump the DDL of every table owned by AVROY
  FOR i IN (SELECT * FROM all_tables WHERE owner = 'AVROY')
  LOOP
    BEGIN
      dbms_output.put_line(dbms_metadata.get_ddl('TABLE', i.table_name, 'AVROY'));
    EXCEPTION
      WHEN OTHERS THEN
        NULL;  -- skip tables whose DDL cannot be extracted
    END;
  END LOOP;
END;
/
SPOOL OFF

You might want to use the export and import utilities of the Oracle database.
http://www.oracle-dba-online.com/export_and_import.htm

Related

Trying to create a DML file of an owner's inserts in Oracle

I am trying to create a DML file that contains all the INSERTs for a database, using only a script and asking only for the owner name. I found some documentation about creating files in Oracle and some more about how to get the INSERT statements.
This is the query that gets the inserts:
SELECT /*insert*/ * FROM ALL_TAB_COLUMNS WHERE OWNER = 'OwnerName';
And this is what I'm trying to do in order to create the file with the selected rows from the query:
DECLARE
  F1 UTL_FILE.FILE_TYPE;
  CURSOR C_TABLAS IS
    SELECT /*insert*/ * FROM ALL_TAB_COLUMNS WHERE OWNER = 'BETA';
  V_INSERT VARCHAR2(32767);
BEGIN
  OPEN C_TABLAS;
  LOOP
    FETCH C_TABLAS INTO V_INSERT;
    EXIT WHEN C_TABLAS%NOTFOUND;
    F1 := UTL_FILE.FOPEN('D:\Desktop\CENFOTEC\4 Cuatrimestre\Programación de Bases de Datos\Proyecto\FileTests', 'TestUno.dml', 'W');
    UTL_FILE.PUT_LINE(F1, V_INSERT);
    UTL_FILE.FCLOSE(F1);
  END LOOP;
  CLOSE C_TABLAS;
END;
I'm having trouble with the fetch; I'm getting this error: wrong number of values in the INTO list of a FETCH statement.
I know that it is a basic one, but I can't figure out how many columns I am getting from the query above.
Although I'm trying it this way, I wouldn't mind changing it; I need to create a DML file of all the inserts needed to replicate the database of the given user. Thanks a lot.
In SQL Developer, when you use:
SELECT /*insert*/ * FROM ALL_TAB_COLUMNS WHERE OWNER = 'OwnerName';
Then the /*insert*/ hint is processed by SQL Developer on the client side, which converts the returned result set into DML statements.
To quote @ThatJeffSmith in the answer where he gave the above solution:
here is a SQL Developer-specific solution
That behaviour is specific to the SQL Developer client application.
In the Oracle database, when you use:
SELECT /*insert*/ * FROM ALL_TAB_COLUMNS WHERE OWNER = 'OwnerName';
Then /*insert*/ is an inline comment; it is IGNORED and has zero effect on the output of the query.
Therefore, when you do:
DECLARE
  F1 UTL_FILE.FILE_TYPE;
  CURSOR C_TABLAS IS
    SELECT /*insert*/ * FROM ALL_TAB_COLUMNS WHERE OWNER = 'BETA';
  V_INSERT VARCHAR2(32767);
BEGIN
  OPEN C_TABLAS;
  LOOP
    FETCH C_TABLAS INTO V_INSERT;
    EXIT WHEN C_TABLAS%NOTFOUND;
    F1 := UTL_FILE.FOPEN('D:\Desktop\CENFOTEC\4 Cuatrimestre\Programación de Bases de Datos\Proyecto\FileTests', 'TestUno.dml', 'W');
    UTL_FILE.PUT_LINE(F1, V_INSERT);
    UTL_FILE.FCLOSE(F1);
  END LOOP;
  CLOSE C_TABLAS;
END;
/
The PL/SQL anonymous block is processed by the database's PL/SQL engine on the server side; it context-switches and passes the cursor's SQL to the database's SQL engine, where it is run, the /*insert*/ comment is ignored, and all the columns are returned.
I can't figure out how many columns I am getting from the query above.
One column for every column in the ALL_TAB_COLUMNS view. You can use:
SELECT * FROM all_tab_columns FETCH FIRST ROW ONLY;
and then count the columns. I made it 37 columns (but I might have miscounted).
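Or, instead of counting by eye, you can ask the data dictionary itself; ALL_TAB_COLUMNS is a SYS-owned view, so it also describes its own columns:
SELECT COUNT(*)
  FROM all_tab_columns
 WHERE owner = 'SYS'
   AND table_name = 'ALL_TAB_COLUMNS';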
However
Trying to generate INSERT statements that correspond to all the rows in the ALL_TAB_COLUMNS view so that you can recreate the database is WRONG. You need to generate the DDL statements for each table, not DML statements that try to modify a data dictionary table (and attempting to modify data dictionary tables will, likely as not, leave your database in an unusable state).
If you want to recreate the database then use the answers in this question, or back up the database and then restore it to the new database.
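If you do go the DDL route, a minimal sketch (assuming a directory object, here called EXP_DIR, has been created and you have write access to it) could look like this:
DECLARE
  f UTL_FILE.FILE_TYPE;
BEGIN
  f := UTL_FILE.FOPEN('EXP_DIR', 'schema_ddl.sql', 'W', 32767);
  FOR t IN (SELECT table_name FROM user_tables ORDER BY table_name)
  LOOP
    -- GET_DDL returns a CLOB; PUT_LINE copes as long as each statement stays under ~32K
    UTL_FILE.PUT_LINE(f, DBMS_METADATA.GET_DDL('TABLE', t.table_name));
    UTL_FILE.PUT_LINE(f, '/');
  END LOOP;
  UTL_FILE.FCLOSE(f);
END;
/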

How to rename multiple stored procedures in Oracle

I'm using Oracle 12c, and I'm in the following trouble:
I have multiple stored procedures like:
schema.TEST1, schema.TEST2, schema.TEST3...
Now I want to rename all of them to schema.TEST01, schema.TEST02, schema.TEST03... or to any name I want in a predefined format; this is for backup.
In Oracle, I can't rename a stored procedure using an ALTER ... RENAME statement as in SQL. How can I do this with one click?
Thanks!
It is nonsense, really: you do not need to back up procedures that way. But I took it as a challenge, and I'd like to present the code below. Make changes according to your schema and naming convention.
Use CLOB variables if the source text is too large for VARCHAR2.
DECLARE
  TYPE names_table IS TABLE OF VARCHAR2(50);
  names            names_table;
  TYPE source_txt_table IS TABLE OF VARCHAR2(32767);
  source_txt       source_txt_table;
  header           VARCHAR2(32767);
  final_source_txt VARCHAR2(32767);
BEGIN
  SELECT object_name
    BULK COLLECT INTO names
    FROM user_procedures
   WHERE object_type = 'PROCEDURE'
     AND object_name IN ('DO_SOMETHING_1', 'DO_SOMETHING_2');

  FOR i IN 1 .. names.COUNT
  LOOP
    -- fetch the procedure's source, line by line
    SELECT text
      BULK COLLECT INTO source_txt
      FROM all_source
     WHERE owner = USER
       AND type  = 'PROCEDURE'
       AND name  = names(i)
     ORDER BY line;

    -- turn the first source line into a CREATE OR REPLACE header with the new name
    source_txt(1) := 'CREATE OR REPLACE ' || source_txt(1);
    header := REGEXP_REPLACE(UPPER(source_txt(1)), names(i), 'HR.' || names(i) || '_bck'); -- make changes according to your new naming convention
    source_txt(1) := header;

    FOR j IN 1 .. source_txt.COUNT
    LOOP
      final_source_txt := final_source_txt || source_txt(j);
    END LOOP;

    EXECUTE IMMEDIATE final_source_txt;
    dbms_output.put_line('Success: ' || names(i));

    final_source_txt := NULL;
    header           := NULL;
    source_txt       := NULL;
  END LOOP;
END;
/
For backup? That's a rather poorly chosen backup system.
What if the database dies because of a disk failure? You'll lose everything, including your "backup" procedures.
How many "backups" do you plan to keep? For example, one of my schemas contains 643 procedures/functions/packages. With two backups I'm already close to 2K objects. If you perform the backup regularly (e.g. daily), in a matter of only a month I'd be close to 20K objects. I really wouldn't want to do that.
Therefore, why wouldn't you consider something else? For example:
a version control system (such as Git)
a Data Pump export as a "logical" backup (see the example below)
letting the DBA take care of RMAN backups
if you want to do it manually, some GUI tools (such as TOAD) let you select all objects and create a script; that option stores the source code as files on your hard disk, and you can then back those files up somewhere else (burn them to a DVD, copy them to a USB memory stick, another hard disk drive, somewhere on your network ...)
Finally, to answer your question: how to do what you asked for in one click? As far as I can tell, you can't. You'd first have to write a procedure which would do the job, but then you're back to my second objection to your approach. How would that procedure know that proc1 is the "original", while proc01 is a backup version? Why wouldn't someone name their procedure proc05 in the first place? That's a valid name.
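For the Data Pump option mentioned above, a metadata-only schema export is a one-liner (the credentials and file names are placeholders; DATA_PUMP_DIR must exist and be writable):
expdp hr/password directory=DATA_PUMP_DIR dumpfile=hr_code.dmp logfile=hr_code.log schemas=HR content=METADATA_ONLY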
You can also try using the DBMS_METADATA package to export the DDL of the schema's objects.
I have written an example; you can modify it according to your needs.
CREATE DIRECTORY external AS '/external/';

DECLARE
  h    PLS_INTEGER;
  th   PLS_INTEGER;
  fh   utl_file.file_type;
  ddls CLOB;
  sysd VARCHAR2(50);
BEGIN
  h := dbms_metadata.open('PROCEDURE');
  dbms_metadata.set_filter(h, 'SCHEMA', 'HR');
  th := dbms_metadata.add_transform(h, 'DDL');
  dbms_metadata.set_count(h, 50);
  -- fetch_clob returns up to 50 objects per call here; call it in a loop
  -- until it returns NULL if the schema has more procedures than that
  ddls := dbms_metadata.fetch_clob(h);
  SELECT TO_CHAR(SYSDATE, 'YYYYMMDDHH24MISS') INTO sysd FROM dual;
  fh := utl_file.fopen('EXTERNAL', 'SCHEMA_BCK_' || sysd || '.bck', 'w');
  -- note: UTL_FILE.PUT writes at most ~32K per call; write bigger CLOBs in chunks
  utl_file.put(fh, ddls);
  utl_file.fclose(fh);
  dbms_metadata.close(h);
END;
/
It is far safer against database failures and you will not unnecessarily populate your database schema with backup objects.

How to compile all procedures with a standard format in Oracle SQL Developer

I have a lot of procedures and functions in a schema in Oracle SQL Developer.
I want to know how to compile all procedures and functions with a standard format (so that afterwards all of them have the same format, as when you press Ctrl + F7 manually) in Oracle SQL Developer, automatically.
I have a lot of procedures and functions in a schema in oracle SQL developer, I want to know how to compile all procedures and functions
In the "Connections" view:
expand the connection to the schema
right click on "Procedures" (or "Functions")
in the context menu that pops up, choose "Compile All"
if you wish, you can view the PL/SQL block that is going to be run by looking at the "SQL" tab
press the "Apply" button to recompile everything.
I want to know how to [have] all procedures and functions with standard format
This has nothing to do with (re)compiling. You can apply whatever formatting (whitespace/case/etc.) rules you want to your code, and as long as the code remains syntactically correct, formatting does not affect whether it will recompile.
Go to "Tools" > "Preferences..." > "Database" > "SQL Formatter" and edit the appropriate formatting to your specification.
Then right-click on the procedure's/function's code and select "Format" (or press Ctrl + F7).
You will need to do this for each procedure and function as there does not appear to be a SQL Developer option to apply it to all objects in a schema.
Alternatively, you may compile through the database using a procedure referenced by a public synonym, created and authorized as below:
$ sqlplus / as sysdba

SQL> Create or Replace Procedure SYS.Pr_Compile_All Is
  v_command varchar2(1500);
Begin
  For c in
  (
    Select 'alter ' || o.object_type || ' ' || o.owner || '.' || o.object_name || ' compile' command1,
           'alter PACKAGE ' || o.owner || '.' || o.object_name || ' compile' command2,
           'alter PUBLIC SYNONYM ' || o.object_name || ' compile' command3,
           o.object_type,
           o.owner
      From dba_objects o
     Where o.status = 'INVALID'
  )
  Loop
    Begin
      v_command := c.command1;
      If c.object_type in ('FUNCTION', 'PROCEDURE', 'TRIGGER') Then v_command := v_command || ' debug'; End If;
      If c.object_type = 'PACKAGE BODY' Then v_command := c.command2 || ' debug body'; End If;
      If c.object_type = 'SYNONYM' and c.owner = 'PUBLIC' Then v_command := c.command3; End If;
      Execute Immediate v_command;
    Exception When Others Then null;  -- ignore objects that still fail to compile
    End;
  End Loop;
End;
/

SQL> Create or Replace Public Synonym Pr_Compile_All For SYS.Pr_Compile_All;
SQL> grant execute on Pr_Compile_All to public;
SQL> conn myschema/pwd
SQL> begin Pr_Compile_All; end;  -- call it from any schema you like, in this way
     /
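Afterwards you can check what is still invalid; anything left over usually has a genuine compilation error:
SELECT owner, object_type, object_name
  FROM dba_objects
 WHERE status = 'INVALID'
 ORDER BY owner, object_type, object_name;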
The 'best' way to look at this is via source control, and hopefully the source of truth is a subversion or Git project.
You can feed all of the files in a directory to our CLI with the FORMAT command. It will then go through each file in that folder, format the code, and write it to the supplied output directory.
You would then check those files in to your source control system.
c:\Program Files\Oracle\sqldev\18.1\sqldeveloper\sqldeveloper\bin>sdcli format input=c:\users\jdsmith\unformatted output=c:\users\jdsmith\formatted
Command Completed.
So here I go from 3 files unformatted to 3 files formatted, and if I open the same 'object' before and after...
All this is nice, but know that as soon as another developer checks out a file, they will immediately change the way it looks due to personal preferences. I'm not sure I've ever seen a successful 'formatting rules' system where everyone agrees to format the code the same way. But formatting it as it goes into your VCS seems to work OK... and will also help with diffs/deltas.
You could also theoretically also write some js and use SQLcl to grab each object, format it, and then compile it. Some examples are here.
I don't like the idea of compiling objects w/o looking at them first, but that's just me.
The best way to solve this problem in Oracle 19c/18c (tested) is to run this script on your DB server machine:
SQL> @?/rdbms/admin/utlrp.sql
(in SQL*Plus, ? stands for ORACLE_HOME). See the Oracle documentation for utlrp.sql for the details.

Creating INSERT statements to export the data from every existing table

I am currently not able to get a proper database dump, because the DB runs on a remote server inside a closed system: no remote copying is possible, and the only way to get files in or out is by being physically present at the server's location or via e-mail (but I can't send a several-GB dump by mail...).
However, I still need the data in order to import it into my dev system.
I figure the best way of doing this is by creating INSERT statements that contain the needed information.
The SQL Developer software can actually do this, but apparently it only works for one table at a time. As soon as you select multiple tables, the respective option disappears from the right-click menu and you can only export the DDL statements :-/
So this approach is not really viable for me, as there are hundreds of tables...
Does anyone know of a standardized way to create INSERT statements by querying the metadata views (user_tables, user_tab_columns, ...)? I could imagine that it might be possible to create all the statements by cleverly joining those metadata views. However, before sinking several hours into this approach, I'd appreciate it if someone could confirm this suspicion first.
Also, someone else must have had this problem before, so I hope some of you may be able to give me a hint on other approaches. Thanks in advance!
My answer isn't a full solution.
1) To extract the DDL use:
select table_name, dbms_metadata.get_ddl(OBJECT_TYPE => 'TABLE', NAME => table_name) from user_tables;
2) To extract the records from a table, use xmltype (over a ref cursor) and dbms_xmlstore to insert them.
Below is only a suggestion of how to do this.
create table test as select level as "LP" from dual connect by level < 100;

declare
  v_cursor sys_refcursor;
  xmlDoc   xmltype;
  insCtx   DBMS_XMLSTORE.ctxType;
  rows     NUMBER;
begin
  -- serialise the table's rows as a canonical ROWSET/ROW XML document
  open v_cursor for 'select * from test';
  xmlDoc := xmltype(v_cursor);
  close v_cursor;
  dbms_output.put_line(xmlDoc.getClobVal());  -- the extracted rows as XML

  -- insert the XML back into a table with matching column names
  insCtx := DBMS_XMLSTORE.newContext('test');
  DBMS_XMLSTORE.clearUpdateColumnList(insCtx);
  rows := DBMS_XMLSTORE.insertXML(insCtx, xmlDoc);
  dbms_output.put_line('ROWS inserted: ' || rows);
  DBMS_XMLSTORE.closeContext(insCtx);
  commit;
end;
/
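Since the goal is to move the data to another system, you can also write that XML to a file on the source side and re-load it with DBMS_XMLSTORE on the dev side. A minimal sketch of the writing part, assuming a directory object named EXP_DIR exists (TEST is the demo table created above):
DECLARE
  v_cursor sys_refcursor;
  v_clob   CLOB;
  v_file   UTL_FILE.FILE_TYPE;
  v_pos    PLS_INTEGER := 1;
  c_chunk  CONSTANT PLS_INTEGER := 32000;
BEGIN
  OPEN v_cursor FOR 'select * from test';
  v_clob := xmltype(v_cursor).getClobVal();
  CLOSE v_cursor;

  -- write the CLOB out in chunks: UTL_FILE.PUT takes at most ~32K per call
  v_file := UTL_FILE.FOPEN('EXP_DIR', 'test_rows.xml', 'W', 32767);
  WHILE v_pos <= DBMS_LOB.GETLENGTH(v_clob) LOOP
    UTL_FILE.PUT(v_file, DBMS_LOB.SUBSTR(v_clob, c_chunk, v_pos));
    UTL_FILE.FFLUSH(v_file);
    v_pos := v_pos + c_chunk;
  END LOOP;
  UTL_FILE.FCLOSE(v_file);
END;
/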

Oracle sample data problems

So, I have this Java-based data transformation/masking tool, which I wanted to test out on Oracle 10g. The good part about Oracle 10g is that you get a load of sample schemas, with half a million records in some of them. The schemas are SH, OE, HR, IX, etc. So I installed 10g and found that the installation scripts are under ORACLE_HOME/demo/scripts.
I customized these scripts a bit to run in batch mode. That solves one half of my requirement: creating source data for testing my data transformation software.
The second half of the requirement is to create the same schemas under different names (TR_HR, TR_OE and so on...) without any data. These schemas will be my target schemas. In short, my software will pick up data from a table in one schema and load it into the same table in a different schema.
Now, I have two issues with creating my target schemas and emptying them.
I would like this to be a batch job, but in the Oracle scripts the sample schema names are not configurable. So I tried creating a script, replacing OE with TR_OE, HR with TR_HR and so on. However, this approach is rather irritating because the sample schemas are fairly complicated in the way they are created; Oracle creates synonyms, views, materialized views, data types and a lot of other weird stuff.
I would like the target schemas (TR_HR, TR_OE, ...) to be empty, but some of the schemas have circular references, which do not allow me to delete the data. The only workaround seems to be removing certain foreign keys, deleting the data and then adding the constraints back.
Is there any easy way to do all this without the fuss? I need a complicated data set for my testing (complicated as in tables with triggers, multiple hierarchies... for instance, a child table that has children up to 5 levels deep, a parent table that refers to an IOT table, and an IOT table that refers to a non-IOT table, etc.). The sample schemas are just about perfect from a data-set perspective. The only challenge I see is in automating the whole process of loading up the source schemas, then creating the target schemas and emptying them. I appreciate your help and suggestions.
UPDATE
The main script that you are required to run for manually installing oracle sample schemas is mkplug.sql. Here is the line that loads the schemas up from a dmp file:
host imp "'sys/&&password_sys AS SYSDBA'" transport_tablespace=y file=&imp_file log=&imp_logfile datafiles='&datafile' tablespaces=EXAMPLE tts_owners=hr,oe,pm,ix,sh
Well, I tried modifying this line (after patching up path-related issues in mkplug.sql and all the other SQL files) to this:
host imp "'sys/&&password_sys AS SYSDBA'" rows=n transport_tablespace=y file=&imp_file log=&imp_logfile datafiles='&datafile' tablespaces=EXAMPLE tts_owners=hr,oe,pm,ix,sh
And... it did NOT help. The schema got created with its row data, despite the rows=n attribute :(
Since you're already familiar with exp/imp (or expdp/impdp) from the Oracle scripts that use the .dmp file, why not just:
Create the empty TR_xxx schemas
Populate the TR_xxx schemas from the xxx .dmp file with the FROMUSER/TOUSER options and ROWS=N (similar options exist for expdp/impdp)
[Edit after reading your comment about the transportable tablespaces]
I didn't know that the Oracle scripts were using transportable tablespaces and that multiple schemas were being imported from a single file. This is probably the most straightforward way to create your new empty TR schemas:
Start with the standard, populated database built with the Oracle scripts
Create no-data export files on a schema-by-schema basis (OE shown) by:
exp sys/&&password_sys AS SYSDBA file=oe_nodata.dmp log=oe_nodata_exp.log owner=OE rows=N grants=N
(You should only have to do this once, and this dmp file can be reused)
Now, your script should:
Drop any TR_ users with the CASCADE option
Re-create the TR_ users
Populate the schema objects (OE shown) by:
host imp "'sys/&&password_sys AS SYSDBA'" file=oe_nodata.dmp log=tr_oe_imp.log fromuser=OE touser=TR_OE
Here is an anonymous block which, for a given schema, disables triggers and foreign keys, truncates all the tables and then re-enables the triggers and foreign keys. It uses TRUNCATE for speed, but obviously this means no rollback, so be careful which schema name you supply! It's easy enough to convert that call into a DELETE FROM statement if you prefer.
The script is a fine example of cut'n'paste programming and would no doubt benefit from some refactoring to remove the repetition.
begin
<< dis_triggers >>
for trgs in ( select owner, trigger_name
from all_triggers
where table_owner = '&&schema_name' )
loop
execute immediate 'alter trigger '||trgs.owner||'.'||trgs.trigger_name
||' disable';
end loop dis_triggers;
<< dis_fkeys >>
for fkeys in ( select owner, table_name, constraint_name
from all_constraints
where owner = '&&schema_name'
and constraint_type = 'R')
loop
execute immediate 'alter table '||fkeys.owner||'.'||fkeys.table_name
||' disable constraint '||fkeys.constraint_name;
end loop dis_fkeys;
<< zap_tables >>
for tabs in ( select owner, table_name
from all_tables
where owner = '&&schema_name' )
loop
execute immediate 'truncate table '||tabs.owner||'.'||tabs.table_name
||' reuse storage';
end loop zap_tables;
<< en_fkeys >>
for fkeys in ( select owner, table_name, constraint_name
from all_constraints
where owner = '&&schema_name'
and constraint_type = 'R')
loop
execute immediate 'alter table '||fkeys.owner||'.'||fkeys.table_name
||' enable constraint '||fkeys.constraint_name;
end loop en_fkeys;
<< en_triggers >>
for trgs in ( select owner, trigger_name
from all_triggers
where table_owner = '&&schema_name' )
loop
execute immediate 'alter trigger '||trgs.owner||'.'||trgs.trigger_name
||' enable';
end loop en_triggers;
end;
/
