Trying to create a DML file of the owner's inserts in Oracle

I am trying to create a DML file that contains all the inserts for a database, using only a script that asks for the owner name. I found some documentation about creating files in Oracle, and some more about how to get the insert statements.
This is the query that gets the inserts:
SELECT /*insert*/ * FROM ALL_TAB_COLUMNS WHERE OWNER = 'OwnerName';
And this is what I'm trying to do to create the file with the rows returned by the query:
DECLARE
  F1 UTL_FILE.FILE_TYPE;
  CURSOR C_TABLAS IS
    SELECT /*insert*/ * FROM ALL_TAB_COLUMNS WHERE OWNER = 'BETA';
  V_INSERT VARCHAR2(32767);
BEGIN
  OPEN C_TABLAS;
  LOOP
    FETCH C_TABLAS INTO V_INSERT;
    EXIT WHEN C_TABLAS%NOTFOUND;
    F1 := UTL_FILE.FOPEN('D:\Desktop\CENFOTEC\4 Cuatrimestre\Programación de Bases de Datos\Proyecto\FileTests', 'TestUno.dml', 'W');
    UTL_FILE.PUT_LINE(F1, V_INSERT);
    UTL_FILE.FCLOSE(F1);
  END LOOP;
  CLOSE C_TABLAS;
END;
I'm having trouble with the fetch; I'm getting this error: "wrong number of values in the INTO list of a FETCH statement".
I know that it is a basic one, but I can't figure out how many columns I am getting from the query above.
Although I'm trying it this way, I wouldn't mind changing it; I need to create a DML file of all the inserts needed to replicate the database of the given user. Thanks a lot.

In SQL Developer, when you use:
SELECT /*insert*/ * FROM ALL_TAB_COLUMNS WHERE OWNER = 'OwnerName';
then the /*insert*/ hint is processed client-side by SQL Developer, which converts the returned result set into DML statements.
To quote @ThatJeffSmith in his answer where he gave the above solution:
here is a SQL Developer-specific solution
That behaviour is specific to the SQL Developer client application.
In the Oracle database, when you use:
SELECT /*insert*/ * FROM ALL_TAB_COLUMNS WHERE OWNER = 'OwnerName';
Then /*insert*/ is an inline comment and it is IGNORED and has zero effect on the output of the query.
Therefore, when you do:
DECLARE
  F1 UTL_FILE.FILE_TYPE;
  CURSOR C_TABLAS IS
    SELECT /*insert*/ * FROM ALL_TAB_COLUMNS WHERE OWNER = 'BETA';
  V_INSERT VARCHAR2(32767);
BEGIN
  OPEN C_TABLAS;
  LOOP
    FETCH C_TABLAS INTO V_INSERT;
    EXIT WHEN C_TABLAS%NOTFOUND;
    F1 := UTL_FILE.FOPEN('D:\Desktop\CENFOTEC\4 Cuatrimestre\Programación de Bases de Datos\Proyecto\FileTests', 'TestUno.dml', 'W');
    UTL_FILE.PUT_LINE(F1, V_INSERT);
    UTL_FILE.FCLOSE(F1);
  END LOOP;
  CLOSE C_TABLAS;
END;
/
The PL/SQL anonymous block will be processed server-side by the database's PL/SQL engine, which context-switches and passes the cursor's SQL to the database's SQL engine. There the query runs, the /*insert*/ comment is ignored, and all the columns are returned.
I can't figure out how many columns I am getting from the query above.
One column for every column of the ALL_TAB_COLUMNS view. You can use:
SELECT * FROM all_tab_columns FETCH FIRST ROW ONLY;
and then count the columns. I made it 37 columns (but I might have miscounted).
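(Rather than counting by hand, you can ask the dictionary itself; ALL_TAB_COLUMNS is a SYS-owned view, and the exact count varies by Oracle version:)

SELECT COUNT(*)
  FROM all_tab_columns
 WHERE owner = 'SYS'
   AND table_name = 'ALL_TAB_COLUMNS';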
However
Trying to generate INSERT statements for all the rows in ALL_TAB_COLUMNS so that you can recreate the database is WRONG. You need to generate the DDL statements for each table, not DML statements that try to modify a data dictionary view (attempting to modify the data dictionary directly will, likely as not, leave your database in an unusable state).
If you want to recreate the database then use the answers to this question, or back up the database and restore it into the new one.
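For the DDL side, a hedged sketch using the documented DBMS_METADATA and UTL_FILE packages (EXPORT_DIR is an assumed directory object, which UTL_FILE requires instead of an OS path; any DDL longer than 32,767 characters would need to be written in chunks):

DECLARE
  f UTL_FILE.FILE_TYPE;
BEGIN
  f := UTL_FILE.FOPEN('EXPORT_DIR', 'beta_ddl.sql', 'w', 32767);
  FOR t IN (SELECT table_name FROM all_tables WHERE owner = 'BETA') LOOP
    -- GET_DDL returns the CREATE TABLE statement as a CLOB
    UTL_FILE.PUT_LINE(f, DBMS_METADATA.GET_DDL('TABLE', t.table_name, 'BETA'));
  END LOOP;
  UTL_FILE.FCLOSE(f);
END;
/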

Related

How can we use oracle private temporary tables in a pl/sql block?

I see that the concept of a temporary table in Oracle is quite different from other databases like SQL Server. In Oracle, we have the concept of a global temporary table: we create it only once, and each session fills it with its own data, which is not how other databases work.
In 18c, Oracle has introduced the concept of private temporary tables, which can be dropped after use like in other databases. But how do we use one in a PL/SQL block?
I tried using it with dynamic SQL (EXECUTE IMMEDIATE), but it is giving me a "table must be declared" error. What do I do here?
But how do we use it in a PL/SQL block?
If what you mean is, how can we use private temporary tables in a PL/SQL program (procedure or function) the answer is simple: we can't. PL/SQL programs need to be compiled before we can call them. This means any table referenced in the program must exist at compilation time. Private temporary tables don't change that.
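To illustrate (a sketch; p_static and ora$ptt_demo are made-up names): a stored procedure with a static reference to a private temporary table is created invalid, with PL/SQL: ORA-00942 among its compilation errors, because no such table exists when the procedure is compiled.

-- fails to compile: ora$ptt_demo does not exist at compilation time
CREATE OR REPLACE PROCEDURE p_static IS
BEGIN
  INSERT INTO ora$ptt_demo VALUES (1);
END;
/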
The private temporary table is intended for use in ad hoc SQL work. It allows us to create a data structure we can use in SQL statements for the duration of a session, to make life easier for ourselves.
For instance, suppose I have a massive table of sales data (low-level transactions) and my task is to investigate monthly trends, so I only need the total sales by month. Unfortunately, there is no materialized view providing this summary, and I don't want to include the aggregating query in my select statements. In previous versions I would have had to create a permanent table (and remember to drop it afterwards), but in 18c I can use a private temporary table to stage my summary just for the session.
create private temporary table ora$ptt_sales_summary (
sales_month date
, total_value number )
/
insert into ora$ptt_sales_summary
select trunc(sales_date, 'MM')
, sum (qty*price)
from massive_sales_table
group by trunc(sales_date, 'MM')
/
select *
from ora$ptt_sales_summary
order by sales_month
/
Obviously we can write anonymous PL/SQL blocks in our session but let's continue assuming that's not what you need. So what is the equivalent of a private temporary table in a permanent PL/SQL program? Same as it's been for several versions now: a PL/SQL collection or a SQL nested table type.
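As a sketch of that alternative (reusing the hypothetical massive_sales_table from the example above):

DECLARE
  -- a collection of records standing in for the temporary table
  TYPE t_month_total IS RECORD (sales_month DATE, total_value NUMBER);
  TYPE t_summary IS TABLE OF t_month_total;
  l_summary t_summary;
BEGIN
  SELECT TRUNC(sales_date, 'MM'), SUM(qty * price)
    BULK COLLECT INTO l_summary
    FROM massive_sales_table
   GROUP BY TRUNC(sales_date, 'MM');

  FOR i IN 1 .. l_summary.COUNT LOOP
    DBMS_OUTPUT.PUT_LINE(TO_CHAR(l_summary(i).sales_month, 'YYYY-MM')
                         || ': ' || l_summary(i).total_value);
  END LOOP;
END;
/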
Private temporary tables (available from Oracle 18c) are dropped at the end of the session or transaction, depending on how the PTT is defined.
The ON COMMIT DROP DEFINITION option creates a private temporary table that is transaction-specific. At the end of the transaction, Oracle drops both the table definition and its data.
The ON COMMIT PRESERVE DEFINITION option creates a private temporary table that is session-specific. Oracle removes all data and drops the table at the end of the session.
You do not need to drop it manually. Oracle will do it for you.
CREATE PRIVATE TEMPORARY TABLE ora$ptt_temp_table (
......
)
ON COMMIT DROP DEFINITION;
-- or
-- ON COMMIT PRESERVE DEFINITION;
For example, with ON COMMIT DROP DEFINITION the table is dropped as soon as a COMMIT is executed, while with ON COMMIT PRESERVE DEFINITION it is retained after a COMMIT and only dropped at the end of the session.
Cheers!!
It works with dynamic SQL:
declare
  cnt int;
begin
  execute immediate 'create private temporary table ora$ptt_tmp (id int)';
  execute immediate 'insert into ora$ptt_tmp values (55)';
  execute immediate 'insert into ora$ptt_tmp values (66)';
  execute immediate 'insert into ora$ptt_tmp values (77)';
  execute immediate 'select count(*) from ora$ptt_tmp' into cnt;
  dbms_output.put_line(cnt);
  execute immediate 'delete from ora$ptt_tmp where id = 66';
  cnt := 0;
  execute immediate 'select count(*) from ora$ptt_tmp' into cnt;
  dbms_output.put_line(cnt);
end;
Example here:
https://livesql.oracle.com/apex/livesql/s/l7lrzxpulhtj3hfea0wml09yg

How to find the columns used in a dynamic query without executing the whole query

Problem Statement
I have a dynamic SQL statement which I need to store in a table, but before storing the SQL I need to validate it against the list of columns stored in another table.
Without executing the query, is it possible to find the names of the columns in the select?
Approach 1
The only option I can think of is to use the explain plan of the query and read the metadata in the data dictionary tables. But unfortunately I am not able to find any table with such data. Please let me know if you know of such views.
Approach 2
Use the DBMS_SQL.DESCRIBE_COLUMNS package to find the column names, but I believe this will execute the whole query.
You don't need to execute the query to get the column names, you just need to parse it; e.g. as a simple example:
set serveroutput on
declare
  l_statement varchar2(4000) := 'select * from employees';
  l_c pls_integer;
  l_col_cnt pls_integer;
  l_desc_t dbms_sql.desc_tab;
begin
  l_c := dbms_sql.open_cursor;
  dbms_sql.parse(c=>l_c, statement=>l_statement, language_flag=>dbms_sql.native);
  dbms_sql.describe_columns(c=>l_c, col_cnt=>l_col_cnt, desc_t=>l_desc_t);
  for i in 1..l_col_cnt loop
    dbms_output.put_line(l_desc_t(i).col_name);
  end loop;
  dbms_sql.close_cursor(l_c);
exception
  when others then
    if (dbms_sql.is_open(l_c)) then
      dbms_sql.close_cursor(l_c);
    end if;
    raise;
end;
/
which outputs:
EMPLOYEE_ID
FIRST_NAME
LAST_NAME
EMAIL
PHONE_NUMBER
HIRE_DATE
JOB_ID
SALARY
COMMISSION_PCT
MANAGER_ID
DEPARTMENT_ID
PL/SQL procedure successfully completed.
You can do whatever validation you need on the column names inside the loop.
Bear in mind that you'll only see (and validate) the column names or aliases for column expressions, which won't necessarily reflect the data that is actually being retrieved. Someone could craft a query that pulls any data from anywhere it has permission to access, but then gives the columns/expression aliases that are considered valid.
If you're trying to restrict access to specific data then look into other mechanisms like views, virtual private database, etc.
DBMS_SQL.PARSE will not execute a SELECT statement but it will execute a DDL statement. If the string 'select * from employees' is replaced by 'drop table employees' the code will fail but the table will still get dropped.
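A minimal sketch of that foot-gun (untested here; obviously do not run it against a table you need):

DECLARE
  c PLS_INTEGER := DBMS_SQL.OPEN_CURSOR;
BEGIN
  -- For DDL, PARSE does not merely parse: the statement is executed at this point
  DBMS_SQL.PARSE(c, 'drop table employees', DBMS_SQL.NATIVE);
  DBMS_SQL.CLOSE_CURSOR(c);
END;
/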
If you're only worried about the performance of retrieving the metadata then Alex Poole's answer will work fine.
If you're worried about running the wrong statement types then you'll want to make some adjustments to Alex Poole's answer.
It is surprisingly difficult to tell if a statement is a SELECT instead of something else. A simple check that the string begins with select will work 99% of the time, but getting from 99% to 100% is a huge amount of work. Simple regular expressions cannot keep up with all the different keywords, comments, alternative quoting formats, spaces, etc. For example, all of the following are valid SELECT statements:
/*comment in front -- */ select * from dual
select * from dual
with asdf as (select * from dual) select * from asdf;
((((((select * from dual))))));
If you need 100% accuracy I recommend you use my open source PLSQL_LEXER. Once installed you can reliably test the command types like this:
select
statement_classifier.get_command_name(' /*comment*/ ((select * from dual))') test1,
statement_classifier.get_command_name('alter table asdf move compress') test2
from dual;
TEST1   TEST2
------  -----------
SELECT  ALTER TABLE

How to create a table and insert in the same statement

I'm new to Oracle and I'm struggling with this:
DECLARE
  cnt NUMBER;
BEGIN
  SELECT COUNT(*) INTO cnt FROM all_tables WHERE table_name like 'Newtable';
  IF (cnt=0) THEN
    EXECUTE IMMEDIATE 'CREATE TABLE Newtable ....etc';
  END IF;
  COMMIT;
  SELECT COUNT(*) INTO cnt FROM Newtable where id='something';
  IF (cnt=0) THEN
    EXECUTE IMMEDIATE 'INSERT INTO Newtable ....etc';
  END IF;
END;
This keeps crashing and gives me a "PL/SQL: ORA-00942: table or view does not exist" error on the insert line. How can I avoid this? Or what am I doing wrong? I want these two statements (in reality it's a lot more, of course) in a single transaction.
It isn't the insert that is the problem, it's the select two lines before. You have three statements within the block, not two. You're selecting from the same new table that doesn't exist yet. You've avoided that in the insert by making that dynamic, but you need to do the same for the select:
EXECUTE IMMEDIATE q'[SELECT COUNT(*) FROM Newtable where id='something']'
INTO cnt;
SQL Fiddle.
Creating a table at runtime seems wrong though. You said 'for safety issues the table can only exist if it's filled with the correct dataset', which doesn't entirely make sense to me: even if this block is creating and populating it in one go, anything that relies on it will fail or be invalidated until this runs. If this is part of the schema creation then making it dynamic doesn't seem to add much. You also said you wanted both to happen in one transaction, but the DDL will do an implicit commit; you can't roll back DDL, and your manual commit will start a new transaction for the insert(s) anyway. Perhaps you mean the inserts shouldn't happen if the table creation fails, but they would fail anyway, whether they're in the same block or not. It seems a bit odd, anyway.
Also, using all_tables for the check could still cause this to behave oddly. If that table exists in another schema, your create will be skipped, but your select and insert might still fail as they might not be able to see, or won't look for, the other schema's version. Using user_tables or adding an owner check might be a bit safer.
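A hedged sketch of that safer check (note that the data dictionary stores unquoted identifiers in upper case, so LIKE 'Newtable' would never match a table created without quotes; the id column definition is an assumption):

DECLARE
  cnt NUMBER;
BEGIN
  SELECT COUNT(*)
    INTO cnt
    FROM user_tables               -- only the current schema's tables
   WHERE table_name = 'NEWTABLE';  -- unquoted names are stored in upper case
  IF cnt = 0 THEN
    EXECUTE IMMEDIATE 'CREATE TABLE Newtable (id VARCHAR2(30))';
  END IF;
END;
/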
Try the following approach, i.e. put the create and the insert in two different blocks:
DECLARE
  cnt NUMBER;
BEGIN
  SELECT COUNT (*)
    INTO cnt
    FROM all_tables
   WHERE table_name LIKE 'Newtable';
  IF (cnt = 0) THEN
    EXECUTE IMMEDIATE 'CREATE TABLE Newtable(c1 varchar2(256))';
  END IF;
END;
/

DECLARE
  cnt2 NUMBER;
BEGIN
  SELECT COUNT (*)
    INTO cnt2
    FROM newtable
   WHERE c1 = 'jack';
  IF (cnt2 = 0) THEN
    EXECUTE IMMEDIATE 'INSERT INTO Newtable values(''jill'')';
  END IF;
END;
/
Oracle handles the execution of a block in two steps:
First it parses the block and compiles it into an internal representation (so-called "P-code")
It then runs the P-code (it may be interpreted or compiled to machine code, depending on your architecture and Oracle version)
For compiling the code, Oracle must know the names (and the schema!) of the referenced tables. Your table doesn't exist yet, so the reference cannot be resolved and the code does not compile.
As to your intention to create the tables in one big transaction: this will not work. Oracle always implicitly commits the current transaction before and after a DDL statement (create table, alter table, truncate table(!), etc.). So after each create table, Oracle commits the current transaction and starts a new one.
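A small sketch that makes the implicit commit visible (throwaway table names, assumed not to exist yet):

CREATE TABLE commit_demo (n NUMBER);

INSERT INTO commit_demo VALUES (1);    -- opens a transaction
CREATE TABLE commit_demo2 (n NUMBER);  -- DDL: implicitly commits the insert
ROLLBACK;                              -- too late, there is nothing to roll back

SELECT COUNT(*) FROM commit_demo;      -- still returns 1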

DDL statements in PL/SQL?

I am trying the code below to create a table in PL/SQL:
DECLARE
  V_NAME VARCHAR2(20);
BEGIN
  EXECUTE IMMEDIATE 'CREATE TABLE TEMP(NAME VARCHAR(20))';
  EXECUTE IMMEDIATE 'INSERT INTO TEMP VALUES(''XYZ'')';
  SELECT NAME INTO V_NAME FROM TEMP;
END;
/
The SELECT statement fails with this error:
PL/SQL: ORA-00942: table or view does not exist
Is it possible to CREATE, INSERT and SELECT all in a single PL/SQL Block one after other?
I assume you're doing something like the following:
declare
  v_temp varchar2(20);
begin
  execute immediate 'create table temp(name varchar(20))';
  execute immediate 'insert into temp values(''XYZ'')';
  select name into v_temp from temp;
end;
At compile time the table, TEMP, does not exist. It hasn't been created yet. As it doesn't exist you can't select from it; you therefore also have to do the SELECT dynamically. There isn't actually any need to do a SELECT in this particular situation, though: you can use the RETURNING INTO syntax.
declare
  v_temp varchar2(20);
begin
  execute immediate 'create table temp(name varchar2(20))';
  execute immediate 'insert into temp
                     values(''XYZ'')
                     returning name into :1'
    returning into v_temp;
end;
However, needing to dynamically create tables is normally an indication of a badly designed schema. It shouldn't really be necessary.
I can recommend René Nyffenegger's post "Why is dynamic SQL bad?" for reasons why you should avoid dynamic SQL, if at all possible, from a performance standpoint. Please also be aware that you are much more open to SQL injection and should use bind variables and DBMS_ASSERT to help guard against it.
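For instance, a hedged sketch of guarding a dynamic identifier with DBMS_ASSERT (the procedure name and parameter are made up for illustration):

CREATE OR REPLACE PROCEDURE create_name_table (p_table IN VARCHAR2) IS
BEGIN
  -- SIMPLE_SQL_NAME raises ORA-44003 if p_table is not a valid simple SQL name,
  -- which blocks injection attempts through the identifier
  EXECUTE IMMEDIATE 'CREATE TABLE '
                    || DBMS_ASSERT.SIMPLE_SQL_NAME(p_table)
                    || ' (name VARCHAR2(20))';
END;
/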
If you run the program multiple times you will get an error, even after modifying the program to run the select statement as dynamic SQL or to use a RETURNING INTO clause.
That's because the first run creates the table without any issue, but on the next run, since the table was already created and there is no drop statement, the block fails with a "table already exists" error.
So my suggestion is: before creating a table in a PL/SQL program, always check whether a table with the same name already exists in the database. You can do this check using the data dictionary views/system tables which store the metadata, depending on your database type.
For example, in Oracle you can use the following views to decide whether a table needs to be created:
DBA_TABLES,
ALL_TABLES,
USER_TABLES
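Alternatively (a sketch, assuming you would rather try the CREATE and handle the failure than query the dictionary first), you can trap Oracle's "name is already used by an existing object" error:

DECLARE
  e_name_in_use EXCEPTION;
  PRAGMA EXCEPTION_INIT(e_name_in_use, -955); -- ORA-00955: name is already used by an existing object
BEGIN
  EXECUTE IMMEDIATE 'CREATE TABLE TEMP(NAME VARCHAR2(20))';
EXCEPTION
  WHEN e_name_in_use THEN
    NULL; -- the table already exists, so carry on
END;
/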

Moving XML over a DBLink

I am trying to move some data over a dblink and one of the columns is an XMLType column. The code looks like this:
begin
delete from some_schema.some_remote_tab#src_2_trg_dblink;
INSERT INTO some_schema.some_remote_tab#src_2_trg_dblink(id, code, gen_date, xml_data)
SELECT id, code, gen_date, xml_data
FROM local_table;
end;
Oracle returns these errors:
ORA-02055: distributed update operation failed; rollback required
ORA-22804: remote operations not permitted on object tables or user-defined type columns
Some research on ORA-22804 shows that I am probably getting this error because of the XMLType column, but I am not sure how to resolve this.
(Oracle 10g)
We get ORA-22804 because every instance of a Type in our Oracle database has an OID, which is unique within the database. We cannot transfer that OID to another database; this has caused me grief before when trying to import schemas which have User-Defined Types. I hadn't realised that it also affected XMLType, but it is an Object so it is not surprising.
The solution is icky: you will have to unload the XML into text on your local database and then convert it back into XML in the remote database.
I don't have a distributed DB set-up to test this right now, but if you're lucky it may work:
INSERT INTO some_schema.some_remote_tab#src_2_trg_dblink(id, code, gen_date, xml_data)
SELECT id, code, gen_date, xmltype ( xml_data.asClobVal() )
FROM local_table;
If the asClobVal() method doesn't work you may need to use the SQL function XMLSERIALIZE() instead.
XMLSerialize(DOCUMENT xml_data AS CLOB)
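Slotted into the original statement, that would look something like this (an untested sketch, with the same caveat about running it across the link):

INSERT INTO some_schema.some_remote_tab#src_2_trg_dblink(id, code, gen_date, xml_data)
SELECT id, code, gen_date,
       XMLTYPE(XMLSERIALIZE(DOCUMENT xml_data AS CLOB))
FROM local_table;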
If you're really unlucky you won't be able to do this in a single SQL statement, and you'll have to solve it using PL/SQL. To a certain extent this will depend on which version of the database you are using; the more recent the version, the more likely you'll be able to it in SQL rather than PL/SQL.
Try to do this the other way around. That is, log into the remote DB, create a dblink to the local DB, and do an insert like this:
INSERT INTO remote_schema.some_remote_tab(id, code, gen_date, xml_data)
SELECT id, code, gen_date, xml_data
FROM local_table#dblink_to_local_db;
Instead, perform a data PULL:
Create the data-pull procedure at the remote database B.
Create synonyms and grants for the dblink user.
Call the remote procedure from database A (the source), then perform a commit at database A.
(Meanwhile, we wait for Oracle to find some way to PUSH XML over a dblink in the future.)
Create a procedure at the remote site, database B:
CREATE OR REPLACE PROCEDURE PR_REMOTE (OP_TOTAL_COUNT OUT NUMBER) IS
BEGIN
  INSERT /*+ DRIVING_SITE(src) */
  INTO REMOTE_TABLE TGT_B (XMLDATA_COL)
  SELECT SRC.XMLDATA FROM LOCAL_TABLE#TGT2SRC_DBLINK SRC;
  OP_TOTAL_COUNT := SQL%ROWCOUNT;
END;
Call the procedure from Database A
DECLARE
V_COUNT NUMBER := 0;
BEGIN
PR_REMOTE(V_COUNT);
COMMIT;
END;
I was facing the same issue with a heterogeneous DB link to SQL Server.
I ended up using xmltype.getStringVal() to insert into a VARCHAR column on the SQL Server side, as the data was under 4000 characters.
There is also xmltype.getClobVal() for data over 4000 characters, but I haven't tested it.
The "xml->text->xml" chain might be complicated, but could help in some cases (for example when inserting is not on option but updating only).
You can try with "n" peaces of varchar columns (in the destination table or in a differnet one, perheaps in different schema on the remote DB), where "n" is:
ceil(max(dbms_lob.getlength(MyXmlColumn)) / 4000)
Then you can transfer these fragments to remote temporary fields:
insert into RemoteSchema.MyTable (Id, XmlPart1, XmlPart2, ...)
select 1 /*some Id*/,
       dbms_lob.substr(MyXmlColumn.getclobval(), 4000, 1),
       dbms_lob.substr(MyXmlColumn.getclobval(), 4000, 4001),
       ...
from LocalSchema.MyTable;
XmlType can be re-composed from fragments like this:
create or replace function concat_to_xml (p_id number)
  return xmltype
is
  xml_lob clob;
  xml xmltype;
begin
  dbms_lob.createtemporary(xml_lob, true);
  for r in (select XmlPart1, XmlPart2, ... from RemoteSchema.MyTable where Id = p_id)
  loop
    if r.XmlPart1 is not null then
      dbms_lob.writeappend(xml_lob, length(r.XmlPart1), r.XmlPart1);
    end if;
    if r.XmlPart2 is not null then
      dbms_lob.writeappend(xml_lob, length(r.XmlPart2), r.XmlPart2);
    end if;
    ...
  end loop;
  xml := xmltype(xml_lob);
  dbms_lob.freetemporary(xml_lob);
  return xml;
end;
Finally, use the result to update any other table in the remote schema, like:
update RemoteSchema.MyTable2 t2 set t2.MyXmlColumn = concat_to_xml(1 /*some Id*/);
