How to merge two or more unknown tables into one table in Oracle

I'm trying to merge x number of identical tables into one table. The reason for this design is that we want to keep, for example, 50 columns per table in the database. The tables are created externally by SCADA software called Ignition.
Every time a table is created in the database, we want to view the data as one set, regardless of how many tables the data came from, provided that all the tables start with the same prefix, for example Table_1, Table_2, Table_3, and so on.
The query/procedure we want to have is like:
Step 1: since the tables are unknown in advance, we can't do this with a simple UNION, MERGE, INSERT, etc., so we must first find all table names with the 'Table' prefix.
SELECT table_name FROM all_tables WHERE table_name LIKE 'Table%';
Step 2: this is where the magic begins. It should query each table listed in step 1, one by one, then collect all the data and merge it into one table or view.
I tried many ways using PL/SQL but don't know how to proceed with step 2. Is there any way to achieve this? Any possible solutions would be great! :)
Thanks!

Assuming that you are selecting only the common columns from all the tables, you could create a dynamic view that does a UNION ALL of all the tables starting with the "Table" prefix.
DECLARE
    v_select CLOB;
BEGIN
    SELECT LISTAGG('SELECT col1, col2, col3 FROM ' || table_name,
                   ' UNION ALL ' || CHR(10))
           WITHIN GROUP (ORDER BY table_name)
      INTO v_select
      FROM user_tables
     WHERE table_name LIKE 'TABLE_%';

    IF v_select IS NOT NULL THEN
        EXECUTE IMMEDIATE 'CREATE OR REPLACE VIEW v_all_tabs AS ' || v_select;
    END IF;
END;
/
Then select from the view, re-executing the above block (or put it into a procedure) each time a new table is added.
select * from v_all_tabs;
If there's a chance of your SQL string exceeding 4000 characters, then instead of a single LISTAGG you could build the statement by appending each SELECT in a simple assignment inside a PL/SQL cursor loop, as sketched below.
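A minimal sketch of that cursor-loop variant (same assumed column list and view name as above); concatenating into a CLOB means the 4000-byte LISTAGG limit never applies:
DECLARE
    v_select CLOB;
BEGIN
    FOR t IN ( SELECT table_name
                 FROM user_tables
                WHERE table_name LIKE 'TABLE_%'
                ORDER BY table_name )
    LOOP
        -- add a UNION ALL separator between the individual SELECTs
        IF v_select IS NOT NULL THEN
            v_select := v_select || ' UNION ALL ' || CHR(10);
        END IF;
        v_select := v_select || 'SELECT col1, col2, col3 FROM ' || t.table_name;
    END LOOP;

    IF v_select IS NOT NULL THEN
        EXECUTE IMMEDIATE 'CREATE OR REPLACE VIEW v_all_tabs AS ' || v_select;
    END IF;
END;
/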

Related

How to rename multiple columns in oracle using one Alter table statement?

The only thing I found is renaming one column at a time:
ALTER TABLE table_name
RENAME COLUMN old_name TO new_name;
I read the Oracle documentation and couldn't find a way to rename more than one column at a time.
Ref: https://docs.oracle.com/javadb/10.6.2.1/ref/rrefsqljrenamecolumnstatement.html
It is not possible to rename multiple table columns in a single command, as of Oracle 18c.
The Oracle 18c SQL Language Reference includes a syntax diagram illustrating how the rename_column_clause of the ALTER TABLE command works. Unfortunately, while almost every other column property can be changed for several columns in one statement, renaming cannot.
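For example (illustrative column names, assuming an EMP-like table), several columns can be modified in one statement, but each rename needs its own ALTER TABLE:
ALTER TABLE emp MODIFY (sal NUMBER(9,2), comm NUMBER(9,2));  -- several columns in one go
ALTER TABLE emp RENAME COLUMN sal TO salary;                 -- only one column per statement
ALTER TABLE emp RENAME COLUMN comm TO commission;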
You can use the user_tab_columns dictionary view as the data source for a cursor FOR loop:
declare
    v_table_name varchar2(40) := 'mytable';
begin
    for c in ( select table_name, column_name
                 from user_tab_columns
                where table_name = upper(v_table_name) )
    loop
        execute immediate 'ALTER TABLE ' || c.table_name ||
                          ' RENAME COLUMN ' || c.column_name ||
                          ' TO new_' || c.column_name;
    end loop;
end;
/

How to find the column used in the dynamic query without executing whole query

Problem Statement
I have a dynamic SQL statement that I need to store in a table, but before storing the SQL I need to validate it against a list of columns stored in another table.
Without executing the query, is it possible to find the names of the columns in the SELECT?
Approach 1
The only option I can think of is to use the explain plan of the query and read the metadata in the data dictionary tables, but unfortunately I am not able to find any view with such data. Please let me know if you know of such views.
Approach 2
Use the DBMS_SQL.DESCRIBE_COLUMNS package to find the column names, but I believe this will execute the whole query.
You don't need to execute the query to get the column names; you just need to parse it. As a simple example:
set serveroutput on
declare
    l_statement varchar2(4000) := 'select * from employees';
    l_c         pls_integer;
    l_col_cnt   pls_integer;
    l_desc_t    dbms_sql.desc_tab;
begin
    l_c := dbms_sql.open_cursor;
    dbms_sql.parse(c => l_c, statement => l_statement, language_flag => dbms_sql.native);
    dbms_sql.describe_columns(c => l_c, col_cnt => l_col_cnt, desc_t => l_desc_t);
    for i in 1..l_col_cnt loop
        dbms_output.put_line(l_desc_t(i).col_name);
    end loop;
    dbms_sql.close_cursor(l_c);
exception
    when others then
        if dbms_sql.is_open(l_c) then
            dbms_sql.close_cursor(l_c);
        end if;
        raise;
end;
/
which outputs:
EMPLOYEE_ID
FIRST_NAME
LAST_NAME
EMAIL
PHONE_NUMBER
HIRE_DATE
JOB_ID
SALARY
COMMISSION_PCT
MANAGER_ID
DEPARTMENT_ID
PL/SQL procedure successfully completed.
You can do whatever validation you need on the column names inside the loop.
Bear in mind that you'll only see (and validate) the column names, or the aliases for column expressions, which won't necessarily reflect the data that is actually being retrieved. Someone could craft a query that pulls any data from anywhere it has permission to access, but gives the columns/expressions aliases that are considered valid.
If you're trying to restrict access to specific data then look into other mechanisms like views, virtual private database, etc.
DBMS_SQL.PARSE will not execute a SELECT statement but it will execute a DDL statement. If the string 'select * from employees' is replaced by 'drop table employees' the code will fail but the table will still get dropped.
If you're only worried about the performance of retrieving the metadata then Alex Poole's answer will work fine.
If you're worried about running the wrong statement types then you'll want to make some adjustments to Alex Poole's answer.
It is surprisingly difficult to tell whether a statement is a SELECT rather than something else. A simple check that the string begins with select will work 99% of the time, but getting from 99% to 100% is a huge amount of work. Simple regular expressions cannot keep up with all the different keywords, comments, alternative quoting formats, spaces, etc. For example, these are all valid SELECT statements:
/*comment in front -- */ select * from dual
select * from dual
with asdf as (select * from dual) select * from asdf;
((((((select * from dual))))));
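As a rough illustration (a hedged sketch, not part of the original answer), a naive REGEXP_LIKE check accepts the plain SELECT and the WITH query but wrongly rejects the commented and parenthesized variants above:
select sql_text,
       case
           when regexp_like(sql_text, '^[[:space:]]*(select|with)[[:space:]]', 'i') then 'accepted'
           else 'rejected'
       end as naive_check
  from ( select '/*comment in front -- */ select * from dual' as sql_text from dual
         union all select 'select * from dual' from dual
         union all select 'with asdf as (select * from dual) select * from asdf' from dual
         union all select '((((((select * from dual))))))' from dual );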
If you need 100% accuracy I recommend you use my open source PLSQL_LEXER. Once installed you can reliably test the command types like this:
select
statement_classifier.get_command_name(' /*comment*/ ((select * from dual))') test1,
statement_classifier.get_command_name('alter table asdf move compress') test2
from dual;
TEST1   TEST2
------  -----------
SELECT  ALTER TABLE

how to select table name from a query in oracle

select SRI_PACK_INDEX_VAL
from (select TABLE_NAME
from all_tables
where table_name like '%SE_RAO_INDEX_04_12_2015%');
Hi,
The above query is not working in Oracle. Can anyone help me solve this?
You can't dynamically select from tables like that, as Oracle's parser can't figure out before the fetch which tables it needs to read. If you want to do a dynamic fetch you need to use dynamic SQL. Something like the following (forgive any minor syntax errors, I'm away from my database at the moment):
declare
    index_val number; -- I am assuming it is a number; change as appropriate
begin
    -- I'm doing this in a loop in case there are multiple tables that meet your name mask.
    -- If there will only be one, then just select it into a variable and use it that way instead.
    for tab_name in ( select table_name
                        from all_tables
                       where table_name like '%SE_RAO_INDEX_04_12_2015%' )
    loop
        execute immediate 'select SRI_PACK_INDEX_VAL from ' || tab_name.table_name
            into index_val;
        -- do your stuff with it
    end loop;
end;
/
Now, that works if the select only brings back one row. If you are bringing back multiple rows, then you have to handle it differently: either EXECUTE IMMEDIATE a PL/SQL block and embed your processing of the results there, or EXECUTE IMMEDIATE with BULK COLLECT into a collection, as sketched below.
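A minimal sketch of that bulk-collect variant (assuming SRI_PACK_INDEX_VAL is numeric):
declare
    type num_tab is table of number;
    index_vals num_tab;
begin
    for tab_name in ( select table_name
                        from all_tables
                       where table_name like '%SE_RAO_INDEX_04_12_2015%' )
    loop
        -- fetch every SRI_PACK_INDEX_VAL from the current table in one round trip
        execute immediate 'select SRI_PACK_INDEX_VAL from ' || tab_name.table_name
            bulk collect into index_vals;

        for i in 1 .. index_vals.count loop
            null; -- do your stuff with index_vals(i)
        end loop;
    end loop;
end;
/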
select extractvalue(
           xmltype(
               dbms_xmlgen.getxml(
                   'select sri_pack_index_val a from ' || owner || '.' || table_name
               )
           ), '/ROWSET/ROW/A'
       ) sri_pack_index_val
  from all_tables
 where table_name like '%SE_RAO_INDEX_04_12_2015%';
This query is based on a post from Laurent Schneider. Here's a SQLFiddle demonstration.
This is a neat trick to create dynamic SQL in SQL, but it has a few potential issues. It's probably not as fast as the typical dynamic SQL approach shown by Michael Broughton, and I've run into some weird bugs when trying to use this for large production queries.
I recommend only using this approach for ad hoc queries.

Need Column name where I know the database and table names

I know the database and table name and need to find a column name. For example, in the emp table: I know the data value 7369 and the table name emp, and I need to find out that the column name is empno. My table has hundreds of columns, so it is getting difficult to check each column by hand.
You don't have any choice but to search in every column. Please note, though, that this value could potentially appear in multiple columns and/or multiple times in a single column. There's no way to restrict how often it appears across an entire table.
This is the point of a database: everything is stored in a column and, most importantly, that column has meaning. If you disassociate the data stored in a column from its meaning then you will have to search everything.
Two steps, not using cursors or complex PL/SQL, only SQL*Plus.
Produce your search queries:
select 'select ' ||
       column_name ||
       ', count(*) from emp where ' ||
       column_name || ' = 7369 group by ' ||
       column_name || ';'
  from cols
 where table_name = 'EMP';
E.g.:
select SECOND, count(*) from TESTER where SECOND = 7369 group by SECOND;
(in my environment, SECOND was a column in table TESTER)
Capture the output, clean up the headers and the like, and run it.
It will return every column that matches, along with a count of how many rows matched.
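If you'd rather skip the capture-and-run step, here is an alternative single-block sketch (not from the original answer): it checks only the NUMBER columns of EMP and reports every column containing 7369, together with a match count.
set serveroutput on
declare
    l_count pls_integer;
begin
    for c in ( select column_name
                 from cols
                where table_name = 'EMP'
                  and data_type = 'NUMBER' )
    loop
        -- count matches in the current column using a bind variable for the value
        execute immediate
            'select count(*) from emp where ' || c.column_name || ' = :val'
            into l_count using 7369;
        if l_count > 0 then
            dbms_output.put_line(c.column_name || ': ' || l_count || ' row(s)');
        end if;
    end loop;
end;
/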

Optimal way to DELETE specified rows from Oracle

I have a project that needs to occasionally delete several tens of thousands of rows from one of six tables of varying sizes that have about 30 million rows between them. Because of the structure of the data I've been given, I don't know which of the six tables has the row that needs to be deleted, so I have to run all deletes against all tables. I've built an INDEX on the ID column to try and speed things up, but it can be removed if that'll speed things up.
My problem is that I can't seem to find an efficient way to actually perform the delete. For the purposes of my testing I'm running 7384 row deletes against a single test table which has about 9400 rows. I've tested a number of possible query solutions in Oracle SQL Developer:
7384 separate DELETE statements took 203 seconds:
delete from TABLE1 where ID=1000001356443294;
delete from TABLE1 where ID=1000001356443296;
etc...
7384 separate SELECT statements took 57 seconds:
select ID from TABLE1 where ID=1000001356443294
select ID from TABLE1 where ID=1000001356443296
etc...
7384 separate DELETE from (SELECT) statements took 214 seconds:
delete from (select ID from TABLE1 where ID=1000001356443294);
delete from (select ID from TABLE1 where ID=1000001356443296);
etc...
1 SELECT statement that has 7384 OR clauses in the where took 127.4s:
select ID from TABLE1 where ID=1000001356443294 or ID = 1000001356443296 or ...
1 DELETE from (SELECT) statement that has 7384 OR clauses in the where took 74.4s:
delete from (select ID from TABLE1 where ID=1000001356443294 or ID = 1000001356443296 or ...)
While the last may be the fastest, upon further testing it's still very slow when scaled up from the 9,000-row table to even just a 200,000-row table (which is still < 1% of the final tableset size), where the same statement takes 14 minutes to run. While > 50% faster per row, that still extrapolates up to about a day when run against the full dataset. I have it on good authority that the piece of software we used to use for this task could do it in about 20 minutes.
So my questions are:
Is there a better way to delete?
Should I use a round of SELECT statements (i.e., like the second test) to discover which table any given row is in and then shoot off delete queries? Even that looks quite slow but...
Is there anything else I can do to speed the deletes up? I don't have DBA-level access or knowledge.
In advance of my questions being answered, this is how I'd go about it:
Minimize the number of statements issued and the work they do, in relative terms.
All scenarios assume you have a table of IDs (PURGE_IDS) to delete from TABLE_1, TABLE_2, etc.
Consider Using CREATE TABLE AS SELECT for really large deletes
If there's no concurrent activity, and you're deleting 30+ % of the rows in one or more of the tables, don't delete; perform a create table as select with the rows you wish to keep, and swap the new table out for the old table. INSERT /*+ APPEND */ ... NOLOGGING is surprisingly cheap if you can afford it. Even if you do have some concurrent activity, you may be able to use Online Table Redefinition to rebuild the table in-place.
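A minimal sketch of that swap, assuming a PURGE_IDS table listing the IDs to remove from TABLE_1 (index, constraint, and grant re-creation omitted):
CREATE TABLE table_1_keep NOLOGGING AS
    SELECT t.*
      FROM table_1 t
     WHERE NOT EXISTS (SELECT 1 FROM purge_ids p WHERE p.id = t.id);

-- rebuild indexes, constraints and grants on table_1_keep here, then swap the names
ALTER TABLE table_1 RENAME TO table_1_old;
ALTER TABLE table_1_keep RENAME TO table_1;
DROP TABLE table_1_old;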
Don't run DELETE statements you know won't delete any rows
If an ID value exists in at most one of the six tables, then keep track of which IDs you've deleted - and don't try to delete those IDs from any of the other tables.
CREATE TABLE TABLE1_PURGE NOLOGGING AS
    SELECT PURGE_IDS.ID
      FROM PURGE_IDS
     INNER JOIN TABLE_1 ON PURGE_IDS.ID = TABLE_1.ID;

DELETE FROM TABLE_1 WHERE ID IN (SELECT ID FROM TABLE1_PURGE);
DELETE FROM PURGE_IDS WHERE ID IN (SELECT ID FROM TABLE1_PURGE);
DROP TABLE TABLE1_PURGE;
and repeat.
Manage Concurrency if you have to
Another way is to use PL/SQL looping over the tables, issuing a rowcount-limited delete statement. This is most likely appropriate if there's significant insert/update/delete concurrent load against the tables you're running the deletes against.
declare
    l_sql varchar2(4000);
begin
    for i in ( select table_name
                 from all_tables
                where table_name in ('TABLE_1', 'TABLE_2', ...)
                order by table_name )
    loop
        l_sql := 'delete from ' || i.table_name ||
                 ' where id in (select id from purge_ids) ' ||
                 ' and rownum <= 1000000';
        loop
            commit;
            execute immediate l_sql;
            exit when sql%rowcount <> 1000000; -- if we delete fewer than 1,000,000 rows,
        end loop;                              -- no more rows need to be deleted!
    end loop;
    commit;
end;
Store all the to-be-deleted IDs in a table. Then there are three ways:
1) Loop through all the IDs in the table, then delete one row at a time with a commit every X rows, where X can be 100 or 1000. This works in an OLTP environment and you can control the locks.
2) Use an Oracle bulk delete.
3) Use a correlated delete query (sketched below).
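A minimal sketch of option 3, assuming the IDs to delete are staged in a PURGE_IDS table:
DELETE FROM table_1 t
 WHERE EXISTS (SELECT 1 FROM purge_ids p WHERE p.id = t.id);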
Single query is usually faster than multiple queries because of less context switching, and possibly less parsing.
First, disabling the index during the deletion would be helpful.
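For example (hypothetical index name on the ID column; a non-unique index can be skipped by DML while unusable when skip_unusable_indexes is TRUE, the default):
ALTER INDEX table1_id_idx UNUSABLE;
-- run the deletes here
ALTER INDEX table1_id_idx REBUILD;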
Try with a MERGE INTO statement:
1) Create a temp table with the IDs and an additional column from TABLE1, and test with the following:
MERGE INTO table1 src
USING (SELECT id, col1
         FROM test_merge_delete) tgt
   ON (src.id = tgt.id)
 WHEN MATCHED THEN
      UPDATE SET src.col1 = tgt.col1
      DELETE WHERE src.id = tgt.id;
I have tried this code and it's working fine in my case.
DELETE FROM NG_USR_0_CLIENT_GRID_NEW
 WHERE rowid IN
       ( SELECT rid
           FROM ( SELECT rowid AS rid,
                         wi_name, relationship,
                         ROW_NUMBER() OVER (ORDER BY rowid DESC) RN
                    FROM NG_USR_0_CLIENT_GRID_NEW
                   WHERE wi_name = 'NB-0000001385-Process' )
          WHERE RN = 2
       );
