I know the database and table name and need to find a column name. For example, in the emp table: I know the value 7369 and the table name emp, and I need to discover that the column holding it is empno. My table has hundreds of columns, so searching each column by hand is getting difficult.
You don't have any choice but to search in every column. Please note though that this value could, potentially, appear in multiple columns and/or multiple times in a single column. There's no way to restrict how often it appears across an entire table.
This is the point of a database: everything is stored in a column and, most importantly, that column has meaning. If you disassociate the data stored in a column from its meaning, then you will have to search everything.
Two steps, using no cursors or complex PL/SQL, only SQL*Plus.
Produce your search queries:
select 'select ' ||
       column_name ||
       ', count(*) from emp where ' ||
       column_name || ' = 7369 group by ' ||
       column_name || ';'
from cols
where table_name = 'EMP';
E.g.:
select SECOND, count(*) from TESTER where SECOND = 7369 group by SECOND;
(in my environment, SECOND was a column in table TESTER)
Capture the output, clean up the headers and the like, and run it.
It will return every column that matches, along with a count of how many rows matched.
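To sketch the capture step in SQL*Plus (the file name run_search.sql is only an illustration), spool settings like these suppress the headers so the generated statements can be run directly:

set heading off pagesize 0 feedback off trimspool on linesize 200
spool run_search.sql
select 'select ' || column_name || ', count(*) from emp where ' ||
       column_name || ' = 7369 group by ' || column_name || ';'
from cols
where table_name = 'EMP';
spool off
set heading on pagesize 14 feedback on
@run_search.sql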
Is there a way to disable/restrict/alert-when-used for some column in Oracle in a WHERE clause?
The reason I'm asking is that I have a very complex system (~30 services spanning millions of lines of code, with thousands of SQL statements, in a sensitive production environment) working with an Oracle DB. I need to migrate from using one column that is part of a key (and has a very non-unique name) to another column.
A simple text search is impossible....
The steps I'm taking are:
1) Populate the new column.
2) Add an index on the second column wherever there's an index on the first one.
3) Migrate all uses in WHERE clauses from the old column to the new one.
4) Stop reading from the first column.
5) Stop writing to the first column.
6) Delete the column.
I've currently completed step 3 and want to verify I've found all of the cases.
So, you're replacing one column with another. What benefit do you expect once you're done? How will that improve the overall experience with that application? I hope it is worth the effort.
As for your question: query user_source (or expand it to all_source or even dba_source, though you'll need additional privileges for that) and see where that very non-unique name is used. Something like this:
SQL> select * from user_source where lower(text) like '%empno%';
NAME            TYPE          LINE TEXT
--------------- ------------ ----- --------------------------------------------------------------------------------
P_RAISE         PROCEDURE       22 WHERE empno = par_empno;
P_RAISE         PROCEDURE       14 WHERE empno = par_empno;
P_RAISE         PROCEDURE        1 PROCEDURE p_raise (par_empno IN emp.empno%TYPE)
GET_LIST        FUNCTION         7 'select empno, ename, job, sal from emp where deptno = 10 order by '
SQL>
I'm trying to merge x number of identical tables into one table. The reason we did this is that we want to have, for example, 50 columns per table in the database. Tables are created externally via SCADA software called Ignition.
Every time a table is created in the database, we want to view the data as one set, regardless of how many tables it came from, provided that all the tables share the same prefix, for example Table_1, Table_2, Table_3, and so on.
The query/procedure we want to have is like:
Step 1: since the tables are unknown, we can't do it with a simple UNION, MERGE, INSERT, etc., so we must find all table names with the 'Table' prefix.
SELECT table_name FROM all_tables WHERE table_name LIKE 'Table%';
Step 2: this is where the magic begins. It should query each table_name listed in step 1 one by one, then collect all the data and merge it into one table or view.
I tried many ways using PL/SQL but don't know how to proceed with step 2. Is there any way to achieve what we want? Any possible solutions would be great! :)
Thanks!
Assuming that you are selecting only the common columns from all the tables, you could create a dynamic view that does a UNION ALL of all the tables whose names start with the "Table" prefix.
DECLARE
    v_select CLOB;
BEGIN
    SELECT LISTAGG('SELECT col1, col2, col3 FROM ' || table_name,
                   ' UNION ALL ' || CHR(10))
           WITHIN GROUP (ORDER BY table_name)
      INTO v_select
      FROM user_tables
     WHERE table_name LIKE 'TABLE_%';

    IF v_select IS NOT NULL THEN
        EXECUTE IMMEDIATE 'CREATE OR REPLACE VIEW v_all_tabs AS ' || v_select;
    END IF;
END;
/
Then select from the view, re-executing the above block (or putting it into a procedure) each time a new table is added.
select * from v_all_tabs;
If there's a chance of your SQL string exceeding 4000 characters, then instead of a single LISTAGG you could append each SELECT through a simple assignment in a PL/SQL cursor loop.
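As a minimal sketch of that loop variant (same hypothetical columns col1, col2, col3 as above; assumes Oracle 11g or later, where EXECUTE IMMEDIATE accepts a CLOB):

DECLARE
    v_select CLOB;
BEGIN
    -- Build the UNION ALL string one table at a time; CLOB concatenation
    -- is not subject to the 4000-character VARCHAR2 limit.
    FOR r IN (SELECT table_name
                FROM user_tables
               WHERE table_name LIKE 'TABLE_%'
               ORDER BY table_name)
    LOOP
        IF v_select IS NOT NULL THEN
            v_select := v_select || ' UNION ALL ' || CHR(10);
        END IF;
        v_select := v_select || 'SELECT col1, col2, col3 FROM ' || r.table_name;
    END LOOP;

    IF v_select IS NOT NULL THEN
        EXECUTE IMMEDIATE 'CREATE OR REPLACE VIEW v_all_tabs AS ' || v_select;
    END IF;
END;
/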
Problem Statement
I have a dynamic SQL statement which I need to store in a table, but before storing the SQL I need to validate it against the list of columns stored in another table.
Without executing the query, is it possible to find the names of the columns in its SELECT list?
Approach1
The only option I can think of is to use the explain plan of the query and read the metadata in the data dictionary tables, but unfortunately I am not able to find any view with such data. Please let me know if you know of such views.
Approach2
Use the DBMS_SQL.DESCRIBE_COLUMNS package to find the column names, but I believe this will execute the whole query.
You don't need to execute the query to get the column names, you just need to parse it; e.g. as a simple example:
set serveroutput on
declare
l_statement varchar2(4000) := 'select * from employees';
l_c pls_integer;
l_col_cnt pls_integer;
l_desc_t dbms_sql.desc_tab;
begin
l_c := dbms_sql.open_cursor;
dbms_sql.parse(c=>l_c, statement=>l_statement, language_flag=>dbms_sql.native);
dbms_sql.describe_columns(c=>l_c, col_cnt=>l_col_cnt, desc_t=>l_desc_t);
for i in 1..l_col_cnt loop
dbms_output.put_line(l_desc_t(i).col_name);
end loop;
dbms_sql.close_cursor(l_c);
exception
when others then
if (dbms_sql.is_open(l_c)) then
dbms_sql.close_cursor(l_c);
end if;
raise;
end;
/
which outputs:
EMPLOYEE_ID
FIRST_NAME
LAST_NAME
EMAIL
PHONE_NUMBER
HIRE_DATE
JOB_ID
SALARY
COMMISSION_PCT
MANAGER_ID
DEPARTMENT_ID
PL/SQL procedure successfully completed.
You can do whatever validation you need on the column names inside the loop.
Bear in mind that you'll only see (and validate) the column names or aliases for column expressions, which won't necessarily reflect the data that is actually being retrieved. Someone could craft a query that pulls any data from anywhere it has permission to access, but then gives the columns/expression aliases that are considered valid.
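For example (a hypothetical illustration with made-up names), a crafted statement like this would pass a check that only looks at the output names, while reading entirely different data:

select secret_value as empno, owner_name as ename
from other_schema.secrets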
If you're trying to restrict access to specific data then look into other mechanisms like views, virtual private database, etc.
DBMS_SQL.PARSE will not execute a SELECT statement, but it will execute a DDL statement. If the string 'select * from employees' is replaced by 'drop table employees', the code will fail but the table will still get dropped.
If you're only worried about the performance of retrieving the metadata then Alex Poole's answer will work fine.
If you're worried about running the wrong statement types then you'll want to make some adjustments to Alex Poole's answer.
It is surprisingly difficult to tell whether a statement is a SELECT rather than something else. A simple check that the string begins with select will work 99% of the time, but getting from 99% to 100% is a huge amount of work. Simple regular expressions cannot keep up with all the different keywords, comments, alternative quoting formats, spaces, etc.
/*comment in front -- */ select * from dual
select * from dual
with asdf as (select * from dual) select * from asdf;
((((((select * from dual))))));
If you need 100% accuracy I recommend you use my open source PLSQL_LEXER. Once installed you can reliably test the command types like this:
select
statement_classifier.get_command_name(' /*comment*/ ((select * from dual))') test1,
statement_classifier.get_command_name('alter table asdf move compress') test2
from dual;
TEST1   TEST2
------  -----------
SELECT  ALTER TABLE
I have a project that needs to occasionally delete several tens of thousands of rows from one of six tables of varying sizes, which have about 30 million rows between them. Because of the structure of the data I've been given, I don't know which of the six tables contains the row that needs to be deleted, so I have to run all deletes against all tables. I've built an index on the ID column to try to speed things up, but it can be dropped if that would help.
My problem is that I can't seem to find an efficient way to actually perform the delete. For the purposes of my testing I'm running 7384 row deletions against a single test table which has about 9400 rows. I've tested a number of possible query solutions in Oracle SQL Developer:
7384 separate DELETE statements took 203 seconds:
delete from TABLE1 where ID=1000001356443294;
delete from TABLE1 where ID=1000001356443296;
etc...
7384 separate SELECT statements took 57 seconds:
select ID from TABLE1 where ID=1000001356443294
select ID from TABLE1 where ID=1000001356443296
etc...
7384 separate DELETE from (SELECT) statements took 214 seconds:
delete from (select ID from TABLE1 where ID=1000001356443294);
delete from (select ID from TABLE1 where ID=1000001356443296);
etc...
1 SELECT statement that has 7384 OR clauses in the where took 127.4s:
select ID from TABLE1 where ID=1000001356443294 or ID = 1000001356443296 or ...
1 DELETE from (SELECT) statement that has 7384 OR clauses in the where took 74.4s:
delete from (select ID from TABLE1 where ID=1000001356443294 or ID = 1000001356443296 or ...)
While the last may be the fastest, upon further testing it's still very slow when scaled up from the 9,000-row table to even just a 200,000-row table (still < 1% of the final tableset size), where the same statement takes 14 minutes to run. While > 50% faster per row, that still extrapolates to about a day when run against the full dataset. I have it on good authority that the piece of software we used to use for this task could do it in about 20 minutes.
So my questions are:
Is there a better way to delete?
Should I use a round of SELECT statements (i.e., like the second test) to discover which table any given row is in, and then fire off delete queries? Even that looks quite slow, but...
Is there anything else I can do to speed the deletes up? I don't have DBA-level access or knowledge.
In advance of my questions being answered, this is how I'd go about it:
Minimize the number of statements issued and the work they do.
All scenarios assume you have a table of IDs (PURGE_IDS) to delete from TABLE_1, TABLE_2, etc.
Consider Using CREATE TABLE AS SELECT for really large deletes
If there's no concurrent activity, and you're deleting 30+% of the rows in one or more of the tables, don't delete: perform a CREATE TABLE AS SELECT of the rows you wish to keep, and swap the new table out for the old one. INSERT /*+ APPEND */ ... NOLOGGING is surprisingly cheap if you can afford it. Even if you do have some concurrent activity, you may be able to use Online Table Redefinition to rebuild the table in place.
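A minimal sketch of that swap, with illustrative names (indexes, constraints, and grants must be recreated on the new table before the rename):

CREATE TABLE table_1_keep NOLOGGING AS
    SELECT * FROM table_1 WHERE id NOT IN (SELECT id FROM purge_ids);

-- recreate indexes, constraints and grants on table_1_keep, then:
ALTER TABLE table_1 RENAME TO table_1_old;
ALTER TABLE table_1_keep RENAME TO table_1;
DROP TABLE table_1_old;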
Don't run DELETE statements you know won't delete any rows
If an ID value exists in at most one of the six tables, then keep track of which IDs you've deleted - and don't try to delete those IDs from any of the other tables.
CREATE TABLE TABLE1_PURGE NOLOGGING
AS
SELECT PURGE_IDS.ID FROM PURGE_IDS INNER JOIN TABLE_1 ON PURGE_IDS.ID = TABLE_1.ID;
DELETE FROM TABLE_1 WHERE ID IN (SELECT ID FROM TABLE1_PURGE);
DELETE FROM PURGE_IDS WHERE ID IN (SELECT ID FROM TABLE1_PURGE);
DROP TABLE TABLE1_PURGE;
and repeat.
Manage Concurrency if you have to
Another way is to use PL/SQL, looping over the tables and issuing a rowcount-limited DELETE statement. This is most likely appropriate if there's a significant concurrent insert/update/delete load against the tables you're running the deletes against.
declare
    l_sql varchar2(4000);
begin
    for i in (select table_name from all_tables
               where table_name in ('TABLE_1', 'TABLE_2', ...)
               order by table_name)
    loop
        l_sql := 'delete from ' || i.table_name ||
                 ' where id in (select id from purge_ids) ' ||
                 ' and rownum <= 1000000';
        loop
            commit;
            execute immediate l_sql;
            exit when sql%rowcount <> 1000000; -- if we delete fewer than 1,000,000 rows,
        end loop;                              -- no more rows need to be deleted!
    end loop;
    commit;
end;
Store all the to-be-deleted IDs in a table. Then there are three ways:
1) Loop through the IDs in that table, deleting one row at a time and committing every X rows, where X can be 100 or 1000. This works in an OLTP environment and lets you control the locks.
2) Use an Oracle bulk delete (a sketch follows after this list).
3) Use a correlated delete query.
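As a hedged sketch of option 2, reusing the PURGE_IDS and TABLE_1 names from the earlier answer: BULK COLLECT with a LIMIT plus FORALL deletes in batches, committing once per batch.

DECLARE
    TYPE t_ids IS TABLE OF purge_ids.id%TYPE;
    l_ids t_ids;
    CURSOR c IS SELECT id FROM purge_ids;
BEGIN
    OPEN c;
    LOOP
        FETCH c BULK COLLECT INTO l_ids LIMIT 1000; -- tune the batch size
        EXIT WHEN l_ids.COUNT = 0;
        FORALL i IN 1 .. l_ids.COUNT
            DELETE FROM table_1 WHERE id = l_ids(i);
        COMMIT; -- one commit per batch keeps undo and locking manageable
    END LOOP;
    CLOSE c;
END;
/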
Single query is usually faster than multiple queries because of less context switching, and possibly less parsing.
First, disabling the index during the deletion would be helpful.
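For a non-unique index, that might be sketched as follows (table1_id_ix is an illustrative name; DML skips an unusable non-unique index as long as skip_unusable_indexes is true, which is the default):

ALTER INDEX table1_id_ix UNUSABLE;
-- ... run the deletes ...
ALTER INDEX table1_id_ix REBUILD;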
Try with a MERGE INTO statement:
1) Create a temp table with the IDs and an additional column from TABLE1, and test with the following:
MERGE INTO table1 src
USING (SELECT id, col1
         FROM test_merge_delete) tgt
   ON (src.id = tgt.id)
 WHEN MATCHED THEN
      UPDATE SET src.col1 = tgt.col1
      DELETE WHERE src.id = tgt.id
I have tried this code and it's working fine in my case.
DELETE FROM NG_USR_0_CLIENT_GRID_NEW
WHERE rowid IN
      ( SELECT rowid
          FROM ( SELECT wi_name, relationship,
                        ROW_NUMBER() OVER (ORDER BY rowid DESC) RN
                   FROM NG_USR_0_CLIENT_GRID_NEW
                  WHERE wi_name = 'NB-0000001385-Process' )
         WHERE RN = 2 );
I have a table TABLE_X with multiple columns whose names begin with M_, all of which need to be dropped. I decided to use the following PL/SQL code to drop almost 100 columns beginning with M_. Is this a good use of dynamic SQL and cursors? Can it be done better? I didn't know a simpler way, since ALTER TABLE ... DROP COLUMN doesn't allow a subquery to specify multiple column names.
declare
    rcur sys_refcursor;
    cn   user_tab_cols.column_name%type;
begin
    -- note: '_' is a wildcard in LIKE, so 'M_%' would also match names like MX...;
    -- escaping it matches a literal underscore
    open rcur for
        select column_name from user_tab_cols
         where table_name = 'TABLE_X' and column_name like 'M\_%' escape '\';
    loop
        fetch rcur into cn;
        exit when rcur%notfound;
        execute immediate 'alter table TABLE_X drop column ' || cn;        -- works great
        execute immediate 'alter table TABLE_X drop column :col' using cn; -- error
    end loop;
    close rcur;
end;
Also, why is it impossible to use 'using cn'?
This is a reasonable use of dynamic SQL. I would seriously question an underlying data model that has hundreds of columns in a single table that start with the same prefix and all need to be dropped. That implies to me that the data model itself is likely to be highly problematic.
Even using dynamic SQL, you cannot use bind variables for column names, table names, schema names, etc. Oracle needs to know at parse time what objects and columns are involved in a SQL statement. Since bind variables are supplied after the parse phase, however, you cannot specify a bind variable that changes what objects and/or columns a SQL statement is affecting.
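As an aside (not part of the original answer), when the identifier comes from user input rather than the data dictionary, a common safeguard is to validate it with DBMS_ASSERT before concatenating; for example, the drop statement in the question's loop could be written as:

-- SIMPLE_SQL_NAME raises ORA-44003 if cn is not a plain identifier,
-- guarding the concatenated statement against SQL injection
execute immediate 'alter table TABLE_X drop column ' ||
                  sys.dbms_assert.simple_sql_name(cn);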
The syntax for dropping multiple columns in a single alter statement is this:
SQL> desc t42
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------
 COL1                                               NUMBER
 COL2                                               DATE
 COL3                                               VARCHAR2(30)
 COL4                                               NUMBER

SQL> alter table t42 drop (col2, col3)
  2  /

Table altered.

SQL> desc t42
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------
 COL1                                               NUMBER
 COL4                                               NUMBER

SQL>
So, if you really need to optimize the operation, you'll need to build up the statement incrementally - or use a string aggregation technique.
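For instance, a minimal sketch of the string-aggregation variant, assuming the M_ columns from the question (one parse, one ALTER):

declare
    l_cols varchar2(32767);
begin
    select listagg(column_name, ', ') within group (order by column_id)
      into l_cols
      from user_tab_cols
     where table_name = 'TABLE_X' and column_name like 'M\_%' escape '\';

    if l_cols is not null then
        execute immediate 'alter table TABLE_X drop (' || l_cols || ')';
    end if;
end;
/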
However, I would question whether you ought to be running a statement like this often enough to need to optimize it.