My last problem: a select with a variable parameter in the procedure
Okay, I have one more question. I would like to extend this procedure with another element. We have already selected the identifiers from the TYPEPKSTRING column of all tables in the schema that are not present in the PK column of the COMPOSEDTYPES table, and that works great.
Now there is a new condition. After selecting what I already have, I also need to check whether those selected identifiers have empty fields in the tables containing the SOURCEPK and TARGETPK columns.
To work this out, I first tried to break the problem down again into small pieces, preferably against a specific table.
Here is the select obtained from the previous question:
SELECT DISTINCT METAINFORMATIONS.TYPEPKSTRING
FROM METAINFORMATIONS
LEFT OUTER JOIN COMPOSEDTYPES c
ON METAINFORMATIONS.TYPEPKSTRING = c.PK
WHERE c.PK IS NULL;
And here is a select that does what I want to get now:
SELECT DISTINCT METAINFORMATIONS.TYPEPKSTRING
FROM METAINFORMATIONS
LEFT OUTER JOIN CAT2CATREL d
ON METAINFORMATIONS.TYPEPKSTRING = d.TYPEPKSTRING
WHERE d.sourcepk IS NULL AND d.targetpk IS NULL AND metainformations.typepkstring=8796093055031;
The METAINFORMATIONS table is a table that "naturally" meets the conditions of the previous procedure.
To achieve what I need, only parameterized, I think it should look as follows: in the select I gave above, in place of the CAT2CATREL table, substitute the table names that satisfy this select.
The select to extract the needed table names:
select table_name from all_tab_columns where column_name='SOURCEPK' OR column_name ='TARGETPK';
In addition, the number 8796093055031 should be replaced by the value from the first cursor, i.e. vTYPEPKSTRING. But can I do it that way? Or should I create a second cursor that references this value?
I hope I have explained the problem as clearly as I can; I can clarify further in the comments. Thanks for any advice.
Update to the question:
So, I modified the select from the previous procedure into this form:
strSelect := 'SELECT DISTINCT m.TYPEPKSTRING ' ||
' FROM ' || i_table_name || ' m ' ||
' LEFT OUTER JOIN ' || is_table_name || ' d ' ||
' ON m.TYPEPKSTRING = d.TYPEPKSTRING ' ||
' WHERE d.sourcepk IS NULL AND ' ||
' d.targetpk IS NULL AND ' ||
' m.typepkstring IN (select count(*) from (SELECT DISTINCT m2.TYPEPKSTRING ' ||
' FROM ' || i_table_name || ' m2 ' ||
' LEFT OUTER JOIN COMPOSEDTYPES c2 ' ||
' ON m2.TYPEPKSTRING = c2.PK ' ||
' WHERE c2.PK IS NULL)) ';
As a result of the procedure constructed this way, I do get the keys I wanted, but an entry is produced for every table that matches the select in the modified call. This means that instead of receiving, say, 2 keys, I receive the same 2 keys once for each table. I tried to work around it with a count, but then I get nothing at all on the output.
modified call:
set serveroutput on
DECLARE
ind integer := 0;
BEGIN
FOR ind IN (select table_name from all_tab_columns where column_name='TYPEPKSTRING' AND table_name!='COMPOSEDTYPES')
LOOP
BEGIN
FOR inds IN (select distinct table_name from all_tab_columns where column_name='SOURCEPK' OR column_name ='TARGETPK')
LOOP
BEGIN
SIEROT(ind.table_name,inds.table_name);
EXCEPTION
WHEN NO_DATA_FOUND THEN
null;
END;
END LOOP;
END;
END LOOP;
END;
As far as replacing 8796093055031 goes, you can just use the first statement as a subquery in the second statement:
SELECT DISTINCT m.TYPEPKSTRING
FROM METAINFORMATIONS m
LEFT OUTER JOIN CAT2CATREL d
ON m.TYPEPKSTRING = d.TYPEPKSTRING
WHERE d.sourcepk IS NULL AND
d.targetpk IS NULL AND
m.typepkstring IN (SELECT DISTINCT m2.TYPEPKSTRING
FROM METAINFORMATIONS m2
LEFT OUTER JOIN COMPOSEDTYPES c2
ON m2.TYPEPKSTRING = c2.PK
WHERE c2.PK IS NULL);
As for the rest, if I understand what you're trying to do it seems to me that you'll need to use dynamic SQL as shown in the answer to your previous question.
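For reference, the dynamic form of that combined statement could look roughly like the string below. This is only a sketch, reusing the i_table_name and is_table_name parameters from the update above, and dropping the count(*) wrapper so that IN compares the key values themselves rather than a count:
strSelect := 'SELECT DISTINCT m.TYPEPKSTRING ' ||
             '  FROM ' || i_table_name || ' m ' ||
             '  LEFT OUTER JOIN ' || is_table_name || ' d ' ||
             '    ON m.TYPEPKSTRING = d.TYPEPKSTRING ' ||
             ' WHERE d.sourcepk IS NULL ' ||
             '   AND d.targetpk IS NULL ' ||
             '   AND m.typepkstring IN (SELECT m2.TYPEPKSTRING ' ||
             '                            FROM ' || i_table_name || ' m2 ' ||
             '                            LEFT OUTER JOIN COMPOSEDTYPES c2 ' ||
             '                              ON m2.TYPEPKSTRING = c2.PK ' ||
             '                           WHERE c2.PK IS NULL) ';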
Related
I am trying to convert the following SQL, which is constructed dynamically and executed on SQL Server, to Oracle:
DECLARE @dynamicQuery varchar(8000)
DECLARE @criteriaMet BIT
SET @dynamicQuery = ''
IF @criteriaMet = 1
BEGIN
SET @dynamicQuery = 'IF NOT EXISTS(SELECT TOP 1 1 FROM DATATABLE) '
END
SET @dynamicQuery = @dynamicQuery + 'INSERT INTO DATATABLE (...) VALUES (...)'
EXEC (@dynamicQuery)
But in Oracle I cannot use EXISTS in an IF statement; I would have to declare variables and select a count into them, and doing that inside dynamic SQL drastically reduces readability and increases complexity. Is there a more elegant way of building dynamic SQL in Oracle that checks for the presence of table data based on some criteria?
Your example doesn't need to be dynamic, so you can do a static count, logic to check the count, and then a static insert if suitable:
declare
cnt pls_integer;
begin
select count(*)
into cnt
from dual
where exists (select null from your_table);
if cnt = 0 then
insert into your_table (id, foo) values (1, 'bar');
end if;
end;
/
You don't need PL/SQL at all though, you can do insert ... select ... and make the exists check part of that:
insert into your_table (id, foo)
select 1, 'bar'
from dual
where not exists (select null from your_table);
db<>fiddle
Either can be converted to be dynamic if there is actually a reason to do that, such as a run-time table name; the second option is probably still going to be more readable - the first would be a dynamic query followed by the logic to do a dynamic insert.
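For illustration only, the second option with a run-time table name might be wrapped like this; the table and column names here are placeholders, not taken from your schema:
declare
  v_table varchar2(30) := 'YOUR_TABLE';   -- supplied at run time
begin
  execute immediate
    'insert into ' || v_table || ' (id, foo) ' ||
    'select :id, :foo from dual ' ||
    'where not exists (select null from ' || v_table || ')'
    using 1, 'bar';
end;
/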
I am trying to attach an additional check to an existing MERGE statement only if some criteria are met, so there are fewer changes to the original.
You could change TBL2 to a subquery which applies the conditions you want; when those aren't met there is nothing for the merge to match. Something like:
'MERGE INTO ' || TBL || ' USING ( SELECT * FROM ' || TBL2 ||
' WHERE NOT EXISTS (SELECT NULL FROM ' || TBL || ')' ||
' ) ON (' || COLS || ' ) WHEN NOT MATCHED THEN INSERT ( ' || COLS || ' ) VALUES (' || COLS || ')'
... where the middle line
' WHERE NOT EXISTS (SELECT NULL FROM ' || TBL || ')' ||
can apply whatever conditions you want.
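For illustration, with hypothetical tables T_TARGET and T_SOURCE and a join column ID, the assembled statement would come out roughly as:
MERGE INTO T_TARGET t
USING (SELECT * FROM T_SOURCE
       WHERE NOT EXISTS (SELECT NULL FROM T_TARGET)) s
ON (t.ID = s.ID)
WHEN NOT MATCHED THEN
  INSERT (ID, FOO) VALUES (s.ID, s.FOO);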
I have a procedure which I'm using to output row counts to a .csv file but some of the where clauses I may want to use are contained in a table. How can I use them to create conditions for the counts?
I've attempted using concatenation pipes to select against the table that holds the where clauses, but I'm confused about the syntax and where they should go, and I believe this is where I need the most help.
These are the columns in the table that contains some of the where clauses I ultimately want to use in the procedure.
SCHEMA, DATABASE, FULL_TABLE, DRIVER_TABLE, MAND_JOIN
And the values may be such as:
PROD, DB1, RLTSHP, BOB.R_ID, A.AR_ID = B.AR_ID
The procedure I have written is as follows:
create or replace procedure PROJECT is
--variables
l_dblink varchar2(100) := 'DB1';
ROW_COUNT number;
file_handle UTL_FILE.file_type;
BEGIN
utl_file.put_line(file_handle, 'OWNER,TABLE_NAME,ROW_COUNT');
--main loop
for rws in (select /*+parallel */ owner, table_name
from dba_tables@DB1 a
where table_name in (select table_name
from meta_table
where driver_table is not null
and additional_joins is null)
and a.owner in (select distinct schema
from meta_table c)
order by table_name)
loop
execute immediate 'select count(*) from ' ||rws.owner||'.'||rws.table_name || '@' || l_dblink into ROW_COUNT;
utl_file.put_line(file_handle,
rws.OWNER || ',' ||
rws.TABLE_NAME || ',' ||
ROW_COUNT);
end loop;
END PROJECT;
/
However, instead of the simple select count(*) shown above, I want a way to use the data in meta_table to construct "where" clauses with table joins, so that I'm not counting all rows, but only the rows that meet the criteria in the join I've constructed.
For example, so that the actual count that gets executed will be something like this:
select count(*)
from PROD.RLTSHP@DB1 b,
     BOB.R_ID@DB1 a
where A.AR_ID = B.AR_ID;
Essentially I would be constructing the query using the entries in the meta_table. I think I can do this with concatenation (pipes) but I'm not sure exactly how.
Can you help?
You need to extend your simple statement to assemble the join criteria as well. The one catch is that you must give the tables aliases which match the aliases used in additional_joins, i.e. B for FULL_TABLE and A for DRIVER_TABLE. These have to be standard for all rows in your META_TABLE, otherwise you will generate invalid SQL.
create or replace procedure PROJECT is
l_dblink varchar2(100) := 'DB1';
ROW_COUNT number;
file_handle UTL_FILE.file_type;
v_sql varchar2(32767);
BEGIN
utl_file.put_line(file_handle, 'OWNER,TABLE_NAME,ROW_COUNT');
<< main_loop >>
for rws in (select mt.*
from dba_tables@DB1 db
join meta_table mt
on mt.driver_table = db.table_name
and mt.owner = db.owner
where mt.db_link = l_dblink
order by mt.table_name)
loop
-- simple query
v_sql := 'select count(*) from ' || rws.owner||'.'||rws.driver_table || '@' || l_dblink;
-- join query
if rws.additional_joins is not null
and rws.full_table is not null then
v_sql := v_sql|| ' b, '|| rws.full_table ||'@'||l_dblink|| ' a where ' ||rws.additional_joins;
end if;
-- uncomment this for debugging
--dbms_output.put_line(v_sql);
execute immediate v_sql into ROW_COUNT;
utl_file.put_line(file_handle,
                  rws.OWNER || ',' ||
                  rws.TABLE_NAME || ',' ||
                  ROW_COUNT);
end loop main_loop;
END PROJECT;
/
Notes
We have to use a variable to assemble the statement because the final SQL is conditional on the contents of a row. This enables efficient debugging because we have something we can display. Dynamic SQL is hard, because it turns compilation errors into runtime errors. Diagnosis is difficult when we can't see the actual executed code.
I have tweaked your driving query to make the joins safer.
The column names you used in the code are not consistent with the column names you used for the table structure. So there may be naming bugs which you'll need to fix for yourself.
I have retained the Old Skool implicit join syntax. I was tempted to generate ANSI 92 SQL (inner join ... on) but it's not clear that the additional_joins will contain only join criteria.
Pro tip. Instead of commenting your loops - --main loop - use an actual PL/SQL label - <<main_loop>> so you can link the matching end loop statement, as I have done in this code.
Improvements you may want to add:
validate that FULL_TABLE exists in the target database (a sketch of this check follows the list)
include FULL_TABLE in UTL_FILE output
validate that columns referenced in ADDITIONAL_JOIN are valid (using DBA_TAB_COLUMNS, but it's trickier because you will have to parse the column names from the text)
worry about whether the content of ADDITIONAL_JOIN is a valid and complete join condition
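As a starting point for the first item, a minimal sketch of the FULL_TABLE existence check over the link could look like this; the owner and table names below are placeholders that would come from the META_TABLE row:
declare
  l_exists   pls_integer;
  l_owner    varchar2(30) := 'PROD';     -- placeholder
  l_full_tab varchar2(30) := 'RLTSHP';   -- placeholder
begin
  select count(*)
  into l_exists
  from dba_tables@DB1
  where owner = l_owner
  and table_name = l_full_tab;
  if l_exists = 0 then
    dbms_output.put_line(l_owner || '.' || l_full_tab || ' not found in target database');
  end if;
end;
/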
First of all, I don't recommend using the PARALLEL hint. It can kill your database if you have a lot of queries with PARALLEL hints.
I assume that the MAND_JOIN column always has a value.
create or replace procedure PROJECT is
lc_sql_template CONSTANT varchar2(4000) :=
'select count(*) ' || CHR(10) ||
' from #TableOwner.#TableName@DB1 b' || CHR(10) ||
' inner join #FullTableName@DB1 a ON #JoinCondition';
lv_row_count number;
lv_file_handle UTL_FILE.file_type;
lv_sql varchar2(32767);
BEGIN
utl_file.put_line(lv_file_handle, 'OWNER,TABLE_NAME,ROW_COUNT');
for rws in (select mt.*
from dba_tables@DB1 db
inner join meta_table mt
on mt.driver_table = db.table_name
and mt.owner = db.owner
where mt.driver_table is not null
and mt.additional_joins is null
order by mt.table_name)
loop
lv_sql := lc_sql_template;
lv_sql := replace(lv_sql, '#TableOwner' , rws.owner);
lv_sql := replace(lv_sql, '#TableName' , rws.driver_table);
lv_sql := replace(lv_sql, '#FullTableName' , rws.full_table);
lv_sql := replace(lv_sql, '#JoinCondition' , rws.mand_join);
$if $$DevMode = true $then -- I even recommend logging this all the time
your_log_package.info(lv_sql);
$end
execute immediate lv_sql into lv_row_count;
utl_file.put_line(lv_file_handle, rws.OWNER || ',' || rws.TABLE_NAME || ',' || lv_row_count);
end loop;
exception
when others then
your_log_package.error(lv_sql);
raise;
end PROJECT;
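Note that the $$DevMode inquiry directive above is driven by the PLSQL_CCFLAGS compilation parameter; one way to enable it before compiling the procedure is:
ALTER SESSION SET PLSQL_CCFLAGS = 'DevMode:TRUE';
If the flag is never set, $$DevMode evaluates to NULL and the logging branch is simply skipped (possibly with a compiler warning about the undefined flag).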
I need to know which columns of a table have only null values. I understand that I should loop over user_tab_columns, but how do I detect only the columns with null values?
Thanks, and sorry for my English.
To perform a query where you don't know the column identifiers in advance, you need to use dynamic SQL. Assuming you already know the table is not empty, you could do something like:
declare
l_count pls_integer;
begin
for r in (
select table_name, column_name
from user_tab_columns
where table_name = 'T42'
and nullable = 'Y'
)
loop
execute immediate 'select count(*) '
|| ' from "' || r.table_name || '"'
|| ' where "' || r.column_name || '" is not null'
into l_count;
if l_count = 0 then
dbms_output.put_line('Table ' || r.table_name
|| ' column ' || r.column_name || ' only has nulls');
end if;
end loop;
end;
/
Remember to set serveroutput on or your client's equivalent before executing.
The cursor gets the columns from the table which are declared as nullable (if they aren't, not much point checking them; though this won't catch explicit check constraints). For each column it builds a query to count the rows where that column is not null. If that count is zero then it didn't find any that are not null, therefore they all are. Again, assuming you know the table isn't empty before you start.
I've included the table name in the cursor select list and references so you only need to change the name in one place to search a different table, or you could use a variable for that name. Or check multiple tables at once by changing that filter.
You may get better performance by selecting a dummy value from any non-null row, with a rownum stop check - which means it will stop as soon as it finds a non-null value, rather than having to check every row to get an actual count:
declare
l_flag pls_integer;
begin
for r in (
select table_name, column_name
from user_tab_columns
where table_name = 'T42'
and nullable = 'Y'
)
loop
begin -- inner block to allow exception trapping within loop
execute immediate 'select 42 '
|| ' from "' || r.table_name || '"'
|| ' where "' || r.column_name || '" is not null'
|| ' and rownum < 2'
into l_flag;
-- if this found anything there is a non-null value
exception
when no_data_found then
dbms_output.put_line('Table ' || r.table_name
|| ' column ' || r.column_name || ' only has nulls');
end;
end loop;
end;
/
or you could do something similar with an exists() check.
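A sketch of that exists() variant, under the same assumptions (table T42, nullable columns only):
declare
  l_count pls_integer;
begin
  for r in (
    select table_name, column_name
    from user_tab_columns
    where table_name = 'T42'
    and nullable = 'Y'
  )
  loop
    -- exists() lets the optimiser stop at the first non-null row it finds
    execute immediate 'select count(*) from dual where exists ('
      || 'select null from "' || r.table_name || '"'
      || ' where "' || r.column_name || '" is not null)'
      into l_count;
    if l_count = 0 then
      dbms_output.put_line('Table ' || r.table_name
        || ' column ' || r.column_name || ' only has nulls');
    end if;
  end loop;
end;
/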
If you don't know that the table has data then you can do a simple count(*) from the table before the loop to check if it is empty, and report that instead:
...
begin
if l_count = 0 then
dbms_output.put_line('Table is empty');
return;
end if;
...
Or you could combine it with the cursor query, but this would need some work if you wanted to check multiple tables at once as it would stop as soon as it found any empty one (have to leave you something to do... *8-)
declare
l_count_any pls_integer;
l_count_not_null pls_integer;
begin
for r in (
select table_name, column_name
from user_tab_columns
where table_name = 'T42'
and nullable = 'Y'
)
loop
execute immediate 'select count(*),'
|| ' count(case when "' || r.column_name || '" is not null then 1 end)'
|| ' from "' || r.table_name || '"'
into l_count_any, l_count_not_null;
if l_count_any = 0 then
dbms_output.put_line('Table ' || r.table_name || ' is empty');
exit; -- only report once
elsif l_count_not_null = 0 then
dbms_output.put_line('Table ' || r.table_name
|| ' column ' || r.column_name || ' only has nulls');
end if;
end loop;
end;
/
You could of course populate a collection or make it a pipelined function or whatever if you didn't want to rely on dbms_output, but I assume this is a one-off check so it is probably acceptable.
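If you did want something queryable instead of dbms_output, a rough sketch of the pipelined-function route could look like this; the collection type and function names are made up for the example:
create type t_name_list as table of varchar2(261);
/
create or replace function null_only_columns (p_table in varchar2)
  return t_name_list pipelined
is
  l_count pls_integer;
begin
  for r in (
    select table_name, column_name
    from user_tab_columns
    where table_name = p_table
    and nullable = 'Y'
  )
  loop
    execute immediate 'select count(*) from "' || r.table_name
      || '" where "' || r.column_name || '" is not null'
      into l_count;
    if l_count = 0 then
      pipe row (r.table_name || '.' || r.column_name);
    end if;
  end loop;
  return;
end;
/
-- usage
select * from table(null_only_columns('T42'));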
You can loop through your columns and count the null rows. If that count matches your table's total row count, then the column has only null values.
The first question is whether a column in a table with zero rows should be regarded as a column containing only (null) values. That remains your decision: the scripts below provide solutions for both interpretations. (In my opinion: no, an empty column is not a column with only (null) values.)
If you want to know about the (null) values in one table, you can check with count(column):
select count(column) from table
and when count(column) = 0, the column either has only (null) values or the table has no rows at all, so from that alone you cannot make a correct decision.
E.g. the following three tables (x, y and z) have the following contents:
select * from x;
N_X M_X
---------------
100 (null)
200 (null)
300 (null)
select * from y;
N_Y M_Y
---------------
101 (null)
202 (null)
303 apple
select * from z;
N_Z M_Z
---------------
The count() selects:
select count(n_x), count(m_x) from x;
COUNT(N_X) COUNT(M_X)
-----------------------
3 0
select count(n_y), count(m_y) from y;
COUNT(N_Y) COUNT(M_Y)
-----------------------
3 1
select count(n_z), count(m_Z) from z;
COUNT(N_Z) COUNT(M_Z)
-----------------------
0 0
As you can see, the difference between x and y shows up, but for table z you cannot decide whether it has no rows or is full of (null) values.
The general solution:
I have separated the schema and the db level, but the basic idea is common:
Schema level: the current user’s table
DB level: all users or a chosen schema
The number of (null) values in one column:
all_tab_columns.num_nulls
(Or: user_tab_columns.num_nulls).
And we need the num_rows of the table:
all_all_tables.num_rows
(Or: user_all_tables.num_rows)
Where num_nulls equals num_rows, there are only (null) values.
First, you need to run DBMS_STATS to refresh the statistics.
on database level:
exec DBMS_STATS.GATHER_DATABASE_STATS;
(it can use a lot of resources)
on schema level:
EXEC DBMS_STATS.gather_schema_stats('TRANEE',DBMS_STATS.AUTO_SAMPLE_SIZE); (owner = tranee)
-- column with zero row = column has only (null) values -> exclude num_nulls > 0 condition
-- column with zero row <> column has only (null) values -> include num_nulls > 0 condition
the scripts:
-- 1. current user
select
a.table_name,
a.column_name,
a.num_nulls,
b.num_rows
from user_tab_columns a, user_all_tables b
where a.table_name = b.table_name
and num_nulls = num_rows
and num_nulls > 0;
-- 2. chosen user / all user -> exclude the a.owner = 'TRANEE' condition
select
a.owner,
a.table_name,
a.column_name,
a.num_nulls,
b.num_rows
from all_tab_columns a, all_all_tables b
where a.owner = b.owner
and a.table_name = b.table_name
and a.owner = 'TRANEE'
and num_nulls = num_rows
and num_nulls > 0;
TABLE_NAME COLUMN_NAME NUM_NULLS NUM_ROWS
----------------------------------------------------
LEADERS COMM 4 4
EMP_ACTION ACTION 12 12
X M_X 3 3
These tables and columns have only (null) values in tranee schema.
I would like to check all columns of all tables for entries which are not contained in the table of valid dates.
In other words, the values of all columns with type DATE have to be one of the entries in the table column CALENDAR.BD.
My problem is that executing the function by select checkAllDateColumns() from DUAL results only in
invalid table name, line 16
and I don't see why?
CREATE OR REPLACE FUNCTION checkAllDateColumns RETURN NUMBER IS
l_count NUMBER;
BEGIN
FOR t IN (SELECT table_name
FROM all_tables
WHERE owner = 'ASK_QUESTION')
LOOP
dbms_output.put_line ('Current table : ' || t.table_name);
FOR c IN (SELECT column_name
FROM all_tab_columns
WHERE TABLE_NAME = t.table_name AND data_type = 'DATE')
LOOP
execute immediate 'select count(*) from :1 where :2 not in (select BD from CALENDAR where is_business_day = 1)'
into l_count
using t.table_name, c.column_name;
IF (l_count > 0) THEN
RETURN 1;
END IF;
END LOOP;
END LOOP;
RETURN 0;
END checkAllDateColumns;
/
Btw: I am fine with the statement stopping on the first mismatch, currently I would like to figure out the dynamic sql...
You can't use a bind variable for the table or column names, only for column values. It's trying to interpret :1 as an identifier, it isn't using the value from the using clause. Your returning into clause should also just be into. This works:
execute immediate 'select count(*) from "' || t.table_name
|| '" where "' || c.column_name
|| '" not in (select BD from CALENDAR where is_business_day = 1)'
into l_count;
You might get a performance difference using a left-join approach, but it depends on your data:
execute immediate 'select count(*) from "' || t.table_name
|| '" t left join calendar c on c.bd = trunc(t."'
|| c.column_name || '") and c.is_business_day = 1 '
|| ' where c.bd is null'
into l_count;
I added a trunc() in case any of the fields might have times in them; you can do the same in the other version too of course.
And in both I've included double-quotes around the table and column names, just in case there are any quoted identifiers - without those there's a risk of getting an ORA-00904 'invalid identifier' error. But hopefully you don't have any to worry about anyway.
You also don't really need nested loops; you can either get the table name from all_tab_columns, or if you prefer, join all_tables and all_tab_columns in a single cursor. You should also be checking that the owner is the same in both tables, in case there are two versions of a table in different schemas.
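Putting those suggestions together, a sketch of the single-cursor version (keeping the schema name from the question) might look like:
CREATE OR REPLACE FUNCTION checkAllDateColumns RETURN NUMBER IS
  l_count NUMBER;
BEGIN
  FOR r IN (SELECT c.owner, c.table_name, c.column_name
            FROM all_tab_columns c
            JOIN all_tables t
              ON t.owner = c.owner
             AND t.table_name = c.table_name
            WHERE c.owner = 'ASK_QUESTION'
            AND c.data_type = 'DATE')
  LOOP
    EXECUTE IMMEDIATE 'select count(*) from "' || r.owner || '"."' || r.table_name
      || '" where "' || r.column_name
      || '" not in (select BD from CALENDAR where is_business_day = 1)'
      INTO l_count;
    IF l_count > 0 THEN
      RETURN 1;
    END IF;
  END LOOP;
  RETURN 0;
END checkAllDateColumns;
/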
There was a great answer from @ShannonSeverance on this question
Copying a row in the same table without having to type the 50+ column names (while changing 2 columns)
that showed how to dynamically copy a row within a table to the same table (changing the pk)
declare
r table_name%ROWTYPE;
begin
select *
into r
from table_name
where pk_id = "original_primary_key";
--
select pk_seq.nextval into r.pk_id from dual;
-- For 11g can use instead: r.pk_id := pk_seq.nextval;
r.fk_id := "new_foreign_key";
insert into table_name values r;
end;
I would like to apply this approach, but within a function that is called for each entry in an array of table names.
So basically I can do the select using execute immediate - but how do I declare 'r'? Can I replace 'table_name' in the code with a variable that is passed to the function?
table(1)="Table1";
table(2)="Table2";
for t 1..table.count loop
CopyTableContacts(table(i));
end loop;
TIA
Mike
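For reference: a %ROWTYPE record cannot be declared against a table name that is only known at run time, so to keep that approach the whole copy block itself has to be built dynamically. A minimal sketch of that idea - the table name, ID column and key values below are placeholders:
declare
  v_table varchar2(30) := 'MEMBER';   -- placeholder run-time table name
  v_sql   varchar2(4000);
begin
  v_sql := 'declare
              r ' || v_table || '%ROWTYPE;
            begin
              select * into r from ' || v_table || ' where id = :old_id;
              r.id := :new_id;
              insert into ' || v_table || ' values r;
            end;';
  execute immediate v_sql using 1001, 2002;   -- old and new key values
end;
/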
In the end, I have amended my function slightly
I now build 2 arrays
1 - the list of table names from the USER_TAB_COLUMNS table
2 - for each table in array1, I build a comma delimited list of the column names from the ALL_TAB_COLUMNS table
So I end up with 2 arrays (example...)
tableName(1) = 'MEMBER'
tableName(2) = 'SALARY'
tableColumns(1) = 'ID, SURNAME, SEX, DOB'
tableColumns(2) = 'ID, CURRENTSAL, BONUS, GRADE'
I then pass these 2 array values to my function and use some dynamic SQL while looping over the tableName() array...
PROCEDURE CopyTableRow(inOrigMemNo NUMBER, inNewMemNo NUMBER, inTableName USER_TAB_COLUMNS.TABLE_NAME%TYPE, inTableString LONG) AS
selectString VARCHAR2(32000):=null;
newTableString LONG:=null;
insertTableString LONG:=null;
sqlResultCount NUMBER:=0;
BEGIN
/*CHECK IF THERE IS AT LEAST ONE ROW TO COPY*/
EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM ' || inTableName || ' WHERE ID= ' || inOrigMemNo INTO sqlResultCount;
IF sqlResultCount > 0 THEN
/*BUILD INSERT STATEMENT FOR EACH ROW RETURNED*/
dbms_output.put_line('At least one row found on ' || inTableName || '(' || sqlResultCount || ')');
newTableString := REPLACE(inTableString, 'ID', 'REPLACE(ID, ID,' || inNewMemNo || ')');
selectString := 'SELECT ' || newTableString || ' FROM ' || inTableName || ' WHERE ID = ' || inOrigMemNo;
insertTableString := 'INSERT INTO ' || inTableName || '(' || inTableString || ') (' || selectString || ')';
END IF;
I am then left with an INSERT statement based on the table definition and value that I can execute
This seems to work fine and suits my current needs
NOTE - it only works if each table only has one row to copy. My next challenge is to cope with some of the tables returning multiple rows for the ID that require copying (which will be interesting seeing as I've painted myself into a non-cursor corner!)
Mike