How do I get all records associated with a record in an aggregate root table - oracle

For testing purposes, I would like to generate an insert script for all records in all tables associated with a particular record in one of the root tables. For example, I might have a "Participant" table, which has any number of associated entries in the "Documents" table, which in turn has any number of associated entries in the "PrintRequests" table and so on and so forth. I have hundreds of these tables in the database.
Is there any way to select/script out all the records in all tables that are associated with, for example, ParticipantId = 1? That way, for a representative participant, I can extract all the associated records in all the tables.
One of my ideas was to restore a backup of the full database, modify all foreign key constraints to cascade on delete, delete everything that is not ParticipantId = 1, let the database take care of removing everything unrelated to the participant of interest, and then script out what remains of the database.
For this, I might have to drop and recreate all the constraints, which I am unsure how to do across the entire database.
Alternatively, are there any other tools that would be able to do this? A migration tool, for example, that can take a query and only migrate the records and associated child records of that query?

While it is entirely possible to build scripts to walk through all the primary key and foreign key constraints and, via liberal use of dynamic SQL, generate these scripts, doing so would be a non-trivial undertaking. I would strongly suspect that you would be better served using a product like DataBee to generate your data subset.
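As a rough illustration of what that dictionary walk involves (a sketch only, assuming the root table is literally named PARTICIPANT and that you work in the owning schema), a query like this lists every table whose foreign key points at it; a full script would have to repeat this recursively to reach grandchildren such as PrintRequests:
select c.table_name      as child_table,
       c.constraint_name as fk_name,
       p.table_name      as parent_table
from   user_constraints c
join   user_constraints p on p.constraint_name = c.r_constraint_name
where  c.constraint_type = 'R'
and    p.table_name = 'PARTICIPANT';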

You can script it by creating dynamic SQL statements, but I think that is really a lot of work. I think it will be faster to just find all the tables with a column "ParticipantId", with something like this:
select * from all_tab_columns where column_name = 'PARTICIPANTID'
and then do some quick edit/replace or other scripting to generate the delete statements yourself.
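For example (purely a sketch, assuming every table stores the key in a column literally named PARTICIPANTID), a generator query could spool one delete statement per table:
select 'delete from ' || owner || '.' || table_name || ' where participantid = 1;' as delete_stmt
from   all_tab_columns
where  column_name = 'PARTICIPANTID';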
Regarding the constraints, it is similar. You can get all the constraints with:
SELECT owner, table_name, constraint_name
FROM dba_constraints
where table_name in (select table_name from all_tab_columns where column_name = 'PARTICIPANTID')
You enable and disable constraints using
ALTER TABLE <table name> ENABLE/DISABLE constraint <constraint name>;
Maybe you could do this with a loop. Borrowing from this page:
begin
  for i in (select constraint_name, table_name
              from user_constraints
             where table_name in (select table_name
                                    from all_tab_columns
                                   where column_name = 'PARTICIPANTID'))
  loop
    execute immediate 'alter table ' || i.table_name || ' disable constraint ' || i.constraint_name;
  end loop;
end;
I am not sure about your cascading-delete idea, but the above gives a bit of an idea of how the delete could look:
begin
  for i in (select constraint_name, table_name
              from user_constraints
             where table_name in (select table_name
                                    from all_tab_columns
                                   where column_name = 'PARTICIPANTID'))
  loop
    execute immediate 'delete from ' || i.table_name || ' where participantid = ''1''';
  end loop;
end;
Hope it helps.

Related

Oracle Apex application

We are doing a project in Oracle APEX for university. We have 12 tables and are trying to build an app for our project. When we try to add a new page for some of our tables (not all of them) we encounter an error (see the linked error description).
Does anyone know how to solve this issue? It is really blocking us right now.
We have tried everything to solve it. All of the constraints on our tables work. What we don't understand is why we can sometimes create new pages from some tables, but for others it does not work.
To me, that (unfortunately) looks like a bug, as you don't have any control over APEX's data dictionary tables.
If you connect as a privileged user and check what exactly is being violated (the ORA-00001 message names WWV_DICTIONARY_CACHE_OBJ_IDX2), you can trace it like this.
Which table is that constraint related to? Apparently, none:
SQL> select table_name from dba_constraints where owner = 'APEX_200200' and constraint_name = 'WWV_DICTIONARY_CACHE_OBJ_IDX2';
no rows selected
Any luck with (unique) indexes, then? Yes!
SQL> select table_name from dba_indexes where owner = 'APEX_200200' and index_name = 'WWV_DICTIONARY_CACHE_OBJ_IDX2';
TABLE_NAME
------------------------------
WWV_DICTIONARY_CACHE_OBJ
Which columns are used to enforce uniqueness?
SQL> select column_name from dba_ind_columns where index_name = 'WWV_DICTIONARY_CACHE_OBJ_IDX2';
COLUMN_NAME
--------------------------------------------------------------------------------
SECURITY_GROUP_ID
OBJECT_ID
OBJECT_TYPE
SQL>
That's to get you started; you know which table you used for that page, so write some more queries and you'll, hopefully, find some more info.
How to "fix" that error? I hope you won't delete or update anything in APEX's dictionary tables! Maybe you'd rather rename your table (to avoid the uniqueness violation) and try to use it, with its new name, while creating the page in your application.
If a workspace contains other object types with the same name as a table, APEX data dictionary cache job, ORACLE_APEX_DICTIONARY_CACHE, fails with ORA-00001: UNIQUE CONSTRAINT (APEX_190200.WWV_DICTIONARY_CACHE_OBJ_IDX1)
Workaround: Remove the duplicate object that is not a table. You can list database objects by selecting from sys.dba_objects.
Oracle APEX 19.2 Known Issues
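As a quick, hedged check in your own schema (the known issue above points at a name clash between a table and some other object type), a query like this lists names that occur more than once:
select object_name,
       listagg(object_type, ', ') within group (order by object_type) as object_types
from   user_objects
group by object_name
having count(*) > 1;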

How to make insert statement re-runnable?

I need to add the two following insert statements:
insert into table1(schema, table_name, table_alias)
values ('ref_owner','test_table_1','tb1');
insert into table1(schema, table_name, table_alias)
values ('dba_owner','test_table_2','tb2');
The question is: how can I make those two insert statements re-runnable, meaning that if they are run again, they should throw a "row exists" error or something along those lines...?
Additional notes:
1. I've seen examples of MERGE in Oracle; however, that's only when you're using two tables to match records. In this case I'm only using a single table.
2. The table does not have any primary, unique or foreign keys - only check constraints on one of the columns.
Any help is highly appreciated.
You can use a MERGE statement, as follows:
MERGE INTO table1 t1
USING (SELECT 'ref_owner' AS SCHEMA_NAME, 'test_table_1' AS TABLE_NAME, 'tb1' AS ALIAS_NAME FROM DUAL
       UNION ALL
       SELECT 'dba_owner', 'test_table_2', 'tb2' FROM DUAL) d
ON (t1.SCHEMA = d.SCHEMA_NAME AND t1.TABLE_NAME = d.TABLE_NAME)
WHEN NOT MATCHED THEN
  INSERT (SCHEMA, TABLE_NAME, TABLE_ALIAS)
  VALUES (d.SCHEMA_NAME, d.TABLE_NAME, d.ALIAS_NAME);
Best of luck.
You should have a primary key, especially when you want to check for duplicate records and ensure data integrity.
Provide a primary key for your table, or, if you somehow do not want to do that, create a unique constraint for all of the columns in the table, so no duplicate rows are possible.
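For instance (only a sketch, reusing the column names from the inserts above and an assumed constraint name), such a unique constraint could look like this:
ALTER TABLE table1
  ADD CONSTRAINT table1_uq UNIQUE (schema, table_name, table_alias);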

How to change the datatypes of lots of table columns from one to another in Oracle

I have a table with a lot of columns of BLOB type and I need to change them to NVARCHAR2.
So, to change the type I can use the following script:
alter table AUDIT_LOG
modify
(
column_name type_name,
column_name2 type_name2
-- etc
);
And to get all columns with a given datatype I can use the following:
select column_name, 'NVARCHAR2(4000)'
from all_tab_columns
where table_name = 'TAB_NAME' and data_type = 'BLOB';
But how can I join these two scripts into one?
You cannot do a DML and a DDL operation together in the same query. You have to use dynamic SQL in a PL/SQL block:
Create a variable and generate the whole ALTER TABLE statement in it.
Then run it with EXECUTE IMMEDIATE.
Refer to this and I am sure you will be able to add the rest of the logic as per your requirements.
http://www.java2s.com/Tutorial/Oracle/0440__PL-SQL-Statements/EXECUTEIMMEDIATEdynamicsqltoaltersession.htm
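A minimal sketch of that approach, assuming the table is AUDIT_LOG as in the question. Note that Oracle may refuse a direct BLOB-to-NVARCHAR2 conversion on columns that already contain data, so treat this purely as an illustration of the dynamic-SQL mechanics:
declare
  l_sql varchar2(32767);
begin
  -- build one ALTER TABLE ... MODIFY (...) statement from the data dictionary
  -- (assumes at least one BLOB column exists in AUDIT_LOG)
  select 'alter table AUDIT_LOG modify ('
         || listagg(column_name || ' NVARCHAR2(4000)', ', ') within group (order by column_name)
         || ')'
  into   l_sql
  from   all_tab_columns
  where  table_name = 'AUDIT_LOG'
  and    data_type  = 'BLOB';
  execute immediate l_sql;
end;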

Extract table(s) having numbers in their names

I am using Oracle Developer; my DB has 150+ tables with different names.
I want to extract all tables having names like
tbl_1234
tbl_22
tbl_45
tbl_719
i.e. all tables whose naming convention is "table name, underscore, number".
Please help me with this.
Try the following query:
select table_name from user_tables where regexp_like (table_name, '_[0-9]+$');
And you can, of course, use the all_tables or dba_tables views if you have appropriate rights.
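If the convention is strictly one word, an underscore, then digits only, a tighter pattern (just a suggestion, assuming the usual uppercase storage of names) might be:
select table_name
from   user_tables
where  regexp_like(table_name, '^[A-Z]+_[0-9]+$');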

sybase insert from another database in same server

I am trying to get all the extra data from one database and insert it into another.
But I want to omit the column names and make only the table name hard-coded to achieve this. However, some fields in a table are system generated, like an id, which is not strictly necessary data but will still create an integrity issue. How can I do an insert of just the wanted details, omitting those columns? The names of the columns to omit also change. I can't do a total insert, just the addition of some extra data.
So far I have come to this:
while 1=1
begin
    if exists (select 1 from db1.table1 not in (select * from db2.table1))
    begin
        insert into db2.table1 (columns) select (columns) from db1.table1
    end
    if (rowCount = 0)
        break
end
Please advise how I can optimize this with the least possible hard coding.
I have left the PK part out intentionally, as the query is big.
If you want to do something like:
insert into TAB
select * from TAB2
or
insert into TAB
select col1,col2 from TAB2
or
insert into TAB (col1,col2)
select * from TAB2
where TAB and TAB2 have a different number or type of columns, it's not possible, because it will generate an error.
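So the column lists have to line up on both sides. As a hedged sketch, following the question's db1.table1 / db2.table1 notation and using hypothetical column names col1 and col2 in place of the real, matching columns (the duplicate check is only indicative, since the actual key was left out of the question):
insert into db2.table1 (col1, col2)
select s.col1, s.col2
from   db1.table1 s
where  not exists (select 1 from db2.table1 t where t.col1 = s.col1)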
