We have 2 tables as shown below:
Table A:

ROWNUM   description
------   -------------------------------
1        {"to": "+1111", "from": "9999"}
2        {"to": "+5555", "from": "8888"}

Table B:

COL1    COL2
-----   ----
+1111   222
+5555   666
Please help me with an Oracle query that replaces part of the description column in Table A using Table B.
The number after the "to" key, i.e. +1111 or +5555 in Table A's description column, should be matched against COL1 of Table B and replaced with the corresponding COL2 value.
For example: replace +1111 with 222 and +5555 with 666 in Table A.
Table A should look like this after running the query.
Table A:

ROWNUM   description
------   -----------------------------
1        {"to": "222", "from": "9999"}
2        {"to": "666", "from": "8888"}
Thanks in advance :)
You can use the JSON object types that PL/SQL provides for this, such as:
DECLARE
   v_jsoncol       tableA.description%TYPE;
   v_json_obj      json_object_t;
   v_new_jsoncol   tableA.description%TYPE;
   v_col1          tableB.col1%TYPE;
   v_col2          VARCHAR2(25);
   l_key_list      json_key_list;
BEGIN
   FOR c IN (SELECT * FROM tableA)
   LOOP
      -- parse the JSON document held in the description column
      v_json_obj := TREAT(json_element_t.parse(c.description) AS json_object_t);
      l_key_list := v_json_obj.get_keys;

      FOR i IN 1 .. l_key_list.COUNT
      LOOP
         IF l_key_list(i) = 'to' THEN
            -- look up the replacement value in tableB
            v_col1 := v_json_obj.get_string(l_key_list(i));

            SELECT TO_CHAR(col2)
              INTO v_col2
              FROM tableB
             WHERE col1 = v_col1;

            -- overwrite the "to" member and write the document back
            v_json_obj.put(l_key_list(i), v_col2);
            v_new_jsoncol := v_json_obj.to_string;

            UPDATE tableA
               SET description = v_new_jsoncol
             WHERE row_num = c.row_num;
         END IF;
      END LOOP;
   END LOOP;
END;
/
Demo
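To verify the result after running the block, a quick check (a sketch assuming the same demo column names row_num and description) could be:

SELECT a.row_num,
       JSON_VALUE(a.description, '$.to')   AS to_value,
       JSON_VALUE(a.description, '$.from') AS from_value
FROM   tableA a
ORDER BY a.row_num;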
I used INSTR to find the positions of the 3rd and 4th double-quote characters, extracted the value between them with SUBSTR, and replaced it with the result of a subquery against the other table.
Note: ROWNUM is a reserved word and DESCRIPTION is an Oracle keyword, so I advise not using them as column names.
Here is the final code:
SELECT ROWNUM,
       REPLACE(description,
               SUBSTR(description, INSTR(description, '"', 1, 3) + 1,
                      INSTR(description, '"', 1, 4) - INSTR(description, '"', 1, 3) - 1),
               (SELECT col2
                  FROM tblB
                 WHERE col1 = SUBSTR(description, INSTR(description, '"', 1, 3) + 1,
                                     INSTR(description, '"', 1, 4) - INSTR(description, '"', 1, 3) - 1))
              )
  FROM tblA
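The SELECT above only shows the replaced text. If you actually need to persist the change in tblA (a hedged sketch reusing the same INSTR/SUBSTR logic and the table names from the query above), an UPDATE could look like this:

UPDATE tblA a
SET    a.description = (
         SELECT REPLACE(a.description,
                        SUBSTR(a.description, INSTR(a.description, '"', 1, 3) + 1,
                               INSTR(a.description, '"', 1, 4) - INSTR(a.description, '"', 1, 3) - 1),
                        b.col2)
           FROM tblB b
          WHERE b.col1 = SUBSTR(a.description, INSTR(a.description, '"', 1, 3) + 1,
                                INSTR(a.description, '"', 1, 4) - INSTR(a.description, '"', 1, 3) - 1)
       )
WHERE  EXISTS (SELECT 1
                 FROM tblB b
                WHERE b.col1 = SUBSTR(a.description, INSTR(a.description, '"', 1, 3) + 1,
                                      INSTR(a.description, '"', 1, 4) - INSTR(a.description, '"', 1, 3) - 1));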
Don't use string functions for this. You should use JSON functions and can use JSON_MERGEPATCH:
MERGE INTO table_a dst
USING (
SELECT a.ROWID AS rid,
b.col2
FROM table_a a
INNER JOIN table_b b
ON JSON_VALUE(a.description, '$.to' RETURNING VARCHAR2(10)) = b.col1
) src
ON (dst.ROWID = src.RID)
WHEN MATCHED THEN
UPDATE
SET description = JSON_MERGEPATCH(
dst.description,
JSON_OBJECT(KEY 'to' VALUE src.col2)
);
Which, for your sample data:
CREATE TABLE Table_A (description CLOB CHECK (description IS JSON));
INSERT INTO table_a (description)
SELECT '{"to": "+1111", "from": "9999"}' FROM DUAL UNION ALL
SELECT '{"to": "+5555", "from": "8888"}' FROM DUAL;
CREATE TABLE Table_B (COL1, COL2) AS
SELECT '+1111', 222 FROM DUAL UNION ALL
SELECT '+5555', 666 FROM DUAL;
Then:
SELECT * FROM table_a;
Outputs:
DESCRIPTION
{"to":222,"from":"9999"}
{"to":666,"from":"8888"}
db<>fiddle here
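Note that JSON_MERGEPATCH above stores the new value as a JSON number (222) rather than the string "222" shown in the question's expected output. If you need it to remain a string (a hedged variation of the same MERGE), convert COL2 to character in the source query:

MERGE INTO table_a dst
USING (
  SELECT a.ROWID AS rid,
         TO_CHAR(b.col2) AS col2  -- VARCHAR2 value, so JSON_OBJECT emits a JSON string
  FROM   table_a a
         INNER JOIN table_b b
         ON JSON_VALUE(a.description, '$.to' RETURNING VARCHAR2(10)) = b.col1
) src
ON (dst.ROWID = src.rid)
WHEN MATCHED THEN
  UPDATE
  SET description = JSON_MERGEPATCH(
        dst.description,
        JSON_OBJECT(KEY 'to' VALUE src.col2)
      );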
I have to compare the data differences between the two tables below. I have achieved this by writing a MINUS query, but that does not fit the current assignment because some tables have 50-60 columns and I would have to list every column before each execution.
I have followed the expert's response but have not succeeded in achieving the goal. Basically I want to write a procedure which:
1. Accepts both table names as parameters.
2. Fetches all the columns of CustomerTable.
3. Runs a MINUS query between CustomerTable and StagingCustTable using only the columns fetched in step 2.
4. Logs any differences.
CustomerTable: Custromer_Number, Address, order_Number, Contact, Country, Post_Code, Amount
StagingCustTable: Custromer_Number, Address, order_Number, Contact, Country, Post_Code, Amount, Run_Id, Record_Id
I would not use a procedure but a query that generates the final query - a kind of dynamic SQL.
Simple example - let's say we have the following tables and data in them:
CREATE TABLE CustomerTable(
Custromer_Number int,
Address varchar2(100),
order_Number int,
Contact int,
Country varchar2(10),
Post_Code varchar2(10),
Amount number
);
INSERT ALL
INTO CustomerTable VALUES (1, 'aaa', 1, 1, 'AA', '111', 111.11 )
INTO CustomerTable VALUES (2, 'bbb', 2, 2, 'BB', '222', 222.22 )
SELECT 1 FROM dual;
CREATE TABLE StagingCustTable
AS SELECT t.*, 1 As run_id, 1 as record_id
FROM CustomerTable t
WHERE 1=0;
INSERT ALL
INTO StagingCustTable VALUES (1, 'aaa', 1, 1, 'AA', '111', 111.11, 1, 1 )
INTO StagingCustTable VALUES (3, 'ccc', 3, 3, 'CC', '333', 333.33, 3, 3 )
SELECT 1 FROM dual;
commit;
Now when you run this simple query:
SELECT 'SELECT ' || listagg( column_name, ',' ) WITHIN GROUP ( ORDER BY column_id )
|| chr(10) || ' FROM ' || max( table_name )
|| chr(10) || ' MINUS '
|| chr(10) || 'SELECT ' || listagg( column_name, ',' ) WITHIN GROUP ( ORDER BY column_id )
|| chr(10) || ' FROM StagingCustTable ' as MySql
FROM user_tab_columns
WHERE table_name = upper( 'CustomerTable' );
you will get the following result:
MYSQL
-------------------------------------------------------------------------
SELECT CUSTROMER_NUMBER,ADDRESS,ORDER_NUMBER,CONTACT,COUNTRY,POST_CODE,AMOUNT
FROM CUSTOMERTABLE
MINUS
SELECT CUSTROMER_NUMBER,ADDRESS,ORDER_NUMBER,CONTACT,COUNTRY,POST_CODE,AMOUNT
FROM StagingCustTable
Now just copy the above query, paste it to your SQL client, run it - and the task is done in a few minutes.
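If you do need the procedure the question asks for (accept both table names, build the column list, run the MINUS and report the outcome), a minimal sketch along the same lines could look like the block below. The procedure name, and the use of DBMS_OUTPUT instead of a real log table, are my own assumptions.

CREATE OR REPLACE PROCEDURE compare_tables (
    p_source_table  IN VARCHAR2,
    p_staging_table IN VARCHAR2
)
IS
    v_cols VARCHAR2(4000);
    v_sql  VARCHAR2(32767);
    v_diff PLS_INTEGER;
BEGIN
    -- column list taken from the source table only, so extra staging
    -- columns such as run_id and record_id are ignored
    SELECT listagg(column_name, ',') WITHIN GROUP (ORDER BY column_id)
      INTO v_cols
      FROM user_tab_columns
     WHERE table_name = UPPER(p_source_table);

    -- in real code, validate the names first, e.g. with dbms_assert.simple_sql_name
    v_sql := 'SELECT COUNT(*) FROM ('
          || 'SELECT ' || v_cols || ' FROM ' || p_source_table
          || ' MINUS '
          || 'SELECT ' || v_cols || ' FROM ' || p_staging_table
          || ')';

    EXECUTE IMMEDIATE v_sql INTO v_diff;

    -- replace with an INSERT into your own log table if required
    DBMS_OUTPUT.put_line(p_source_table || ' vs ' || p_staging_table
                         || ': ' || v_diff || ' differing row(s)');
END compare_tables;
/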
I created a dummy database for learning purposes, and I purposely created some duplicated records in one of the tables. In every case I want to flag one of the duplicated records as Latest = 'Y' and the other as 'N', while every non-duplicated record should have the Latest flag set to 'Y'.
I tried to use PL/SQL to go through all of my records, but when I try to use the previously calculated value (which would tell me that it is a duplicated record) it fails with:
ORA-06550: line 20, column 17:
PLS-00201: identifier 'COUNTER' must be declared
Here is the statement I try to use:
DECLARE
CURSOR cur
IS
SELECT order_id, order_date, person_id,
amount, successfull_order, country_id, latest, ROWCOUNT AS COUNTER
FROM (SELECT order_id,
order_date,
person_id,
amount,
successfull_order,
country_id,
latest,
ROW_NUMBER () OVER (PARTITION BY order_id, order_date,
person_id, amount, successfull_order, country_id
ORDER BY order_id, order_date,
person_id, amount, successfull_order, country_id) ROWCOUNT
FROM orders) orders
FOR UPDATE OF orders.latest;
rec cur%ROWTYPE;
BEGIN
FOR rec IN cur
LOOP
IF MOD (COUNTER, 2) = 0
THEN
UPDATE orders
SET latest = 'N'
WHERE CURRENT OF cur;
ELSE
UPDATE orders
SET latest = 'Y'
WHERE CURRENT OF cur;
END IF;
END LOOP;
END;
I am new to PL/SQL, so I tried to modify the statements I found here:
http://www.adp-gmbh.ch/ora/plsql/cursors/for_update.html
What should I change in my statement, or should I use a different approach?
Thanks for your answers in advance!
Botond
You refer to the ROW_NUMBER() value (aliased as COUNTER) in your cursor.
While fetching, you should access it through the cursor record, like MOD(rec.COUNTER, 2).
You need to declare the variable COUNTER and then maintain (i.e. increment) it in your loop.
I suspect that your example is just for learning PL/SQL. However, be aware that it is often much more performant to do things in a single SQL statement than with cursor loops.
Your issue is that COUNTER is an attribute of the cursor record rec and not a PL/SQL variable. So:
IF MOD (COUNTER, 2) = 0
Should be:
IF MOD (rec.COUNTER, 2) = 0
However, you do not need to use PL/SQL or cursors; it can be done in a single MERGE statement:
Oracle Setup:
CREATE TABLE orders ( order_id, order_date, latest ) AS
SELECT 1, DATE '2017-01-01', CAST( NULL AS CHAR(1) ) FROM DUAL UNION ALL
SELECT 1, DATE '2017-01-02', NULL FROM DUAL UNION ALL
SELECT 1, DATE '2017-01-03', NULL FROM DUAL UNION ALL
SELECT 2, DATE '2017-01-04', NULL FROM DUAL UNION ALL
SELECT 2, DATE '2017-01-01', NULL FROM DUAL UNION ALL
SELECT 3, DATE '2017-01-06', NULL FROM DUAL;
Update Statement:
MERGE INTO orders dst
USING ( SELECT ROW_NUMBER() OVER ( PARTITION BY order_id
ORDER BY order_date DESC ) AS rn
FROM orders
) src
ON ( src.ROWID = dst.ROWID )
WHEN MATCHED THEN
UPDATE SET latest = CASE src.rn WHEN 1 THEN 'Y' ELSE 'N' END;
Output:
SELECT * FROM orders;
ORDER_ID ORDER_DATE LATEST
-------- ---------- ------
1 2017-01-01 N
1 2017-01-02 N
1 2017-01-03 Y
2 2017-01-04 Y
2 2017-01-01 N
3 2017-01-06 Y
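As a quick sanity check (a sketch against the sample table above), you can confirm that each order_id ends up with exactly one row flagged 'Y':

SELECT order_id,
       COUNT(CASE WHEN latest = 'Y' THEN 1 END) AS latest_rows,
       COUNT(*)                                 AS total_rows
FROM   orders
GROUP BY order_id
ORDER BY order_id;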
I want a query in Oracle to get the column name from a table by passing a value.
In most cases we write a query like select * from table where column = 'value', but in my case I don't know the column name.
Can anyone suggest an approach?
Thanks in advance...
You can try to build a dynamic query to check all the tables of your DB.
setup:
create table tab1 ( v1 varchar2(100), n1 number, v1b varchar2(100));
create table tab2 ( v2 varchar2(100), n2 number, v2b varchar2(100));
create table tab3 ( v3 varchar2(100), n3 number, v3b varchar2(100));
insert into tab1 values ('Maria', 1, 'aa');
insert into tab1 values ('xx', 2, 'bb');
insert into tab2 values ('yy', 3, 'Maria');
insert into tab2 values ('zz', 3, 'cc');
insert into tab3 values ('WW', 4, 'DD');
build the dynamic query:
select 'select table_name,
matches from (' || listagg(statement, ' UNION ALL ') within group (order by table_name) || ')
where matches > 0'
from (
select 'select ''' || table_name ||
''' as TABLE_NAME, count(1) as MATCHES from ' || table_name || ' WHERE ' ||
listagg(column_name || ' = ''Maria''', ' OR ') within group (order by column_name) as statement,
table_name
from user_tab_columns col
where data_type = 'VARCHAR2'
group by table_name
)
This will return a query that you can run to check all the tables; in my example, it builds the following query (formatted here for readability):
SELECT table_name, matches
FROM (SELECT 'TAB1' AS TABLE_NAME, COUNT(1) AS MATCHES
FROM TAB1
WHERE V1 = 'Maria'
OR V1B = 'Maria'
UNION ALL
SELECT 'TAB2' AS TABLE_NAME, COUNT(1) AS MATCHES
FROM TAB2
WHERE V2 = 'Maria'
OR V2B = 'Maria'
UNION ALL
SELECT 'TAB3' AS TABLE_NAME, COUNT(1) AS MATCHES
FROM TAB3
WHERE V3 = 'Maria'
OR V3B = 'Maria')
WHERE matches > 0;
Running this query will give:
TABL MATCHES
---- ----------
TAB1 1
TAB2 1
Please notice that I used USER_TAB_COLUMNS, thus searching only the tables of the login schema; if you want to search other schemas, you can use ALL_TAB_COLUMNS or DBA_TAB_COLUMNS, depending on what you need and on the privileges of your user; see here for more.
Also, consider that USER_TAB_COLUMNS returns the columns of both tables and views; if you want to limit your search to tables, you can join USER_TAB_COLUMNS (or ALL_TAB_COLUMNS, DBA_TAB_COLUMNS) to USER_TABLES (or ALL_TABLES, DBA_TABLES) on TABLE_NAME, or on TABLE_NAME and OWNER if you use the ALL or DBA views:
SQL> create view vTab1 as select * from tab1;
View created.
SQL> select count(1)
2 from user_tab_columns
3 where table_name = 'VTAB1';
COUNT(1)
----------
3
SQL> select count(1)
2 from user_tab_columns
3 inner join user_tables using(table_name)
4 where table_name = 'VTAB1';
COUNT(1)
----------
0
SQL>
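For example, a hedged variant of the generator above that searches only real tables (joining USER_TAB_COLUMNS to USER_TABLES as just described) might look like this:

select 'select table_name, matches from ('
       || listagg(statement, ' UNION ALL ') within group (order by table_name)
       || ') where matches > 0' as final_query
from (
  select 'select ''' || col.table_name || ''' as table_name, count(1) as matches from '
         || col.table_name || ' where '
         || listagg(col.column_name || ' = ''Maria''', ' or ')
              within group (order by col.column_name) as statement,
         col.table_name
    from user_tab_columns col
         inner join user_tables tab on tab.table_name = col.table_name
   where col.data_type = 'VARCHAR2'
   group by col.table_name
);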
select table_name from user_Tables where table_name = 'bogus';
I have an Oracle database with two schemas: one is old and the other is new. I would like to update the old schema with the new columns from the new schema.
I find the tables which have changes with the following query.
select distinct table_name
from
(
select table_name,column_name
from all_tab_cols
where owner = 'SCHEMA_1'
minus
select table_name,column_name
from all_tab_cols
where owner = 'SCHEMA_2'
)
With this query I get the tables. How can I update the old schema tables with the new columns? I don't need the data, just the columns.
A schema comparison tool is a good idea. The database schema is far more complicated than most people give credit, and every difference between two database schemas has the potential to cause bugs.
If you're still keen to do it yourself, the best approach I've found is to extract the schema definitions to text, then run a text compare. As long as everything is sorted alphabetically, you can then use the Compare Documents feature in Microsoft Word (or FC.EXE, DIFF or equivalent) to highlight the differences.
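As an aside (my own suggestion, not part of the script below), DBMS_METADATA can also dump object definitions to text for such a compare, e.g.:

SET LONG 1000000 PAGESIZE 0

SELECT dbms_metadata.get_ddl('TABLE', table_name, 'SCHEMA_1')
FROM   all_tables
WHERE  owner = 'SCHEMA_1'
ORDER BY table_name;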
The following SQLPlus script outputs the schema definition alphabetically, to allow comparison. There are two sections. The first section lists each column, in the format:
table_name.column_name: data_type = data_default <nullable>
The second section lists indexes and constraints, as follows:
PK constraint_name on table_name (pk_column_list)
FK constraint_name on table_name (fk_column_list)
CHECK constraint_name on table_name (constraint_definition)
The script serves as a useful reference for extracting some of the Oracle schema details. This can be good knowledge to have when you're out at client sites and don't have your usual tools available, or when security policies prevent you from accessing a client site database directly from your own PC.
set serveroutput on;
set serveroutput on size 1000000;
declare
rowcnt pls_integer := 0;
cursor c_column is
select table_name, column_name, data_type,
data_precision, data_length, data_scale,
data_default, nullable,
decode(data_scale, null, null, ',') scale_comma,
decode(default_length, null, null, '= ') default_equals
from all_tab_columns where owner = 'BCC'
order by table_name, column_name;
cursor c_constraint is
select c.table_name, c.constraint_name,
decode(c.constraint_type,
'P','PK',
'R','FK',
'C','CHECK',
c.constraint_type) constraint_type,
c.search_condition,
cc.column_1||cc.comma_2||cc.column_2||cc.comma_3||cc.column_3||cc.comma_4||cc.column_4||
cc.comma_5||cc.column_5||cc.comma_6||cc.column_6||cc.comma_7||cc.column_7 r_columns
from all_constraints c,
( select owner, table_name, constraint_name, nvl(max(position),0) max_position,
max( decode( position, 1, column_name, null ) ) column_1,
max( decode( position, 2, decode(column_name, null, null, ',' ), null ) ) comma_2,
max( decode( position, 2, column_name, null ) ) column_2,
max( decode( position, 3, decode(column_name, null, null, ',' ), null ) ) comma_3,
max( decode( position, 3, column_name, null ) ) column_3,
max( decode( position, 4, decode(column_name, null, null, ',' ), null ) ) comma_4,
max( decode( position, 4, column_name, null ) ) column_4,
max( decode( position, 5, decode(column_name, null, null, ',' ), null ) ) comma_5,
max( decode( position, 5, column_name, null ) ) column_5,
max( decode( position, 6, decode(column_name, null, null, ',' ), null ) ) comma_6,
max( decode( position, 6, column_name, null ) ) column_6,
max( decode( position, 7, decode(column_name, null, null, ',' ), null ) ) comma_7,
max( decode( position, 7, column_name, null ) ) column_7
from all_cons_columns
group by owner, table_name, constraint_name ) cc
where c.owner = 'BCC'
and c.generated != 'GENERATED NAME'
and cc.owner = c.owner
and cc.table_name = c.table_name
and cc.constraint_name = c.constraint_name
order by c.table_name,
decode(c.constraint_type,
'P','PK',
'R','FK',
'C','CHECK',
c.constraint_type) desc,
c.constraint_name;
begin
for c_columnRow in c_column loop
dbms_output.put_line(substr(c_columnRow.table_name||'.'||c_columnRow.column_name||': '||
c_columnRow.data_type||'('||
nvl(c_columnRow.data_precision, c_columnRow.data_length)||
c_columnRow.scale_comma||c_columnRow.data_scale||') '||
c_columnRow.default_equals||c_columnRow.data_default||
' <'||c_columnRow.nullable||'>',1,255));
rowcnt := rowcnt + 1;
end loop;
for c_constraintRow in c_constraint loop
dbms_output.put_line(substr(c_constraintRow.constraint_type||' '||c_constraintRow.constraint_name||' on '||
c_constraintRow.table_name||' ('||
c_constraintRow.search_condition||
c_constraintRow.r_columns||') ',1,255));
if length(c_constraintRow.constraint_type||' '||c_constraintRow.constraint_name||' on '||
c_constraintRow.table_name||' ('||
c_constraintRow.search_condition||
c_constraintRow.r_columns||') ') > 255 then
dbms_output.put_line('... '||substr(c_constraintRow.constraint_type||' '||c_constraintRow.constraint_name||' on '||
c_constraintRow.table_name||' ('||
c_constraintRow.search_condition||
c_constraintRow.r_columns||') ',256,251));
end if;
rowcnt := rowcnt + 1;
end loop;
end;
/
Unfortunately, there are a few limitations:
Embedded carriage returns and whitespace in data_defaults, and check constraint definitions, may be highlighted as differences, even though they have zero effect on the schema.
Does not include alternate keys, unique indexes or performance indexes. This would require a third SELECT statement in the script, referencing the all_ind_columns and all_indexes catalog views (a hedged sketch of such a query follows this list).
Does not include security details, synonyms, packages, triggers, etc. Packages and triggers would be best compared using an approach similar to the one you originally proposed. Other aspects of the schema definition could be added to the above script.
The FK definitions above identify the referencing foreign key columns, but not the PK or the table being referenced. Just one more detail I never got around to doing.
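For the second limitation, a sketch of the kind of index query that could be added (my own addition, formatted to resemble the constraint section) might be:

select i.table_name, i.index_name, i.uniqueness,
       listagg(ic.column_name, ',') within group (order by ic.column_position) as index_columns
  from all_indexes i
       join all_ind_columns ic
         on  ic.index_owner = i.owner
         and ic.index_name  = i.index_name
 where i.owner = 'BCC'
 group by i.table_name, i.index_name, i.uniqueness
 order by i.table_name, i.index_name;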
Even if you don't use the script, there's a certain techie pleasure in playing with this stuff. ;-)
Matthew
I'm afraid I can't do more for you at the moment, but this should give you a basic idea.
It selects ADD and DROP column statements that you could execute after carefully reviewing them.
It does not handle
created/dropped tables
data type/precision changes of existing columns (ALTER TABLE MODIFY)
DEFAULT VALUES (so you can't apply it on a table with data when the new column is NOT NULL)
Check constraints, Foreign Key constraints
I tried it with some basic data-types (NUMBER, VARCHAR2, DATE) and it worked. Good luck :)
SELECT 'ALTER TABLE ' || LOWER(table_name)
|| ' ADD ' || LOWER(column_name) || ' ' || data_type
|| CASE WHEN data_type NOT IN ('DATE') THEN '(' || data_length || ')' END
|| CASE WHEN nullable = 'N' THEN ' NOT NULL' END
|| ';' cmd
FROM all_tab_cols c2
WHERE owner = 'SCHEMA_1'
AND NOT EXISTS ( SELECT 1
FROM all_tab_cols c1
WHERE owner = 'SCHEMA_2'
AND c1.table_name = c2.table_name
AND c1.column_name = c2.column_name )
UNION ALL
SELECT 'ALTER TABLE ' || LOWER(table_name)
|| ' DROP COLUMN ' || LOWER(column_name) || ';'
FROM all_tab_cols c2
WHERE owner = 'SCHEMA_2'
AND NOT EXISTS ( SELECT 1
FROM all_tab_cols c1
WHERE owner = 'SCHEMA_1'
AND c1.table_name = c2.table_name
AND c1.column_name = c2.column_name )
ORDER BY cmd;
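One way to use the output (a sketch, assuming SQL*Plus and a script file name of your own choosing) is to spool the generated statements to a file, review them, and only then run the file:

SET PAGESIZE 0 LINESIZE 200 FEEDBACK OFF HEADING OFF
SPOOL sync_schema_1.sql

-- run the generator query from above here

SPOOL OFF
-- review sync_schema_1.sql carefully, then:
-- @sync_schema_1.sql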
I started writing an answer for this but my list of caveats became longer than the answer so I decided to scrap it.
You should go for a schema comparison tool.
There are free versions available - take a look at this question on Server Fault:
https://serverfault.com/questions/26360/how-can-i-diff-two-oracle-10g-schemas
My suggestion would be to download Oracle's SQL Developer and use the built-in schema diff tool (although this requires that you have the Change Management Pack license).