How to call oracle merge into from shell script - oracle

I have a MERGE INTO statement that I'd like to execute from a shell script. Below is the statement.
MERGE INTO TESTA t USING TESTA_TEMP s ON
(
t.WAFER_ID = s.WAFER_ID
)
WHEN MATCHED THEN
UPDATE
SET
t.bin1 = s.bin1
WHEN NOT MATCHED THEN
INSERT
(
id,
bin1,
wafer_id
)
VALUES
(
s.id,
s.bin1,
s.wafer_id
);
It works fine if I execute this in SQL*Plus manually. It shows the following in the console:
4 rows merged.
SQL>
However, when I move this code into the shell script below, it won't work. When I execute the shell script, nothing happens; the console just shows the shell script executing and it never ends.
#!/bin/sh
sqlplus -s user/pwd@sid <<END
SET serveroutput ON;
BEGIN
MERGE INTO TESTA t USING TESTA_TEMP s ON
(
t.WAFER_ID = s.WAFER_ID
)
WHEN MATCHED THEN
UPDATE
SET
t.bin1 = s.bin1
WHEN NOT MATCHED THEN
INSERT
(
id,
bin1,
wafer_id
)
VALUES
(
s.id,
s.bin1,
s.wafer_id
);
END;
/
END
I have also tried replacing this MERGE statement with a simple INSERT statement, and that worked. So is there any limitation on calling such a MERGE statement from a shell script? Is anything wrong with my code?

Try the following:
#!/bin/sh
sqlplus -s user/pwd@sid <<EOF
set feed on
set pages 0
MERGE INTO TESTA t USING TESTA_TEMP s ON
(
t.WAFER_ID = s.WAFER_ID
)
WHEN MATCHED THEN
UPDATE
SET
t.bin1 = s.bin1
WHEN NOT MATCHED THEN
INSERT
(
id,
bin1,
wafer_id
)
VALUES
(
s.id,
s.bin1,
s.wafer_id
);
exit
EOF
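If the script still seems to hang, it also helps to make SQL*Plus fail loudly instead of silently. A minimal sketch of the same script with explicit error handling (assuming a standard user/pwd@sid connect string and the table names from the question):

#!/bin/sh
sqlplus -s user/pwd@sid <<EOF
whenever sqlerror exit sql.sqlcode
whenever oserror exit failure
set feedback on
MERGE INTO TESTA t USING TESTA_TEMP s ON (t.WAFER_ID = s.WAFER_ID)
WHEN MATCHED THEN UPDATE SET t.bin1 = s.bin1
WHEN NOT MATCHED THEN INSERT (id, bin1, wafer_id) VALUES (s.id, s.bin1, s.wafer_id);
exit
EOF
echo "sqlplus exit status: $?"

The WHENEVER directives make sqlplus exit with a non-zero status on the first error, so the shell can detect a failure instead of waiting forever.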


Oracle equivalent query for this postgress query - CONFLICT [duplicate]

The UPSERT operation either updates or inserts a row in a table, depending on whether the table already has a row that matches the data:
if table t has a row exists that has key X:
update t set mystuff... where mykey=X
else
insert into t mystuff...
Since Oracle doesn't have a specific UPSERT statement, what's the best way to do this?
The MERGE statement merges data between two tables. Using DUAL as the source
allows us to use this command for a single row. Note that this is not protected against concurrent access.
create or replace
procedure ups(xa number)
as
begin
merge into mergetest m using dual on (a = xa)
when not matched then insert (a,b) values (xa,1)
when matched then update set b = b+1;
end ups;
/
drop table mergetest;
create table mergetest(a number, b number);
call ups(10);
call ups(10);
call ups(20);
select * from mergetest;
                     A                      B
---------------------- ----------------------
                    10                      2
                    20                      1
The dual example above, which is in PL/SQL, was great because I wanted to do something similar, but I wanted it client side... so here is the SQL I used to send a similar statement directly from some C#:
MERGE INTO Employee USING dual ON ( "id"=2097153 )
WHEN MATCHED THEN UPDATE SET "last"="smith" , "name"="john"
WHEN NOT MATCHED THEN INSERT ("id","last","name")
VALUES ( 2097153,"smith", "john" )
However, from a C# perspective this proved to be slower than doing the update, checking if the rows affected was 0, and doing the insert if it was.
An alternative to MERGE (the "old fashioned way"), which relies on a unique index on mykey to raise DUP_VAL_ON_INDEX:
begin
insert into t (mykey, mystuff)
values ('X', 123);
exception
when dup_val_on_index then
update t
set mystuff = 123
where mykey = 'X';
end;
Another alternative without the exception check:
UPDATE tablename
SET val1 = in_val1,
val2 = in_val2
WHERE val3 = in_val3;
IF ( sql%rowcount = 0 )
THEN
INSERT INTO tablename
VALUES (in_val1, in_val2, in_val3);
END IF;
Yet another alternative: insert if not exists, then update:
INSERT INTO mytable (id1, t1)
SELECT 11, 'x1' FROM DUAL
WHERE NOT EXISTS (SELECT id1 FROM mytable WHERE id1 = 11);
UPDATE mytable SET t1 = 'x1' WHERE id1 = 11;
None of the answers given so far is safe in the face of concurrent accesses, as pointed out in Tim Sylvester's comment, and will raise exceptions in case of races. To fix that, the insert/update combo must be wrapped in some kind of loop statement, so that in case of an exception the whole thing is retried.
As an example, here's how Grommit's code can be wrapped in a loop to make it safe when run concurrently:
PROCEDURE MyProc (
...
) IS
BEGIN
LOOP
BEGIN
MERGE INTO Employee USING dual ON ( "id"=2097153 )
WHEN MATCHED THEN UPDATE SET "last"="smith" , "name"="john"
WHEN NOT MATCHED THEN INSERT ("id","last","name")
VALUES ( 2097153,"smith", "john" );
EXIT; -- success? -> exit loop
EXCEPTION
WHEN NO_DATA_FOUND THEN -- the entry was concurrently deleted
NULL; -- exception? -> no op, i.e. continue looping
WHEN DUP_VAL_ON_INDEX THEN -- an entry was concurrently inserted
NULL; -- exception? -> no op, i.e. continue looping
END;
END LOOP;
END;
N.B. In transaction isolation level SERIALIZABLE, which I don't recommend by the way, you might run into
ORA-08177 ("can't serialize access for this transaction") exceptions instead.
I like Grommit's answer, except it requires duplicated values. I found a solution where each value may appear once: http://forums.devshed.com/showpost.php?p=1182653&postcount=2
MERGE INTO KBS.NUFUS_MUHTARLIK B
USING (
SELECT '028-01' CILT, '25' SAYFA, '6' KUTUK, '46603404838' MERNIS_NO
FROM DUAL
) E
ON (B.MERNIS_NO = E.MERNIS_NO)
WHEN MATCHED THEN
UPDATE SET B.CILT = E.CILT, B.SAYFA = E.SAYFA, B.KUTUK = E.KUTUK
WHEN NOT MATCHED THEN
INSERT ( CILT, SAYFA, KUTUK, MERNIS_NO)
VALUES (E.CILT, E.SAYFA, E.KUTUK, E.MERNIS_NO);
I've been using the first code sample for years. Notice sql%notfound rather than sql%rowcount.
UPDATE tablename SET val1 = in_val1, val2 = in_val2
WHERE val3 = in_val3;
IF ( sql%notfound ) THEN
INSERT INTO tablename
VALUES (in_val1, in_val2, in_val3);
END IF;
The code below is possibly the new and improved version:
MERGE INTO tablename USING dual ON ( val3 = in_val3 )
WHEN MATCHED THEN UPDATE SET val1 = in_val1, val2 = in_val2
WHEN NOT MATCHED THEN INSERT
VALUES (in_val1, in_val2, in_val3)
In the first example the update does an index lookup. It has to, in order to update the right row. Oracle opens an implicit cursor, and we use it to wrap a corresponding insert so we know that the insert will only happen when the key does not exist. But the insert is an independent command and it has to do a second lookup. I don't know the inner workings of the merge command but since the command is a single unit, Oracle could execute the correct insert or update with a single index lookup.
I think MERGE is better when you do have some processing to be done, meaning taking data from some tables and updating a table, possibly inserting or deleting rows. But for the single-row case, you may consider the first approach, since the syntax is more common.
A note regarding the two solutions that suggest:
1) Insert, if exception then update,
or
2) Update, if sql%rowcount = 0 then insert
The question of whether to insert or update first is also application dependent. Are you expecting more inserts or more updates? The one that is most likely to succeed should go first.
If you pick the wrong one you will get a bunch of unnecessary index reads. Not a huge deal but still something to consider.
Try this,
insert into b_building_property (
select
'AREA_IN_COMMON_USE_DOUBLE','Area in Common Use','DOUBLE', null, 9000, 9
from dual
)
minus
(
select * from b_building_property where id = 9
)
;
From http://www.praetoriate.com/oracle_tips_upserts.htm:
"In Oracle9i, an UPSERT can accomplish this task in a single statement:"
INSERT FIRST
  WHEN credit_limit >= 100000 THEN
    INTO rich_customers VALUES (cust_id, cust_credit_limit)
    INTO customers
  ELSE
    INTO customers
SELECT * FROM new_customers;

How to swap values between before and after `=` using Oracle?

I have declared a value in parameter @Data as ACCOUNT_NO|none|M=ACCOUNT_NO,ADD1|none|M=ADD1
I need to get the result as ACCOUNT_NO=ACCOUNT_NO|none|M,ADD1=ADD1|none|M.
That means I need to swap the values before and after =.
I have the SQL Server query for achieving this, but I need an Oracle query.
Declare @Data varchar(100)='ACCOUNT_NO|none|M=ACCOUNT_NO,ADD1|none|M=ADD1';
WITH
myCTE1 AS
(
SELECT CAST('<root><r>' + REPLACE(@Data,',','</r><r>') + '</r></root>' AS XML) AS parts1
)
,myCTE2 AS
(
SELECT CAST('<root><r>' + REPLACE(p1.x.value('.','varchar(max)'),'=','</r><r>') + '</r></root>' AS XML) as parts2
FROM myCTE1
CROSS APPLY parts1.nodes('/root/r') AS p1(x)
)
SELECT STUFF
(
(
SELECT ',' + parts2.value('/root[1]/r[2]','varchar(max)') + '=' + parts2.value('/root[1]/r[1]','varchar(max)')
FROM myCTE2
FOR XML PATH(''),TYPE
).value('.','varchar(max)'),1,1,'');
Expected output if I execute the query: ACCOUNT_NO=ACCOUNT_NO|none|M,ADD1=ADD1|none|M. Can anyone give me an idea how to do this?
Sounds like a job for REGEXP_REPLACE:
WITH datatab as (select 'ACCOUNT_NO|none|M=ACCOUNT_NO,ADD1|none|M=ADD1' info from dual)
select info,
regexp_replace(info, '([^=]+)=([^=,]+),([^=]+)=([^=,]+)', '\2=\1,\4=\3') new_info
from datatab;
INFO NEW_INFO
--------------------------------------------- ---------------------------------------------
ACCOUNT_NO|none|M=ACCOUNT_NO,ADD1|none|M=ADD1 ACCOUNT_NO=ACCOUNT_NO|none|M,ADD1=ADD1|none|M
(as a complete aside, that's the first time I've ever written a regular expression and had it work first time. Apparently, I have gone over to the dark side... *{;-) )
ETA: If you need this in a procedure/function, you don't need to bother selecting the regular expression, you can do it in PL/SQL directly.
Here's an example of a function that returns the swapped over result:
create or replace function swap_places (p_data in varchar2)
return varchar2
is
begin
return regexp_replace(p_data, '([^=]+)=([^=,]+),([^=]+)=([^=,]+)', '\2=\1,\4=\3');
end swap_places;
/
-- example of calling the function to check the result
select swap_places('ACCOUNT_NO|none|M=ACCOUNT_NO,ADD1|none|M=ADD1') col1 from dual;
COL1
-------------------------------------------------
ACCOUNT_NO=ACCOUNT_NO|none|M,ADD1=ADD1|none|M
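Note that the pattern above hardcodes exactly two comma-separated pairs. If the number of pairs can vary, a per-pair variant should work, assuming the values themselves never contain = or , (a sketch, not a fully tested general solution):

select regexp_replace(
         'ACCOUNT_NO|none|M=ACCOUNT_NO,ADD1|none|M=ADD1',
         '([^=,]+)=([^=,]+)',
         '\2=\1') new_info
from dual;

Since regexp_replace replaces every match by default, each name=value pair is swapped independently.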

SP2-0042: unknown command "ENDD" - rest of line ignored

I have a script which calls sqlplus to perform an operation, but sqlplus always returns this error when the script is executed:
SP2-0042: unknown command "ENDD" - rest of line ignored.
Here is the SQL written inside the script:
map=sqlplus -s <<ENDD
$db_connection_string
SET BLANKLINES ON
SET VERIFY OFF HEADING OFF ECHO OFF FEEDBACK OFF
ALTER SESSION SET CURRENT_SCHEMA=$MY_DB;
select
TRIM(( select Col1 from tableA where tableB.col2=col2 and col3 = 100)) ||'#' ||
TRIM(tableB.col3 )||'#'||
tableB.col4 ||'#'||
tableB.col5 ||'#'||
tableB.col6 ||'#'||
TRIM(( select tableC.col1 from tableC WHERE tableC.col2=10050 AND tableC.col4 = 1 and tableC.col3 = tableB.col4)) AS ACT#TNID#INS#DC#CHNL#EXTSYSIDK
from tableB
where
tableB.col3='$evn'
and rownum <2;
ENDD
You're missing the command substitution syntax:
map=$(sqlplus -s <<ENDD
...
ENDD
)
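Put together, the corrected script would look something like this (the SQL body is abbreviated; the variables come from the surrounding script, as in the question):

map=$(sqlplus -s <<ENDD
$db_connection_string
SET BLANKLINES ON
SET VERIFY OFF HEADING OFF ECHO OFF FEEDBACK OFF
ALTER SESSION SET CURRENT_SCHEMA=$MY_DB;
select ...
from tableB
where tableB.col3='$evn'
and rownum <2;
ENDD
)
echo "$map"

Without the $( ) wrapper, the shell never runs sqlplus at all; map=sqlplus is parsed as a plain variable assignment rather than as a command substitution capturing the sqlplus output.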

Strange Oracle XMLType.getClobVal() result

I use Oracle 11g (on Red Hat). I have a simple regular table with an XMLType column:
CREATE TABLE PROJECTS
(
PROJECT_ID NUMBER(*, 0) NOT NULL,
PROJECT SYS.XMLTYPE
);
Using Oracle SQL Developer (on Windows) I do:
select T1.PROJECT P1 from PROJECTS T1 where PROJECT_ID = '161';
It works. I get one cell. I can double click and download the whole XML file.
Then I tried to get the result as a CLOB:
select T1.PROJECT.getClobVal() P1 from PROJECTS T1 where PROJECT_ID = '161';
It works. I get one cell. I can double click and see the whole text and copy it. BUT there is a problem: when I copy it to the clipboard I get only the first 4000 characters. It seems that there is a 0x00 character at position 4000 and the rest of the CLOB is not copied.
To confirm this, I wrote check in java:
// ... create projectsStatement
Reader reader = projectsStatement.getResultSet().getCharacterStream( "P1" );
BufferedReader bf = new BufferedReader( reader );
char buffer[] = new char[ 1024 ];
int count = 0;
int globalPos = 0;
while ( ( count = bf.read( buffer, 0, buffer.length ) ) > 0 )
for ( int i = 0; i < count; i++, globalPos++ )
if ( buffer[ i ] == 0 )
throw new Exception( "ZERO at " + Integer.toString(globalPos) );
The Reader returns the full XML, but my exception is thrown because there is a null character at position 4000. I could remove this single byte, but that would be a rather strange workaround.
I don't use VARCHAR2 there, but maybe this problem is somehow related to the VARCHAR2 limitation (4000 bytes)? Any other ideas? Is this an Oracle bug or am I missing something?
-------------------- Edit --------------------
The value was inserted using the following stored procedure:
create or replace
procedure addProject( projectId number, projectXml clob ) is
sqlstr varchar2(2000);
begin
sqlstr := 'insert into projects ( PROJECT_ID, PROJECT ) VALUES ( :projectId, :projectData )';
execute immediate sqlstr using projectId, XMLTYPE(projectXml);
end;
Java code used to call it:
try ( CallableStatement cs = connection.prepareCall("{call addProject(?,?)}") )
{
cs.setInt( "projectId", projectId );
cs.setCharacterStream( "projectXml", new StringReader(xmlStr) , xmlStr.length() );
cs.execute();
}
-------------------- Edit. SIMPLE TEST --------------------
I will use all I learned from your answers. Create the simplest table:
create table T1 ( P XMLTYPE );
Prepare two CLOBs with XML: the first with a null character, the second without.
declare
P1 clob;
P2 clob;
P3 clob;
begin
P1 := '<a>';
P2 := '<a>';
FOR i IN 1..1000 LOOP
P1 := P1 || '0123456789' || chr(0);
P2 := P2 || '0123456789';
END LOOP;
P1 := P1 || '</a>';
P2 := P2 || '</a>';
Check if null is in the first CLOB and not in the second one:
DBMS_OUTPUT.put_line( DBMS_LOB.INSTR( P1, chr(0) ) );
DBMS_OUTPUT.put_line( DBMS_LOB.INSTR( P2, chr(0) ) );
We will get as expected:
14
0
Try to insert first CLOB into XMLTYPE. It will not work. It is not possible to insert such value:
insert into T1 ( P ) values ( XMLTYPE( P1 ) );
Try to insert second CLOB into XMLTYPE. It will work:
insert into T1 ( P ) values ( XMLTYPE( P2 ) );
Try to read inserted XML into third CLOB. It will work:
select T.P.getClobVal() into P3 from T1 T where rownum = 1;
Check if there is null. There is NO null:
DBMS_OUTPUT.put_line( DBMS_LOB.INSTR( P3, chr(0) ) );
It seems that there is no null inside the database, and as long as we stay in the PL/SQL context, there is no null. But when I try to use the following SQL in SQL Developer (on Windows) or in Java (on Red Hat, EE and Tomcat 7), I get a null character at position 4000 in all returned CLOBs:
select T.P.getClobVal() from T1 T;
It's not an Oracle bug (it stores and retrieves the \0 just fine); it's a client/Windows issue (different clients behave differently with regard to NUL, as does Windows).
chr(0) is not really a valid character in non-BLOBs (I'm curious how you ever got the XMLType to accept it in the first place, as usually it wouldn't parse).
\0 is used in C to denote the end of a string (NUL terminator) and some GUIs will stop processing the string at that point. For example:
SQL> select 'IM VISIBLE'||chr(0)||'BUT IM INVISIBLE'
2 from dual
3 /
'IMVISIBLE'||CHR(0)||'BUTIM
---------------------------
IM VISIBLE BUT IM INVISIBLE
SQL>
Yet Toad fails miserably on this, while SQL Developer fares better and shows the whole value. But if you copy it, the clipboard will only take it up to the NUL character. This copy/paste error isn't SQL Developer's fault though; it's a problem with the Windows clipboard not allowing NUL to paste properly.
You can just use replace(T1.PROJECT.getClobVal(), chr(0), null) to get around this when using SQL Developer and the Windows clipboard.
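Applied to the table from the question, that would look something like this (a sketch for display purposes; it simply strips the NUL from the returned CLOB):

select replace(T1.PROJECT.getClobVal(), chr(0), null) P1
from PROJECTS T1
where PROJECT_ID = '161';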
I also was experiencing this same issue exactly as described by Mikosz (seeing an extra 'NUL' character around the 4000th character when outputting my XMLType value as a Clob). While playing around in SQLDeveloper I noticed an interesting workaround. I was trying to see the output of my XMLType, but was tired of scrolling to the 4000th character, so I started wrapping the Clob output in a substr(...). Much to my surprise, the issue actually disappeared. I incorporated this into my Java app and confirmed that the issue was no longer present and my Clob could be retrieved without the extra character. I know that this isn't an ideal workaround, and I'm still not sure why it works (would love if someone could explain it to me), but here's an abbreviated example of what I've currently got working:
// Gets the xml contents
String sql = "select substr(x.xml_content.getClobVal(), 0) as xml_content from my_table x";
PreparedStatement ps = con.prepareStatement(sql);
ResultSet rs = ps.executeQuery(); // the original snippet omitted this step
if (rs.next()) {
    Reader reader = new BufferedReader(rs.getCharacterStream("xml_content"));
    ...
}
Bug 14781609: XDB: XMLType.getClobVal() returns a temporary LOB when XML is stored in a CLOB.
Fixed in patch set 11.2.0.4.
Another solution: if you read it as a BLOB instead, there is no such error:
T1.PROJECT.getBlobVal(nls_charset_id('UTF8'))
Easy enough to verify whether it's the .getClobVal() call or not: perform an INSTR test in PL/SQL (not Java) on the resultant CLOB to see if the CHR(0) exists or not.
If it does not, then I would point the finger at your Oracle client install.
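A sketch of that check using the poster's table (run with serveroutput on; an output of 0 would mean no CHR(0) is actually stored in the database):

set serveroutput on
declare
  c clob;
begin
  select t.project.getClobVal() into c
  from projects t
  where project_id = 161;
  dbms_output.put_line(dbms_lob.instr(c, chr(0)));
end;
/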

SQLPlus - spooling to multiple files from PL/SQL blocks

I have a query that returns a lot of data into a CSV file. So much, in fact, that Excel can't open it - there are too many rows. Is there a way to control spool to spool to a new file every time 65000 rows have been processed? Ideally, I'd like to have my output in files named in sequence, such as large_data_1.csv, large_data_2.csv, large_data_3.csv, etc...
I could use dbms_output in a PL/SQL block to control how many rows are output, but then how would I switch files, as spool does not seem to be accessible from PL/SQL blocks?
(Oracle 10g)
UPDATE:
I don't have access to the server, so writing files to the server would probably not work.
UPDATE 2:
Some of the fields contain free-form text, including linebreaks, so counting line breaks AFTER the file is written is not as easy as counting records WHILE the data is being returned...
Got a solution, don't know why I didn't think of this sooner...
The basic idea is that the master SQL*Plus script generates an intermediate script that will split the output to multiple files. Executing the intermediate script executes multiple queries with different ranges imposed on rownum, and spools to a different file for each query.
set termout off
set serveroutput on
set echo off
set feedback off
variable v_err_count number;
spool intermediate_file.sql
declare
i number := 0;
v_fileNum number := 1;
v_range_start number := 1;
v_range_end number := 1;
k_max_rows constant number := 65536;
begin
dbms_output.enable(10000);
select count(*)
into :v_err_count
from ...
/* You don't need to see the details of the query... */
while i <= :v_err_count loop
v_range_start := i+1;
if v_range_start <= :v_err_count then
i := i+k_max_rows;
v_range_end := i;
dbms_output.put_line('set colsep ,
set pagesize 0
set trimspool on
set headsep off
set feedback off
set echo off
set termout off
set linesize 4000
spool large_data_file_'||v_fileNum||'.csv
select data_object
from (select rownum rn, data_object
from
/* Details of query omitted */
)
where rn >= '||v_range_start||' and rn <= '||v_range_end||';
spool off');
v_fileNum := v_fileNum +1;
end if;
end loop;
end;
/
spool off
prompt executing intermediate file
@intermediate_file.sql
set serveroutput off
Try this for a pure SQL*Plus solution...
set pagesize 0
set trimspool on
set headsep off
set feedback off
set echo off
set verify off
set timing off
set linesize 4000
DEFINE rows_per_file = 50
-- Create an sql file that will create the individual result files
SET DEFINE OFF
SPOOL c:\temp\generate_one.sql
PROMPT COLUMN which_dynamic NEW_VALUE dynamic_filename
PROMPT
PROMPT SELECT 'c:\temp\run_#'||TO_CHAR( &1, 'fm000' )||'_result.txt' which_dynamic FROM dual
PROMPT /
PROMPT SPOOL &dynamic_filename
PROMPT SELECT *
PROMPT FROM ( SELECT a.*, rownum rnum
PROMPT FROM ( SELECT object_id FROM all_objects ORDER BY object_id ) a
PROMPT WHERE rownum <= ( &2 * 50 ) )
PROMPT WHERE rnum >= ( ( &3 - 1 ) * 50 ) + 1
PROMPT /
PROMPT SPOOL OFF
SPOOL OFF
SET DEFINE &
-- Define variable to hold number of rows
-- returned by the query
COLUMN num_rows NEW_VALUE v_num_rows
-- Find out how many rows there are to be
SELECT COUNT(*) num_rows
FROM ( SELECT LEVEL num_files FROM dual CONNECT BY LEVEL <= 120 );
-- Create a master file with the correct number of sql files
SPOOL c:\temp\run_all.sql
SELECT '@c:\temp\generate_one.sql '||TO_CHAR( num_files )
||' '||TO_CHAR( num_files )
||' '||TO_CHAR( num_files ) file_name
FROM ( SELECT LEVEL num_files
FROM dual
CONNECT BY LEVEL <= CEIL( &v_num_rows / &rows_per_file ) )
/
SPOOL OFF
-- Now run them all
@c:\temp\run_all.sql
Use split on the resulting file.
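For example (hypothetical file names, and note the caveat from the question: if fields contain embedded line breaks, a purely line-based split can cut a record in half):

split -l 65000 large_data.csv large_data_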
utl_file is the package you are looking for. You can write a cursor, loop over the rows (writing them out), and when mod(num_rows_written, num_per_file) = 0 it's time to start a new file. It works fine within PL/SQL blocks.
Here's the reference for utl_file:
http://www.adp-gmbh.ch/ora/plsql/utl_file.html
NOTE:
I'm assuming here, that it's ok to write the files out to the server.
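A minimal sketch of that approach (DATA_DIR, big_table and the column list are hypothetical; it assumes a directory object the database user may write to):

declare
  f               utl_file.file_type;
  n               pls_integer := 0;
  filenum         pls_integer := 1;
  k_rows_per_file constant pls_integer := 65000;
begin
  -- open the first output file
  f := utl_file.fopen('DATA_DIR', 'large_data_' || filenum || '.csv', 'w', 32767);
  for r in (select col1, col2 from big_table) loop
    -- close the current file and start a new one every k_rows_per_file rows
    if n > 0 and mod(n, k_rows_per_file) = 0 then
      utl_file.fclose(f);
      filenum := filenum + 1;
      f := utl_file.fopen('DATA_DIR', 'large_data_' || filenum || '.csv', 'w', 32767);
    end if;
    utl_file.put_line(f, r.col1 || ',' || r.col2);
    n := n + 1;
  end loop;
  utl_file.fclose(f);
end;
/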
Have you looked at setting up an external data connection in Excel (assuming that the CSV files are only being produced for use in Excel)? You could define an Oracle view that limits the rows returned and also add some parameters in the query to allow the user to further limit the result set. (I've never understood what someone does with 64K rows in Excel anyway).
I feel that this is somewhat of a hack, but you could also use UTL_MAIL and generate attachments to email to your user(s). There's a 32K size limit to the attachments, so you'd have to keep track of the size in the cursor loop and start a new attachment on this basis.
While your question asks how to break the great volume of data into chunks Excel can handle, I would ask if there is any part of the Excel operation that can be moved into SQL (PL/SQL?) to reduce the volume of data. Ultimately it has to be reduced to be made meaningful to anyone. The database is a great engine to do that work on.
When you have reduced the data to more presentable volumes or even final results, dump it for Excel to make the final presentation.
This is not the answer you were looking for but I think it is always good to ask if you are using the right tool when it is getting difficult to get the job done.
