PowerBuilder: insert a record into a SQL table if it does not exist, show a messagebox if it does - validation

I am fairly new to PowerBuilder Classic 12.
I need sample code that checks whether a record exists and, if not, inserts the value from a textbox.
I would probably need a DataStore, since someone suggested using one in preference to SQL statements.
Thanks.

I'm not sure if this is answering your Q, but here you go:
string ls_CheckRecord
string ls_InsertRecord

// Look for an existing record (replace record2Check / myTable / conditionsMet with your own)
SELECT record2Check
  INTO :ls_CheckRecord
  FROM myTable
 WHERE conditionsMet;

// SQLCode -1 = database error; 100 = no row found; 0 = a row was found
If SQLCA.SQLCode = -1 Then
    MessageBox("SQL Error", SQLCA.SQLErrText)
    Return
End If

If SQLCA.SQLCode = 100 Then
    // No matching record, so insert the value from the single-line edit
    ls_InsertRecord = sle_ToInsert.Text

    INSERT INTO myTable
        (myColumn)
    VALUES
        (:ls_InsertRecord);

    If SQLCA.SQLCode <> 0 Then
        MessageBox("SQL Error", SQLCA.SQLErrText)
        Return
    End If

    COMMIT;   // assuming SQLCA.AutoCommit is FALSE
Else
    MessageBox("Validation", "The record already exists.")
End If
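If the back end allows it, the check and the insert can also be collapsed into a single embedded SQL statement. A minimal sketch, keeping the placeholder names from above and assuming an Oracle back end (DUAL is Oracle-specific; other databases have their own idiom):
ls_InsertRecord = sle_ToInsert.Text

INSERT INTO myTable (myColumn)
SELECT :ls_InsertRecord FROM dual
WHERE NOT EXISTS (SELECT 1 FROM myTable WHERE conditionsMet);

If SQLCA.SQLCode = -1 Then
    MessageBox("SQL Error", SQLCA.SQLErrText)
End If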

Related

Oracle equivalent query for this Postgres query - ON CONFLICT [duplicate]

The UPSERT operation either updates or inserts a row in a table, depending on whether the table already has a row that matches the data:
if table t has a row exists that has key X:
update t set mystuff... where mykey=X
else
insert into t mystuff...
Since Oracle doesn't have a specific UPSERT statement, what's the best way to do this?
The MERGE statement merges data between two tables. Using DUAL as the source
allows us to use this command for a single row. Note that this is not protected against concurrent access.
create or replace
procedure ups(xa number)
as
begin
merge into mergetest m using dual on (a = xa)
when not matched then insert (a,b) values (xa,1)
when matched then update set b = b+1;
end ups;
/
drop table mergetest;
create table mergetest(a number, b number);
call ups(10);
call ups(10);
call ups(20);
select * from mergetest;
A B
---------------------- ----------------------
10 2
20 1
The DUAL example above, which is in PL/SQL, was great because I wanted to do something similar, but I wanted it client side... so here is the SQL I used to send a similar statement directly from some C#:
MERGE INTO Employee USING dual ON ( "id" = 2097153 )
WHEN MATCHED THEN UPDATE SET "last" = 'smith', "name" = 'john'
WHEN NOT MATCHED THEN INSERT ("id", "last", "name")
VALUES ( 2097153, 'smith', 'john' )
However, from a C# perspective this proved to be slower than doing the update, checking whether the rows affected was 0, and doing the insert if it was.
An alternative to MERGE (the "old fashioned way"):
begin
insert into t (mykey, mystuff)
values ('X', 123);
exception
when dup_val_on_index then
update t
set mystuff = 123
where mykey = 'X';
end;
Another alternative without the exception check:
UPDATE tablename
SET val1 = in_val1,
val2 = in_val2
WHERE val3 = in_val3;
IF ( sql%rowcount = 0 )
THEN
INSERT INTO tablename
VALUES (in_val1, in_val2, in_val3);
END IF;
Insert if not exists, then update:
INSERT INTO mytable (id1, t1)
SELECT 11, 'x1' FROM DUAL
WHERE NOT EXISTS (SELECT id1 FROM mytable WHERE id1 = 11);
UPDATE mytable SET t1 = 'x1' WHERE id1 = 11;
None of the answers given so far is safe in the face of concurrent access; as pointed out in Tim Sylvester's comment, they will raise exceptions in the case of races. To fix that, the insert/update combo must be wrapped in some kind of loop statement, so that in case of an exception the whole thing is retried.
As an example, here's how Grommit's code can be wrapped in a loop to make it safe when run concurrently:
PROCEDURE MyProc (
...
) IS
BEGIN
LOOP
BEGIN
MERGE INTO Employee USING dual ON ( "id" = 2097153 )
WHEN MATCHED THEN UPDATE SET "last" = 'smith', "name" = 'john'
WHEN NOT MATCHED THEN INSERT ("id", "last", "name")
VALUES ( 2097153, 'smith', 'john' );
EXIT; -- success? -> exit loop
EXCEPTION
WHEN NO_DATA_FOUND THEN -- the entry was concurrently deleted
NULL; -- exception? -> no op, i.e. continue looping
WHEN DUP_VAL_ON_INDEX THEN -- an entry was concurrently inserted
NULL; -- exception? -> no op, i.e. continue looping
END;
END LOOP;
END;
N.B. In transaction mode SERIALIZABLE, which I don't recommend btw, you might run into
ORA-08177: can't serialize access for this transaction exceptions instead.
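If you do run under SERIALIZABLE, one hedged option (my own sketch, not part of the answer above) is to map ORA-08177 to a named exception via PRAGMA EXCEPTION_INIT and let the same retry loop absorb it:
DECLARE
  serialization_failure EXCEPTION;
  PRAGMA EXCEPTION_INIT(serialization_failure, -8177);
BEGIN
  LOOP
    BEGIN
      MERGE INTO Employee USING dual ON ( "id" = 2097153 )
      WHEN MATCHED THEN UPDATE SET "last" = 'smith', "name" = 'john'
      WHEN NOT MATCHED THEN INSERT ("id", "last", "name")
      VALUES ( 2097153, 'smith', 'john' );
      EXIT;                       -- success -> leave the loop
    EXCEPTION
      WHEN serialization_failure THEN
        ROLLBACK;                 -- discard the failed attempt and retry
    END;
  END LOOP;
END;
/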
I like Grommit's answer, except that it requires duplicated values. I found a solution where the values only need to appear once: http://forums.devshed.com/showpost.php?p=1182653&postcount=2
MERGE INTO KBS.NUFUS_MUHTARLIK B
USING (
SELECT '028-01' CILT, '25' SAYFA, '6' KUTUK, '46603404838' MERNIS_NO
FROM DUAL
) E
ON (B.MERNIS_NO = E.MERNIS_NO)
WHEN MATCHED THEN
UPDATE SET B.CILT = E.CILT, B.SAYFA = E.SAYFA, B.KUTUK = E.KUTUK
WHEN NOT MATCHED THEN
INSERT ( CILT, SAYFA, KUTUK, MERNIS_NO)
VALUES (E.CILT, E.SAYFA, E.KUTUK, E.MERNIS_NO);
I've been using the first code sample for years. Notice sql%notfound rather than sql%rowcount.
UPDATE tablename SET val1 = in_val1, val2 = in_val2
WHERE val3 = in_val3;
IF ( sql%notfound ) THEN
INSERT INTO tablename
VALUES (in_val1, in_val2, in_val3);
END IF;
The code below is possibly the new and improved version:
MERGE INTO tablename USING dual ON ( val3 = in_val3 )
WHEN MATCHED THEN UPDATE SET val1 = in_val1, val2 = in_val2
WHEN NOT MATCHED THEN INSERT
VALUES (in_val1, in_val2, in_val3)
In the first example the update does an index lookup. It has to, in order to update the right row. Oracle opens an implicit cursor, and we use it to wrap a corresponding insert so we know that the insert will only happen when the key does not exist. But the insert is an independent command and it has to do a second lookup. I don't know the inner workings of the merge command but since the command is a single unit, Oracle could execute the correct insert or update with a single index lookup.
I think MERGE is better when you have some processing to do that involves taking data from some tables and updating a table, possibly inserting or deleting rows. But for the single-row case, you may consider the first approach, since the syntax is more common.
A note regarding the two solutions that suggest:
1) Insert, if exception then update,
or
2) Update, if sql%rowcount = 0 then insert
The question of whether to insert or update first is also application dependent. Are you expecting more inserts or more updates? The one that is most likely to succeed should go first.
If you pick the wrong one you will get a bunch of unnecessary index reads. Not a huge deal but still something to consider.
Try this,
insert into b_building_property (
  select
    'AREA_IN_COMMON_USE_DOUBLE','Area in Common Use','DOUBLE', null, 9000, 9
  from dual
  minus
  select * from b_building_property where id = 9
)
From http://www.praetoriate.com/oracle_tips_upserts.htm:
"In Oracle9i, an UPSERT can accomplish this task in a single statement:"
INSERT FIRST
  WHEN credit_limit >= 100000 THEN
    INTO rich_customers VALUES (cust_id, cust_credit_limit)
    INTO customers
  ELSE
    INTO customers
SELECT * FROM new_customers;

Stored procedure is taking too much time to update the table columns

I have created a stored procedure which is taking too much time to update the columns of the table - say 3 hrs to update 2.5k records out of 43k records.
How can I reduce the time it takes to update the records? Below is my logic for the same.
procedure UPDATE_MST_INFO_BKC
(
P_SAPID IN NVARCHAR2
)
as
v_cityname varchar2(500):='';
v_neid varchar2(500):='';
v_latitude varchar2(500):='';
v_longitude varchar2(500):='';
v_structuretype varchar2(500):='';
v_jc_name varchar2(500):='';
v_jc_code varchar2(500):='';
v_company_code varchar2(500):='';
v_cnt number :=0;
begin
select count(*) into v_cnt from structure_enodeb_mapping where RJ_SAPID=P_SAPID and rownum=1;
if v_cnt > 0 then
begin
select RJ_CITY_NAME, RJ_NETWORK_ENTITY_ID,LATITUDE,LONGITUDE,RJ_STRUCTURE_TYPE,RJ_JC_NAME,RJ_JC_CODE,'6000'
into v_cityname,v_neid,v_latitude, v_longitude, v_structuretype,v_jc_name,v_jc_code,v_company_code from structure_enodeb_mapping where RJ_SAPID=P_SAPID and rownum=1;
update tbl_ipcolo_mast_info set
CITY_NAME = v_cityname,
NEID = v_neid,
FACILITY_LATITUDE = v_latitude,
FACILITY_LONGITUDE = v_longitude,
RJ_STRUCTURE_TYPE = v_structuretype,
RJ_JC_NAME = v_jc_name,
RJ_JC_CODE = v_jc_code,
COMPANY_CODE = v_company_code
where SAP_ID=P_SAPID;
end;
end if;
end UPDATE_MST_INFO_BKC;
What adjustments can I make to this?
As far as I understand your code, it updates TBL_IPCOLO_MAST_INFO rows having SAP_ID = P_SAPID. That means it updates one record, and you must be calling the procedure for each record.
It is better practice to call the procedure once and update all the records in one go (in your case, the 2.5k records should be updated in a single call of this procedure).
For your requirement, I have updated the procedure code to execute only a MERGE statement, which is equivalent to the multiple SQL statements in your question for a single P_SAPID.
PROCEDURE UPDATE_MST_INFO_BKC (
P_SAPID IN NVARCHAR2
) AS
BEGIN
MERGE INTO TBL_IPCOLO_MAST_INFO I
USING (
SELECT
RJ_CITY_NAME,
RJ_NETWORK_ENTITY_ID,
LATITUDE,
LONGITUDE,
RJ_STRUCTURE_TYPE,
RJ_JC_NAME,
RJ_JC_CODE,
'6000' AS COMPANY_CODE,
RJ_SAPID
FROM
STRUCTURE_ENODEB_MAPPING
WHERE
RJ_SAPID = P_SAPID
AND ROWNUM = 1
)
O ON ( I.SAP_ID = O.RJ_SAPID )
WHEN MATCHED THEN
UPDATE SET I.CITY_NAME = O.RJ_CITY_NAME,
I.NEID = O.RJ_NETWORK_ENTITY_ID,
I.FACILITY_LATITUDE = O.LATITUDE,
I.FACILITY_LONGITUDE = O.LONGITUDE,
I.RJ_STRUCTURE_TYPE = O.RJ_STRUCTURE_TYPE,
I.RJ_JC_NAME = O.RJ_JC_NAME,
I.RJ_JC_CODE = O.RJ_JC_CODE,
I.COMPANY_CODE = O.COMPANY_CODE;
END UPDATE_MST_INFO_BKC;
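If the row-by-row calls can be dropped entirely, a bulk variant along the following lines would touch every mapped SAP id in one pass. This is my own sketch, not part of the procedure above; the GROUP BY stands in for the question's ROWNUM = 1 "pick one mapping row per SAP id":
MERGE INTO TBL_IPCOLO_MAST_INFO I
USING (
    SELECT RJ_SAPID,
           MAX(RJ_CITY_NAME)         AS RJ_CITY_NAME,
           MAX(RJ_NETWORK_ENTITY_ID) AS RJ_NETWORK_ENTITY_ID,
           MAX(LATITUDE)             AS LATITUDE,
           MAX(LONGITUDE)            AS LONGITUDE,
           MAX(RJ_STRUCTURE_TYPE)    AS RJ_STRUCTURE_TYPE,
           MAX(RJ_JC_NAME)           AS RJ_JC_NAME,
           MAX(RJ_JC_CODE)           AS RJ_JC_CODE,
           '6000'                    AS COMPANY_CODE
    FROM STRUCTURE_ENODEB_MAPPING
    GROUP BY RJ_SAPID
) O ON ( I.SAP_ID = O.RJ_SAPID )
WHEN MATCHED THEN
    UPDATE SET I.CITY_NAME = O.RJ_CITY_NAME,
               I.NEID = O.RJ_NETWORK_ENTITY_ID,
               I.FACILITY_LATITUDE = O.LATITUDE,
               I.FACILITY_LONGITUDE = O.LONGITUDE,
               I.RJ_STRUCTURE_TYPE = O.RJ_STRUCTURE_TYPE,
               I.RJ_JC_NAME = O.RJ_JC_NAME,
               I.RJ_JC_CODE = O.RJ_JC_CODE,
               I.COMPANY_CODE = O.COMPANY_CODE;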
Cheers!!
3 hours? That's way too much. Are the sap_id columns indexed? Even if they aren't, a data set of 43K rows is just too small for this to take that long.
How do you call that procedure? Is it part of other code, perhaps some unfortunate loop which does something row-by-row (which is, in turn, slow-by-slow)?
A few objections:
are all those variables' datatypes really varchar2(500)? Consider declaring them so that they take the table column's datatype, e.g. v_cityname structure_enodeb_mapping.rj_city_name%type;. Also, there's no need to explicitly say that their value is null (:= ''); that is so by default
the select statement which checks whether there's something in the table for that parameter's value should be rewritten to use EXISTS, as it should perform better than the rownum = 1 condition you used (see the short sketch after this list)
also, consider using exception handlers (no_data_found if there's no row for a certain ID; too_many_rows if there are two or more rows)
the select statement that collects data into variables has the same condition; do you really expect more than a single row for each ID (passed as a parameter)?
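A minimal sketch of the %TYPE and EXISTS points above, reusing the names from the question (the literal SAP id is just a hypothetical placeholder):
declare
  v_cityname structure_enodeb_mapping.rj_city_name%type;   -- inherits the column's datatype
  l_exists   pls_integer;
begin
  select count(*)
    into l_exists
    from dual
   where exists (select null
                   from structure_enodeb_mapping
                  where rj_sapid = 'SOME_SAP_ID');          -- hypothetical value
  if l_exists = 1 then
    dbms_output.put_line('a mapping row exists for that SAP id');
  end if;
end;
/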
Anyway, the whole procedure's code can be shortened to a single update statement:
update tbl_ipcolo_mast_info t set
(t.city_name, t.neid, ...) = (select s.rj_city_name,
s.rj_network_entity_id, ...
from structure_enodeb_mapping s
where s.rj_sapid = t.sap_id
)
where t.sap_id = p_sapid;
If there is something to be updated, it will be. If there's no matching t.sap_id, nothing will happen.
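For reference, a filled-in version of that update, using the column names from the question (the rownum = 1 mirrors the question's assumption that the mapping table may hold several rows per SAP id; treat this as a sketch, not tested code):
update tbl_ipcolo_mast_info t set
    (t.city_name, t.neid, t.facility_latitude, t.facility_longitude,
     t.rj_structure_type, t.rj_jc_name, t.rj_jc_code, t.company_code) =
    (select s.rj_city_name, s.rj_network_entity_id, s.latitude, s.longitude,
            s.rj_structure_type, s.rj_jc_name, s.rj_jc_code, '6000'
       from structure_enodeb_mapping s
      where s.rj_sapid = t.sap_id
        and rownum = 1)
where t.sap_id = p_sapid;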

Strange Oracle XMLType.getClobVal() result

I use Oracle 11g (on Red Hat). I have a simple regular table with an XMLType column:
CREATE TABLE PROJECTS
(
PROJECT_ID NUMBER(*, 0) NOT NULL,
PROJECT SYS.XMLTYPE
);
Using Oracle SQL Developer (on Windows) I do:
select T1.PROJECT P1 from PROJECTS T1 where PROJECT_ID = '161';
It works. I get one cell. I can double-click and download the whole XML file.
Then I tried to get the result as a CLOB:
select T1.PROJECT.getClobVal() P1 from PROJECTS T1 where PROJECT_ID = '161';
It works. I get one cell. I can double-click, see the whole text, and copy it. BUT there is a problem: when I copy it to the clipboard I get only the first 4000 characters. It seems that there is a 0x00 character at position 4000, and the rest of the CLOB is not copied.
To confirm this, I wrote a check in Java:
// ... create projectsStatement
Reader reader = projectsStatement.getResultSet().getCharacterStream( "P1" );
BufferedReader bf = new BufferedReader( reader );
char buffer[] = new char[ 1024 ];
int count = 0;
int globalPos = 0;
while ( ( count = bf.read( buffer, 0, buffer.length ) ) > 0 )
for ( int i = 0; i < count; i++, globalPos++ )
if ( buffer[ i ] == 0 )
throw new Exception( "ZERO at " + Integer.toString(globalPos) );
The Reader returns the full XML, but my exception is thrown because there is a null character at position 4000. I could remove this single byte, but that would be a rather strange workaround.
I don't use VARCHAR2 there, but maybe this problem is somehow related to the VARCHAR2 limitation (4000 bytes)? Any other ideas? Is this an Oracle bug, or am I missing something?
-------------------- Edit --------------------
The value was inserted using the following stored procedure:
create or replace
procedure addProject( projectId number, projectXml clob ) is
sqlstr varchar2(2000);
begin
sqlstr := 'insert into projects ( PROJECT_ID, PROJECT ) VALUES ( :projectId, :projectData )';
execute immediate sqlstr using projectId, XMLTYPE(projectXml);
end;
Java code used to call it:
try ( CallableStatement cs = connection.prepareCall("{call addProject(?,?)}") )
{
cs.setInt( "projectId", projectId );
cs.setCharacterStream( "projectXml", new StringReader(xmlStr) , xmlStr.length() );
cs.execute();
}
-------------------- Edit. SIMPLE TEST --------------------
I will use everything I learned from your answers. Create the simplest table:
create table T1 ( P XMLTYPE );
Prepare two CLOBs with XML: the first with a null character, the second without.
declare
P1 clob;
P2 clob;
P3 clob;
begin
P1 := '<a>';
P2 := '<a>';
FOR i IN 1..1000 LOOP
P1 := P1 || '0123456789' || chr(0);
P2 := P2 || '0123456789';
END LOOP;
P1 := P1 || '</a>';
P2 := P2 || '</a>';
Check if null is in the first CLOB and not in the second one:
DBMS_OUTPUT.put_line( DBMS_LOB.INSTR( P1, chr(0) ) );
DBMS_OUTPUT.put_line( DBMS_LOB.INSTR( P2, chr(0) ) );
We will get as expected:
14
0
Try to insert the first CLOB into the XMLTYPE column. It will not work; it is not possible to insert such a value:
insert into T1 ( P ) values ( XMLTYPE( P1 ) );
Try to insert the second CLOB into the XMLTYPE column. It will work:
insert into T1 ( P ) values ( XMLTYPE( P2 ) );
Try to read the inserted XML into the third CLOB. It will work:
select T.P.getClobVal() into P3 from T1 T where rownum = 1;
Check if there is null. There is NO null:
DBMS_OUTPUT.put_line( DBMS_LOB.INSTR( P3, chr(0) ) );
end;
/
It seems that there is no null inside the database, and as long as we stay in the PL/SQL context, there is no null. But when I use the following SQL in SQL Developer (on Windows) or in Java (on Red Hat EE and Tomcat 7), I get a null character at position 4000 in all returned CLOBs:
select T.P.getClobVal() from T1 T;
BR,
JM
It's not an Oracle bug (it stores and retrieves the \0 just fine); it's a client/Windows issue (different clients behave differently with regard to NUL, as does Windows).
chr(0) is not really a valid character in non-BLOBs (I'm curious how you ever got the XMLType to accept it in the first place, as usually it wouldn't parse).
\0 is used in C to denote the end of a string (the NUL terminator), and some GUIs stop processing the string at that point. For example:
SQL> select 'IM VISIBLE'||chr(0)||'BUT IM INVISIBLE'
2 from dual
3 /
'IMVISIBLE'||CHR(0)||'BUTIM
---------------------------
IM VISIBLE BUT IM INVISIBLE
SQL>
Yet Toad fails miserably on this, while SQL Developer fares better and shows the full value.
But if you copy it, the clipboard will only take it up to the NUL character. This copy/paste error isn't SQL Developer's fault, though; it's a problem with the Windows clipboard not allowing NUL to paste properly.
You should just use replace(T1.PROJECT.getClobVal(), chr(0), null) to get around this when using SQL Developer and the Windows clipboard.
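Applied to the query from the question, that workaround looks like this (same table and id as above):
select replace(T1.PROJECT.getClobVal(), chr(0), null) P1
from PROJECTS T1
where PROJECT_ID = '161';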
I also was experiencing this same issue exactly as described by Mikosz (seeing an extra 'NUL' character around the 4000th character when outputting my XMLType value as a Clob). While playing around in SQLDeveloper I noticed an interesting workaround. I was trying to see the output of my XMLType, but was tired of scrolling to the 4000th character, so I started wrapping the Clob output in a substr(...). Much to my surprise, the issue actually disappeared. I incorporated this into my Java app and confirmed that the issue was no longer present and my Clob could be retrieved without the extra character. I know that this isn't an ideal workaround, and I'm still not sure why it works (would love if someone could explain it to me), but here's an abbreviated example of what I've currently got working:
// Gets the xml contents; wrapping getClobVal() in substr() avoids the stray NUL
String sql = "select substr(x.xml_content.getClobVal(), 0) as xml_content from my_table x";
PreparedStatement ps = con.prepareStatement(sql);
ResultSet rs = ps.executeQuery();
if (rs.next()) {
    Reader reader = new BufferedReader(rs.getCharacterStream("xml_content"));
    ...
}
Bug 14781609 - XDB: XMLType.getClobVal() returns a temporary LOB when the XML is stored in a CLOB;
fixed in patchset 11.2.0.4.
Another solution: if the value is read as a BLOB, there is no such error, e.g.
T1.PROJECT.getBlobVal(nls_charset_id('UTF8'))
Easy enough to verify if it's the .getClobVal() call or not - perform an INSTR test in PL/SQL (not Java) on your resultant CLOB to see if the CHR(0) exists or not.
If it does not, then I would point the finger at your Oracle client install.
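A minimal PL/SQL sketch of that INSTR test, reusing the table and id from the question (a result of 0 means no CHR(0) is present in the CLOB):
declare
  p clob;
begin
  select T1.PROJECT.getClobVal()
    into p
    from PROJECTS T1
   where PROJECT_ID = '161';
  dbms_output.put_line( DBMS_LOB.INSTR( p, chr(0) ) );
end;
/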

Why am I getting the ORA-01003: no statement parsed error?

Why am I getting this error, and what does "no statement parsed" mean?
ORA-01003: no statement parsed
Here is the code:
PROCEDURE ORIGINAL_TABLE.UPDATE_GROUPS IS
-- cursor loaded with the swam groups
CURSOR cursor1 IS
SELECT ID, NEW_DESCRIPTION
FROM NEW_TABLE.NEW_GROUP_TABLE#DB_LINK.X;
BEGIN
FOR C1_REC IN cursor1 LOOP
UPDATE
ORIGINAL_TABLE."GROUPS"
SET
GROUP_ID = C1_REC.ID
WHERE
ORIGINAL_TABLE."GROUPS".DESCRIPTION = C1_REC.NEW_DESCRIPTION;
IF (SQL%ROWCOUNT = 0) THEN
INSERT INTO
ORIGINAL_TABLE.GROUPS("GROUP_ID", "DESCRIPTION")
VALUES (C1_REC.ID, C1_REC.NEW_DESCRIPTION);
END IF;
END LOOP;
EXCEPTION
WHEN OTHERS THEN
dbms_output.put_line(SQLERRM);
END;
What I am trying to do with the code above is update an old table with the values from a new table and, in case a group doesn't exist, insert it.
Update: Changed %ROWCOUNT > 0 to %ROWCOUNT = 0.
Use a MERGE statement; it does the update/insert work more efficiently. Also note that your PL/SQL doesn't do what it is intended to: it runs an update and, if a record is found, inserts another record. To fix that, use
IF (SQL%ROWCOUNT = 0)
I presume the cause of the issue is the . in the DB link name.
Moreover, I would suggest getting rid of the quotes around table/field names, just in case, as well as the schema name.
In other words, delete all the ORIGINAL_TABLE. prefixes.
merge into groups g
using (
SELECT ID, NEW_DESCRIPTION
FROM NEW_TABLE.NEW_GROUP_TABLE#DB_LINK.X
) nt
on (nt.NEW_DESCRIPTION = g.description )
when matched then update set g.group_id = nt.id
when not matched then insert(GROUP_ID, DESCRIPTION)
values(nt.id, nt.NEW_DESCRIPTION);

Syntax of if exists in IBM Db2

The following query drops a table if the table exists, but it doesn't seem to work for IBM Db2.
Begin atomic
if( exists(
SELECT 1 FROM SYSIBM.SYSTABLES
WHERE NAME='EMAIL' AND TYPE='T' AND creator = 'schema1'
)) then
drop table EMAIL;
end if;
End
Whereas the same IF EXISTS syntax works if I have a DML statement instead of a table drop statement. Any help on this is appreciated.
Update 1: I read that you cannot run a DDL statement within a BEGIN ATOMIC block, hence my first statement fails but the second works fine.
The way I did it is as follows:
Begin atomic
if( exists( SELECT 1
FROM SYSIBM.SYSTABLES
WHERE NAME='EMAIL' AND TYPE='T' AND creator = 'schema1'
)
)
then customStoredproc('drop table EMAIL');
end if;
End
My customStoredproc just has one statement: execute immediate #dynsql;
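For illustration, such a helper could be created along these lines in DB2 SQL PL; the procedure name and parameter come from the post above, but the body is my assumption, not the poster's code (use an alternate statement terminator, e.g. @, when creating it):
CREATE PROCEDURE customStoredproc(IN dynsql VARCHAR(1000))
LANGUAGE SQL
BEGIN
  EXECUTE IMMEDIATE dynsql;   -- runs the statement passed in, e.g. 'drop table EMAIL'
END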
You are correct that DB2 prohibits DDL within an atomic SQL block. IBM has released a free add-on procedure called db2perf_quiet_drop that works the way you want.
In case you are looking for embedded SQL:
Exec SQL
update Table1 set TabCol1 ='New Value'
where Table1KeyField1 =:Table1KeyValue1
and Table1KeyField2 =:Table1KeyValue2
and Exists (
select '1' from Table2
where Table2KeyField1 =:Table2KeyValue1
and Table2KeyField2 =:Table2KeyValue2
) ;
