I am familiar with Sybase, which allows queries of the form IF EXISTS () THEN ... ELSE ... END IF (or very close). This is a powerful statement that allows: "if exists, then update, else insert".
I am writing queries for DB2 on an IBM iSeries box. I have seen the CASE keyword, but I cannot make it work. I always receive the error: "Keyword CASE not expected."
Sample:
IF EXISTS ( SELECT * FROM MYTABLE WHERE KEY = xxx )
THEN UPDATE MYTABLE SET VALUE = zzz WHERE KEY = xxx
ELSE INSERT INTO MYTABLE (KEY, VALUE) VALUES (xxx, zzz)
END IF
Is there a way to do this against DB2 on IBM iSeries? Currently, I run two queries: first a SELECT, then my Java code decides whether to UPDATE or INSERT. I would rather write a single query, as my server is located far away (across the Pacific).
UPDATE:
DB2 for i, as of version 7.1, now has a MERGE statement which does what you are looking for.
MERGE INTO { table-name | view-name } [ correlation-clause ]
    USING table-reference ON search-condition
    WHEN [ NOT ] MATCHED [ AND condition ] THEN
        { update-operation | delete-operation | insert-operation | signal-statement }
    [ ...the WHEN clause may be repeated... ]
See IBM i 7.1 InfoCenter DB2 MERGE statement reference page
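For the single-row upsert in the question, a minimal sketch (untested; it reuses the MYTABLE / KEY / VALUE names and the xxx / zzz placeholders from the question):
MERGE INTO MYTABLE AS T
USING (SELECT xxx AS KEY, zzz AS VALUE
       FROM SYSIBM.SYSDUMMY1) AS S
ON T.KEY = S.KEY
WHEN MATCHED THEN
    UPDATE SET VALUE = S.VALUE
WHEN NOT MATCHED THEN
    INSERT (KEY, VALUE) VALUES (S.KEY, S.VALUE);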
DB/2 on the AS/400 does not have a conditional INSERT / UPDATE statement.
You could drop the SELECT statement by executing the INSERT directly and, if it fails, executing the UPDATE statement. Flip the order of the statements if your data is more likely to UPDATE than INSERT.
A faster option would be to create a temporary table in QTEMP, INSERT all of the records into the temporary table and then execute a bulk UPDATE ... WHERE EXISTS and INSERT ... WHERE NOT EXISTS at the end to merge all of the records into the final table. The advantage of this method is that you can wrap all of the statements in a batch to minimize round trip communication.
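A rough sketch of that temp-table approach (untested; the column types are assumptions, and xxx / zzz are placeholders as in the question):
-- DECLARE GLOBAL TEMPORARY TABLE creates the table in QTEMP for this job only.
DECLARE GLOBAL TEMPORARY TABLE SESSION.STAGING
    (KEY DECIMAL(9,0), VALUE CHAR(10))
    WITH REPLACE;
-- Batch all incoming rows into the staging table.
INSERT INTO SESSION.STAGING (KEY, VALUE) VALUES (xxx, zzz);
-- Update the rows that already exist in the target.
UPDATE MYTABLE T
    SET VALUE = (SELECT S.VALUE FROM SESSION.STAGING S WHERE S.KEY = T.KEY)
    WHERE EXISTS (SELECT 1 FROM SESSION.STAGING S WHERE S.KEY = T.KEY);
-- Insert the rows that do not exist yet.
INSERT INTO MYTABLE (KEY, VALUE)
    SELECT S.KEY, S.VALUE
    FROM SESSION.STAGING S
    WHERE NOT EXISTS (SELECT 1 FROM MYTABLE T WHERE T.KEY = S.KEY);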
You can perform control-flow logic (IF...THEN...ELSE) in an SQL stored procedure. Here's sample SQL source code:
-- Warning! Untested code ahead.
CREATE PROCEDURE libname.UPSERT_MYTABLE (
IN THEKEY DECIMAL(9,0),
IN NEWVALUE CHAR(10) )
LANGUAGE SQL
MODIFIES SQL DATA
BEGIN
DECLARE FOUND CHAR(1);
-- Set FOUND to 'Y' if the key is found, 'N' if not.
-- (Perhaps there's a more direct way to do it.)
SET FOUND = 'N';
SELECT 'Y' INTO FOUND
FROM SYSIBM.SYSDUMMY1
WHERE EXISTS
(SELECT * FROM MYTABLE WHERE KEY = THEKEY);
IF FOUND = 'Y' THEN
UPDATE MYTABLE
SET VALUE = NEWVALUE
WHERE KEY = THEKEY;
ELSE
INSERT INTO MYTABLE
(KEY, VALUE)
VALUES
(THEKEY, NEWVALUE);
END IF;
END;
Once you create the stored procedure, you call it like you would any other stored procedure on this platform:
CALL UPSERT_MYTABLE( xxx, zzz );
This slightly over-complex piece of SQL procedure will solve your problem:
IBM Technote
If you want to do a mass update from another table, have a look at the MERGE statement: an incredibly powerful statement that lets you insert, update, or delete depending on the values from another table.
IBM DB2 Syntax
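For the table-to-table case, a rough sketch (untested; SOURCE_TABLE is a hypothetical source with the same KEY / VALUE layout as MYTABLE):
MERGE INTO MYTABLE AS T
USING SOURCE_TABLE AS S
ON T.KEY = S.KEY
WHEN MATCHED THEN
    UPDATE SET VALUE = S.VALUE
WHEN NOT MATCHED THEN
    INSERT (KEY, VALUE) VALUES (S.KEY, S.VALUE);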
Related
Is it possible to set a trigger to set the new row's value to be the result of a select statement? My current syntax is as follows and it's just not working:
CREATE TRIGGER "BRAND_NEW_TRIGGER"
BEFORE INSERT ON my_table
FOR EACH ROW
BEGIN
:NEW.column_one := (SELECT details_col FROM other_table WHERE property_id = :NEW.property_id);
END;
/
I've fudged the details of the code above to protect my company's security. I know it doesn't make too much sense as written, but there is a valid reason I need to pull and organise the data this way.
You can do a SELECT INTO:
select ot.details_col
into :new.column_one
from other_table ot
where ot.property_id = :new.property_id;
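Wrapped into the full trigger it could look like this (a sketch only, reusing the placeholder names from the question; note that it raises NO_DATA_FOUND if no matching row exists):
CREATE OR REPLACE TRIGGER brand_new_trigger
BEFORE INSERT ON my_table
FOR EACH ROW
BEGIN
    SELECT ot.details_col
    INTO :new.column_one
    FROM other_table ot
    WHERE ot.property_id = :new.property_id;
END;
/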
Of course, I'd question whether this makes sense at all; it strongly implies that you've got a data model in need of some normalization.
I have a function which returns column names, and I am trying to use the column name as part of my SELECT statement, but my results come back as the column name instead of the values.
FUNCTION returning column name:
get_col_name(input1, input2)
Can I use this query to get the values of that column from the table?
SELECT GET_COL_NAME(input1,input2) FROM TABLE;
There are a few ways to run dynamic SQL directly inside a SQL statement. These techniques should be avoided since they are usually complicated, slow, and buggy. Before you do this try to find another way to solve the problem.
The below solution uses DBMS_XMLGEN.GETXML to produce XML from a dynamically created SQL statement, and then uses XML table processing to extract the value.
This is the simplest way to run dynamic SQL in SQL, and it only requires built-in packages. The main limitation is that the number and type of columns is still fixed. If you need a function that returns an unknown number of columns you'll need something more powerful, like the open source program Method4. But that level of dynamic code gets even more difficult and should only be used after careful consideration.
Sample schema
--drop table table1;
create table table1(a number, b number);
insert into table1 values(1, 2);
commit;
Function that returns column name
create or replace function get_col_name(input1 number, input2 number) return varchar2 is
begin
if input1 = 0 then
return 'a';
else
return 'b';
end if;
end;
/
Sample query and result
select dynamic_column
from
(
select xmltype(dbms_xmlgen.getxml('
select '||get_col_name(0,0)||' dynamic_column from table1'
)) xml_results
from dual
)
cross join
xmltable
(
'/ROWSET/ROW'
passing xml_results
columns dynamic_column varchar2(4000) path 'DYNAMIC_COLUMN'
);
DYNAMIC_COLUMN
--------------
1
If you change the inputs to the function, the new value is 2, from column B. Use this SQL Fiddle to test the code.
I want to update all values in my table, but this could kill my database:
UPDATE Table_1
SET Value = 'Some string with but changed'
where value = 'Some string without changes';
Can I do this with a procedure, and guarantee that it will not run forever? Please, I need some tips.
Edit
I read about cursors, but how can I use them here?
Your SQL seems fine and that is the preferred solution. A cursor will normally be far, far slower.
If you cannot create an index on the value column and the update above is really that slow, try the following. Considering I don't have the rest of the table definition to work with, I assume your primary key is a single field named ID:
First, create a temporary table with only the matching records:
CREATE TEMPORARY TABLE temp as
SELECT *
FROM Table_1
WHERE value = 'Some string without changes';
Then, update using this temporary table:
UPDATE Table_1 SET
Table_1.Value = 'Some string with but changed'
WHERE EXISTS (
SELECT *
FROM Temp
WHERE Temp.ID = Table_1.ID
);
Another approach, if your DB version is higher than 11g R1: Oracle provides a package called DBMS_PARALLEL_EXECUTE, which is designed for large DMLs or any process that can be split into chunks and run in parallel.
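A rough sketch of how that could look for the update above (untested; the task name and the chunk / parallel settings are arbitrary choices):
BEGIN
    DBMS_PARALLEL_EXECUTE.CREATE_TASK(task_name => 'update_table_1');
    -- Split the table into rowid ranges of roughly 10,000 rows each.
    DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_ROWID(
        task_name   => 'update_table_1',
        table_owner => USER,
        table_name  => 'TABLE_1',
        by_row      => TRUE,
        chunk_size  => 10000);
    -- Run the update chunk by chunk, with up to four parallel jobs.
    DBMS_PARALLEL_EXECUTE.RUN_TASK(
        task_name      => 'update_table_1',
        sql_stmt       => 'UPDATE Table_1
                              SET Value = ''Some string with but changed''
                            WHERE Value = ''Some string without changes''
                              AND rowid BETWEEN :start_id AND :end_id',
        language_flag  => DBMS_SQL.NATIVE,
        parallel_level => 4);
    DBMS_PARALLEL_EXECUTE.DROP_TASK(task_name => 'update_table_1');
END;
/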
In Postgres, I can write
INSERT .. RETURNING *
To retrieve all values that were generated during the insert. In Oracle and HSQLDB, I can use
String[] columnNames = ...
PreparedStatement stmt = connection.prepareStatement(sql, columnNames);
// ...
stmt.execute();
stmt.getGeneratedKeys();
To retrieve all values that were generated. MySQL is a bit limited and only returns columns that are set to AUTO_INCREMENT. But how can this be done with Sybase SQL Anywhere? The JDBC driver does not implement these methods, and there is no INSERT .. RETURNING clause as in Postgres. Is there a way to do it, other than maybe running
SELECT @@identity
immediately after the insert?
My current implementation executes three consecutive SQL statements:
-- insert the data first
INSERT INTO .. VALUES (..)
-- get the generated identity value immediately afterwards
SELECT @@identity
-- get the remaining values from the record (possibly generated by a trigger)
SELECT * FROM .. WHERE ID = :previous_identity
The third statement can be omitted if only the ID column is requested.
In Oracle, given a simple data table:
create table data (
id VARCHAR2(255),
key VARCHAR2(255),
value VARCHAR2(511));
suppose I want to "insert or update" a value. I have something like:
merge into data using dual on
(id='someid' and key='testKey')
when matched then
update set value = 'someValue'
when not matched then
insert (id, key, value) values ('someid', 'testKey', 'someValue');
Is there a better way than this? This command seems to have the following drawbacks:
Every literal needs to be typed twice (or added twice via parameter setting)
The "using dual" syntax seems hacky
If this is the best way, is there any way around having to set each parameter twice in JDBC?
I don't consider using dual to be a hack. To get rid of binding/typing twice, I would do something like:
merge into data
using (
select
'someid' id,
'testKey' key,
'someValue' value
from
dual
) val on (
data.id=val.id
and data.key=val.key
)
when matched then
update set data.value = val.value
when not matched then
insert (id, key, value) values (val.id, val.key, val.value);
I would hide the MERGE inside a PL/SQL API and then call that via JDBC:
data_pkg.merge_data ('someid', 'testKey', 'someValue');
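A sketch of what such a package might look like (it wraps a MERGE like the one in the previous answer; the data_pkg name comes from the call above):
CREATE OR REPLACE PACKAGE data_pkg AS
    PROCEDURE merge_data (p_id data.id%TYPE, p_key data.key%TYPE, p_value data.value%TYPE);
END data_pkg;
/
CREATE OR REPLACE PACKAGE BODY data_pkg AS
    PROCEDURE merge_data (p_id data.id%TYPE, p_key data.key%TYPE, p_value data.value%TYPE) IS
    BEGIN
        -- Upsert a single row; the parameters are bound once and reused for both branches.
        MERGE INTO data d
        USING (SELECT p_id id, p_key key, p_value value FROM dual) val
            ON (d.id = val.id AND d.key = val.key)
        WHEN MATCHED THEN
            UPDATE SET d.value = val.value
        WHEN NOT MATCHED THEN
            INSERT (id, key, value) VALUES (val.id, val.key, val.value);
    END merge_data;
END data_pkg;
/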
As an alternative to MERGE, the API could do:
begin
insert into data (...) values (...);
exception
when dup_val_on_index then
update data
set ...
where ...;
end;
I prefer to try the update before the insert to save having to check for an exception.
update data set ...=... where ...=...;
if sql%notfound then
insert into data (...) values (...);
end if;
Even now that we have the MERGE statement, I still tend to do single-row upserts this way - it just seems a more natural syntax. Of course, MERGE really comes into its own when dealing with larger data sets.