I'm trying to update a single column in a table for many rows, but each row needs a different updated date value, located by a unique WHERE condition on two other columns. I'm reading the data from a CSV and simply updating the date column in the row located by the combination of values in the other two columns.
I've seen this
SQL update multiple rows based on multiple where conditions
but the SET value will not be static, and will need to match each row where the other two column values match. This is because in my table, the combination of those two other columns is always unique.
Pseudocode
UPDATE mytable SET date = (many different date values)
WHERE col_1 = x and col_2 = y
col_1 and col_2 values will change for every row in the CSV, as the combination of these two values is unique. I was looking into using CASE in Postgres, but I understand it cannot be used with multiple columns.
So basically, each CSV row has a date value that must be updated in the record where col_1 and col_2 equal their respective values in the CSV row. If these values don't exist in the database, the row is simply ignored.
Is there an elegant way to do this in a single query? This query is part of a Spring Batch job, so I might not be able to use native Postgres syntax, but I'm struggling to even understand the format of the query, so I can worry about the syntax later. Would I need multiple UPDATE statements? If so, how can I achieve that in the write step of a Spring Batch job?
EDIT: Adding some sample data to explain process
CSV rows:
date, col_1, col_2
2021-12-30, 'abc', 'def'
2021-05-30, 'abc', 'zzz'
2021-07-30, 'hfg', 'xxx'
I'll need my query to locate the record where col_1='abc' AND col_2='def', then change the date column to 2021-12-30. I'll need to do this for every row, but I don't know how to format the UPDATE query.
You can insert your CSV data into a (temporary) table (say mycsv) and use UPDATE with a FROM clause. For instance:
CREATE TEMP TABLE mycsv (date DATE, col_1 TEXT, col_2 TEXT);
COPY mycsv FROM '/path/to/csv/csv-file.csv' WITH (FORMAT csv);
UPDATE mytable m SET date = c.date
FROM mycsv c WHERE m.col_1 = c.col_1 AND m.col_2 = c.col_2;
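If the job already has a DataSource, the same approach can be driven from Java with Spring's JdbcTemplate. A minimal sketch, assuming the CSV has already been parsed into a list of row objects (CsvRow and its getters are hypothetical names):
// All three statements must run on the same connection (e.g. inside one
// transaction), because a Postgres temp table is scoped to the session.
jdbcTemplate.execute("CREATE TEMP TABLE mycsv (date DATE, col_1 TEXT, col_2 TEXT)");

// Stage the parsed CSV rows in the temp table
List<Object[]> batch = new ArrayList<>();
for (CsvRow r : rows) {
    batch.add(new Object[] { r.getDate(), r.getCol1(), r.getCol2() });
}
jdbcTemplate.batchUpdate("INSERT INTO mycsv (date, col_1, col_2) VALUES (?, ?, ?)", batch);

// Join-update in one statement; CSV rows with no match in mytable are ignored
jdbcTemplate.update(
    "UPDATE mytable m SET date = c.date " +
    "FROM mycsv c WHERE m.col_1 = c.col_1 AND m.col_2 = c.col_2");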
Create an ItemWriter implementation and override the write() method. That method accepts a list of objects, each returned from your ItemProcessor implementation.
In the write method, simply loop through the objects, and call update on each one in turn.
For Example:
In the ItemWriter:
@Autowired
private SomeDao dataAccessObject;

@Override
public void write(List<? extends YourDTO> someDTOs) throws Exception {
    for (YourDTO dto : someDTOs) {
        dataAccessObject.update(dto);
    }
}
In your DAO:
private static final String sql = "UPDATE mytable SET dateField = ? WHERE col_1 = ? AND col_2 = ?";

public void update(YourDTO dto) {
    Object[] parameters = { dto.getDate(), dto.getCol1(), dto.getCol2() };
    // java.sql.Types has no STRING constant; VARCHAR is the correct type code here
    int[] types = { Types.DATE, Types.VARCHAR, Types.VARCHAR };
    jdbcTemplate.update(sql, parameters, types);
}
Related
I am trying to replace all of the NULL values with 0 in a column of a big table in Hive.
However, every time I try some code I end up generating a new column in the output. The column I am trying to modify still exists and still has the NULL values, while the automatically generated column (e.g. _c1) contains exactly what I want the original column to look like.
I tried COALESCE, but that also ended up generating a new column. I also tried a CASE WHEN, but with the same result.
Select *,
CASE WHEN columnname IS NULL THEN 0
ELSE columnname
END
from tablename;
Also tried
SELECT coalesce(columnname, CAST(0 AS BIGINT)) FROM tablename
I would just like to update the table so the other columns stay as they are, and the column I want to modify keeps its original name but has 0's where the NULL values were.
I don't want to generate a new column, just modify an existing one.
How should I do that?
Use the insert overwrite option:
insert overwrite table tablename
select c1,c2,...,coalesce(columnname,0) as columnname
from tablename
Note that you have to specify all the other required column names in the select.
$data['establishments2'] = Establishments::Join("establishment_categories", 'establishment_categories.establishment_id', '=', 'establishments.id')
    ->where('establishments.city', 'LIKE', $location)
    ->where('establishments.status', 0)
    ->whereIn('establishment_id', array($est_data))
    ->get(array('establishments.*'));
This is the controller condition.
I have two tables. In table1 I am matching id with table2 and then fetching data from table1, and table2 has multiple rows with the same table1 id. I want to get each table1 row only once, but since there are multiple rows with the same id in table2, the data repeats multiple times. Can anyone please tell me how to get the data only once, whether table2 has a single matching row or multiple ones? Thank you.
You can do it by selecting only the field names you need, instead of getting all fields from the establishments table:
$data['establishments2'] = Establishments::Join("establishment_categories", 'establishment_categories.establishment_id', '=', 'establishments.id')
    ->where('establishments.city', 'LIKE', $location)
    ->where('establishments.status', 0)
    ->whereIn('establishment_id', array($est_data))
    ->get(array('establishments.*'));
You can select a specific field from the establishments table like this:
$data['establishments2'] = Establishments::Join("establishment_categories", 'establishment_categories.establishment_id', '=', 'establishments.id')
    ->where('establishments.city', 'LIKE', $location)
    ->where('establishments.status', 0)
    ->whereIn('establishment_id', array($est_data))
    ->get('establishments.fieldName');
Or you can also do:
$data['establishments2'] = Establishments::Join("establishment_categories", 'establishment_categories.establishment_id', '=', 'establishments.id')
    ->where('establishments.city', 'LIKE', $location)
    ->where('establishments.status', 0)
    ->whereIn('establishment_id', array($est_data))
    ->select('establishments.fieldName')
    ->get();
I have an UPDATE statement on a large-volume table.
It updates only one row at a time.
Update MyTable
Set Col1 = Value
where primary key filters
When this update statement is executed, I also want a value returned, so I can avoid a SELECT query on the same table and save resources.
What will be my syntax to achieve this?
You can use the RETURNING keyword.
Update MyTable
Set Col1 = Value
where primary key filters
returning column1,column2...
into variable1,variable2...
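Note that RETURNING ... INTO targets PL/SQL variables. From JDBC, one way to consume it is an anonymous PL/SQL block with an OUT parameter; a sketch, where MyTable and Col1 come from the question and id stands in for the primary-key filter:
String plsql = "begin update MyTable set Col1 = ? where id = ? returning Col1 into ?; end;";
try (CallableStatement cs = conn.prepareCall(plsql)) {
    cs.setString(1, newValue);
    cs.setLong(2, id);
    cs.registerOutParameter(3, Types.VARCHAR);  // receives the returned value
    cs.execute();
    String returned = cs.getString(3);          // no separate SELECT needed
}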
I'm trying to store a row in a DB2 database table where the primary key is an autoincrement. This works fine, but I'm having trouble wrapping my head around how to retrieve the primary key value for further processing after successfully inserting the row. How do you achieve this? @JdbcInsert only returns the number of rows that were inserted ...
Since there does not seem to be a way to do this with SSJS (at least to me), I moved this particular piece of logic from my SSJS controller to a Java helper bean I created for JDBC-related tasks. A Statement is capable of handing back generated keys (using the method executeUpdate()). So I still create my connection via @JdbcGetConnection, but then hand it in to the bean. This is the interesting part of the bean:
/**
 * SQL contains the INSERT statement. Returns the generated key,
 * or -1 if the driver handed none back.
 */
public int executeUpdate(Connection conn, String SQL) throws SQLException {
    int returnVal;
    Statement stmt = conn.createStatement();
    // Ask the driver to return any auto-generated keys for this statement
    stmt.executeUpdate(SQL, Statement.RETURN_GENERATED_KEYS);
    if (!conn.getAutoCommit()) conn.commit();
    ResultSet keys = stmt.getGeneratedKeys();
    if (keys.next()) {
        returnVal = keys.getInt(1);
    } else {
        returnVal = -1;
    }
    return returnVal;
}
If you insert more than one row at a time, you'll need to change the key retrieval handling, of course.
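For the multi-row case, getGeneratedKeys() returns one ResultSet row per inserted row, so the retrieval part becomes a loop; a sketch:
ResultSet keys = stmt.getGeneratedKeys();
List<Integer> generated = new ArrayList<>();
// One ResultSet row per inserted row
while (keys.next()) {
    generated.add(keys.getInt(1));
}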
In newer DB2 versions you can transform every INSERT into a SELECT to retrieve automatically generated key columns. An example is:
select keycol from Final Table (insert into table (col1, col2) values (?,?))
keycol is the name of your identity column.
The SELECT can be executed with the same @Function as your usual queries.
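From plain JDBC the statement behaves like any other query; a sketch, with a placeholder table name (mytable) and the keycol identity column from the example above:
try (PreparedStatement ps = conn.prepareStatement(
        "select keycol from final table (insert into mytable (col1, col2) values (?, ?))")) {
    ps.setString(1, value1);
    ps.setString(2, value2);
    try (ResultSet rs = ps.executeQuery()) {
        if (rs.next()) {
            int newKey = rs.getInt("keycol");  // the generated identity value
        }
    }
}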
In Oracle, given a simple data table:
create table data (
id VARCHAR2(255),
key VARCHAR2(255),
value VARCHAR2(511));
suppose I want to "insert or update" a value. I have something like:
merge into data using dual on
(id='someid' and key='testKey')
when matched then
update set value = 'someValue'
when not matched then
insert (id, key, value) values ('someid', 'testKey', 'someValue');
Is there a better way than this? This command seems to have the following drawbacks:
Every literal needs to be typed twice (or added twice via parameter setting)
The "using dual" syntax seems hacky
If this is the best way, is there any way around having to set each parameter twice in JDBC?
I don't consider using dual to be a hack. To get rid of binding/typing twice, I would do something like:
merge into data
using (
select
'someid' id,
'testKey' key,
'someValue' value
from
dual
) val on (
data.id=val.id
and data.key=val.key
)
when matched then
update set data.value = val.value
when not matched then
insert (id, key, value) values (val.id, val.key, val.value);
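This shape also answers the JDBC part of the question: because the values live only in the using clause, each bind variable is set exactly once. A sketch:
String sql =
    "merge into data " +
    "using (select ? id, ? key, ? value from dual) val " +
    "on (data.id = val.id and data.key = val.key) " +
    "when matched then update set data.value = val.value " +
    "when not matched then insert (id, key, value) " +
    "values (val.id, val.key, val.value)";
try (PreparedStatement ps = conn.prepareStatement(sql)) {
    ps.setString(1, "someid");    // each value bound once, not twice
    ps.setString(2, "testKey");
    ps.setString(3, "someValue");
    ps.executeUpdate();
}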
I would hide the MERGE inside a PL/SQL API and then call that via JDBC:
data_pkg.merge_data ('someid', 'testKey', 'someValue');
As an alternative to MERGE, the API could do:
begin
insert into data (...) values (...);
exception
when dup_val_on_index then
update data
set ...
where ...;
end;
I prefer to try the update before the insert to save having to check for an exception.
update data set ...=... where ...=...;
if sql%notfound then
insert into data (...) values (...);
end if;
Even now that we have the merge statement, I still tend to do single-row updates this way - it just seems a more natural syntax. Of course, merge really comes into its own when dealing with larger data sets.
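The same update-first pattern carries over to JDBC, where executeUpdate's return count plays the role of sql%notfound; a sketch against the data table from the question:
int updated;
try (PreparedStatement upd = conn.prepareStatement(
        "update data set value = ? where id = ? and key = ?")) {
    upd.setString(1, "someValue");
    upd.setString(2, "someid");
    upd.setString(3, "testKey");
    updated = upd.executeUpdate();
}
if (updated == 0) {  // nothing matched: mirrors "if sql%notfound"
    try (PreparedStatement ins = conn.prepareStatement(
            "insert into data (id, key, value) values (?, ?, ?)")) {
        ins.setString(1, "someid");
        ins.setString(2, "testKey");
        ins.setString(3, "someValue");
        ins.executeUpdate();
    }
}
Under concurrent writers the insert can still collide with a duplicate key, so the dup_val_on_index handler from the earlier variant remains a useful safety net.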