How do I set an attribute to null that is currently set to an empty string '' - oracle

I'm reading in a file using EvaluateJsonPath.
Some attribute values are set as empty strings.
This is very problematic when I'm dealing with DATEs.
If I use TO_DATE in my INSERT or UPDATE call and the DATE is '', NiFi fails with, for example:
sql.args.2.value is '', which cannot be converted to a timestamp.
The database is set up to allow null values on the field.
How is one supposed to handle DATEs when the value may be empty but null is valid for the entry, when using NiFi to send the data?
============================
updates
I created a test table with just 3 columns: id, TEST_DATE, TEST_TIMESTAMP.
Using the NiFi 'PutSQL' processor I am able to insert one or both of the date columns when a valid value is present in the data read in.
The issue is when the data does not contain a value for a date and NiFi sees it as an empty ''. When the processor attempts to make the call with that empty value '', that is where the message comes from.
Is there any way to conditionally check the value of a parameter within the SQL INSERT statement, similar to NVL(?, NULL)?

The solution when dealing with sql.args.#.value and null values is as follows.
On the UpdateAttribute processor where one would define sql.args.#.value, you create a RULE under the ADVANCED section (a sketch follows the INSERT example below).
Create a rule that checks that the attribute used to set the value is not empty: ${arg:isEmpty():not()}
As the rule's action, create sql.args.#.value and set it to ${arg}
This will create sql.args.#.value only when the value is not empty. The processor that reads in the ? parameters (PutSQL) will then use null in place of the '' empty string value. This was very problematic when dealing with the Oracle database field types DATE and TIMESTAMP.
Yes, I also verified that the SQL works correctly when you have many values and null is being used in place of an undefined sql.args.#.value.
So, as an example: if you have 6 fields/values to set and we don't set field #4 (sql.args.4.value) because its value is empty '', the statement will still work correctly for the other fields:
sql.args.1.value = arg1
sql.args.2.value = arg2
sql.args.3.value = arg3
sql.args.5.value = arg5
sql.args.6.value = arg6
INSERT INTO TABLEXYZ (
COL1,
COL2,
COL3,
COL4,
COL5,
COL6)
VALUES (
?,
?,
?,
?,
?,
?
)
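For reference, a minimal sketch of that UpdateAttribute rule (the attribute name testDate and argument index 2 are illustrative, not from the original flow):
Rule: set-date-arg
Condition: ${testDate:isEmpty():not()}
Action: sql.args.2.value = ${testDate}
Note that PutSQL also expects a companion sql.args.#.type attribute holding the JDBC type code (93 for TIMESTAMP, 91 for DATE), which the same rule can set alongside the value.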

Related

how to pass a variable number of parameters using a jdbc prepared statement?

The following statement is used on DB2 to perform an UPSERT operation:
MERGE INTO mytable AS mt USING (
SELECT * FROM TABLE (
VALUES
(?, ?),
(?, ?),
-- ^ repeated many times, one for each row to be upserted
)
) AS vt(id, val) ON (mt.id = vt.id)
WHEN MATCHED THEN
UPDATE SET val = vt.val
WHEN NOT MATCHED THEN
INSERT (id, val) VALUES (vt.id, vt.val)
;
Every time I call this statement, I will have a different number of rows to be inserted. Is it possible to make this call using a prepared statement? What would that look like?
Ref: https://stackoverflow.com/a/23784606/1033422
If the number of ? parameter-markers varies per run, you must re-prepare the statement each time that number changes. I would use a Declared Global Temporary Table (DGTT) instead, especially if there are very large numbers of rows. Yes, it is more statements, but it is easier to scale because you can dynamically index the DGTT.
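For illustration, a sketch of the DGTT approach against the mytable example above (the temp table name and column types are assumptions):
-- once per session: declare the temporary table
-- (requires a USER TEMPORARY tablespace on DB2 LUW)
DECLARE GLOBAL TEMPORARY TABLE SESSION.upsert_vals (
id INTEGER,
val VARCHAR(100)
) ON COMMIT PRESERVE ROWS NOT LOGGED;
-- per batch: load the rows with a fixed two-marker statement,
-- prepared once and executed (or batched) for each row
INSERT INTO SESSION.upsert_vals (id, val) VALUES (?, ?);
-- then run a MERGE whose text never changes
MERGE INTO mytable AS mt
USING (SELECT id, val FROM SESSION.upsert_vals) AS vt
ON (mt.id = vt.id)
WHEN MATCHED THEN
UPDATE SET val = vt.val
WHEN NOT MATCHED THEN
INSERT (id, val) VALUES (vt.id, vt.val);
-- clear the temp table for the next batch
DELETE FROM SESSION.upsert_vals;
Because the INSERT always has exactly two parameter markers, both it and the MERGE can be prepared once, no matter how many rows arrive per run.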
For more information on temporary tables in DB2, see this question.

vertica copy command with null value for Integer

Is there any empty character I can put into the CSV in order to put a null value into an integer column, without using the ",X" pattern (where X is a value and the first value is null)?
Suppose you have a file /tmp/file.csv like this:
2016-01-10,100,abc
2016-02-21,,def
2017-01-01,300,ghi
and a target table defined as follows:
create table t1 ( dt date, id integer, txt char(10));
Then, the following command will insert NULL into "id" for the second row (the one having dt='2016-02-21'):
copy t1 from '/tmp/file.csv' delimiter ',' direct abort on error;
Now, if you want to use a special string to identify NULL values in your input file, let's say 'MYNULL':
2016-01-10,100,abc
2016-02-21,MYNULL,def
2017-01-01,300,ghi
Then... you have to run COPY this way:
copy t1 from '/tmp/file.csv' delimiter ',' null 'MYNULL' direct abort on error;

how to pass parameter to oracle update statement from csv file and excluding null values from csv

I have a situation where I have following csv file(say file.csv) with following data:
AcctId,Name,OpenBal,closingbal
1,abc,1000,
2,,0,
3,xyz,,
4,,,
How can I loop through this file using a unix shell and, say for column $2 (Name), get all occurrences of the Name column except null values, and pass them, in quoted '','' format, to a query like the following?
select * from account
where name in (collection of values from csv column Name, excluding null values)
and openbal in (collection of values from csv column OpenBal, excluding null values)
and closingbal in (collection of values from csv column closingbal, excluding null values)
In short, what I want is to pass the csv column values as input parameters to an Oracle select query (and an update query too), but I don't want to include null values. If a column is entirely null for all rows, I want to exclude that column as well.
Not sure why you'd want to loop through this file in a unix shell script: perhaps because you can't think of any better approach? Anyway, I'm going to skip that and offer a pure Oracle solution.
We can expose data in CSV files to the database using external tables. These are like regular tables except their data comes from files in OS directories on the database server (rather than the database's storage). Find out more.
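For example, an external table over file.csv might be declared like this (the directory name and server path are assumptions, and creating the directory object needs the appropriate privilege; SKIP 1 skips the header row, and MISSING FIELD VALUES ARE NULL maps empty fields to NULL):
create or replace directory csv_dir as '/data/csv';
create table your_external_table (
acctid number,
name varchar2(50),
openbal number,
closingbal number
)
organization external (
type oracle_loader
default directory csv_dir
access parameters (
records delimited by newline
skip 1
fields terminated by ','
missing field values are null
)
location ('file.csv')
)
reject limit unlimited;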
Given this approach it is easy to write the query you want. I suggest using sub-query factoring to select from the external table once.
with cte as ( select name, openbal, closingbal
from your_external_table )
select *
from account a
where a.name in ( select cte.name from cte )
and a.openbal in ( select cte.openbal from cte )
and a.closingbal in ( select cte.closingbal from cte )
The behaviour of the IN clause is to exclude NULL from consideration.
Incidentally, that will return a different (larger) result set from this:
select a.*
from account a
, your_external_table e
where a.name = e.name
and a.openbal= e.openbal
and a.closingbal = e.closingbal

Alter table - change the default value of a column

I have a column with REAL data type, and I'm trying to write a statement to change the default value to 4. However, when I use the select * statement to double check, the column is still empty.
ALTER TABLE recipes
MODIFY NumberOfServings DEFAULT '4';
The DEFAULT value is used at INSERT time. It gives a default value to be inserted when you do not provide a value for the column in the INSERT statement.
You may want to update the table, so that all NULL values are replaced by a given value:
UPDATE recipes SET NumberOfServings=4 WHERE NumberOfServings IS NULL;
You can also specify a value to use when the column is NULL at QUERY time.
This can be done by using the NVL function:
SELECT NVL(NumberOfServings,4) FROM recipes;
http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions105.htm
The default value applies only when you insert new records into the table. It does not update any existing values in the table. Why do you enclose a numeric value in quotes?
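A quick way to see this behaviour (the name column here is assumed for illustration):
ALTER TABLE recipes MODIFY NumberOfServings DEFAULT 4;
-- existing rows keep their NULLs; only new rows pick up the default
INSERT INTO recipes (name) VALUES ('pancakes');
SELECT NumberOfServings FROM recipes WHERE name = 'pancakes'; -- returns 4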

Oracle merge constants into single table

In Oracle, given a simple data table:
create table data (
id VARCHAR2(255),
key VARCHAR2(255),
value VARCHAR2(511));
suppose I want to "insert or update" a value. I have something like:
merge into data using dual on
(id='someid' and key='testKey')
when matched then
update set value = 'someValue'
when not matched then
insert (id, key, value) values ('someid', 'testKey', 'someValue');
Is there a better way than this? This command seems to have the following drawbacks:
Every literal needs to be typed twice (or added twice via parameter setting)
The "using dual" syntax seems hacky
If this is the best way, is there any way around having to set each parameter twice in JDBC?
I don't consider using dual to be a hack. To get rid of binding/typing twice, I would do something like:
merge into data
using (
select
'someid' id,
'testKey' key,
'someValue' value
from
dual
) val on (
data.id=val.id
and data.key=val.key
)
when matched then
update set data.value = val.value
when not matched then
insert (id, key, value) values (val.id, val.key, val.value);
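In JDBC this form needs each value bound only once, three parameters in total (a sketch; with some driver versions you may need to wrap each marker in a CAST, e.g. cast(? as varchar2(255)), so Oracle can infer the bind types):
merge into data
using (
select ? id, ? key, ? value from dual
) val on (
data.id=val.id
and data.key=val.key
)
when matched then
update set data.value = val.value
when not matched then
insert (id, key, value) values (val.id, val.key, val.value);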
I would hide the MERGE inside a PL/SQL API and then call that via JDBC:
data_pkg.merge_data ('someid', 'testKey', 'someValue');
As an alternative to MERGE, the API could do:
begin
insert into data (...) values (...);
exception
when dup_val_on_index then
update data
set ...
where ...;
end;
I prefer to try the update before the insert to save having to check for an exception.
update data set ...=... where ...=...;
if sql%notfound then
insert into data (...) values (...);
end if;
Even now that we have the merge statement, I still tend to do single-row upserts this way - it just seems a more natural syntax. Of course, merge really comes into its own when dealing with larger data sets.
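As a concrete sketch, here is the update-then-insert pattern against the data table from the question above (literals stand in for the bind variables an application would use):
begin
update data
set value = 'someValue'
where id = 'someid'
and key = 'testKey';
if sql%notfound then
insert into data (id, key, value)
values ('someid', 'testKey', 'someValue');
end if;
end;
/
Note that neither this nor the exception-based version is concurrency-safe by itself: two sessions can both miss the update and both attempt the insert, so a unique constraint on (id, key) is still needed to catch that race.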
