Delete element from jsonb array in CockroachDB - jsonb

I have a jsonb field with tags: [{"value": "tag1"}]
I need to do something like update table1 set tags = tags - '{"value": "tag1"}', but this doesn't work.
What query should I execute to delete an element from the array?

Assuming your table looks like
CREATE TABLE public.hasjsonb (
id INT8 NOT NULL,
hash JSONB NULL,
CONSTRAINT hasjsonb_pkey PRIMARY KEY (id ASC)
)
you can do this with the following statement:
INSERT INTO hasjsonb(id, hash)
(SELECT id,array_to_json(array_remove(array_agg(json_array_elements(hash->'tags')),'{"value": "tag1"}'))
FROM hasjsonb
GROUP BY id
)
ON CONFLICT(id) DO UPDATE SET hash = jsonb_set(hasjsonb.hash, array['tags'], excluded.hash);
The actual json operation here is straightforward, if long-winded. We're nesting the following functions:
hash->'tags' -- extract the json value for the "tags" key
json_array_elements -- treat the elements of this json array like rows in a table
array_agg -- just kidding, treat them like a regular SQL array
array_remove -- remove the problematic tag
array_to_json -- convert it back to a json array
What's tricky is that json_array_elements isn't allowed in the SET part of an UPDATE statement, so we can't just do SET hash = jsonb_set(hash, array['tags'], <that function chain>). Instead, my solution uses it in a SELECT statement, where it is allowed, then inserts the result of the select back into the table. Every attempted insert will hit the ON CONFLICT clause, so we get to do that UPDATE SET using the already-computed json array.
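To make the effect concrete, here is a small walkthrough with hypothetical sample data (not taken from the question):
INSERT INTO hasjsonb VALUES (1, '{"tags": [{"value": "tag1"}, {"value": "tag2"}]}');
-- run the INSERT ... ON CONFLICT statement above, then:
SELECT hash FROM hasjsonb WHERE id = 1;
-- expected result: {"tags": [{"value": "tag2"}]}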
Another approach here could be to use string manipulation, but that's fragile as you need to worry about commas appearing inside objects nested in your json.

You can use json_remove_path to remove the element if you know its index statically by passing an integer.
Otherwise, we can do a simpler subquery to filter array elements and then json_agg to build a new array.
create table t (tags jsonb);
insert into t values ('[{"value": "tag2"}, {"value": "tag1"}]');
Then we can remove the tag which has {"value": "tag1"} like:
UPDATE t
SET tags = (
SELECT json_agg(tag)
FROM (
SELECT *
FROM ROWS FROM (json_array_elements(tags)) AS d (tag)
)
WHERE tag != '{"value": "tag1"}'
);
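For completeness, here is a rough sketch of the json_remove_path option mentioned above, assuming the element to delete sits at index 0 (path elements are passed as strings; check the exact syntax against the CockroachDB docs):
UPDATE t SET tags = json_remove_path(tags, ARRAY['0']);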

Related

add values to nested table from another table (oracle)

I am trying to insert values into a nested table with an object of another table. This is what I'm trying (sorry, I'm new to working with DBs):
INSERT INTO Ocurrences (..., oSpace) VALUES
(other inserts,
/* insert I don't know to do it to nested table oSpaces */
);
How could I add a value to oSpaces by inserting an object from the Spaces table?
Thanks.
Just use a collection of REFerences:
INSERT INTO Ocurrences (
CCase,
/* ... Other column identifiers ..., */
oSpaces
) VALUES (
'abc',
/* ... Other column values ..., */
tSpace(
(SELECT REF(s) FROM spaces s WHERE s.intcode='1')
)
);
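To read the collection back, you can unnest it with TABLE() and dereference each element. A rough sketch, reusing the column names assumed above:
SELECT o.CCase, DEREF(s.COLUMN_VALUE) AS space
FROM Ocurrences o, TABLE(o.oSpaces) s;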
As an aside, '20/02/2020' is not a DATE data-type; it is a string literal and relies on an implicit string-to-date conversion. This implicit conversion will fail if the user's NLS_DATE_FORMAT session parameter does not match your string's format, and since any user can change their session parameters at any time, this is not something you should rely on.
Instead, you should use:
A date literal DATE '2020-02-20'; or
An explicit conversion TO_DATE('20/02/2020', 'DD/MM/YYYY').

Oracle NoSQL - how to find all the rows in a MAP where the key starts with a value

I have a question about the MAP data type. Say I have a column labels (labels MAP(RECORD(value STRING, contentType STRING))) in myTable, where the "labels" column is a MAP data type and the value is a RECORD data type.
I want to query the table so that it returns all the rows where a key of "labels" starts with a particular value ("xxx.*").
I've tried this, but I am wondering if there is a better way to do it:
Select labels.keys($key >='xxx') as keys,
labels.values($key >='xxx') as values
from myTable where labels.keys() >=any ('xxx')
You can try
select * from myTableName t
where exists t.labels.keys(starts_with($key, 'xxx'));
or
select f.labels.keys(regex_like($key,'xxx.*')) as keys,
f.labels.values(regex_like($key,'xxx.*')) as values
from myTable f
I also suggest changing from MAP to ARRAY, which supports path filtering to get the matched entries. In the previous examples, the order of the values and keys is not guaranteed:
select labels[regex_like($element.label, 'xxx.*')] from myTable
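A hypothetical sketch of that ARRAY-based schema (the id column and the label field are illustrative, not from the original table):
CREATE TABLE myTable (
id INTEGER,
labels ARRAY(RECORD(label STRING, value STRING, contentType STRING)),
PRIMARY KEY (id)
);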

Hive inserting values to an array complex type column

I am unable to append data to tables that contain an array column using insert into statements; the data type is array<varchar(200)>.
Using JDBC, I am unable to insert values into an array column with values like:
INSERT INTO demo.table (codes) VALUES (['a','b']);
It does not recognise the "[" or "{" signs.
Using the array function like ...
INSERT INTO demo.table (codes) VALUES (array('a','b'));
I get the following error using array function:
Unable to create temp file for insert values Expression of type TOK_FUNCTION not supported in insert/values
Tried the workaround...
INSERT into demo.table (codes) select array('a','b');
unsuccessfully:
Failed to recognize predicate '<EOF>'. Failed rule: 'regularBody' in statement
How can I load array data into columns using JDBC?
My Table has two columns: a STRING, b ARRAY<STRING>.
When I use @Kishore Kumar Suthar's method, I got this:
FAILED: ParseException line 1:33 cannot recognize input near '(' 'a' ',' in statement
But I find another way, and it works for me:
INSERT INTO test.table
SELECT "test1", ARRAY("123", "456", "789")
FROM dummy LIMIT 1;
dummy is any table which has at least one row.
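If you don't have a suitable one-row table handy, a throwaway dummy table can be created like this (table and column names are arbitrary):
CREATE TABLE dummy (x INT);
INSERT INTO TABLE dummy VALUES (1);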
Make a dummy table which has at least one row, then insert through a SELECT instead of VALUES:
INSERT INTO demo.table (codes) SELECT array('a','b') FROM dummy LIMIT 1;
hive> select codes from demo.table;
OK
["a","b"]
Time taken: 0.088 seconds, Fetched: 1 row(s)
Suppose I have a table employee containing the fields ID and Name.
I create another table employee_address with fields ID and Address. Address is a complex data of type array(string).
Here is how I can insert values into it:
insert into table employee_address select 1, ARRAY('NewYork','11th avenue') from employee limit 1;
Here the table employee just acts as a dummy table. No data is copied from it. Its schema may not match employee_address. It doesn't matter.

Insert default value when null is inserted

I have an Oracle database, and a table with several not null columns, all with default values.
I would like to use one insert statement for any data I want to insert, and don't bother to check if the values inserted are nulls or not.
Is there any way to fall back to default column value when null is inserted?
I have this code:
<?php
if (!empty($values['not_null_column_with_default_value'])) {
$insert = "
INSERT INTO schema.my_table
( pk_column, other_column, not_null_column_with_default_value)
VALUES
(:pk_column,:other_column,:not_null_column_with_default_value)
";
} else {
$insert = "
INSERT INTO schema.my_table
( pk_column, other_column)
VALUES
(:pk_column,:other_column)
";
}
So I have to omit the column entirely, or I will get the error "trying to insert null into a not null column".
Of course I have multiple nullable columns, so the code that builds the insert statement is very unreadable and ugly, and I just don't like it that way.
I would like to have one statement, something similar to:
INSERT INTO schema.my_table
( pk_column, other_column, not_null_column_with_default_value)
VALUES
(:pk_column,:other_column, NVL(:not_null_column_with_default_value, DEFAULT) );
That of course is a hypothetical query. Do you know any way I would achieve that goal with Oracle DBMS?
EDIT:
Thank you all for your answers. It seems that there is no "standard" way to achieve what I wanted, so I accepted the (IMO) best answer: that I should stop being too smart and stick to just omitting the null values via automatically built statements.
Not exactly what I would like to see, but no better choice.
For those reading this now:
In Oracle 12c there is a new feature: DEFAULT ON NULL. For example:
CREATE TABLE tab1 (
col1 NUMBER DEFAULT 5,
col2 NUMBER DEFAULT ON NULL 7,
description VARCHAR2(30)
);
So when you try to INSERT null in col2, this will automatically be 7.
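A quick illustration (not part of the original answer):
INSERT INTO tab1 (col1, col2, description) VALUES (NULL, NULL, 'demo');
-- col1 stays NULL: a plain DEFAULT is not applied when NULL is supplied explicitly
-- col2 becomes 7: DEFAULT ON NULL replaces the explicit NULL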
As explained in this AskTom thread, the DEFAULT keyword will only work as a stand-alone expression in a column insert and won't work when mixed with functions or expressions such as NVL.
In other words this is a valid query:
INSERT INTO schema.my_table
( pk_column, other_column, not_null_column_with_default_value)
VALUES
(:pk_column,:other_column, DEFAULT)
You could use a dynamic query with all columns and either a bind variable or the constant DEFAULT if the variable is null. This could be as simple as replacing the string :not_null_column_with_default_value with the string DEFAULT in your $insert.
You could also query the view ALL_TAB_COLUMNS and use nvl(:your_variable, :column_default). The default value is in the column DATA_DEFAULT.
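A sketch of that lookup (owner and table name are placeholders):
SELECT column_name, data_default
FROM all_tab_columns
WHERE owner = 'SCHEMA'
AND table_name = 'MY_TABLE';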
I think the cleanest way is to not mention them in your INSERT-statement. You could start writing triggers to fill default values but that's heavy armor for what you're aiming at.
Isn't it possible to restructure your application code a bit? In PHP, you could construct a clean INSERT-statement without messy if's, e.g. like this:
<?php
$insert['column_name1'] = 'column_value1';
$insert['column_name2'] = 'column_value2';
$insert['column_name3'] = '';
$insert['column_name4'] = 'column_value4';
// remove null values
foreach ($insert as $key => $value) {
if (is_null($value) || $value=="") {
unset($insert[$key]);
}
}
// construct insert statement
$statement = "insert into table (". implode(',', array_keys($insert)) .") values (:". implode(',:', array_keys($insert)) .")";
// call oci_parse
$stid = oci_parse($conn, $statement);
// bind parameters
foreach ($insert as $key => $value) {
// oci_bind_by_name binds by reference, so bind the array element rather than the loop variable
oci_bind_by_name($stid, ":".$key, $insert[$key]);
}
// execute!
oci_execute($stid);
?>
The better option for performance is the first one.
Anyway, as I understand it, you don't want to repeat the insert column names and values because of how difficult it makes modifications. Another option you can use is to run an insert with a RETURNING clause followed by an update:
INSERT INTO schema.my_table
( pk_column, other_column, not_null_column_with_default_value)
VALUES
(:pk_column,:other_column, :not_null_column_with_default_value)
RETURNING not_null_column_with_default_value
INTO :inserted_value
It seems to work with PHP.
After this you can check the :inserted_value bind variable for null. If it's null, you can run the following update:
UPDATE my_table
SET not_null_column_with_default_value = DEFAULT
WHERE pk_column = :pk_column;
I would like to use one insert statement for any data I want to insert, and don't bother to check if the values inserted are nulls or not.
Define your table with a default value for that column. For example:
create table myTable
(
created_date date default sysdate,
...
)
tablespace...
or alter an existing table:
alter table myTable modify(created_date default sysdate);
Now you (or anyone using myTable) don't have to worry about default values, as it should be. Just know that for this example, the column is still nullable, so someone could explicitly insert a null. If this isn't desired, make the column not null as well.
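For example, to combine the default with a NOT NULL constraint on the same column:
alter table myTable modify(created_date date default sysdate not null);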
EDIT: Assuming the above is already done, you can use the DEFAULT keyword in your insert statement. I would avoid triggers for this.

Oracle merge constants into single table

In Oracle, given a simple data table:
create table data (
id VARCHAR2(255),
key VARCHAR2(255),
value VARCHAR2(511));
suppose I want to "insert or update" a value. I have something like:
merge into data using dual on
(id='someid' and key='testKey')
when matched then
update set value = 'someValue'
when not matched then
insert (id, key, value) values ('someid', 'testKey', 'someValue');
Is there a better way than this? This command seems to have the following drawbacks:
Every literal needs to be typed twice (or added twice via parameter setting)
The "using dual" syntax seems hacky
If this is the best way, is there any way around having to set each parameter twice in JDBC?
I don't consider using dual to be a hack. To get rid of binding/typing twice, I would do something like:
merge into data
using (
select
'someid' id,
'testKey' key,
'someValue' value
from
dual
) val on (
data.id=val.id
and data.key=val.key
)
when matched then
update set data.value = val.value
when not matched then
insert (id, key, value) values (val.id, val.key, val.value);
I would hide the MERGE inside a PL/SQL API and then call that via JDBC:
data_pkg.merge_data ('someid', 'testKey', 'someValue');
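A minimal sketch of what that package could look like; the package and procedure names come from the call above, and the body simply wraps a MERGE like the one shown earlier:
CREATE OR REPLACE PACKAGE data_pkg AS
PROCEDURE merge_data (p_id data.id%TYPE, p_key data.key%TYPE, p_value data.value%TYPE);
END data_pkg;
/
CREATE OR REPLACE PACKAGE BODY data_pkg AS
PROCEDURE merge_data (p_id data.id%TYPE, p_key data.key%TYPE, p_value data.value%TYPE) IS
BEGIN
MERGE INTO data d
USING (SELECT p_id AS id, p_key AS key, p_value AS value FROM dual) val
ON (d.id = val.id AND d.key = val.key)
WHEN MATCHED THEN UPDATE SET d.value = val.value
WHEN NOT MATCHED THEN INSERT (id, key, value) VALUES (val.id, val.key, val.value);
END merge_data;
END data_pkg;
/
From JDBC you would then bind each value once, for example via a CallableStatement on "{call data_pkg.merge_data(?, ?, ?)}".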
As an alternative to MERGE, the API could do:
begin
insert into data (...) values (...);
exception
when dup_val_on_index then
update data
set ...
where ...;
end;
I prefer to try the update before the insert to save having to check for an exception.
update data set ...=... where ...=...;
if sql%notfound then
insert into data (...) values (...);
end if;
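Applied to the data table from this question, that pattern might look like the following sketch (bind variable names are illustrative):
BEGIN
UPDATE data SET value = :value WHERE id = :id AND key = :key;
IF SQL%NOTFOUND THEN
INSERT INTO data (id, key, value) VALUES (:id, :key, :value);
END IF;
END;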
Even now that we have the merge statement, I still tend to do single-row updates this way - it just seems a more natural syntax. Of course, merge really comes into its own when dealing with larger data sets.
