Oracle NoSQL - how to find all the rows in a MAP where the key starts with a value

I have a question about the MAP data type. Say I have a column labels (labels MAP(RECORD(value STRING, contentType STRING))) in myTable, where the "labels" column is a MAP data type and its values are a RECORD data type.
I want to query the table and return all the rows where a key of "labels" starts with a particular value ("xxx.*").
I've tried this, but I am wondering if there is a better way to do it:
Select labels.keys($key >='xxx') as keys,
labels.values($key >='xxx') as values
from myTable where labels.keys() >=any ('xxx')

You can try
select * from myTableName t
where exists t.labels.keys(starts_with($key, 'xxx'));
or
select f.labels.keys(regex_like($key,'xxx.*')) as keys,
f.labels.values(regex_like($key,'xxx.*')) as values
from myTable f
I also suggest changing from MAP to ARRAY, which supports a path filter that returns the matched entries directly. In the previous examples, the correspondence between the order of the returned keys and values is not guaranteed:
select labels[regex_like($element.label, 'xxx.*')] from myTable
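For context, a minimal sketch of what the ARRAY-based schema could look like (the id column and the extra label field inside the record are assumptions made for illustration):
create table myTable (
  id INTEGER,
  -- each entry carries its former map key in an explicit "label" field
  labels ARRAY(RECORD(label STRING, value STRING, contentType STRING)),
  primary key (id)
)
With that layout, the path filter above returns each matching record whole, so a key and its value always stay together.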

Related

Delete element from jsonb array in CockroachDB

I have a jsonb field tags: [{"value": "tag1"}]
I need to do something like update table1 set tags = tags - '{"value": "tag1"}', but this doesn't work.
What query should I execute to delete element from array?
Assuming your table looks like
CREATE TABLE public.hasjsonb (
id INT8 NOT NULL,
hash JSONB NULL,
CONSTRAINT hasjsonb_pkey PRIMARY KEY (id ASC)
)
you can do this with the following statement:
INSERT INTO hasjsonb(id, hash)
(SELECT id,array_to_json(array_remove(array_agg(json_array_elements(hash->'tags')),'{"value": "tag1"}'))
FROM hasjsonb
GROUP BY id
)
ON CONFLICT(id) DO UPDATE SET hash = jsonb_set(hasjsonb.hash, array['tags'], excluded.hash);
The actual json operation here is straightforward, if longwinded. We're nesting the following functions:
hash->'tags' -- extract the json value for the "tags" key
json_array_elements -- treat the elements of this json array like rows in a table
array_agg -- just kidding, treat them like a regular SQL array
array_remove -- remove the problematic tag
array_to_json -- convert it back to a json array
What's tricky is that json_array_elements isn't allowed in the SET part of an UPDATE statement, so we can't just do SET hash = jsonb_set(hash, array['tags'], <that function chain>). Instead, my solution uses it in a SELECT statement, where it is allowed, then inserts the result of the select back into the table. Every attempted insert will hit the ON CONFLICT clause, so we get to do the UPDATE SET using the already-computed json array.
Another approach here could be to use string manipulation, but that's fragile as you need to worry about commas appearing inside objects nested in your json.
You can use json_remove_path to remove the element if you know its index statically by passing an integer.
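For instance, a minimal sketch against the question's table1, assuming the unwanted element is known to sit at index 0 (the path elements are passed as strings):
-- assumption: the element to drop is at index 0 of the tags array
UPDATE table1 SET tags = json_remove_path(tags, ARRAY['0']);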
Otherwise, we can do a simpler subquery to filter array elements and then json_agg to build a new array.
create table t (tags jsonb);
insert into t values ('[{"value": "tag2"}, {"value": "tag1"}]');
Then we can remove the tag which has {"value": "tag1"} like:
UPDATE t
SET tags = (
  SELECT json_agg(tag)
  FROM (
    SELECT *
    FROM ROWS FROM (json_array_elements(tags)) AS d (tag)
  )
  WHERE tag != '{"value": "tag1"}'
);
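With the sample row inserted above, a quick check should now show only the remaining tag:
select tags from t;
-- expected: [{"value": "tag2"}]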

How to do the following query in Oracle NoSQL

I am planning to use the NoSQL Cloud Service as our datastore. I have a question about the MAP data type. Say I have a column "labels" (labels MAP(RECORD(value STRING, contentType STRING))) in table "myTable", where the "labels" column is a MAP data type and the value is a RECORD data type.
I want to query the table and return all the rows where a key of "labels" equals a particular value. What would the SQL statement look like? I tried:
select * from myTable where labels.keys($key='xxxx')
which doesn’t work.
Do we need to add an index for the labels field (the MAP)? Is there any performance improvement? If yes, how do we add this index?
Thanks
Please try the following syntax
select * from myTable t
where t.labels.keys() =any "xxx"
Your syntax is good if you add exists
select * from myTable t
where exists t.labels.keys($key = "xxx")
Concerning your question about performance: yes, there will be a significant performance improvement.
If you want to index only the field names (keys) of the map,
you create the index like this:
create index idx_keys on myTable(labels.keys())
If you want to index both the keys and the associated values:
create index idx_keys_values
on myTable(labels.keys(), labels.values())
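A query that filters on both a map key and a field of the associated record value might then look like the sketch below (a rough sketch, assuming the $value variable is usable inside the values() filter alongside $key; the literals are purely illustrative, and whether the index is actually used depends on the query plan):
select * from myTable t
where exists t.labels.values($key = "xxx" and $value.value = "yyy")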

HIVE GROUP_CONCAT with ORDER BY

I have a table with id, key, and value columns. I expect output like this: the results group-concatenated into one record, with group_concat sorting the results by value DESC.
Here is the query I tried:
SELECT id,
CONCAT('{',CONCAT_WS(',',GROUP_CONCAT(CONCAT('"',key, '":"',value, '"'))), '}') AS value
FROM
table_name
GROUP BY id
I want the value in the destination table to be sorted (in descending order) by the source table's value.
To do that, I tried GROUP_CONCAT(... ORDER BY value).
It looks like Hive does not support this. Is there any other way to achieve this in Hive?
Try out this query.
Hive does not support the GROUP_CONCAT function, but you can use the collect_list function to achieve something similar. You will also need an analytic window function, because Hive does not support an ORDER BY clause inside collect_list:
select
  id,
  -- Since we have duplicate group_concat values against the same key,
  -- we can pick any one of them by using the min() function
  -- and grouping by the key 'id'.
  -- Finally, we use the concat and concat_ws functions to
  -- add the commas and the open/close braces for the json object.
  concat('{', concat_ws(',', min(g)), '}')
from
(
  select
    s.id,
    -- The window function collect_list is run against each row with
    -- the partition key of 'id'. This creates a value which is
    -- similar to the value obtained from group_concat, but this
    -- same/duplicate value is appended for each row with the
    -- same key 'id'.
    collect_list(s.c) over (partition by s.id
                            order by s.v desc
                            rows between unbounded preceding and unbounded following) g
  from
  (
    -- First, form the key/value pairs from the original table.
    -- Also, bring along the value column 'v', so that we can use
    -- it later for ordering.
    select
      id,
      v,
      concat('"', k, '":"', v, '"') as c
    from
      table_name -- the source table
  ) s
) gs
-- Need to group by 'id' since we have duplicate collect_list values
group by
  id
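For reference, the query assumes a source table roughly like this (the column names id, k and v are taken from the query itself; the types are assumptions):
-- hypothetical source table matching the columns referenced above
CREATE TABLE table_name (
  id INT,
  k  STRING,
  v  STRING
);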

Hive inserting values to an array complex type column

I am unable to append data to tables that contain an array column using INSERT INTO statements; the data type is array<varchar(200)>.
Using JDBC, I am unable to insert values into an array column with a statement like:
INSERT INTO demo.table (codes) VALUES (['a','b']);
It does not recognise the "[" or "{" signs.
Using the array function like this:
INSERT INTO demo.table (codes) VALUES (array('a','b'));
I get the following error:
Unable to create temp file for insert values Expression of type TOK_FUNCTION not supported in insert/values
Tried the workaround...
INSERT into demo.table (codes) select array('a','b');
unsuccessfully:
Failed to recognize predicate '<EOF>'. Failed rule: 'regularBody' in statement
How can I load array data into columns using JDBC?
My Table has two columns: a STRING, b ARRAY<STRING>.
When I use @Kishore Kumar Suthar's method, I get this:
FAILED: ParseException line 1:33 cannot recognize input near '(' 'a' ',' in statement
But I found another way, and it works for me:
INSERT INTO test.table
SELECT "test1", ARRAY("123", "456", "789")
FROM dummy LIMIT 1;
dummy is any table which has at least one row.
Make a dummy table which has at least one row, then:
INSERT INTO demo.table (codes) SELECT array('a','b') FROM dummy LIMIT 1;
hive> select codes from demo.table;
OK
["a","b"]
Time taken: 0.088 seconds, Fetched: 1 row(s)
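If you don't have a suitable one-row table lying around, a throwaway dummy can be created first (a sketch; any table with at least one row works):
-- hypothetical single-row helper table
CREATE TABLE dummy (x INT);
INSERT INTO dummy VALUES (1);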
Suppose I have a table employee containing the fields ID and Name.
I create another table employee_address with fields ID and Address. Address is a complex data of type array(string).
Here is how I can insert values into it:
insert into table employee_address select 1, ARRAY('NewYork','11th avenue') from employee limit 1;
Here the table employee just acts as a dummy table. No data is copied from it. Its schema may not match employee_address. It doesn't matter.

How to list distinct keys of an index?

The user_indexes table has a column named 'distinct_keys'. Does this value represent the number of distinct keys in the indexed column? If so, is there a way to list all those keys?
Does this value represent the number of distinct keys in the indexed column?
Yes, it does represent the number of distinct indexed values.
If so, is there a way to list all those keys?
You'll have to manually execute SELECT DISTINCT column_name FROM table_name to get the list of distinct values. There is no system view that stores the distinct values associated with an indexed column.
Since you're interested in the distinct values in an index, you would be better off running a query like this:
SELECT DISTINCT column_name FROM table_name WHERE column_name IS NOT NULL;
This is very likely to use the index to return the distinct values very quickly, without having to do a full table scan and a sort.
(Note: if the column already has a validated NOT NULL constraint, you won't need the "IS NOT NULL" where clause).
