How to create an update query with TypeORM and an Oracle JSON column?

I have an Oracle table with a JSON column that I want to update, and I am using TypeORM with JavaScript. I need to access the JSON column in the WHERE clause. Below are the raw SQL query and what I am attempting with TypeORM.
The query updates the date to the current date wherever the key (inside the JSON column) has the value 123:
entityManager.query(`UPDATE TABLE_NAME T
SET DATE = CURRENT_DATE
WHERE T.JSON_COLUMN.key = '123'`)
The query with createQueryBuilder:
tableRepository.createQueryBuilder()
    .update('TABLE_NAME')
    .set({DATE: '2021-07-23 10:07:10'})
    .where('JSON_COLUMN.key = :key', {key: '123'})
    .execute();
I am not sure how to access the JSON column's key in the where clause. Ideally, I would use the dot operator in SQL to access the JSON column's key-value pairs, like so: JSON_Column_Name.key = value, but I cannot find a way to implement it with Oracle.
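For reference, one avenue, sketched here under the question's placeholder names: Oracle's JSON_VALUE function expresses the same condition without the dot notation:
UPDATE TABLE_NAME T
SET "DATE" = CURRENT_DATE  -- "DATE" quoted because DATE is a reserved word in Oracle
WHERE JSON_VALUE(T.JSON_COLUMN, '$.key') = '123'
Since .where() in createQueryBuilder accepts a raw SQL string, a fragment like JSON_VALUE(JSON_COLUMN, '$.key') = :key could be passed there with the same parameter binding.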
Any help would be appreciated.

Related

Delete element from jsonb array in CockroachDB

I have a jsonb field with tags: [{"value": "tag1"}]
I need to do something like update table1 set tags = tags - '{"value": "tag1"}', but this doesn't work.
What query should I execute to delete an element from the array?
Assuming your table looks like
CREATE TABLE public.hasjsonb (
    id INT8 NOT NULL,
    hash JSONB NULL,
    CONSTRAINT hasjsonb_pkey PRIMARY KEY (id ASC)
)
you can do this with the following statement:
INSERT INTO hasjsonb(id, hash)
(SELECT id, array_to_json(array_remove(array_agg(json_array_elements(hash->'tags')), '{"value": "tag1"}'))
 FROM hasjsonb
 GROUP BY id
)
ON CONFLICT(id) DO UPDATE SET hash = jsonb_set(hasjsonb.hash, array['tags'], excluded.hash);
The actual json operation here is straightforward, if long-winded. We're nesting the following functions:
hash->'tags' -- extract the json value for the "tags" key
json_array_elements -- treat the elements of this json array like rows in a table
array_agg -- just kidding, treat them like a regular SQL array
array_remove -- remove the problematic tag
array_to_json -- convert it back to a json array
What's tricky is that json_array_elements isn't allowed in the SET part of an UPDATE statement, so we can't just do SET hash = jsonb_set(hash, array['tags'], <that function chain>). Instead, my solution uses it in a SELECT statement, where it is allowed, then inserts the result of the select back into the table. Every attempted insert will hit the ON CONFLICT clause, so we get to do that UPDATE SET using the already-computed json array.
Another approach here could be to use string manipulation, but that's fragile as you need to worry about commas appearing inside objects nested in your json.
You can use json_remove_path to remove the element if you know its index statically, by passing the index in the path (see the sketch after the sample table below).
Otherwise, we can do a simpler subquery to filter the array elements and then json_agg to build a new array.
create table t (tags jsonb);
insert into t values ('[{"value": "tag2"}, {"value": "tag1"}]');
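With that sample row in place, the static-index route would look something like this (a sketch; json_remove_path is CockroachDB's equivalent of Postgres's #- operator, and {"value": "tag1"} sits at index 1 here):
UPDATE t SET tags = json_remove_path(tags, ARRAY['1']);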
For the general case, where the index isn't known ahead of time, we can remove the tag which has {"value": "tag1"} like:
UPDATE t
SET tags = (
    SELECT json_agg(tag)
    FROM (
        SELECT *
        FROM ROWS FROM (json_array_elements(tags)) AS d (tag)
    )
    WHERE tag != '{"value": "tag1"}'
);
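To verify the result against the sample row inserted above, something like:
SELECT tags FROM t;
-- expected: [{"value": "tag2"}]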

How to do the following query in Oracle NoSQL

I am planning to use NoSQL Cloud Service as our datastore. I have a question about the MAP data type. Say I have a column "labels" (labels MAP(RECORD(value STRING, contentType STRING))) in table "myTable", where the "labels" column is of MAP data type and the value is of RECORD data type.
I want to query the table to return all the rows where a key of the "labels" map equals a particular value. What should the SQL statement look like? I tried:
select * from myTable where labels.keys($key='xxxx')
which doesn't work.
Do we need to add an index for the label field in the MAP? Is there any performance improvement? If yes, how do we add this index?
Thanks
Please try the following syntax:
select * from myTable t
where t.labels.keys() =any "xxx"
Your syntax is good if you add exists:
select * from myTable t
where exists t.labels.keys($key = "xxx")
Concerning your question about performance: yes, there will be a significant performance improvement.
If you want to index only the field names (keys) of the map,
you create the index like this:
create index idx_keys on myTable(labels.keys())
If you want to index both the keys and the associated values:
create index idx_keys_values
on myTable(labels.keys(), labels.values())

ExecuteSQL doesn't select a table if it has a datetimeoffset value?

I have created a table with a single column of data type datetimeoffset(7) and inserted some values.
create table dto (dto datetimeoffset(7))
insert into dto values (GETDATE()) -- inserts date and time with 0 offset
insert into dto values (SYSDATETIMEOFFSET()) -- current date time and offset
insert into dto values ('20131114 08:54:00 +10:00') -- manual way
In NiFi, I have specified the query "SELECT * FROM dto" in ExecuteSQL.
It shows the error below:
java.lang.IllegalArgumentException: createSchema: Unknown SQL type -155 cannot be converted to Avro type
If I change that column to datetime, ExecuteSQL runs correctly, but it doesn't work with the datetimeoffset column.
Any help appreciated.
Many thanks
datetimeoffset is a MSSQL-specific JDBC type and is not supported by ExecuteSQL (which supports the standard JDBC types). You could try to cast the datetimeoffset field into some other standard type such as datetime, as described here.
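A minimal sketch of that cast workaround, assuming the dto table from the question (casting to datetime2 keeps the precision but discards the offset):
SELECT CAST(dto AS datetime2(7)) AS dto FROM dto;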
I've created a Custom Processor and adapted the JdbcCommon.java class to include SQL Server's DATETIMEOFFSET. It's just one line of code. I'll try to see if I can ask them to merge this on the official repo.
This is a piece of my JdbcCommon.java:
case TIMESTAMP:
case TIMESTAMP_WITH_TIMEZONE:
case -101: // Oracle's TIMESTAMP WITH TIME ZONE
case -102: // Oracle's TIMESTAMP WITH LOCAL TIME ZONE
case -155: // SQL Server's DATETIMEOFFSET <---- added this line
    addNullableField(builder, columnName,
        u -> options.useLogicalTypes
            ? u.type(LogicalTypes.timestampMillis().addToSchema(SchemaBuilder.builder().longType()))
            : u.stringType());
    break;

SSRS - Pass MDX parameter value to SQL query

I have an EmployeeID parameter:
="[Employee].[Employee Id].&[12345678912345]"
I have a SQL query that calls for the EmployeeID parameter like this:
Where...AND (EmployeeID = @EmployeeID)
I then go into the SQL dataset's parameters list and set EmployeeID's value to:
=LEFT(RIGHT(Parameters!EmoloyeeID.Value,15),14)
This should give 12345678912345: RIGHT(..., 15) keeps the last 15 characters, 12345678912345], and LEFT(..., 14) then drops the trailing bracket. The EmployeeID column in the SQL table has a datatype of nvarchar(25).
Now when I use Lookup to connect the Cube's dataset (the dataset of the tablix I am working on) with this SQL dataset,
=Lookup(Cube's EmployeeName field, SQL's EmployeeName field, SQL's EmployeeStatus, "SQLDataSet")
I get no output, just blank. (I know for a fact that there is data, because when I execute the SQL query in SSMS with EmployeeID set to 12345678912345, I get the right EmployeeName (it matches the Cube's EmployeeName value) and EmployeeStatus values.)
What am I doing wrong? Is there something wrong with my manipulation of the EmployeeID parameter's value?
Is this a typo?
=LEFT(RIGHT(Parameters!EmoloyeeID.Value,15),14)
Just want to check to ensure it's not a simple fix :-)

SQL Server 2008 search for date

I need to search for rows entered on a specific date.
However, the datatype of the column I need to search on is datetime, and the datatype of the argument is Date.
I can use a query like
Select result
from table
where
convert(date, Mycolumn) = @selectedDate
but this would affect the SARGability of the query and would not use indexes created on Mycolumn.
I was trying to use the following query:
Select result
from table
where
Mycolumn
BETWEEN @selectedDate AND Dateadd(s, -1, Dateadd(D, 1, @selectedDate))
However, this does not work, since @selectedDate is of type Date and a second can't be added to or removed from it.
Can someone help me with a working query?
Thanks.
It is my understanding that using:
convert(date, Mycolumn) = @selectedDate
is SARGable. It will use the index on Mycolumn (if one exists). This can easily be confirmed by using the execution plan.
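A minimal sketch of that check, assuming a table dbo.YourTable with an index on Mycolumn:
SET STATISTICS IO ON;

DECLARE @selectedDate date = '2021-07-23';

SELECT result
FROM dbo.YourTable
WHERE CONVERT(date, Mycolumn) = @selectedDate;
-- a SARGable predicate appears as an Index Seek (not a Scan)
-- on the Mycolumn index in the actual execution plan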
Alternatively, keep the column untouched and use a half-open range, which is always SARGable:
Select result
from table
where Mycolumn >= @selectedDate
  AND Mycolumn < Dateadd(D, 1, @selectedDate)
If you need to do these searches a lot, you could add a computed, persisted column that does the conversion to DATE, put an index on it, and then search on that column:
ALTER TABLE dbo.YourTable
ADD DateOnly AS CAST(MyColumn AS DATE) PERSISTED
Since it's persisted, it's (re-)calculated only when the MyColumn value changes, i.e. it's not a "hidden" call to a stored function. Since it's persisted, it can also be indexed and used just like any other regular column:
CREATE NONCLUSTERED INDEX IX01_YourTable_DateOnly ON dbo.YourTable(DateOnly)
and then do:
SELECT result FROM dbo.YourTable WHERE DateOnly = @SelectedDate
Since that additional info is stored in the table, you'll be using a bit more storage - so you're doing the classic "space vs. speed" trade-off; you need a bit more space, but you get more speed out of it.
