How to compare two JSON values in Snowflake: if the 2nd JSON has extra keys, insert those keys and their corresponding values into the 1st JSON, and if the 2nd JSON has fewer keys and values, delete the unmatched keys and values from the 1st JSON.
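One minimal sketch of how this could be expressed in Snowflake SQL, assuming the two JSON values sit in VARIANT columns json1 and json2 of a hypothetical table t with an id column (all names here are made up): build the result from the keys of the 2nd JSON, keeping the 1st JSON's value wherever the key exists in both.
-- hypothetical table t with an id column and two VARIANT columns json1 and json2
SELECT t.id,
       OBJECT_AGG(k.value::string,
                  COALESCE(GET(t.json1, k.value::string),
                           GET(t.json2, k.value::string))) AS merged_json
FROM t,
     LATERAL FLATTEN(input => OBJECT_KEYS(t.json2)) k
GROUP BY t.id;
Because only OBJECT_KEYS(json2) is iterated, keys present only in json2 fall back to json2's value, keys present in both keep json1's value, and keys present only in json1 are dropped.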
Related
I have a CSV file with two columns, for example column A and column B. Column B contains a string value like this: I am, doing good. So when I try to insert this data into a database, only the string I am is getting inserted. I just want to know what attribute I need to add to the process group so that I am, doing good will get inserted into the database.
The attached image shows the attributes in the current process group.
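Just as an illustration (the actual processor configuration isn't shown here, so this is an assumption), the usual reason only I am survives is that the comma inside the value is treated as a field separator; quoting the field in the CSV keeps it as a single value:
A,B
1,"I am, doing good"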
I have a CSV flowfile with a single record. I need to create its file name based on a couple of column values in the CSV file. Can you please let me know how we can do it using the column names only, not the positions of the columns, as the column positions may change. Example:
CSV File
Name , City, State, Country, Gender
John, Dallas, Texas, USA, M
File name should be John_USA.csv
I am trying the ExtractText processor and pulling the first data row using:
row = ^.*\r?\n(.*)
And then in the UpdateAttribute processor I am pulling the values from the columns using the below expression:
${row:getDelimitedField(1)}_${row:getDelimitedField(4)}.csv
But this uses the position of the column, not the column name. How can I build it using the column names instead of the column positions?
The way I would do it (maybe not the most efficient one):
Convert the CSV to JSON
Pass the content to attributes (so you can access the fields you want like a dictionary (key-value))
Update Attributes
Convert it back to CSV (this way you can control the schema and the position of the fields).
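For example, once the JSON step has turned the record into attributes named after the columns (Name, Country, and so on are assumed here to match the CSV header), the filename expression from the question no longer depends on positions:
${Name}_${Country}.csv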
I am trying to merge two datasets. The dataset on the left contains data about input variables. The dataset on the right contains data about the output variable. The two datasets have a common column that contains data of type string. I am trying to merge them into a single dataset, using the common column, in H2O Flow. When I call the merge operation I get the following error:
ERROR MESSAGE: DistributedException from /10.151.9.92:54321: 'Operation not allowed on string vector.'
H2O is running on my local machine.
This error indicates that the column you want to merge on is of type string. Merges are not allowed on string columns, so when you parse your datasets, set the data type of your merge column to enum. After that you should be able to do the merge.
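As a small sketch, here is the same idea using the h2o Python client instead of Flow (the file names and the join column name common_col are placeholders):
import h2o

h2o.init()

# parse the join column as enum rather than string in both frames
left = h2o.import_file("left.csv", col_types={"common_col": "enum"})
right = h2o.import_file("right.csv", col_types={"common_col": "enum"})

# merge on the shared column name
merged = left.merge(right)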
How can I sort my Redis cache?
The data:
SADD key '{"id":250,"store_id":3,"url_path":"\/blog\/testblog123123",
"status":"Published","title":"TestBlog123123",
'"description":"","image":null,"description_2":"",
"date":"2017-04-17","blogcategory":"Category 3"}'
Next I need to sort my KEY by id.
This works:
SORT key BY *->id DESC
... but only when:
id > 10
because Redis sorts only by the first number.
Maybe I should use another command to add, but I need JSON format.
You could use a sorted set from scratch?
ZADD key 250 '{"id":250,"store_id":3,"url_path":"\/blog\/testblog123123",
"status":"Published","title":"TestBlog123123",
'"description":"","image":null,"description_2":"",
"date":"2017-04-17","blogcategory":"Category 3"}'
I am also not sure why you would use a Set here at all, because uniqueness of a set element is only guaranteed for the whole JSON string. If your JSON serializer changes the order of two fields in the JSON dict, it will produce another string which is unique again, and you'll end up with a dangling old string. The same applies if you add more fields to the string.
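With the sorted set version, reading the entries back in id order no longer needs SORT at all:
ZREVRANGE key 0 -1
ZRANGEBYSCORE key 200 300
The first command returns all members ordered by id (the score) from highest to lowest; the second returns only the members whose id is between 200 and 300.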
I read that in order to populate binary values for an INSERT query you need to create a PreparedStatement and then use the setBytes() API to set the byte array as the binary parameter.
My problem is that when I do this I get "data exception: String data,right truncation".
I read that this might happen if we populate a value larger than the declared size. But here I am using a very small byte[] ("s".getBytes()).
I also tried setBinaryStream() but with the same result!
I also tried setting null value. Still I get the same error.
The length of the VARBINARY or LONGVARBINARY column must be enough to accept the data you are inserting. Your CREATE TABLE statement can declare VARBINARY as the type of the column, allowing up to 16MB for each data item.
If you use BINARY as the type, it means only one byte is allowed.
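A minimal sketch of the working combination (the table, column, and connection details are made up; the error message suggests HSQLDB, so that driver is assumed here, and the declared length just has to be at least as large as the byte array passed to setBytes()):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;

public class VarbinaryInsert {
    public static void main(String[] args) throws Exception {
        // in-memory HSQLDB database purely for the sketch
        try (Connection con = DriverManager.getConnection("jdbc:hsqldb:mem:test", "SA", "")) {
            try (Statement st = con.createStatement()) {
                // VARBINARY(16) accepts values up to 16 bytes; plain BINARY would mean BINARY(1)
                st.execute("CREATE TABLE t (id INT, payload VARBINARY(16))");
            }
            try (PreparedStatement ps = con.prepareStatement(
                    "INSERT INTO t (id, payload) VALUES (?, ?)")) {
                ps.setInt(1, 1);
                ps.setBytes(2, "s".getBytes()); // one byte, well within the declared length
                ps.executeUpdate();
            }
        }
    }
}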