I have a table with about 2M records with the following structure:
{
  "fields": "values",
  "transactions": [
    {
      "from": "...",
      "value": "..."
    }
  ]
}
Now I want to split the data in the transactions field out into separate records in another table. Is there a native way to do this in RethinkDB?
I was able to do it using concatMap:
r.table('transactions').insert(
  r.table('records').concatMap(function (doc) { return doc('transactions'); })
)
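Note that the extracted rows lose any link back to their parent record. If that link matters, one variation (a sketch only; it assumes the parent table's primary key is id and introduces a hypothetical recordId field) is to merge the parent key into each row:
r.table('transactions').insert(
  r.table('records').concatMap(function (doc) {
    // Tag each extracted transaction with its parent record's primary key
    return doc('transactions').map(function (t) {
      return t.merge({ recordId: doc('id') });
    });
  })
)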
I am trying to use Liquibase to perform DB changes for each deployed version (I'm using an Oracle DB).
When I insert new data into a table, I use a sequence to populate the ID field. But it is also important for me to be able to roll back this insert, meaning delete the newly created row with the ID created from the next value of the sequence.
My question is how to write a rollback for the changeSet that deletes the new row using the ID created from the sequence. (I can't use the sequence itself because its value can change many times before the rollback is performed.)
For example:
{
"changeSet": {
"id": 1,
"author": "somebody",
"changes": [
{
"insert": {
"tableName": "EMPLOYEES".
"columns": [
{
"column": {
"name": "id",
"valueSequenceNext": "EMPLOYEES_SEQ"
}
},
{
"column": {
"name": "name",
"value": "john dou"
}
}
]
}
}
],
"rollback": "here rollback the insert using the sequence"
}
}
If you may have performed many new inserts using that sequence before the rollback, then it's not possible to roll it back automatically.
A quick, not ideal workaround:
what about creating an additional TEMP_TABLE to store the created ID?
columns: CREATED_ID, CHANGESET_ID, ORDINAL_NUMBER
write a rollback that DELETEs the row whose ID is the CREATED_ID from TEMP_TABLE with MAX(ORDINAL_NUMBER), as sketched below?
I can't see any other options; Liquibase itself doesn't store any info about inserted records.
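To make the workaround concrete, here is a rough sketch of what the rollback string could look like, assuming each insert changeSet also records the new ID into the hypothetical TEMP_TABLE (all names are illustrative, not tested):
"rollback": "DELETE FROM EMPLOYEES WHERE ID = (SELECT CREATED_ID FROM TEMP_TABLE WHERE ORDINAL_NUMBER = (SELECT MAX(ORDINAL_NUMBER) FROM TEMP_TABLE)); DELETE FROM TEMP_TABLE WHERE ORDINAL_NUMBER = (SELECT MAX(ORDINAL_NUMBER) FROM TEMP_TABLE)"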
How can I use the jsonb query functions to get matching TrackingDetails from the whole table, when the jsonb field in my table contains JSON in this format? For example, I have an order_events table with a column called event_data that will contain this JSON:
{
"Responses": [
{
"Response": {
"TrackingDetails": {
"IntegrationId": "IntegrationId",
"MessagePart": 0,
"MessageTotal": 0,
"MessageGroupId": "MessageGroupId",
"SequenceNumber": "SequenceNumber",
"InterfaceRecordId": "InterfaceRecordId",
"SalesOrderNumber": "SalesOrderNumber",
"SalesOrderReference": "SalesOrderReference",
"DispatchNumber": "DispatchNumber",
"Courier": "Courier",
"TrackingNumber": "TrackingNumber",
"TrackingUrl": "TrackingUrl"
}
}
}
]
}
I would like to get the TrackingDetails as a JSON node, looked up by TrackingNumber, across all rows in the table; multiple rows may match. Thanks for any help.
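For reference, a sketch of one possible approach (assuming the order_events/event_data names above and a placeholder tracking number) is to unnest the Responses array with jsonb_array_elements and filter on the nested field:
-- Sketch only: unnest the Responses array, then filter on TrackingNumber
SELECT elem -> 'Response' -> 'TrackingDetails' AS tracking_details
FROM order_events,
     jsonb_array_elements(event_data -> 'Responses') AS elem
WHERE elem -> 'Response' -> 'TrackingDetails' ->> 'TrackingNumber' = 'some-tracking-number';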
My index looks like this:
"_source": {
"ProductName": "Random Product Name",
"Views": {
"Washington": [
{ "4nce5bbszjfppltvc": "2018-04-07T18:25:16.160Z" },
{ "4nce5bba8jfpowm4i": "2018-04-07T18:05:39.714Z" },
{ "4nce5bbszjfppltvc": "2018-04-07T18:36:23.928Z" },
]
}
}
I am trying to count the number of unique objects in Views.Washington.
In this case, the result would be 2, since two of the objects share the same key name (the first and third objects in the array).
Obviously, my first thought was to use aggregations, but I am not sure how to use them with nested objects like these.
Can this be done with normal aggregations?
Will I need to use a script?
Yes, this can be done with aggregations: https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-bucket-nested-aggregation.html
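As a sketch of how that could look: the dynamic key names in the current documents make direct aggregation awkward, but if each entry were remodeled as a nested object with explicit fields (hypothetical viewerId and viewedAt fields, with Views.Washington mapped as nested), a cardinality sub-aggregation inside a nested aggregation would return the unique count:
{
  "size": 0,
  "aggs": {
    "washington": {
      "nested": { "path": "Views.Washington" },
      "aggs": {
        "unique_viewers": {
          "cardinality": { "field": "Views.Washington.viewerId" }
        }
      }
    }
  }
}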
Hi, I'm using Spring Data in my project and I'm trying to group by two fields. Here's the query:
@Query("SELECT obj FROM Agence obj GROUP BY obj.secteur.nomSecteur, obj.nomAgence")
Iterable<Agence> getSecteurAgenceByPc();
but it doesn't work for me. What I want is this result:
-Safi
-CTM
CZC1448YZN
2UA13817KT
-Rabat
-CTM
CZC1349G1B
2UA0490SVR
-Agdal
G3M4NOJ
-Essaouira
-CTM
CZC1221B85
-Gare Routiere Municipale
CZC145YL3
What I get instead is:
{
"status": 0,
"data":
[
{
"secteur": "Safi",
"agence": "CTM"
},
{
"secteur": "Safi",
"agence": "Dep"
},
{
"secteur": "Rabat",
"agence": "Agdal"
},
{
"secteur": "Rabat",
"agence": "CTM"
},
{
"secteur": "Essaouira",
"agence": "CTM"
},
{
"secteur": "Essaouira",
"agence": "Gare Routiere Municipale"
}
]
}
What you want is not possible with JPQL.
What does Group By do?
It combines all rows that are identical in the columns of the GROUP BY clause into one row. Since it combines multiple rows into one, data in the other columns can only be present in some aggregated fashion: for example, you can include MIN/MAX or AVG values, but never the original values.
Also, the result will always be a table, never a tree.
Also note: there is no duplicated data. Every combination of secteur and agence appears exactly once.
If you want a tree structure, you have to write some Java code for that.
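For example, a minimal sketch (assuming conventional getter names getSecteur().getNomSecteur() and getNomAgence() on the Agence entity, with java.util.stream.Collectors imported):
// Fetch the flat rows, then group them into a secteur -> agences tree
List<Agence> agences = agenceRepository.findAll();
Map<String, List<String>> tree = agences.stream()
    .collect(Collectors.groupingBy(
        a -> a.getSecteur().getNomSecteur(),
        Collectors.mapping(Agence::getNomAgence, Collectors.toList())));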
I am trying to add heterogeneous data (i.e. of different "types") to Elasticsearch. Each (top-level) object contains a user's settings for an application. A simplified example is:
{
  "name": "test",
  "settings": [
    {
      "key": "color",
      "value": "blue"
    },
    {
      "key": "isTestingMode",
      "value": true
    },
    {
      "key": "visibleColumns",
      "value": [
        "column1",
        "column3",
        "column4"
      ]
    },
    ...
  ]
}
When I try to add this, the POST fails with a MapperParsingException. From searching around, it seems this is because the value field has different types.
Is there any way to just store arbitrary data like this?
This is not possible.
Mapping is per field, and mapping is not array-aware.
This means that you can keep settings.value as a string or an array, but not both.
An easy tweak would be to define every value as an array:
{
  "name": "test",
  "settings": [
    {
      "key": "color",
      "value": [ "blue" ]
    },
    {
      "key": "isTestingMode",
      "value": [ true ]
    },
    {
      "key": "visibleColumns",
      "value": [
        "column1",
        "column3",
        "column4"
      ]
    },
    ...
  ]
}
If that is not acceptable, then another idea would be to apply a source transform that performs this normalization on the settings.value field before it is indexed. This way, the source is kept as it is AND you will get what you want.
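If source transforms are not available in your Elasticsearch version, the same normalization can be done in application code before indexing; a rough JavaScript sketch (normalizeSettings is a hypothetical helper, not part of any client library):
// Wrap every settings value in an array of strings so the mapped type is uniform
function normalizeSettings(doc) {
  doc.settings = doc.settings.map(function (s) {
    var values = Array.isArray(s.value) ? s.value : [s.value];
    return { key: s.key, value: values.map(String) };
  });
  return doc;
}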