I want to insert a new document when an update fails - is there any way to do this? Currently RethinkDB only lets me update a document on insert failure, via {upsert: true} in the insert command.
You can use replace with a branch and an explicit merge.
replace is like update, except it completely replaces a document rather than merging with it. The following are equivalent (in Ruby code):
table.get(id).update{|row| {a: row['a']+1}}
table.get(id).replace{|row| row.merge({a: row['a']+1})}
So if you want to do an "update", falling back to an insert when the row doesn't exist, you could do this:
table.get(id).replace {|row|
  r.branch(
    row.eq(nil),
    INSERT_OBJECT,
    row.merge(UPDATE_OBJECT))
}
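For instance, a minimal sketch with concrete objects substituted in (the table name, id, and fields here are hypothetical, not from your schema):

# Hypothetical example: insert a fresh document when none exists,
# otherwise merge in an incremented counter.
r.table('visits').get('alice').replace {|row|
  r.branch(
    row.eq(nil),
    {id: 'alice', count: 1},
    row.merge({count: row['count'] + 1}))
}.run(conn)

Because the whole branch runs inside a single replace on one document, the read-modify-write is atomic.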
I want to create a new table based on an existing one that filters for Warehouse=2 and "drops" the columns "Price" and "Cost".
I have managed to apply the filter in the first step using:
FILTER(oldtable;oldtable[Warehouse]=2)
and then in the next step could create another table that only selects the required columns using:
newtable2=SELECTCOLUMNS("newtable1";"Articlename";...)
But I want to be able to combine these two functions and create the table straight away.
This is straightforward, because your first step returns a table, which you can use directly in your second statement.
newTable =
SELECTCOLUMNS(
    FILTER(warehouse; warehouse[Warehouse] = 2);
    "ArticleName"; warehouse[ArticleName];
    "AmountSold"; warehouse[AmountSold];
    "WareHouse"; warehouse[Warehouse]
)
If you want to keep things readable, you can also use variables and RETURN:
newTable =
VAR filteredTable = FILTER(warehouse; warehouse[Warehouse] = 2)
RETURN
    SELECTCOLUMNS(
        filteredTable;
        "ArticleName"; warehouse[ArticleName];
        "AmountSold"; warehouse[AmountSold];
        "WareHouse"; warehouse[Warehouse]
    )
Can someone please help me execute a bulk insert with the header "Content-Type: application/x-ndjson" in elastic4s? I have tried this:
client.execute {
  bulk(
    indexInto("cars" / "car").source(getCarsFromJson)
  ).refresh(RefreshPolicy.WaitFor)
}.await
It works for one element in the JSON, but when I add another element, no elements are added to Elasticsearch.
Are you sure you are using the right syntax? Shouldn't it say
"cars/car"
instead of
"cars" / "car"?
The source method on indexInto does not support multiple JSON objects, because that would be trying to put multiple documents inside a single document insert.
Instead, you will need to take your JSON, parse it into individual objects, and iterate over them, adding an insert for each one.
Something like the following:
def getCarsFromJson: Seq[String] = ??? // must return a sequence of json strings

val inserts = getCarsFromJson.map { car => indexInto("cars" / "car").source(car) }

client.execute {
  bulk(inserts: _*).refresh(RefreshPolicy.WaitFor)
}
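If your input really is NDJSON (one JSON object per line), a minimal sketch of getCarsFromJson could simply split the payload into lines; the file name here is an assumption:

import scala.io.Source

// Hypothetical helper: read an NDJSON file and return one raw JSON
// string per non-empty line, i.e. one string per document.
def getCarsFromJson: Seq[String] =
  Source.fromFile("cars.ndjson").getLines().filter(_.trim.nonEmpty).toSeq

Each string then becomes the source of its own indexInto request, which is the shape bulk expects.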
After looking at some SO questions and issues on the RethinkDB GitHub, I have failed to come to a clear conclusion: is an atomic upsert possible?
Essentially I would like to perform the same operation as Redis's ZINCRBY:
If member does not exist in the sorted set, it is added with increment as its score (as if its previous score was 0.0). If key does not exist, a new sorted set with the specified member as its sole member is created.
The current implementation appears to differ from almost all databases that I have used, with the data being replaced or inserted rather than updated. This is a simple use case: updating the last visit, the number of clicks, or a product quantity. So I must be missing something very obvious, because I cannot see a simple way to do this.
Yes, it is possible. After get on the key, perform an atomic replace. Something like this might work:
function set_or_increment_score(player, points) {
  return r.table('scores').get(player).replace(row => ({
    id: player,
    score: r.branch(
      row.eq(null),
      points,
      row('score').add(points))
  }));
}
It has the following behaviour:
> set_or_increment_score("alice", 1).run(conn)
{ inserted: 1 }
> set_or_increment_score("alice", 2).run(conn)
{ replaced: 1 }
It works because get returns null when the document doesn't exist, and a replace on a non-existing document turns into an insert. See the documentation for replace.
So I ended up using the following code to work around the no-update issue.
r.db("test").table("t").insert(
{id:"A", type:"player", species:"warrior", score:0, xp:0, armor:0},
{conflict: function(id, oldDoc, newDoc) {
return newDoc.merge(oldDoc).merge(
{armor: oldDoc("armor").add(1)});
}
}
)
Do you think this is more readable/elegant or do you see any issues with the code compared to your sample?
I have a table where each row has a JSON structure as follows that I'm trying to index in a PostgreSQL database, and I was wondering what the best way to do it is:
{
  "name": "Mr. Jones",
  "wish_list": [
    {"present_name": "Counting Crows",
     "present_link": "www.amazon.com"},
    {"present_name": "Justin Bieber",
     "present_link": "www.amazon.com"}
  ]
}
I'd like to put an index on each present_name within the wish_list array. The goal here is that I'd like to be able to find each row where the person wants a particular gift through an index.
I've been reading on how to create an index on a JSON which makes sense. The problem I'm having is creating an index on each element of an array within a JSON object.
The best guess I have is using something like the json_array_elements function and creating an index on each item returned through that.
Thanks for a push in the right direction!
Please check the JSONB Indexing section in the Postgres documentation.
For your case, the index definition may be the following:
CREATE INDEX idx_gin_wishlist ON your_table USING gin ((jsonb_column -> 'wish_list'));
It will store copies of every key and value inside wish_list, but you should be careful to write queries that can actually hit the index. You should use the @> (containment) operator:
SELECT jsonb_column->'wish_list'
FROM your_table WHERE jsonb_column->'wish_list' #> '[{"present_link": "www.amazon.com", "present_name": "Counting Crows"}]';
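Since your goal is to find rows by present name alone, note that @> matches partial objects, so present_link can be omitted. A sketch against the same hypothetical table and column names:

-- Find each person whose wish_list contains a present with this
-- name; a partial object is enough for the containment check.
SELECT jsonb_column ->> 'name'
FROM your_table
WHERE jsonb_column -> 'wish_list' @> '[{"present_name": "Counting Crows"}]';

If @> is the only operator you need, declaring the index with the jsonb_path_ops operator class (USING gin ((jsonb_column -> 'wish_list') jsonb_path_ops)) typically makes it smaller and faster.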
It is strongly suggested to check these existing answers:
How to query for array elements inside JSON type
Index for finding an element in a JSON array
How can you run a Sequel migration that updates a newly added column with a value from the row?
The Sequel documentation shows how you can update the column with a static value:
self[:artists].update(:location=>'Sacramento')
What I need to do is update the new column with the value of the id column, something like:
self[:artists].each do |artist|
  artist.update(:location => artist[:id])
end
But the above doesn't work, and I have been unable to figure out how to make it work.
Thanks!
artist in your loop is a Hash, so you are calling Hash#update, which just updates the Hash instance, it doesn't modify the database. That's why your loop doesn't appear to do anything.
I could explain how to make the loop work (using all instead of each and updating a dataset restricted to the matching primary key value), but since you are just assigning the value of one column to the value of another column for all rows, you can just do:
self[:artists].update(:location=>:id)
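In context, a minimal sketch of the whole migration might look like this (the column type and names are assumptions based on the question):

Sequel.migration do
  up do
    # Hypothetical: add the new column, then populate it from id.
    add_column :artists, :location, Integer
    # A symbol on the right-hand side is a column reference, so this
    # runs as a single server-side UPDATE, with no row iteration.
    self[:artists].update(:location => :id)
  end
  down do
    drop_column :artists, :location
  end
end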
If you need to update all rows of a table, because it is a new column that needs to be populated:
artists = DB[:artists]
artists.update(:column_name => 'new value')
Or if you need to update only a single row in your migration file, you can:
artists = DB[:artists]
artists.where(:id => 1).update(:column_name1 => 'new value1', :column_name2 => "other")