Is it possible to do an "upsert" with an increment?
So if the query results in an insert, the counter is initialized to 1, and if it results in an update, the counter is incremented by 1:
table.insert({id: 1, counter: 1}, {conflict: function(id, oldVal, newVal) {
    return newVal.merge({counter: oldVal('counter').add(1)})
}})
The conflict function was introduced in v2.3.0, IIUC:
https://github.com/rethinkdb/rethinkdb/releases/tag/v2.3.0
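For reference, a runnable form of the query above might look like this (a sketch only; the database and table names and the connection conn are assumptions, not from the original post):

// Sketch: upsert-with-increment via the conflict function (RethinkDB >= 2.3.0).
// The first run inserts {id: 1, counter: 1}; later runs hit the conflict
// handler and bump the stored counter by 1.
var table = r.db('test').table('counters'); // hypothetical database/table

table.insert({id: 1, counter: 1}, {
    conflict: function(id, oldVal, newVal) {
        return newVal.merge({counter: oldVal('counter').add(1)});
    }
}).run(conn, function(err, result) {
    if (err) throw err;
    console.log(result); // inserted: 1 on the first run, replaced: 1 afterwards
});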
I have a table in DynamoDB with a composite key:
user_id (partition key)
timestamp (sort key)
I want to retrieve a group of items: I have a set of user_ids to fetch, but I also need the timestamp to be the MAX for every single user_id.
For example:
user_id   timestamp    attr
1         1614910613   value1
1         1614902527   value2
2         1614910683   value2
2         1614881311   value2
3         1614902527   value2
I want to retrieve the rows for user_id 1 and 2, with the max timestamp for each user. In this case, my result set should be:
user_id   timestamp    attr
1         1614910613   value1
2         1614910683   value2
I can do this now on an item-by-item basis using this key condition expression:
cond := expression.KeyAnd(
    expression.Key("user_id").Equal(expression.Value(userId)),
    expression.Key("timestamp").LessThanEqual(expression.Value(int32(time.Now().Unix()))),
)
expr, err := expression.NewBuilder().WithKeyCondition(cond).Build()
if err != nil {
    panic(err)
}

input := &dynamodb.QueryInput{
    ExpressionAttributeNames:  expr.Names(),
    ExpressionAttributeValues: expr.Values(),
    KeyConditionExpression:    expr.KeyCondition(),
    TableName:                 aws.String("td.notes"),
    Limit:                     aws.Int64(1),
    ScanIndexForward:          aws.Bool(false),
}
My problem is that I don't know how to pass a set of values for the user_id key instead of a single value. I took a look at BatchGetItem; I know I can do something like this:
mapOfAttrKeys := []map[string]*dynamodb.AttributeValue{}
for _, place := range userIDs {
    mapOfAttrKeys = append(mapOfAttrKeys, map[string]*dynamodb.AttributeValue{
        "user_id": &dynamodb.AttributeValue{
            S: aws.String(place),
        },
        "timestamp": &dynamodb.AttributeValue{
            N: aws.String(timestamp),
        },
    })
}

input := &dynamodb.BatchGetItemInput{
    RequestItems: map[string]*dynamodb.KeysAndAttributes{
        tableName: &dynamodb.KeysAndAttributes{
            Keys: mapOfAttrKeys,
        },
    },
}
But I cannot put a "less than" condition in the timestamp.
Thanks!
GetItem and BatchGetItem require the exact primary key.
You'll need to issue a separate Query() for each user you want to return.
The key to returning the max is, as you've found:
Limit: aws.Int64(1),
ScanIndexForward: aws.Bool(false),
You really shouldn't even need this:
expression.Key("timestamp").LessThanEqual(expression.Value(int32(time.Now().Unix())))
I have a method called getAllEmployeeNames() which returns a pageable result.
Let's say the page size is 1 and we start with page 0:
Pageable pageable = PageRequest.of(0, 1);
Page<String> page = service.getAllEmployeeNames(pageable);
while (true) {
    for (String name : page.getContent()) {
        // check whether the employee needs to be deleted or not based on certain conditions
        boolean isDelete = service.checkEmployeeToBeDeleted();
        if (isDelete) {
            EmployeeEntity entity = service.findByName(name);
            service.delete(entity);
        }
    }
    if (!page.hasNext()) {
        break;
    }
    pageable = page.nextPageable();
    page = service.getAllEmployeeNames(pageable);
}
In this scenario not all employees are deleted; only those matching the condition will be deleted, and the page size is 1.
Let's say there are 6 employees in total:

emp   pageable (pageNumber, pageSize)
1     (0, 1)
2     (1, 1)
3     (2, 1)
4     (3, 1)
5     (4, 1)
6     (5, 1)
When emp 1 gets deleted, the pages shift to:

emp   pageable (pageNumber, pageSize)
2     (0, 1)
3     (1, 1)
4     (2, 1)
5     (3, 1)
6     (4, 1)

But since we advance with page.nextPageable(), the next request will be (1, 1). So on the next fetch it will pick emp 3, and emp 2 will be missed.
I had a similar problem. The problem is that you are doing the 'search' and the 'deletion' in one step while you are iterating. So the deletion modifies the search, but you are already one step further into your initial search. (I think this is the main problem.)
My solution was to first search ALL objects that need to be deleted and put them into a list, and after the search delete those objects. So your code could be something like:
Pageable pageable = PageRequest.of(0, 1);
Page<String> page = service.getAllEmployeeNames(pageable);
List<EmployeeEntity> deleteList = new ArrayList<>();
while (true) {
    for (String name : page.getContent()) {
        // check whether the employee needs to be deleted or not based on certain conditions
        boolean isDelete = service.checkEmployeeToBeDeleted();
        if (isDelete) {
            EmployeeEntity entity = service.findByName(name);
            deleteList.add(entity);
        }
    }
    if (!page.hasNext()) {
        // here iterate over deleteList and call service.delete(entity) for each entry
        break;
    }
    pageable = page.nextPageable();
    page = service.getAllEmployeeNames(pageable);
}
You only need to call nextPageable() when you did not delete an employee. Just add a condition like this:
if (!isDelete) {
pageable = page.nextPageable();
}
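Put together with the loop from the question, that could look roughly like this (a sketch only, reusing the question's hypothetical service methods):

Pageable pageable = PageRequest.of(0, 1);
while (true) {
    Page<String> page = service.getAllEmployeeNames(pageable);
    boolean deletedAny = false;
    for (String name : page.getContent()) {
        if (service.checkEmployeeToBeDeleted()) {
            EmployeeEntity entity = service.findByName(name);
            service.delete(entity);
            deletedAny = true;
        }
    }
    if (!page.hasNext()) {
        break;
    }
    // only advance when nothing was removed from the current page;
    // after a deletion the same page index now holds the next, unseen employee
    if (!deletedAny) {
        pageable = page.nextPageable();
    }
}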
I have two lists in Elixir. One list (list1) has values which get consumed by another list (list2). So I need to iterate over list2 and update values in list1 as well as list2.
list1 = [
  %{
    reg_no: 10,
    to_assign: 100,
    allocated: 50
  },
  %{
    reg_no: 11,
    to_assign: 100,
    allocated: 30
  },
  %{
    reg_no: 12,
    to_assign: 100,
    allocated: 20
  }
]

list2 = [
  %{
    student: student1,
    quantity: 60,
    reg_nos: [reg_no_10, reg_no_11]
  },
  %{
    student: student2,
    quantity: 40,
    reg_nos: [reg_no_11, reg_no_12]
  },
  %{
    student: student3,
    quantity: 30,
    reg_nos: nil
  }
]
I need to assign values from list1 to the quantity field of list2 until the quantity is fulfilled, e.g. student1's quantity is 60, which will need reg_no 10 and reg_no 11.
With Enum.map I cannot pass the updated list1 to the 2nd iteration over list2 and assign the value reg_nos: [reg_no_11, reg_no_12] for student2.
So, my question is: how can I send the updated list1 to the 2nd iteration over list2?
I am using recursion to get the quantity correct for each element in list2. But should I also use recursion just to send the updated list1 through list2? With this approach there will be 2 nested recursions. Is that a good approach?
If I understand your question correctly, you want to change values in a given list x, based on a list of values in another list y.
What you describe is not possible in a functional language due to immutability, but you can use a reduce operation where x is the state or so-called "accumulator".
Below is an example where I have a ledger with bank accounts, and a list with transactions. If I want to update the ledger based on the transactions I need to reduce over the transactions and update the ledger per transaction, and pass the updated ledger on to the next transaction. This is the problem you are seeing as well.
As you can see in the example, in contrast to map you have a second parameter in the user-defined function (ledger). This is the "state" you build up while traversing the list of transactions. Each time you process a transaction you have a chance to return a modified version of the state. This state is then used to process the next transaction, which in turn can change it as well.
The final result of a reduce call is the accumulator. In this case, the updated ledger.
def example do
  # A ledger, where we assume the id's are unique!
  ledger = [%{:id => 1, :amount => 100}, %{:id => 2, :amount => 50}]
  transactions = [{:transaction, 1, 2, 10}]

  transactions
  |> Enum.reduce(ledger, fn transaction, ledger ->
    {:transaction, from, to, amount} = transaction

    # Update the ledger.
    Enum.map(ledger, fn entry ->
      cond do
        entry.id == from -> %{entry | amount: entry.amount - amount}
        entry.id == to -> %{entry | amount: entry.amount + amount}
        # leave unrelated entries untouched
        true -> entry
      end
    end)
  end)
end
This pertains to my previous query.
I am able to get batches of records from ActiveRecord using the following query:
Client.offset(15 * iteration).first(15)
I'm running into an issue whereby a user inputs a new record.
Say there are 100 records, and batches are of 15.
The first 6 iterations (90 records) and the last iteration (10 records) work. However, if a new entry comes in (making it a total of 101 records), the program fails as it will either:
a) If the iteration counter is set to increment, it will query for records beyond the range of 101, resulting in nothing being returned.
b) If the iteration counter is modified to increment only when an entire batch of 15 items is complete, it will repeat the last 14 items plus the 1 new item.
How do I go about getting newly posted records?
Dart code:
_getData() async {
  if (!isPerformingRequest) {
    setState(() {
      isPerformingRequest = true;
    });
    // _var represents the iteration counter - 0, 1, 2 ...
    // fh is a helper method returning records in List<String>
    List<String> newEntries = await fh.feed(_var);
    // Count number of returned records.
    newEntries.forEach((i) => _count++);
    if (newEntries.isEmpty) {
      ..
    }
  } else {
    // if number of records is 10, increment _var to go to the next batch of 10.
    if (_count == 10) {
      _var++;
      _count = 0;
    }
    // but if count is not 10, stay with the same batch
    // (but previously counted/displayed records will be shown again)
  }
  setState(() {
    items.addAll(newEntries);
    isPerformingRequest = false;
  });
}
How to make a rethinkdb atomic update if document exists, insert otherwise?
I want to do something like:
var tab = r.db('agflow').table('test');
r.expr([{id: 1, n: 0, x: 11}, {id: 2, n: 0, x: 12}]).forEach(function(row){
    var _id = row('id');
    return r.branch(
        tab.get(_id).eq(null),  // 1
        tab.insert(row),        // 2
        tab.get(_id).update(function(row2){return {n: row2('n').add(row('n'))}})  // 3
    );
})
However this is not fully atomic, because between the time when we check if document exists (1) and inserting it (2) some other thread may insert it.
How to make this query atomic?
I think the solution is passing conflict="update" to the insert method.
Also see the RethinkDB documentation on insert.
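For example (a sketch only, not from the original answer; conn is assumed to be an open connection): conflict: "update" merges the new document's fields over the existing one, which makes the write atomic, though on its own it does not add the old and new n together:

// Sketch: insert with the conflict option instead of a separate existence check.
var tab = r.db('agflow').table('test');

tab.insert(
    {id: 1, n: 5, x: 11},
    {conflict: 'update'}  // insert if the id is new, otherwise merge the new fields in
).run(conn).then(function(result) {
    console.log(result);  // inserted: 1 the first time, replaced/unchanged on conflicts
});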
To make the inner part atomic, you can use replace:
tab.get(_id).replace(function(row2){
    return r.expr({id: _id, n: row2('n').add(row('n'))})
        .default(row)
})
Using replace is the only way at the moment to perform fully atomic upserts with non-trivial update operations.
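Wired into the forEach from the question, the whole upsert might look like this (a sketch, not from the original answer; conn is assumed to be an open connection):

var tab = r.db('agflow').table('test');

r.expr([{id: 1, n: 0, x: 11}, {id: 2, n: 0, x: 12}]).forEach(function(row) {
    return tab.get(row('id')).replace(function(row2) {
        // row2 is null when the document does not exist yet;
        // .default(row) then falls back to inserting the new row as-is.
        return r.expr({id: row('id'), n: row2('n').add(row('n'))})
            .default(row);
    });
}).run(conn).then(function(result) {
    console.log(result);
});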
This is a standard database operation called an "upsert".
Does RethinkDB support upserts?
Another question from Sergio Tulentsev. The first public release didn't include support for upserts, but now it can be done by passing an extra flag to insert:
r.table('marvel').insert({
    superhero: 'Iron Man',
    superpower: 'Arc Reactor'
}, {upsert: true}).run()
When set to true, the new document from insert will overwrite the existing one.
cite: Answers to common questions about RethinkDB
OK, I figured out how to do it.
The solution is to first try the insert, and if it fails, do an update:
var tab = r.db('agflow').table('test');
tab.delete();
r.expr([{id: 1, n: 0, x: 11}, {id: 2, n: 0, x: 12}]).forEach(function(row){
    var _id = row('id');
    var res = tab.insert(row);
    return r.branch(
        res('inserted').eq(1),
        res,
        tab.get(_id).update(function(row2){return {n: row2('n').add(1)}})
    );
})