Upsert hash in MongoDB. Remove unused keys - ruby

From the Mongo documentation:
If you specify multiple field-value pairs, $set will update or create
each field.
I have a Mongoid document like this:
class MyCounter
  include Mongoid::Document

  field :date, type: Date
  field :properties, type: Hash
end
and when I try to change the properties like this:
hash = { 'properties.0' => 50, 'properties.1' => 100 }
MyCounter.where(something).find_and_modify(
  { '$set' => hash }, { upsert: true, new: true }
)
it keeps the old keys on the property hash.
What's the correct way to completely replace (and create a new document if it doesn't exist) a hash in a document?
EDIT
I'm currently doing this, which is dumb:
MyCounter.where(
  date: date
).find_and_modify(
  { '$unset' => { properties: nil } }, { upsert: true, new: true }
)
MyCounter.where(
  date: date
).find_and_modify(
  { '$set' => hash }, { upsert: true, new: true }
)

Just don't use $set. By passing only field: value pairs, the whole document is replaced (every field except _id).
This should work fine:
MyCounter.where(
  date: date
).find_and_modify(
  hash, { upsert: true, new: true }
)
http://docs.mongodb.org/manual/reference/method/db.collection.update/#example-update-replace-fields
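One caveat, not spelled out in the answer above but implied by replacement semantics: a replacement document swaps out every field except _id, and dotted keys like 'properties.0' are only meaningful to operators such as $set. So for a full replace, the hash should be nested and should re-include any field you want to keep, such as date. A minimal sketch, assuming the same model as above:
hash = { 'date' => date, 'properties' => { '0' => 50, '1' => 100 } }
MyCounter.where(
  date: date
).find_and_modify(
  hash, { upsert: true, new: true }
)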

Related

Use one of two incoming observable streams to filter data in Angular

I have the FormControl value (it's a multi-checkbox list); the value is an array of true/false: [true, false, true, true, ...]. This is one of many lists.
I also have the data list, a collection of objects with the structure {id: number, desc: string, disabled: boolean, selected: boolean}.
I need to retrieve the ids matching true by index, and set an observable.
I have the below code:
valueChanged(e: any, name: string, key: string, valuesArray: string) {
  this.hasChanged = true;
  from(name).pipe(
    debounceTime(200),
    withLatestFrom(this[name].value), // this[name] is the form array, gives [true, false, false, ...]
    mergeMap(values => forkJoin(
      of(values),
      this[valuesArray].find(val => val.name === name).data
    )),
    mergeMap(([values, data]) => {
      flatMap((val, i) => val ? data[i].id : null),
      filter(id => id !== null),
      map(ids => {
        this.selectedFilters[key] = ids;
        switch (key) {
          case 'groupId':
            this.curGroup.next(ids);
            break;
        }
      });
    })
  );
}
I need help on the flatMap line, where I want to use values: for each value (true/false), if val[i] is true, include data[i].id in the output. I want the id array of the true values: [1, 2, 4, 5, ...]. I get the error:
Argument of type '([values, data]: [[string, unknown], unknown]) => void' is not assignable to parameter of type '(value: [[string, unknown], unknown], index: number) => ObservableInput'.
Type 'void' is not assignable to type 'ObservableInput'.ts(2345)
Need help on how to solve this issue. Thanks in advance.
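Setting the RxJS plumbing aside, the transformation being asked for (keep data[i].id wherever values[i] is true) is an index-matched select. A minimal sketch of just that step, in Ruby with hypothetical data, since this page's examples are otherwise Ruby:
values = [true, false, true, true]
data   = [{ id: 1 }, { id: 2 }, { id: 3 }, { id: 4 }]

# keep each id whose flag at the same index is true
ids = values.each_index.select { |i| values[i] }.map { |i| data[i][:id] }
# => [1, 3, 4]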

How to filter the documents based on created_date in elasticsearch index?

I need to get the documents which were created today.
Say A is my model; the following is the mapping and indexed columns:
def as_indexed_json(options={})
  self.as_json({only: [:id, :user_id, :user_agent, :browser, :created_at]})
end

settings :index => { :number_of_shards => 1 } do
  mapping :dynamic => 'false' do
    indexes :id, :type => 'integer'
    indexes :user_agent, :type => 'string'
    indexes :created_at, :type => 'date', :format => "dd/MM/yyyy"
  end
end
I could get the documents within a date range using the range filter and the elasticsearch-model gem.
query = "{\"query\": {\"range\" : {\"created_at\" : {\"gte\": \"05/01/2018\", \"lte\": \"05/02/2018\", \"format\": \"dd/MM/yyyy||dd/MM/yyyy\"} } },\"aggs\": {\"group_by_browser\": {\"terms\": {\"field\": \"browser\"} } } }"
response = A.__elasticsearch__.search query
response.response["aggregations"]["group_by_browser"]["buckets"].map{|k| [k["key"],k["doc_count"]]}
However, I have not been able to find a filter which will match documents with today's date.
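One possible approach, not from the original post: reuse the same range filter but compute today's date at query time. A sketch, assuming the dd/MM/yyyy mapping shown above (day resolution means gte/lte on the same day matches today's documents):
today = Date.today.strftime("%d/%m/%Y")
query = {
  query: { range: { created_at: { gte: today, lte: today, format: "dd/MM/yyyy" } } },
  aggs: { group_by_browser: { terms: { field: "browser" } } }
}
response = A.__elasticsearch__.search query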

Complex secondary index operations

I can create a secondary index for this table:
{
  contact: [
    'example@example.com'
  ]
}
Like this: indexCreate('contact', {multi: true})
But can I create index for this:
{
  contact: [
    {
      type: 'email',
      main: true,
      value: 'andros705@gmail.com'
    },
    {
      type: 'phone',
      value: '0735521632'
    }
  ]
}
The secondary index should only match objects whose type is 'email' and main is set to true.
Here's how you might create such an index:
table.indexCreate(
  'email',
  row => row('contact').filter({type: 'email', main: true})('value'),
  {multi: true})
This works by using a multi-index. When the multi: true argument is passed to indexCreate, the index function is expected to return an array instead of a single value. Every element in that array can be used to look up the document in the index (using getAll or between). If the array is empty, the document will not show up in the index.
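As a usage sketch (assuming the RethinkDB Ruby driver, in keeping with the rest of this page; the 'contacts' table name and conn are hypothetical):
require 'rethinkdb'
include RethinkDB::Shortcuts

# conn is an open connection from r.connect; this looks up documents whose
# main email is the given value via the 'email' index created above
r.table('contacts').get_all('andros705@gmail.com', index: 'email').run(conn)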

Create a Model Scope using hash attribute

I am using the mongoid-history gem and Mongoid in Ruby. I am linking the Mongo history to a model called SocialPost, so I can do something like:
history = current_user.social_posts.history_tracks
Now I need to filter this 'history' with a scope or a method that filters on the 'association_chain' attribute from the HistoryTracker model, but 'history_tracks' attributes are made this way:
<HistoryTracker _id: 57bdc1cb65e59325ae000001, created_at: 2016-08-24 15:48:27 UTC, updated_at: 2016-08-24 15:48:27 UTC, association_chain: [{"name"=>"SocialPost", "id"=>BSON::ObjectId('57ac8b0f65e5930944000000')}], modified: {"facebook_likes_count"=>2594213, "tweeter_followers_count"=>0}, original: {}, version: 1, action: "update", scope: "social_post", modifier_id: nil>
So, how can I create a search in my HistoryTracker model that allows me to search for a specific group of ids inside association_chain? Something like this:
HistoryTracker.where(:association_chain.in => {"name"=>"SocialPost", "id"=>[GROUPS OF IDS TO SEARCH]})
UPDATE using $elemMatch
# test case with multiple social_ids: empty result
HistoryTracker.any_in({:association_chain.elem_match => [{name: 'SocialPost', :id.in => social_ids}]})
# test case with one social_id: match
HistoryTracker.any_in({:association_chain.elem_match => [{name: 'SocialPost', :id => social_ids.first}]})
I already found a possible solution. The elemMatch should be constructed with the ids you need to collect, each grouped with the extra hash elements, so this is what I did:
social_ids = [BSON::ObjectId('...1'), BSON::ObjectId('...2'), BSON::ObjectId('...3')]
# rebuild the ids with the extra hash elements
a = []
social_ids.each do |id_bson|
  a << {name: 'SocialPost', id: id_bson}
end
And now you create the query:
HistoryTracker.any_in({:association_chain.elem_match => a})
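The array construction and the query can also be collapsed into one expression (same behavior as the loop above, just shorter):
criteria = social_ids.map { |id_bson| { name: 'SocialPost', id: id_bson } }
HistoryTracker.any_in(:association_chain.elem_match => criteria)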

Logstash: Pass Field Content to Filter

I'm writing a filter and I need to pass a field to it, but I don't know how. I can only specify the name of the field rather than its content:
filter {
  if [path] =~ "access" {
    :
    enhance {
      ref => uri # or "uri", [uri]
    }
  }
}
All of these set ref to the string "uri". The config line in the filter is:
config :ref, :validate => :string, :required => true
Of course I can do event[ref] in the filter, but passing the content seems more elegant. What do I need to do?
If I understand your question, you want to know how to use ref in the Ruby code that you've written. Since you've already defined
config :ref, :validate => :string, :required => true
you just use @ref to get the value.
For example: event['somefield'] = @ref
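For context, here is a minimal sketch of how such a filter plugin might hang together; the enhance plugin name and the 'enhanced' target field are hypothetical, and it uses the same legacy event[...] API as the answer above:
# lib/logstash/filters/enhance.rb
require "logstash/filters/base"
require "logstash/namespace"

class LogStash::Filters::Enhance < LogStash::Filters::Base
  config_name "enhance"

  # the option set in the config file; @ref holds the field *name* as a string
  config :ref, :validate => :string, :required => true

  def register
    # no setup needed
  end

  def filter(event)
    # event[@ref] dereferences the name to read that field's content
    event["enhanced"] = event[@ref]
    filter_matched(event)
  end
end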
