I have a weird situation with $gt and $gte conditions in the Ruby MongoDB driver.
Here is the code:
timeline = timeline_db.find(
    { date: { "$gt" => s_time }, username: { "$in" => followers_array } },
    sort: ["date", Mongo::DESCENDING],
    limit: 10
)
The problem is that this query returns the item with exactly the time I'm requesting, which would only be valid if I had written $gte instead. $gte returns exactly the same result as $gt. Why does this happen?
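One thing that may be worth ruling out (a diagnostic sketch, not a confirmed cause, and it assumes the legacy mongo gem used above): BSON dates are stored with millisecond precision, while a Ruby Time can carry sub-millisecond digits, so comparing the raw values at full precision shows whether the boundary and the stored date are really distinct. The variable names reuse the ones from the question.

    # fetch the newest document for these users and compare its date to s_time at full precision
    doc = timeline_db.find_one(
        { username: { "$in" => followers_array } },
        sort: ["date", Mongo::DESCENDING]
    )
    puts format("stored: %.6f  boundary: %.6f", doc["date"].to_f, s_time.to_f)

    # if s_time carries sub-millisecond digits, truncate it before using it in $gt
    s_time_ms = Time.at((s_time.to_f * 1000).floor / 1000.0)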
A related question:
I have been stuck on this for several hours now. I need to write a query that returns all documents where (Field A - Field B) > N
// sample data
{ _id: '...', estimated_hours: 0, actual_hours: 0 },
{ _id: '...', estimated_hours: 10, actual_hours: 9 },
{ _id: '...', estimated_hours: 20, actual_hours: 30 }
Borrowing answers from this Stack Overflow question, I wrote the code below. In my mind this should have worked, however I am consistently getting records back that do not match the query...
## Attempt 1
n = 0
records = API::Record.where('$where': "(this.estimated_hours - this.actual_hours) > #{n}")
## should return only the following, but I'm getting additional records
#=> [{ _id: '...', estimated_hours: 10, actual_hours: 9 }]
I know I could likely accomplish this with $project, however I would have to explicitly tell $project which fields I want returned. I need all the fields to be returned, because we use a third-party library that handles pagination.
A mongo shell version of the same filter, for reference:
db.collection.find({
$where: "(this.estimated_hours - this.actual_hours) > 1"
})
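If the MongoDB server is 3.6 or newer, a $expr filter can express the same comparison without $where and still returns full documents, so no $project is needed. A sketch, assuming the same API::Record model and that its where call accepts a raw selector hash (Mongoid/MongoMapper-style):

    n = 0
    # $expr (MongoDB 3.6+) evaluates an aggregation expression inside find(),
    # so whole documents come back and no server-side JavaScript is executed
    records = API::Record.where(
        '$expr' => {
            '$gt' => [
                { '$subtract' => ['$estimated_hours', '$actual_hours'] },
                n
            ]
        }
    )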
A similar example, for reference:
I have a field named "timestamp" in my configuration. It holds an array of data in epoch time (milliseconds). I want to use a Ruby filter to convert each epoch time in the array into a date format consumable by Kibana, storing the converted values in a new field as an array. I am getting syntax errors. Can anyone help me out? I am new to Ruby.
ruby {
    code => {'
        event.get("timestamp").each do |x| {
            event["timestamp1"] = Time.at(x)
        }
    '}
}
I don't know about Logstash, but the Ruby code you've included within the quotes is invalid. Try this:
ruby {
    code => '
        # event.set replaces the older event["..."] = ... form;
        # note this overwrites timestamp1 on every pass through the array
        event.get("timestamp").each { |x| event.set("timestamp1", Time.at(x)) }
    '
}
If you intend your timestamp key to increment, then you need to include an index:
ruby {
    code => '
        # each_with_index gives a counter so each element lands in its own field
        event.get("timestamp").each_with_index { |x, i| event.set("timestamp#{i}", Time.at(x)) }
    '
}
This will take a timestamp array with epoch-time values in milliseconds and create a new field with the parsed times; the code goes inside the ruby filter. Note: this does not convert the values into a date field format.
code => '
    # timestamp holds epoch milliseconds, so divide by 1000 before handing to Time.at
    timestamps = Array.new
    event.get("timestamp").each { |x| timestamps.push(Time.at(x.to_i / 1000)) }
    event.set("timestamp1", timestamps)
'
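If Kibana needs to treat the new field as a date, one option (a sketch, not part of the original answer; timestamp1 is the same illustrative field name) is to emit ISO8601 strings and map that field as a date in the index template:

code => '
    # timestamp holds epoch milliseconds; render each as an ISO8601 UTC string
    # so an Elasticsearch date mapping (or a date filter) can consume it
    timestamps = event.get("timestamp").map { |x|
        Time.at(x.to_i / 1000.0).utc.strftime("%Y-%m-%dT%H:%M:%S.%LZ")
    }
    event.set("timestamp1", timestamps)
'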
I'm facing a problem with my Logstash configuration, which you can find below.
The Ruby filter removes every dot (".") from my field names. Everything seems to work - the filtered data is correct - but Elasticsearch responds with: "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"Field name [/ConsumerAdminWebService/getConsumerTransactions.call] cannot contain '.'"} where getConsumerTransactions.call is one of my field keys.
input {
  http_poller {
    urls => {
      uatBackend1 => {
        method => get
        url => "http://some-url/"
        headers => {
          Accept => "application/json"
        }
      }
    }
    request_timeout => 60
    # Run every 30 seconds
    schedule => { cron => "* * * * * UTC" }
    codec => "json"
    metadata_target => "http_poller_metadata"
  }
}

filter {
  ruby {
    init => "
      def remove_dots hash
        new = Hash.new
        hash.each { |k,v|
          if v.is_a? Hash
            v = remove_dots(v)
          end
          new[ k.gsub('.','_') ] = v
          if v.is_a? Array
            v.each { |elem|
              if elem.is_a? Hash
                elem = remove_dots(elem)
              end
              new[ k.gsub('.','_') ] = elem
            } unless v.nil?
          end
        } unless hash.nil?
        return new
      end
    "
    code => "
      event.instance_variable_set(:@data, remove_dots(event.to_hash))
    "
  }
}

output {
  elasticsearch {
    hosts => localhost
  }
}
I'm afraid this line of code is not correct: event.instance_variable_set(:@data, remove_dots(event.to_hash)) - the result is somehow attached to the event, but the original data persists unchanged and is what gets delivered to the Elasticsearch API.
I suppose some clarifications are required here:
I use an ES version above 2.0, so dots are not allowed.
The Ruby filter should replace the dots with "_", and it works great - the resulting data is fully correct - yet ES replies with the error above. I suspect the filter does not replace the event data but simply adds a new field to the Event object; ES then still reads the original data, not the updated data.
To be honest, Ruby is magic to me :)
If you're using ES version 2.0, this could be a version issue: ES 2.x does not accept field names that contain dots.
According to this response in this thread:
Field names cannot contain the . character in Elasticsearch 2.0.
As a workaround you might have to mutate (rename) your field names to use something like _ or - instead of the . dot. This ticket pretty much explains the issue; dots are allowed again in later ES versions. Hope it helps!
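If the goal is to keep the Ruby filter from the question rather than switch to mutate, one possible alternative (a sketch, not the original poster's code, assuming the Logstash 5+ event API with event.to_hash / event.remove / event.set) is to write the cleaned fields back through the public Event API instead of instance_variable_set, which newer Logstash versions appear to ignore because the event is no longer backed by a plain Ruby @data hash:

filter {
  ruby {
    init => "
      # recursively replace dots in nested hash keys
      def remove_dots(value)
        case value
        when Hash  then value.each_with_object({}) { |(k, v), h| h[k.gsub('.', '_')] = remove_dots(v) }
        when Array then value.map { |v| remove_dots(v) }
        else value
        end
      end
    "
    code => "
      # rewrite each top-level field via the public Event API so the change sticks
      event.to_hash.each do |k, v|
        next if k.start_with?('@')   # leave @timestamp / @version alone
        event.remove(k)
        event.set(k.gsub('.', '_'), remove_dots(v))
      end
    "
  }
}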
So I have one error and two questions about Horizon. (http://horizon.io/docs/)
I have a simple table with one record inside; this is the row:
id: "o34242-43251-462...",
user_id: "3lw5-6232s2...",
other_id: "531h51-51351..."
When I run hz serve I'm getting this error:
Unexpected index name (invalid field): "hz_[["user_id"],[["other_id","0"]]]".
Okay, okay, an invalid field... but I couldn't find any information about what a "valid" field is. Does anybody know the answer? What can I do?
My questions:
How do I run Horizon "forever" on Ubuntu? Right now I'm just using hz serve and "&".
If I have a few queries, for example:
let table = this.horizon('firstTable');
let table2 = this.horizon('secondTable');
table.find(someId).fetch().subscribe((item) => {
//and then I want to run other query, e.g:
table2.find(item.id).fetch().subscribe((value) => {
//and here again run other query... <-- how to avoid this?
});
});
How do I return a value from one Horizon query and then use that value inside another query? I don't want to nest everything in one function...
Thanks for any help.
Since 'fetch' returns an RxJS observable that yields only one result, you can use 'toPromise' to consume it in a convenient fashion.
let table = this.horizon('firstTable');
let table2 = this.horizon('secondTable');
// (these two awaits need to run inside an async function)
let item1 = await table.find(someId).fetch().toPromise();
let item2 = await table2.find(item1.id).fetch().toPromise();
Or, without async/await, just chaining Promise.then:
table.find(someId).fetch().toPromise().then((item1) => {
table2.find(item1.id).fetch().toPromise().then((item2) => {
// take over the world from here
});
});
I have certain documents with a name: String and a version: Integer.
What I need is a list of documents of the highest version per name.
So I think I need the equivalent of a SQL GROUP BY, and then something like HAVING to keep only the max version per name.
I have no idea where to start with MongoDB. If anyone could write this query for the mongo shell, that would be a great start, but an added bonus would be the syntax for MongoMapper specifically.
If you are on MongoDB 2.2+, you can do your query using the aggregation framework with the $group pipeline operator.
The documentation is here: http://docs.mongodb.org/manual/reference/aggregation/
MongoMapper doesn't have a helper for the aggregation framework but you can use the Ruby driver directly (driver version 1.7.0+ has an aggregate helper method). You would have to get an instance of Mongo::Collection and call the aggregate method on it. For example:
Model.collection.aggregate([
  { "$group" => {
      "_id"         => "$name",
      "max_version" => { "$max" => "$version" }
  } }
])
I hope that helps!
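If what you need is the whole highest-version document per name rather than just the number, a hedged variant (it requires MongoDB 2.6+ for $$ROOT, newer than the 2.2 baseline above) is to sort by version first and keep the first document in each group:

Model.collection.aggregate([
  { "$sort"  => { "version" => -1 } },
  { "$group" => { "_id" => "$name", "doc" => { "$first" => "$$ROOT" } } }
])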
If you want to do a group by with MongoDB, check out the Aggregation Framework - it is the exact tool for the job!
The SQL-to-aggregation mapping in the documentation shows the Aggregation Framework equivalents of GROUP BY, HAVING, and more.
Thanks to @Simon I had a look at map/reduce with MongoMapper. My take on it is probably not perfect, but it does what I want it to do. Here's the implementation:
class ChildTemplate
  ...
  key :name, String
  key :version, Integer, :default => 1
  ...

  private

  def self.map
    <<-JS
      function() {
        emit(this.name, this);
      }
    JS
  end

  private

  def self.reduce
    <<-JS
      function(key, values) {
        var res = values[0];
        for (var i = 1; i < values.length; i++) {
          if (values[i].version > res.version) {
            res = values[i];
          }
        }
        return res;
      }
    JS
  end

  def self.latest_versions(opts = {})
    results = []
    opts[:out] = "ct_latest_versions"
    ChildTemplate.collection.map_reduce(map, reduce, opts).find().each do |map_hash|
      results << map_hash["value"]
    end
    return results
  end
end
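Usage is then a plain class-method call; the returned entries are whole documents, since the map function emits this:

latest = ChildTemplate.latest_versions
latest.each { |doc| puts "#{doc['name']} v#{doc['version']}" }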