Elasticsearch sum total values for specific hours within a month

I have an Elasticsearch server with the fields timestamp, user and bytes_down (among others).
I would like to total the bytes_down value for a user for a month, BUT only where the hours are between 8am and 8pm.
I'm able to get the daily totals with a date histogram facet using the following query (I'm using the Perl API here), but I can't figure out a way of restricting it to that hour range for each day:
my $query = {
    index => 'cm',
    body => {
        query => {
            filtered => {
                query => {
                    term => { user => $user }
                },
                filter => {
                    and => [
                        {
                            range => {
                                timestamp => {
                                    gte => '2014-01-01',
                                    lte => '2014-01-31'
                                }
                            }
                        },
                        {
                            bool => {
                                must => {
                                    term => { zone => $zone }
                                }
                            }
                        }
                    ]
                }
            }
        },
        facets => {
            bytes_down => {
                date_histogram => {
                    field => 'timestamp',
                    interval => 'day',
                    value_field => 'downstream'
                }
            }
        },
        size => 0
    }
};
Thanks
Dale

I think you need to use a script filter instead of the range filter, and then put it in the facet_filter section of your facet:
"facet_filter" => {
"script" => {
"script" => "doc['timestamp'].date.getHourOfDay() >= 8 &&
doc['timestamp'].date.getHourOfDay() < 20"
}
}
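Wired into the Perl query from the question, the facet section might then look something like the sketch below. This is only a sketch, assuming dynamic scripting is enabled on the cluster; in the 1.x facet syntax, facet_filter sits alongside date_histogram inside the facet definition:
    facets => {
        bytes_down => {
            date_histogram => {
                field => 'timestamp',
                interval => 'day',
                value_field => 'downstream'
            },
            facet_filter => {
                script => {
                    script => "doc['timestamp'].date.getHourOfDay() >= 8 && doc['timestamp'].date.getHourOfDay() < 20"
                }
            }
        }
    },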

Add a bool must range filter for every hour. I'm not sure if you're looking to do this on an ongoing basis or just for a specific day, but this slide deck from Zachary Tong is a good way to understand what you could be doing, especially with filters in general.
https://speakerdeck.com/polyfractal/elasticsearch-query-optimization?slide=28

Related

How to export nested fields in Elasticsearch Index as CSV file to Google Cloud Storage Using Logstash

I am using Elasticsearch, creating a day-wise index, and a huge amount of data is being ingested every minute. I want to export a few fields from the index created every day to Google Cloud Storage. I am able to get direct (top-level) fields from the index, but how do I get fields from nested objects in the Elasticsearch index and send them as a CSV file to a GCS bucket using Logstash?
I tried the conf below to fetch nested fields from the index; it didn't work and gives empty values in the output CSV file:
input {
  elasticsearch {
    hosts => "host:443"
    user => "user"
    ssl => true
    connect_timeout_seconds => 600
    request_timeout_seconds => 600
    password => "pwd"
    ca_file => "ca.crt"
    index => "test"
    query => '
    {
      "_source": ["obj1.Name","obj1.addr","obj1.obj2.location"],
      "query": {
        "match_all": {}
      }
    }
    '
  }
}
filter {
  mutate {
    rename => {
      "obj1.Name" => "col1"
      "obj1.addr" => "col2"
      "obj1.obj2.location" => "col3"
    }
  }
}
output {
  google_cloud_storage {
    codec => csv {
      include_headers => true
      columns => [ "col1", "col2", "col3" ]
    }
    bucket => "bucket"
    json_key_file => "creds.json"
    temp_directory => "/tmp"
    log_file_prefix => "log_gcs"
    max_file_size_kbytes => 1024
    date_pattern => "%Y-%m-%dT%H:00"
    flush_interval_secs => 600
    gzip => false
    uploader_interval_secs => 600
    include_uuid => true
    include_hostname => true
  }
}
How do I get a field from an array of objects populated into the above CSV? In the example below I want to fetch categoryUrl:
"Hierarchy" : [
{
"level" : "1",
"category" : "test",
"categoryUrl" : "testurl1"
},
{
"level" : "2",
"category" : "test2",
"categoryUrl" : "testurl2"
}}
You need to use the Logstash field reference notation:
mutate {
  rename => {
    "[obj1][Name]" => "col1"
    "[obj1][addr]" => "col2"
    "[obj1][obj2][location]" => "col3"
  }
}
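The Hierarchy array from the question isn't covered by the rename above. One possible sketch for it, assuming the field is literally named Hierarchy as in the sample and that you want every categoryUrl collapsed into a single CSV column (col4 here is just a hypothetical name), is a ruby filter:
filter {
  ruby {
    # Collect categoryUrl from each element of the Hierarchy array
    # into one comma-separated value in a new field.
    code => '
      urls = (event.get("Hierarchy") || []).map { |h| h["categoryUrl"] }.compact
      event.set("col4", urls.join(","))
    '
  }
}
If only one entry is needed, a specific element can also be addressed by index in a field reference, e.g. "[Hierarchy][0][categoryUrl]".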

Logstash: issues with replacing #timestamp with logfile time in Kibana

We have a Logstash parsing script for our logfiles. We are facing an issue when trying to replace #timestamp with the logfile time. Below is the filter that we have used:
filter {
  json {
    source => "message"
    target => "doc"
  }
  mutate {
    copy => { "[doc][message]" => "mesg" }
    copy => { "[doc][log][file][path]" => "logpath" }
    remove_field => [ "[doc]" ]
  }
  if ( "/prodlogsfs/" not in [logpath] ) {
    drop { }
  }
  if [logpath] {
    dissect {
      mapping => {
        "logpath" => "%{deployment}deployment-%{?id}-%{?extra}"
      }
    }
  }
  grok {
    match => {
      "mesg" => [
        "^\s?\[%{DATA:loglevel}\] %{TIMESTAMP_ISO8601:logts} \[%{DATA:threadname}\] %{DATA:podname} %{DATA:filler1} \[%{DATA:classname}\] %{GREEDYDATA:fullmesg}",
        "(\s)+(?<exception>%{DATA}Exception)[:\s]+(?<trace>(%{DATA}at)+)"
      ]
    }
  }
  # Date filter being used to replace #timestamp with logfile time
  if [logts] {
    date {
      match => [ "logts", "ISO8601" ]
      timezone => "Asia/Kolkata"
      target => ["#timestamp"]
    }
  }
}
With the above code, when we check the values for #timestamp and logts in Kibana, #timestamp shows the current time, whereas the logts time appears to be in the future (+5:30). We need help on how to match #timestamp with logts.
Any help on this is much appreciated. Thanks in advance.

Elasticsearch: only show where nested object has no values

I have the following structure (simplified):
{
  "id": 100,
  "vendorStatuses": [
    {
      "id": 200,
      "status": "Open"
    }
  ]
}
What I want to find is records where there are no vendor statuses. We recently upgraded from Elasticsearch 1.x to 5.x and I'm having trouble converting the query to get this functionality back.
My old Nest query looked like this:
!Filter<PurchaseOrder>.Nested(nfd => nfd.Path(x => x.VendorStatuses.First())
.Filter(f2 => f2.Missing(y => y.Id)));
The new query (now that Missing isn't available) looks like this so far:
Query<PurchaseOrder>
    .Bool(z => z
        .MustNot(a => a
            .Exists(t => t
                .Field(f => f.VendorStatuses)
            )
        )
    );
Which generates this:
GET purchaseorder/_search
{
  "query": {
    "bool": {
      "must_not": [
        {
          "exists": {
            "field": "vendorStatuses"
          }
        }
      ]
    }
  }
}
But I'm still seeing results that have vendorStatuses records.
What am I doing wrong? I've tried searching on vendorStatuses.id and other fields, but it's not working. When I try to reverse the logic and use a must, I see no results. I also tried doing it as a nested query but couldn't get any closer with that.
The query using must_not and exists is not a nested query like the 1.x query. I think you're looking for something like
var query = Query<PurchaseOrder>
    .Bool(z => z
        .MustNot(a => a
            .Nested(n => n
                .Path(p => p.VendorStatuses)
                .Query(nq => nq
                    .Exists(t => t
                        .Field(f => f.VendorStatuses)
                    )
                )
            )
        )
    );

client.Search<PurchaseOrder>(s => s.Query(_ => query));
which yields
{
  "query": {
    "bool": {
      "must_not": [
        {
          "nested": {
            "query": {
              "exists": {
                "field": "vendorStatuses"
              }
            },
            "path": "vendorStatuses"
          }
        }
      ]
    }
  }
}
You can use operator overloading to make the query more succinct too
var query = !Query<PurchaseOrder>
    .Nested(n => n
        .Path(p => p.VendorStatuses)
        .Query(nq => nq
            .Exists(t => t
                .Field(f => f.VendorStatuses)
            )
        )
    );
I found a workaround that is far from ideal in my opinion. I created a new property on my PurchaseOrder model, NumberOfStatuses, and then I just do a term search on that for a value of 0.
public int NumberOfStatuses => VendorStatuses.OrEmptyIfNull().Count();
Query<PurchaseOrder>.Term(t => t.Field(po => po.NumberOfStatuses).Value(0));
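One caveat with this workaround: NumberOfStatuses is computed and stored at index time, so documents indexed before the property was added won't have the field and won't match the term query until they are reindexed.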

Mongoid: Query based on size of embedded document array

This is similar to this question here but I can't figure out how to convert it to Mongoid syntax:
MongoDB query based on count of embedded document
Let's say I have Customer: {_id: ..., orders: [...]}
I want to be able to find all Customers that have existing orders, i.e. orders.size > 0. I've tried queries like Customer.where(:orders.size.gt => 0) to no avail. Can it be done with an exists? operator?
A nicer way would be to use the native syntax of MongoDB rather than resort to Rails-like methods or the JavaScript evaluation pointed to in the accepted answer of the question you link to, especially as evaluating a JavaScript condition will be much slower.
The logical extension of $exists for an array with some length greater than zero is to use "dot notation" and test for the presence of the "zero index", or first element, of the array:
Customer.collection.find({ "orders.0" => { "$exists" => true } })
That can be done with any index value: testing that element n-1 exists requires the array to have a length of at least n.
Worth noting that for a "zero length" array exclusion the $size operator is also a valid alternative, when used with $not to negate the match:
Customer.collection.find({ "orders" => { "$not" => { "$size" => 0 } } })
But this does not apply well to larger "size" tests, as you would need to specify all sizes to be excluded:
Customer.collection.find({
  "$and" => [
    { "orders" => { "$not" => { "$size" => 4 } } },
    { "orders" => { "$not" => { "$size" => 3 } } },
    { "orders" => { "$not" => { "$size" => 2 } } },
    { "orders" => { "$not" => { "$size" => 1 } } },
    { "orders" => { "$not" => { "$size" => 0 } } }
  ]
})
So the other syntax is clearer:
Customer.collection.find({ "orders.4" => { "$exists" => true } })
Which means 5 or more members in a concise way.
Please also note that none of these conditions alone can use an index, so if you have another filter condition that can, it is best to include that condition first.
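If you would rather stay on the model than drop down to .collection, the same raw selectors should also pass straight through Mongoid's where (a sketch, not tested against a particular Mongoid version):
# Customers with at least one embedded order (the "zero index exists" trick)
Customer.where("orders.0" => { "$exists" => true })
# Customers with five or more orders
Customer.where("orders.4" => { "$exists" => true })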
Just adding my solution which might be helpful for someone:
scope :with_orders, -> { where(orders: { "$exists" => true }, :orders.not => { "$size" => 0 }) }

Compare Multidimensional Hash

I have a huge hash (JSON) that I want to compare to a "master key" by deleting the values that are dissimilar and then totaling a value set.
I thought it would be a good way to handle test scoring with complex scoring criteria.
Any advice on how to do this? Do any gems exist to make my life easier?
{
"A" => 10,
"B" => 7,
etc
....
The hash is constructed like test[answer] => test[point_value]: each question's key/value pair is its answer and point value.
So if I want to compare against a master_key and remove the dissimilar items (not remove the similar ones, like arr1 - arr2 does), then total the values, what would be best?
After converting the JSON to Ruby hashes, I'd do something like this:
tester = { :"first" => { :"0" => { :"0" => { :"B" => 10 }, :"1" => { :"B" => 7 }, :"2" => { :"B" => 5 } } }}
master = { :"first" => { :"0" => { :"0" => { :"A" => 10 }, :"1" => { :"B" => 7 }, :"2" => { :"B" => 5 } } }}
tester.reduce(0) do |score, (test, section)|
  section.each do |group, questions|
    questions.each do |question, answer|
      # award the points only when the tester's answer matches the master key's
      if answer.keys.first == master[test][group][question].keys.first
        score += answer.values.first
      end
    end
  end
  score
end
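With the sample tester and master above, the first question's answers differ (:B vs :A), so its 10 points are skipped and the block returns 7 + 5 = 12.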
