I have this hash:
[
  {
    "title" => "The Today Show",
    "category" => "Show",
    "channel-name" => "CNBC",
    "scheduling" => [
      { "start" => "7am", "stop" => "9am" },
      { "start" => "10am", "stop" => "11am" },
      { "start" => "11am", "stop" => "12am" }
    ]
  },
  {
    "title" => "How I met your mother",
    "category" => "Show",
    "channel-name" => "CBS",
    "scheduling" => [
      { "start" => "7pm", "stop" => "9pm" },
      { "start" => "10pm", "stop" => "12pm" },
      { "start" => "11am", "stop" => "12am" }
    ]
  }
]
I need to "select" only the programs which have at least one schedule between "7pm" and "9pm".
I tried this, but it isn't working:
programs.select_by{ |p|
  p.scheduling.each{ |ps|
    ps.start <= "7pm" && ps.stop <= "9pm"
  }
}
PS: I used pseudo-code for the date comparison just to make this more readable :)
Try this
programs.select do |p|
  p.scheduling.any? do |ps|
    ps.start >= "7pm" && ps.stop <= "9pm"
  end
end
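Note that the "7pm"/"9pm" comparison above is still the question's pseudo-code; comparing the raw strings lexically would not give meaningful results. A minimal sketch of a real comparison, assuming the schedule times are strings that Ruby's Time.parse can understand:

require 'time'

window_start = Time.parse("7pm")   # 19:00 today
window_stop  = Time.parse("9pm")   # 21:00 today

selected = programs.select do |p|
  p.scheduling.any? do |ps|
    # keep a program if at least one schedule lies entirely inside the 7pm-9pm window
    Time.parse(ps.start) >= window_start && Time.parse(ps.stop) <= window_stop
  end
end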
Hello, I'm trying to take a substring of a log message using regex in Kibana scripted fields. I've run into an interesting scenario that doesn't add up. I converted the message field to a keyword so I could do scripted-field operations on it.
When I match with a conditional such as:
if (doc['message'].value =~ /(\b(?:\d{1,3}\.){3}\d{1,3}\b)/) {
  return "match"
} else {
  return "no match"
}
This matches the IP and correctly returns that there is an IP in the message. However, whenever I try to use the matcher function, which splits the matched text into substrings, it doesn't find any matches.
I'm following the guide in Elastic's documentation for doing this, located here:
https://www.elastic.co/blog/using-painless-kibana-scripted-fields
Below is the example script they give to match the first octet of an IP in a log message. However, it returns no matches even though there are IP addresses in the log messages. I can't even match plain text characters; no matter what I do it returns 0 matches.
I have enabled regex in elasticsearch.yml on my cluster as well.
def m = /^([0-9]+)\..*$/.matcher(doc['message'].value);
if ( m.matches() ) {
  return Integer.parseInt(m.group(1))
} else {
  return m.matches() + " - " + doc['message'].value;
}
This returns 0 matches. Even if I use the same expression as in the conditional:
/(\b(?:\d{1,3}\.){3}\d{1,3}\b)/
the matcher will still return false.
Any idea what I'm doing wrong here? According to the documentation this should work.
I tried using substrings when the value exists in the if conditional, but there are too many variations between the log messages. I also don't see a way to split and look through the list of outputs to pick the one with the IP if I just use a conditional for the scripted field.
Any idea on how to solve this?
Here is an example of what is returned from the script above.
The funny part is they all return false, and this is essentially just looking for numbers separated by dots. I've tried all kinds of regex combinations with no luck.
[
  {
    "_id": "VRYK_2kB0_nHZ_3qyRwt",
    "Source-IP": [
      "false - #Version: 1.0"
    ]
  },
  {
    "_id": "VhYK_2kB0_nHZ_3qyRwt",
    "Source-IP": [
      "false - 2019-02-17 00:34:11 127.0.0.1 GET /status/web - 8611 - 127.0.0.1 ELB-HealthChecker/2.0 - 200 0 0 31"
    ]
  },
  {
    "_id": "VxYK_2kB0_nHZ_3qyRwt",
    "Source-IP": [
      "false - #Software: Microsoft Internet Information Services 10.0"
    ]
  },
  {
    "_id": "WBYK_2kB0_nHZ_3qyRwt",
    "Source-IP": [
      "false - #Date: 2019-03-26 00:00:08"
    ]
  },
  {
    "_id": "WRYK_2kB0_nHZ_3qyRwt",
    "Source-IP": [
      127.0.0.1 ELB-HealthChecker/2.0 - 200 0 0 15"
    ]
  }
]
The script that ended up working is below. The key difference is that matches() requires the regex to match the entire field value, whereas find() looks for the pattern anywhere in the string:
if (doc["message"].value != null) {
  def m = /(\b(?:\d{1,3}\.){3}\d{1,3}\b)/.matcher(doc["message"].value);
  if (m.find()) {
    return m.group(1)
  } else {
    return "no match"
  }
} else {
  return "NULL"
}
{
  "service " : "namespace",
  "name" : "test ",
  "name" : "abc"
}
I need to replace the first occurrence of "name" so that it becomes
"name" : "test 12-12-34 12:09"
where the value "12-12-34 12:09" is in another variable that we can access using $date.
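One way to do this is a sketch in Ruby, since String#sub replaces only the first match; the file name doc.json, the json_text variable, and the date variable standing in for $date are assumptions for illustration:

date = "12-12-34 12:09"            # stand-in for the value held in $date
json_text = File.read("doc.json")  # hypothetical input file

# sub (unlike gsub) rewrites only the first "name" : "..." pair it finds
updated = json_text.sub(/"name"\s*:\s*"[^"]*"/, %("name" : "test #{date}"))
File.write("doc.json", updated)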
I made several tests and this behaviour is strange. With:
events: [
  {
    title : 'event2',
    start : '2018-02-05',
    end : '2018-02-07 08:00:00'
  }
]
the event appears as a 2-day event (5 and 6), which is bad behaviour for my project. But with:
events: [
  {
    title : 'event2',
    start : '2018-02-05',
    end : '2018-02-07 09:00:00'
  }
]
the event appears as a 3-day event (5, 6 and 7), which is good behaviour for my project.
There's something around 9 o'clock, I don't know what. How can I fix it?
In the fullCalendar options it's possible to add
nextDayThreshold: "00:00:00"
and the behaviour around 9am disappears.
The default value for nextDayThreshold is 9am.
Thanks again for the pointer about nextDayThreshold.
Nevertheless, it seems there is strange behaviour with 00:00:00, and only with this value. I mean:
events: [
  {
    title : 'event2',
    start : '2018-02-05',
    end : '2018-02-07 01:00:00'
  }
],
nextDayThreshold: "01:00:00"
gives a 3-day event on the calendar (normal), but
events: [
  {
    title : 'event2',
    start : '2018-02-05',
    end : '2018-02-07 00:00:00'
  }
],
nextDayThreshold: "00:00:00"
gives only a 2-day event, alas...
I want to update 2 fields in a document in a single update request, using an inline painless script:
{
  "script" : {
    "inline": "ctx._source.counter1++; ctx._source.counter2 == 0 ? ctx.op = 'noop' : ctx._source.counter2++"
  }
}
The problem is: if the condition is met and ctx.op = 'noop', then the first part of the script (ctx._source.counter1++;) is also not executed.
How would you recommend I do this?
I could split the operation into 2 update requests, which would double my DB calls (but maybe a 'noop' call is extremely fast).
I also tried swapping the 2 parts of the script (the conditional first, the increment second), but then I get a compilation error:
"script_stack": [
" ctx._source. ...",
" ^---- HERE"
],
"script": " ctx._source.counter2 > 0 ? ctx.op = 'noop' : ctx._source.counter2++ ; ctx._source.counter1++ ",
"lang": "painless",
"caused_by": {
"type": "illegal_argument_exception",
"reason": "Not a statement."
}
Any ideas?
Thanks
I need to bulk insert an array of embedded documents into an existing document. I have tried the code below, but it is not working:
arr_loc = []
arr_loc << Location.new(:name=> "test") << Location.new(:name=> "test2")
biz = Business.first
biz.locations = arr_loc
biz.save # not working
Currently I am inserting each doc separately by looping over the array; I hope there is a better, cleaner way to do this.
From the mongo shell we can easily do this like so:
> var mongo = db.things.findOne({name:"mongo"});
> print(tojson(mongo));
{"_id" : "497da93d4ee47b3a675d2d9b" , "name" : "mongo", "type" : "database"}
> mongo.data = { a:1, b:2};
{"a" : 1 , "b" : 2}
> db.things.save(mongo);
> db.things.findOne({name:"mongo"});
{"_id" : "497da93d4ee47b3a675d2d9b" , "name" : "mongo" , "type" : "database", "data" : {"a" : 1 , "b" : 2}}
>
Check the link for more info. Is it possible to do this with Mongoid?
It turns out the problem was calling the save method after the assignment:
biz.locations = arr_loc #this is fine
biz.save # no need for that
Mongoid updates the document on the assignment itself; no explicit save is required. Refer to this Mongoid Google Group thread (thanks Nick Hoffman) for more info.
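If you want to append to the existing embedded documents rather than replace them, something like this should also work (a sketch, assuming locations is a standard embeds_many relation; appending to an already-persisted parent saves the new child as well):

biz = Business.first

# replacing the whole embedded array persists immediately, as described above
biz.locations = [Location.new(:name => "test"), Location.new(:name => "test2")]

# appending should also persist the new child because the parent is already saved
biz.locations << Location.new(:name => "test3")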