Issue with Painless Script in an Elasticsearch Watcher

I am creating a watcher in Elasticsearch that reports when we haven't had a new entry or event in the index for 10 minutes; this is further split out by looking at the source field in the entry.
I am only querying the last 10 minutes of the index and checking which sources are not present in the buckets.
To do this I first create a list of all the source types we receive, then create a list from the bucket keys returned. I then want to compare the two lists to see which source is missing and pass that into the message.
I am getting a generic error on the for loop. Any feedback is helpful; I'm quite new to Elastic and Painless, so it could be something simple I've missed.
"transform": {
"script": {
"source": """String vMessage = 'Clickstream data has been loaded although there are no iovation records from the following source in the last 10 mins:
';if(ctx.payload.clickstream.hits.total > 0 && ctx.payload.iovation.aggregations.source.buckets.size() < 3) { source_list = ['wintech', 'login', 'clickstream']; source_array = new String[] for (source in ctx.payload.iovation.aggregations.source.buckets){ source_array.add(source.key); } for (key in source_list){ if (!source_array.contains(key){ vMessage += '<ul><li>' + key + '</li></ul>';} } }return [ 'message': vMessage ];""",
"lang": "painless"
}
},

So I figured it out after digging through more documentation.
I was declaring my lists incorrectly. To declare a list, it needs to be in the format below:
List new_list = new ArrayList();
This solved my issue and the transform script now works as expected.
"""String vMessage = 'Clickstream data has been loaded although there are no iovation records from the following source in the last 10 mins:
';if(ctx.payload.clickstream.hits.total > 0 && ctx.payload.iovation.aggregations.source.buckets.size() < 3) { List source_list = new ArrayList(['wintech', 'login', 'clickstream']); List source_array = new ArrayList(); for (source in ctx.payload.iovation.aggregations.source.buckets){ source_array.add(source.key); } for (key in source_list){ if (!source_array.contains(key)){ vMessage += '<ul><li>' + key + '</li></ul>';} } }return [ 'message': vMessage ];""",
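For completeness, the message built by the transform can then be referenced from the watch's action via Mustache templating. A minimal sketch of a logging action (the action name notify_missing_sources is made up for illustration):
"actions": {
  "notify_missing_sources": {
    "logging": {
      "text": "{{ctx.payload.message}}"
    }
  }
}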

Related

I want to update values in an array within an array while using MongoTemplate

First, I will show the state stored in MongoDB.
As you can see, it is a structure with a list called replies inside a list called comments, and inside each reply there is an array called likes.
comments : [
    {   // Object1
        replies : [
            {
                likes : [
                    {},
                    {}
                ]
            }
        ]
    },
    {   // Object2
        replies : [
            {
                likes : [
                    {},
                    {}
                ]
            }
        ]
    }
]
What I want to do here is add or remove a value only in the likes array inside a specific reply. I'm currently using Spring Boot and have tried the following:
Query query = new Query();
Criteria criteria = Criteria.where("_id").is(new ObjectId(postId))
.andOperator(Criteria.where("comments")
.elemMatch(Criteria.where("_id").is(new ObjectId(commentId))
.andOperator(Criteria.where("replies")
.elemMatch(Criteria.where("_id").is(new ObjectId(replyId)))
)
)
);
query.addCriteria(criteria);
Update update = new Update();
if (state) {
// remove user id
update.pull("comments.$[].replies.$.likes", new ObjectId(userId));
} else {
// add user id
update.push("comments.$[].replies.$.likes").value(new ObjectId(userId));
}
mongoTemplate.updateFirst(query, update, MyEntity.class);
This operation adds or removes the userId according to the boolean state. With this attempt, the specific comment is found, but the userId is always inserted into the first likes list of the replies list inside that comment. What I want is to insert it into the likes list of a specific reply. Am I using the wrong parameter in update.push()? I would appreciate it if you could tell me how to solve it.
Not a direct answer to your question, as I'm not experienced with Spring's criteria builder, but here's how you would do it in Mongo directly, which might help you figure it out:
You can define arrayFilters that keep track of the matching indices within comments and replies, and then use those identifiers to push a new object at the exact matching positions:
db.collection.update({
_id: "<postId>"
},
{
$push: {
"comments.$[comments].replies.$[replies].likes": {
_id: "newlyInsertedLikeId"
}
}
},
{
arrayFilters: [
{
"comments._id": "<commentId>"
},
{
"replies._id": "<replyId>"
}
]
})
Here's an example on mongoplayground: https://mongoplayground.net/p/eNdDXXlyi2X
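If you want to stay with MongoTemplate, the same arrayFilters idea can be expressed through Spring Data's Update.filterArray. This is only a sketch under the assumption that your Spring Data MongoDB version supports filtered positional operators ($[c] and $[r] are arbitrary identifiers), so verify it against your setup:
Query query = new Query(Criteria.where("_id").is(new ObjectId(postId)));

Update update = new Update();
if (state) {
    // remove the user id from the likes array of the matching reply
    update.pull("comments.$[c].replies.$[r].likes", new ObjectId(userId));
} else {
    // add the user id to the likes array of the matching reply
    update.push("comments.$[c].replies.$[r].likes", new ObjectId(userId));
}
update.filterArray(Criteria.where("c._id").is(new ObjectId(commentId)));
update.filterArray(Criteria.where("r._id").is(new ObjectId(replyId)));

mongoTemplate.updateFirst(query, update, MyEntity.class);
With arrayFilters the top-level criteria only needs to match the post; the comment and reply matching moves into the filter identifiers.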

Elastic Low Level Client - How to include multiple indexes in a search query

I'm struggling to figure out how to include multiple indexes in a search using the Elastic low level client.
My understanding (right or wrong) is that I should be able to include multiple indexes by separating them with commas, but this doesn't work for me. In the code example below, the first index specified works and returns results, but the second one is ignored. Any ideas?
Appsettings.json file:
// System settings configured here for the WebApp. Applicable to all users.
"SystemSettings": {
// Sets the maximum number of distinct values returned by Elastic for a log property
"_distinctPropertyValuesLimit": 1000, // See LogPropertiesController.cs
// String for the list of Elastic Search indexes that are searched by default.
"indexesToSearch": "webapp-razor-*, systemconfig-api-*"
}
Query class:
_indexesToSearch = configuration.GetSection("SystemSettings").GetSection("indexesToSearch").Value;
var searchResponse = await _elasticLowLevelClient.SearchAsync<StringResponse>(_indexesToSearch, @"
{
    ""from"": """ + fromParameter + @""",
    ""size"": """ + rowsPerPage + @""",
    ""query"": {
        ""match"": {
            """ + searchColumn + @""": {
                ""query"": """ + searchString + @"""
            }
        }
    },
    ""sort"": [
        {
            ""@timestamp"": {
                ""order"": ""desc""
            }
        }
    ]
}
");
It turns out that there must not be any spaces between the index names when multiple values are provided; see below:
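Based on that, the corrected appsettings value simply drops the space after the comma:
"SystemSettings": {
  "indexesToSearch": "webapp-razor-*,systemconfig-api-*"
}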

How do I deal with timestamp in bosun configuration?

I'm trying to insert an alert into Elasticsearch from Bosun, but I don't know how to fill the variable $timestamp (have a look at my example) with the present time. Can I use functions in bosun.conf? I'd like something like now().
Can anybody help me, please?
This is an extract of an example configuration:
macro m1
{
$timestamp = **???**
}
notification http_crit
{
macro = m1
post = http://xxxxxxx:9200/alerts/http/
body = {"#timestamp":$timestamp,"level":"critical","alert_name":"my_alert"}
next = http_crit
timeout = 1m
}
alert http
{
template = elastic
$testHTTP = lscount("logstash", "", "_type:stat_http,http_response:200", "1m", "5m", "")
$testAvgHTTP = avg($testHTTP)
crit = $testAvgHTTP < 100
critNotification = http_crit
}
We use .State.Touched.Format, which was recently renamed to .Last.Time.Format in the master branch. The format string is a Go time format, and you would have to get it to print the format that Elastic is expecting.
template elastic {
subject = `Time: {{.State.Touched.Format "15:04:05UTC"}}`
}
//Changed on 2016 Feb 01 to
template elastic {
subject = `Time: {{.Last.Time.Format "15:04:05UTC"}}`
}
Which when rendered would look like:
Time: 01:30:13UTC
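If the target index expects an RFC3339-style @timestamp, the format string would be Go's reference time written in that layout; a hedged sketch (check it against the date format your Elasticsearch mapping expects):
template elastic {
    subject = `Time: {{.Last.Time.Format "2006-01-02T15:04:05Z07:00"}}`
}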

Ruby finding duplicates in MongoDB

I am struggling to get this working efficiently. I think map/reduce is the answer, but I can't get anything working. I know it is probably a simple answer; hopefully someone can help.
Entry Model looks like this:
field :var_name, type: String
field :var_data, type: String
field :var_date, type: DateTime
field :external_id, type: Integer
If the external data source malfunctions we get duplicate data. One way to stop this was, when consuming the results, to check whether a record with the same external_id had already been consumed. However, this slows down the process a lot. The plan now is to check for duplicates once a day, so we want to get a list of Entries with the same external_id, which we can then sort and delete the ones no longer needed.
I have tried adapting the snippet from https://coderwall.com/p/96dp8g/find-duplicate-documents-in-mongoid-with-map-reduce as shown below, but I get:
failed with error 0: "exception: assertion src/mongo/db/commands/mr.cpp:480"
def find_duplicates
map = %Q{
function() {
emit(this.external_id, 1);
}
}
reduce = %Q{
function(key, values) {
return Array.sum(values);
}
}
Entry.all.map_reduce(map, reduce).out(inline: true).each do |entry|
puts entry["_id"] if entry["value"] != 1
end
end
Am I way off? Could anyone suggest a solution? I am using Mongoid, Rails 4.1.6 and Ruby 2.1.
I got it working using the suggestion by Stennie in the comments on the question, using the Aggregation Framework. It looks like this:
results = Entry.collection.aggregate([
{ "$group" => {
_id: { "external_id" => "$external_id"},
recordIds: {"$addToSet" => "$_id" },
count: { "$sum" => 1 }
}},
{ "$match" => {
count: { "$gt" => 1 }
}}
])
I then loop through the results and delete any unnecessary entries.
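A possible cleanup loop over those results is sketched below; it arbitrarily keeps the first record of each group and deletes the rest, so adjust which duplicate to keep (for example by sorting on var_date first) to your needs:
results.each do |doc|
  # keep one record per external_id and delete the remaining duplicates
  ids_to_remove = doc["recordIds"].drop(1)
  Entry.where(:_id.in => ids_to_remove).delete_all
end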

How can I validate DBRefs in a MongoDB collection?

Assuming I've got a MongoDB instance with 2 collections - places and people.
A typical places document looks like:
{
    "_id": "someID",
    "name": "Broadway Center",
    "url": "bc.example.net"
}
And a people document looks like:
{
    "name": "Erin",
    "place": DBRef("places", "someID"),
    "url": "bc.example.net/Erin"
}
Is there any way to validate the places DBRef of every document in the people collection?
There's no official/built-in method to test the validity of DBRefs, so the validation must be performed manually.
I wrote a small script - validateDBRefs.js:
var returnIdFunc = function(doc) { return doc._id; };
var allPlaceIds = db.places.find({}, {_id: 1} ).map(returnIdFunc);
var peopleWithInvalidRefs = db.people.find({"place.$id": {$nin: allPlaceIds}}).map(returnIdFunc);
print("Found the following documents with invalid DBRefs");
var length = peopleWithInvalidRefs.length;
for (var i = 0; i < length; i++) {
print(peopleWithInvalidRefs[i]);
}
That when run with:
mongo DB_NAME validateDBRefs.js
Will output:
Found the following documents with invalid DBRefs
513c4c25589446268f62f487
513c4c26589446268f62f48a
You could add a stored function for that. Please note that the MongoDB documentation discourages the use of stored functions; you can read about it here.
In essence you create a function:
db.system.js.save(
{
_id : "myAddFunction" ,
value : function (x, y){ return x + y; }
}
);
and once the function is created, you can use it in your where clauses. So you could write a function that checks for the existence of the id referenced in the DBRef.
