How do I deal with a timestamp in a Bosun configuration? - elasticsearch

I'm trying to insert an alert into Elasticsearch from Bosun, but I don't know how to fill the $timestamp variable (see my example) with the current time. Can I use functions in bosun.conf? I'd like something like now().
Can anybody help me, please?
This is an extract of an example configuration:
macro m1
{
    $timestamp = ???
}
notification http_crit
{
    macro = m1
    post = http://xxxxxxx:9200/alerts/http/
    body = {"#timestamp":$timestamp,"level":"critical","alert_name":"my_alert"}
    next = http_crit
    timeout = 1m
}
alert http
{
    template = elastic
    $testHTTP = lscount("logstash", "", "_type:stat_http,http_response:200", "1m", "5m", "")
    $testAvgHTTP = avg($testHTTP)
    crit = $testAvgHTTP < 100
    critNotification = http_crit
}

We use .State.Touched.Format, which was recently renamed to .Last.Time.Format in the master branch. The format string is a Go time format, and you would have to get it to print the format that Elasticsearch is expecting.
template elastic {
    subject = `Time: {{.State.Touched.Format "15:04:05UTC"}}`
}
Changed on 2016 Feb 01 to:
template elastic {
    subject = `Time: {{.Last.Time.Format "15:04:05UTC"}}`
}
Which when rendered would look like:
Time: 01:30:13UTC
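Elasticsearch's default date parsing expects an ISO 8601 value rather than the time-only layout above, so in practice you would swap in a full RFC 3339 layout; a minimal sketch building on the answer's template (the layout string is standard Go time formatting, not anything Bosun-specific):
# renders e.g. 2016-02-01T01:30:13Z, which Elasticsearch's default date mapping can parse
template elastic {
    subject = `{{.Last.Time.Format "2006-01-02T15:04:05Z07:00"}}`
}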

Related

Issue with Painless Script Elasticsearch Watcher

I am creating a watcher in Elasticsearch that reports when we haven't had a new entry or event in the index for 10 minutes; this is further split out by looking at the source field in each entry.
I query only the last 10 minutes of the index and check which source is not present in the buckets.
To do this I first create a list of all the source types we receive, then create a list from the bucket keys returned. I then want to compare the lists to see which source is missing and pass that into the message.
I am getting a generic error on the for loop. Any feedback is helpful; I'm quite new to Elasticsearch and Painless, so it could be something simple I've missed.
"transform": {
"script": {
"source": """String vMessage = 'Clickstream data has been loaded although there are no iovation records from the following source in the last 10 mins:
';if(ctx.payload.clickstream.hits.total > 0 && ctx.payload.iovation.aggregations.source.buckets.size() < 3) { source_list = ['wintech', 'login', 'clickstream']; source_array = new String[] for (source in ctx.payload.iovation.aggregations.source.buckets){ source_array.add(source.key); } for (key in source_list){ if (!source_array.contains(key){ vMessage += '<ul><li>' + key + '</li></ul>';} } }return [ 'message': vMessage ];""",
"lang": "painless"
}
},
So I figured it out after digging through more documentation.
I was declaring my lists incorrectly. To declare a list, it needs to be in the format below:
List new_list = new ArrayList();
This solved my issue, and the transform script now works as expected:
"""String vMessage = 'Clickstream data has been loaded although there are no iovation records from the following source in the last 10 mins:
';if(ctx.payload.clickstream.hits.total > 0 && ctx.payload.iovation.aggregations.source.buckets.size() < 3) { List source_list = new ArrayList(['wintech', 'login', 'clickstream']); List source_array = new ArrayList(); for (source in ctx.payload.iovation.aggregations.source.buckets){ source_array.add(source.key); } for (key in source_list){ if (!source_array.contains(key)){ vMessage += '<ul><li>' + key + '</li></ul>';} } }return [ 'message': vMessage ];""",

How to define a Terraform aws_ami data source for a Fedora Atomic Amazon Machine Image (AMI)

I am trying to use Terraform to define an aws_ami data source as follows:
data "aws_ami" "fedora_atomic" {
most_recent = true
filter {
name = "name"
values = [
"ubuntu/images/hvm-ssd/ubuntu-trusty-14.04-amd64-server-*"] <==== What to specify here?
}
filter {
name = "virtualization-type"
values = [
"hvm"]
}
owners = [
"099720109477"] <=== What's the owner id?
# Canonical
}
But I want to replace the above with the following image description, which I found on the AWS console:
Fedora-Atomic-25-20170727.0.x86_64-us-east-1-HVM-standard-0 - ami-00035c7b
Question
How do I find the right values for the fields above, i.e. what is the correct code for a Fedora Atomic image?
I am struggling to find this information.
Many Thanks
Fedora Atomic has been EOL since 2019, so you won't find new AMIs, but to answer your question: the owner is the Account ID, and you can find it in the AWS Console.
The name can be part of what is available in the description, i.e. Fedora-Atomic-25-.
Combining them all:
data "aws_ami" "fedora_atomic" {
most_recent = true
filter {
name = "name"
values = ["Fedora-Atomic-25-*"]
}
filter {
name = "virtualization-type"
values = [ "hvm"]
}
owners = ["125523088429"]
}
output "ami" {
value = data.aws_ami.fedora_atomic.id
}
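Once the data source resolves, it is typically referenced from an instance resource; a minimal sketch assuming you just want to boot the AMI (the resource name and instance type below are illustrative, not from the question):
resource "aws_instance" "atomic_host" {
  ami           = data.aws_ami.fedora_atomic.id  # the AMI looked up above
  instance_type = "t2.micro"                     # illustrative size, pick what you need
}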

Chat app: list the last message from each peer using Parse Server

I am building a chat app using Parse Server. Everything is great, but I am trying to list just the last message from every remote peer. I didn't find any query constraint for getting just one message from each remote peer. How can I do this?
Query limitation with the Parse SDK
To limit the number of objects that you get from a query, use limit.
Here is a little example:
const Messages = Parse.Object.extend("Messages");
const query = new Parse.Query(Messages);
query.descending("createdAt");
query.limit(1); // Get only one result
Get the first object of a query with the Parse SDK
In your case, since you really want only one result, you can use Query.first.
Like Query.find, the method Query.first runs a query but returns only the first result.
Here is an example:
const Messages = Parse.Object.extend("Messages");
const query = new Parse.Query(Messages);
query.descending("createdAt");
const message = await query.first();
I hope my answer helps you 😊
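Note that first on its own returns the single most recent message overall; to get the last message for one particular peer you would add a constraint before calling it. A hedged sketch, assuming the message class stores the remote peer in a remote field as the aggregate answer below suggests:
const Messages = Parse.Object.extend("Messages");
const query = new Parse.Query(Messages);
query.equalTo("remote", remotePeer); // remotePeer: however you reference the other user (assumption)
query.descending("createdAt");       // newest first
const lastMessage = await query.first();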
If you want to do this using a single query, you will have to use aggregate:
https://docs.parseplatform.org/js/guide/#aggregate
Try something like this:
var query = new Parse.Query("Messages");
var pipeline = [
{ match: { local: '_User$' + userID } },
{ sort: { createdAt: 1 } },
{ group: { remote: '$remote', lastMessage: { $last: '$body' } } },
];
query.aggregate(pipeline)
.then(function(results) {
// results contains the last message for each remote peer
})
.catch(function(error) {
// There was an error.
});

Get the full JSON response from the SonarQube Web API

I am using the SonarQube web API to detect bugs in Spoon, but I'm not getting the full list of about 189 bugs, only about 100, even when I use the types=BUG parameter. The GET request I'm using is https://sonarqube.ow2.org/api/issues/search?componentKeys=fr.inria.gforge.spoon:spoon-core&types=BUG . Is there any way to get the full JSON response?
You get only 100 items because 100 is the default page size for web API pagination.
In your example, when using:
https://sonarqube.ow2.org/api/issues/search?componentKeys=fr.inria.gforge.spoon:spoon-core&types=BUG&ps=200
you'll get all 189 bugs. The maximum value for the page size (ps) is 500.
If you want to know the total issue count, you'll need to check the response:
{
  "paging": {
    "pageIndex": 1,
    "pageSize": 100,
    "total": 189    <<---------------------------
  },
  "issues": [
    {
      ...
A Groovy snippet that uses total to fetch all issues in a loop:
import groovy.json.*
// call the SonarQube web API with basic auth and parse the JSON response
def sonarRest(url, method) {
jsonSlurper = new JsonSlurper()
raw = '...:'
bauth = 'Basic ' + javax.xml.bind.DatatypeConverter.printBase64Binary(raw.getBytes())
conn = new URL(url).openConnection() as HttpURLConnection
conn.setRequestMethod(method)
conn.setRequestProperty("Authorization", bauth)
conn.connect()
httpstatus = conn.responseCode
object = jsonSlurper.parse(conn.content)
}
issues = sonarRest('https://sonarhost/api/issues/search?severities=INFO&ps=1', 'GET')
total = Math.ceil(issues.total.toFloat() / 100) as int // round up so a partial last page is not missed
counter = 1
while(counter <= total)
{
issues = sonarRest("https://sonarhost/api/issues/search?severities=INFO&ps=100&p=$counter", 'GET')
println issues
counter++
}
Sorry, I don't even need it. I can add the rule as a parameter since I'm only using one rule at a time.

SolrNet Error - Unable to read data from the transport connection: The connection was closed

I'm trying to search a Solr server from a web service using SolrNet. I set up the connection in Global.asax: Startup.Init<ApartmentDoc>("http://192.168.0.100:8080/solr/");
I'm trying to query the server in a class file via:
var solr = ServiceLocator.Current.GetInstance<ISolrOperations<ApartmentDoc>>();
var apartments = solr.Query(SolrQuery.All, new QueryOptions
{
ExtraParams = new Dictionary<string, string> {
{ "defType", "edismax" } ,
{ "fl", "*,score,_dist_:geodist() " } ,
{ "bf", "recip(geodist(),1,1000,1000)" } ,
{ "fq", string.Format("{{!geofilt d={0}}}", radius * 1.609344) } ,
{ "sfield", "Location" } ,
{ "pt", string.Format("{0},{1}", centerLat, centerLong) }
}
});
return apartments;
The error I'm getting is: Unable to read data from the transport connection: The connection was closed.
I've checked the logs in Tomcat, and the request is going through and the results appear to have been returned.
Any ideas why I'm not getting the results back?
Thanks,
Drew
As the 'rows' parameter wasn't defined in your code, the request is likely timing out from trying to retrieve a large number of documents. As explained in the SolrNet documentation, always define pagination parameters.
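A minimal sketch of the same query with explicit pagination, assuming the rest of the options from the question stay unchanged (the page size of 20 is arbitrary):
var apartments = solr.Query(SolrQuery.All, new QueryOptions
{
    Rows = 20, // explicit page size; without it the request can pull back far more documents than intended
    ExtraParams = new Dictionary<string, string> {
        { "defType", "edismax" },
        // ... the remaining geodist parameters from the question stay the same
    }
});
return apartments;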
