How to force Elastic to keep more decimals from a float - elasticsearch

I have some coordinates that I pass to Elasticsearch from Logstash, but Elasticsearch keeps only 3 decimals, so coordinate-wise I completely lose the location.
When I send the data from Logstash, I can see it got the right value:
{
    "nasistencias" => 1,
    "tiempo_demora" => "15",
    "path" => "/home/elk/data/visits.csv",
    "menor" => "2",
    "message" => "5,15,Parets del Vallès,76,0,8150,41.565505,2.234999575,LARINGITIS AGUDA,11/3/17 4:20,1,38,1,2,POINT(2.2349995750000695 41.565505000000044)",
    "id_poblacion" => 76,
    "@timestamp" => 2017-03-11T04:20:00.000Z,
    "poblacion" => "Parets del Vallès",
    "edad_valor" => 0,
    "patologia" => "LARINGITIS AGUDA",
    "host" => "elk",
    "@version" => "1",
    "Geopoint_corregido" => "POINT(2.2349995750000695 41.565505000000044)",
    "id_tipo" => 1,
    "estado" => "5",
    "cp" => 8150,
    "location" => {
        "lon" => 2.234999575,  <- HERE
        "lat" => 41.565505     <- AND HERE
    },
    "id_personal" => 38,
    "Fecha" => "11/3/17 4:20"
}
But then in Kibana the coordinates show up with only three decimals.
I do the conversion as follows:
mutate {
    convert => { "longitud_corregida" => "float" }
    convert => { "latitude_corregida" => "float" }
}
mutate {
    rename => {
        "longitud_corregida" => "[location][lon]"
        "latitude_corregida" => "[location][lat]"
    }
}
How can I keep all the decimals? With geolocation, losing even one decimal can return the wrong city.
Another question (related)
I add the data to the csv document as follows:
# echo "5,15,Parets del Vallès,76,0,8150,"41.565505","2.234999575",LARINGITIS AGUDA,11/3/17 4:20,1,38,1,2,POINT(2.2349995750000695 41.565505000000044)" >> data/visits.csv
But in the original document there are commas instead of dots in the coordinates, like this:
# echo "5,15,Parets del Vallès,76,0,8150,"41,565505","2,234999575",LARINGITIS AGUDA,11/3/17 4:20,1,38,1,2,POINT(2.2349995750000695 41.565505000000044)" >> data/visits.csv
But the problem was that the comma was taken as the field separator, and all the data was sent to Elasticsearch wrong, like here:
Here the latitude was 41,565505, but that comma made it read 41 as the latitude and 565505 as the longitude. I changed the commas to dots, but I am not sure whether the float conversion understands both commas and dots, or just dots. My question is: was I wrong to change the commas to dots? Is there a better way to correct this?
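For what it's worth, the comma-to-dot rewrite could also be done inside Logstash instead of editing the CSV. A sketch (assuming the coordinate columns are quoted in the CSV so the csv filter keeps them whole, and using the field names from the convert block above); the gsub sits in its own mutate block because, within a single mutate, convert runs before gsub:
filter {
    mutate {
        # turn "41,565505" into "41.565505" before the float conversion
        gsub => [
            "latitude_corregida", ",", ".",
            "longitud_corregida", ",", "."
        ]
    }
    mutate {
        convert => { "longitud_corregida" => "float" }
        convert => { "latitude_corregida" => "float" }
    }
}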

Create a geo_point mapping for the lat/lon fields. This will lead to more precise, internally optimized storage in ES and allow you more sophisticated geo-queries.
Please keep in mind that you'll need to reindex the data, as mapping changes are not possible afterwards (if there are already docs present having the fields to change).
Zero downtime approach:
Create a new index with an optimized mapping (derive it from the current one and make your changes manually; see the sketch after this list)
Reindex the data (at least some docs for verification)
Empty the new index again
Change the logstash destination to the new index (consider using aliases)
Reindex the old data into the new index
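For example, a minimal sketch of the mapping and reindex calls in Kibana Dev Tools syntax (the index names visits and visits_v2 are placeholders, and very old ES versions additionally require a mapping type level):
PUT visits_v2
{
  "mappings": {
    "properties": {
      "location": { "type": "geo_point" }
    }
  }
}

POST _reindex
{
  "source": { "index": "visits" },
  "dest": { "index": "visits_v2" }
}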

Related

Why is the max_score higher than the _score-sorted first hit's _score in Elasticsearch?

I have an Elasticsearch (8.1) index on which I run a simple match or multi_match query (I tried both, both show the same behavior, even the simplest ones).
In the result it is always the case that max_score is higher than the first hit's _score.
If I add a terms aggregation (on a keyword field) with a top_hits sub-aggregation (with sorting on _score) then the first hit from the first bucket actually has _score == max_score (but it is obviously also a different hit compared to the "main" hits). So, the top_hits aggregation actually does what I want ("fetch all matching documents and sort by _score"). The "main" hits seem to miss some results, however.
How can I make sure that the "main" hits do not "drop" documents? What is the internal mechanics behind this?
I added my PHP array that gets JSON encoded and produces the Elasticsearch query:
[
    'size' => 10,
    'query' => [
        // the result of this does not have all documents
        // that appear in the aggregation,
        // and the highest ranked doc has a lower score than max_score
        'bool' => [
            'must' => [
                [
                    'match' => [
                        'my_text_field' => [
                            'query' => 'searchword'
                        ]
                    ]
                ],
                ['term' => ['my_other_field' => ['value' => 3]]],
                // plus some more simple term conditions
                // on other simple integral fields, but no scripts or similar,
                // just simple "WHERE a = 5" conditions
            ]
        ]
    ],
    // this aggregation has other/more hits than the directly retrieved docs, matching the max_score.
    // If I remove the aggregation nothing changes for the actual result
    'aggs' => [
        'my_agg' => [
            'terms' => ['field' => 'my_agg_field', 'order' => ['score' => 'desc']],
            'aggs' => [
                'score' => ['max' => ['script' => '_score']],
                'filteredHits' => [
                    'top_hits' => [
                        'size' => 10
                    ]
                ]
            ]
        ]
    ]
]

Elasticsearch/Kibana [7.17] how to add custom autogenerated field

We have an ES/Kibana [7.17] setup running and everything works fine so far, but we want to auto-translate a field based on a static table. How is this possible? I remember it was possible via custom formats in older Kibana versions, but I cannot find how to do it in this one.
e.g.
1 => HR Department
2 => IT Department
3 => Production
etc.
Data is:
Max Muster 3
Data should be
Max Muster 3 Production
P.S. I tried adding a runtime field to the template but it always complains that the syntax is wrong
This can be done with the Logstash translate filter:
filter {
    translate {
        source => "[dep]"
        target => "[department]"
        dictionary => {
            "1" => "HR Dep"
            "2" => "IT Dep"
            "3" => "Production"
        }
    }
}
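Regarding the P.S.: if you'd rather solve it with a runtime field, a rough Painless sketch (my-index is a placeholder, and it assumes the source field is indexed as a keyword named dep):
PUT my-index/_mapping
{
  "runtime": {
    "department": {
      "type": "keyword",
      "script": {
        "source": "def m = ['1': 'HR Dep', '2': 'IT Dep', '3': 'Production']; if (doc['dep'].size() != 0 && m.containsKey(doc['dep'].value)) { emit(m[doc['dep'].value]); }"
      }
    }
  }
}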

Checking if a ruby hash contains a value greater than x

I have the following object returned from an InfluxDB query, and I want to be able to check if any of the derivatives are equal to or greater than, say, 100, and if so do stuff.
I've been trying to use select to check that field, but I really don't actually understand how to work with a data structure like this. How would I go about iterating through every derivative value in my returned object?
I'm not really seeing an example that's similar to my case in the enumerable documentation.
https://ruby-doc.org/core-2.4.0/Enumerable.html
[{
    "name" => "powerdns_value",
    "tags" => nil,
    "values" => [
        { "time" => "2017-03-21T14:20:00Z", "derivative" => 1 },
        { "time" => "2017-03-21T14:30:00Z", "derivative" => 900 },
        { "time" => "2017-03-21T14:40:00Z", "derivative" => 0 },
        { "time" => "2017-03-21T15:20:00Z", "derivative" => 0 }
    ]
}]
If you just want to know whether one of the hashes in your array meets the condition:
arr.first['values'].any? { |hash| hash['derivative'] >= 100 }
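If the result can contain more than one series, the same check nests one level deeper (a sketch, assuming the query result is assigned to arr as above):
# true if any series has a point with a derivative >= 100
if arr.any? { |series| series['values'].any? { |point| point['derivative'] >= 100 } }
  # do stuff
end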

Select specific field in MongoDB using ruby then pass the value to a variable

I'm working on a script that should return the value of a specific field and exclude the other fields. I tried this code:
name = 'bierc'
puts collection.find({"name"=> name},{:fields => { "_id" => 0}}).to_a
and
name = 'bierc'
collection.find("name" => name,"_id" => 0).each{|row| puts row.inspect}
Both of these return:
{"_id"=>BSON::ObjectId('55f0d965fcd4fe1c659cf472'), "name"=>"bierc", "song"=>"testsong"}
I want to select name only, excluding song and especially _id, and then pass the value of the name field to a variable.
The option is not fields but projection. You still need "_id" => 0, but then setting 1 for any field or fields you do want to select excludes the others.
collection.find({ "name" => name }, projection: { "_id" => 0, "name" => 1 })
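To then get the value into a variable, one option is to take the first matching document (a sketch using the Ruby mongo driver):
doc = collection.find({ "name" => name }, projection: { "_id" => 0, "name" => 1 }).first
name_value = doc["name"] if doc  # => "bierc"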

Ordering array by dependencies with perl

I have an array of hashes:
my @arr = get_from_somewhere();
The @arr contents (for example) is:
@arr = (
    { id => "id2",    requires => 'someid', text => "another text2" },
    { id => "xid4",   requires => 'id2',    text => "text44" },
    { id => "someid", requires => undef,    text => "some text" },
    { id => "id2",    requires => 'someid', text => "another text2" },
    { id => "aid",    requires => undef,    text => "alone text" },
    { id => "id2",    requires => 'someid', text => "another text2" },
    { id => "xid3",   requires => 'id2',    text => "text33" },
);
I need something like:
my $texts = join("\n", get_ordered_texts(@arr) );
So I need to write a sub that returns the texts from the hashes in dependency order; from the above example I need to get:
"some text", #someid the id2 depends on it - so need be before id2
"another text2", #id2 the xid3 and xid4 depends on it - and it is depends on someid
"text44", #xid4 the xid4 and xid3 can be in any order, because nothing depend on them
"text33", #xid3 but need be bellow id2
"alone text", #aid nothing depends on aid and hasn't any dependencies, so this line can be anywhere
As you can see, @arr can contain duplicated "lines" ("id2" in the above example); each id must be output only once.
I'm not providing any code example yet, because I have no idea how to start. ;(
Is there a CPAN module that can be used for the solution?
Can anybody point me in the right direction?
Using Graph:
use Graph qw( );
my @recs = (
    { id => "id2",    requires => 'someid', text => "another text2" },
    { id => "xid4",   requires => 'id2',    text => "text44" },
    { id => "someid", requires => undef,    text => "some text" },
    { id => "id2",    requires => 'someid', text => "another text2" },
    { id => "aid",    requires => undef,    text => "alone text" },
    { id => "id2",    requires => 'someid', text => "another text2" },
    { id => "xid3",   requires => 'id2',    text => "text33" },
);
sub get_ordered_recs {
    my %recs;
    my $graph = Graph->new();
    for my $rec (@_) {
        my ($id, $requires) = @{$rec}{qw( id requires )};
        $graph->add_vertex($id);
        $graph->add_edge($requires, $id) if $requires;
        $recs{$id} = $rec;   # duplicate ids simply collapse into one entry
    }
    return map $recs{$_}, $graph->topological_sort();
}
my @texts = map $_->{text}, get_ordered_recs(@recs);
An interesting problem.
Here's my first round solution:
sub get_ordered_texts {
    my %dep_found;    # track the set of satisfied dependencies (by id)
    my @sorted_arr;   # output
    my @pending = @_;
    while (@pending) {
        my @still_pending;
        for my $value (@pending) {
            # not ready for this text yet: its requirement hasn't been seen
            if (defined $value->{requires} && !$dep_found{ $value->{requires} }) {
                push @still_pending, $value;
                next;
            }
            # add to the sorted list, but only once per id
            push @sorted_arr, $value->{text}
                unless $dep_found{ $value->{id} };
            # remember that we found it
            $dep_found{ $value->{id} }++;
        }
        # infinite loop protection: no progress means a missing or circular requirement
        die "some requirements don't exist or there is a dependency loop"
            if @still_pending == @pending;
        @pending = @still_pending;
    }
    return \@sorted_arr;
}
This is not terribly efficient (it rescans the remaining items once per dependency level, so the worst case is O(n²)), but if you don't have a huge dataset, it's probably OK.
I would use a directed graph to represent the dependency tree and then walk the graph. I've done something very similar using Graph.pm.
Each of your hashes would be a graph vertex and the edges would represent the dependencies. This has the added benefit of supporting more complex dependencies in the future, as well as providing shortcut functions for working with the graph.
You didn't say what to do if the dependencies are "independent" of each other.
E.g. id1 requires id2; id3 requires id4; id3 requires id5. What should the order be (other than id2 before id1, and id4/id5 before id3)?
What you want is basically a BFS (breadth-first search) of a tree (directed graph) of dependencies (or a forest, depending on the answer above, a forest being a set of non-connected trees).
To do that:
Find all of the root nodes (ids that don't have a requirement themselves)
You can easily do that by building a hash of ALL the ids using grep on your data structure
Put all those root nodes into a starting array.
Then implement BFS, as in the sketch below. If you need help implementing basic BFS using an array and a loop in Perl, ask a separate question. There may be a CPAN module for it, but the algorithm/code is rather trivial (at least once you've written it once :)
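For reference, a rough sketch of that approach (it assumes the single requires parent per id from the question, and no dependency loops):
my (%by_id, %children);
for my $rec (@arr) {
    $by_id{ $rec->{id} } //= $rec;   # keep only the first copy of a duplicated id
    push @{ $children{ $rec->{requires} } }, $rec->{id}
        if defined $rec->{requires};
}
# the root nodes: ids that require nothing
my @queue = grep { !defined $by_id{$_}{requires} } keys %by_id;
my (@texts, %seen);
while (defined( my $id = shift @queue )) {
    next if $seen{$id}++;                       # output each id only once
    push @texts, $by_id{$id}{text};
    push @queue, @{ $children{$id} || [] };     # visit dependents next
}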
