Can't Filter by geoip.location - elasticsearch

Using ELK 6.x, it seems I cannot plot points because geoip.location is not populated.
I also added a template, which I hope is correct. I'm not an expert, but I am pretty sure my points aren't rendered because the data is missing there.
Kibana 6.4.2
Logstash 6.4.2-1
Elasticsearch 6.4.2
Here are the configs:
input {
udp {
port => 9996
codec => netflow {
versions => [5, 7, 9, 10]
}
type => netflow
}
}
filter {
geoip {
source => "[netflow][ipv4_src_addr]"
target => "src_geoip"
database => "/usr/share/GeoIP/GeoLite2-City.mmdb"
}
geoip {
source => "[netflow][ipv4_dst_addr]"
target => "dst_geoip"
database => "/usr/share/GeoIP/GeoLite2-City.mmdb"
}
}
output {
if [type] == "netflow" {
elasticsearch {
hosts => ["localhost:9200"]
index => "logstash-%{+YYYY.MM.dd}"
}
} else {
elasticsearch {
hosts => ["localhost:9200"]
sniffing => true
manage_template => false
index => "%{[#metadata][beat]}-%{+YYYY.MM.dd}"
document_type => "%{[#metadata][type]}"
}
}
}
The mapping looks like this:
"geoip": {
"dynamic": "true",
"properties": {
"ip": {
"type": "ip"
},
"latitude": {
"type": "half_float"
},
"location": {
"type": "geo_point"
},
"longitude": {
"type": "half_float"
}
}
},
Template
{
"logstash": {
"order": 0,
"version": 60001,
"index_patterns": [
"logstash-*"
],
"settings": {
"index": {
"refresh_interval": "5s"
}
},
"mappings": {
"_default_": {
"dynamic_templates": [
{
"message_field": {
"path_match": "message",
"match_mapping_type": "string",
"mapping": {
"type": "text",
"norms": false
}
}
},
{
"string_fields": {
"match": "*",
"match_mapping_type": "string",
"mapping": {
"type": "text",
"norms": false,
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
}
}
}
],
"properties": {
"#timestamp": {
"type": "date"
},
"#version": {
"type": "keyword"
},
"geoip": {
"dynamic": true,
"properties": {
"ip": {
"type": "ip"
},
"location": {
"type": "geo_point"
},
"latitude": {
"type": "half_float"
},
"longitude": {
"type": "half_float"
}
}
}
}
}
},
"aliases": {}
}
}
My indexed documents come back with src_geoip or dst_geoip fields, but only the following:
dst_geoip.latitude 26.097
dst_geoip.location.lat 26.097
dst_geoip.location.lon -80.181
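A likely explanation (my reading of the configs above, not something confirmed in this thread): the template only maps geoip.location as a geo_point, while the geoip filters write to src_geoip and dst_geoip, so dst_geoip.location falls back to dynamic mapping and Kibana cannot plot it. A minimal sketch of the additional template properties that would cover both targets, following the same pattern as the existing geoip block:
"src_geoip": {
  "dynamic": true,
  "properties": {
    "ip": { "type": "ip" },
    "location": { "type": "geo_point" },
    "latitude": { "type": "half_float" },
    "longitude": { "type": "half_float" }
  }
},
"dst_geoip": {
  "dynamic": true,
  "properties": {
    "ip": { "type": "ip" },
    "location": { "type": "geo_point" },
    "latitude": { "type": "half_float" },
    "longitude": { "type": "half_float" }
  }
}
Templates are only applied when an index is created, so the change would only show up in the next logstash-YYYY.MM.dd index (or after a reindex).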

Related

Why does the index return nothing after setting the mapping?

I am using Elasticsearch 7.12.0, Logstash 7.12.0, Kibana 7.12.0 on Windows 10 x64. Logstash config file logistics.conf:
input {
jdbc {
jdbc_driver_library => "D:\\tools\\postgresql-42.2.16.jar"
jdbc_driver_class => "org.postgresql.Driver"
jdbc_connection_string => "jdbc:postgresql://localhost:5433/ld"
jdbc_user => "xxxx"
jdbc_password => "sEcrET"
schedule => "*/5 * * * *"
statement => "select * from inventory_item_report();"
}
}
filter {
uuid {
target => "uuid"
}
}
output {
elasticsearch {
hosts => "http://localhost:9200"
index => "localdist"
document_id => "%{uuid}"
doc_as_upsert => "true"
}
}
Run logstash
logstash -f logistics.conf
If I don't set the mapping explicitly, the query
GET /localdist/_search
{
"query": {
"match_all": {}
}
}
returns many results.
My mappings
POST localdist/_mapping
{
}
DELETE /localdist
PUT /localdist
{
}
POST /localdist
{
}
PUT localdist/_mapping
{
"properties": {
"unt_cost": {
"type": "double"
},
"ii_typ": {
"type": "keyword"
},
"qty_uom_id": {
"type": "keyword"
},
"prod_id": {
"type": "keyword"
},
"root_cat_id": {
"type": "keyword"
},
"uom": {
"type": "keyword"
},
"product_name": {
"type": "text"
},
"ii_id": {
"type": "keyword"
},
"wght_uom_id": {
"type": "keyword"
},
"iid_seq_id": {
"type": "long"
},
"avai_diff": {
"type": "double"
},
"invt_change_typ": {
"type": "keyword"
},
"ccy": {
"type": "keyword"
},
"exp_date": {
"type": "date"
},
"req_amt": {
"type": "text"
},
"pur_cost": {
"type": "double"
},
"tot_pri": {
"type": "long"
},
"own_pid": {
"type": "keyword"
},
"doc_type": {
"type": "keyword"
},
"ii_date": {
"type": "date"
},
"fac_id": {
"type": "keyword"
},
"shipment_type_id": {
"type": "keyword"
},
"lot_id": {
"type": "keyword"
},
"phy_invt_id": {
"type": "keyword"
},
"facility_name": {
"type": "text"
},
"amt_ohand_diff": {
"type": "double"
},
"reason_id": {
"type": "keyword"
},
"cat_id": {
"type": "keyword"
},
"qty_ohand_diff": {
"type": "double"
},
"#timestamp": {
"type": "date"
}
}
}
Running the query
GET /localdist/_search
{
"query": {
"match_all": {}
}
}
returns nothing.
How can I fix this and make explicit mappings work correctly?
If I understand you correctly, you are indexing via Logstash. Elasticsearch then creates the index if it is missing, indexes the documents, and tries to guess the mapping for your documents based on the very first ones.
TL;DR: You are DELETING the index containing your data yourself.
With
DELETE /localdist
you are deleting the whole index including all data. After that, by issuing
PUT /localdist
{
}
you are re-creating the previously deleted index which is empty again. And at the end, you are setting the index mapping with
PUT localdist/_mapping
{
"properties": {
"unt_cost": {
"type": "double"
},
"ii_typ": {
"type": "keyword"
},
...
Now that you have an empty Elasticsearch index with a mapping set, start the Logstash pipeline again. If your documents match the index mapping, they should start to appear very quickly.
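For reference, a sketch of the same steps collapsed into a single index creation, so the mapping is in place before any data arrives (only a few of the fields from the question are repeated here; the rest follow the same pattern):
DELETE /localdist

PUT /localdist
{
  "mappings": {
    "properties": {
      "unt_cost": { "type": "double" },
      "ii_typ": { "type": "keyword" },
      "prod_id": { "type": "keyword" },
      "@timestamp": { "type": "date" }
    }
  }
}
After that, start the pipeline again with logstash -f logistics.conf and the documents should be indexed against the intended mapping.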

long and float fields showing up as text fields in Kibana

Running Kibana version 5.5.2.
My current setup: Logstash takes the logs from Docker containers and runs grok filters before sending the logs to Elasticsearch. The specific values that I need to show up as long and float are two timings from AWS calls to ECS and EC2, and currently a grok filter pulls them out. Here is the custom pattern that pulls out the ECS timings:
ECS_DESCRIBE_CONTAINER_INSTANCES (AWS)(%{SPACE})(ecs)(%{SPACE})(%{POSINT})(%{SPACE})(?<ECS_DURATION>(%{NUMBER}))(s)(%{SPACE})(?<ECS_RETRIES>(%{NONNEGINT}))(%{SPACE})(retries)
So I need ECS_DURATION to be a float and ECS_RETRIES to be a long. In the Docker log handler I have the following:
if [ECS_DURATION] {
mutate {
convert => ["ECS_DURATION", "float"]
}
}
if [ECS_RETRIES] {
mutate {
convert => ["ECS_RETRIES", "integer"]
}
}
When I look at the field in Kibana, it still shows as a text field, but when I make the following request to elasticsearch for the mappings, it shows those fields as long and float.
GET /logstash-2020.12.18/_mapping
{
"logstash-2020.12.18": {
"mappings": {
"log": {
"_all": {
"enabled": true,
"norms": false
},
"dynamic_templates": [
{
"message_field": {
"path_match": "message",
"match_mapping_type": "string",
"mapping": {
"norms": false,
"type": "text"
}
}
},
{
"string_fields": {
"match": "*",
"match_mapping_type": "string",
"mapping": {
"fields": {
"keyword": {
"ignore_above": 256,
"type": "keyword"
}
},
"norms": false,
"type": "text"
}
}
}
],
"properties": {
"#timestamp": {
"type": "date",
"include_in_all": false
},
"#version": {
"type": "keyword",
"include_in_all": false
},
"EC2_DURATION": {
"type": "float"
},
"EC2_RETRIES": {
"type": "long"
},
"ECS_DURATION": {
"type": "float"
},
"ECS_RETRIES": {
"type": "long"
},
I even created a custom mapping template in elasticsearch with the following call
PUT /_template/aws_durations?pretty
{
"template": "logstash*",
"mappings": {
"type1": {
"_source": {
"enabled": true
},
"properties": {
"ECS_DURATION": {
"type": "half_float"
},
"ECS_RETRIES": {
"type": "byte"
},
"EC2_DURATION": {
"type": "half_float"
},
"EC2_RETRIES": {
"type": "byte"
}
}
}
}
}
Have you checked that it's actually going into the if [ECS_DURATION] and if [ECS_RETRIES] conditions? (I wasn't able to comment.)
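One way to verify that (a debugging sketch, not part of the original pipeline) is to tag the event inside each conditional and print the result with the rubydebug codec, so it is obvious whether the convert ever runs:
filter {
  if [ECS_DURATION] {
    mutate {
      convert => ["ECS_DURATION", "float"]
      add_tag => ["ecs_duration_converted"]
    }
  }
}
output {
  stdout { codec => rubydebug }
}
If the tag never appears on the printed events, the problem is the grok field name or the conditional rather than the mapping. It may also be worth refreshing the index pattern's field list in Kibana (Management > Index Patterns), since Kibana caches field types separately from the live mapping.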

How to get more than one record?

I am using JDBC with Logstash to get data from a PostgreSQL query and export it to ES v7. Here is the configuration file:
input {
jdbc {
jdbc_connection_string => "jdbc:postgresql://ldatabase_rds_path?useSSL=true"
jdbc_user => "username"
jdbc_password => "password"
jdbc_driver_library => "/home/z/Documents/postgresql-42.2.18.jar"
jdbc_driver_class => "org.postgresql.Driver"
tracking_column => "id"
tracking_column_type => "numeric"
clean_run => true
schedule => "0 */1 * * *"
statement => "SELECT id as id, type as type, z_id as z_id, sender_id as sender_id, receiver_id as receiver_id, status as status, amount as amount, fees as fees, created as created, metadata as metadata, funding_source_from_id as funding_source_from_id, funding_source_to_id as funding_source_to_id, is_parent as is_parent, destination_type as destination_type, source_type as source_type FROM payments_transfer"
}
}
output {
stdout { codec => json_lines }
elasticsearch {
hosts => ["localhost:9200"]
manage_template => false
index => "payments_transfer_data"
document_id => "%{id}"
}
}
It takes a long time and only gets 1 record from the database!
I tried some solutions, like explicitly defining a mapping, so I added a mapping for the data like this:
PUT payments_transfer_data/_mapping/doc?include_type_name=true
{
"properties": {
"#timestamp": {
"type": "date"
},
"#version": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword"
}
}
},
"amount": {
"type": "float"
},
"created": {
"type": "date"
},
"destination_type": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword"
}
}
},
"z_id": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword"
}
}
},
"fees": {
"type": "float"
},
"funding_source_from_id": {
"type": "long"
},
"funding_source_to_id": {
"type": "long"
},
"id": {
"type": "long"
},
"is_parent": {
"type": "boolean"
},
"metadata": {
"type": "keyword"
},
"receiver_id": {
"type": "long"
},
"sender_id": {
"type": "long"
},
"source_type": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword"
}
}
},
"status": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword"
}
}
},
"type": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword"
}
}
}
}
}
Here is the only record that gets indexed:
source_type:customer sender_id:3 destination_type:funding-source type:funded_transfer z_id:863a240c-2011-e911-8114-bacd823e9f1d receiver_id:65 status:processed amount:550 funding_source_from_id:332 @timestamp:Nov 8, 2020 @ 16:00:08.809 fees:5.61 is_parent:false created:Jan 5, 2019 @ 21:28:21.847 @version:1 id:2,160 funding_source_to_id: - metadata: - _id:2160 _type:doc _index:payments_transfer_data _score: -
According to the official documentation, you should use tracking_column in conjunction with use_column_value. With your current settings, tracking_column will not have any effect.
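As an illustration (a sketch with placeholder connection details, not the asker's exact setup), using both options together and referencing the plugin's built-in :sql_last_value parameter in the statement looks roughly like this:
input {
  jdbc {
    jdbc_connection_string => "jdbc:postgresql://localhost:5432/db"
    jdbc_user => "username"
    jdbc_password => "password"
    jdbc_driver_library => "/path/to/postgresql-42.2.18.jar"
    jdbc_driver_class => "org.postgresql.Driver"
    use_column_value => true
    tracking_column => "id"
    tracking_column_type => "numeric"
    schedule => "0 */1 * * *"
    statement => "SELECT * FROM payments_transfer WHERE id > :sql_last_value ORDER BY id"
  }
}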
Have you tried using a plain SELECT * FROM to see if everything is being pulled?

Elasticsearch long and double mapping does not work

I am using logstash for pushing logs to ES from DynamoDB:
filter {
json {
source => "message"
target => "doc"
}
mutate {
convert => {
"[doc][dynamodb][keys][customerId][N]" => "integer"
"[doc][dynamodb][newImage][callDate][N]" => "integer"
"[doc][dynamodb][newImage][price][S]" => "float"
}
}
date {
match => [ "[doc][dynamodb][newImage][callDate][N]", "UNIX" ]
target => "#timestamp"
}
}
output {
elasticsearch {
hosts => ["localhost"]
codec => "json"
index => "cdr-%{+YYYY.MM.dd}"
document_type => "cdr"
document_id => "%{[doc][dynamodb][keys][uniqueId][S]}"
template_name => "cdr"
template => "/opt/logstash/templates/logstash_dynamodb_template.json"
template_overwrite => true
}
stdout { }
}
In fact, the mutate convert makes no difference, whether it is present or removed.
{
"order": 0,
"template": "cdr*",
"settings": {
"index.refresh_interval": "5s"
},
"mappings": {
"cdr": {
"dynamic": "false",
"_all": {
"enabled": false
},
"properties": {
"doc": {
"properties": {
"dynamodb": {
"properties": {
"keys": {
"properties": {
"customerId": {
"properties": {
"N": {
"type": "long"
}
}
},
"uniqueId": {
"properties": {
"S": {
"type": "string",
"index": "not_analyzed"
}
}
}
}
},
"newImage": {
"properties": {
"callDate": {
"properties": {
"N": {
"type": "date",
"format": "epoch_second"
}
}
},
"direction": {
"properties": {
"S": {
"type": "string",
"index": "not_analyzed"
}
}
},
"disposition": {
"properties": {
"S": {
"type": "string",
"index": "not_analyzed"
}
}
},
"price": {
"properties": {
"S": {
"type": "double"
}
}
},
"uniqueId": {
"properties": {
"S": {
"type": "string",
"index": "not_analyzed"
}
}
}
}
}
}
}
}
}
}
}
}
}
Yes, doc.message contains all described fields, but they are not mapped. Here is the screenshot from ES:
As you can see, only the string mappings work as expected.
The error while querying says: No mapping found for [doc.dynamodb.newImage.callDate.N] in order to sort on
Does anyone know the reason for this behavior?
Tip: a Logstash debug run (bin/logstash -f filters/logstash-dynamodb.conf --debug) does not show any errors.
Thanks in advance for any ideas.
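No answer was posted here, but one thing that is easy to rule out (my suggestion, not something stated in the thread) is whether the template ever reached the cluster and whether the existing daily index was created before the template, since templates are only applied at index creation time and "dynamic": "false" keeps the index from picking up the fields later:
GET /_template/cdr
GET /cdr-*/_mapping
If the live cdr-YYYY.MM.dd index predates the template, the long and double mappings will only apply to indices created afterwards, so the next day's index (or a reindex) is needed to see them.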

Make a field "indexed" in Elasticsearch

I am using the ELK stack with Filebeat.
I am using a default template for mapping.
I am not getting all the needed fields as "indexed".
Here is my mapping file:
{
"mappings": {
"_default_": {
"_all": {
"enabled": true,
"norms": {
"enabled": false
}
},
"dynamic_templates": [
{
"template1": {
"mapping": {
"doc_values": true,
"ignore_above": 1024,
"index": "not_analyzed",
"type": "{dynamic_type}"
},
"match": "*"
}
}
],
"properties": {
"#timestamp": {
"type": "date"
},
"offset": {
"type": "long",
"doc_values": "true"
}
}
}
},
"settings": {
"index.refresh_interval": "5s"
},
"template": "filebeat-*"
}
Here is my config file for output.
output {
elasticsearch {
hosts => ["localhost:9200"]
sniffing => true
manage_template => false
index => "%{[#metadata][beat]}-%{+YYYY.MM.dd}"
document_type => "%{[#metadata][type]}"
}
}
Let's say I want a field named channel as an indexed field. How do I modify the template?
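There is no accepted answer quoted here, but as an illustration of the mechanics being asked about (a sketch only, assuming channel is a string field that should be stored not_analyzed, matching the ES 2.x-style template above), the field could be added next to offset in the template's properties:
"properties": {
  "@timestamp": {
    "type": "date"
  },
  "offset": {
    "type": "long",
    "doc_values": "true"
  },
  "channel": {
    "type": "string",
    "index": "not_analyzed",
    "doc_values": true,
    "ignore_above": 1024
  }
}
That said, the dynamic template already maps every new string field with "index": "not_analyzed", so an explicit entry is mainly useful when a different type or different options are wanted.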
