I am trying to visualize my data file using Kibana. The format of my file is as follows:
timeStamp;elapsed;label;responseCode;responseMessage;threadName;success;failureMessage;bytes;grpThreads;allThreads;Latency;SampleCount;ErrorCount;Hostname
2016-01-16 02:27:17,565;912;HTTP Request;200;OK;Thread Group 1-5;true;;78854;10;10;384;1;0;sundeep-Latitude-E6440
To map the above data, my logstash config is as follows:
input {
  file {
    path => [ "/home/sundeep/data/test.csv" ]
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  if [message] =~ "responseCode" {
    drop { }
  } else {
    csv {
      separator => ";"
      columns => ["timeStamp", "elapsed", "label", "responseCode", "responseMessage", "threadName",
                  "success", "failureMessage", "bytes", "grpThreads", "allThreads", "Latency",
                  "SampleCount", "ErrorCount", "Hostname"]
    }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "aa-%{+yyyy-MM-dd}"
  }
}
The template file is as follows:
{
  "template": "aa-*",
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0,
    "index.refresh_interval": "5s"
  },
  "mappings": {
    "logs": {
      "properties": {
        "timeStamp": {
          "index": "analyzed",
          "type": "date",
          "format": "yyyy-MM-dd HH:mm:ss,SSS"
        },
        "elapsed": {
          "type": "long"
        },
        "dummyfield": {
          "type": "long"
        },
        "label": {
          "type": "string"
        },
        "responseCode": {
          "type": "integer"
        },
        "threadName": {
          "type": "string"
        },
        "success": {
          "type": "boolean"
        },
        "failureMessage": {
          "type": "string"
        },
        "bytes": {
          "type": "long"
        },
        "grpThreads": {
          "type": "long"
        },
        "allThreads": {
          "type": "long"
        },
        "Latency": {
          "type": "long"
        },
        "SampleCount": {
          "type": "long"
        },
        "ErrorCount": {
          "type": "long"
        },
        "Hostname": {
          "type": "string"
        }
      }
    }
  }
}
As you can see, a new index is created in Elasticsearch as soon as I start Logstash with this config file. The newly created index matches the aa-* pattern, which is expected. Now, when I search for the index in Kibana, I can see it listed (screenshot omitted).
However, I cannot see any data when I try to plot a line chart.
Things I have tried:
Deleting the index from Sense and then creating it again via Sense (did not work).
Changing the timestamp in the log file (did not help, as the import itself was successful).
Trying the solution from a similar question (did not work).
Also, I was able to visualize another dataset from a blog post.
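For reference, a quick way to confirm that documents actually reached the index is a search from Sense; a minimal sketch, with the index name taken from the trace log below:
GET aa-2016-01-15/_search
{
  "size": 1,
  "query": { "match_all": {} }
}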
Trace log:
[2016-01-16 02:45:41,105][INFO ][cluster.metadata ] [Hulk 2099] [aa-2016-01-15] deleting index
[2016-01-16 02:46:01,370][INFO ][cluster.metadata ] [Hulk 2099] [aa-2016-01-15] creating index, cause [auto(bulk api)], templates [aa], shards 1/[0], mappings [logs]
[2016-01-16 02:46:01,451][INFO ][cluster.metadata ] [Hulk 2099] [aa-2016-01-15] update_mapping [logs]
ELK Stack versions:
ElasticSearch - 2.1
Logstash - 2.1
Kibana - 4.3.1.1
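A note on the empty chart: Logstash sets @timestamp to the time an event was processed unless a date filter derives it from the event itself, and Kibana plots against the time field chosen for the index pattern. A minimal sketch of such a filter for the timestamp format above (field name and format taken from the config and template):
filter {
  date {
    match => [ "timeStamp", "yyyy-MM-dd HH:mm:ss,SSS" ]
    target => "@timestamp"
  }
}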
Related
I'm trying to send data to Elasticsearch, but I'm running into an issue where my number field only comes up as a string. These are the steps I took.
Step 1. Add index & mapping
PUT http://123.com:5101/core_060619/
{
  "mappings": {
    "properties": {
      "date": {
        "type": "date",
        "format": "HH:mm yyyy-MM-dd"
      },
      "data": {
        "type": "integer"
      }
    }
  }
}
Result:
{
  "acknowledged": true,
  "shards_acknowledged": true,
  "index": "core_060619"
}
Step 2. Add data
PUT http://123.com:5101/core_060619/doc/1
{
  "test": [ {
    "data": "119050300",
    "date": "00:00 2019-06-03"
  } ]
}
Result:
{
  "error": {
    "root_cause": [
      {
        "type": "illegal_argument_exception",
        "reason": "Rejecting mapping update to [zyxnewcoreyxbl_060619] as the final mapping would have more than 1 type: [_doc, doc]"
      }
    ],
    "type": "illegal_argument_exception",
    "reason": "Rejecting mapping update to [zyxnewcoreyxbl_060619] as the final mapping would have more than 1 type: [_doc, doc]"
  },
  "status": 400
}
You cannot have more than one type of document in Elasticsearch 6.0.0+. If you set your document type to doc, then you can add more documents with simply PUT http://123.com:5101/core_060619/doc/1, PUT http://123.com:5101/core_060619/doc/2, etc.
Elasticsearch 6.+
PUT core_060619/
{
  "mappings": {
    "doc": { // the type of documents in this index is 'doc'
      "properties": {
        "date": {
          "type": "date",
          "format": "HH:mm yyyy-MM-dd"
        },
        "data": {
          "type": "integer"
        }
      }
    }
  }
}
Since we created the mapping with the doc document type, we can now add new documents by simply using /doc/_id:
PUT core_060619/doc/1
{
  "test": [ {
    "data": "119050300",
    "date": "00:00 2019-06-03"
  } ]
}
PUT core_060619/doc/2
{
  "test": [ {
    "data": "111120300",
    "date": "10:15 2019-06-02"
  } ]
}
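You can then confirm a document was indexed by fetching it back by id:
GET core_060619/doc/1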
Elasticsearch 7.+
Types are removed, but you can use a custom type-like field instead:
PUT twitter
{
  "mappings": {
    "_doc": {
      "properties": {
        "type": { "type": "keyword" },
        "name": { "type": "text" },
        "user_name": { "type": "keyword" },
        "email": { "type": "keyword" },
        "content": { "type": "text" },
        "tweeted_at": { "type": "date" }
      }
    }
  }
}
PUT twitter/_doc/user-kimchy
{
  "type": "user",
  "name": "Shay Banon",
  "user_name": "kimchy",
  "email": "shay@kimchy.com"
}
PUT twitter/_doc/tweet-1
{
  "type": "tweet",
  "user_name": "kimchy",
  "tweeted_at": "2017-10-24T09:00:00Z",
  "content": "Types are going away"
}
GET twitter/_search
{
  "query": {
    "bool": {
      "must": {
        "match": {
          "user_name": "kimchy"
        }
      },
      "filter": {
        "match": {
          "type": "tweet"
        }
      }
    }
  }
}
See the Elasticsearch documentation: Removal of mapping types.
I am new to Elasticsearch and I am trying to use Logstash to load data into an index. Following is a part of my Logstash config:
filter {
  aggregate {
    task_id => "%{code}"
    code => "
      map['campaignId'] = event.get('CAM_ID')
      map['country'] = event.get('COUNTRY')
      map['countryName'] = event.get('COUNTRYNAME')
      # etc
    "
    push_previous_map_as_event => true
    timeout => 5
  }
}
output {
  elasticsearch {
    document_id => "%{code}"
    document_type => "company"
    index => "company_v1"
    codec => "json"
    hosts => ["127.0.0.1:9200"]
  }
}
I was expecting the aggregation to map, for instance, the column 'CAM_ID' to a property named 'campaignId' in the Elasticsearch index. Instead, it is creating a property named 'cam_id', which is the column name in lowercase. The same happens with the rest of the properties. (A possible workaround is sketched after this question.)
Following is the index definition after Logstash has executed:
{
  "company_v1": {
    "aliases": {},
    "mappings": {
      "company": {
        "properties": {
          "@timestamp": {
            "type": "date"
          },
          "@version": {
            "type": "text",
            "fields": {
              "keyword": {
                "type": "keyword",
                "ignore_above": 256
              }
            }
          },
          "cam_id": {
            "type": "long"
          },
          "campaignId": {
            "type": "long"
          },
          "cam_type": {
            "type": "text",
            "fields": {
              "keyword": {
                "type": "keyword",
                "ignore_above": 256
              }
            }
          },
          "campaignType": {
            "type": "text"
          }
        }
      }
    },
    "settings": {
      "index": {
        "creation_date": "1545905435871",
        "number_of_shards": "5",
        "number_of_replicas": "1",
        "uuid": "Dz0x16ohQWWpuhtCB3Y4Vw",
        "version": {
          "created": "6050399"
        },
        "provided_name": "company_v1"
      }
    }
  }
}
'campaignId' and 'campaignType' were created by me when I created the index, but Logstash created the other two.
Can someone explain how to configure Logstash to customize the property names of the index documents when data is being loaded?
Thank you very much.
Best Regards
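One possible approach, sketched here under the assumption that the events arrive with the lowercased fields cam_id and cam_type: rename them with a mutate filter before the elasticsearch output.
filter {
  mutate {
    rename => {
      "cam_id"   => "campaignId"
      "cam_type" => "campaignType"
    }
  }
}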
Using Kibana, I have created the following index:
put newsindex
{
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 2
  },
  "mappings": {
    "news": {
      "properties": {
        "NewsID": {
          "type": "integer"
        },
        "NewsType": {
          "type": "text"
        },
        "BodyText": {
          "type": "text"
        },
        "Caption": {
          "type": "text"
        },
        "HeadLine": {
          "type": "text"
        },
        "Approved": {
          "type": "text"
        },
        "Author": {
          "type": "text"
        },
        "Contact": {
          "type": "text"
        },
        "DateCreated": {
          "type": "date",
          "format": "date_time"
        },
        "DateSubmitted": {
          "type": "date",
          "format": "date_time"
        },
        "LastModifiedDate": {
          "type": "date",
          "format": "date_time"
        }
      }
    }
  }
}
I have populated the index with Logstash. If I just perform a match_all query, all my records are returned as you'd expect. However, when I try to perform a targeted query such as:
get newsindex/_search
{
  "query": {
    "match": { "headline": "construct abnomolies" }
  }
}
I can see headline as a property of _source, but my query is ignored, i.e. I still receive everything regardless of what's in the headline. How do I need to change my index to make headline searchable? I'm using Elasticsearch 5.6.3.
I needed to change the property names in my index to be lowercase. I noticed in the output window that the properties under _source were lowercase, and in Kibana the predictive text was offering the lowercase variants of my notation. I dropped my index, re-populated it, and it now works.
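For example, once the mapping uses lowercase property names, the same targeted query should match (a sketch against the renamed field):
get newsindex/_search
{
  "query": {
    "match": { "headline": "construct abnomolies" }
  }
}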
I have deleted a mapping with the command:
curl -XDELETE 'http://localhost:9200/logstash_log*/'
In my conf, I have defined the index as follows:
output {
  elasticsearch {
    hosts => localhost
    index => "logstash_log-%{+YYYY.MM.dd}"
  }
}
Then I tried to create a new mapping, but I got this error:
# curl -XPUT http://localhost:9200/logstash_log*/_mapping/log -d '
{
  "properties": {
    "@timestamp": {"type": "date", "format": "strict_date_optional_time||epoch_millis"},
    "message": {"type": "string"},
    "host": {"type": "ip"},
    "name": {"type": "string", "index": "not_analyzed"},
    "type": {"type": "string"}
  }
}'
{"error":{"root_cause":[{"type":"index_not_found_exception","reason":"no such index","resource.type":"index_or_alias","resource.id":"logstash_log*","index":"logstash_log*"}],"type":"index_not_found_exception","reason":"no such index","resource.type":"index_or_alias","resource.id":"logstash_log*","index":"logstash_log*"},"status":404}
How can I fix it? Any help will be appreciated!
You need to re-create your index like this:
# curl -XPUT http://localhost:9200/logstash_log -d '{
  "mappings": {
    "log": {
      "properties": {
        "@timestamp": {
          "type": "date",
          "format": "strict_date_optional_time||epoch_millis"
        },
        "message": {
          "type": "string"
        },
        "host": {
          "type": "ip"
        },
        "name": {
          "type": "string",
          "index": "not_analyzed"
        },
        "type": {
          "type": "string"
        }
      }
    }
  }
}'
Since it looks like you're creating daily indices from Logstash, though, you're probably better off creating a template instead. Store the following content inside index_template.json:
{
  "template": "logstash_log-*",
  "mappings": {
    "log": {
      "properties": {
        "@timestamp": {
          "type": "date",
          "format": "strict_date_optional_time||epoch_millis"
        },
        "message": {
          "type": "string"
        },
        "host": {
          "type": "ip"
        },
        "name": {
          "type": "string",
          "index": "not_analyzed"
        },
        "type": {
          "type": "string"
        }
      }
    }
  }
}
And then modify your logstash configuration like this:
output {
  elasticsearch {
    hosts => localhost
    index => "logstash_log-%{+YYYY.MM.dd}"
    manage_template => true
    template_name => "logstash"
    template => "/path/to/index_template.json"
    template_overwrite => true
  }
}
Note that * is an invalid character in an index name:
Index name must not contain the following characters: \, /, *, ?, ", <, >, |, space, comma
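Once Logstash has started with this configuration, you can verify that the template was installed:
curl -XGET http://localhost:9200/_template/logstash?pretty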
What is the best way to configure an ES index template with mappings in a Docker container? I expected to use a template file, but it seems that from version 2 this is no longer possible. Executing an HTTP request also won't work, because the process isn't running during container creation. It could be done on each container launch with a script that starts ES and then issues the HTTP request to it, but that looks really ugly.
You can configure a template with mappings by executing an HTTP PUT request in a Linux terminal, as follows:
curl -XPUT http://ip:port/_template/logstash -d '
{
  "template": "logstash-*",
  "settings": {
    "number_of_replicas": 1,
    "number_of_shards": 8
  },
  "mappings": {
    "_default_": {
      "_all": {
        "store": false
      },
      "_source": {
        "enabled": true,
        "compress": true
      },
      "properties": {
        "_id": {
          "index": "not_analyzed",
          "type": "string"
        },
        "_type": {
          "index": "not_analyzed",
          "type": "string"
        },
        "field1": {
          "index": "not_analyzed",
          "type": "string"
        },
        "field2": {
          "type": "double"
        },
        "field3": {
          "type": "integer"
        },
        "xy": {
          "properties": {
            "x": {
              "type": "double"
            },
            "y": {
              "type": "double"
            }
          }
        }
      }
    }
  }
}
'
The "logstash-*" is your index name, you can have a try.
If using Logstash, you can make the template part of your Logstash pipeline config:
pipeline/logstash.conf
input {
  ...
}
filter {
  ...
}
output {
  elasticsearch {
    hosts => "elasticsearch:9200"
    template => "/usr/share/logstash/templates/logstash.template.json"
    template_name => "logstash"
    template_overwrite => true
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
Reference: https://www.elastic.co/guide/en/logstash/6.1/plugins-outputs-elasticsearch.html#plugins-outputs-elasticsearch-template
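If Logstash itself runs in a container, the template file referenced above also has to exist inside that container; one way is to mount it at startup. A sketch, where the image tag and host-side paths are assumptions:
docker run -d --name logstash \
  -v "$(pwd)/pipeline:/usr/share/logstash/pipeline" \
  -v "$(pwd)/logstash.template.json:/usr/share/logstash/templates/logstash.template.json" \
  docker.elastic.co/logstash/logstash:6.1.1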