logstash s3 input plugin not reading files from s3 - elasticsearch

I am trying to read logs from internal S3 storage (not AWS) using Logstash. Below is the config, but it always reports that no files were found in the bucket, even though I can see the files through the explorer. I have tried both .log and .txt files in the S3 storage for testing purposes; nothing works. Can someone help me with this?
input {
  s3 {
    access_key_id => "xxxxxxxxxxxxxxxxxxxxxx"
    secret_access_key => "xxxxxxxxxxxxxxxxxxxxx"
    endpoint => "http://example.com:9020"
    bucket => "samplelogs"
    temporary_directory => "C:/xxx/ELK/logstash-6.6.1"
    prefix => "/"
    add_field => { "source" => "gzfiles" }
    type => "s3"
  }
}
These are the logs:
Sending Logstash's logs to C:/XXXX/logstash-6.3.2/logs which is now configured via log4j2.properties
[2019-03-18T15:58:00,879][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-03-18T15:58:01,311][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.3.2"}
[2019-03-18T15:58:25,213][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2019-03-18T15:58:25,494][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2019-03-18T15:58:25,509][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2019-03-18T15:58:25,650][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2019-03-18T15:58:25,681][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2019-03-18T15:58:25,681][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>6}
[2019-03-18T15:58:25,713][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2019-03-18T15:58:25,728][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2019-03-18T15:58:25,744][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2019-03-18T15:58:25,759][INFO ][logstash.inputs.s3 ] Registering s3 input {:bucket=>"gvmslogs", :region=>"us-east-1"}
[2019-03-18T15:58:26,041][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x6ac2a093 run>"}
[2019-03-18T15:58:26,103][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-03-18T15:58:26,338][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2019-03-18T15:58:28,400][INFO ][logstash.inputs.s3 ] S3 input: No files found in bucket {:prefix=>"/"}
[2019-03-18T15:59:27,463][INFO ][logstash.inputs.s3 ] S3 input: No files found in bucket {:prefix=>"/"}
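A likely culprit here (my assumption, not something confirmed in the thread): the prefix option is compared literally against object key names, and S3 keys almost never begin with a slash, so prefix => "/" matches nothing. A minimal sketch of the input with the prefix dropped, reusing the same placeholder credentials and endpoint as above:
input {
  s3 {
    access_key_id => "xxxxxxxxxxxxxxxxxxxxxx"
    secret_access_key => "xxxxxxxxxxxxxxxxxxxxx"
    endpoint => "http://example.com:9020"
    bucket => "samplelogs"
    temporary_directory => "C:/xxx/ELK/logstash-6.6.1"
    # omit prefix entirely, or set it to a real key prefix such as "logs/"
    type => "s3"
  }
}
If the files live under a folder in the bucket, the prefix should name that folder without a leading slash.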

Related

logs are not going to elasticsearch

I have been trying for some time to send a simple log to Elasticsearch, and even with a very simple example the logs are not being sent from Logstash to Elasticsearch.
Services: on the same server for this test
Operating System: CentOS 7
Logstash version: 7.17.1
Elasticsearch version: 7.17.1
/etc/logstash/conf.d
input {
  file {
    path => "/var/log/elasticsearch/elasticsearch.log"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
/var/log/logstash/logstash-plain.log
[2022-03-18T11:33:30,690][INFO ][org.reflections.Reflections] Reflections took 118 ms to scan 1 urls, producing 119 keys and 417 values
[2022-03-18T11:33:32,042][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2022-03-18T11:33:32,540][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2022-03-18T11:33:32,889][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2022-03-18T11:33:32,908][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch version determined (7.17.1) {:es_version=>7}
[2022-03-18T11:33:32,913][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>7}
[2022-03-18T11:33:33,037][INFO ][logstash.outputs.elasticsearch][main] Config is not compliant with data streams. data_stream => auto resolved to false
[2022-03-18T11:33:33,113][INFO ][logstash.outputs.elasticsearch][main] Config is not compliant with data streams. data_stream => auto resolved to false
[2022-03-18T11:33:33,311][INFO ][logstash.outputs.elasticsearch][main] Using a default mapping template {:es_version=>7, :ecs_compatibility=>:disabled}
[2022-03-18T11:33:33,337][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>250, "pipeline.sources"=>["/etc/logstash/conf.d/logstash.conf"], :thread=>"#<Thread:0x15acb961 run>"}
[2022-03-18T11:33:34,573][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds"=>1.23}
[2022-03-18T11:33:34,664][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
[2022-03-18T11:33:34,761][INFO ][filewatch.observingtail ][main][2b6c69038f817ebf29690e5d479fe4c6e56f482b9d6cc052978d217447903269] START, creating Discoverer, Watch with file and sincedb collections
[2022-03-18T11:33:34,771][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
/var/log/elasticsearch/elasticsearch.log
[2022-03-18T01:30:00,079][INFO ][o.e.x.m.MlDailyMaintenanceService] [ip-.eu-west-2.compute.internal] Successfully completed [ML] maintenance task: triggerDeleteExpiredDataTask
[2022-03-18T10:23:44,010][INFO ][o.e.c.m.MetadataIndexTemplateService] [ip-.eu-west-2.compute.internal] adding template [logstash] for index patterns [logstash-*]
[2022-03-18T10:23:44,189][INFO ][o.e.c.m.MetadataCreateIndexService] [ip-.eu-west-2.compute.internal] [logstash-2022.03.18-000001] creating index, cause [api], templates [logstash], shards [1]/[1]
[2022-03-18T10:23:44,522][INFO ][o.e.x.i.a.TransportPutLifecycleAction] [ip-.eu-west-2.compute.internal] adding index lifecycle policy [logstash-policy]
[2022-03-18T10:23:44,603][INFO ][o.e.x.i.IndexLifecycleTransition] [ip-.eu-west-2.compute.internal] moving index [logstash-2022.03.18-000001] from [null] to [{"phase":"new","action":"complete","name":"complete"}] in policy [logstash-policy]
[2022-03-18T10:23:44,671][INFO ][o.e.x.i.IndexLifecycleTransition] [ip-.eu-west-2.compute.internal] moving index [logstash-2022.03.18-000001] from [{"phase":"new","action":"complete","name":"complete"}] to [{"phase":"hot","action":"unfollow","name":"branch-check-unfollow-prerequisites"}] in policy [logstash-policy]
[2022-03-18T10:23:44,726][INFO ][o.e.x.i.IndexLifecycleTransition] [ip-.eu-west-2.compute.internal] moving index [logstash-2022.03.18-000001] from [{"phase":"hot","action":"unfollow","name":"branch-check-unfollow-prerequisites"}] to [{"phase":"hot","action":"rollover","name":"check-rollover-ready"}] in policy [logstash-policy]
[2022-03-18T10:23:55,371][INFO ][o.e.c.r.a.DiskThresholdMonitor] [ip-*.eu-west-2.compute.internal] low disk watermark [85%] exceeded on [r51WwHrKTE-VK6UCAaR4IA][ip-*8.eu-west-2.compute.internal][/var/lib/elasticsearch/nodes/0] free: 1.1gb[14.1%], replicas will not be assigned to this node
Any help would be really appreciated : )
You need to specify the index in the output.
Also note that you have a free-space problem in Elasticsearch: the low disk watermark [85%] has been exceeded.
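A minimal sketch of that suggestion, assuming a conventional daily index name (the name itself is a placeholder, not from the thread):
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
  }
}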

Logstash and elasticsearch on different machines

I have Logstash and Elasticsearch on different machines. When I run Logstash on the same machine it all works fine (with 'localhost' in hosts), but when I specify the IP address in the hosts section of the conf file it does not create an index. The output from Logstash is as follows:
Java HotSpot(TM) 64-Bit Server VM warning: Ignoring option UseConcMarkSweepGC; support was removed in 14.0
Java HotSpot(TM) 64-Bit Server VM warning: Ignoring option CMSInitiatingOccupancyFraction; support was removed in 14.0
Java HotSpot(TM) 64-Bit Server VM warning: Ignoring option UseCMSInitiatingOccupancyOnly; support was removed in 14.0
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.headius.backport9.modules.Modules (file:/C:/Project/logstash-7.7.0/logstash-core/lib/jars/jruby-complete-9.2.11.1.jar) to field java.io.Console.cs
WARNING: Please consider reporting this to the maintainers of com.headius.backport9.modules.Modules
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Sending Logstash logs to C:/Project/logstash-7.7.0/logs which is now configured via log4j2.properties
[2020-05-19T19:45:01,169][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2020-05-19T19:45:01,279][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.7.0"}
[2020-05-19T19:45:02,516][INFO ][org.reflections.Reflections] Reflections took 47 ms to scan 1 urls, producing 21 keys and 41 values
[2020-05-19T19:45:03,723][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://192.168.51.100:9200/]}}
[2020-05-19T19:45:03,911][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://192.168.51.100:9200/"}
[2020-05-19T19:45:03,974][INFO ][logstash.outputs.elasticsearch][main] ES Output version determined {:es_version=>7}
[2020-05-19T19:45:03,974][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2020-05-19T19:45:04,052][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://192.168.51.100:9200"]}
[2020-05-19T19:45:04,117][INFO ][logstash.outputs.elasticsearch][main] Using default mapping template
[2020-05-19T19:45:04,132][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge][main] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been created for key: cluster_uuids. This may result in invalid serialization. It is recommended to log an issue to the responsible developer/development team.
[2020-05-19T19:45:04,132][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["C:/Project/Log/sample.conf"], :thread=>"#<Thread:0x3bdb6c5e run>"}
[2020-05-19T19:45:04,210][INFO ][logstash.outputs.elasticsearch][main] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[2020-05-19T19:45:05,271][INFO ][logstash.inputs.file ][main] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"C:/Project/logstash-7.7.0/data/plugins/inputs/file/.sincedb_8d9566297ac4987e711aafe4a88b2724", :path=>["C:/Project/Log/sample.txt"]}
[2020-05-19T19:45:05,302][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
[2020-05-19T19:45:05,346][INFO ][filewatch.observingtail ][main][253b58041f339951f57d5a400fe9cbebb44b789526885e5c4061ea24665dc057] START, creating Discoverer, Watch with file and sincedb collections
[2020-05-19T19:45:05,348][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2020-05-19T19:45:05,602][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
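The conf file itself is not shown in the post; based on the log above it presumably looks something like this sketch (the file path and IP are taken from the log, everything else is assumed):
input {
  file {
    path => "C:/Project/Log/sample.txt"
    start_position => "beginning"
  }
}
output {
  elasticsearch {
    hosts => ["http://192.168.51.100:9200"]
  }
}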

Logstash starting but not creating indices

I am trying to create an index in Elasticsearch using a CSV file. Below is the configuration.
input {
  file {
    path => "C:\Users\soumdash\Desktop\Accounts.csv"
    start_position => "beginning"
    sincedb_path => "NUL"
  }
}
filter {
  csv {
    separator => ","
    columns => ["Country_code","Account_number","User_ID","Date","Time"]
  }
  mutate { convert => ["Account_number","integer"] }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "accounts"
  }
  stdout {}
}
I am starting Logstash, and from the console I can see that it has been started and the pipeline has been created, but I cannot see the index in Kibana.
C:\Users\soumdash\Desktop\logstash-7.2.0\bin>logstash -f logstash-account.conf
Thread.exclusive is deprecated, use Thread::Mutex
Sending Logstash logs to C:/Users/soumdash/Desktop/logstash-7.2.0/logs which is now configured via log4j2.properties
[2019-07-26T14:01:27,662][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-07-26T14:01:27,711][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.2.0"}
[2019-07-26T14:01:42,181][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch index=>"accounts", id=>"b54e1c07198cf188279cb051e01c9fe6118db48fe2ce76739dc2ace82e02c078", hosts=>[//localhost:9200], document_type=>"ERC_Acoounts", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_57f41853-7ddf-48e5-a5e4-316d94c83a0f", enable_metric=>true, charset=>"UTF-8">, workers=>1, manage_template=>true, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, ilm_enabled=>"auto", ilm_rollover_alias=>"logstash", ilm_pattern=>"{now/d}-000001", ilm_policy=>"logstash-policy", action=>"index", ssl_certificate_verification=>true, sniffing=>false, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
[2019-07-26T14:01:46,248][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2019-07-26T14:01:46,752][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2019-07-26T14:01:46,852][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>7}
[2019-07-26T14:01:46,862][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-07-26T14:01:46,910][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2019-07-26T14:01:47,046][INFO ][logstash.outputs.elasticsearch] Using default mapping template
[2019-07-26T14:01:47,205][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[2019-07-26T14:01:47,236][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been created for key: cluster_uuids. This may result in invalid serialization. It is recommended to log an issue to the responsible developer/development team.
[2019-07-26T14:01:47,236][INFO ][logstash.javapipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, :thread=>"#<Thread:0x26c630b8 run>"}
[2019-07-26T14:01:52,105][INFO ][logstash.javapipeline ] Pipeline started {"pipeline.id"=>"main"}
[2019-07-26T14:01:52,232][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-07-26T14:01:52,249][INFO ][filewatch.observingtail ] START, creating Discoverer, Watch with file and sincedb collections
[2019-07-26T14:01:53,290][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
I have checked and tried a few other answers on the same issue, such as
Logstash creates pipeline but index is not created and
Logstash is not creating index in elastic search,
but with no success.
Can anyone please help? I am using ELK 7.2.
Can you use rubydebug inside of stdout, just to make sure that your file is read?
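A minimal sketch of that suggestion; only the stdout line changes, the rest of the output block stays as in the question:
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "accounts"
  }
  stdout { codec => rubydebug }
}
With the rubydebug codec, every event that reaches the output is printed to the console, so an empty console means the file is never being read in the first place.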

viewing syslogs on kibana using logstash

I have followed the steps shown in the video. I am trying to run the logstash3.conf file below:
input {
  file {
    path => "/var/log/syslog"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "pop"
  }
  stdout {}
}
I ran the logstash3.conf file with the following command:
docker run -h logstash3 --name logstash3 --link elasticsearch:elasticsearch --rm -v "$PWD":/config-dir logstash -f /config-dir/logstash3.conf
I am getting the below error (after which it does not print anything to the screen):
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
10:00:19.082 [main] INFO logstash.modules.scaffold - Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
10:00:19.091 [main] INFO logstash.modules.scaffold - Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
10:00:19.163 [main] INFO logstash.setting.writabledirectory - Creating directory {:setting=>"path.queue", :path=>"/var/lib/logstash/queue"}
10:00:19.164 [main] INFO logstash.setting.writabledirectory - Creating directory {:setting=>"path.dead_letter_queue", :path=>"/var/lib/logstash/dead_letter_queue"}
10:00:19.275 [LogStash::Runner] INFO logstash.agent - No persistent UUID file found. Generating new UUID {:uuid=>"81be107c-ad55-4efb-b7a9-873179a33b06", :path=>"/var/lib/logstash/uuid"}
10:00:20.737 [[main]-pipeline-manager] INFO logstash.outputs.elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
10:00:20.738 [[main]-pipeline-manager] INFO logstash.outputs.elasticsearch - Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elasticsearch:9200/, :path=>"/"}
10:00:20.969 [[main]-pipeline-manager] WARN logstash.outputs.elasticsearch - Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
10:00:21.342 [[main]-pipeline-manager] INFO logstash.outputs.elasticsearch - Using mapping template from {:path=>nil}
10:00:21.345 [[main]-pipeline-manager] INFO logstash.outputs.elasticsearch - Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
10:00:21.364 [[main]-pipeline-manager] INFO logstash.outputs.elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//elasticsearch:9200"]}
10:00:21.377 [[main]-pipeline-manager] INFO logstash.pipeline - Starting pipeline {"id"=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>125}
10:00:21.962 [[main]-pipeline-manager] INFO logstash.pipeline - Pipeline main started
10:00:22.129 [Api Webserver] INFO logstash.agent - Successfully started Logstash API endpoint {:port=>9600}
Kindly let me know how I can correct the error.
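One thing worth checking (my assumption, not something confirmed in this thread): the path /var/log/syslog is resolved inside the container, and the command only mounts $PWD as /config-dir, so the container may have no syslog file to read at all. A sketch of the same command with the host's log directory also mounted read-only:
docker run -h logstash3 --name logstash3 --link elasticsearch:elasticsearch --rm \
  -v "$PWD":/config-dir -v /var/log:/var/log:ro \
  logstash -f /config-dir/logstash3.conf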

Importing data from file to ElasticSearch with logstash

I have a script that logs temperature and humidity from different sensors; the data from each sensor is stored in its own directory, and every day a new log is created in the format YYYY-MM-DD.log.
${data_root}/A/0/*.log
${data_root}/A/1/*.log
etc.
The logs are in this format:
2018-03-02 03:48:14 25.00 27.10
(YYYY-MM-DD TIME Temperature Humidity)
I had trouble understanding how to correctly configure my Logstash instance. I figured that my input should look something like this:
input {
  file { path => "/var/wlogs/a1/*.log" type => "a1" }
  file { path => "/var/wlogs/a2/*.log" type => "a2" }
  etc..
}
and the filter should look something like this:
filter {
  if [type] == "a1" {
    grok {
      match => { "message" => "(?<timestamp>%{YEAR}-%{MONTHNUM:month}-%{MONTHDAY:day} %{TIME}) %{NUMBER:temperature:float} %{NUMBER:humidity:float}" }
    }
  }
  if [type] == "a2" { .... }
}
I'm trying to export the data in the output section to Elasticsearch, with no success.
output {
  elasticsearch { hosts => ["ec2-xxxxxx.eu-west-2.compute.amazonaws.com:9200"] user => "elastic" password => "pass" index => "{type}" }
  stdout { codec => rubydebug }
}
Here is the console output when I try to run it:
ubuntu#ip-xxx-xxx:/usr/share/logstash$ sudo bin/logstash -f ~/logstash.conf
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[INFO ] 2018-03-02 13:43:34.633 [main] scaffold - Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
[INFO ] 2018-03-02 13:43:34.647 [main] scaffold - Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
[WARN ] 2018-03-02 13:43:35.063 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2018-03-02 13:43:35.209 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"6.2.2"}
[INFO ] 2018-03-02 13:43:35.430 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
[INFO ] 2018-03-02 13:43:36.145 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:22] pipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[INFO ] 2018-03-02 13:43:36.318 [[main]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elastic:xxxxxx@ec2-no.eu-west-2.compute.amazonaws.com:9200/]}}
[INFO ] 2018-03-02 13:43:36.327 [[main]-pipeline-manager] elasticsearch - Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elastic:xxxxxx@ec2-no.eu-west-2.compute.amazonaws.com:9200/, :path=>"/"}
[WARN ] 2018-03-02 13:43:36.447 [[main]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"http://elastic:xxxxxx@ec2-3no3.eu-west-2.compute.amazonaws.com:9200/"}
[INFO ] 2018-03-02 13:43:36.610 [[main]-pipeline-manager] elasticsearch - ES Output version determined {:es_version=>nil}
[WARN ] 2018-03-02 13:43:36.611 [[main]-pipeline-manager] elasticsearch - Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[INFO ] 2018-03-02 13:43:36.616 [[main]-pipeline-manager] elasticsearch - Using mapping template from {:path=>nil}
[INFO ] 2018-03-02 13:43:36.619 [[main]-pipeline-manager] elasticsearch - Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[INFO ] 2018-03-02 13:43:36.626 [[main]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//ec2-no.eu-west-2.compute.amazonaws.com:9200"]}
[INFO ] 2018-03-02 13:43:37.054 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:22] pipeline - Pipeline started succesfully {:pipeline_id=>"main", :thread=>"#<Thread:0x25b5f422@/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:246 run>"}
[INFO ] 2018-03-02 13:43:37.081 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:22] agent - Pipelines running {:count=>1, :pipelines=>["main"]}
Please help me figure out what I'm doing wrong and how to fix it :)
Thanks in advance.
P.S.: I'm using the latest versions of Elasticsearch, Kibana and Logstash.
I don't see any error in the logs, which makes me think the log files might have already been read in a previous attempt. Since the file offsets are maintained in the sincedb file in the home directory, can you stop Logstash, delete that file, and try again?
For more details about the sincedb file, refer to https://www.elastic.co/guide/en/logstash/current/plugins-inputs-file.html
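Separately from the sincedb suggestion (my own observation, not from the thread): field references inside Logstash option strings use the sprintf syntax %{field}, so index => "{type}" would create a single index literally named "{type}" rather than one index per sensor type. A sketch of the corrected output, reusing the placeholders from the question:
output {
  elasticsearch {
    hosts => ["ec2-xxxxxx.eu-west-2.compute.amazonaws.com:9200"]
    user => "elastic"
    password => "pass"
    index => "%{type}"
  }
  stdout { codec => rubydebug }
}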
