Using actual regex in Ansible's search_regex parameter

I'm trying to use the wait_for module in Ansible to repeatedly check a log file and end when it finds either success (the word Successfully appears in the log) or failure (the string [FATAL] appears in the log).
- name: Wait for Logstash API Endpoint to be running
  wait_for:
    path: /var/log/logstash/logstash-plain.log
    search_regex: '(\\[FATAL\\]|Successfully)'
    delay: 30
    timeout: 120
I've tried various approaches to the search_regex parameter, including no escape characters, single escape characters, etc.
I've checked that the log's output includes [FATAL] and it definitely does, but I can't get this module to work.
Where am I going wrong?
** EDIT **
Attempting to use the following code:
On the following log:
[2019-01-15T15:43:14,735][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.5.2"}
[2019-01-15T15:44:00,534][ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"Java::OrgLogstashConfigIr::InvalidIRException", :message=>"Config has duplicate Ids: \nID: jsonfilter P[filter-json{\"id\"=>\"jsonfilter\", \"source\"=>\"message\", \"remove_field\"=>[\"_sourceUri\", \"_user\", \"sourceUri\", \"user\", \"pid\", \"v\"]}|[str]pipeline:41:7:```\njson {\n id => \"jsonfilter\"\n source => \"message\"\n # remove some irrelevant fields\n remove_field => [\"_sourceUri\", \"_user\", \"sourceUri\", \"user\", \"pid\", \"v\"]\n }\n```]\nP[filter-json{\"id\"=>\"jsonfilter\", \"source\"=>\"message\", \"remove_field\"=>[\"_sourceUri\", \"_user\", \"sourceUri\", \"user\", \"pid\", \"v\"]}|[str]pipeline:191:7:```\njson {\n id => \"jsonfilter\"\n source => \"message\"\n # remove some irrelevant fields\n remove_field => [\"_sourceUri\", \"_user\", \"sourceUri\", \"user\", \"pid\", \"v\"]\n }\n```]", :backtrace=>["org.logstash.config.ir.graph.Graph.validate(org/logstash/config/ir/graph/Graph.java:294)", "org.logstash.config.ir.PipelineIR.<init>(org/logstash/config/ir/PipelineIR.java:52)", "java.lang.reflect.Constructor.newInstance(java/lang/reflect/Constructor.java:423)", "org.jruby.javasupport.JavaConstructor.newInstanceDirect(org/jruby/javasupport/JavaConstructor.java:246)", "org.jruby.RubyClass.newInstance(org/jruby/RubyClass.java:1022)", "org.jruby.RubyClass$INVOKER$i$newInstance.call(org/jruby/RubyClass$INVOKER$i$newInstance.gen)", "usr.share.logstash.logstash_minus_core.lib.logstash.compiler.compile_sources(/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:29)", "usr.share.logstash.logstash_minus_core.lib.logstash.compiler.RUBY$method$compile_sources$0$__VARARGS__(usr/share/logstash/logstash_minus_core/lib/logstash//usr/share/logstash/logstash-core/lib/logstash/compiler.rb)", "org.jruby.RubyClass.finvoke(org/jruby/RubyClass.java:899)", "org.jruby.RubyBasicObject.callMethod(org/jruby/RubyBasicObject.java:372)", "org.logstash.config.ir.ConfigCompiler.configToPipelineIR(org/logstash/config/ir/ConfigCompiler.java:32)", "org.logstash.execution.AbstractPipelineExt.initialize(org/logstash/execution/AbstractPipelineExt.java:149)", "org.logstash.execution.AbstractPipelineExt$INVOKER$i$3$0$initialize.call(org/logstash/execution/AbstractPipelineExt$INVOKER$i$3$0$initialize.gen)", "usr.share.logstash.logstash_minus_core.lib.logstash.pipeline.initialize(/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:22)", "usr.share.logstash.logstash_minus_core.lib.logstash.pipeline.initialize(/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:90)", "org.jruby.RubyClass.newInstance(org/jruby/RubyClass.java:1022)", "org.jruby.RubyClass$INVOKER$i$newInstance.call(org/jruby/RubyClass$INVOKER$i$newInstance.gen)", "usr.share.logstash.logstash_minus_core.lib.logstash.pipeline_action.create.block in execute(/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:42)", "org.jruby.RubyProc.call(org/jruby/RubyProc.java:289)", "org.jruby.RubyProc.call19(org/jruby/RubyProc.java:273)", "org.jruby.RubyProc$INVOKER$i$0$0$call19.call(org/jruby/RubyProc$INVOKER$i$0$0$call19.gen)", "usr.share.logstash.logstash_minus_core.lib.logstash.agent.block in exclusive(/usr/share/logstash/logstash-core/lib/logstash/agent.rb:92)", "org.jruby.ext.thread.Mutex.synchronize(org/jruby/ext/thread/Mutex.java:148)", "org.jruby.ext.thread.Mutex$INVOKER$i$0$0$synchronize.call(org/jruby/ext/thread/Mutex$INVOKER$i$0$0$synchronize.gen)", 
"usr.share.logstash.logstash_minus_core.lib.logstash.agent.exclusive(/usr/share/logstash/logstash-core/lib/logstash/agent.rb:92)", "usr.share.logstash.logstash_minus_core.lib.logstash.agent.RUBY$method$exclusive$0$__VARARGS__(usr/share/logstash/logstash_minus_core/lib/logstash//usr/share/logstash/logstash-core/lib/logstash/agent.rb)", "usr.share.logstash.logstash_minus_core.lib.logstash.pipeline_action.create.execute(/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:38)", "usr.share.logstash.logstash_minus_core.lib.logstash.pipeline_action.create.RUBY$method$execute$0$__VARARGS__(usr/share/logstash/logstash_minus_core/lib/logstash/pipeline_action//usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb)", "usr.share.logstash.logstash_minus_core.lib.logstash.agent.block in converge_state(/usr/share/logstash/logstash-core/lib/logstash/agent.rb:317)", "org.jruby.RubyProc.call(org/jruby/RubyProc.java:289)", "org.jruby.RubyProc.call(org/jruby/RubyProc.java:246)", "java.lang.Thread.run(java/lang/Thread.java:748)"]}
[2019-01-15T15:44:01,189][FATAL][logstash.runner ] An unexpected error occurred! {:error=>#<LogStash::Error: Don't know how to handle `Java::OrgLogstashConfigIr::InvalidIRException` for `PipelineAction::Create<main>`>, :backtrace=>["org/logstash/execution/ConvergeResultExt.java:103:in `create'", "org/logstash/execution/ConvergeResultExt.java:34:in `add'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:329:in `block in converge_state'"]}
[2019-01-15T15:44:02,220][ERROR][org.logstash.Logstash ] java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit

Based on my testing you were soooooooooo close. This:
search_regex: "(\\[FATAL\\]|Successfully)"
works for me. (Note " rather than '.) With double quotes, YAML collapses \\[ to \[, which the regex engine then reads as a literal bracket; inside single quotes both backslashes survive, so the pattern looks for a literal backslash and never matches [FATAL].
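For reference, the full task with that one change applied (a sketch based on the task in the question) would be:
- name: Wait for Logstash API Endpoint to be running
  wait_for:
    path: /var/log/logstash/logstash-plain.log
    search_regex: "(\\[FATAL\\]|Successfully)"
    delay: 30
    timeout: 120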
Also, check that Ansible has permission to read the log file. If it can't, wait_for gives you no indication that it couldn't read the file.


java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit

I am trying to use Logstash to push data into Kibana, but while running the logstash.conf file I am getting an error.
Error Message:
Sending Logstash logs to E:/dev/logstash-7.9.0/logs which is now configured via log4j2.properties
[2020-09-09T14:22:48,297][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.9.0", "jruby.version"=>"jruby 9.2.12.0 (2.5.7) 2020-07-01 db01a49ba6 Java HotSpot(TM) 64-Bit Server VM 25.261-b12 on 1.8.0_261-b12 +indy +jit [mswin32-x86_64]"}
[2020-09-09T14:22:49,410][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2020-09-09T14:22:54,149][INFO ][org.reflections.Reflections] Reflections took 124 ms to scan 1 urls, producing 22 keys and 45 values
[2020-09-09T14:22:57,211][ERROR][logstash.filters.csv ] Unknown setting 'seperator' for csv
[2020-09-09T14:22:57,237][ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"Java::JavaLang::IllegalStateException", :message=>"Unable to configure plugins: (ConfigurationError) Something is wrong with your configuration.", :backtrace=>["org.logstash.config.ir.CompiledPipeline.<init>(CompiledPipeline.java:119)", "org.logstash.execution.JavaBasePipelineExt.initialize(JavaBasePipelineExt.java:82)", "org.logstash.execution.JavaBasePipelineExt$INVOKER$i$1$0$initialize.call(JavaBasePipelineExt$INVOKER$i$1$0$initialize.gen)", "org.jruby.internal.runtime.methods.JavaMethod$JavaMethodN.call(JavaMethod.java:837)", "org.jruby.ir.runtime.IRRuntimeHelpers.instanceSuper(IRRuntimeHelpers.java:1169)", "org.jruby.ir.runtime.IRRuntimeHelpers.instanceSuperSplatArgs(IRRuntimeHelpers.java:1156)", "org.jruby.ir.targets.InstanceSuperInvokeSite.invoke(InstanceSuperInvokeSite.java:39)", "E_3a_.dev.logstash_minus_7_dot_9_dot_0.logstash_minus_core.lib.logstash.java_pipeline.RUBY$method$initialize$0(E:/dev/logstash-7.9.0/logstash-core/lib/logstash/java_pipeline.rb:44)", "org.jruby.internal.runtime.methods.CompiledIRMethod.call(CompiledIRMethod.java:82)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:70)", "org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:332)", "org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:86)", "org.jruby.RubyClass.newInstance(RubyClass.java:939)", "org.jruby.RubyClass$INVOKER$i$newInstance.call(RubyClass$INVOKER$i$newInstance.gen)", "org.jruby.ir.targets.InvokeSite.invoke(InvokeSite.java:207)", "E_3a_.dev.logstash_minus_7_dot_9_dot_0.logstash_minus_core.lib.logstash.pipeline_action.create.RUBY$method$execute$0(E:/dev/logstash-7.9.0/logstash-core/lib/logstash/pipeline_action/create.rb:52)", "E_3a_.dev.logstash_minus_7_dot_9_dot_0.logstash_minus_core.lib.logstash.pipeline_action.create.RUBY$method$execute$0$__VARARGS__(E:/dev/logstash-7.9.0/logstash-core/lib/logstash/pipeline_action/create.rb)", "org.jruby.internal.runtime.methods.CompiledIRMethod.call(CompiledIRMethod.java:82)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:70)", "org.jruby.ir.targets.InvokeSite.invoke(InvokeSite.java:207)", "E_3a_.dev.logstash_minus_7_dot_9_dot_0.logstash_minus_core.lib.logstash.agent.RUBY$block$converge_state$2(E:/dev/logstash-7.9.0/logstash-core/lib/logstash/agent.rb:357)", "org.jruby.runtime.CompiledIRBlockBody.callDirect(CompiledIRBlockBody.java:138)", "org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:58)", "org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:52)", "org.jruby.runtime.Block.call(Block.java:139)", "org.jruby.RubyProc.call(RubyProc.java:318)", "org.jruby.internal.runtime.RubyRunnable.run(RubyRunnable.java:105)", "java.lang.Thread.run(Thread.java:748)"]}
warning: thread "Converge PipelineAction::Create<main>" terminated with exception (report_on_exception is true):
LogStash::Error: Don't know how to handle `Java::JavaLang::IllegalStateException` for `PipelineAction::Create<main>`
create at org/logstash/execution/ConvergeResultExt.java:129
add at org/logstash/execution/ConvergeResultExt.java:57
converge_state at E:/dev/logstash-7.9.0/logstash-core/lib/logstash/agent.rb:370
[2020-09-09T14:22:57,317][ERROR][logstash.agent ] An exception happened when converging configuration {:exception=>LogStash::Error, :message=>"Don't know how to handle `Java::JavaLang::IllegalStateException` for `PipelineAction::Create<main>`"}
[2020-09-09T14:22:57,383][FATAL][logstash.runner ] An unexpected error occurred! {:error=>#<LogStash::Error: Don't know how to handle `Java::JavaLang::IllegalStateException` for `PipelineAction::Create<main>`>, :backtrace=>["org/logstash/execution/ConvergeResultExt.java:129:in `create'", "org/logstash/execution/ConvergeResultExt.java:57:in `add'", "E:/dev/logstash-7.9.0/logstash-core/lib/logstash/agent.rb:370:in `block in converge_state'"]}
[2020-09-09T14:22:57,498][ERROR][org.logstash.Logstash ] java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit
Below is the code of the logstash.conf file:
input {
  file {
    path => "D:/Users/manis/Desktop/temp/data.csv"
    start_position => "beginning"
    sincedb_path => "NULL"
  }
}
filter {
  csv {
    seperator => ","
    columns => ["Id","Title","Body","Tags","CreationDate","Y"]
  }
}
output {
  elasticsearch {
    host => "http://localhost:9200"
    index => "data"
  }
  stdout {}
}
Although my Kibana and Elasticsearch are running perfectly fine, I have tried changing the path of the config file, the path of the data, the output format, etc., but none of it worked for me.
Any help in this regard will be helpful.
Thanks in advance.
The error is right there in the log.
[2020-09-09T14:22:57,211][ERROR][logstash.filters.csv ] Unknown setting 'seperator' for csv
Use separator instead of seperator.
Additionally, it should be hosts instead of host; that error would be thrown next once the seperator typo is corrected.
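With both fixes applied, the filter and output sections would look roughly like this (a sketch; note that hosts takes a list of URLs):
filter {
  csv {
    # correct spelling of the option
    separator => ","
    columns => ["Id","Title","Body","Tags","CreationDate","Y"]
  }
}
output {
  elasticsearch {
    # hosts (plural) is the supported option and takes a list
    hosts => ["http://localhost:9200"]
    index => "data"
  }
  stdout {}
}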

Chef notification within each method

I have a recipe that iterates over a hash of SQL scripts in an each block and -- in case a script changed since the previous run -- the cookbook_file resource notifies the execute resource to run.
The issue is that it seems to always run the execute using the last element of the hash.
Here is the attributes file:
default['sql_scripts_dir'] = 'C:\\DBScripts'
default['script_runner']['scripts'] = [
  { 'name' => 'test', 'hostname' => 'local' },
  { 'name' => 'test2', 'hostname' => 'local' },
  { 'name' => 'test3', 'hostname' => 'local' },
  { 'name' => 'test4', 'hostname' => 'local' },
]
And the recipe
directory node['sql_scripts_dir'] do
  recursive true
end

node['script_runner']['scripts'].each do |script|
  cookbook_file "#{node['sql_scripts_dir']}\\#{script['name']}.sql" do
    source "#{script['name']}.sql"
    action :create
    notifies :run, 'execute[Create_scripts]', :immediately
  end

  execute 'Create_scripts' do
    command "sqlcmd -S \"#{script['hostname']}\" -i \"#{node['sql_scripts_dir']}\\#{script['name']}.sql\""
    action :nothing
  end
end
And it produces the following output:
Recipe: test_runner::default
* directory[C:\DBScripts] action create
- create new directory C:\DBScripts
* cookbook_file[C:\DBScripts\test.sql] action create
- create new file C:\DBScripts\test.sql
- update content in file C:\DBScripts\test.sql from none to 8c40f1
--- C:\DBScripts\test.sql 2020-07-30 17:30:30.959220400 +0000
+++ C:\DBScripts/chef-test20200730-1500-11bz3an.sql 2020-07-30 17:30:30.959220400 +0000
@@ -1 +1,2 @@
+select @@version
* execute[Create_scripts] action run
================================================================================
Error executing action `run` on resource 'execute[Create_scripts]'
================================================================================
Mixlib::ShellOut::ShellCommandFailed
------------------------------------
Expected process to exit with [0], but received '1'
---- Begin output of sqlcmd -S "local" -i "C:\DBScripts\test4.sql" ----
STDOUT:
STDERR: Sqlcmd: 'C:\DBScripts\test4.sql': Invalid filename.
---- End output of sqlcmd -S "local" -i "C:\DBScripts\test4.sql" ----
Ran sqlcmd -S "local" -i "C:\DBScripts\test4.sql" returned 1
The expected behavior is that the recipe runs the 4 scripts in the example sequentially, instead of running just the last one. What am I missing?
You are creating 4 nearly identical resources, all named execute[Create_scripts]. When the notification fires from the first cookbook_file resource being updated, it finds the last one of them to notify and runs against test4 (no matter which cookbook_file resource triggered the update).
The fix is to use string interpolation to change the name of the execute resources to be unique and to notify based on that unique name:
directory node['sql_scripts_dir'] do
  recursive true
end

node['script_runner']['scripts'].each do |script|
  cookbook_file "#{node['sql_scripts_dir']}\\#{script['name']}.sql" do
    source "#{script['name']}.sql"
    action :create
    notifies :run, "execute[create #{script['name']} scripts]", :immediately
  end

  execute "create #{script['name']} scripts" do
    command "sqlcmd -S \"#{script['hostname']}\" -i \"#{node['sql_scripts_dir']}\\#{script['name']}.sql\""
    action :nothing
  end
end
Note that this is a manifestation of the same issue behind the old CHEF-3694 warning message: all four execute resources would be merged into one resource via "resource cloning", with the properties of each subsequent resource winning ("last-writer-wins").
In Chef 13 this was changed to remove resource cloning and the warning, and in most circumstances having two resources with the same name in the resource collection is totally harmless -- until you try to notify one of those resources. The resource notification system should really warn in this situation rather than silently picking the last resource that matches (but between notifications, subscribes, lazy resolution, and now unified_mode, that code is very complicated, and you only want it to fire under exactly the right conditions).

Logstash: TypeError: no implicit conversion of nil into string

I'm new to ELK and trying to feed data from a 3rd-party REST API into Logstash. I'm getting TypeError: no implicit conversion of nil into String.
I'm using the http_poller input plugin to feed the data. Here is the configuration for it:
input {
  http_poller {
    urls => {
      test1 => "https://example.com/api/now/table/sys_user?sysparm_limit=1"
    }
    request_timeout => 60
    # Supports "cron", "every", "at" and "in" schedules by rufus scheduler
    schedule => { cron => "* * * * * UTC" }
    codec => "json"
    # A hash of request metadata info (timing, response headers, etc.) will be sent here
    metadata_target => "http_poller_metadata"
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
I'm using this command to run it:
sudo ./logstash -f logstash_http_poller.conf --path.settings /etc/logstash/ --path.data
I'm getting this error message:
masteradmin#usr:/usr/share/logstash/bin$ sudo ./logstash -f logstash_http_poller.conf --path.settings /etc/logstash/ --path.data
[FATAL] 2020-03-18 07:07:07.377 [main] runner - An unexpected error occurred! {:error=>#<TypeError: no implicit conversion of nil into String>, :backtrace=>["org/jruby/RubyFileTest.java:96:in `directory?'", "org/jruby/RubyFileTest.java:88:in `directory?'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:510:in `block in value'", "org/jruby/RubyKernel.java:1906:in `tap'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:509:in `value'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:97:in `get_value'", "/usr/share/logstash/logstash-core/lib/logstash/environment.rb:94:in `block in LogStash'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:167:in `block in post_process'", "org/jruby/RubyArray.java:1814:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:166:in `post_process'", "/usr/share/logstash/logstash-core/lib/logstash/util/settings_helper.rb:26:in `post_process'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:246:in `execute'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/clamp-0.6.5/lib/clamp/command.rb:67:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:242:in `run'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/clamp-0.6.5/lib/clamp/command.rb:132:in `run'", "/usr/share/logstash/lib/bootstrap/environment.rb:73:in `<main>'"]}
[ERROR] 2020-03-18 07:07:07.386 [main] Logstash - java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit
Need help. Thanks
After some analysis of the error message, I strongly believe this error occurs because no value for the --path.data argument was specified.
TypeError: no implicit conversion of nil into String indicates that nil (the non-existent path) cannot be converted to a string (which is what the path/argument value has to be).
RubyFileTest.java:96:in `directory?' indicates that the Ruby class expected a valid directory.
So please specify a value for the argument in your run command and let me know if that solves the issue.
EDIT:
The path.data argument is not mandatory, so you can remove it. The default location is then the data folder inside the directory where you extracted the binaries. (See https://www.elastic.co/guide/en/logstash/current/dir-layout.html for reference.)
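In other words, either drop the dangling flag or give it a value, for example (the /var/lib/logstash path below is only an illustration; any writable directory works):
sudo ./logstash -f logstash_http_poller.conf --path.settings /etc/logstash/
# or
sudo ./logstash -f logstash_http_poller.conf --path.settings /etc/logstash/ --path.data /var/lib/logstash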

Telegraf tail with grok pattern error

I am using Telegraf to get log information from Apache NiFi; for this task I am using this config:
[[inputs.tail]]
  ## files to tail.
  files = ["/var/log/nifi/nifi-app.log"]
  ## Read file from beginning.
  from_beginning = true
  # name_override = "nifi_app"
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  data_format = "grok"
  grok_patterns = [ "%{DATE:date} %{TIME:time} %{WORD:EventType} \[%{GREEDYDATA:NifiTask} %{NOTSPACE:Thread}\] %{NOTSPACE:NifiEventType} %{GREEDYDATA:EventText} %{NUMBER:EventDuration} %{WORD:EventDurationUnits}" ]
When I try to start Telegraf it gives me this error:
Error parsing /etc/telegraf/telegraf.conf, toml: line 10: parse error
The pattern I wrote was tested in a Grok debugger with this text:
2018-08-02 10:53:16,976 INFO [Heartbeat Monitor Thread-1]
o.a.n.c.c.h.AbstractHeartbeatMonitor Finished processing 1 heartbeats
in 11863 nanos
These are the results of some testing:
grok_patterns = ["\[%{GREEDYDATA:NifiTask}\]"] ==> toml: line 10: parse error
grok_patterns = ["[%{GREEDYDATA:NifiTask}]"] ==> Invalid data format: grok
grok_patterns = ['\[%{GREEDYDATA:NifiTask}\]'] ==> Invalid data format: grok
grok_patterns = ["\\[%{GREEDYDATA:NifiTask}\\]"] ==> Invalid data format: grok
grok_patterns = ['[%{GREEDYDATA:NifiTask}]'] ==> Invalid data format: grok
The first option seems right to me, but it doesn't work, and the problem appears to be the way the bracket is being escaped.
How can this issue be solved?
To fix the bracket-escaping issue, a "partial" solution is to change the double quotes to single quotes; this way, in my case (Telegraf version 1.13.4), the bracket is correctly escaped by \.
There was more than one problem:
First problem: the grok data format was added to Telegraf in the 1.8 release (ref), so I had to use a nightly build until that version was released.
Second problem: how to escape the brackets. There were problems doing it the regular way, so what I finally did was put this part in a custom pattern file; this way it works perfectly.
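Putting the two answers together, a sketch of a config that should work on Telegraf 1.8 or later: TOML literal strings (single quotes) pass backslashes through unchanged, so grok receives \[ and \] as escaped brackets.
[[inputs.tail]]
  files = ["/var/log/nifi/nifi-app.log"]
  from_beginning = true
  data_format = "grok"
  ## Single-quoted TOML literal string: backslashes are not escape characters here,
  ## so \[ and \] reach the grok parser intact (requires Telegraf >= 1.8)
  grok_patterns = ['%{DATE:date} %{TIME:time} %{WORD:EventType} \[%{GREEDYDATA:NifiTask} %{NOTSPACE:Thread}\] %{NOTSPACE:NifiEventType} %{GREEDYDATA:EventText} %{NUMBER:EventDuration} %{WORD:EventDurationUnits}']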

Issue in reading log file that contains date in its name

I have 2 Linux boxes set up. One box contains a component which generates logs, with Logstash installed on it to ship the logs; on the other box I have Redis, Elasticsearch, and Logstash, where Logstash acts as an indexer to grok the data.
Now my problem is that on the first box the component generates a new log file every day, and the only difference in the log file names is the date,
like
counters-20151120-0.log
counters-20151121-0.log
counters-20151122-0.log
and so on. I have included the below type of code in my Logstash shipper conf file:
file {
  path => "/opt/data/logs/counters-%{YEAR}%{MONTHNUM}%{MONTHDAY}*.log"
  type => "rg_counters"
}
And in my Logstash indexer, I have the below type of code to catch those log files:
if [type] == "rg_counters" {
  grok {
    match => ["message", "%{YEAR}%{MONTHNUM}%{MONTHDAY}\s*%{HOUR}:%{MINUTE}:%{SECOND}\s*(?<counters_raw_data>[0-9\-A-Z]*)\s*(?<counters_operation_type>[\-A-Z]*)\s*%{GREEDYDATA:counters_extradata}"]
  }
}
output {
  elasticsearch { host => ["elastichost1","elastichost1"] port => "9200" protocol => "http" }
  stdout { codec => rubydebug }
}
Please note that this is a working setup and other types of log files are being transferred and processed successfully, so there is no issue with the setup itself.
The problem is how to process this log file which contains the date in its file name.
Any help here?
Thanks in advance!!
Based on the comments...
Instead of trying to use regexp patterns in your path:
path => "/opt/data/logs/counters-%{YEAR}%{MONTHNUM}%{MONTHDAY}*.log"
just use glob patterns:
path => "/opt/data/logs/counters-*.log"
Logstash will remember which files (inodes) it has seen before.
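So the shipper's file block would become something like this (a sketch based on the block in the question):
file {
  # the glob matches counters-20151120-0.log, counters-20151121-0.log, and so on
  path => "/opt/data/logs/counters-*.log"
  type => "rg_counters"
}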
