When I try to run the Flume agent, the following statement is logged repeatedly; unless I stop the process forcefully, it keeps printing continuously. What could be the issue?
Please help me out.
2013-05-27 03:47:12,517 (conf-file-poller-0) [DEBUG - org.apache.flume.conf.file.AbstractFileConfigurationProvider$FileWatcherRunnable.run(AbstractFileConfigurationProvider.java:188)] Checking file:/etc/flume-ng/conf/loclog.conf for changes
2013-05-27 03:47:12,517 (conf-file-poller-0) [DEBUG - org.apache.flume.conf.file.AbstractFileConfigurationProvider$FileWatcherRunnable.run(AbstractFileConfigurationProvider.java:188)] Checking file:/etc/flume-ng/conf/loclog.conf for changes
2013-05-27 03:47:12,517 (conf-file-poller-0) [DEBUG - org.apache.flume.conf.file.AbstractFileConfigurationProvider$FileWatcherRunnable.run(AbstractFileConfigurationProvider.java:188)] Checking file:/etc/flume-ng/conf/loclog.conf for changes
This is normal behaviour and should be ignored.
Flume automatically checks its config files for changes. If it detects a change it will reconfigure itself with those changes. The DEBUG entries you see above are Flume checking its config file.
Note that the reconfiguration process will not pick up all changes. I've noticed that new sources and sinks will often require a process restart.
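If the repeated DEBUG lines themselves are the nuisance, you can raise the console log level when starting the agent. A minimal sketch, assuming the agent is started the usual way (the agent name is a placeholder; the config path is taken from your log):

FLUME_HOME/bin/flume-ng agent --conf /etc/flume-ng/conf -f /etc/flume-ng/conf/loclog.conf -n <agent-name> -Dflume.root.logger=INFO,console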
I'm using Filebeat on the client side > Logstash on the server side > Elasticsearch on the server side.
Filebeat on the client side works properly and ships the file, but the configuration I've made on Logstash fails with:
[WARN ] 2019-12-18 14:53:30.987 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[FATAL] 2019-12-18 14:53:31.341 [LogStash::Runner] runner - Logstash could not be started because there is already another instance using the configured data directory. If you wish to run multiple instances, you must change the "path.data" setting.
[ERROR] 2019-12-18 14:53:31.364 [LogStash::Runner] Logstash - java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit
Here is my config file:
input {
  beats {
    port => 5044
  }
}
filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp}] %{WORD:test}\[%{NUMBER:nom}]\[%{DATA:tes}\] %{DATA:module_name}\: %{WORD:method}%{GREEDYDATA:log_message}" }
  }
}
output {
  elasticsearch {
    hosts => "127.0.0.1:9200"
    index => "test_log_pbx"
  }
}
The command I use to run my Logstash config:
/usr/share/logstash/bin/logstash -f logstash.conf
When I run a config test, it returns:
Thread.exclusive is deprecated, use Thread::Mutex
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[WARN ] 2019-12-18 14:59:53.300 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2019-12-18 14:59:56.566 [LogStash::Runner] Reflections - Reflections took 139 ms to scan 1 urls, producing 20 keys and 40 values
Configuration OK
[INFO ] 2019-12-18 14:59:57.923 [LogStash::Runner] runner - Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
Please help me; I don't know what's wrong.
A Logstash instance is already running, so you cannot start another one. If you run Logstash as a service, you should stop the service first. If you want to run multiple instances, you should modify pipelines.yml (or give each instance its own path.data, as the error message suggests).
If you want to learn more about pipelines.yml, see the link below.
https://www.elastic.co/guide/en/logstash/current/multiple-pipelines.html
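A minimal sketch of both options (the service name, pipeline id, and paths are assumptions; adjust them to your setup). Either stop the running service before launching Logstash by hand:

sudo systemctl stop logstash

or keep the service and register your config as an additional pipeline in pipelines.yml instead of starting a second process:

- pipeline.id: beats-pipeline
  path.config: "/etc/logstash/conf.d/logstash.conf"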
Here is the Error:
root@taurus:/etc/icinga2/features-available# service icinga2 checkconfig
* checking Icinga2 configuration
information/cli: Icinga application loader (version: r2.7.2-1)
information/cli: Loading configuration file(s).
critical/config: Error: Error while evaluating expression: Could not load library 'libdb_ido_mysql.so.2.7.2': libdb_ido_mysql.so.2.7.2: cannot open shared object file: No such file or directory
Location: in /etc/icinga2/features-enabled/ido-mysql.conf: 6:1-6:22
/etc/icinga2/features-enabled/ido-mysql.conf(4): */
/etc/icinga2/features-enabled/ido-mysql.conf(5):
/etc/icinga2/features-enabled/ido-mysql.conf(6): library "db_ido_mysql"
^^^^^^^^^^^^^^^^^^^^^^
/etc/icinga2/features-enabled/ido-mysql.conf(7):
/etc/icinga2/features-enabled/ido-mysql.conf(8): object IdoMysqlConnection "ido-mysql" {
* checking Icinga2 configuration. Check '/var/log/icinga2/startup.log' for details.
root@taurus:/etc/icinga2/features-available# icinga2 feature list
Disabled features: command compatlog debuglog gelf graphite influxdb livestatus opentsdb perfdata statusdata syslog
Enabled features: api checker ido-mysql ido-pgsql mainlog notification
Does anybody know what I did wrong during the installation?
There were no problems during the installation itself, so I don't understand this error.
Do you want to use Icingaweb2 with your Icinga2 installation? Then you have to install the
icinga2-ido-mysql
package for your distribution and configure it; step-by-step instructions on how to install and configure it are in the Icinga documentation. If not, disable the following features (a sketch of both options follows below):
ido-mysql ido-pgsql
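A minimal sketch of both options, assuming a Debian/Ubuntu-style system (the package manager and exact service command may differ on your distribution):

apt-get install icinga2-ido-mysql
icinga2 feature enable ido-mysql
service icinga2 restart

or, if you do not need the IDO backends:

icinga2 feature disable ido-mysql ido-pgsql
service icinga2 restart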
Regards,
Jan
I have been getting this log message continuously for a whole day:
2016-10-12 21:32:05,696 (conf-file-poller-0) [DEBUG - org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:126)] Checking file:conf/flume.conf for changes

when executing the command

FLUME_HOME/bin/flume-ng agent --conf ./conf/ -f conf/flume.conf -Dflume.root.logger=DEBUG,console -n TwitterAgent
I am getting this error now, after modifying the conf file:
2016-10-12 22:09:19,592 (lifecycleSupervisor-1-0) [DEBUG - com.cloudera.flume.source.TwitterSource.start(TwitterSource.java:124)] Setting up Twitter sample stream using consumer key and access token

2016-10-12 22:09:19,592 (lifecycleSupervisor-1-0) [ERROR - org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:253)] Unable to start EventDrivenSourceRunner: { source:com.cloudera.flume.source.TwitterSource{name:Twitter,state:IDLE} } - Exception follows.
java.lang.IllegalStateException: consumer key/secret pair already set.
As far as I can understand from what you have provided here, I think you need to add a1.sources.r1.type = org.apache.flume.source.twitter.TwitterSource to your conf file to define your Twitter source. Also make sure you are using your own credentials for accessing the Twitter API. A sketch follows below.
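A minimal sketch of the relevant flume.conf lines, using the agent and source names from your command and log (TwitterAgent, Twitter) instead of the generic a1/r1; the credential values are placeholders for your own keys:

TwitterAgent.sources = Twitter
TwitterAgent.sources.Twitter.type = org.apache.flume.source.twitter.TwitterSource
TwitterAgent.sources.Twitter.consumerKey = <your consumer key>
TwitterAgent.sources.Twitter.consumerSecret = <your consumer secret>
TwitterAgent.sources.Twitter.accessToken = <your access token>
TwitterAgent.sources.Twitter.accessTokenSecret = <your access token secret>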
I had 81,068 tasks complete, but then 11,799 failed and only 12 were killed. They all seem to have failed with:
2013-09-10 03:07:36,316 INFO org.apache.hadoop.mapred.TaskInProgress: Error from attempt_201308301539_0002_m_083001_0: Error initializing attempt_201308301539_0002_m_083001_0:
org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find taskTracker/jobcache/job_201308301539_0002/work in any of the configured local directories
at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathToRead(LocalDirAllocator.java:389)
at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathToRead(LocalDirAllocator.java:138)
at org.apache.hadoop.mapred.TaskTracker$TaskInProgress.localizeTask(TaskTracker.java:1817)
at org.apache.hadoop.mapred.TaskTracker$TaskInProgress.launchTask(TaskTracker.java:1933)
at org.apache.hadoop.mapred.TaskTracker.launchTaskForJob(TaskTracker.java:830)
at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:824)
at org.apache.hadoop.mapred.TaskTracker.startNewTask(TaskTracker.java:1664)
at org.apache.hadoop.mapred.TaskTracker.access$1200(TaskTracker.java:97)
at org.apache.hadoop.mapred.TaskTracker$TaskLauncher.run(TaskTracker.java:1629)
At this point, I am just looking for guidance on how to debug this before I re-run the job. For some reason, out on the cluster it looks like all the files are deleted, though I thought Hadoop M/R only deleted successful task logs.
Anyone have some advice/ideas on how to debug this further?
It looks like all the default directories for map/reduce are in use: /tmp/hadoop-hduser for my hduser user.
I have seen suggestions about /etc/hosts, but then I don't understand why 81,000 tasks succeeded before the failures started.
I am getting some of this information from the web interface, of course, and some from the logs under the Hadoop installation's logs directory.
thanks,
Dean
I'm in the process of executing Maven commands to run tests in the console (MacOSX). Recently, development efforts have produced extraneous messages in the console (info, debug, warning, etc.). I'd like to know how to remove messages like this:
INFO c.c.m.s.c.p.ApplicationProperties - Loading application properties from: app-config/shiro.properties
I've used this code to remove messages from the dbunit tests:
// assumes org.slf4j.LoggerFactory and ch.qos.logback.classic.Level are imported
ch.qos.logback.classic.Logger dbunitLogger = (ch.qos.logback.classic.Logger) LoggerFactory.getLogger("org.dbunit");
dbunitLogger.setLevel(Level.ERROR);
However, I'm unsure how to prevent these additional (often verbose and irritating) messages from showing up on the console so that I can see the output more easily. The additional messages appear as above and like these:
DEBUG c.c.m.s.c.f.dao.AbstractDBDAO - Adding filters to the Main Search query.
WARN c.c.m.s.c.p.JNDIConfigurationProperties - Unable to find JNDI value for name: xxxxx
INFO c.c.m.a.t.d.DatabaseTestFixture - * executing sql: xxxxx
The successful answer was:
SOLUTION: The fix was adding a 'logback-test.xml' file to the root of my test folder. I used the default contents (as instructed by the documentation - thanks #Charlie). Once the file exists there, the problem is fixed.
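For reference, a minimal sketch of such a logback-test.xml, modelled on the basic configuration shown in the Logback manual; the pattern and the root level are things to adapt, and raising the root level (e.g. to WARN) is what quiets the INFO/DEBUG noise:

<configuration>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="WARN">
    <appender-ref ref="STDOUT" />
  </root>
</configuration>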