I tried as per your document, but I am getting this error:
2019-02-12 21:17:32 +0530 [error]: config error file="/etc/td-agent/td-agent.conf" error_class=Fluent::ObsoletedParameterError error="'host' parameter is already removed: Use <server> section instead."
What does it mean?
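The error means the configuration still uses a top-level host parameter that Fluentd v1 removed: the destination now lives in a <server> sub-section of the output. A minimal sketch, assuming the error comes from a forward output (the address and port below are placeholders, not values from your config):

```
<match **>
  @type forward
  # In Fluentd v1 'host'/'port' at this level are obsolete;
  # each destination goes in its own <server> block instead.
  <server>
    host 192.0.2.10   # placeholder aggregator address
    port 24224
  </server>
</match>
```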
I use the Bitnami fluentd chart for Kubernetes, and my setup is mostly the default apart from a few changes.
My source section looks like:
<source>
  @type tail
  path /var/log/containers/*my-app*.log
  pos_file /opt/bitnami/fluentd/logs/buffers/fluentd-docker.pos
  tag kubernetes.*
  read_from_head true
</source>
and my application writes richer, multi-line log entries to stdout, like:
2021-07-13 11:33:49.060 +0000 - [ERROR] - fatal error - play.api.http.DefaultHttpErrorHandler in postman-akka.actor.default-dispatcher-6 play.api.UnexpectedException: Unexpected exception[RuntimeException: java.net.ConnectException: Connection refused (Connection refused)]
at play.api.http.HttpErrorHandlerExceptions$.throwableToUsefulException(HttpErrorHandler.scala:328)
at play.api.http.DefaultHttpErrorHandler.onServerError(HttpErrorHandler
and the problem is that on the fluentd forwarder I can see (in /var/log/containers/*) that all records are stored in the following format:
{"log":"2021-07-13 19:54:48.523 +0000 - [ERROR] - from akka.io.TcpListener in postman-akka.actor.default-dispatcher-6 New connection accepted \n","stream":"stdout","time":"2021-07-13T19:54:48.523724149Z"}
{"log":"2021-07-13 19:54:48.523 +0000 - [ERROR] -- play.api.http.DefaultHttpErrorHandler in postman-akka.actor.default-dispatcher-6 \n","stream":"stdout","time":"2021-07-13T19:55:10.479279395Z"}
{"log":"2021-07-13 19:54:48.523 +0000 - [ERROR] - play.api.UnexpectedException: Unexpected exception[RuntimeException: }
{"log":"2021-07-13 19:54:48.523 +0000 - [ERROR] - java.net.ConnectException: Connection refused (Connection refused)] }
and the problem, as you can see here, is that each of those lines is stored as a separate log record.
I would like to extract the entire log message with the full stack trace, so I wrote this configuration for the fluentd parse section:
<parse>
  @type regexp
  expression /^(?<time>^(.*?:.*?)):\d\d.\d+\s\+0000 - (?<type>(\[\w+\])).- (?<text>(.*))/m
  time_key time
  time_format %Y-%m-%d %H:%M:%S
</parse>
but I am pretty sure this is not the problem, because for some reason the files in /var/log/containers/*.log already store the records in the wrong format. How can I configure the fluentd forwarder to take the logs from containers and store them in a non-JSON format?
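For what it's worth, that JSON wrapping is produced by Docker's json-file logging driver, not by fluentd, so a common approach is to unwrap the JSON at the tail source and then stitch the continuation lines back into one record. A minimal sketch, assuming the fluent-plugin-concat plugin is installed (path, pos_file, and tag copied from the source section above):

```
<source>
  @type tail
  path /var/log/containers/*my-app*.log
  pos_file /opt/bitnami/fluentd/logs/buffers/fluentd-docker.pos
  tag kubernetes.*
  read_from_head true
  <parse>
    @type json          # unwrap the Docker json-file envelope
    time_key time
    time_format %Y-%m-%dT%H:%M:%S.%N%z
  </parse>
</source>

<filter kubernetes.**>
  @type concat
  key log
  # A line starting with a timestamp begins a new record;
  # everything else (stack trace lines) is appended to it.
  multiline_start_regexp /^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3}/
</filter>
```

After the concat filter runs, your existing regexp parse section would then see the full multi-line message in the log field.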
In an EFK setup, fluentd suddenly stopped sending logs to Elasticsearch, with the following errors in its logs:
2020-09-28 18:48:55 +0000 [warn]: #0 Could not communicate to Elasticsearch, resetting connection and trying again. getaddrinfo: Name or service not known (SocketError)
2020-09-28 18:48:55 +0000 [warn]: #0 Remaining retry: 6. Retry to communicate after 512 second(s).
The elasticsearch components are up and running, and I can curl and access elasticsearch from inside the fluentd pod. There is no error message in the logs of the elasticsearch.
Restarting the fluentd pod or elasticsearch components did not help.
The issue was in one of the configurations uploaded to fluentd: the Elasticsearch host was set to a wrong value there. After fixing that configuration, the issue was resolved.
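For context, the host in question is the one in fluentd's elasticsearch output. A sketch of that block (the hostname below is hypothetical; an unresolvable value here produces exactly the getaddrinfo SocketError above):

```
<match **>
  @type elasticsearch
  # Must resolve from inside the fluentd pod, e.g. the service DNS name.
  host elasticsearch-master.logging.svc.cluster.local   # hypothetical
  port 9200
</match>
```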
I am getting the error below while starting DSE:
ERROR [main] 2020-02-26 13:08:33,269 DseModule.java:97 - {}. Exiting...
com.google.inject.CreationException: Unable to create injector, see the following errors:
1) An exception was caught and reported. Message: Unable to check disk space available to /u01/dse_ops/logs. Perhaps the Cassandra user does not have the necessary permissions
at com.datastax.bdp.DseModule.configure(Unknown Source)
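The message itself suggests the fix: the user running DSE cannot access the log directory. A minimal sketch of checking and repairing the permissions, assuming DSE runs as the cassandra user (adjust the user and path to your install):

```shell
# Inspect who owns the directory named in the error
ls -ld /u01/dse_ops/logs
# Hand ownership to the DSE service user (assumed: cassandra)
sudo chown -R cassandra:cassandra /u01/dse_ops/logs
# Make sure that user can read, write and traverse it
sudo chmod -R u+rwX /u01/dse_ops/logs
```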
I've installed NiFi 1.0 and imported a template.
I'm processing some files, and after a while NiFi crashes.
This is the error I'm getting:
2018-10-09 18:10:07,416 ERROR [NiFi logging handler] org.apache.nifi.StdErr [Error] :10768:38: cvc-complex-type.2.4.a: Invalid content was found starting with element 'template'. One of '{processGroup, remoteProcessGroup, connection, controllerService}' is expected.
I'm unable to set the properties below through the elasticsearch.yml file in Elasticsearch 6.2.1, although they worked in Elasticsearch 2.x:
threadpool.bulk.type: fixed
threadpool.bulk.size: 16
threadpool.bulk.queue_size: 5000
I'm getting the error below:
[2018-02-19T14:27:05,861][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.IllegalArgumentException: unknown setting [threadpool.bulk.queue_size] did you mean any of [thread_pool.bulk.queue_size, thread_pool.get.queue_size, thread_pool.index.queue_size, thread_pool.search.queue_size, thread_pool.bulk.size, thread_pool.listener.queue_size]?
	at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:125) ~[elasticsearch-6.2.1.jar:6.2.1]
Please help me to fix this. Thanks in advance.
In 2.x it was threadpool:
https://www.elastic.co/guide/en/elasticsearch/reference/2.4/modules-threadpool.html
but in the latest version it has changed to thread_pool:
https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-threadpool.html
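A sketch of the corrected elasticsearch.yml for 6.x, assuming the bulk pool is what you want to tune. Note there is no 6.x equivalent of the threadpool.bulk.type line: the type of the built-in thread pools can no longer be changed, which is why only size and queue_size appear in the error's suggestions.

```yaml
# Elasticsearch 6.x uses the thread_pool.* namespace (underscore).
thread_pool.bulk.size: 16
thread_pool.bulk.queue_size: 5000
```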