Set Log Level of Storm Topology from Start - apache-storm

I have a bug that occurs in my Storm topology during initialization. I would like to set the log level to DEBUG from when the topology is started.
I realize there is a mechanism to dynamically set the log level for a running topology using either the Storm UI or CLI, but the bug occurs during initialization, before I have a chance to change the setting dynamically.
How can I statically set the log level to DEBUG so that I can see more detailed logs when my topology is initialized?

The following only applies to Storm 2.0.0 and later.
You can include a log4j2 config file in your topology jar. You then need to set the topology.logging.config property in your topology configuration.
I'll include the documentation here for convenience:
Log file the user can use to configure Log4j2. Can be a resource in the jar (specified with classpath:/path/to/resource) or a file. This configuration is applied in addition to the regular worker log4j2 configuration. The configs are merged according to the rules here: https://logging.apache.org/log4j/2.x/manual/configuration.html#CompositeConfiguration
See https://github.com/apache/storm/blob/885ca981fc52bda6552be854c7e4af9c7a451cd2/storm-client/src/jvm/org/apache/storm/Config.java#L735
The "regular worker log4j2 configuration" is the log4j2/worker.xml file in your Storm deployment, assuming default settings.
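As a sketch (the file name log4j2-topology.xml and the package com.example.mytopology are placeholders, not anything Storm prescribes): bundle a Log4j2 config under src/main/resources and point topology.logging.config at it, e.g. conf.put("topology.logging.config", "classpath:/log4j2-topology.xml"). Because the file is merged on top of worker.xml, it only needs the loggers you want to raise:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- log4j2-topology.xml: merged with the worker's regular log4j2 config.
     com.example.mytopology is a placeholder for your topology's package. -->
<Configuration>
    <Loggers>
        <Logger name="com.example.mytopology" level="DEBUG"/>
    </Loggers>
</Configuration>
```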

Related

How do I set the logging level in Quarkus?

I would like to change the logging level of my Quarkus application.
How can I do that either from the configuration file or at runtime?
The property that controls the root logging level is quarkus.log.level (it defaults to INFO).
This property can be set in application.properties or overridden at runtime using -Dquarkus.log.level=DEBUG.
You can also configure finer-grained, per-category logging using quarkus.log.category.
For example, for RESTEasy you could set:
quarkus.log.category."org.jboss.resteasy".level=DEBUG
For more information about logging in Quarkus, please check this guide.
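Putting the two properties together, a minimal application.properties sketch (the RESTEasy category is just the example from above) could be:

```properties
# Root log level for all categories (defaults to INFO)
quarkus.log.level=INFO
# Finer-grained level for a single category
quarkus.log.category."org.jboss.resteasy".level=DEBUG
```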

How to disable Quarkus logging to a file (quarkus.log)?

I would like to run my Quarkus app in a container where the best practice is to only log to the console and not to a file.
How can I do that?
To disable file logging, edit your application.properties file and add the following property:
quarkus.log.file.enable=false
By default Quarkus logs to both the console and to a file named quarkus.log.
In cases where writing to the log file is not necessary (for example when running a Quarkus app in a Kubernetes Pod), quarkus.log.file.enable=false can be used.
This property can be set either in application.properties or be overridden at runtime (using -Dquarkus.log.file.enable=false).
See this guide for more information about logging.
UPDATE
With this PR, logging to a file is now disabled by default.

Configure DEBUG log level for nimbus logs in apache storm

Is there a way to enable debug logging in Apache Storm, not at the topology level but for the master node (nimbus.log)? I want to enable the DEBUG level for nimbus.log to understand exactly how scheduling works.
I have already looked at dynamic log-level changes through the UI, but those only apply to topologies.
Non-worker logging is configured in the storm/log4j2/cluster.xml file (https://github.com/apache/storm/blob/master/log4j2/cluster.xml#L86). This is a standard Log4j2 configuration file, so refer to the Log4j2 documentation for how it works.
You should be able to just add a new logger at the bottom there for the package(s) you want logs from, and set the level to DEBUG.
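As a sketch, the extra logger in cluster.xml could look like the following (org.apache.storm.scheduler is an assumption about which package the scheduling logs come from, and the A1 appender name is taken from the stock cluster.xml and may differ in your deployment):

```xml
<!-- Added alongside the other <Logger> entries in log4j2/cluster.xml -->
<Logger name="org.apache.storm.scheduler" level="DEBUG" additivity="false">
    <AppenderRef ref="A1"/>
</Logger>
```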

Configure Log4j1.x to log asynchronously through property files

I have a large, Spring-based eCommerce framework as a codebase that uses a preconfigured log4j 1.x. I can override property values defined in log4j.properties.
I'd like to configure log4j to log asynchronously to the console/file. I've attempted to define new appenders and override existing ones, but I'm not seeing any output to the console and am unsure why.
Is it possible to wrap the current log4j appenders in asynchronous calls?
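One caveat worth knowing here: log4j 1.x's AsyncAppender forwards events to other appenders via appender-ref, which the log4j.properties syntax cannot express, so an XML configuration (log4j.xml) is typically required. A minimal sketch, assuming a plain console appender, might look like:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">
    <appender name="CONSOLE" class="org.apache.log4j.ConsoleAppender">
        <layout class="org.apache.log4j.PatternLayout">
            <param name="ConversionPattern" value="%d %-5p %c - %m%n"/>
        </layout>
    </appender>
    <!-- AsyncAppender buffers events and writes them on a background thread -->
    <appender name="ASYNC" class="org.apache.log4j.AsyncAppender">
        <appender-ref ref="CONSOLE"/>
    </appender>
    <root>
        <level value="INFO"/>
        <appender-ref ref="ASYNC"/>
    </root>
</log4j:configuration>
```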

Loading properties file in storm cluster mode

In my topology there is a small piece of code that loads configuration from a properties file on the classpath:
Properties p = new Properties();
InputStream is = getClass().getClassLoader().getResourceAsStream("dev.properties");
p.load(is);
It works great when I run the jar in local-mode Storm, but when I try it in cluster mode, it fails with a NullPointerException.
The properties file is in src/main/resources(Maven structure) and properly included in jar file.
Is there any possible reason?
Besides, I run into a lot of trouble when running topologies with outbound interaction (for example with ElasticSearch) in cluster-mode Storm, even though they work perfectly in local mode.
What should I keep in mind before using cluster-mode Storm?
Load your Properties object while building the topology and then pass it to your bolts/spouts via their constructors where necessary.
Alternatively, you can configure a network file system (NFS) in your Storm cluster, place the properties file at an NFS location, and read it from there.
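The constructor approach can be sketched in plain Java (ConfiguredBolt and the property keys are hypothetical names for illustration; a real bolt would extend Storm's BaseRichBolt). The key point is that Storm serializes spouts/bolts when the topology is submitted, so constructor-injected state must be Serializable, which java.util.Properties is:

```java
import java.io.Serializable;
import java.io.StringReader;
import java.util.Properties;

// Hypothetical bolt-like class: holds the configuration it was built with.
class ConfiguredBolt implements Serializable {
    private final Properties props;

    ConfiguredBolt(Properties props) {
        this.props = props;
    }

    String get(String key) {
        return props.getProperty(key);
    }
}

public class Main {
    static Properties loadProps() throws Exception {
        Properties p = new Properties();
        // In a real topology you would load the file from the jar instead:
        // p.load(Main.class.getClassLoader().getResourceAsStream("dev.properties"));
        p.load(new StringReader("es.host=localhost\nes.port=9200"));
        return p;
    }

    public static void main(String[] args) throws Exception {
        // Build the bolt with its configuration before submitting the topology;
        // the serialized bolt then arrives at the workers already configured.
        ConfiguredBolt bolt = new ConfiguredBolt(loadProps());
        System.out.println(bolt.get("es.host"));
    }
}
```

This avoids relying on classloader resource lookup inside worker JVMs entirely, since the configuration travels with the serialized topology.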