Most of my app logging is done at the debug level because in sf 1.4 that level isn't used by symfony itself, which makes it easy to see just the messages I'm interested in with something like:
tail -f log/frontend_dev.log | grep "\[debug\]"
This is great in the dev environment while I'm sitting there watching it scroll past, but now I want to log only these debug messages in the production environment.
If I set the log level for the production log to debug, then obviously I'll get everything down to and including the debug level, which is far too much data.
Is it possible to write a logger that will just record [debug] messages and nothing else?
Of course it is possible - you can extend sfFileLogger and override the log($message, $priority) method (which sfFileLogger inherits from the sfLogger class).
...
public function log($message, $priority = self::DEBUG)
{
    if ($this->getLogLevel() != $priority)
    {
        return false;
    }

    return $this->doLog($message, $priority);
}
...
Now you have to configure your app to use the new logger class; the configuration is located in apps/<your app>/config/factories.yml:
prod:
  logger:
    class: sfAggregateLogger
    param:
      level: DEBUG
      loggers:
        sf_file_debug:
          class: myDebugOnlyLoggerClass
          param:
            level: DEBUG
            file: %SF_LOG_DIR%/%SF_APP%_%SF_ENVIRONMENT%.log
This logger will save only messages logged with exactly the priority configured in factories.yml (not that priority or higher).
By default I assume that Spring Boot/Camel uses org.apache.camel.support.processor.DefaultExchangeFormatter.
I wonder how I can set the 'showHeaders' flag inside a Spring Boot app,
because I hope to see the headers in the "org.apache.camel.tracing" log as well.
Wishing you all a wonderful day.
DefaultTracer is used by Camel to trace routes by default.
It is created with the showHeaders(false) formatter option set.
Therefore you could implement another Tracer (consider extending DefaultTracer) to enable including headers in traced messages.
I need this mostly in my tests, so I have built this into my base test class:
@BeforeEach
public void before() {
    if (camelContext.getTracer().getExchangeFormatter() instanceof DefaultExchangeFormatter) {
        DefaultExchangeFormatter def = (DefaultExchangeFormatter) camelContext.getTracer().getExchangeFormatter();
        def.setShowHeaders(true);
    }
}
When using the condition attribute on a @StreamListener annotation, if the condition is not met, DispatchingStreamListenerMessageHandler logs a WARN message with this text:
Cannot find a @StreamListener matching for message with id: [some_id]
For example, imagine we have three microservices:
AnimalService - producer application, that is going to produce Dog and Cat messages.
DogService - consumer application, to consume only Dog messages.
CatService - consumer application, to consume only Cat messages.
The AnimalService application sends a message and includes a type header parameter:
public void handleEvent(Animal animal) {
    MessageBuilder<Animal> messageBuilder = MessageBuilder.withPayload(animal)
            .setHeader("type", animal.getType());
    bindings.itemEventOutput().send(messageBuilder.build());
}
Both DogService and CatService are going to consume these messages, but DogService wants only "Dog" messages and CatService only "Cat" messages.
DogService will consume like this:
@StreamListener(target = "animal_events", condition = "headers['type']=='DOG'")
public void handleDogEvents(Message<String> message) {
    // important dog related logic
}
CatService will consume like this:
@StreamListener(target = "animal_events", condition = "headers['type']=='CAT'")
public void handleCatEvents(Message<String> message) {
    // important cat related logic
}
Because DogService is not handling Cat-related messages and vice versa, each service will have WARN messages like this in its logs:
Cannot find a @StreamListener matching for message with id: [some_id]
I found two solutions to avoid this, but they are probably not the best ones:
Create another @StreamListener in DogService that captures Cat events and does nothing but log a debug message.
Change the log level for the org.springframework.cloud.stream.binding package to ERROR, but this could hide some important WARN messages.
I'm using spring-cloud-stream 3.0.3.
Is there a better option (a configuration property)? Or is there no option other than refactoring my services? Thanks.
The annotation-based programming model for s-c-stream is all but deprecated. For the past year we've been promoting the functional programming model, which provides a more robust routing mechanism.
I also published a post with more details here. Hope that helps.
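To make the routing idea concrete without pulling in Spring: the sketch below is plain Java, not Spring Cloud Stream API, and every name in it (Router, register, dispatch) is made up for illustration. It models the key difference: a router selects exactly one handler per message based on a header, so no handler is ever offered a message it cannot match, and there is no equivalent of the "Cannot find a @StreamListener" warning.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

public class RouterSketch {
    // Hypothetical router: maps a routing key (here, the "type" header)
    // to exactly one consumer, instead of offering every message to every
    // listener and warning when none of them matches.
    static class Router {
        private final Map<String, Consumer<String>> routes = new HashMap<>();

        void register(String key, Consumer<String> handler) {
            routes.put(key, handler);
        }

        void dispatch(String typeHeader, String payload) {
            // Exactly one route is selected; unmatched types are dropped
            // deliberately rather than logged as a warning.
            Consumer<String> handler = routes.get(typeHeader);
            if (handler != null) {
                handler.accept(payload);
            }
        }
    }

    public static void main(String[] args) {
        List<String> dogLog = new ArrayList<>();
        List<String> catLog = new ArrayList<>();

        Router router = new Router();
        router.register("DOG", dogLog::add);
        router.register("CAT", catLog::add);

        router.dispatch("DOG", "rex");
        router.dispatch("CAT", "whiskers");
        router.dispatch("BIRD", "tweety"); // no route: silently ignored

        System.out.println(dogLog); // [rex]
        System.out.println(catLog); // [whiskers]
    }
}
```

In the actual functional model you would declare Consumer<Message<Animal>> beans and route between them with a routing expression over the message headers; check the Spring Cloud Stream reference documentation for the current property names.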
If you use logback as your logging framework, you can write a custom log filter, e.g. ContentLogFilter extending Filter<ILoggingEvent>, and override its decide method. In that method you only need one line: return event.getMessage().contains("Cannot find a @StreamListener matching for message") ? FilterReply.DENY : FilterReply.NEUTRAL;. The filter then discards any log event whose message contains the target text; every other event passes through to the next filter.
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.filter.Filter;
import ch.qos.logback.core.spi.FilterReply;

public class ContentLogFilter extends Filter<ILoggingEvent> {
    @Override
    public FilterReply decide(ILoggingEvent event) {
        return event.getMessage().contains("Cannot find a @StreamListener matching for message")
                ? FilterReply.DENY
                : FilterReply.NEUTRAL;
    }
}
Then add your custom log filter to each appender in the logback.xml config file:
<filter class="com.filter.ContentLogFilter" />
Note that your custom filter should be placed before any other filters.
This is my first answer on the platform; I ran into this problem myself and spent two days on it, so I hope the answer helps other developers. Thanks!
I'm used to working with Java and log4j, where I could dynamically change the log level for a single class without any additional (coding) effort.
Now that I work with Go (zap for logging), I'm looking for the same functionality but cannot find it.
Is there an easy way to change the log level dynamically for a file, a package, or a part of the code?
zap supports wrapping a standard log.Logger via constructors like NewStdLog, so if you want to keep using zap, the technique below still applies.
If you don't want to use a third-party logger at all, the technique is quite simple using Go's standard library log.Logger:
Define some default app loggers:
var (
    pkgname = "mypkgname"
    Info    = log.New(os.Stdout, "[INFO:"+pkgname+"] ", log.LstdFlags)
    Debug   = log.New(ioutil.Discard, "[DBUG:"+pkgname+"] ", log.LstdFlags|log.Lshortfile)
    Trace   = log.New(ioutil.Discard, "[TRCE:"+pkgname+"] ", log.LstdFlags|log.Lmicroseconds|log.Lshortfile)
)
So by default Info will write to stdout, while Debug and Trace are "off".
You can then safely (i.e. goroutine-safely) turn these log.Loggers on or off, either at start time:
func init() {
    if v, ok := os.LookupEnv("DEBUG"); ok && v != "" {
        Info.Println("ENV VAR 'DEBUG' set: enabling debug-level logging")
        Debug.SetOutput(os.Stderr)
    }
}
or during runtime:
func httpServiceHandler(r *req) {
    if r.TracingOn {
        Trace.SetOutput(os.Stderr)
    } else {
        Trace.SetOutput(ioutil.Discard)
    }
}
Logging at pkg level is just like any other log method:
Debug.Printf("http request: %+v", r)
And if there's an expensive log event that you don't want to generate when the logger is set to discard, you can safely check the logger's state like so:
if Trace.Writer() != ioutil.Discard {
    // do expensive logging here
    // e.g. bs, _ := ioutil.ReadAll(resp.Body); Trace.Println("RESP: RAW-http response:", string(bs))
}
I'm trying to understand the proper way to use Windows.Foundation.Diagnostics.LoggingChannel. In particular I'd like to understand the purpose behind the Level property and when is this property set.
As described in the MSDN documentation of LoggingChannel, the Level property is read-only. So how can I set the level that a channel accepts messages at?
Currently what I have designed as a logger for my app is something like below:
public class Logger
{
    public LoggingLevel LoggerLoggingLevel { get; set; }

    private LoggingSession _session;
    private LoggingChannel _channel;

    public Logger()
    {
        _channel = new LoggingChannel("MyChannel");
        _session = new LoggingSession("MySession");
        _session.AddLoggingChannel(_channel);
    }

    public void LogMessage(string msg, LoggingLevel level)
    {
        if (level >= LoggerLoggingLevel)
        {
            _channel.LogMessage(msg, level);
        }
    }

    // ...
}
// The consumer of the Logger class will instantiate an instance of it,
// sets the LoggerLoggingLevel, and then starts logging messages at various levels.
// At any point, the consumer can change LoggerLoggingLevel to start accepting
// messages at different levels.
Is this the right approach, or is there a better way (for example by somehow setting the level of _channel and then passing the message and level to the channel, letting the channel decide whether to filter out the message or accept and log it)?
LoggingChannel.Level tells you "somebody has expressed interest in receiving messages from you that are of severity 'Level' or higher". This property will be set automatically by the runtime when somebody subscribes to events from your LoggingChannel instance. (Within your app, you can subscribe to your app's events using the LoggingSession class; outside of your app, you can record your app's events using a tool like tracelog or xperf.)
In simple scenarios, you don't need to worry about the value of LoggingChannel.Level. The LoggingChannel.LogMessage function already checks the value of LoggingChannel.Level. It also checks the value of LoggingChannel.Enabled, which tells you whether anybody is subscribed to your events at any level. (Note that the value of LoggingChannel.Level is UNDEFINED and MEANINGLESS unless LoggingChannel.Enabled is true.) In normal use, you don't need to worry about LoggingChannel.Enabled or LoggingChannel.Level -- just call LogMessage and let LoggingChannel check the levels for you.
LoggingChannel exposes the Enabled and Level properties to support a more complex scenario where it is expensive to gather the data you are about to log. In this case, you would probably like to skip gathering the data if nobody is listening for your event. You would then write code like this:
if (channel.Enabled && channel.Level <= eventLevel)
{
    string expensiveData = GatherExpensiveData();
    channel.LogMessage(expensiveData, eventLevel);
}
Note that the Windows 10 version of LoggingChannel added a bunch of new methods to make life a bit easier. If your program will run on Windows 10 or later, you can use the IsEnabled method instead of separate checks for Enabled and Level:
if (channel.IsEnabled(eventLevel))
{
    string expensiveData = GatherExpensiveData();
    channel.LogMessage(expensiveData, eventLevel);
}
A bunch of other stuff was also added to LoggingChannel for Windows 10. You can now log complex events (strongly-typed fields) instead of just strings, you can define keywords and opcodes (look up ETW documentation for more information), and you can basically have your LoggingChannel act like a first-class ETW citizen.
I recently ran into a problem with even a simple Akka microkernel app: it cannot stay up on Amazon EC2. Here is the log:
Starting Akka...
Running Akka 2.0.5
Deploying file:/data/akka-scala/akka-2.0.5/deploy/catchif2_2.9.2-0.1-SNAPSHOT.jar
[DEBUG] [03/02/2013 09:35:32.626] [main] [EventStream] StandardOutLogger started
[DEBUG] [03/02/2013 09:35:33.415] [main] [EventStream(akka://hellokernel)] logger log1-Slf4jEventHandler started
[DEBUG] [03/02/2013 09:35:33.416] [main] [EventStream(akka://hellokernel)] Default Loggers started
Starting up com.catchif.HelloKernel
Successfully started Akka
Shutting down Akka...
Shutting down com.catchif.HelloKernel
Received message 'HELLO world!'
Successfully shut down Akka
Basically it starts up and then shuts down immediately.
I run the same code on my Mac and it stays up perfectly. There is no extra info in the log other than this:
03/02 09:35:33 INFO [hellokernel-akka.actor.default-dispatcher-4] a.e.s.Slf4jEventHandler - Slf4jEventHandler started
03/02 09:35:33 DEBUG[hellokernel-akka.actor.default-dispatcher-3] a.e.EventStream - logger log1-Slf4jEventHandler started
03/02 09:35:33 DEBUG[hellokernel-akka.actor.default-dispatcher-3] a.e.EventStream - Default Loggers started
The code is very simple as well.
import akka.actor.{ Actor, ActorSystem, Props }
import akka.kernel.Bootable

case object Start

class HelloActor extends Actor {
  val worldActor = context.actorOf(Props[WorldActor])

  def receive = {
    case Start ⇒ worldActor ! "Hello"
    case message: String ⇒
      println("Received message '%s'" format message)
  }
}

class WorldActor extends Actor {
  def receive = {
    case message: String ⇒ sender ! (message.toUpperCase + " world!")
  }
}

class HelloKernel extends Bootable {
  val system = ActorSystem("hellokernel")

  def startup = {
    system.actorOf(Props[HelloActor]) ! Start
  }

  def shutdown = {
    system.shutdown()
  }
}
Not sure why this happens. I did see it stay up on Amazon once, but it has failed every time since.
Thanks in advance,
Best,
James
Try this and tell me what happens:
class HelloKernel extends App {
  val system = ActorSystem("hellokernel")
  system.actorOf(Props[HelloActor]) ! Start
}
This is a minimalist implementation and should hold the shell until you hit Ctrl-C. If that works, then I would say something is off with your extension of the Bootable class (something you are missing).