How to write logs into multiple files? - go

In Go, how do you manage to write logs to multiple files based on the package name?
For example, in my current app I am trying to collect hardware stats from different packages called Netapp, IBM, etc., but under the same application. So I would like to write the logs from those packages to separate files, like /var/log/myapp/netapp.log and /var/log/myapp/ibm.log.
Any pointer or clue would be very helpful.
Thanks, James

One approach you could take is to implement the Observer pattern. It's a great approach when you need to make several things happen with the same input/event; in your case, logging the same event to different logs. You can find more information here.
In the situation you described, and following this example, you can do the following:
Your different logging implementations (with different logging destinations) can implement the Observer interface by putting the logging code for each implementation in the OnNotify method.
Create an instance of eventNotifier and register all your logging implementations with the eventNotifier.Register method. Something like:
notifier := eventNotifier{
    observers: map[Observer]struct{}{},
}
notifier.Register(netAppLogger)
notifier.Register(ibmLogger)
Use eventNotifier.Notify whenever and wherever you need to log, and it will invoke all registered logging implementations.
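For illustration, here is a minimal, self-contained sketch of how those pieces could fit together. Only Observer, OnNotify, eventNotifier, Register, and Notify follow the linked example; the Event and fileLogger types (and the idea of filtering by source) are assumptions made up for this sketch:

package main

import (
	"log"
	"os"
)

// Event is a hypothetical payload; carry whatever your packages report.
type Event struct {
	Source  string // e.g. "netapp" or "ibm"
	Message string
}

// Observer matches the interface from the linked example.
type Observer interface {
	OnNotify(Event)
}

// fileLogger is one logging implementation; it owns a single log file.
type fileLogger struct {
	source string
	logger *log.Logger
}

func newFileLogger(source, path string) (*fileLogger, error) {
	f, err := os.OpenFile(path, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o644)
	if err != nil {
		return nil, err
	}
	return &fileLogger{source: source, logger: log.New(f, "", log.LstdFlags)}, nil
}

// OnNotify is where the per-implementation logging code lives. Each
// observer decides what to do with the event; this one only writes
// events coming from the package it is responsible for.
func (l *fileLogger) OnNotify(e Event) {
	if e.Source == l.source {
		l.logger.Print(e.Message)
	}
}

type eventNotifier struct {
	observers map[Observer]struct{}
}

func (n *eventNotifier) Register(o Observer) { n.observers[o] = struct{}{} }

// Notify fans the event out to every registered logging implementation.
func (n *eventNotifier) Notify(e Event) {
	for o := range n.observers {
		o.OnNotify(e)
	}
}

func main() {
	netAppLogger, err := newFileLogger("netapp", "/var/log/myapp/netapp.log")
	if err != nil {
		log.Fatal(err)
	}
	ibmLogger, err := newFileLogger("ibm", "/var/log/myapp/ibm.log")
	if err != nil {
		log.Fatal(err)
	}

	notifier := eventNotifier{observers: map[Observer]struct{}{}}
	notifier.Register(netAppLogger)
	notifier.Register(ibmLogger)

	// Every package logs through the same notifier; routing to the
	// right file is the observers' concern, not the caller's.
	notifier.Notify(Event{Source: "netapp", Message: "collected filer stats"})
}

The nice side effect of this design is that the collecting code never knows how many logs exist; adding a third vendor later is just one more Register call.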

Related

OMNeT++ WirelessHost sending useful information

I am new to OMNeT++, and I am trying to send messages from one node to another wirelessly.
Basically, I would like to do something like the TicToc tutorial of OMNeT++ (https://docs.omnetpp.org/tutorials/tictoc/), but wirelessly.
I have installed INET already, and I have seen the wireless example, which uses UdpBasicApp. However, I do not know how to change the data of the messages sent while using UdpBasicApp. In my case, what I am sending (i.e. the data) is very important because it is part of a bigger project. Eventually, the idea is to use the 802.11p standard (which exists in VEINS) and multiple nodes, but I thought this was a good place to start.
I hope someone can help me out.
Kind regards
Just to be aware: 802.11p is also supported directly in INET. Just set the opMode parameter on the network interface.
You will need to create your own application module. Take a look at UdpBasicApp, copy it, and modify it according to your needs. Check the sendPacket() function, which creates an ApplicationPacket. ApplicationPacket contains only a single sequence number, but you can create your own application-level data structure and use that for sending.

Using Logs or cobra.Command Println for user feedback?

spf13/cobra offers a number of elegant tools to provide feedback to the user. I have more experience with Python/headless services, where the standard is to use logging libraries and then redirect to stdio if necessary.
However, the more I've been exploring cobra, the more this feels like the wrong path. Instead, it feels like I should send everything through cobra and pick and choose from that buffer whatever should go to logging.
Is there any idiomatic guidance here?
I would suggest using the methods provided by cobra.Command for messages that are intended to be read by users.
Logs are usually used to show/save messages that will be read by developers (in this case, you), or by users who explicitly want to read the logs.
With that reasoning, you can actually use both of them. For example, you can call
c.Println("<success message>") to tell users that the command succeeded, and emit
Debug/Info/Error logs in your CLI app, which will be displayed (or saved in a logfile) if the user passes the --verbose flag to your app.
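A small sketch of that split; the --verbose flag name and the messages are made up for this example, while c.Println and PersistentFlags are real cobra API:

package main

import (
	"io"
	"log"

	"github.com/spf13/cobra"
)

func main() {
	var verbose bool

	cmd := &cobra.Command{
		Use: "myapp",
		RunE: func(c *cobra.Command, args []string) error {
			if !verbose {
				// Developer-facing logs are hidden unless --verbose is set.
				log.SetOutput(io.Discard)
			}
			log.Println("debug: starting work") // for developers
			// ... the actual work happens here ...
			c.Println("done!") // for users, via the command's output
			return nil
		},
	}
	cmd.PersistentFlags().BoolVar(&verbose, "verbose", false, "show debug logs")

	if err := cmd.Execute(); err != nil {
		log.Fatal(err)
	}
}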

Spring Batch (Boot) - Using custom app data directory for application configuration - App uses previous run data and not current

I have a Spring Boot / Batch app. I want to use an "app data directory" (not the same as a properties file) rather than a DB-based datastore (i.e. SQL/Mongo).
The data stored in the app data directory is aggregated from several web services and stored as XML. Each Step within the Job fetches data and writes it locally, then the next Step in the chain picks up the created files and processes them for the following step (and so on).
The problem is that each Step only sees data from the previous app run, i.e. the data as it was at app start time, not as it is directly after the preceding Step's execution.
I understand what is happening here: Spring is checking for any resources at launch and using them as-is before the Step is actually run.
Is there a magic trick to get Spring to stop loading the specified resources/files at app launch?
Note: Using Java config, not XML, and the latest Spring/Boot/Batch versions; also tried @StepScope for all readers/writers.
Repo: https://github.com/RJPalombo/salesforceobjectreplicator
Thanks in advance!
No, there is no magic :-)
Firstly, your code is very well structured and easy to understand.
The first thing that pops out at me is: why aren't you using the standard readers and writers from Spring Batch (FlatFileItemReader/Writer, StaxEventItemReader/Writer)? There is no need to implement this logic yourself.
As far as I can see, the problem is that you load all the data in the constructors of the readers.
The whole job structure (together with the step, reader, writer, and processor instances) is created when the Spring context is loaded, way before the job is actually executed.
Therefore, the readers just read empty files.
The simplest fix would be to implement the ItemStream interface in all your readers and writers,
and read the data in the open method instead of the constructor. The open method is called just before the step gets executed.
But that is only a quick fix, and it only helps to understand the behaviour of Spring Batch. The problem with this approach is that all the data is loaded at once, which means memory usage will grow with the amount of data; hence, the memory would blow up when reading lots of data. That is something we don't want when doing batch processing.
So I recommend that you have a look at the standard readers and writers. Look at how they work, and debug into them. See when the open/close methods are called; check what happens when the read method is called and what it does.
It is not that complicated, and having looked at your code, I'm sure you will be able to understand it very quickly.
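Since the other examples on this page are in Go, here is the lazy-loading idea sketched in Go rather than Java. Spring Batch's real ItemStream is a Java interface and the file name below is made up; the only point is that construction remembers the path while open actually reads it:

package main

import (
	"fmt"
	"os"
	"strings"
)

// reader deliberately loads nothing at construction time.
type reader struct {
	path  string
	items []string
}

// Open plays the role of ItemStream.open(): it runs right before the
// step executes, so it sees the file the previous step just wrote,
// not whatever existed when the application was launched.
func (r *reader) Open() error {
	b, err := os.ReadFile(r.path)
	if err != nil {
		return err
	}
	r.items = strings.Split(strings.TrimSpace(string(b)), "\n")
	return nil
}

func main() {
	r := &reader{path: "accounts.xml"} // constructed early, reads nothing
	// ... an earlier step writes accounts.xml here ...
	if err := r.Open(); err != nil { // called at execution time
		fmt.Println(err)
		return
	}
	fmt.Println(len(r.items), "items loaded")
}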

How to handle different logging solutions in OSGi (Apache Felix)

In OSGi there is a separation between the logging frontend and the actual output.
So using the LogService doesn't mean that anything is written to the console, for example; that is what a LogReaderService is responsible for.
In my current runtime I am adding org.apache.felix.log, which brings a LogReaderService implementation that should take care of the output. But I still don't see anything on the console... despite seeing a lot of other stuff from other bundles.
As a next step, I created my own LogListener, which should be called by a LogReaderService. I just used the code from this article and debugged the Activator to see that the listener is added. Still no output.
Lastly, I checked the Felix properties and set felix.log.level=3 (Info), but again, no output.
What puzzled me even more is that I could still see a lot of DEBUG information despite setting the level to Info:
16:47:46.311 [qtp1593165620-27] DEBUG org.eclipse.jetty.http.HttpParser
It seems to me that there are different logging strategies in place, which use different configuration properties. For example, after I added pax-logging-service (which uses a classic logging approach) to my runtime, I could see the output; but for now I would like to stick with Felix logging.
Can someone please explain how to disable the Blueprint debug level, which I guess is causing the current output, and enable simple Felix logging via the LogService? There must be a standard console implementation even though it's not specified in the spec.
Thanks to Christian Schneider and Balazs Zsoldos: both comments were very helpful.
To answer the question: I needed to provide a ServiceTrackerCustomizer, as shown in Balazs's example here. Since I already had a ConsoleLogListener, it was enough to register the listener with the TrackerCustomizer.
Since my current task is about migrating a big legacy application into which I would like to introduce OSGi, it probably makes more sense to go with Pax Logging, since log4j is already used in hundreds of classes that we probably won't change to use the LogService.
The output of Pax Logging can be adjusted by setting the property org.ops4j.pax.logging.DefaultServiceLog.level to INFO.

multiple processes writing to a single log file

This is intended to be a lightweight, generic solution, although the problem is currently with an IIS CGI application that needs to log a timeline of events (second resolution) for troubleshooting a situation where a later request ends up in the MySQL database BEFORE an earlier request!
So it boils down to logging debug statements to a single text file.
I could write a service that manages a queue as suggested in this thread:
Issue writing to single file in Web service in .NET
but deploying the service on each machine is a pain
or I could use a global mutex, but this would require each instance to open and close the file for each write;
or I could use a database, which would handle this for me, but it doesn't make sense to use a database like MySQL to troubleshoot a timeline issue with itself. SQLite is another possibility, but this thread
http://www.perlmonks.org/?node_id=672403
suggests that it is not a good choice either.
I am really looking for a simple approach, something as blunt as writing to individual files for each process and consolidating them occasionally with a scheduled app. I do not want to over-engineer this, nor spend a week implementing it. It is only needed occasionally.
Suggestions?
Try the simplest solution first: each write to the log opens and closes the file. If you experience problems with this, which you probably won't, look for another solution.
You can use file locking. Lock the file for writing, write the message, unlock.
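To illustrate both suggestions so far, here is a Go sketch of the open-write-close pattern with an exclusive lock around each write. Take it as the pattern rather than a drop-in: the question is about IIS/CGI, and syscall.Flock is Unix-only (on Windows you would reach for LockFileEx instead):

package main

import (
	"fmt"
	"os"
	"syscall"
	"time"
)

// appendLine opens the shared log, takes an exclusive advisory lock,
// writes one line, and closes the file. Each process does this per message.
func appendLine(path, msg string) error {
	f, err := os.OpenFile(path, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o644)
	if err != nil {
		return err
	}
	defer f.Close()

	// Blocks until any other writer releases the lock.
	if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
		return err
	}
	defer syscall.Flock(int(f.Fd()), syscall.LOCK_UN)

	_, err = fmt.Fprintf(f, "%s %s\n", time.Now().Format(time.RFC3339), msg)
	return err
}

func main() {
	if err := appendLine("shared.log", "request received"); err != nil {
		fmt.Println(err)
	}
}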
My suggestion, if you want to preserve performance, is to think about asynchronous logging. Why not send your log data over UDP to a service listening on a port, which then writes it to the log file?
I would also suggest some kind of central logger that can be called by each process in an asynchronous way. Whether the communication is UDP or RPC or whatever is an implementation detail.
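A rough sketch of that central-logger idea in Go; the port and file name are made up, and UDP gives no delivery guarantee, which is the price of the asynchrony:

package main

import (
	"fmt"
	"log"
	"net"
	"os"
)

func main() {
	addr, err := net.ResolveUDPAddr("udp", ":9999")
	if err != nil {
		log.Fatal(err)
	}
	conn, err := net.ListenUDP("udp", addr)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	f, err := os.OpenFile("shared.log", os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o644)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// A single process owns the file, so writes never interleave.
	// Each producer just sends one datagram per log line, e.g.:
	//   c, _ := net.Dial("udp", "127.0.0.1:9999")
	//   c.Write([]byte("request received"))
	buf := make([]byte, 64*1024)
	for {
		n, _, err := conn.ReadFromUDP(buf)
		if err != nil {
			log.Print(err)
			continue
		}
		fmt.Fprintf(f, "%s\n", buf[:n])
	}
}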
Even though it's an old post, has anyone got an idea why not to use the following concept:
Creating/opening a file with share mode of FILE_SHARE_WRITE.
Having a named global mutex, and opening it.
Whenever a file write is desired, lock the mutex first, then write to the file.
Any input?
