In OSGi there is a separation between the logging frontend and the actual output.
So using the LogService doesn't mean that anything is written to the console, for example; that is what a LogReaderService is responsible for.
In my current runtime I am adding org.apache.felix.log, which brings a LogReaderService implementation that should take care of the output. But I still don't see anything on the console... despite a lot of other stuff from other bundles.
In the next step I created my own LogListener, which should be called by a LogReaderService. I just used the code from this article and debugged the Activator to see if the listener is added. Still no output.
Lastly, I checked the Felix properties and set felix.log.level=3 (Info), but again, no output.
What puzzled me even more is that I could still see a lot of DEBUG information despite setting the level to Info:
16:47:46.311 [qtp1593165620-27] DEBUG org.eclipse.jetty.http.HttpParser
It seems to me that there are different logging strategies in place, which use different configuration properties. For example, after I added pax-logging-service (which uses a classic logging approach) to my runtime I could see the output, but for now I would like to stick with Felix logging.
Can someone please explain how to disable the Blueprint debug level, which I guess is causing the current output, and enable simple Felix logging via LogService? There must be a standard console implementation even though it's not specified in the spec.
Thanks to Christian Schneider and Balazs Zsoldos: both comments were very helpful.
To answer the question: I need to provide a ServiceTrackerCustomizer, as shown in Balazs' example here. Since I already had a ConsoleLogListener, it was enough to register the listener with the TrackerCustomizer.
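For illustration, a minimal sketch of that wiring, assuming an existing LogListener such as the ConsoleLogListener mentioned above (the class name LogReaderTracker is made up):

    import org.osgi.framework.BundleContext;
    import org.osgi.framework.ServiceReference;
    import org.osgi.service.log.LogListener;
    import org.osgi.service.log.LogReaderService;
    import org.osgi.util.tracker.ServiceTrackerCustomizer;

    // Attaches the given LogListener to every LogReaderService that appears.
    public class LogReaderTracker
            implements ServiceTrackerCustomizer<LogReaderService, LogReaderService> {

        private final BundleContext context;
        private final LogListener listener; // e.g. the existing ConsoleLogListener

        public LogReaderTracker(BundleContext context, LogListener listener) {
            this.context = context;
            this.listener = listener;
        }

        @Override
        public LogReaderService addingService(ServiceReference<LogReaderService> ref) {
            LogReaderService reader = context.getService(ref);
            reader.addLogListener(listener); // the listener now receives all log entries
            return reader;
        }

        @Override
        public void modifiedService(ServiceReference<LogReaderService> ref, LogReaderService reader) {
            // nothing to do
        }

        @Override
        public void removedService(ServiceReference<LogReaderService> ref, LogReaderService reader) {
            reader.removeLogListener(listener);
            context.ungetService(ref);
        }
    }

The Activator can then open a ServiceTracker with this customizer, e.g. new ServiceTracker<>(context, LogReaderService.class, new LogReaderTracker(context, listener)).open().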
Since my current task is about migrating a big legacy application into which I would like to introduce OSGi, it probably makes more sense to go with Pax Logging, since log4j is already used in hundreds of classes which we probably won't change to LogService.
The output of Pax Logging can be adjusted with the property org.ops4j.pax.logging.DefaultServiceLog.level set to INFO.
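For example, it can be passed to the JVM as a system property (depending on your launcher, a framework properties file may be the better place):

    -Dorg.ops4j.pax.logging.DefaultServiceLog.level=INFO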
I am using JBoss 7.x and have the following use case.
I am going to do load testing of messaging queues with JBoss. The queues are external to JBoss.
I will push a lot of messages into the queue, around 1000. When around 100+ messages have been pushed, I want to crash JBoss. Later I want to restart JBoss to verify the message processing.
I had earlier made use of Byteman to crash the JVM using the following:
JAVA_OPTS="-javaagent:/BYTEMAN_HOME/lib/byteman.jar=script:/QUICKSTART_HOME/jta-crash-rec/src/main/scripts/xa.btm ${JAVA_OPTS}"
Details are here: https://github.com/Naresh-Chaurasia/jboss-eap-quickstarts/tree/7.3.x/jta-crash-rec
In the above case, whenever an XA transaction happens the JVM is crashed using Byteman, but in my case I want to crash the JVM/JBoss only after, say, 100+ messages, i.e. not for each transaction but after processing some number of messages.
I have also tried a few examples from here to get ideas of how to achieve it, but did not succeed: https://developer.jboss.org/docs/DOC-17213#top
Question: How can I crash JBoss / the running JVM using Byteman or some other way?
See the Programmer's Guide that comes bundled with the distribution.
Sections headed "CountDowns" and "Aborting Execution" provide what's necessary. These are built-in features of the Rule Language.
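To sketch how the two could combine, a rule along these lines should do it; the class and method names are placeholders for whatever actually processes your messages, so treat this as untested:

    RULE crash JVM after 100 messages
    CLASS com.example.MyMessageListener
    METHOD onMessage
    AT ENTRY
    IF incrementCounter("messages") >= 100
    DO killJVM()
    ENDRULE

incrementCounter creates the counter on first use and returns the incremented value, and killJVM halts the JVM immediately, without running shutdown hooks, which matches the crash you want to simulate.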
I would like to monitor my pushes to the clients done with the well-known UI.access() sequence on the server side. The background is that I have to propagate lots of pushes to my clients, and I want to make sure nothing gets queued up.
I found only the client-side RPCQueue having a size(), but I have no idea whether that is the correct thing to be searching for, nor how to access it.
Thanks for any hint.
Gerry
If you want to know the size of the queue of tasks that have been enqueued using UI.access but not yet run, then you can use VaadinSession.getPendingAccessQueue.
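As a minimal sketch, assuming Vaadin 8 style imports (the threshold and helper class are invented for illustration):

    import com.vaadin.server.VaadinSession;

    public final class PushBackpressure {
        private static final int THRESHOLD = 100; // illustrative limit

        // True if it seems safe to enqueue more UI.access tasks.
        static boolean canEnqueueMore(VaadinSession session) {
            return session.getPendingAccessQueue().size() < THRESHOLD;
        }
    }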
This will, however, not give the full picture since it doesn't cover changes that have been applied to the server-side state (i.e. the UI.access task has already been executed) but not yet sent to the client. Those types of changes are tracked in a couple of different places depending on the type of change and the Vaadin version you're using.
For this kind of use case, it might be good to use the built-in beforeClientResponse functionality to apply your own changes as late as possible instead of applying changes eagerly.
With Vaadin versions up to 8, you do this by overriding the beforeClientResponse method in your component or extension class. You need to use markAsDirty() to ensure that beforeClientResponse will eventually be run for that instance.
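Schematically, for a custom component (the field and the caption-based state update are invented for illustration):

    import com.vaadin.ui.AbstractComponent;

    public class LiveValueComponent extends AbstractComponent {
        private String pendingValue; // only the latest value survives until the response

        void setValueLater(String value) {
            pendingValue = value;
            markAsDirty(); // ensures beforeClientResponse runs before the next response
        }

        @Override
        public void beforeClientResponse(boolean initial) {
            super.beforeClientResponse(initial);
            getState().caption = pendingValue; // apply the change as late as possible
        }
    }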
With Vaadin 10 and newer, there's instead UI.beforeClientResponse, to which you give a callback that will be run once, at an appropriate time, by the framework.
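Roughly like this; the component and property name are again just examples:

    import com.vaadin.flow.component.Component;
    import com.vaadin.flow.component.UI;

    public class DeferredUpdate {
        // Schedules an update to run once, just before the next client response.
        static void updateLater(UI ui, Component target, String latestValue) {
            ui.beforeClientResponse(target, context ->
                    target.getElement().setProperty("value", latestValue));
        }
    }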
In Go, how do you manage to write logs to multiple files based on the package name?
For example, in my current app I am trying to collect hardware stats from different packages called Netapp, IBM, etc., all under the same application. I would like to write the logs from those packages to separate files such as /var/log/myapp/netapp.log and /var/log/myapp/ibm.log.
Any pointer or clue would be very helpful.
Thanks James
One approach you could take is to implement the Observer pattern. It's a great approach when you need to make several things happen with the same input/event. In your case, logging the same input to different logs. You can find more information here.
In the situation you described, and following this example, you can do the following:
Your different logging implementations (with different logging destination folders) can implement the Observer interface by putting the logging code for each implementation in its OnNotify method.
Create an instance of eventNotifier and register all your logging implementations with the eventNotifier.Register method. Something like:
notifier := eventNotifier{
    observers: map[Observer]struct{}{},
}
notifier.Register(netAppLogger)
notifier.Register(ibmLogger)
Use eventNotifier.Notify whenever and wherever you need to log, and it will pass the event to all registered logging implementations.
I have a Spring Boot / Batch app. I want to use an "app data directory" (not the same as a properties file) rather than a DB-based datastore (i.e. SQL/Mongo).
The data stored in the app data directory is aggregated from several web services and stored as XML. Each Step within the Job fetches data and writes it locally; the next Step in the chain then picks up the created files and processes them for the following step (and so on).
The problem here is that each Step only sees data from the previous app run, i.e. the data as it was at app start time, not the data written directly after the preceding Step's execution.
I understand what is happening here: Spring is checking for any resources at launch and using them as-is before the Step actually runs.
Is there a magic trick to tell Spring to stop loading the specified resources/files at app launch?
Note: using Java config, not XML, and the latest Spring/Boot/Batch versions; I also tried @StepScope for all readers/writers.
Repo: https://github.com/RJPalombo/salesforceobjectreplicator
Thanks in advance!
No, there is no magic :-)
Firstly, your code is very well structured and easy to understand.
The first thing that pops into my eye is: why aren't you using the standard readers and writers from Spring Batch (FlatFileItemReader/Writer, StaxEventItemReader/Writer)? There is no need to implement this logic by yourself.
As far as I can see, the problem is that you load the whole data in the constructors of the readers.
The whole job structure (together with the step, reader, writer, and processor instances) is created when the Spring context is loaded, way before the job is actually executed.
Therefore, the readers just read empty files.
The simplest fix you could make is to implement the ItemStream interface in all your readers and writers,
and then read the data in the open method instead of the constructor. The open method is called just before the step gets executed.
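As a minimal sketch, assuming a reader that consumes one of the locally written files line by line (the class and the file handling are invented; ItemStreamReader simply combines ItemReader and ItemStream):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.Iterator;

    import org.springframework.batch.item.ExecutionContext;
    import org.springframework.batch.item.ItemStreamException;
    import org.springframework.batch.item.ItemStreamReader;

    public class DeferredFileReader implements ItemStreamReader<String> {

        private final Path input; // e.g. the XML file written by the previous step
        private Iterator<String> lines;

        public DeferredFileReader(Path input) {
            this.input = input; // the constructor only stores the path, it reads nothing
        }

        @Override
        public void open(ExecutionContext executionContext) throws ItemStreamException {
            try {
                lines = Files.readAllLines(input).iterator(); // loaded at step start
            } catch (IOException e) {
                throw new ItemStreamException("Could not read " + input, e);
            }
        }

        @Override
        public String read() {
            return lines.hasNext() ? lines.next() : null; // returning null ends the step
        }

        @Override
        public void update(ExecutionContext executionContext) {
            // no restart state kept in this sketch
        }

        @Override
        public void close() {
            lines = null;
        }
    }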
But that is only a quick fix and only helps to understand the behaviour of Spring Batch. The problem with this approach is that all the data is loaded at once, which means memory usage will grow with the amount of data; hence, memory would blow up when reading lots of data. That is something we don't want when doing batch processing.
So I recommend having a look at the standard readers and writers. Look at how they work; debug into them. See when the open/close methods are called; check what happens when the read method is called and what it does.
It is not that complicated, and having a look at your code, I'm sure you will understand it very quickly.
I'm encapsulating the EntLib 5 Logging Application Block. I've seen in the documentation that every time you want to log, you should have a look at IsLoggingEnabled(). The fact that it's a method and not a property tells me that it's an operation that takes some time to execute, but... could I cache that value in a local variable and check whether logging is possible based on it?
Cheers.
You cannot change the Logging settings through code, as stated in the Enterprise Library documentation. But there you can also read that:
Note: Run time changes to the configuration of the Logging Application Block are automatically detected after a short period, and the logging stack is updated. However, you cannot modify the logging stack at run time through code. For details of using configuration mechanisms that you can update at run time, see Updating Configuration Settings at Run Time.
That is, while you can't enable/disable the logging programmatically, it can change at run time if the configuration is edited manually.
So that is why you need to call the IsLoggingEnabled() operation every time, and it's not a good idea to cache its value.