Can IsLoggingEnabled() change at runtime?

I'm encapsulating the EntLib 5 Logging Application Block. I've seen in the documentation that every time you want to log, you should first check IsLoggingEnabled(). The fact that it's a method and not a property suggests it's an operation that takes some time, but could I cache that value in a local variable and decide whether to log based on it?
Cheers.

You cannot change the logging settings through code, as stated in the Enterprise Library documentation. But there you can also read that:
Note: Run-time changes to the configuration of the Logging Application Block are automatically detected after a short period, and the logging stack is updated. However, you cannot modify the logging stack at run time through code. For details of using configuration mechanisms that you can update at run time, see Updating Configuration Settings at Run Time.
That is, while you can't enable or disable logging programmatically, the setting can change at run time if the configuration is edited manually.
So that is why you need to call IsLoggingEnabled() every time, and it's not a good idea to cache its value.
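To make the pitfall concrete, here is a minimal sketch of the pattern. EntLib itself is .NET, so this Java version only mirrors the idea through a hypothetical LoggingBackend interface; the point is that the flag is re-read on every call:

    // Hypothetical Java mirror of the EntLib pattern (EntLib itself is .NET):
    // the enabled flag is re-read on every call because the underlying
    // configuration can change at run time behind the application's back.
    public class LoggingFacade {

        // Minimal backend contract assumed for this sketch.
        public interface LoggingBackend {
            boolean isLoggingEnabled();
            void write(String message);
        }

        private final LoggingBackend backend;

        public LoggingFacade(LoggingBackend backend) {
            this.backend = backend;
        }

        public void log(String message) {
            // WRONG: caching the flag at construction time would go stale
            // as soon as someone edits the configuration file.
            if (backend.isLoggingEnabled()) {  // ask every time instead
                backend.write(message);
            }
        }
    }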

Related

Spring Batch (Boot) - Using custom app data directory for application configuration - App uses previous run data and not current

I have a Spring Boot / Batch app. I want to use an "app data directory" (not the same as a properties file) instead of a DB-based datastore (i.e. SQL/Mongo).
The data stored in the app data directory is aggregated from several web services and stored as XML. Each Step within the Job fetches data and writes it locally, then the next Step in the chain picks up the created Files and processes them for the step after that (and so on).
The problem here is that each Step only fetches data from the previous app run, i.e. the data present at app start time, not the data written directly before the Step executes.
I understand what is happening: Spring checks for the resources at launch and uses them as-is before the Step is actually run.
Is there a magic trick to tell Spring to stop loading the specified resources/Files at app launch?
Note: using Java Config, not XML, and the latest Spring/Boot/Batch versions; I also tried @StepScope for all readers/writers.
Repo: https://github.com/RJPalombo/salesforceobjectreplicator
Thanks in advance!
No, there is no magic :-)
Firstly, your code is very well structured and easy to understand.
The first thing that catches my eye is: why aren't you using the standard readers and writers from Spring Batch (FlatFileItemReader/Writer, StaxEventItemReader/Writer)? There is no need to implement this logic yourself.
As far as I can see, the problem is that you load all the data in the constructors of the readers.
The whole job structure (together with the step, reader, writer, and processor instances) is created when the Spring context is loaded, well before the job is actually executed.
Therefore, the readers just read empty files.
The simplest fix is to implement the ItemStream interface in all your readers and writers and read the data in the open method instead of the constructor. The open method is called just before the step executes; see the sketch below.
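A minimal sketch of that fix, with hypothetical names (your actual readers work on the XML produced by the previous step): the constructor only records where the file will be, and the data is loaded in open().

    import org.springframework.batch.item.ExecutionContext;
    import org.springframework.batch.item.ItemStreamException;
    import org.springframework.batch.item.ItemStreamReader;

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.Iterator;

    // Hypothetical reader: the constructor only records where the file will be;
    // the data is loaded in open(), which Spring Batch calls just before the
    // step executes, after the previous step has written the file.
    public class StepScopedFileReader implements ItemStreamReader<String> {

        private final Path inputFile;      // produced by the previous step
        private Iterator<String> lines;

        public StepScopedFileReader(Path inputFile) {
            // Do NOT read the file here: this runs when the Spring context is
            // built, long before any step has produced data.
            this.inputFile = inputFile;
        }

        @Override
        public void open(ExecutionContext executionContext) throws ItemStreamException {
            try {
                // Loads everything at once; fine as a demo, see the caveat below.
                lines = Files.readAllLines(inputFile).iterator();
            } catch (Exception e) {
                throw new ItemStreamException("Cannot open " + inputFile, e);
            }
        }

        @Override
        public String read() {
            return (lines != null && lines.hasNext()) ? lines.next() : null; // null ends the step
        }

        @Override
        public void update(ExecutionContext executionContext) {
            // no restart state kept in this sketch
        }

        @Override
        public void close() {
            lines = null;
        }
    }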
But that is only a quick fix, and it mainly helps to understand the behaviour of Spring Batch. The problem with this approach is that all the data is loaded at once, which means memory usage grows with the amount of data; the memory would blow up when reading lots of data, something we don't want in batch processing.
So I recommend that you have a look at the standard readers and writers. Look at how they work and debug into them: see when the open/close methods are called, and check what happens when the read method is called and what it does.
It is not that complicated, and having looked at your code, I'm sure you will understand it very quickly.

Are Changes Made to the CRM Unsecure Configuration Settings of a Plugin Instantaneous?

If I have some configuration changes in the Unsecure (does it matter?) Configuration Settings, and I make a change, will that force all instances of the plugin to get the newest settings, or does it take a while for the configuration settings to propagate?
The change does have to propagate to each front-end web server, so I have definitely seen very short delays before, but I'm talking seconds. The vast majority of the time, as soon as I hit Update Step and then initiate the action in the UI, I can see that the plugin ran with the updated configuration value.
I'm less sure about delays when it comes to plugins running in the async service. Meaning, if 30 async plugins/workflows are queued up and you make your config change, I'm not sure whether those queued-up async jobs will use the new value or the old one.
One easy way to investigate this would be for your plugin to write to the trace log, and then set the trace log level in System Settings to All. Plugin trace log records show which configuration values the plugin ran with.

How to handle different logging solutions in OSGi (Apache Felix)

In OSGi there is a separation between the logging frontend and the actual output.
So using the LogService doesn't mean that anything is written to the console, for example. That is what a LogReaderService is responsible for.
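To make the separation concrete, here is a minimal sketch of the frontend side (a plain BundleActivator; the names are just illustrative). Handing an entry to the LogService says nothing about whether it ever reaches the console:

    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;
    import org.osgi.framework.ServiceReference;
    import org.osgi.service.log.LogService;

    // Sends an entry to the LogService frontend; whether it ever reaches the
    // console depends on some LogReaderService listener, not on this code.
    public class LoggingActivator implements BundleActivator {

        @Override
        public void start(BundleContext context) {
            ServiceReference<LogService> ref = context.getServiceReference(LogService.class);
            if (ref != null) {
                LogService log = context.getService(ref);
                log.log(LogService.LOG_INFO, "Bundle started");
            }
        }

        @Override
        public void stop(BundleContext context) {
            // nothing to clean up in this sketch
        }
    }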
In my current runtime I am adding org.apache.felix.log, which brings a LogReaderService implementation that should take care of the output. But I still don't see anything on the console, even though there is a lot of other output from other bundles.
In the next step I created my own LogListener, which should be called by a LogReaderService. I just used the code from this article and debugged the Activator to see whether the listener is added. Still no output.
Lastly, I checked the Felix properties and set felix.log.level=3 (Info), but again no output.
What I was wondering about even more is that I could still see a lot of DEBUG information despite setting the level to Info:
16:47:46.311 [qtp1593165620-27] DEBUG org.eclipse.jetty.http.HttpParser
It seems to me that there are different logging strategies in place which use different configuration properties. For example, after I added the pax-logging-service (which uses a classic logging approach) to my runtime I could see the output, but currently I would like to stick with Felix logging.
Can someone please explain how to disable the Blueprint debug level, which I guess is causing the current output, and enable simple Felix logging via the LogService? There must be a standard console implementation even though it's not specified in the spec.
Thanks to Christian Schneider and Balazs Zsoldos: both comments were very helpful.
To answer the question: I needed to provide a ServiceTrackerCustomizer, as shown in Balazs's example here. Since I already had a ConsoleLogListener, it was enough to register the listener with the tracker customizer; see the sketch below.
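A minimal sketch of that wiring; the class name and the inline listener are stand-ins for my actual ConsoleLogListener, and extending ServiceTracker and overriding its customizer methods is equivalent to passing a separate ServiceTrackerCustomizer:

    import org.osgi.framework.BundleContext;
    import org.osgi.framework.ServiceReference;
    import org.osgi.service.log.LogEntry;
    import org.osgi.service.log.LogListener;
    import org.osgi.service.log.LogReaderService;
    import org.osgi.util.tracker.ServiceTracker;

    // Tracks every LogReaderService in the runtime and attaches a
    // console-printing listener to each one as it appears.
    public class LogReaderTracker extends ServiceTracker<LogReaderService, LogReaderService> {

        private final LogListener listener =
                (LogEntry entry) -> System.out.println("[" + entry.getLevel() + "] " + entry.getMessage());

        public LogReaderTracker(BundleContext context) {
            super(context, LogReaderService.class, null);
        }

        @Override
        public LogReaderService addingService(ServiceReference<LogReaderService> ref) {
            LogReaderService reader = super.addingService(ref);
            reader.addLogListener(listener);    // from now on, entries reach the console
            return reader;
        }

        @Override
        public void removedService(ServiceReference<LogReaderService> ref, LogReaderService reader) {
            reader.removeLogListener(listener);
            super.removedService(ref, reader);
        }
    }

Opening the tracker from the bundle activator with new LogReaderTracker(context).open() is all that is left to do.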
Since my current task is about migrating a big legacy application to OSGi, it probably makes more sense to go with Pax Logging, since log4j is already used in hundreds of classes that we probably won't change to the LogService.
The output of Pax Logging can be adjusted by setting the property org.ops4j.pax.logging.DefaultServiceLog.level to INFO.

Is it possible for RoleEntryPoint.OnStart() to be run twice before the host machine is cleaned up?

I plan to insert some initialization code into the OnStart() method of my class derived from RoleEntryPoint. This code will make some permanent changes to the host machine, so if it runs a second time on the same machine it will have to detect that those changes are already there and react appropriately, which will require some extra code on my part.
Is it possible for OnStart() to run a second time before the host machine is cleaned up? Do I need this code to be able to run a second time on the same machine?
Is it possible OnStart() is run for the second time before the host machine is cleared?
Not sure how to interpret that.
As far as permanent changes go: any installed software, registry changes, and other modifications should be repeated on every boot. If you're writing files to local (non-durable) storage, you have a good chance of seeing those files the next time you boot, but there's no guarantee. If you are storing something in Windows Azure Storage (blobs, tables, queues) or SQL Azure, then your storage changes will persist through a reboot.
Even if you were guaranteed that local changes would persist through a reboot, these changes wouldn't be seen on additional instances if you scaled out to more VMs.
I think the official answer is that the role instance will not run its job more than once in each boot cycle.
However, I've seen a few MSDN articles that recommend making startup tasks idempotent, e.g. http://msdn.microsoft.com/en-us/library/hh127476.aspx, so it's probably best to add some simple checks to your code that anticipate multiple executions; see the sketch below.
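For illustration, a minimal sketch of such a check, with a hypothetical marker-file path and helper method; the idea is that the expensive, permanent changes run only once per machine, however many times the startup code is invoked:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    // Idempotent startup task: a marker file records that the machine-level
    // changes were already applied, so a second run becomes a no-op.
    public class IdempotentStartup {

        private static final Path MARKER = Paths.get("C:/startup/.configured");

        public static void main(String[] args) throws IOException {
            if (Files.exists(MARKER)) {
                System.out.println("Host already configured; nothing to do.");
                return;                      // safe to run twice on the same machine
            }
            applyPermanentChanges();         // install software, edit registry, etc.
            Files.createDirectories(MARKER.getParent());
            Files.createFile(MARKER);        // only mark success after the work is done
        }

        private static void applyPermanentChanges() {
            // the actual machine modifications would go here
        }
    }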

"Replay" the steps needed to recreate an error

I am going to create a typical business application that will be used by a few hundred consultants. Normally, the consultants would be presented with an error message with a standard text. As the application will be a complicated one, with lots of changes being made to it constantly, I would like the following:
When an error message is presented, the user has the option to "send" the error message to the developers. The developers should be able to open the incoming file in, for example, Eclipse and debug the steps of the last 10 minutes of work step by step (one line at a time if they want). Everything should be transparent, meaning that they should, for example, be able to see the return values of calls to the database.
Are there any solutions that offer such functionality today? My preferred languages are Python and Java. I know that there will be a huge performance hit because of such functionality, but that is acceptable, as this kind of software is not performance sensitive.
It would be VERY nice if the database also kept a chronology, so that one could query the database for the values that existed at the exact time a specific line of code was run in the application, leading up to the bug.
You should try to use logging, e.g. commit logs from the DB, and log the user interactions with the application; if it is a web application, you can start with the log files from the web server. Make sure that the log files include all submitted data, such as the complete GET URL with parameters and the POST entity body. You can configure the web server to generate such logs when necessary.
Then you build a test client that can parse the log files and re-create all the user interaction that caused the problem to appear; see the sketch below. If you suspect race conditions, you should log with high precision (ms resolution) and make sure that the test client can run through the same sequences over and over again to stress those critical parts.
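A minimal sketch of such a test client, assuming the simplest possible log format of one GET URL per line in a file called access.log (both the file name and the format are assumptions about your setup):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    // Replays logged GET requests in their original order against the app,
    // re-creating the interaction that led up to the error.
    public class LogReplayClient {

        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            for (String url : Files.readAllLines(Paths.get("access.log"))) {
                HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
                HttpResponse<String> response =
                        client.send(request, HttpResponse.BodyHandlers.ofString());
                System.out.println(response.statusCode() + " " + url);
            }
        }
    }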
Replay (as your title suggests) is the best way to reproduce an error: just collect all the data needed to recreate the input that generated a specific state or situation. Do not focus on internal structures and return values. When it comes to hunting down an error or a bug, you should not work in forensic mode, i.e. trying to analyse the cause of the crash by examining the wreck; you should crash the plane over and over again, adding more logging (or using a debugger) until you know what goes wrong.
