I'm trying to refactor some Apache Camel routes and optimize them as much as I can, without changing their general behavior. One thing I've noticed is that most of these routes use the instruction parallelProcessing().
Is there any way I can change this to threads() without changing the behavior of the route?
I'd like to do that in order to limit the thread pool size. I've been told that the first instruction should be equivalent to threads(30) or something like that. Is this true? I haven't found anything about this on Google.
I'm using Java DSL.
Thank you for the help.
You can use a thread pool profile and create a custom thread pool.
Read more about this in the Camel docs: http://camel.apache.org/threading-model.html
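For illustration, here is a minimal Java DSL sketch of that approach, using the Camel 2.x method names; the profile id, pool sizes, and endpoint URIs are arbitrary choices, not anything your routes require:

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.builder.ThreadPoolProfileBuilder;
import org.apache.camel.spi.ThreadPoolProfile;

public class LimitedParallelRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // Custom profile capping the pool; the sizes here are examples.
        // Without this, parallelProcessing() uses Camel's default profile.
        ThreadPoolProfile profile = new ThreadPoolProfileBuilder("limitedPool")
                .poolSize(10)
                .maxPoolSize(30)
                .maxQueueSize(1000)
                .build();
        getContext().getExecutorServiceManager().registerThreadPoolProfile(profile);

        // The split still processes parts in parallel, but on the bounded
        // pool instead of the default one, so behavior is preserved.
        from("direct:start")
            .split(body())
                .parallelProcessing()
                .executorServiceRef("limitedPool")
            .end()
            .to("mock:result");
    }
}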
I'm facing an issue and was wondering if there is some library/framework or something to help me out.
Basically, I have a method in an API that creates an object for me, but the problem is that the object is not returned right away; it is created later.
All I get from the method call is a GUID, and I have to manually check in the future whether my object has been created and, if it hasn't, try again.
So my wish is to somehow automate this. My thoughts were to use jobs or message queues.
Any suggestions are really appreciated. The frameworks I'm allowed to use are NestJS or Spring Boot.
It sounds like you're looking to set up a dynamic cron job after your API kicks off the event, and then have those cron jobs possibly create more cron jobs or send out notifications. I'm not sure what the Spring Boot alternative would be, but a cron job is definitely what it sounds like you need (at least to me).
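If you go the Spring Boot route, a minimal sketch of that idea with Spring 5's TaskScheduler could look like the following. ThirdPartyClient, its isCreated method, and the 30-second interval are all assumptions, and it relies on a TaskScheduler bean (e.g. a ThreadPoolTaskScheduler) being configured:

import java.time.Instant;
import org.springframework.scheduling.TaskScheduler;
import org.springframework.stereotype.Component;

@Component
public class ObjectPoller {

    private final TaskScheduler scheduler;  // assumes a TaskScheduler bean exists
    private final ThirdPartyClient client;  // hypothetical client for the 3rd-party API

    public ObjectPoller(TaskScheduler scheduler, ThirdPartyClient client) {
        this.scheduler = scheduler;
        this.client = client;
    }

    // Called right after the API hands back the GUID.
    public void pollUntilCreated(String guid) {
        scheduler.schedule(() -> {
            if (client.isCreated(guid)) {    // hypothetical "is it ready yet?" check
                // object is ready: fetch it, notify someone, etc.
            } else {
                pollUntilCreated(guid);      // not ready: schedule the next check
            }
        }, Instant.now().plusSeconds(30));
    }
}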
In Go, how do you manage to write logs to multiple files based on the package name?
For example, in my current app I am collecting hardware stats from different packages, called netapp, ibm, etc., but under the same application. I would like to write the logs from those packages to separate files like /var/log/myapp/netapp.log and /var/log/myapp/ibm.log.
Any pointer or clue would be very helpful.
Thanks, James
One approach you could take is to implement the Observer pattern. It's a great approach when you need to make several things happen with the same input/event; in your case, logging the same input to different logs. You can find more information here.
In the situation you described, and following this example, you can do the following things:
Your different logging implementations (each with a different destination file) can implement the Observer interface, by putting the logging code for each implementation in its OnNotify method.
Create an instance of eventNotifier and register all your logging implementations with its Register method. Something like:
notifier := eventNotifier{
    observers: map[Observer]struct{}{},
}
notifier.Register(netAppLogger)
notifier.Register(ibmLogger)
Use eventNotifier.Notify whenever and wherever you need to do logging, and it will log through all registered logging implementations. A full sketch is below.
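Putting it together, here is a minimal, self-contained sketch of what that could look like. The Event type, fileLogger, and the field names are my own assumptions, not a specific library:

package main

import (
    "fmt"
    "log"
    "os"
)

// Event carries whatever you want logged; its shape is an assumption.
type Event struct {
    Message string
}

// Observer is implemented by every logging destination.
type Observer interface {
    OnNotify(Event)
}

// fileLogger writes events to a single file, e.g. /var/log/myapp/netapp.log.
type fileLogger struct {
    logger *log.Logger
}

func newFileLogger(path string) (*fileLogger, error) {
    f, err := os.OpenFile(path, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o644)
    if err != nil {
        return nil, err
    }
    return &fileLogger{logger: log.New(f, "", log.LstdFlags)}, nil
}

func (fl *fileLogger) OnNotify(e Event) {
    fl.logger.Println(e.Message)
}

// eventNotifier fans a single event out to all registered observers.
type eventNotifier struct {
    observers map[Observer]struct{}
}

func (n *eventNotifier) Register(o Observer) {
    n.observers[o] = struct{}{}
}

func (n *eventNotifier) Notify(e Event) {
    for o := range n.observers {
        o.OnNotify(e)
    }
}

func main() {
    netAppLogger, err := newFileLogger("/var/log/myapp/netapp.log")
    if err != nil {
        fmt.Println(err)
        return
    }
    notifier := eventNotifier{observers: map[Observer]struct{}{}}
    notifier.Register(netAppLogger) // register ibmLogger etc. the same way
    notifier.Notify(Event{Message: "collected NetApp stats"})
}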
In OSGi there is a separation between the logging frontend and the actual output.
So using the LogService does not mean that anything is written to the console, for example. That is what a LogReaderService is responsible for.
In my current runtime I am adding org.apache.felix.log, which brings a LogReaderService implementation that should take care of the output. But I still don't see anything on the console, despite a lot of other output from other bundles.
In the next step I created my own LogListener, which should be called by a LogReaderService. I just used the code from this article and debugged the Activator to verify that the listener is added. Still no output.
Last, I checked the Felix properties and set felix.log.level=3 (Info), but again, no output.
What puzzled me even more is that I could still see a lot of DEBUG information despite setting the level to Info:
16:47:46.311 [qtp1593165620-27] DEBUG org.eclipse.jetty.http.HttpParser
It seems to me that there are different logging strategies in place, which use different configuration properties. For example, after I added pax-logging-service (which uses a classic logging approach) to my runtime I could see the output, but for now I would like to stick with Felix logging.
Can someone please explain how to disable the Blueprint debug level, which I guess is causing the current output, and enable simple Felix logging via LogService? There must be a standard console implementation even though it's not specified in the spec.
Thanks to Christian Schneider and Balazs Zsoldos: both comments were very helpful.
To answer the question: I needed to provide a ServiceTrackerCustomizer, as shown in Balazs's example here. Since I already had a ConsoleLogListener, it was enough to register the listener with the TrackerCustomizer.
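For anyone landing here, this is roughly what that looks like; a minimal sketch, assuming a listener that just prints each LogEntry to the console:

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;
import org.osgi.service.log.LogEntry;
import org.osgi.service.log.LogListener;
import org.osgi.service.log.LogReaderService;
import org.osgi.util.tracker.ServiceTracker;
import org.osgi.util.tracker.ServiceTrackerCustomizer;

public class Activator implements BundleActivator {

    private ServiceTracker<LogReaderService, LogReaderService> tracker;

    // Stand-in for the ConsoleLogListener mentioned above.
    private final LogListener listener = (LogEntry entry) ->
            System.out.println(entry.getLevel() + ": " + entry.getMessage());

    @Override
    public void start(final BundleContext context) {
        tracker = new ServiceTracker<>(context, LogReaderService.class,
                new ServiceTrackerCustomizer<LogReaderService, LogReaderService>() {

                    @Override
                    public LogReaderService addingService(
                            ServiceReference<LogReaderService> ref) {
                        LogReaderService reader = context.getService(ref);
                        reader.addLogListener(listener); // hook the listener in
                        return reader;
                    }

                    @Override
                    public void modifiedService(
                            ServiceReference<LogReaderService> ref,
                            LogReaderService reader) {
                        // nothing to do
                    }

                    @Override
                    public void removedService(
                            ServiceReference<LogReaderService> ref,
                            LogReaderService reader) {
                        reader.removeLogListener(listener);
                        context.ungetService(ref);
                    }
                });
        tracker.open();
    }

    @Override
    public void stop(BundleContext context) {
        tracker.close();
    }
}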
Since my current task is about migrating a big legacy application in which I would like to introduce OSGi, it probably makes more sense to go with Pax Logging, since log4j is already used in hundreds of classes which we probably won't change to LogService.
The output of Pax Logging can be adjusted by setting the property org.ops4j.pax.logging.DefaultServiceLog.level to INFO.
Here's my problem: I need to call multiple 3rd-party methods inside an ApiController. The signature for those methods is Task DoSomethingAsync(SomeClass someData, SomeOtherClass moreData). I want those calls to continue running in the background, after the ApiController has sent the data back to the client. When DoSomethingAsync completes, I want to do some logging and maybe save some data to the file system. How can I do that? I'd prefer to use the async/await syntax.
Great news, there is a new solution in .NET 4.5.2 called the QueueBackgroundWorkItem API. It's really simple to use:
HostingEnvironment.QueueBackgroundWorkItem(ct => DoSomething(a, b, c));
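There is also an overload that takes a Func<CancellationToken, Task>, so the async/await requirement from the question fits naturally. A rough sketch; ThirdParty.DoSomethingAsync and LogCompletionAsync are stand-ins for your own calls:

using System.Threading;
using System.Threading.Tasks;
using System.Web.Hosting;

public static class BackgroundWork
{
    // Queues the 3rd-party call plus the follow-up logging as one work item.
    // ASP.NET will try to delay AppDomain shutdown while it runs.
    public static void Enqueue(SomeClass someData, SomeOtherClass moreData)
    {
        HostingEnvironment.QueueBackgroundWorkItem(async ct =>
        {
            await ThirdParty.DoSomethingAsync(someData, moreData); // the 3rd-party method
            await LogCompletionAsync(someData, ct);                // hypothetical follow-up
        });
    }

    private static Task LogCompletionAsync(SomeClass data, CancellationToken ct)
    {
        // write to a log / the file system here; stubbed out in this sketch
        return Task.FromResult(0);
    }
}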
Here's an article that describes it in detail.
https://blogs.msdn.microsoft.com/webdev/2014/06/04/queuebackgroundworkitem-to-reliably-schedule-and-run-background-processes-in-asp-net/
And here's another article that mentions a few other approaches not covered in this thread.
http://www.hanselman.com/blog/HowToRunBackgroundTasksInASPNET.aspx
You almost never want to do this. It is almost always a big mistake.
ASP.NET (and most other servers) work on the assumption that it's safe to tear down your service once all requests have completed. So you have no guarantee that your logging will be done, or that your data will be written to disk. Particularly with the disk writes, it's entirely possible that your writes will be corrupted.
That said, if you are absolutely sure that you want to implement this extremely dangerous design, you can use the BackgroundTaskManager from my blog.
Update: I've written a blog series that goes into detail on a proper solution for request-extrinsic code. In summary, what you really want to do is move the request-extrinsic code out of ASP.NET. Introduce a durable queue and an independent processor; the ASP.NET controller action will place a request onto the queue, and the independent processor will read requests and execute them. This "processor" can be an Azure Function/WebJob, Win32 Service, etc.
Stephen described why starting essentially long-running fire-and-forget tasks inside an ApiController is a bad idea.
Perhaps you should create a separate service to execute those fire-and-forget tasks. That service could be a different ApiController, a worker behind a queue, anything that can be hosted on its own and have an independent lifetime.
This would make management of the different task lifetimes much easier and separate the concerns of the long-running tasks from the ApiController's core responsibilities.
As pointed out by others, it is not recommended. However, where there is a need there is a way, so take a look at IRegisteredObject. A bare-bones sketch follows.
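This is a minimal sketch of that approach, so the host at least knows about the work before tearing down the AppDomain; the class shape and the 30-second timeout are arbitrary choices:

using System;
using System.Threading;
using System.Web.Hosting;

public class BackgroundTask : IRegisteredObject
{
    private readonly ManualResetEvent _done = new ManualResetEvent(false);

    public void Run(Action work)
    {
        // Tell ASP.NET this object wants to be notified before shutdown.
        HostingEnvironment.RegisterObject(this);
        ThreadPool.QueueUserWorkItem(_ =>
        {
            try { work(); }
            finally
            {
                _done.Set();
                HostingEnvironment.UnregisterObject(this);
            }
        });
    }

    // Called by ASP.NET when the app domain is shutting down.
    public void Stop(bool immediate)
    {
        // Give the work a chance to finish; 30 s is an arbitrary cap.
        _done.WaitOne(immediate ? 0 : 30000);
        HostingEnvironment.UnregisterObject(this);
    }
}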
See also
http://haacked.com/archive/2011/10/16/the-dangers-of-implementing-recurring-background-tasks-in-asp-net.aspx/
Though the question is several years old, the best possible solution now is to use SignalR in this case.
https://github.com/Myrmex/signalr-notify-progress
This is intended to be a lightweight, generic solution, although the problem at hand is an IIS CGI application that needs to log a timeline of events (at second resolution) to troubleshoot a situation where a later request ends up in the MySQL database BEFORE an earlier request!
So it boils down to logging debug statements from multiple processes into a single text file.
I could write a service that manages a queue, as suggested in this thread:
Issue writing to single file in Web service in .NET
but deploying the service on each machine is a pain.
Or I could use a global mutex, but this would require each instance to open and close the file for each write.
Or I could use a database, which would handle this for me, but it doesn't make sense to use a database like MySQL to troubleshoot a timeline issue with itself. SQLite is another possibility, but this thread
http://www.perlmonks.org/?node_id=672403
suggests that it is not a good choice either.
I am really looking for a simple approach, something as blunt as writing to individual files for each process and consolidating them occasionally with a scheduled app. I do not want to over-engineer this, nor spend a week implementing it. It is only needed occasionally.
Suggestions?
Try the simplest solution first - each write to the log opens and closes the file. If you experience problems with this, which you probably won't, look for another solution.
You can use file locking. Lock the file for writing, write the message, unlock.
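In C#, the open-append-close per write could look like this minimal sketch. The exclusive open itself acts as the lock, with a short retry when another process happens to hold the file; the path, retry count, and sleep interval are arbitrary:

using System;
using System.Diagnostics;
using System.IO;
using System.Threading;

public static class SimpleLog
{
    private const string LogPath = @"C:\logs\timeline.log"; // arbitrary path

    public static void Write(string message)
    {
        for (int attempt = 0; attempt < 5; attempt++)
        {
            try
            {
                // FileShare.None = exclusive: this is the "lock" while we write.
                using (var stream = new FileStream(LogPath, FileMode.Append,
                        FileAccess.Write, FileShare.None))
                using (var writer = new StreamWriter(stream))
                {
                    int pid = Process.GetCurrentProcess().Id;
                    writer.WriteLine("{0:O} [{1}] {2}", DateTime.UtcNow, pid, message);
                }
                return; // success
            }
            catch (IOException)
            {
                Thread.Sleep(10); // another process holds the file; back off, retry
            }
        }
    }
}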
My suggestion, to preserve performance, is to think about asynchronous logging. Why not send your log data over UDP to a service listening on a port, which then writes it to the log file?
I would also suggest some kind of central logger that can be called by each process in an asynchronous way. Whether the communication is UDP or RPC or whatever else is an implementation detail.
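A minimal sketch of such a central listener in C#; the port and path are arbitrary choices, and the idea is that one process owns the file while everyone else just fires datagrams at it:

using System.IO;
using System.Net;
using System.Net.Sockets;
using System.Text;

class UdpLogListener
{
    static void Main()
    {
        using (var udp = new UdpClient(51400)) // arbitrary port
        using (var writer = new StreamWriter(@"C:\logs\timeline.log", true))
        {
            writer.AutoFlush = true; // flush each line so nothing is lost
            var remote = new IPEndPoint(IPAddress.Any, 0);
            while (true)
            {
                byte[] datagram = udp.Receive(ref remote); // blocks for next message
                writer.WriteLine(Encoding.UTF8.GetString(datagram));
            }
        }
    }
}

// A client logs with a single fire-and-forget call, e.g.:
//   byte[] b = Encoding.UTF8.GetBytes("later request arrived");
//   new UdpClient().Send(b, b.Length, "localhost", 51400);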
Even though it's an old post, has anyone got an idea why not to use the following concept:
Creating/opening the file with a share mode of FILE_SHARE_WRITE.
Having a named global mutex, and opening it.
Whenever a file write is desired, locking the mutex first, then writing to the file.
Any input?
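For what it's worth, a .NET sketch of exactly that concept; the mutex name and path are arbitrary, and the Seek is needed because another process may have appended since our last write:

using System;
using System.IO;
using System.Text;
using System.Threading;

class SharedFileLog
{
    // Named "Global\" mutex so all processes on the machine serialize on it.
    private static readonly Mutex LogMutex =
        new Mutex(false, @"Global\MyAppLogMutex");

    // FileShare.ReadWrite corresponds to FILE_SHARE_READ | FILE_SHARE_WRITE,
    // so every process can keep the file open at the same time.
    private static readonly FileStream LogStream = new FileStream(
        @"C:\logs\timeline.log", FileMode.OpenOrCreate,
        FileAccess.Write, FileShare.ReadWrite);

    public static void Write(string message)
    {
        byte[] bytes = Encoding.UTF8.GetBytes(message + Environment.NewLine);
        LogMutex.WaitOne(); // take the cross-process lock
        try
        {
            LogStream.Seek(0, SeekOrigin.End); // another process may have appended
            LogStream.Write(bytes, 0, bytes.Length);
            LogStream.Flush();
        }
        finally
        {
            LogMutex.ReleaseMutex();
        }
    }
}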