I'm new to OSGi and working on a project that runs on WebSphere.
I have a simple scheduler that uses java.util.concurrent.ScheduledExecutorService, like so:
    private ScheduledExecutorService scheduler;
    ...
    scheduler = Executors.newScheduledThreadPool(corePoolSize);
Since my application runs inside a container (WebSphere), I thought it would be better to let the container manage the threads, so I wanted to use:
    scheduler = Executors.newScheduledThreadPool(corePoolSize, threadFactory);
where threadFactory would be injected through the Blueprint from the container.
I've looked around and could not find an example of how it can be done.
So my question is, how can it be done and is it worth the effort at all?
I have found a very useful resource regarding my question. According to this:
http://www.ibm.com/developerworks/websphere/techjournal/0609_alcott/0609_alcott.html#spring-4
Other packages, such as quartz and the JDK Timer, start unmanaged
threads and should be avoided.
The solution is here:
http://www.ibm.com/developerworks/websphere/techjournal/0606_johnson/0606_johnson.html#sec5
Sample code is available: basically, a custom ThreadFactory is implemented using the WebSphere WorkManager, and then all that is left to do is create the ExecutorService with the custom ThreadFactory.
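For illustration, here is a minimal sketch of that idea, assuming the commonj WorkManager API; the class name and the wiring are made up, and the article's actual code differs in its details:

    import java.util.concurrent.RejectedExecutionException;
    import java.util.concurrent.ThreadFactory;

    import commonj.work.Work;
    import commonj.work.WorkException;
    import commonj.work.WorkManager;

    // Sketch: each pool worker runs as a commonj Work item, so the thread is
    // container-managed. The returned Thread is only a thin trigger: its
    // start() schedules the work instead of spawning an unmanaged thread.
    public class WorkManagerThreadFactory implements ThreadFactory {

        private final WorkManager workManager; // e.g. injected via Blueprint

        public WorkManagerThreadFactory(WorkManager workManager) {
            this.workManager = workManager;
        }

        @Override
        public Thread newThread(final Runnable runnable) {
            return new Thread(runnable) {
                @Override
                public void start() {
                    try {
                        workManager.schedule(new Work() {
                            public void run() { runnable.run(); }
                            public boolean isDaemon() { return true; } // long-running worker
                            public void release() { }
                        });
                    } catch (WorkException e) {
                        throw new RejectedExecutionException(e);
                    }
                }
            };
        }
    }

The scheduler from the question would then be created as scheduler = Executors.newScheduledThreadPool(corePoolSize, new WorkManagerThreadFactory(workManager)). Be aware that this bends the Thread contract (isAlive() and interrupt() will not reflect the real worker thread), so treat it as a starting point rather than a definitive implementation.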
I am not sure where the OSGi tag comes in for this particular case, but there are several links off of this question that you will find useful. I would not use the WorkManager stuff in the article cited in your own answer, as it is very IBM-specific, and there is a better alternative in the commonj API; for that matter, there is a Timer service as part of that API that may take care of your needs altogether.
If you use Spring, then you can code against its integration for task execution and scheduling, which supports the JDK and commonj implementations transparently (at compile time).
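As a rough sketch of what coding against that abstraction looks like (the class name here is invented), the caller only sees org.springframework.core.task.TaskExecutor, and the wiring decides whether a plain JDK pool or a commonj WorkManager sits behind it:

    import org.springframework.core.task.TaskExecutor;

    // Sketch: depends only on Spring's TaskExecutor abstraction, so the
    // concrete implementation (JDK thread pool, commonj WorkManager, ...)
    // stays a deployment-time wiring decision.
    public class ReportJobs {

        private final TaskExecutor taskExecutor;

        public ReportJobs(TaskExecutor taskExecutor) {
            this.taskExecutor = taskExecutor;
        }

        public void submit(Runnable job) {
            taskExecutor.execute(job); // managed or unmanaged thread, per configuration
        }
    }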
Spring Boot 2.1.5
Project Reactor 3.2.9
I am setting up a bunch of reactive REST APIs using the above-mentioned frameworks, and I am running into an annoying problem with MDC (mapped diagnostic context). My applications are in Java.
MDC relies on thread locals to store the current query's mapped context to put in the logs. That system, obviously, is not perfect and contradicts the reactive pattern, since the different steps of your execution will be executed on different threads.
I have run into the same problem with the Play Reactive framework but found a workaround there by copying the mapped context transparently from one actor to another.
For spring and reactor, I could not find a satisfying solution yet.
Some random examples found on the internet:
First - It works but forces you to use a bunch of utility methods
Same thing
Second - it tries to copy the context during the onNext publisher event but seems to lose some features along the way; the signal context, for example, is lost.
I am in need of a proper solution to deal with this:
A library which would make the link between MDC and reactor?
A way to tweak reactor/spring to achieve it transparently?
Any advice?
"I could not find a satisfying solution yet."
Working with the Context is the only solution for the moment, since, as you said, thread locals go against everything that reactive programming stands for. Using a thread local as a storage point during a request is a resource-heavy way of solving things and, in my opinion, poor design. Unless the logging frameworks themselves come up with a better solution to the problem, we developers must pass the data through the Context to accommodate the logging frameworks' blocking nature.
Reactive programming is a paradigm shift in the programming world. Other things, like database drivers that use thread locals to roll back transactions, are also in big trouble. The JDBC driver spec is blocking by nature, and at the moment there have been attempts by Spring and the R2DBC project to define a new database driver spec that is inherently non-blocking. This means that all vendors must rewrite their database driver implementations from scratch.
Reactive programming is so new that lots of libraries need to rewrite entire codebases. The logging frameworks as we know them need to be rewritten from the ground up, which is a huge task. And the Context is actually something that should not even be in reactive programming; it was implemented just to accommodate MDC problems.
Passing data from thread to thread actually incurs a lot of overhead.
So what can we do?
push on the logging frameworks, and/or help them rewrite their codebases
Accept that there is no "tweak" that will magically fix this
use the Context in the way suggested in the blog posts (see the sketch below)
Project reactor context
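To make the Context approach concrete, here is a minimal sketch of the doOnEach pattern from those blog posts, written against Reactor 3.2; the MDC key and the example pipeline are illustrative:

    import java.util.Optional;
    import java.util.function.Consumer;

    import org.slf4j.MDC;
    import reactor.core.publisher.Mono;
    import reactor.core.publisher.Signal;
    import reactor.util.context.Context;

    public final class MdcLogging {

        private static final String REQUEST_ID = "requestId"; // illustrative MDC key

        // Wraps a log statement so the MDC entry is populated from the
        // subscriber Context just before logging and removed right after.
        public static <T> Consumer<Signal<T>> logOnNext(Consumer<T> log) {
            return signal -> {
                if (!signal.isOnNext()) {
                    return;
                }
                Optional<String> requestId = signal.getContext().getOrEmpty(REQUEST_ID);
                if (requestId.isPresent()) {
                    try (MDC.MDCCloseable ignored = MDC.putCloseable(REQUEST_ID, requestId.get())) {
                        log.accept(signal.get());
                    }
                } else {
                    log.accept(signal.get());
                }
            };
        }

        // Usage: the Context entry travels with the subscription, not a thread.
        public static Mono<String> example(Mono<String> source) {
            return source
                    .doOnEach(logOnNext(value -> System.out.println("handling " + value)))
                    .subscriberContext(Context.of(REQUEST_ID, "abc-123"));
        }
    }

This is still the utility-method style the linked posts use; there is no fully transparent way around it until the logging frameworks catch up.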
I have a use case where I would like to build a common interface or service which can update the entities of an application. An example case is shown below:
Now every application has to handle the update functionality for its entities. Rather than implementing update functionality in n application modules, I would like to build a common interface or service in Spring Boot.
The service will be like below:
My question is how to design a service/interface which can be used for the above scenario. Is there any API or tool which can help me achieve this? I don't want to write update code in every application module.
Thanks in advance.
Last year I was thinking about a similar concept to yours, but in the context of the Apache Camel framework. I didn't have enough time and motivation to pursue it, but your post encouraged me to give it a try - perhaps mostly because I found your concept very similar to mine.
This is how I see it:
So basically I considered an environment with an application that might use N modules/plugins that enrich the application's features, e.g. a processing feature. The application uses a module/plugin when it is available on the classpath - considering a Java background. When the module is not available, the application works without its functionality as if it was never there. Moreover, I wanted to implement it purely using framework capabilities - in this case Spring - without ugly hacks/ifs in the source code.
Three solutions come to my mind:
- using request/response interceptors and modifying the entities there (@ControllerAdvice)
- using Spring AOP to intercept method invocations in *Service proxy classes
- using the Apache Camel framework to create routes for processing entities
Here's a brief overview of the POC that I implemented:
I chose Spring AOP because I had never used it on my own before.
- a simple EmployeeService that simulates saving an employee - an EmployeeEntity
- 3 processors that simulate Processing Modules which could be located outside the application; these three modules each change properties of the EmployeeEntity in some way
- one Aspect that intercepts the save method in EmployeeService and handles the invocation of the available processors
In the next steps I'd like to externalize these Processors so they are some kind of pluggable jar files.
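For illustration, here is a condensed sketch of the aspect and the processor contract, assuming Spring 5.1+ for ObjectProvider.stream(); package and type names are made up, and the repository linked below has the full POC:

    import java.util.List;
    import java.util.stream.Collectors;

    import org.aspectj.lang.annotation.Aspect;
    import org.aspectj.lang.annotation.Before;
    import org.springframework.beans.factory.ObjectProvider;
    import org.springframework.stereotype.Component;

    // The contract each pluggable Processing Module implements.
    interface EntityProcessor {
        void process(EmployeeEntity employee);
    }

    // Stub of the entity being enriched.
    class EmployeeEntity {
        String name;
    }

    // Intercepts EmployeeService.save(..) and lets every processor found on
    // the classpath enrich the entity before it is saved.
    @Aspect
    @Component
    class ProcessingAspect {

        private final List<EntityProcessor> processors;

        ProcessingAspect(ObjectProvider<EntityProcessor> provider) {
            // empty when no processor module is deployed: the app works without them
            this.processors = provider.stream().collect(Collectors.toList());
        }

        @Before("execution(* com.example.EmployeeService.save(..)) && args(employee)")
        void beforeSave(EmployeeEntity employee) {
            processors.forEach(p -> p.process(employee));
        }
    }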
I'm wondering if this is something that you wanted to achieve?
link to Spring AOP introduction here: https://docs.spring.io/spring/docs/5.0.5.RELEASE/spring-framework-reference/core.html#aop
link to repository of mentioned POC: https://github.com/bkpawlowski/spring-aop
I'm trying to create a service such that, once it is created, it only allows itself to be held by a single consumer/bundle at any one time. (If this is against the philosophy/specification of OSGi then that obviously provides a quick answer, but a reference to the OSGi specs stating this would be appreciated.)
To implement such a requirement I implemented the ServiceFactory interface, thinking that whenever there was a requirement for the service, the getService(Bundle bundle, ServiceRegistration<S> registration) method would be called, and that would be where I could determine whether the Bundle was a new consumer or not and act accordingly.
It appears that this is not the case in the scenario I have tested this in.
Using Apache Karaf and instantiating a consumer of the service via Blueprint, it would seem that the getService method is never called. Instead, the consumer's binding method for the service is called directly, injecting a proxy service object.
While I understand that Blueprint uses proxies, surely there is still an obligation to fulfil the ServiceFactory contract, even if it's a proxy object consuming the service?
Why do I want to do this?
I am attempting to wrap JavaFX and the Stage class, and because JavaFX isn't OSGi-friendly I am attempting to coordinate access to the Stage object. I'm aware that there are frameworks such as Drombler, but a brief look at them made me think they don't suit my use case. They appear too restrictive for my needs, e.g. I don't necessarily wish to lay out an application in the manner Drombler uses.
It depends what you mean by a consumer. ServiceFactory does give you the chance to create a separate instance of a service for each bundle that calls getService on your service. It's not clear from your question, but I suspect you weren't seeing getService invoked multiple times because you were fetching the service from the same consumer bundle; in that case, ServiceFactory simply returns the same object repeatedly.
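A minimal sketch of that per-bundle behaviour (the service type is invented):

    import org.osgi.framework.Bundle;
    import org.osgi.framework.ServiceFactory;
    import org.osgi.framework.ServiceRegistration;

    // Stub of the service implementation being handed out.
    class StageService {
    }

    // getService is called at most once per consuming bundle; repeated lookups
    // from the same bundle reuse the cached instance, so you will not see
    // further calls when the same bundle fetches the service again.
    public class StageServiceFactory implements ServiceFactory<StageService> {

        @Override
        public StageService getService(Bundle bundle, ServiceRegistration<StageService> registration) {
            System.out.println("first request from " + bundle.getSymbolicName());
            return new StageService();
        }

        @Override
        public void ungetService(Bundle bundle, ServiceRegistration<StageService> registration, StageService service) {
            // invoked once that bundle's use count for the service drops to zero
        }
    }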
As for your general question about restricting access to a single consumer, no that really goes against the OSGi philosophy. I'm sorry I don't have a spec reference for you but the clue is in the name: it's a service that is available to all.
I'm aware that there are frameworks such as Drombler, but a brief look at them made me think they don't suit my use case. They appear too restrictive for my needs, e.g. I don't necessarily wish to lay out an application in the manner Drombler uses.
Please note that the layout of Drombler FX applications is pluggable, so you can provide your own implementation tailored to your needs. This allows you to get the most out of Drombler FX and JavaFX.
While this feature has been available for some time, there is now a new tutorial trail explaining it in more detail.
I want to run a few tasks asynchronously in a web application. My question is which Spring implementation of task executors I should use in a container-managed environment.
I referred to this chapter in the Spring documentation and found a few options.
One option I considered is WorkManagerTaskExecutor. This is very simple and works seamlessly with the IBM WebSphere server which I'm currently using, but it is very specific to IBM WebSphere and Oracle WebLogic servers. I don't want to tie my code to one particular implementation, as in some test and local regions we are using the Jetty container, and this implementation creates problems running the code in Jetty.
Other options like SimpleThreadPoolTaskExecutor do not seem to be the best fit for leveraging thread pooling in a container-managed environment, and I don't want to create new threads myself.
Could you please suggest how I should go about this? Any pointers to a sample implementation would be a great help.
As usual, it depends. If you rely on the container's thread management and want to be able to set thread pools on its admin interface, or if your application is not the only app inside the container, or you use specific features like setting thread pool priorities for EJB or JMS, you should add support for the WorkManagerTaskExecutor and make it configurable. If not, you can use whatever you want, because in the end threads are just threads; since Spring is an IoC container, you can do it. To use the same app everywhere, I wouldn't suggest changing the XML config per app version. Rather:
- use profiles with configuration to set the executor type and, inside your Java config, return the proper bean type. If you use Jetty, you should have a configuration for the thread pool sizes to be able to tune it.
- use Spring Boot-style auto-configuration, which usually relies on classes available on the classpath (@ConditionalOnClass). If your WebLogic- or WebSphere-specific classes are available, or some other container-specific marker such as an environment variable is present, you can create the WorkManagerTaskExecutor.
With both of these you can deploy the same war everywhere. A sketch of the profile-based variant follows below.
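A minimal Java-config sketch of the profile-based variant; the profile names and the JNDI name are illustrative, and WorkManagerTaskExecutor lives in org.springframework.scheduling.commonj:

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.context.annotation.Profile;
    import org.springframework.core.task.TaskExecutor;
    import org.springframework.scheduling.commonj.WorkManagerTaskExecutor;
    import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

    @Configuration
    public class TaskExecutorConfig {

        // On WebSphere/WebLogic: delegate to the container's work manager.
        @Bean
        @Profile("container")
        public TaskExecutor workManagerTaskExecutor() {
            WorkManagerTaskExecutor executor = new WorkManagerTaskExecutor();
            executor.setWorkManagerName("wm/default"); // JNDI name of the work manager
            return executor;
        }

        // On Jetty or locally: a plain Spring-managed pool, tunable via config.
        @Bean
        @Profile("standalone")
        public TaskExecutor threadPoolTaskExecutor() {
            ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
            executor.setCorePoolSize(4);
            executor.setMaxPoolSize(8);
            return executor;
        }
    }

The war stays identical; only the active profile (e.g. -Dspring.profiles.active=container) changes per environment.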
We have been building two suites of applications for the last 10 years using Spring for dependency injection. We also use spring-batch and spring-amqp. We are now looking to move to OSGi so that our monolithic applications can be separated into bundles and we can be more agile. The two suites are web applications and are deployed as two separate war files. We are looking to use Apache Karaf as our OSGi runtime.
Spring-DM is dead and it appears that we are going to have to convert EVERYTHING to use Blueprint for our dependency injection.
My question is: how do we do this incrementally? It will be close to impossible to convert all of this at once. It seems like one bundle should still be able to use Spring DI and have its own application context, as long as we take responsibility for exposing any services we want to the service registry in the bundle activator, but I'm not sure if there is some kind of magic that we would lose, like transaction management.
Any guidance on this would be really appreciated.
You might want to consider making the problem appear even larger and switching to DS instead of Blueprint... To truly take advantage of the OSGi model, DS is far superior to Blueprint in all aspects. In reality, after the first hurdle, you'll make much more progress and your gains will be higher. Though Blueprint made Spring available on OSGi, it never 'got' OSGi.
For strategy, keep your Spring app alive as a single bundle and move things out gradually. I.e. the elephant approach.
The biggest gain that OSGi provides can be summarized as follows:
Make sure modules have service APIs that ONLY handle collaboration. I.e. each service API should be a story/scenario of how the actors work together, not of how they come into existence and are configured.
Let Configuration Admin do the configuration work. I.e. never expose configuration APIs. In OSGi, you register instances, not things that still need to be configured.
Make sure you really understand the OSGi model with services. You might want to take a look at OSGi enRoute, which leverages OSGi to the hilt.
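To give a flavour of the DS model (the PID, property, and type names here are invented): a component is registered as an already-configured instance, with Configuration Admin supplying the properties, so no configuration API leaks into the service contract:

    import java.util.Map;

    import org.osgi.service.component.annotations.Activate;
    import org.osgi.service.component.annotations.Component;
    import org.osgi.service.component.annotations.ConfigurationPolicy;

    // The collaboration contract: what the service does, not how it is configured.
    interface Mailer {
        void send(String to, String body);
    }

    // Published only once Configuration Admin has supplied its configuration;
    // consumers never see a half-configured instance.
    @Component(configurationPid = "com.example.mailer",
               configurationPolicy = ConfigurationPolicy.REQUIRE)
    class SmtpMailer implements Mailer {

        private String smtpHost;

        @Activate
        void activate(Map<String, Object> properties) {
            // values come from Configuration Admin, never from callers
            this.smtpHost = (String) properties.getOrDefault("smtpHost", "localhost");
        }

        @Override
        public void send(String to, String body) {
            // ... open a connection to smtpHost and send
        }
    }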
I propose you take a look at the blueprint-maven-plugin. It allows you to use a subset of CDI and JEE annotations to define injections as well as transactions and persistence. The plugin creates the Blueprint XML at build time, which can then be executed by Karaf. The big advantage is that these annotations are also supported by Spring, so you can transition and, in parallel, keep releasing to production using Spring.
I have a complete example here: Annotation based blueprint and JPA.
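To give a flavour of the annotation style (type names and the persistence unit are illustrative, loosely following that example):

    import javax.inject.Singleton;
    import javax.persistence.Entity;
    import javax.persistence.EntityManager;
    import javax.persistence.Id;
    import javax.persistence.PersistenceContext;
    import javax.transaction.Transactional;

    // Minimal JPA entity stub for the example.
    @Entity
    class Task {
        @Id
        Long id;
    }

    // The plugin turns these annotations into Blueprint XML at build time;
    // the very same annotations also work when the class runs under Spring.
    @Singleton
    @Transactional
    public class TaskServiceImpl {

        @PersistenceContext(unitName = "tasklist") // persistence unit name is illustrative
        private EntityManager em;

        public void addTask(Task task) {
            em.persist(task); // runs inside a container-managed transaction
        }
    }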
Using this plugin, I migrated a medium-sized project while it was being developed and released in parallel. If you need further advice while using the plugin, I can surely help.