What I’m looking to do is to run multiple Clojure environments that start in a pristine state on the same JVM. It has to work in such a way that their namespaces and generated classes don’t clobber each other.
Looking at this question: "osgi - multiple instances of a service", I need clarification on whether running multiple instances of the same service will solve the namespace-clobbering issue.
Yes, apparently you can, if you start the framework with the property org.osgi.framework.bsnversion=multiple.
I have never tried using that, so I don't know whether it would work.
If you want to isolate the ClassLoaders, it would be better to just create a child ClassLoader for each instance.
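As a rough sketch of the child-ClassLoader approach (the jar path is a placeholder, and it assumes the Clojure runtime is not already on the parent classpath, otherwise parent-first delegation would defeat the isolation):

    import java.net.URL;
    import java.net.URLClassLoader;

    public class IsolatedEnvironments {
        public static void main(String[] args) throws Exception {
            // Hypothetical path to the Clojure runtime jar.
            URL[] clojureJar = { new URL("file:/path/to/clojure.jar") };

            // Each environment gets its own child ClassLoader, so the Clojure runtime,
            // its namespaces and its generated classes are loaded separately per environment.
            ClassLoader env1 = new URLClassLoader(clojureJar, IsolatedEnvironments.class.getClassLoader());
            ClassLoader env2 = new URLClassLoader(clojureJar, IsolatedEnvironments.class.getClassLoader());

            // The two Class objects are distinct because they come from different loaders.
            Class<?> clj1 = Class.forName("clojure.java.api.Clojure", true, env1);
            Class<?> clj2 = Class.forName("clojure.java.api.Clojure", true, env2);
            System.out.println(clj1 == clj2); // false
        }
    }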
I'm creating a Liferay 7.1 OSGi bundle which has some external dependencies. To save time, we opted to embed the external JARs in our OSGi bundle. I've managed to create a bnd file which includes all of the Elasticsearch dependencies and puts them on the bundle classpath. I've used the source code from GitHub (https://github.com/liferay/liferay-portal/blob/master/modules/apps/portal-search-elasticsearch6/portal-search-elasticsearch6-impl/build.gradle) and its bnd.bnd file to check what gets imported.
When activating the bundle, an exception is thrown:
The activate method has thrown an exception
java.util.ServiceConfigurationError: org.elasticsearch.common.xcontent.XContentBuilderExtension: Provider org.elasticsearch.common.xcontent.XContentElasticsearchExtension not a subtype
at java.util.ServiceLoader.fail(ServiceLoader.java:239)
at java.util.ServiceLoader.access$300(ServiceLoader.java:185)
at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:376)
at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
at org.elasticsearch.common.xcontent.XContentBuilder.<clinit>(XContentBuilder.java:118)
at org.elasticsearch.common.settings.Setting.arrayToParsableString(Setting.java:1257)
The XContentBuilderExtension is from elasticsearch-x-content-6.5.0.jar; the XContentElasticsearchExtension class is included in elasticsearch-6.5.0.jar. Both are included resources and have been put on the bundle classpath.
The activate method initializes a TransportClient in my other jar, hence the error happens on activation ;).
Edit:
I've noticed that this error does NOT occur when installing the bundle for the first time, or when the portal restarts. It only occurs when I uninstall and reinstall the bundle. (And that is functionality I really want to keep!) Maybe a silly thought, but could it be that there is some 'hanging thread'? That the bundle is not correctly installed, or that the TransportClient is still alive? I'm checking this out. Any hints are welcome!
Edit 2:
I fear this is an incompatibility between SPI and OSGi. I've checked: the High Level REST Client has the same issue (but then with another extension). I'm going to try the Low Level REST Client, which should work since it has minimal dependencies, I'm guessing. I'm still very curious why the incompatibility is there. I'm certainly no expert on OSGi, nor on SPI. (Time to learn new stuff!)
This seems like a case where OSGi uses your bundle to resolve a dependency of another bundle, probably one that used your bundle to resolve a package when the system started.
Look at the symptoms: it does not occur on a fresh boot or on restart, and the error says the class is not a subtype.
When OSGi uses that bundle to resolve a dependency, it keeps a copy around even after you remove it. When the bundle comes back, a package that was previously used by another bundle may still be around, and you can end up with two versions of the same class from different class loaders, meaning they are not the same class and therefore not a subtype.
Expose only what is necessary to minimize the effects of this, and import only what actually needs importing. If you are using the Liferay Gradle configuration to include the jar inside the bundle, stop: it's a terrible way to embed a dependency, as it exposes a lot. If you use the bnd file to include a resource and create an entry for the additional classpath location, do not export anything that is not necessary. If several bundles use one as a dependency, make sure they agree on the version, and if they exchange objects of the problematic class, extra care is required.
PS: you can include attributes when exporting and/or importing in order to be more specific and avoid using packages from the wrong origin.
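As a rough bnd.bnd sketch of keeping the embedded jars private (paths, jar names and the exported API package are placeholders and depend on where your build puts the files):

    # Put the embedded jars on the bundle classpath without exporting their packages.
    Bundle-ClassPath: ., lib/elasticsearch-6.5.0.jar, lib/elasticsearch-x-content-6.5.0.jar
    -includeresource: \
        lib/elasticsearch-6.5.0.jar=elasticsearch-6.5.0.jar,\
        lib/elasticsearch-x-content-6.5.0.jar=elasticsearch-x-content-6.5.0.jar
    # Export only your own API, not the embedded libraries.
    Export-Package: com.example.myservice.api;version="1.0.0"
    # Do not import the embedded packages from elsewhere; take everything else as usual.
    Import-Package: !org.elasticsearch.*, *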
You can have two Elasticsearch connections inside one Java app, and Liferay by default does not expose the connection that it holds.
A way around it is to rebuild the Liferay ES connector. It's not a big deal, because you don't need to change the code, only the OSGi descriptor, to expose more services.
I did it in one POC project and it worked fine. The tricky thing is rebuilding the Liferay jar, but that is explained by Pettry in his Google-like-search blog posts: https://community.liferay.com/blogs/-/blogs/creating-a-google-like-search (it is a series, and it's kind of hard to navigate in the new Liferay blogs, but Google will probably help). Either way, it is all nicely documented here: https://github.com/peerkar/liferay-gsearch
The only thing that then needs to be done is to add org.elasticsearch.* to the export section of the bnd.bnd file. You will then be able to work with the native Elasticsearch API.
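For illustration, the relevant line in the rebuilt connector's bnd.bnd would look roughly like this (the exact package list and versions depend on the connector you rebuild):

    # On top of what the Liferay ES connector already exports:
    Export-Package: org.elasticsearch.*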
For publishing with SSL using the Endpoint, I need to access classes under the com.sun.net.httpserver.* packages.
Using the Eclipse IDE I found a way to use these classes, but after exporting the bundles and running them in another Equinox OSGi installation, the bundle fails to start with the following error:
java.lang.NoClassDefFoundError: com/sun/net/httpserver/HttpsConfigurator
Does anyone have an idea how to solve this issue?
Thanks!
The package you're referring to is part of the JDK. You need to expose it to make it available in OSGi, and you have two options:
The first, and in most cases preferred option, is to expose this package through the system bundle. The OSGi framework has a property that you can set to do this:
org.osgi.framework.system.packages.extra=...
As its value, you provide a comma-separated list of packages that you want to expose on top of the ones already exposed by the framework. In your case that is at least com.sun.net.httpserver, but there might be more packages that you need. Also make sure that the bundle that uses this package actually imports it.
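For example, when launching the framework programmatically, the property can be passed in the configuration map (a sketch; the exact launch mechanism depends on your setup, and with Equinox you could equally put the same line in config.ini):

    import java.util.HashMap;
    import java.util.Map;
    import java.util.ServiceLoader;
    import org.osgi.framework.launch.Framework;
    import org.osgi.framework.launch.FrameworkFactory;

    public class Launcher {
        public static void main(String[] args) throws Exception {
            Map<String, String> config = new HashMap<>();
            // Expose the JDK package through the system bundle so bundles can import it.
            config.put("org.osgi.framework.system.packages.extra", "com.sun.net.httpserver");

            FrameworkFactory factory = ServiceLoader.load(FrameworkFactory.class).iterator().next();
            Framework framework = factory.newFramework(config);
            framework.start();
            // ... install and start your bundles here ...
        }
    }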
The second option is to use a mechanism called boot delegation. It should only be used as a last resort, as it breaks modularity, and if it's not used carefully it might lead to other problems. Again, this is a property that you need to set:
org.osgi.framework.bootdelegation=*
Here, you provide a comma-separated list of packages that should be loaded by the boot class loader. Wildcards are supported (as seen in the example above), but you are encouraged to be as specific as possible, so in your case you could for example use com.sun.* as the value.
I'd like to create a Configuration object in OSGi, but one that won't be persisted, so it won't be there when the framework is restarted. Similar to START_TRANSIENT for bundles.
Some background: I've got an OSGi (Felix) based client-side application, deployed over OBR. The configuration object I'm talking about effectively boots the application. That works fine, but sometimes the content has changed while the framework was stopped. In that case it still boots the application, because OSGi revives all bundles and adds back all configuration. Then I inject the correct configuration, and the application stops and restarts again.
So it does actually work, but the app starts twice, and I can't get access to the framework before it reconstructs its old state.
Any ideas?
As BJ said, there is no standard support for this in the Configuration Admin spec.
However, the Felix implementation supports two features which may help you. First, you can set the felix.cm.dir property, which configures where ConfigAdmin saves its internal state (by default somewhere under the Framework storage directory). You could set this to a location that you control and then simply wipe it every time you start OSGi (you could also wipe out the entire OSGi Framework storage directory on every start... some people do this, but it has wider consequences than what you asked for).
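For example, as a framework or system property at launch (the path is a placeholder):

    felix.cm.dir=/var/tmp/myapp-config-state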
Second, if you need a bit more control, Felix ConfigAdmin supports customising its persistence with a PersistenceManager service. You could probably implement this and return empty/doesn't-exist for the particular pids that you want to control.
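A rough sketch of such a PersistenceManager (the interface is org.apache.felix.cm.PersistenceManager; the pid prefix is a made-up convention, the delegate would be a real persistence manager such as Felix's file-based one, and you would still need to register it and tell ConfigAdmin to use it, which in current Felix versions is done via a named PersistenceManager service and the felix.cm.pm property - check the version you run, and note that the exact behavior of load for missing pids may need adjusting):

    import java.io.IOException;
    import java.util.Dictionary;
    import java.util.Enumeration;
    import org.apache.felix.cm.PersistenceManager;

    // Sketch: delegate to a real PersistenceManager, but pretend transient pids don't exist,
    // so they are never written to disk and are gone after a framework restart.
    public class TransientAwarePersistenceManager implements PersistenceManager {

        private final PersistenceManager delegate;

        public TransientAwarePersistenceManager(PersistenceManager delegate) {
            this.delegate = delegate;
        }

        private boolean isTransient(String pid) {
            return pid.startsWith("com.example.transient."); // made-up naming convention
        }

        public boolean exists(String pid) {
            return !isTransient(pid) && delegate.exists(pid);
        }

        public Dictionary load(String pid) throws IOException {
            if (isTransient(pid)) {
                throw new IOException("No persisted configuration for " + pid);
            }
            return delegate.load(pid);
        }

        public Enumeration getDictionaries() throws IOException {
            return delegate.getDictionaries(); // transient pids were never stored
        }

        public void store(String pid, Dictionary properties) throws IOException {
            if (!isTransient(pid)) {
                delegate.store(pid, properties);
            }
        }

        public void delete(String pid) throws IOException {
            if (!isTransient(pid)) {
                delegate.delete(pid);
            }
        }
    }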
The OSGi Config Admin spec does not support this. Nor do I know of a non-standard means for any of the CM implementations I am familiar with.
OK, what I did in the end was the following:
I created a special, really small 'boot' bundle which I do not provision from OBR; instead, I install it from the classpath.
That bundle controls the configuration, and I use START_TRANSIENT the moment I really want to load that configuration.
Not exactly pretty, but it gets the job done. I do think transient configuration would make sense to have in OSGi.
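For reference, the 'boot' bundle approach boils down to something like this in the code that controls the framework (a sketch; the resource name and location string are hypothetical):

    import java.io.InputStream;
    import org.osgi.framework.Bundle;
    import org.osgi.framework.BundleContext;
    import org.osgi.framework.BundleException;

    public class BootBundleInstaller {
        public static void installAndStart(BundleContext bundleContext) throws BundleException {
            // Install the small boot bundle straight from the launcher's classpath
            // instead of provisioning it through OBR.
            InputStream in = BootBundleInstaller.class.getResourceAsStream("/boot-bundle.jar");
            Bundle boot = bundleContext.installBundle("classpath:boot-bundle", in);

            // Start it transiently, so the framework does not persist the started state.
            boot.start(Bundle.START_TRANSIENT);
        }
    }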
My application obtains class names from a properties file. The classes these names refer to could reside in OSGi bundles that are not known in advance, so in order to instantiate them I first have to find out which bundle each class belongs to. I'm thinking about getting all installed bundles from BundleContext#getBundles, which means I have to obtain a reference to the BundleContext in AbstractUIPlugin#start. But I'm not sure whether holding on to the BundleContext is the right thing to do, since it is supposed to be used only in the start method. So I need advice from the OSGi experts here about alternative ways to get the list of bundles.
Any help would be greatly appreciated.
Regards,
Setya
This is not really how OSGi is intended to be used. If a bundle has something to add to the 'global' context, it should register a service. So each bundle that has something to share can do that in its own start method.
Some other component (DS, ServiceTracker, Blueprint, something like that) can then listen to these events, and act accordingly.
This is really important: if you start manually searching through all bundles, you completely lose the advantages of OSGi.
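A minimal sketch of that pattern (the interface and implementation names here are made up):

    import java.util.Dictionary;
    import java.util.Hashtable;
    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;
    import org.osgi.framework.ServiceRegistration;

    // Hypothetical provider bundle: it shares its implementation by registering a service
    // instead of relying on anyone scanning bundles for class names.
    public class ProviderActivator implements BundleActivator {

        private ServiceRegistration<GreetingService> registration;

        public void start(BundleContext context) {
            Dictionary<String, Object> props = new Hashtable<>();
            props.put("flavor", "formal"); // service property describing this implementation
            registration = context.registerService(
                    GreetingService.class, new FormalGreetingService(), props);
        }

        public void stop(BundleContext context) {
            // Services are unregistered automatically when the bundle stops; explicit here for clarity.
            registration.unregister();
        }
    }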
As written before, you should try to use services to achieve what you want. I guess you have an interface with one or more implementations that should be installable at runtime. If you control the bundles that implement the interface, simply let them register their implementation as a service using an activator or a Blueprint context. You can use service properties to describe each implementation.
The bundles that need an implementation can then look up the services using a ServiceTracker or a service reference in Blueprint, for example as sketched below.
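On the consuming side, a ServiceTracker-based lookup could look roughly like this (again using the hypothetical GreetingService interface from above):

    import org.osgi.framework.BundleContext;
    import org.osgi.util.tracker.ServiceTracker;

    public class Consumer {

        public String greet(BundleContext context) {
            // Track GreetingService implementations, whichever bundle provides them.
            ServiceTracker<GreetingService, GreetingService> tracker =
                    new ServiceTracker<>(context, GreetingService.class, null);
            tracker.open();
            try {
                GreetingService service = tracker.getService(); // null if none is registered
                return service != null ? service.greet("world") : "no greeter available";
            } finally {
                tracker.close();
            }
        }
    }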
If that is not possible, then you can use the bundle context to obtain the running bundles and instantiate the classes, but this is not how OSGi should work. You will run into class-loading problems, as the bundle that tries to instantiate the classes will not have direct access to them.
Your bundle gets control at startup through the bundle activator, or better, through DS. At that time it can register services with the service registry so others can find and use them.
The route you're planning to take (properties that name classes) is evil, since you will undoubtedly run into class-loading hell. Modularity is about hiding your implementation details, and the names of your implementation classes are such details.
Exposing implementation classes in properties files is really bad practice and it loses the advantage of modularity. It does not matter whether another class or a properties file refers to your implementation class; the problem is that the implementation class is exposed.
Unfortunately this model has become so prevalent in our industry that many developers think it is normal :-(
OSGi allows you to share instances typed by interfaces in a way that keeps the implementation class known only inside the module.
We are writing a new set of services and have decided to make them share a common interface... calling it a BaseService. The idea is that whenever anyone wants to develop a new service in our organization, they should just be able to extend and use this BaseService.
We have written a few other classes which also form part of this base jar; it does things like handling transactions and connecting to the database using Hibernate, etc.
Right now all the services that extend the BaseService are part of the same project (Eclipse + Maven), and some of the services depend on each other, but because they are in the same project we don't have a problem with dependencies. However, we expect 40-50 services to be written which would extend BaseService and would also be interdependent.
I am worried that the base project would become huge, and that anyone who wants to use a single service would have to depend on my base jar containing all 50 services.
Is there a way that we can make some projects dynamically dependent on others?
Let's say I have a service A which depends on service B. When I build/compile service A, it should be able to realize that it has a dependency on service B and automatically use the service B jar.
I have heard of OSGi; will it solve my problem, or is there a way I can do it with Maven, or is there a simpler solution?
Sorry about the long post!
Thanks in advance for your suggestions
It doesn't make any sense to "dynamically" manage project dependencies, since build dependencies are by definition not dynamic.
Your question, at least for the moment, seems to be entirely about how to build your code rather than about how to run it. If you are interested in creating a runtime system in which new service implementations can be dynamically added then OSGi may be the solution to look at. An extra advantage here would be that you could enforce the separation of API from implementation, and prevent the implementing services from invalidly depending on parts of your core module that you do not want them to have visibility of.
You could also use OSGi to manage the evolution of your core service API through versioning; for example, how do you communicate that a non-breaking change has been made to the API versus a breaking one?
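For illustration, with OSGi/bnd that versioning is expressed on the package export, and consumers import it with a compatible range (package name and versions below are placeholders):

    Export-Package: com.example.baseservice.api;version="1.2.0"
    # A consumer bundle would then import it with a range, e.g.
    Import-Package: com.example.baseservice.api;version="[1.2,2)"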
I would say there are two options, depending on whether I understand your question correctly. First: you have already defined an interface (in the Java sense) and now you have different implementations of it. The simple solution for Maven would be to have a module called, for example, service-api, which is released and can be used by the others as a dependency. On their side they simply implement the interface; no problem with the dependencies. If you are really talking about OSGi, then you should take a look at maven-tycho.
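In Maven terms, the first option is just a thin API module that every implementation depends on (the coordinates below are placeholders):

    <!-- In each service implementation's pom.xml: depend only on the thin API module -->
    <dependency>
        <groupId>com.example</groupId>
        <artifactId>service-api</artifactId>
        <version>1.0.0</version>
    </dependency>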