How does OSGi bundle update work?

When a bundle is updated (say to fix a bug), what happens to other bundles that are currently using the one being updated?
Say there are two bundles, service and dao. Suppose classes in the service bundle are using classes in the dao bundle when I issue the command to update the dao bundle. Will the class in the service layer that is using dao code get an exception?
Thanks for your response.
I meant to say updated with the same version.
until a bundle refresh occurs which includes the dependent bundle.
The bundle refresh operation is invoked by the user updating the bundle, right? Say that when the user invokes refresh to update the dao bundle, a class in the service bundle is invoking a method on a class in the dao bundle... what happens in this scenario?
I found this blog post helpful:
http://solutionsfit.com/blog/2008/08/27/osgi-what-modularity-can-do-for-you-part-1/
From the post:
If we simply replace the bundle with a bundle that includes the fix, the container will unregister the old bundle and register the new bundle. The proxy can then handle the reference shuffling and resume the service invocation. This interaction will be almost instantaneous. Your customers will be completely oblivious to what has happened and you just saved your company a substantial amount of money (do I hear bonus?).
In this blog post, the call to authorizePayment() was put on hold until the updated bundle is available. What happens if the control is within the authorizePayment() method when bundle refresh happens?
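The blog's "put the call on hold" idea can be sketched with a plain java.lang.reflect.Proxy, no OSGi API involved. PaymentService, the latch, and the delegate swap below are illustrative assumptions about how such a proxy could behave, not what a real container does internally:

```java
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical service interface standing in for the blog's example.
interface PaymentService {
    String authorizePayment(String orderId);
}

public class HoldDuringUpdate {
    public static void main(String[] args) throws Exception {
        AtomicReference<PaymentService> delegate =
                new AtomicReference<>(id -> "v1:" + id);
        CountDownLatch updated = new CountDownLatch(1);

        // Proxy that parks callers while the backing service is being swapped.
        PaymentService proxy = (PaymentService) Proxy.newProxyInstance(
                PaymentService.class.getClassLoader(),
                new Class<?>[] {PaymentService.class},
                (Object p, Method m, Object[] a) -> {
                    updated.await();              // hold until the "new bundle" is in
                    return m.invoke(delegate.get(), a);
                });

        Thread caller = new Thread(() ->
                System.out.println(proxy.authorizePayment("order-42")));
        caller.start();

        delegate.set(id -> "v2:" + id);           // simulate the bundle update
        updated.countDown();                      // release held invocations
        caller.join();                            // prints v2:order-42
    }
}
```

The in-flight call simply parks until the replacement delegate is in place; a real service proxy has to choose between blocking like this, retrying, or failing the call, which is exactly the design question the blog post glosses over.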

Bundles have two kinds of dependencies:
Services, and
Connections between class loaders, keyed by the package names. Those connections are called wires.
Services are easy to withdraw because that is intrinsic to their design. Wires are harder because they are intricately woven in your objects and those objects are not aware of the dynamics. So when you install a new bundle, the old bundles stay as they are, your objects are not updated and the updated bundle still provides its wires as a zombie.
When you call refreshPackages the framework looks at those dependencies and finds the bundles that refer to those zombies. Every zombie is then stopped. The contract for a bundle is that it should cleanup. We help the bundle by doing a lot of cleanup for you, but some things are very bad, e.g. storing references in statics of other bundles or forgetting to stop threads that were started. Other bundles that depend in other ways on those bundles get notified of the bundle stopping so they can also clean up any references. After the bundles are stopped, the bundles are unresolved and then resolved again against the new bundles.
For real OSGi bundles the cleanup is natural and not really visible in your code (as it should be). It is well supported by tools like Declarative Services, iPOJO, Dependency Manager, Spring, Blueprint, etc. The magic is focusing on the µservices model and not doing class loading hacks.
Why are we not refreshing automatically? Well, we once did, but refreshing is disruptive. In many cases you need to update multiple bundles, and having this disruption after each update would be unnecessarily painful. That is, after an install or update you should ALWAYS do a refresh, but you can bracket a number of installs/updates.

When you update a bundle using the OSGi 'update' command, it most likely has other dependent bundles relying on it that have already captured a set of loaded classes from the old version of this bundle. This is exactly the situation you described in your question.
In order to avoid possible inconsistency between the different versions of the classes provided by this bundle, the OSGi container temporarily hides the new version of the updated bundle's classes from the outside world. You can think of it as keeping the updated classes in isolation from the other bundles for the moment.
The point here is that the OSGi container can't just start loading classes from the new version of the target bundle, because the dependent bundles would end up seeing old versions of the classes they already loaded, mixed with new versions of the same classes loaded after the update, which would introduce an inconsistency and result in an uncontrollable mess. The same goes for bundle uninstall: the bundle is removed from the list of installed bundles, but it is not removed from memory. It is kept around so that dependent bundles can continue to load classes from it.
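That mixed-class-space mess can be reproduced in plain Java: the same class bytes defined by two different class loaders yield two distinct, incompatible Class objects. This is only an illustration of the mechanism; the class and loader names are made up:

```java
import java.io.ByteArrayOutputStream;
import java.io.InputStream;

public class TwoLoaders {
    // Stand-in for a class exported by the "dao" bundle.
    public static class Dao {}

    // Loader that defines Dao itself instead of delegating to its parent,
    // mimicking a second bundle revision with its own class space.
    static class BundleLoader extends ClassLoader {
        BundleLoader(ClassLoader parent) { super(parent); }
        @Override
        protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
            if (!name.equals(Dao.class.getName()))
                return super.loadClass(name, resolve);
            try (InputStream in = TwoLoaders.class.getResourceAsStream("/TwoLoaders$Dao.class")) {
                ByteArrayOutputStream buf = new ByteArrayOutputStream();
                in.transferTo(buf);
                byte[] bytes = buf.toByteArray();
                return defineClass(name, bytes, 0, bytes.length);
            } catch (Exception e) {
                throw new ClassNotFoundException(name, e);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        ClassLoader app = TwoLoaders.class.getClassLoader();
        Class<?> oldRev = new BundleLoader(app).loadClass(Dao.class.getName());
        Class<?> newRev = new BundleLoader(app).loadClass(Dao.class.getName());

        System.out.println("same class object: " + (oldRev == newRev)); // false
        Object fromNewRev = newRev.getDeclaredConstructor().newInstance();
        try {
            Dao d = (Dao) fromNewRev;   // mixing the two class spaces
        } catch (ClassCastException e) {
            System.out.println("ClassCastException when mixing class spaces");
        }
    }
}
```

This is why the framework keeps each dependent bundle wired to a single consistent class space until a refresh rebuilds the wiring.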
So you can think of the 'update' command as introducing a new version of the same bundle, to be supplied only to dependent bundles that are yet to come, i.e. those that are not yet there at the time of the update. The old version that existed before the update remains in memory to assure backward compatibility and avoid any possible disruption to existing bundles that already depend on the updated bundle.
Note that the old versions are only kept in memory, which means that a restart of the server will eradicate all these old versions and bring only the latest version to the table. This makes perfect sense, because there is no longer any need for backward compatibility: all bundles are now starting at the same time.
What happens next is that you have to explicitly invoke the 'refresh' command on specific bundles (those which depend on the updated bundle), or you can choose to run the 'refresh' command without specifying a bundle, meaning that all bundles will be refreshed. The 'refresh' command forces the rebuilding of the dependency tree of the target bundles and coerces their class loaders to start loading their required classes from scratch.
Only then will dependent bundles start to see the changes you made to the code of the classes living in the updated bundle.
The rule is that
Existing resolved bundles already importing an older version of a class won’t be automatically rewired to the new bundle unless they’re refreshed.

When a bundle is updated, a new revision (the bits of the bundle) is installed. If another bundle is wired to the prior revision of the updated bundle, that is, another bundle imported some package exported by the prior revision or another bundle required the bundle at the prior revision, then the OSGi framework will retain the prior revision of the updated bundle to service future class load requests from the dependent bundle until a bundle refresh occurs which includes the dependent bundle.
The purpose of this is to minimize or delay perturbing dependent bundles when a dependency is updated. A management agent may want to update several bundles and, at the end, do a bundle refresh to "modernize" the dependencies. Once the bundle refresh is done, there are no wires to the prior revision of the updated bundle and the OSGi framework is now free to discard the prior revision.
So in your example, generally no exception will result. But of course it depends upon what the code in question is actually doing and how their bundle manifests are written.

Related

OSGi bundle in start phase but not getting active

My OSGi bundle is giving me strange behaviour. Sometimes it becomes Active on the first go, and sometimes I need to restart Karaf again and again to see whether my bundle is active or not. I can't see any exception, and all other bundles are active. Can anyone suggest what the cause could be? I checked its headers and they are fine; we are importing all the packages and exporting none, as is the case for my bundle. I have hit the start command many times, but it is not reaching the Active state.
The installation order of your bundles might not be aligned with their dependency order. Try creating your own feature.xml file, where you can set the installation order of your bundles during startup. You can find more detail at https://karaf.apache.org/manual/latest/provisioning
And do not forget to add the features you have created to the featuresBoot list in $KARAF_ROOT/etc/org.apache.karaf.features.cfg in order to make your features install automatically during startup.
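A minimal feature file might look like the sketch below; the feature name, bundle coordinates, start levels, and schema version are all placeholders to adapt to your setup:

```xml
<features name="my-app-repo" xmlns="http://karaf.apache.org/xmlns/features/v1.3.0">
  <feature name="my-app" version="1.0.0">
    <!-- lower start-level bundles start first; names below are examples -->
    <bundle start-level="60">mvn:com.example/dao/1.0.0</bundle>
    <bundle start-level="70">mvn:com.example/service/1.0.0</bundle>
  </feature>
</features>
```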

How to add custom bundles part of the Target?

I am working on server-allinone in Eclipse. I would like to make
some custom bundles part of the basic bundles running on the Target.
Is there a way to do so from the configurations?
Conceptually, what is running on the target, consists of three logical parts:
The actual OSGi framework implementation (Apache Felix, Equinox, ...).
The "management agent" that Apache ACE provides.
All the bundles that make up your application.
Anything that is part of #3 can easily be installed, updated and uninstalled by ACE. ACE also has a mechanism that allows the management agent (#2) to update itself (which obviously is a bit of a special case). That is where ACE stops: it has no built-in mechanism to update #1, even though there has been some talk about providing one as well. You have to draw the line at some point though, as beyond that you could also ask yourself who is going to update the JVM, the OS and the bootloader.
Anyway, back to your question. If you want to make those custom bundles part of the basic bundles that are running on the target, conceptually they become part of the framework and you have no way in ACE to ever update them. If that is fine, just create a custom launcher that, besides installing the management agent, also installs these bundles. If that is not fine you need to add those bundles to the actual management agent bundle so they become part of that. That way you can update them as part of that bundle. However, I would like to challenge you and ask you why these bundles cannot be part of the application? Is there a special reason they must be pre-installed (updatable or not)?

how to handle multiple releases in a one go using Maven/hudson

We have 250+ applications, and most of them depend on generic/common components (around 30).
If there is any change in a common component, then we have to release all or most of the dependent components using Maven. This is a very painful task.
Is there any way we can avoid this? Do we need to change the design of the application, does Maven provide any solution for this, or is there any solution in Hudson by which we can schedule builds to do this?
Any inputs on this will be really helpful.
You could take a look at the Cascade Release Plugin which might work with Jenkins as well.
However, you should think about your workflow:
Does every application need to have the newest common components right now? Perhaps it is sufficient to include them in the next regular release?
That way, you turn the responsibilities around. So say your common component DOES indeed do a release. The next time your applications build (in the regular nightly build, using SNAPSHOTs), the build process notifies the responsible developer/release manager for the application that there is a new version of common.
The release manager can now decide whether the features/updates are necessary for his application. You could even let the nightly build automatically update the dependency to common (using the versions-maven-plugin).
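As a sketch, such a nightly job could declare the plugin and invoke one of its goals; the common component's coordinates and the plugin version below are examples, not taken from the question:

```xml
<!-- in the application's pom.xml -->
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>versions-maven-plugin</artifactId>
  <version>2.16.2</version>
</plugin>
```

The nightly build could then run something like `mvn versions:use-latest-releases -Dincludes=com.example:common` to bump only the common dependency, leaving everything else untouched.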
The point is: I would strongly advise against automatically updating and releasing the application because of common updates. This would create a strong coupling between both projects. Do the update automatically, if you want, but let the application developers/release managers decide when to release.
Of course, if the change in common is critical, they MIGHT need to release immediately.

Don't apply bundle updates to old bundles automatically

Is it possible in OSGi to keep bundle wirings even across restarts of the system, so that even if a new, higher version is now available we stay with the old one? The point is that if something works, we don't want to risk it by wiring the old bundles to the new dependency. In other words, we are trying to isolate updates as much as possible so that an update in a component doesn't affect other components AT ALL (since the old bundles will still be used to satisfy already-wired dependencies).
As an example, let's say A depends on B with range [1.0.0, 2.0.0). We deploy version 1.0.0 of B so now A is wired to B_1.0.0
Now we create a bundle C that depends on a logic change so it depends on B with a range [1.0.1, 2.0.0). So we deploy B_1.0.1. Now if we restart the system, C and A will be wired to 1.0.1 since it is in the dependency range of both and it is in theory a "better" match for A than 1.0.0. Is there any way to tell OSGi to not do this and keep wirings as long as possible (say, until the old bundle is actually removed, in which case it would be ok to go with the highest version in the range)
What we currently do is disallow ranges, so dependencies look like [1.0.0, 1.0.0]. This gives us the update isolation we want, but at the cost that we basically lose modularity; to update a dependency we need to update the dependents even if the code in the dependents didn't change. I think it is a huge anti-pattern to disallow ranges, so I'm trying to propose a better alternative with ranges that would still give us the update isolation we need, and my first idea is to disallow rewirings even across sessions.
If it matters, we are not using OSGi services. They are all plain bundles
The answer to the question in the first paragraph is simply: yes, this is the default behaviour of OSGi. If you stop and restart the framework without performing any bundle updates or package refresh operations, then the wiring state will be the same next time you start.
However, you change things in the second paragraph. You now have 2 exports of B with different versions, and both A and C depend on it, with non-identical but overlapping ranges. The overlap is the key here. OSGi always tries to minimise the number of independent exports of the same package, so if you have two bundles importing with overlapping ranges then the framework will try to wire them to the same version rather than two separate versions. The version that falls within the overlap here is 1.0.1, so both A and C will wire to 1.0.1.
You should not try to change this. If bundle A is actually compatible with the range [1.0.0, 2.0.0), then why would you oppose it being wired to version 1.0.1? On the other hand, if A is really only compatible with version 1.0.0 and not 1.0.1, then you should specify a very tight range, i.e. [1.0.0, 1.0.1).
Finally... your last paragraph makes me sad. Plain bundles use services!
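The overlap reasoning above can be sketched in a few lines of plain Java. This is only an illustration of the idea (half-open ranges, pick a version inside the intersection), not the framework's actual resolver:

```java
import java.util.Arrays;
import java.util.List;

public class RangeOverlap {
    // Compare dotted versions numerically, e.g. "1.0.1" > "1.0.0".
    static int cmp(String a, String b) {
        return Arrays.compare(
            Arrays.stream(a.split("\\.")).mapToInt(Integer::parseInt).toArray(),
            Arrays.stream(b.split("\\.")).mapToInt(Integer::parseInt).toArray());
    }

    // Half-open range [low, high), as in the question's [1.0.0, 2.0.0).
    static boolean in(String v, String low, String high) {
        return cmp(v, low) >= 0 && cmp(v, high) < 0;
    }

    public static void main(String[] args) {
        List<String> exported = List.of("1.0.0", "1.0.1");
        // A imports [1.0.0, 2.0.0), C imports [1.0.1, 2.0.0): the overlap
        // is [1.0.1, 2.0.0), so a single version can satisfy both importers.
        String shared = exported.stream()
                .filter(v -> in(v, "1.0.0", "2.0.0") && in(v, "1.0.1", "2.0.0"))
                .max(RangeOverlap::cmp)
                .orElseThrow();
        System.out.println("both A and C wire to " + shared); // 1.0.1
    }
}
```

Because 1.0.0 falls outside C's range, the only version that lets both importers share one export is 1.0.1, which is exactly why the framework picks it.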

What is the standard behaviour in OSGi for the container to pick a package that is exported by multiple bundles?

I was just wondering how the container selects which bundle to load a class from if the same package is exported multiple times by different bundles.
Given the following, what is true? Note that the list also reflects the actual order in which the bundles were deployed.
package.x version 1 (A)
package.x version 2 (B)
package.x version 3 (C)
If a 4th bundle is added and it needs 'package.x.SomeClass' in version range 1 to 2, from where will it be selected?
Does it randomly pick from either A or B?
Does it fail to deploy B because of the clash?
Does it pick A because it was first?
Do all containers do the same thing, or does the behaviour differ between the popular implementations?
The framework will pick either A or B. It does not do so "randomly", but the heuristics are complicated and it is better not to attempt to predict what will happen. Also the behaviour in this case is not specified and is subject to differences between OSGi framework implementations.
OSGi is a component framework, the whole idea is that these kind of issues are useless to discuss since they can vary depending on the framework and set of installed bundles. ANY unspecified assumption you make is likely to be violated and crash your code. The beauty of OSGi is that it allows YOU to specify YOUR constraints. OSGi frameworks will never violate your constraints, that is your guarantee. How it finds a solution in a particular case should be utterly irrelevant since any implicit assumption is likely to cause bugs in other situations.
In Felix, the container starts at version 0.0.0 of a package and then increments upward. It will then wire to the first matching version it hits. So, if you have two bundles exporting versions 1.1.1 and 1.2.0, and you attempt to wire to that package without specifying a version number, Felix should always choose version 1.1.1.
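That lowest-matching-version behaviour amounts to scanning candidates in ascending order and taking the first hit. The sketch below is an illustration of that heuristic only, not Felix's actual code, and per the previous answer you should not rely on it:

```java
import java.util.Arrays;
import java.util.List;

public class LowestFirst {
    // Numeric comparison of dotted version strings.
    static int cmp(String a, String b) {
        return Arrays.compare(
            Arrays.stream(a.split("\\.")).mapToInt(Integer::parseInt).toArray(),
            Arrays.stream(b.split("\\.")).mapToInt(Integer::parseInt).toArray());
    }

    public static void main(String[] args) {
        List<String> candidates = List.of("1.2.0", "1.1.1");
        // No version constraint on the import means "anything from 0.0.0 up";
        // scanning upward, the first candidate hit is the lowest one.
        String picked = candidates.stream().min(LowestFirst::cmp).orElseThrow();
        System.out.println("wired to " + picked); // 1.1.1
    }
}
```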
