Is it possible in OSGi to keep bundle wirings across restarts of the system, so that even if a new, higher version is now available we stay with the old one? The point is that if something works we don't want to risk it by rewiring the old bundles to the new dependency. In other words, we are trying to isolate updates as much as possible so that an update to one component doesn't affect other components AT ALL (since the old bundles will still be used to satisfy already-wired dependencies).
As an example, let's say A depends on B with range [1.0.0, 2.0.0). We deploy version 1.0.0 of B, so A is now wired to B_1.0.0.
Now we create a bundle C that depends on a logic change in B, so it depends on B with the range [1.0.1, 2.0.0), and we deploy B_1.0.1. If we now restart the system, both C and A will be wired to 1.0.1, since it is in the dependency range of both and is in theory a "better" match for A than 1.0.0. Is there any way to tell OSGi not to do this and to keep existing wirings as long as possible (say, until the old bundle is actually removed, at which point it would be fine to go with the highest version in the range)?
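To make this concrete, the headers involved would look roughly like this (com.example.b is a placeholder package name):

    B 1.0.0:  Export-Package: com.example.b;version="1.0.0"
    B 1.0.1:  Export-Package: com.example.b;version="1.0.1"
    A:        Import-Package: com.example.b;version="[1.0.0,2.0.0)"
    C:        Import-Package: com.example.b;version="[1.0.1,2.0.0)"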
What we currently do is disallow ranges, so dependencies look like [1.0.0, 1.0.0]. This gives us the update isolation we want, but at the cost of basically losing modularity: to update a dependency we need to update the dependents, even if the code in the dependents didn't change. I think disallowing ranges is a huge anti-pattern, so I'm trying to propose a better alternative that uses ranges but still gives the update isolation we need, and my first idea is to disallow rewirings even across sessions.
If it matters, we are not using OSGi services; these are all plain bundles.
The answer to the question in the first paragraph is simply: yes, this is the default behaviour of OSGi. If you stop and restart the framework without performing any bundle updates or package refresh operations, then the wiring state will be the same next time you start.
However, you change things in the second paragraph. You now have two exports of the package from B with different versions, and both A and C depend on it with non-identical but overlapping ranges. The overlap is the key here. OSGi always tries to minimise the number of independent exports of the same package, so if you have two bundles importing with overlapping ranges, the framework will try to wire them to the same version rather than to two separate versions. The version that falls within the overlap here is 1.0.1, so both A and C will wire to 1.0.1.
You should not try to change this. If bundle A is actually compatible with the range [1.0.0, 2.0.0), then why would you object to it being wired to version 1.0.1? On the other hand, if A is really only compatible with version 1.0.0 and not 1.0.1, then you should specify a very tight range, i.e. [1.0.0, 1.0.1).
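In manifest terms that tight range is just (placeholder package name again):

    Import-Package: com.example.b;version="[1.0.0,1.0.1)"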
Finally... your last paragraph makes me sad. Plain bundles should be using services!
Related
I'm working with a Grails application at version 2.2.4 and I need a procedure for upgrading to the latest version (I hope that's possible). My first thought was to follow the instructions on the official site, but those only take me to version 3.
I'd like to know if anyone has already done this or has experience with it: how long does it take, what is the process, and what are the main problems?
Many thanks in advance.
I think you need to follow both sets of upgrade instructions: the one for 3.x and the one for 4.x.
Start with the 3.x changes and then move on to the 4.x changes.
Another approach, which I think may be better, is to start an empty 4.x application and then move your code over to it. Also check first that all the plugins you are using have a 3+ version.
The effort required to upgrade can vary massively depending on multiple factors, including the size of the project, the quality of the original code, whether plugins were used (and, if so, whether they have been updated or the functionality will need replacing), and whether deprecated taglibs such as remoteFunction were used, etc.
There is not a great deal of difference between 3.x and 4.x, so it makes sense to upgrade straight to 4.x.
Tackle it in stages from the basis of a new project, attempting to rebuild the project between stages.
Reestablish configuration. You don't have to use application.yml (the default in 4.x), so you can create an application.groovy with the same parameters as in your old project (see the sketch after this list).
Move over domain objects but use a new database URL, and compare the schemas between the old DB and the new DB to ensure they are the same; that is, unless you don't rely on GORM to recreate/update the schema.
Move over any other source and command objects, ensuring the project still builds. You may need to modify the build configuration at this stage to bring in dependencies and plugins.
Move over services, ensure everything compiles, and make sure transactions are behaving as intended.
Move over controllers ensuring any tests run successfully.
Move over the views.
Hopefully if the project is still building at this stage, you can run it!
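For the configuration step above, a minimal application.groovy sketch (keys and values here are placeholders, not a complete Grails configuration):

    // application.groovy -- settings carried over from the old project
    grails.serverURL = 'http://localhost:8080/myapp'  // placeholder URL
    dataSource {
        pooled = true
        driverClassName = 'org.h2.Driver'  // placeholder driver
        username = 'sa'
        password = ''
    }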
There are multiple modules in our applications, and each of them has its own version and depends on other modules (some external to our organization). They all have a parent POM which has its own version, independent from the children's versions.
When one of those modules changes, its version is converted to a snapshot.
For the following example:
Parent v14.0
- module1 v1.5.0
  - dependency1 (module2 v15.0.0)
  - dependency2 (external-jar v12.0.1)
- module2 v15.0.0
- module3 v3.1.0
If there were a change in module2, then module2's version would become v15.0.0-SNAPSHOT and module1's would become v1.5.0-SNAPSHOT. The parent remains the same.
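In POM terms, the module1 setup would look roughly like this (group ids are placeholders and the POM is abbreviated):

    <!-- module1/pom.xml: its own version, independent of the parent's -->
    <project>
      <modelVersion>4.0.0</modelVersion>
      <parent>
        <groupId>com.example</groupId>
        <artifactId>parent</artifactId>
        <version>14.0</version>
      </parent>
      <artifactId>module1</artifactId>
      <version>1.5.0-SNAPSHOT</version>
      <dependencies>
        <dependency>
          <groupId>com.example</groupId>
          <artifactId>module2</artifactId>
          <version>15.0.0-SNAPSHOT</version>
        </dependency>
      </dependencies>
    </project>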
The purpose of not having the same version on the parent and the modules is that we want to localize the updates made to some modules and not affect the other modules' versions.
This was designed a long time ago, and there are several bash scripts to support the updates, although they don't handle all the cases. In any case, we don't have a one-click release process, and we feel we are quite far from one with this approach.
We don't know how to convince management to move towards a single-version approach for all modules. How do you feel about the above? Have you ever encountered a project using this structure, and how well did it go?
Thank you!
I've had to deal with such situations before. There is a real benefit to having decentralized versions, especially when your product is made up of a large number of modules, because of the following facts:
You don't have to release all of them as a whole if only a handful have changed (which, from my observation, is almost always the case).
You don't have to create unnecessary tags in your version control for code which hasn't changed since the previous release.
You don't have to waste an excessive amount of time releasing modules which don't need to be released.
You know with certainty which modules have changed in a release, which helps a lot when you need to investigate a complex bug that seems to date back a while.
You can actually release certain modules/aggregators before the actual release date of the complete product, allowing for more testing time and a feeling of completeness for a given part of the product.
You can make feature branch releases much more easily and implement continuous delivery in a better way.
You can re-use the same code across multiple development branches without wondering if that branched version matches the one for your branch (or at least with less confusion).
What we ended up doing was:
Extract a parent or set of parents (with no sub-modules).
Try to use fixed versions for parents as much as possible. This is a bit of a caveat, as you must change all modules that inherit from it, but in the end it improves stability.
Extract each of the modules whose versions are independent of the rest into separate projects.
Extract sets of modules whose versions must always move along together into aggregators.
Create jobs in your CI server that can do releases, or release these modules manually.
Use the versions-maven-plugin.
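For example, bumping a module's version with the plugin instead of editing POMs by hand (the version value is illustrative):

    # set the new version on the module (and any sub-modules)
    mvn versions:set -DnewVersion=1.5.1-SNAPSHOT
    # then either keep the change or roll it back
    mvn versions:commit
    mvn versions:revert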
I think using decentralized versions reflects more maturity in a project and in a company's development principles, and I must admit that in the beginning I was very reluctant about this approach. You might not realize or understand the benefits immediately, but with some practice and a proper setup you will start seeing the upsides. I'm not saying there aren't caveats, like, for example, bumping the version of a parent, or having to know in which modules to bump the version of one of your modules.
From my experience, this model actually works better in the end, once you've become used to working with it.
From my experience: Everywhere we tried this, it eventually failed.
Although Maven supports this approach, it is not advisable because of the additional effort.
I try to use the following criteria when choosing whether to use distinct projects or a multimodule structure:
If all projects have the same release cycle, I put them in a common multi module structure. In that case, I give them all the same version and release them together.
If a part of the project is used by different other projects (organizational projects), I always split it off and give it a separate lifecycle and a separate version.
If one part of my project stabilizes, I split it off and give it a separate lifecycle (Maven refactoring).
Doing it differently always results in homebrew solutions that neither scale well nor are easy to maintain.
Hope that helps.
Let's suppose I have a project called myLib-1.1.0. This project has a dependency on lib-dependency-1.2.3.
If there's a new version of this dependency and I need to use it, should I change my project's version as well? No other modifications are made to myLib.
At the same time, myLib is a dependency for various other projects. My main concern is the impact a small change in a dependency might have upstream.
Yes. In Maven, released versions are immutable. If you release 1.1.0 with a dependency on lib-dependency-1.2.3, then that's it.
If you change it to depend on lib-dependency-1.2.4, then that's a new version. You should not redeploy 1.1.0, since some people might have already pulled that (supposedly immutable) 1.1.0.
So that means you need a different version, even if it's just a new qualifier (myLib-1.1.0-RC-2 for example, but 1.1.1 is better).
Maven doesn't recheck remote repos for release versions once it has them in the local repo, so if someone already has 1.1.0 locally, they will not get the new, fixed 1.1.0.
And about your rippling problem: upstream projects should depend on the lowest acceptable released version. I.e., if an upstream project is itself fine with myLib-1.1.0 because it doesn't (indirectly) need lib-dependency-1.2.4, then it should stay with 1.1.0.
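In POM terms that just means leaving the dependency declaration at the lowest version that works (the coordinates are placeholders):

    <dependency>
      <groupId>com.example</groupId>
      <artifactId>myLib</artifactId>
      <!-- stay at 1.1.0 until something from 1.1.1 is actually needed -->
      <version>1.1.0</version>
    </dependency>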
Any code change that potentially affects behavior should be given a new version number; in other words, anything that's not an absolutely trivial change should be given a new version number. A changed dependency definitely qualifies, because unless you do a thorough code inspection of the dependency, you have no reason to assume that it only made absolutely trivial changes.
Changes are often advertised as "small" (similar to "absolutely trivial" as I call it above), but they hardly ever are. They may be negligible in someone's use case, but not in someone else's. I've even seen circumstances where changes only to the Javadocs of a project broke things down the line. (You could argue about how smart it is for someone to depend that strongly on Javadoc, but that's beside the point, isn't it?)
That is not to say that you can't accumulate changes and release a bunch of them as a single release. While accumulating, your project is in flux and should have a ...-SNAPSHOT version. There should be no two versions of myLib-1.1.0 (without the -SNAPSHOT) that differ by even the smallest change.
The fact that you're re-releasing your project also makes explicit that regression testing and the like should be redone to validate that it still works with the changes in its dependency.
I was just wondering how the container selects which package to load a class from if the same package is exported multiple times by different bundles.
Given the following, what is true? Note the list also shows the actual order in which the bundles were deployed.
package.x version 1 (A)
package.x version 2 (B)
package.x version 3 (C)
If a 4th bundle is added and it needs 'package.x.SomeClass' in the version range 1-2, from where will it be selected?
Does it randomly pick from either A or B?
Does the 4th bundle fail to deploy because of the clash?
Does it pick A because it was first?
Do all containers do the same thing, or does the behaviour differ between the popular implementations?
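To make the setup concrete, the headers involved would look something like this (the version syntax is illustrative, reading '1-2' as the inclusive range [1,2]):

    A: Export-Package: package.x;version="1.0.0"
    B: Export-Package: package.x;version="2.0.0"
    C: Export-Package: package.x;version="3.0.0"
    4th bundle: Import-Package: package.x;version="[1.0.0,2.0.0]"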
The framework will pick either A or B. It does not do so "randomly", but the heuristics are complicated and it is better not to try to predict what will happen. Also, the behaviour in this case is not specified and is subject to differences between OSGi framework implementations.
OSGi is a component framework, the whole idea is that these kind of issues are useless to discuss since they can vary depending on the framework and set of installed bundles. ANY unspecified assumption you make is likely to be violated and crash your code. The beauty of OSGi is that it allows YOU to specify YOUR constraints. OSGi frameworks will never violate your constraints, that is your guarantee. How it finds a solution in a particular case should be utterly irrelevant since any implicit assumption is likely to cause bugs in other situations.
In Felix, the container starts at version 0.0.0 of a package and then increments upward. It will then wire to the first version it hits. So, if you have two bundles exporting version 1.1.1 and version 1.2.0, and you attempt to wire to that package without specifying a version number, Felix should always choose version 1.1.1.
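Rather than trying to predict the choice, you can ask the framework what it actually did. A minimal sketch using the standard OSGi wiring API (4.3+); the class name is made up:

    import org.osgi.framework.Bundle;
    import org.osgi.framework.wiring.BundleWire;
    import org.osgi.framework.wiring.BundleWiring;

    public final class WireInspector {
        // Prints, for each package import of 'importer', which bundle ended up providing it.
        public static void printPackageWires(Bundle importer) {
            BundleWiring wiring = importer.adapt(BundleWiring.class);
            if (wiring == null) {
                return; // the bundle is not resolved, so it has no wires yet
            }
            for (BundleWire wire : wiring.getRequiredWires("osgi.wiring.package")) {
                Object pkg = wire.getCapability().getAttributes().get("osgi.wiring.package");
                Object version = wire.getCapability().getAttributes().get("version");
                System.out.println(pkg + " " + version + " <- "
                        + wire.getProvider().getBundle().getSymbolicName());
            }
        }
    }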
When a bundle is updated (say to fix a bug), what happens to other bundles that are currently using the one being updated?
Say there are two bundles, service and dao, and classes in the service bundle are using classes in the dao bundle when I issue a command to update the dao bundle. Will the class in the service layer that is using dao code get an exception?
Thanks for your response.
I meant to say updated with the same version.
"until a bundle refresh occurs which includes the dependent bundle"
The bundle refresh operation is invoked by the user updating the bundle, right? Say that when the user invokes refresh to update the dao bundle, a class in the service bundle has just invoked a method on a class in the dao layer... what happens in this scenario?
I found this blog post helpful:
http://solutionsfit.com/blog/2008/08/27/osgi-what-modularity-can-do-for-you-part-1/
From the post:
If we simply replace the bundle with a bundle that includes the fix, the container will unregister the old bundle and register the new bundle. The proxy can then handle the reference shuffling and resume the service invocation. This interaction will be almost instantaneous. Your customers will be completely oblivious to what has happened and you just saved your company a substantial amount of money (do I hear bonus?).
In this blog post, the call to authorizePayment() was put on hold until the updated bundle was available. What happens if control is inside the authorizePayment() method when the bundle refresh happens?
Bundles have 2 kinds of dependencies:
Services, and
Connections between class loaders, keyed by the package names. Those connections are called wires.
Services are easy to withdraw because that is intrinsic to their design. Wires are harder because they are intricately woven into your objects, and those objects are not aware of the dynamics. So when you install a new bundle, the old bundles stay as they are; your objects are not updated, and the updated bundle still provides its wires as a zombie.
When you call refreshPackages, the framework looks at those dependencies and finds the bundles that refer to those zombies. Every zombie is then stopped. The contract for a bundle is that it should clean up. We help the bundle by doing a lot of cleanup for you, but some things are very bad, e.g. storing references in statics of other bundles or forgetting to stop threads that were started. Other bundles that depend in other ways on those bundles get notified of the bundle stopping so they can also clean up any references. After the bundles are stopped, they are unresolved and then resolved again against the new bundles.
For real OSGi bundles the cleanup is natural and not really visible in your code (as it should be). It is well supported by tools like Declarative Services, iPOJO, Dependency Manager, Spring, Blueprint, etc. The magic is focusing on the µservices model and not doing class-loading hacks.
Why are we not refreshing automatically? Well, we once did, but refreshing is disruptive. In many cases you need to update multiple bundles, and having this disruption after each update would be unnecessarily painful. That is, after an install or update you should ALWAYS do a refresh, but you can bracket a number of installs/updates, as sketched below.
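A minimal sketch of that bracketing pattern using the framework wiring API (the class and method names are mine; error handling elided):

    import org.osgi.framework.Bundle;
    import org.osgi.framework.BundleContext;
    import org.osgi.framework.BundleException;
    import org.osgi.framework.wiring.FrameworkWiring;

    public final class BatchUpdater {
        // Update several bundles first, then pay the refresh disruption once at the end.
        public static void updateAll(BundleContext ctx, Bundle... bundles) throws BundleException {
            for (Bundle b : bundles) {
                b.update(); // installs the new revision; the old one keeps serving existing wires
            }
            // The system bundle (id 0) adapts to FrameworkWiring.
            FrameworkWiring fw = ctx.getBundle(0).adapt(FrameworkWiring.class);
            fw.refreshBundles(null); // null = let the framework pick every affected bundle
        }
    }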
When you update a bundle using the OSGi 'update' command, it most likely has other dependent bundles relying on it that have already captured a set of loaded classes from the old version of the bundle, a situation that matches the problem you described in your question.
In order to avoid possible inconsistency between the different versions of the classes contained in this bundle, the OSGi container temporarily hides the new version of the updated bundle's classes from the outside world. You can think of it as momentarily keeping the updated classes in isolation from the other bundles.
The point here is that the OSGi container can't just start loading classes from the new version of the target bundle, because the dependent bundles would end up seeing old versions of the classes they already loaded mixed with new versions of the same classes loaded after the update, which would introduce an inconsistency and result in an uncontrollable mess. The same goes for bundle uninstall: the bundle is removed from the installed list of bundles, but it is not removed from memory. It is kept around so that dependent bundles can continue to load classes from it.
So you can think of the 'update' command as introducing a new version of the same bundle, supplied only to dependent bundles that are yet to come (those not yet present at the time of the update), while the old version that existed before the update remains in memory in order to ensure backward compatibility and avoid any possible disruption to existing bundles that already depend on the updated bundle.
Note that the old versions are only kept in memory, which means that a restart of the framework will eradicate all of these old versions and bring only the latest version to the table. This makes perfect sense, because there is no longer any need for backward compatibility, simply because all bundles are now starting at the same time.
What happens next is that you have to explicitly invoke the 'refresh' command on specific bundles (those which depend on the updated bundle), or instead run the 'refresh' command without specifying any bundle, meaning that all bundles will be refreshed. The 'refresh' command forces the rebuilding of the dependency tree of the target bundles and coerces their class loaders to start loading their required classes from scratch.
Only then will dependent bundles start to see the changes you made to the code of the classes living in the updated bundle.
The rule is that existing resolved bundles already importing an older version of a class won't be automatically rewired to the new bundle unless they're refreshed.
When a bundle is updated, a new revision (the bits of the bundle) is installed. If another bundle is wired to the prior revision of the updated bundle, that is, another bundle imported some package exported by the prior revision or another bundle required the bundle at the prior revision, then the OSGi framework will retain the prior revision of the updated bundle to service future class load requests from the dependent bundle until a bundle refresh occurs which includes the dependent bundle.
The purpose of this is to minimize or delay perturbing dependent bundles when a dependency is updated. A management agent may want to update several bundles and, at the end, do a bundle refresh to "modernize" the dependencies. Once the bundle refresh is done, there are no wires to the prior revision of the updated bundle and the OSGi framework is now free to discard the prior revision.
So in your example, generally no exception will result. But of course it depends on what the code in question is actually doing and how the bundle manifests are written.
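If you want to see this retention in action: between an update and the subsequent refresh, a bundle reports more than one in-use revision (a small sketch with the OSGi 4.3+ API; the class name is made up):

    import org.osgi.framework.Bundle;
    import org.osgi.framework.wiring.BundleRevision;
    import org.osgi.framework.wiring.BundleRevisions;

    public final class RevisionDump {
        // Lists the current revision first, then any prior revisions still wired to dependents.
        public static void printRevisions(Bundle b) {
            BundleRevisions revisions = b.adapt(BundleRevisions.class);
            for (BundleRevision rev : revisions.getRevisions()) {
                System.out.println(rev.getSymbolicName() + " " + rev.getVersion());
            }
        }
    }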