How to follow OSGi bundle changes

What's the best way to track any change in the entire bundle population?
I've got some logic which scans all bundles for some bundle capabilities.
I want this to kick in only if there's a change in the bundle population, i.e. a bundle has been added/updated/removed.

Use a BundleTracker. You will receive a callback on the changes.
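A minimal sketch of what that might look like, assuming your existing capability-scanning logic sits behind a rescan() method (a hypothetical name):

```java
import org.osgi.framework.Bundle;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.BundleEvent;
import org.osgi.util.tracker.BundleTracker;

public class CapabilityScanActivator implements BundleActivator {

    private BundleTracker<Bundle> tracker;

    @Override
    public void start(BundleContext context) {
        // Track bundles that are at least RESOLVED; adjust the state mask to taste.
        tracker = new BundleTracker<Bundle>(context,
                Bundle.RESOLVED | Bundle.STARTING | Bundle.ACTIVE | Bundle.STOPPING, null) {

            @Override
            public Bundle addingBundle(Bundle bundle, BundleEvent event) {
                rescan();          // a bundle entered the tracked states (installed/resolved)
                return bundle;     // returning non-null keeps the bundle tracked
            }

            @Override
            public void modifiedBundle(Bundle bundle, BundleEvent event, Bundle tracked) {
                rescan();          // a tracked bundle changed state, e.g. after an update
            }

            @Override
            public void removedBundle(Bundle bundle, BundleEvent event, Bundle tracked) {
                rescan();          // a tracked bundle left the tracked states or was uninstalled
            }
        };
        tracker.open();
    }

    @Override
    public void stop(BundleContext context) {
        tracker.close();
    }

    private void rescan() {
        // Placeholder for your existing capability-scanning logic.
    }
}
```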

Related

How to keep a service/component running while updating a bundle in OSGi

I have implemented two services, A and B, in my bundle. I would like to change the code of service A by building a new jar file and running the update command, while keeping service B running without restarting it.
Sounds like you have 2 services in 1 bundle. The unit of deployment is a bundle, so my recommendation is to split the two services into two bundles. Otherwise, undeploying your existing bundle will naturally also tear down Service B.
Alternatively, in case the API/interface resides in a separate bundle, you could deploy a new service implementation for A in a separate bundle with a higher ranking, and rewire all uses of the service. That is typically rather confusing, so it's a distant second recommendation.
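A rough sketch of that alternative, assuming the shared interface is called ServiceA and the replacement implementation NewServiceAImpl (both hypothetical names living in other bundles). The replacement is registered with a higher service.ranking so that lookups which prefer the highest-ranked service pick it up:

```java
import java.util.Hashtable;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.Constants;
import org.osgi.framework.ServiceRegistration;

public class NewServiceAActivator implements BundleActivator {

    private ServiceRegistration<ServiceA> registration;

    @Override
    public void start(BundleContext context) {
        Hashtable<String, Object> props = new Hashtable<>();
        // Higher ranking than the old implementation (default ranking is 0),
        // so getServiceReference()-style lookups prefer this registration.
        props.put(Constants.SERVICE_RANKING, 100);
        // ServiceA / NewServiceAImpl are placeholders for your real interface and implementation.
        registration = context.registerService(ServiceA.class, new NewServiceAImpl(), props);
    }

    @Override
    public void stop(BundleContext context) {
        registration.unregister();
    }
}
```

Note that consumers holding on to the old service instance only switch over when they re-acquire the service or when they are wired dynamically (e.g. DS components with a dynamic reference policy), which is exactly the "rewiring" the answer above warns about.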
Edit: You comment that you are combining services in a bundle to minimize the number of jars, but you want to update the services independently. Specifically for minimizing the number of jars: Are you trying to solve a problem that you indeed had? I'm mainly working with Liferay, which is fully OSGi, and a plain vanilla installation comes with more than 1000 bundles - the runtime handles it just fine. Make sure you're not preemptively optimizing something that doesn't need optimization.
If your components have different maintenance intervals, then deploy them in different bundles. Period. No use working against the system, which has no problem with the number of bundles at all.

Using Hanami model and rake tasks without a router etc

I'm going to write a service that will use the AMQP protocol, without HTTP at all. I like Hanami's repository-entity-model-interactor paradigm and would like to use those pieces in my project. Generating all that stuff by hand, sure, is boring.
So I'd like to grab the rake tasks. Looking into config/environment and so on... ugh. What is the best way, in short, to use those tools without the Hanami router and controllers? Or is it all tightly integrated?
As I see it at the moment, there are two ways:
a) include only hanami-model in my Gemfile, then copy every needed file from the hanami gem by hand;
b) create a Hanami project and simply not use rackup.
I'm disappointed.
Alternatively, you can add hanami as a development gem. That gives you access to the code generators. At the deploy stage, you don't bundle hanami, so the app will only have hanami-model and hanami-utils in production.
If I understand you correctly, you want to use interactors only together with models. Interactors can be used as a regular Ruby library.
For the model, you need to configure all this stuff and load it into memory. You can check the example from our playbook; hope it'll be helpful for you:
https://github.com/hanami/playbook/blob/master/development/bug_templates/model_psql.rb

Strategies for making platform level decisions within a bundle

I have a requirement where, if one bundle fails to start because of some internal state issue, the entire application should not run, and the platform should therefore be shut down (bundleCtx.getBundle(0).stop()).
Because of OSGi's nice modularity and so on, other bundles might've started up just fine.
It feels kinda wrong for bundles to be calling bundleCtx.getBundle(0).stop() (or System.exit(nn) if a BundleException occurs) in different places.
Is there a common way to implement this? One option might be Declarative Services, but a component is only notified when it starts, right? It cannot tell when something else has failed (AFAIK).
Ah, here is one possibility I just stumbled upon.
I have a bootstrap bundle which is responsible for starting all of the other bundles in my app. It does this with START_TRANSIENT.
I could put logic into this bundle to do certain things depending on which bundle failed.
So one idea would be to have one bundle that checks whether all needed services and bundles come up. It can then stop the framework if one or more services are missing or if a bundle does not start. This lets you centralize the checking logic in one place.
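A rough sketch of such a watchdog bundle. The required bundle names and the fixed grace period are placeholders; listening for the framework-started event with a FrameworkListener would be a cleaner trigger:

```java
import org.osgi.framework.Bundle;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.BundleException;

public class StartupCheckActivator implements BundleActivator {

    // Hypothetical list of bundles that must be ACTIVE for the application to make sense.
    private static final String[] REQUIRED_BUNDLES = {
            "com.example.core", "com.example.persistence" };

    @Override
    public void start(BundleContext context) {
        // Run the check asynchronously so the framework can finish starting everything.
        new Thread(() -> check(context), "startup-check").start();
    }

    private void check(BundleContext context) {
        try {
            Thread.sleep(10_000); // crude grace period; a FrameworkListener would be cleaner
            for (String name : REQUIRED_BUNDLES) {
                if (!isActive(context, name)) {
                    System.err.println("Required bundle " + name + " is not ACTIVE, shutting down.");
                    context.getBundle(0).stop(); // stopping the system bundle shuts the framework down
                    return;
                }
            }
        } catch (InterruptedException | BundleException e) {
            e.printStackTrace();
        }
    }

    private boolean isActive(BundleContext context, String symbolicName) {
        for (Bundle b : context.getBundles()) {
            if (symbolicName.equals(b.getSymbolicName()) && b.getState() == Bundle.ACTIVE) {
                return true;
            }
        }
        return false;
    }

    @Override
    public void stop(BundleContext context) {
    }
}
```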

OSGi: Programmatically decide whether a bundle can be started

Context
I have a large OSGi-based (Equinox 3.9 / Eclipse RCP 4.4) application with a few "optional" bundles. Each of these optional bundles provides alternative user interfaces and some extra services (through SCR components) that are applicable only to a subset of our customers.
The application is always distributed as a pre-packaged installation (that is, we use neither P2 nor any other automatic provisioning mechanism). Until recently, we built a different pre-packaged installation for each customer that needed some optional bundles. The number of optional bundles is now growing, and so is the number of custom pre-packaged installations to be built.
So we would like to build only one installation package to be delivered to all customers, containing all optional bundles, and then decide at runtime which optional bundles shall be launched. But here's the twist: the set of optional features to be enabled is known only after the user has authenticated with the server. That is, some end users have access to multiple accounts, each giving access to a distinct set of optional features. Consequently, optional bundles must remain unavailable until the user logs in, and then only the allowed optional bundles should be loaded. Since these optional bundles make contributions through various extenders (Eclipse's plugin registry, SCR and Blueprint), this means that optional bundles must not be allowed to reach the RESOLVED state until it has been determined that they are indeed authorized to launch. It must also be possible to load and unload those optional bundles as the user logs in and out of a specific server account.
Potential solutions and questions
I have identified a few potential solutions, but have unresolved questions about each of them. So what I really need is proper, positive answers to all the questions of any one of the following scenarios.
Have optional bundles require some "enabling" capability. That would be, I think, the cleanest approach. Optional bundles would add something like "Require-Capability: com.acme.optionalfeatures; identifier=my-custom-feature-identifier". At runtime, the capability would remain unresolved until the user is authenticated, so extenders would automatically ignore that bundle. The bundle would automatically change state once the enabling capability is registered. That's great. But here are the missing parts:
a) How can I register a new capability namespace?
b) Can I dynamically alter a bundle's provided capabilities, and if so, how?
c) (if (b) is not possible) Can I dynamically register a new "Resource" with new capabilities, and if so, how?
Have optional bundles import some "enabling" package. That is a variant on the previous scenario, which might be easier to manage than custom capability namespaces. Those "enabling packages" could come into existence through dynamically created packages: some manager bundle would call BundleContext#installBundle(String, InputStream) with a ByteArrayInputStream, returning a dynamically generated bundle archive whose MANIFEST.MF exports the appropriate "enabling packages". Sounds simple enough, but several components along the toolchain (the IDE, PDE, P2, Product Export...) will complain that these required packages don't exist. To avoid those issues, the package-import header would need to be added dynamically at the moment the bundle is installed into the framework. But is there any mechanism that allows a bundle's headers to be altered that early (that is, as soon as the bundle is installed into the framework)? Class weaving is not applicable here.
Have optional bundles all require a single enabling package (the same one for all), which does exist. A resolver hook would then filter the bundle exporting that package out of the potential wiring candidates as long as the optional bundle has not yet been authorized. But how can I later request that all bundles be reconsidered for resolution when the user logs in or out?
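For the re-resolution part of that last scenario, the framework wiring API of the system bundle can be asked to re-run resolution once the set of authorized features changes. A minimal sketch (the resolver hook registration itself is omitted):

```java
import org.osgi.framework.Bundle;
import org.osgi.framework.BundleContext;
import org.osgi.framework.wiring.FrameworkWiring;

public final class ResolutionRefresher {

    /** Ask the framework to re-attempt resolution after the set of authorized features changed. */
    public static void refreshResolution(BundleContext context) {
        Bundle systemBundle = context.getBundle(0);
        FrameworkWiring wiring = systemBundle.adapt(FrameworkWiring.class);
        // null = attempt to resolve all currently unresolved bundles; the ResolverHook
        // registered elsewhere decides which candidates are visible this time around.
        wiring.resolveBundles(null);
    }
}
```

For the log-out case, already resolved optional bundles would additionally need to be refreshed (FrameworkWiring#refreshBundles) so that their wiring is torn down again.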
Excluded solutions
The following solutions have already been excluded:
Putting optional bundles in distinct directories, then using BundleContext#installBundle(...) to enable these bundles. Though this solution does indeed work at deploy time, it imposes a significant burden on development (because bundles then sit in some hard-to-predict folder relative to the workspace, local git configuration, test environment and so on, making it impossible to reliably build the location to be passed to #installBundle). The packaging procedure also becomes much more complex, as we first need to move those optional bundles aside and then update several Equinox/P2 configuration files to stop them from trying to locate these now-missing bundles.
Preventing optional bundles from being activated by conditionally throwing an exception from their activator. This would simply not solve our problem, since optional bundles would still be allowed to reach the RESOLVED state and could therefore still make contributions through extenders.
Using P2 to install new features at runtime. The problem here is that changes to the list of bundles made through P2 1) are inherently persistent (which means that optional features would automatically be re-enabled at the next launch) and 2) require a framework restart to be properly applied.
Note: We have no "security"-related concerns about the fact that those extra bundles are actually distributed to users who do not require them. I know that users might easily hack the installation in order to force the launch of some optional bundles. This poses no significant issue.
You can create and install a bundle which provides the capability or package which the optional bundles require. This will enable them to be resolved.
But I don't understand how your model would work when multiple users access the server with varying rights. A more privileged user would cause the optional bundles to "activate", and these features would then be available to less privileged users accessing the server at the same time.
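A minimal sketch of that first suggestion: a tiny bundle is generated in memory whose manifest only provides the enabling capability, and it is installed through BundleContext#installBundle. The symbolic name, location string, and helper class below are made up for illustration:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.jar.JarOutputStream;
import java.util.jar.Manifest;
import org.osgi.framework.Bundle;
import org.osgi.framework.BundleContext;
import org.osgi.framework.BundleException;

public final class FeatureEnabler {

    /** Installs and starts a tiny in-memory bundle whose only job is to provide the enabling capability. */
    public static Bundle enableFeature(BundleContext context, String featureId)
            throws IOException, BundleException {
        Manifest mf = new Manifest();
        mf.getMainAttributes().putValue("Manifest-Version", "1.0");
        mf.getMainAttributes().putValue("Bundle-ManifestVersion", "2");
        mf.getMainAttributes().putValue("Bundle-SymbolicName", "com.acme.enabler." + featureId);
        mf.getMainAttributes().putValue("Bundle-Version", "1.0.0");
        // Custom capability namespaces need no registration anywhere; the namespace is just a name
        // that the Require-Capability filters in the optional bundles must match.
        mf.getMainAttributes().putValue("Provide-Capability",
                "com.acme.optionalfeatures; identifier=" + featureId);

        ByteArrayOutputStream out = new ByteArrayOutputStream();
        new JarOutputStream(out, mf).close(); // an empty jar containing only the manifest

        Bundle enabler = context.installBundle("memory:enabler/" + featureId,
                new ByteArrayInputStream(out.toByteArray()));
        enabler.start();
        return enabler;
    }
}
```

To disable the feature again (e.g. on log-out), uninstall the enabler bundle and refresh the optional bundles through FrameworkWiring#refreshBundles so their wiring is re-evaluated.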

Writing an entire application on top of Capistrano

I am working on a task that needs to check out source from a GitHub repository, and then modify some files in the checked-out repository based on existing configuration data that comes from a separate call to a different web service as JSON. The changes to the checked-out code are temporary and will not be pushed back to GitHub.
Once the checked out source is processed and modified based upon the configuration data, I will create a compressed archive of the resulting source.
I just discovered Capistrano and it seems great for this entire process, even though the task has nothing to do with deployment. On the other hand, I could simply use plain Ruby to do the same stuff. Currently I am leaning towards using Capistrano with custom tasks.
So you could say it's an app based on Capistrano itself, with local deployment. Does that sound like a sane approach? Should I write it in plain Ruby instead? Or maybe write parts of the application in pure Ruby and connect the pieces with Capistrano? Any suggestion is welcome.
I sincerely recommend Thor (see GitHub); it's a pure-Ruby task framework like Rake (whereas Capistrano carries a lot of cruft for server-cluster grouping and connection handling… Rake is closer to classical "make"-style build tasks).
My recommendation is a set of Thor tasks, using raw Net::SSH (Cap is based on Net::SSH) where appropriate.
For the checkout part I recommend you watch the "Amp" project… they're coming up with a consistent cross-SCM way to do checkouts (but that's the least of your problems). You can take a look here, though it's early days for them yet: http://github.com/michaeledgar/amp
Sources: as the Capistrano maintainer, I'm planning on throwing out our own DSL and replacing it with Thor, since it makes a lot more sense.
As for me, I write things like this in a Rakefile and then use a rake command to call them.
You'll find that Rakefiles are similar to Capfiles, so rake is usually used for local tasks and cap for remote ones.
