How to keep a service/component running while updating a bundle in OSGi - osgi

I have implemented 2 services, A and B, in my bundle. I would like to change the code of service A by building a new jar file and running the update command, but keep service B running without restarting it.

Sounds like you have 2 services in 1 bundle. The unit of deployment is a bundle, so my recommendation is to split the two services into two bundles. Otherwise, undeploying your existing bundle will naturally also tear down Service B.
Alternatively, in case the API/interface resides in a separate bundle, you could deploy a new service implementation for A in a separate bundle with a higher ranking, and rewire all uses of the service. That typically gets rather confusing, so it's a distant second-place recommendation.
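For instance, a minimal sketch of that second option, assuming the shared interface (a hypothetical ServiceA) lives in its own stable API bundle; registering the replacement with a higher service ranking makes it the preferred candidate when clients look up a single instance:

import java.util.Hashtable;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.Constants;

// "ServiceA" stands in for the shared API interface; in a real setup it lives in a separate API bundle.
interface ServiceA { String ping(); }

public class NewServiceAActivator implements BundleActivator {
    @Override
    public void start(BundleContext ctx) {
        Hashtable<String, Object> props = new Hashtable<>();
        // A higher ranking than the old implementation makes this one win
        // whenever clients look up a single ServiceA.
        props.put(Constants.SERVICE_RANKING, 100);
        ctx.registerService(ServiceA.class, () -> "new implementation of A", props);
    }

    @Override
    public void stop(BundleContext ctx) {
        // The registration is torn down automatically when this bundle stops.
    }
}

Existing consumers only pick up the new implementation if their references are rewired (for example DS components with a dynamic, greedy policy), which is exactly the confusing part.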
Edit: You comment that you are combining services in a bundle to minimize the number of jars, but you want to update the services independently. Specifically for minimizing the number of jars: Are you trying to solve a problem that you indeed had? I'm mainly working with Liferay, which is fully OSGi, and a plain vanilla installation comes with more than 1000 bundles - the runtime handles it just fine. Make sure you're not preemptively optimizing something that doesn't need optimization.
If your components have different maintenance intervals, then deploy them in different bundles. Period. No use working against the system, which has no problem with the number of bundles at all.

Related

How can I minimize the weight of my lambda functions?

I am trying to build a microservice API using AWS (lambda + s3 + apiGateway) and I have noticed that all my lambdas have the same weight, so it seems like I am uploading the full project to every lambda instead of the needed resources.
Is there any way to upload just the resources I need for each function? Will this minimize the execution time? Is it worth doing?
Going to answer this in 2 parts:
(1) The obligatory "Why do you care?"
I ask this because I was really concerned too. But after testing, the size of the uploaded bundle (the jars in the lib folder of the lambda distribution bundle) didn't seem to really affect anything except maybe initial upload time (or maybe S3 usage if you are going that route).
For the sake of sanity, rather than having a bunch of nano projects and bundles, I have a single Java Lambda API module and then I upload the same artifact for every Lambda.
At some point, if it makes sense to separate for whatever reason (micro service architecture, separation of code, etc), then I plan on splitting.
Now having said that, the one thing that REALLY seems to affect Java-based lambdas is class loading time. You mentioned you use Spring. I would recommend you not use Spring configuration loading, as you will probably end up executing a bunch of code you never really need.
Remember, ideally your lambdas should be in the 100ms range.
I had a case where I was using the AWS SDK and initializing the AWSClient was taking 13 seconds! (13000 ms). When I switched to using Python or Node, it went to 56ms...
Remember that you get charged by time, and a 1000x factor is no laughing matter :)
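To illustrate the point about class loading and client initialization, here is a minimal sketch of a plain Java handler with no Spring context, building the SDK client once per container instead of on every invocation (the bucket-name input and the AWS SDK v1 classes are only an example):

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class ListBucketHandler implements RequestHandler<String, Integer> {

    // Built once per container, outside the request path, so the expensive
    // client initialization is not paid on every invocation.
    private static final AmazonS3 S3 = AmazonS3ClientBuilder.defaultClient();

    @Override
    public Integer handleRequest(String bucketName, Context context) {
        // Plain handler: nothing to bootstrap on cold start besides the JVM and this class.
        return S3.listObjectsV2(bucketName).getKeyCount();
    }
}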
(2) If you've decided on splitting, I'd recommend using the Gradle distribution plugin with child projects to make each child project and its zip distribution "light". I went down this road but realized I would be splitting my components really fine... and I'd either be duplicating configuration across projects, or, if I made a project dependency, I would simply end up bundling up the entire dependency tree again.
If you already know what you need to cherry pick without relying on gradle / maven to handle the dependencies for you, you can create gradle zip tasks to create different Lambda distribution packages.
AWS documentation: http://docs.aws.amazon.com/lambda/latest/dg/create-deployment-pkg-zip-java.html
You will need to build 3 different jars, one for each of your lambda functions, and in each jar package only the classes and resources that function requires, rather than creating a superset jar that contains the classes and resources for all of the lambda functions.
This way your jars will get lighter.
For more details about building lambda jars see Building AWS Lambda jar

Strategies for making platform level decisions within a bundle

I have a requirement where, if one bundle fails to start because of some internal state issue, the entire application should not be running, and thus the platform should be shut down (bundleCtx.getBundle(0).stop()).
Because of OSGi's nice modularity and so on, other bundles might've started up just fine.
It feels kinda wrong for bundles to be calling bundleCtx.getBundle(0).stop() (or System.exit(nn) if a BundleException occurs) in different places.
Is there a common way to implement this? One way may be Declarative Services, but those are only notified when a given component starts, right? They cannot tell if something has failed (AFAIK).
Ah, here is one possibility I just stumbled upon.
I have a bootstrap bundle which is responsible for starting all of the other bundles in my app. It does this with START_TRANSIENT.
I could put logic into this bundle to do certain things depending on which bundle failed.
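A rough sketch of what that bootstrap logic might look like (the symbolic names and the failure handling are placeholders):

import org.osgi.framework.Bundle;
import org.osgi.framework.BundleContext;
import org.osgi.framework.BundleException;

public class Bootstrap {
    // Start the application bundles transiently so the framework does not persist the started state.
    public void startAll(BundleContext ctx, String... symbolicNames) {
        for (Bundle b : ctx.getBundles()) {
            for (String name : symbolicNames) {
                if (name.equals(b.getSymbolicName())) {
                    try {
                        b.start(Bundle.START_TRANSIENT);
                    } catch (BundleException e) {
                        // Decide here what a failure of this particular bundle means for the
                        // application: log it, retry, or escalate to a full shutdown.
                        System.err.println("Failed to start " + name + ": " + e.getMessage());
                    }
                }
            }
        }
    }
}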
So one idea would be to have one bundle that checks whether all needed services and bundles come up. It can then stop the framework if one or more services are missing or if a bundle does not start. This would allow the checking logic to be centralized in one place.
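For example, a hedged sketch of such a watchdog bundle, using a ServiceTracker to wait for each required service and stopping the system bundle (and with it the whole framework) if one never appears; the service names and the timeout are illustrative only:

import org.osgi.framework.BundleContext;
import org.osgi.util.tracker.ServiceTracker;

public class StartupCheck {
    // Hypothetical service interface names; replace with the services your application requires.
    private static final String[] REQUIRED_SERVICES = {
            "com.example.PaymentService", "com.example.InventoryService" };

    // Wait briefly for the required services; stop the whole framework if any is missing.
    public void verify(BundleContext ctx) throws Exception {
        for (String name : REQUIRED_SERVICES) {
            ServiceTracker<Object, Object> tracker = new ServiceTracker<>(ctx, name, null);
            tracker.open();
            Object service = tracker.waitForService(30_000); // 30 second grace period
            tracker.close();
            if (service == null) {
                // Stopping the system bundle shuts down the entire framework cleanly.
                ctx.getBundle(0).stop();
                return;
            }
        }
    }
}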

OSGi: Programmatically decide whether a bundle can be started

Context
I have a large OSGi-based (Equinox 3.9/Eclipse RCP 4.4) application, with a few "optional" bundles. Each of these optional bundles provides alternative user interfaces and some extra services (through SCR components) that are applicable only to a subset of our customers.
The application is always distributed as a pre-packaged installation (that is, we do not use P2 nor any other automatic provisioning mechanism). Until recently, we built a different pre-packaged installation for each customer that needed some optional bundles. The number of optional bundles is now growing, and so is the number of custom pre-packaged installations to be built.
So we would like to build only one installation package to be delivered to all customers, containing all optional bundles, and then decide at runtime which optional bundles shall be launched. But here's the twist: the set of optional features to be enabled is known only after the user has authenticated with the server. That is, some end users have access to multiple accounts, each giving access to a distinct set of optional features to be enabled. Consequently, optional bundles must remain unavailable until the user logs in, and then only the allowed optional bundles should be loaded. Since these optional bundles make contributions through various extenders (Eclipse's plugin registry, SCR and Blueprint), this means that optional bundles must not be allowed to reach the RESOLVED state until it has been determined that they are indeed authorized to launch. It must also be possible to load and unload those optional bundles as the user logs in and out of a specific server account.
Potential solutions and questions
I have identified a few potential solutions, but have unresolved questions about each of them. So I really only need proper positive answers to all questions of any one of the following scenarios.
Have optional bundles require some "enabling" capability. That would be, I think, the cleanest approach. Optional bundles would add something like "Require-Capability: com.acme.optionalfeatures; identifier=my-custom-feature-identifier". At runtime, the capability would remain unresolved until the user is authenticated, so extenders would automatically ignore that bundle. The bundle would automatically change state once the enabling capability is registered. That's great. But here are the missing parts: a) How can I register a new capability namespace? b) Can I dynamically alter a bundle's provided capabilities, and if so, how? c) (if (b) is not possible) Can I dynamically register a new "Resource" with new capabilities, and if so, how?
Have optional bundles import some "enabling" package. That is a variant on the previous scenario, which might be easier to manage than custom capability namespaces... Those "enabling packages" could then come into existence through "dynamically created" packages. That is, some manager bundle would call BundleContext#installBundle(String, InputStream) with a ByteArrayInputStream, returning a dynamically generated bundle archive containing a Manifest.MF which exports the appropriate "enabling packages". Sounds simple enough, but several components along the toolchain (the IDE, PDE, P2, Product Export...) will complain about the fact that these required packages don't exist. To avoid those issues, the package-import header would need to be added dynamically at the time the bundle is installed into the framework. But is there any mechanism that allows a bundle's headers to be altered that early (that is, as soon as the bundle is installed into the framework)? Class weaving is not applicable here.
Have optional bundles all require a single enabling package (all the same), which does exist. Then a resolver hook would filter the bundle exporting that package out of the potential wiring candidates as long as the optional bundle has not yet been authorized. But how can I later request that all bundles be reconsidered for resolution when the user logs in or out?
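Regarding that last question, one way to ask the framework to reconsider resolution is through FrameworkWiring on the system bundle. A minimal sketch, assuming the manager bundle keeps track of the optional bundles (whether you need refreshBundles, resolveBundles, or both depends on whether wirings already exist):

import java.util.Collection;
import org.osgi.framework.Bundle;
import org.osgi.framework.BundleContext;
import org.osgi.framework.wiring.FrameworkWiring;

public class ReResolver {
    // Ask the framework to re-run resolution for the given optional bundles.
    public void reconsider(BundleContext ctx, Collection<Bundle> optionalBundles) {
        FrameworkWiring wiring = ctx.getBundle(0).adapt(FrameworkWiring.class);
        // refreshBundles detaches any existing wirings (useful on log-out);
        // resolveBundles attempts resolution again (useful on log-in).
        wiring.refreshBundles(optionalBundles); // asynchronous; pass a FrameworkListener to observe completion
        wiring.resolveBundles(optionalBundles);
    }
}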
Excluded solutions
The following solutions have already been excluded:
Putting optional bundles in distinct directories, then using BundleContext#installBundle(...) to enable these bundles. Though this solution does indeed work at deploy time, it imposes a significant burden on development (because bundles then sit in some hard-to-predict folder, relative to the workspace, local git configuration, test environment and so on, making it impossible to reliably build the location to be provided to #installBundle). The packaging procedure also becomes much more complex, as we first need to move those optional bundles aside and then update several Equinox/P2 configuration files to prevent them from trying to locate these now-missing bundles.
Preventing optional bundles from being activated by conditionally throwing an exception from their activator. This would simply not solve our problem, since optional bundles would still be allowed to reach the RESOLVED state, and would therefore still be able to make contributions through extenders.
Using P2 to install new features at runtime. The problem here is that changes to the list of bundles made through P2 1) are inherently persistent (which means that optional features will automatically be re-enabled at the next launch) and 2) require a framework restart to be properly applied.
Note: We have no "security"-related concerns about the fact that those extra bundles are actually distributed to users that do not require them. I know that users might easily hack the installation in order to force the launch of some optional bundles. This poses no significant issue.
You can create and install a bundle which provides the capability or package which the optional bundles require. This will enable them to be resolved.
But I don't understand how your model would work when multiple users access the server with varying rights. A more privileged user would cause the optional bundles to "activate", so these features would then be available to less privileged users accessing the server at the same time.
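As a rough sketch of the first sentence, a manager bundle could generate a manifest-only bundle in memory that provides the capability and install it once the user is authorized (the namespace and feature identifier mirror the hypothetical ones from the question):

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.util.jar.JarOutputStream;
import java.util.jar.Manifest;
import org.osgi.framework.Bundle;
import org.osgi.framework.BundleContext;

public class FeatureEnabler {
    // Hypothetical capability namespace matching the optional bundles' Require-Capability headers.
    private static final String NAMESPACE = "com.acme.optionalfeatures";

    public Bundle enableFeature(BundleContext ctx, String featureId) throws Exception {
        Manifest mf = new Manifest();
        mf.getMainAttributes().putValue("Manifest-Version", "1.0");
        mf.getMainAttributes().putValue("Bundle-ManifestVersion", "2");
        mf.getMainAttributes().putValue("Bundle-SymbolicName", "com.acme.enabler." + featureId);
        mf.getMainAttributes().putValue("Bundle-Version", "1.0.0");
        // Advertise the capability the optional bundle requires.
        mf.getMainAttributes().putValue("Provide-Capability", NAMESPACE + "; identifier=" + featureId);

        ByteArrayOutputStream jarBytes = new ByteArrayOutputStream();
        new JarOutputStream(jarBytes, mf).close(); // manifest-only bundle, no classes needed

        // Install from memory; the location string only has to be unique.
        return ctx.installBundle("enabler:" + featureId,
                new ByteArrayInputStream(jarBytes.toByteArray()));
    }
}

Uninstalling this enabler bundle (and refreshing the optional bundles) is then the log-out counterpart.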

Spring Profile conditional prop files multiple environment

I have a question about Spring Profiles. I understand the reason for not using Maven profiles: each environment would require a different artifact. That makes sense. I modified my code to use Spring Profiles, but the problem I have with them is that they require you to have a per-environment db properties file on the server. I have this setup, the same setup everyone has seen a hundred times.
src
  main
    resources
      conf
        myapp.properties
      env
        dev.db.properties
        test.db.properties
        prod.db.properties
The problem I see with this setup is that each server would have all the files in the env dir (i.e. dev would have the prod.db.properties and test.db.properties files on its server). Is there a way to copy only the files that are needed during the Maven build without using profiles? I haven't been able to figure one out. If not, then this would seem like a reason to use Maven profiles. I may have missed something. Any thoughts would be greatly appreciated.
This seems like a chicken and egg problem to me. If you want your artifact to work on all these 3 environments you need to ship the 3 configurations. Not doing so would lead to the same issue you mentioned originally. It's generally a bad practice to build an artifact with certain coordinates differently according to a profile.
If you do not want to ship the configuration in the artifact itself, you could externalize the definition either through the use of system property or by locating a properties file at a defined place (that you could override for convenience).
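For instance, a minimal sketch of that externalized variant, assuming a system property (here db.config) points to the environment's properties file; the property keys and the DataSource wiring are illustrative only:

import javax.sql.DataSource;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.PropertySource;
import org.springframework.context.support.PropertySourcesPlaceholderConfigurer;
import org.springframework.jdbc.datasource.DriverManagerDataSource;

@Configuration
// ${db.config} is resolved from a system property or environment variable,
// e.g. -Ddb.config=/etc/myapp/db.properties, so the same artifact runs everywhere.
@PropertySource("file:${db.config}")
public class DatabaseConfig {

    @Bean
    public static PropertySourcesPlaceholderConfigurer placeholders() {
        return new PropertySourcesPlaceholderConfigurer();
    }

    @Bean
    public DataSource dataSource(@Value("${db.url}") String url,
                                 @Value("${db.user}") String user,
                                 @Value("${db.password}") String password) {
        return new DriverManagerDataSource(url, user, password);
    }
}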
You should first work out what your application really is: are you running "an application in different environments" or "different applications in their own proprietary environments"? These are two slightly different concepts:
If you are running one application in different environments, it's better to put all property files into your jar. Picture it like buying a new SUV: you first drive it on a test track, then on ordinary highways, before finally going off-road to enjoy its off-road capabilities. You always use one and the same car in different environments, with all its capabilities and driving characteristics, and in each environment the car adapts its behaviour. If you use one application across different environments, take the first approach and build all environment characteristics into one jar.
On the other hand, you can also use slightly different cars in different environments. So if you need different cars, each with its own special driving characteristics for its environment, maybe 4WD or special floodlights because you are driving at night, you should take the second approach. Back to the application: if you need different applications with very different characteristics in different production environments, it's better to build each application with only the properties it really needs.
Finally you can also merge the two approaches:
my-fun-application-foo.jar for customer foo with properties for test, integration and production environment.
my-fun-application-2047.jar for customer 2047 with properties for test, pre-integration, integration, pre-production and production environment.
Now you should also understand why you shouldn't use profiles for building an application in different flavours.

How to use R-Osgi to get remote "exported-package"?

R-OSGi provides a way to call services in a remote OSGi container. Website: http://r-osgi.sourceforge.net.
I'm new to R-OSGi, and I now want to split my OSGi container into smaller ones that interact with each other through R-OSGi, because it has become too huge. But it seems R-OSGi only provides a way to share registered services. Besides services, the other most popular way for 2 bundles to interact, the exported package, is also widely used.
So, is there anyone familiar with R-OSGi who knows how to use an exported package from a remote OSGi container?
Thanks for any response.
If you think about it, attempting to handle the remote importing/exporting of packages is very complex, fragile and prone to error; you'd need to send all bundle lifecycle events over the wire, and honour them in the importing system (which would require caching).
Additionally the framework would need to know ahead of time to use these class definitions (you cannot instantiate a class that references classes that aren't available to your classloader). A remote bundle's classloader may depend on classes from another classloader, this chain could go round and round a network making classloading take ages.
Put another way; your local bundles will never resolve without the class definitions they depend on, and considering there could be thousands+ of potential remote exporters on a network/hardware with very poor SLA, this wouldn't scale well or be very robust considering the fallacies of distributed computing.
If we tried to do remote packages, the framework would need to import all exported packages from all available remote nodes and then select just one to import each bundle export from (this would be arbitrary, and if the selected node goes down, the whole remote-package import process would have to be triggered again).
What you need to do is separate your API/interfaces from your implementation; you then distribute the API bundle to all nodes that need it and use dOSGi to import the services.
Apologies if this is unclear or waffly, but it should explain why it's simply not practical to have remote exported packages.
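As a small sketch of that split: the providing node registers its implementation with the standard Remote Services export property, and consumers only need the shared API bundle; the distribution provider (R-OSGi or another dOSGi implementation) handles the rest. GreeterService is a hypothetical API used here only for illustration:

import java.util.Hashtable;
import org.osgi.framework.BundleContext;

// Placeholder for your API interface, shipped in a shared API bundle to every node.
interface GreeterService { String greet(String name); }

public class RemoteGreeterRegistration {
    public void register(BundleContext ctx, GreeterService impl) {
        Hashtable<String, Object> props = new Hashtable<>();
        // Standard Remote Services property: ask the distribution provider to export
        // this service over the network so other nodes can import it as a proxy.
        props.put("service.exported.interfaces", "*");
        ctx.registerService(GreeterService.class, impl, props);
    }
}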
On a side note, I'm not sure whether r-osgi is being actively maintained or up to date with the latest Remote Services Admin spec; the last commit to SVN trunk was on 14/02/2011. There are some alternative dOSGi implementations listed here (but avoid CXF).
EDIT: In terms of deployment, distributing your bundles (and configuration) can be done from an OBR (there are a number of public ones, and several implementations exist for Felix/Eclipse), or a Maven repository can be repurposed with the Pax URL handler.
