Identify OSGi bundle failure in Karaf programmatically - osgi

We are using Karaf 4 as our OSGi container. We have several bundles associated with a feature. When any bundle in a feature fails, we want to identify it programmatically. We tried BundleTracker and BundleListener, but we do not get a notification when a bundle fails after waiting in the "GracePeriod" state.
We are able to view the status of the failed bundle using the "list" command in the Karaf console. We want to get this information programmatically through a notification rather than by executing the "list" command in the Karaf console.

You can use the BundleService as an OSGi service. The method getInfo gives you, among other things, the BundleState. For failed bundles you can then call getDiag to get the detailed status.
This is actually what the diag command does internally.
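A sketch of what that can look like, assuming the code runs inside Karaf (where the org.apache.karaf.bundle.core packages are available); the class name and the printing are just illustrative:

```java
// Sketch: report failed bundles via Karaf's BundleService.
// Only works inside a running Karaf container, not standalone.
import org.apache.karaf.bundle.core.BundleInfo;
import org.apache.karaf.bundle.core.BundleService;
import org.apache.karaf.bundle.core.BundleState;
import org.osgi.framework.Bundle;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;

public class BundleFailureChecker {

    public void reportFailures(BundleContext context) {
        ServiceReference<BundleService> ref =
                context.getServiceReference(BundleService.class);
        if (ref == null) {
            return; // BundleService not available (not running in Karaf?)
        }
        BundleService bundleService = context.getService(ref);
        try {
            for (Bundle bundle : context.getBundles()) {
                BundleInfo info = bundleService.getInfo(bundle);
                if (info.getState() == BundleState.Failure
                        || info.getState() == BundleState.GracePeriod) {
                    // getDiag returns the same details as the "diag" shell command
                    System.out.printf("Bundle %s is %s:%n%s%n",
                            bundle.getSymbolicName(), info.getState(),
                            bundleService.getDiag(bundle));
                }
            }
        } finally {
            context.ungetService(ref);
        }
    }
}
```

You could drive this from a scheduled job, or combine it with a BundleListener that re-checks the BundleState after bundle events, since (as you observed) the transition out of GracePeriod does not fire a bundle event by itself.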

Related

kogito return process instance not found after restart the service

I need some advice and an explanation for my case. Here is my Kogito setup:
Kogito service --> Data Index (PostgreSQL) --> Kogito Management Console --> Kogito Task Console.
I created a simple BPMN process; it is just a User Task.
Test scenario:
1. With the Kogito service, Management Console, and Task Console running, I submit the workflow until all phases complete in the Task Console.
2. With the Kogito service, Management Console, and Task Console running, I submit the workflow and the task waits in the Task Console. I then stop the Kogito service and start it again. When I try to post to the task, the Task Console returns the error "process instance with id 2493dndnxxx not found".
I don't understand why. I would really appreciate it if someone could explain this case. Is it normal or not?
Thank you.
I expect someone can explain whether this is a normal situation or not.
In my understanding, the task for that process instance ID should still be submittable even after I stop the Kogito service, because we have the Data Index with PostgreSQL.
A Kogito service is ephemeral by default, which means any process instance started will be lost if you restart the service. To maintain state, you must add one of the persistence add-ons to your Kogito runtime project. See the docs here for more information about the supported persistence types: https://docs.kogito.kie.org/latest/html_single/#con-persistence_kogito-developing-process-services.
In this other section, there are also more details about how that can be combined with other services such as the Data Index, which also supports different persistence types: https://docs.kogito.kie.org/latest/html_single/#con-data-index-service_kogito-configuring
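For illustration, a persistence add-on is just a dependency in the runtime project's pom.xml. The artifact below is a sketch for the Quarkus JDBC add-on; the exact artifact name depends on your Kogito version and runtime, so verify it against the linked docs:

```xml
<!-- Illustrative only: check the Kogito docs for the artifact matching your version -->
<dependency>
  <groupId>org.kie.kogito</groupId>
  <artifactId>kogito-addons-quarkus-persistence-jdbc</artifactId>
</dependency>
```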

Talend ESB deployment on runtime

I'm on Talend ESB Runtime.
I encountered problems while starting ./trun: nothing appeared on the screen after start. The process is launched, but I can't get anything else...
Anyway, I tried to deploy a job, and there is something weird in tesb.log about org.osgi.framework.BundleException.
karaf.log is OK.
Here tesb.log :
tesb.log
karaf.log :
karaf.log
log in repository data :
timestamplog
I don't know how to investigate, because the logs are sparse and the JVM is the same between Talend ESB and the runtime...
Can you help me please?
You only showed a small snippet of the log. From it I can already see that at least one bundle cannot be resolved, which means that bundle cannot be used. In the snippet it seems to be a user bundle, but I am pretty sure you have other such log messages showing that one of the main Karaf bundles cannot be loaded.
If you want to find the cause of the problem, look into these messages and search for non-optional packages that are not resolved. Usually this points to a missing bundle.
If you simply want to get your system running again, you can reset Karaf by running
./trun clean
Remember though that you then have to reinstall all features.

Viewing TeamCity service messages

I'm troubleshooting a build step in TeamCity 9.0.4. The problem seems to lie within the service message output. Is it possible to view these after the build has completed? They are not included in the build log.
The documentation on service messages simply says: "In order to be processed by TeamCity, they should be printed into a standard output stream of the build."
https://confluence.jetbrains.com/display/TCD9/Build+Script+Interaction+with+TeamCity
(To some extent the service messages can be viewed by manually rerunning the build step and monitoring standard output, but this is not always feasible.)
The documentation for service messages implies that you need to write them to standard out/error rather than to a log file. If you write them to standard out, TeamCity will automatically pick them up and show them in the **Build Log** tab.
What this means is that if you have a:
- shell script, use echo for your service messages
- Java class, use System.out.println
and so on.
Different languages also have plugins for this; for example, Perl has TapHarness.pl to write TeamCity messages to the console.
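As a minimal Java illustration of the point above (the key name myMetric is just an example), a service message is nothing more than a specially formatted line written to standard output:

```java
// Minimal sketch: a TeamCity service message is just a formatted line on stdout.
public class ServiceMessages {

    // Builds a buildStatisticValue service message for the given key/value.
    static String statisticValue(String key, String value) {
        return "##teamcity[buildStatisticValue key='" + key + "' value='" + value + "']";
    }

    public static void main(String[] args) {
        // TeamCity reads this from stdout and shows it in the Build Log tab
        System.out.println(statisticValue("myMetric", "42"));
    }
}
```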
EDIT:
If you just want to view service messages, you can find them in the build logs on the TeamCity agent that the build ran on. If you do not find them there, either the build log has rolled over, or you need to increase the verbosity or debug level of your logs (this depends on the language).
There was a problem with nested service messages that has since been solved:
TeamCity now parses service messages inside other service messages, but only if the original message is tagged with tc:parseServiceMessagesInside. Example:
##teamcity[testStdOut name='test1' out='##teamcity|[buildStatisticValue key=|'my_stat_value|' value=|'125|'|]' tc:tags='tc:parseServiceMessagesInside']
A link to JetBrains bug tracker:
https://youtrack.jetbrains.com/issue/TW-45311
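The escaping used in the example above (| becomes ||, ' becomes |', [ becomes |[, ] becomes |]) can be generated with a small helper; this is a sketch, and the class and method names are just illustrative:

```java
// Escapes a value for embedding inside a TeamCity service message.
// TeamCity's rules: | -> ||, ' -> |', [ -> |[, ] -> |], newlines -> |n / |r.
public class TcEscape {

    static String escape(String s) {
        return s.replace("|", "||")   // must run first: later rules introduce '|'
                .replace("'", "|'")
                .replace("[", "|[")
                .replace("]", "|]")
                .replace("\n", "|n")
                .replace("\r", "|r");
    }

    public static void main(String[] args) {
        String inner = "##teamcity[buildStatisticValue key='my_stat_value' value='125']";
        // Reproduces the nested-message example shown above
        System.out.println("##teamcity[testStdOut name='test1' out='"
                + escape(inner) + "' tc:tags='tc:parseServiceMessagesInside']");
    }
}
```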

Host bundle fails to start on Karaf restart when two fragments are present

I am using Karaf 3.0.1 and have two fragment bundles A and B that attach to a host bundle C. I am able to install A, then B, then C, and start C and everything works fine.
When I stop and start Karaf, though, the host usually fails and does not start successfully. Both fragments are listed as "Resolved" and show as attached to the host, and the host shows it is attached to the fragments, but the host has a state of "Failure". The exception in the log file is:
20140507 07:35:39.011 [ERROR] FelixStartLevel | 19:org.apache.aries.blueprint.core |
org.apache.aries.blueprint.container.BlueprintContainerImpl | Unable to start
blueprint container for bundle <host bundle name>
org.osgi.service.blueprint.container.ComponentDefinitionException: Unable to load
class <class from 2nd fragment> from recipe BeanRecipe[name='<bean ID from second
fragment blueprint XML>']
If I deploy only a single fragment, then I'm able to restart Karaf and the host starts fine. But with two fragments, Karaf will restart OK perhaps once or twice, but then fails and never successfully starts again. I played with start levels, and giving the host a higher/later start level doesn't help at all.
I read When is an OSGi fragment attached to host?, which makes it clear that start levels don't affect resolution order, and saw the suggestion to use Provide/Require-Capability headers. I tried that and see the same behavior, although again with a single fragment it works fine.
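For reference, the Provide/Require-Capability approach mentioned above looks roughly like this; the namespace and attribute names are purely illustrative. Each fragment advertises a capability in its MANIFEST.MF:

```
Fragment-Host: host.bundle.symbolicname
Provide-Capability: example.ns; fragment.name=A
```

and the host requires the capabilities of both fragments, so it cannot resolve until both are attached:

```
Require-Capability: example.ns; filter:="(fragment.name=A)",
 example.ns; filter:="(fragment.name=B)"
```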
What else might I do to get this to work? Might there be a bug in Karaf/Felix regarding multiple fragments for the same host on restart?
Yes, I'd rather not use fragments but am porting a fairly complex Java EE app to OSGi and this is the approach that works given the code-base I have, but if I can't depend on things starting correctly when Karaf starts, this won't be workable.
Thanks,
Kevin

How does one run Spring XD in distributed mode?

I'm looking to start Spring XD in distributed mode (more specifically deploying it with BOSH). How does the admin component communicate to the module container?
If it's via TCP/HTTP, surely I'll have to tell the admin component where all the containers are? If it's via Redis, I would've thought that I'll need to tell the containers where the Redis instance is?
Update
I've tried running xd-admin and Redis on one box, and xd-container on another, with redis.properties updated to point to the admin box. The container starts without reporting any exceptions.
Running the example stream submission curl -d "time | log" http://{admin IP}:8080/streams/ticktock yields no output to either console, and no output to the logs.
If you are using the xd-container script, then redis.properties is expected to be under XD_HOME/config, where XD_HOME points to the base directory containing the bin, config, lib, and modules directories of XD.
Communication between the Admin and Container runtime components is via the messaging bus, which by default is Redis.
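For reference, a minimal redis.properties sketch pointing the container at a remote Redis, assuming the default Redis port (the property names may differ between XD versions, so check the file shipped in your distribution's config directory):

```properties
# XD_HOME/config/redis.properties
redis.hostname=<admin-box-IP>
redis.port=6379
```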
Make sure the environment variable XD_HOME is set as described in the documentation; if it is not, you will see a logging message that suggests the properties file has been loaded correctly when in fact it has not:
13/06/24 09:20:35 INFO support.PropertySourcesPlaceholderConfigurer: Loading properties file from URL [file:../config/redis.properties]
