We have been using Karaf 2.3.3 for some months now on a system that receives data files, translates the data into objects, and persists them to a data store.
Recently, we've noticed that when Karaf is stopped and restarted, the bundles get into some kind of locked state for a period of time.
Here is a sequence of events:
1) During the Chef run, bundles are deployed into the deploy directory while Karaf is down.
2) When Karaf comes up, all bundles and blueprints resolve correctly.
3) When Karaf is cycled, the bundles resolve correctly, but the blueprints get into a locked state: most are up, one is in a Stopping state, and several may be in a Resolved state.
4) After 5 minutes (a timeout), the Stopping bundle goes to Resolved, and some other bundle moves into the Stopping state.
5) Some of the time (most of the time?), if you wait long enough, all bundles will eventually move to an Active state and the system will be fully up.
While Karaf is starting, I can use the Karaf client to issue 'list' commands and watch the bundles start up. The bundles cycle from Installed -> Resolved -> Active, while the blueprints cycle from blank -> Creating -> Created, with an occasional GracePeriod thrown in while dependent services are coming up.
After it appears that all bundles are Active and all blueprints are Created, one bundle will get stuck in a Stopping state while others revert to a Resolved state:
[ 136] [Active ] [Created ] [ 80] transformation-services (1.0.3)
[ 137] [Active ] [Created ] [ 80] event-services (0.1.2)
[ 138] [Active ] [Created ] [ 80] ftp-services (0.0.0)
[ 139] [Active ] [Created ] [ 80] ingest-resources (0.0.1)
[ 140] [Active ] [Created ] [ 80] orchestration-app (0.2.3)
[ 141] [Active ] [Created ] [ 80] aws-services (0.4.0)
[ 142] [Resolved ] [ ] [ 80] point-data-service-test (0.2.0)
[ 143] [Active ] [Created ] [ 80] event-consumer-app (1.3.4)
[ 144] [Stopping ] [ ] [ 80] XXXX_no_op_log_transform.xml (0.0.0)
[ 145] [Resolved ] [ ] [ 80] persistence-app (1.3.3)
[ 146] [Active ] [Created ] [ 80] ftp-ingest-endpoint (1.0.2)
[ 147] [Resolved ] [ ] [ 80] secondary_ftp.xml (0.0.0)
[ 148] [Resolved ] [ ] [ 80] event-rest-test (0.0.0)
[ 149] [Resolved ] [ ] [ 80] customer_credentials.xml (0.0.0)
[ 150] [Resolved ] [ ] [ 80] customer1_xml.xml (0.0.0)
[ 151] [Active ] [Created ] [ 80] endpoint-services (0.0.0)
[ 152] [Active ] [Created ] [ 80] scheduler-services (0.1.0)
[ 153] [Active ] [Created ] [ 80] fourhundred_xml.xml (0.0.0)
[ 154] [Active ] [Creating ] [ 80] point-data-service (2.3.3)
[ 155] [Installed ] [ ] [ 80] customer1_csv.xml (0.0.0)
We have around 20 custom bundles that perform a variety of services. Some describe services that run in a scheduled executor, some expose CXF REST services, and some are simple blueprint files that have been dropped into the Karaf deploy directory. We are using the whiteboard pattern to discover, register, and access the services defined in the blueprint files dropped into the hot-deploy directory.
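For illustration, here is a minimal sketch of that setup, with hypothetical com.example names (the real interfaces and classes differ): a dropped-in blueprint file registers a service, and a consumer bundle collects all registered implementations through a reference-list.

<!-- deploy/customer1_transform.xml: a hot-deployed blueprint registering one service -->
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
    <bean id="transform" class="com.example.NoOpTransform"/>
    <service ref="transform" interface="com.example.Transform"/>
</blueprint>

<!-- consumer bundle: whiteboard-style discovery of every Transform service -->
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
    <reference-list id="transforms" interface="com.example.Transform"
                    availability="optional"/>
    <bean id="registry" class="com.example.TransformRegistry">
        <argument ref="transforms"/>
    </bean>
</blueprint>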
I've played around with using a feature file and with setting bundle start levels (a sketch follows below), but I still see the same behavior. There are a few JIRAs I've found that describe this as a blueprint synchronization problem (https://issues.apache.org/jira/browse/KARAF-1724, https://issues.apache.org/jira/browse/ARIES-1051), but they don't offer any concrete advice.
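For reference, a features file that pins per-bundle start levels looks roughly like this; the artifact coordinates are placeholders, and the start-level attribute is part of the Karaf features schema:

<features name="ingest-features" xmlns="http://karaf.apache.org/xmlns/features/v1.0.0">
    <feature name="ingest" version="1.0.0">
        <!-- infrastructure first: lower start level -->
        <bundle start-level="60">mvn:com.example/persistence-app/1.3.3</bundle>
        <!-- consumers later: higher start level -->
        <bundle start-level="70">mvn:com.example/event-consumer-app/1.3.4</bundle>
    </feature>
</features>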
Has anyone come across this same issue and come up with a reliable way to work around it?
I recently installed Elasticsearch 5.3 on my Windows 10 machine, and after following the installation process, ES does not respond when I access http://localhost:9200 from the browser or from Chrome's Sense plugin.
However, when I use curl 'http://localhost:9200' on the command line, it returns this:
{
  "name" : "test",
  "cluster_name" : "my-application",
  "cluster_uuid" : "9-wvh6UXTHSKmkdpblqyyA",
  "version" : {
    "number" : "5.3.0",
    "build_hash" : "3adb13b",
    "build_date" : "2017-03-23T03:31:50.652Z",
    "build_snapshot" : false,
    "lucene_version" : "6.4.1"
  },
  "tagline" : "You Know, for Search"
}
which suggests that ES is running and configured correctly(?).
I also tried setting network.host: to 0.0.0.0, 127.0.0.1, and :: in elasticsearch.yml, still with no effect.
Here is the latest log info:
[2017-04-07T18:21:06,481][INFO ][o.e.n.Node ] [test] initializing ...
[2017-04-07T18:21:06,633][INFO ][o.e.e.NodeEnvironment ] [test] using [1] data paths, mounts [[OSDisk (C:)]], net usable_space [129.2gb], net total_space [232.3gb], spins? [unknown], types [NTFS]
[2017-04-07T18:21:06,634][INFO ][o.e.e.NodeEnvironment ] [test] heap size [1.9gb], compressed ordinary object pointers [true]
[2017-04-07T18:21:06,681][INFO ][o.e.n.Node ] [test] node name [test], node ID [Ab5g3zN0S7qNOuP_asF-iQ]
[2017-04-07T18:21:06,682][INFO ][o.e.n.Node ] [test] version[5.3.0], pid[312], build[3adb13b/2017-03-23T03:31:50.652Z], OS[Windows 10/10.0/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_121/25.121-b13]
[2017-04-07T18:21:08,288][INFO ][o.e.p.PluginsService ] [test] loaded module [aggs-matrix-stats]
[2017-04-07T18:21:08,289][INFO ][o.e.p.PluginsService ] [test] loaded module [ingest-common]
[2017-04-07T18:21:08,291][INFO ][o.e.p.PluginsService ] [test] loaded module [lang-expression]
[2017-04-07T18:21:08,291][INFO ][o.e.p.PluginsService ] [test] loaded module [lang-groovy]
[2017-04-07T18:21:08,292][INFO ][o.e.p.PluginsService ] [test] loaded module [lang-mustache]
[2017-04-07T18:21:08,294][INFO ][o.e.p.PluginsService ] [test] loaded module [lang-painless]
[2017-04-07T18:21:08,295][INFO ][o.e.p.PluginsService ] [test] loaded module [percolator]
[2017-04-07T18:21:08,296][INFO ][o.e.p.PluginsService ] [test] loaded module [reindex]
[2017-04-07T18:21:08,296][INFO ][o.e.p.PluginsService ] [test] loaded module [transport-netty3]
[2017-04-07T18:21:08,298][INFO ][o.e.p.PluginsService ] [test] loaded module [transport-netty4]
[2017-04-07T18:21:08,299][INFO ][o.e.p.PluginsService ] [test] loaded plugin [ltr-query]
[2017-04-07T18:21:11,330][INFO ][o.e.n.Node ] [test] initialized
[2017-04-07T18:21:11,330][INFO ][o.e.n.Node ] [test] starting ...
[2017-04-07T18:21:11,612][INFO ][o.e.t.TransportService ] [test] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}, {[::1]:9300}
[2017-04-07T18:21:14,657][INFO ][o.e.c.s.ClusterService ] [test] new_master {test}{Ab5g3zN0S7qNOuP_asF-iQ}{XYHTUZ8AQN6kYrEPLChFNg}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2017-04-07T18:21:14,826][INFO ][o.e.h.n.Netty4HttpServerTransport] [test] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}, {[::1]:9200}
[2017-04-07T18:21:14,826][INFO ][o.e.n.Node ] [test] started
[2017-04-07T18:21:14,982][INFO ][o.e.g.GatewayService ] [test] recovered [1] indices into cluster_state
[2017-04-07T18:21:15,510][INFO ][o.e.c.r.a.AllocationService] [test] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[tmdb][0]] ...]).
Got it! Referring to the Network Settings documentation, I tried running it with elasticsearch.bat -E network.host=_local_, and it worked!
I also changed the configuration file (./config/elasticsearch.yml): just uncomment the network.host line and change it to network.host: _local_. After that, I simply ran elasticsearch.bat directly.
Or, to put it simply, just leave the network.host line commented out.
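In elasticsearch.yml, that amounts to either of these (the commented value is the stock default from the shipped config file):

# option 1: bind to loopback explicitly
network.host: _local_

# option 2: leave the line commented out and keep the default
#network.host: 192.168.0.1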
Hi, I'm running Elasticsearch 1.6.0 and the AWS plugin 2.6.0 on Windows 2008 in Amazon EC2.
I have the AWS plugin set up, and I don't get any exceptions in the logs, but the nodes can't seem to discover each other.
bootstrap.mlockall: true
cluster.name: my-cluster
node.name: "ES MASTER 01"
node.data: false
node.master: true
plugin.mandatory: "cloud-aws"
cloud.aws.access_key: "AK...Z7Q"
cloud.aws.secret_key: "gKW...nAO"
cloud.aws.region: "us-east"
discovery.zen.minimum_master_nodes: 1
discovery.type: "ec2"
discovery.ec2.groups: "Elastic Search"
discovery.ec2.ping_timeout: "30s"
discovery.ec2.availability_zones: "us-east-1a"
discovery.zen.ping.multicast.enabled: false
Logs:
[2015-07-13 15:02:19,346][INFO ][node ] [ES MASTER 01] version[1.6.0], pid[2532], build[cdd3ac4/2015-06-09T13:36:34Z]
[2015-07-13 15:02:19,346][INFO ][node ] [ES MASTER 01] initializing ...
[2015-07-13 15:02:19,378][INFO ][plugins ] [ES MASTER 01] loaded [cloud-aws], sites []
[2015-07-13 15:02:19,440][INFO ][env ] [ES MASTER 01] using [1] data paths, mounts [[(C:)]], net usable_space [6.8gb], net total_space [29.9gb], types [NTFS]
[2015-07-13 15:02:26,461][INFO ][node ] [ES MASTER 01] initialized
[2015-07-13 15:02:26,461][INFO ][node ] [ES MASTER 01] starting ...
[2015-07-13 15:02:26,851][INFO ][transport ] [ES MASTER 01] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/172.30.0.123:9300]}
[2015-07-13 15:02:26,866][INFO ][discovery ] [ES MASTER 01] my-cluster/SwhSDhiDQzq4pM8jkhIuzw
[2015-07-13 15:02:56,884][WARN ][discovery ] [ES MASTER 01] waited for 30s and no initial state was set by the discovery
[2015-07-13 15:02:56,962][INFO ][http ] [ES MASTER 01] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/172.30.0.123:9200]}
[2015-07-13 15:02:56,962][INFO ][node ] [ES MASTER 01] started
[2015-07-13 15:03:13,455][INFO ][cluster.service ] [ES MASTER 01] new_master [ES MASTER 01][SwhSDhiDQzq4pM8jkhIuzw][WIN-3Q4EH3B8H1O][inet[/172.30.0.123:9300]]{data=false, master=true}, reason: zen-disco-join (elected_as_master)
[2015-07-13 15:03:13,517][INFO ][gateway ] [ES MASTER 01] recovered [0] indices into cluster_state
It can work with private IPs, but only if your node instances are allowed to query EC2 instance information in the same VPC, so that they can find the cluster they should join.
You can grant this discovery permission with a policy like the following, applied to your IAM role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "whatever",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "ec2:DescribeAvailabilityZones",
        "ec2:DescribeInstances",
        "ec2:DescribeRegions",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeTags"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
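If you save that document as, say, es-discovery.json (the file name and role name here are placeholders), one way to attach it is as an inline policy on the instance's IAM role via the AWS CLI:

aws iam put-role-policy --role-name my-es-node-role --policy-name es-ec2-discovery --policy-document file://es-discovery.json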
I am trying to run Elasticsearch with Kibana on Windows 2008 R2.
I followed this article step by step: Install-logstash-on-a-windows-server-with-kibana.
But all I get is:
Connection Failed
Possibility #1: Your elasticsearch server is down or unreachable
This can be caused by a network outage, or a failure of the Elasticsearch process. If you have recently run a query that required a terms facet to be executed it is possible the process has run out of memory and stopped. Be sure to check your Elasticsearch logs for any sign of memory pressure.
Possibility #2: You are running Elasticsearch 1.4 or higher
Elasticsearch 1.4 ships with a security setting that prevents Kibana from connecting. You will need to set http.cors.allow-origin in your elasticsearch.yml to the correct protocol, hostname, and port (if not 80) that your access Kibana from. Note that if you are running Kibana in a sub-url, you should exclude the sub-url path and only include the protocol, hostname and port. For example, http://mycompany.com:8080, not http://mycompany.com:8080/kibana.
Click back, or the home button, when you have resolved the connection issue
When I go to
http://XXX.XXX.XXX.XXX:9200/
I get:
{
  "status" : 200,
  "name" : "Benazir Kaur",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "1.4.0",
    "build_hash" : "bc94bd81298f81c656893ab1ddddd30a99356066",
    "build_timestamp" : "2014-11-05T14:26:12Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.2"
  },
  "tagline" : "You Know, for Search"
}
So it seems that Elasticsearch is running, but for some reason Kibana cannot connect to it.
The Elasticsearch log contains an error:
[2014-11-08 13:02:41,474][INFO ][node ] [Virako] version[1.4.0], pid[5556], build[bc94bd8/2014-11-05T14:26:12Z]
[2014-11-08 13:02:41,490][INFO ][node ] [Virako] initializing ...
[2014-11-08 13:02:41,490][INFO ][plugins ] [Virako] loaded [], sites []
[2014-11-08 13:02:46,872][INFO ][node ] [Virako] initialized
[2014-11-08 13:02:46,872][INFO ][node ] [Virako] starting ...
[2014-11-08 13:02:47,402][INFO ][transport ] [Virako] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/192.168.0.14:9300]}
[2014-11-08 13:02:47,558][INFO ][discovery ] [Virako] elasticsearch/XyAjXnofTnG1CXgDoHrNsA
[2014-11-08 13:02:51,412][INFO ][cluster.service ] [Virako] new_master [Virako][XyAjXnofTnG1CXgDoHrNsA][test04][inet[/192.168.0.14:9300]], reason: zen-disco-join (elected_as_master)
[2014-11-08 13:02:51,521][INFO ][gateway ] [Virako] recovered [0] indices into cluster_state
[2014-11-08 13:02:51,552][INFO ][http ] [Virako] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/192.168.0.14:9200]}
[2014-11-08 13:02:51,552][INFO ][node ] [Virako] started
[2014-11-08 13:11:04,781][WARN ][transport.netty ] [Virako] exception caught on transport layer [[id: 0x3984a6b4, /192.168.0.14:58237 => /192.168.0.14:9300]], closing connection
java.io.StreamCorruptedException: invalid internal transport message format, got (47,45,54,20)
at org.elasticsearch.transport.netty.SizeHeaderFrameDecoder.decode(SizeHeaderFrameDecoder.java:47)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:425)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:74)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Any idea what I am doing wrong?
I have faced a similar kind of issue.
If you are using Elasticsearch 1.4 with Kibana 3, add the following parameters to the elasticsearch.yml file:
http.cors.allow-origin: "/.*/"
http.cors.enabled: true
Reference: https://gist.github.com/rmoff/379e6ce46eb128110f38
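One way to verify the setting took effect (assuming Elasticsearch is reachable on localhost:9200, and using a placeholder origin) is to send a request with an Origin header and check the response headers:

curl -i -H "Origin: http://my-kibana-host" http://localhost:9200/
# with CORS enabled and a matching allow-origin, the response should include
# something like: Access-Control-Allow-Origin: http://my-kibana-host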
In my case the problem was caused by the HTTP_PROXY environment variable being set and the proxy being down.
It's not the most obvious cause, and nothing in the error message points you toward it.
Unsetting HTTP_PROXY and restarting Kibana did the trick.
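For anyone else hitting this, clearing the variable for the current session looks like this:

rem Windows (cmd.exe): clear HTTP_PROXY for this session, then restart Kibana
set HTTP_PROXY=

# Linux/macOS (bash): remove the variable from the environment
unset HTTP_PROXY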
I had been using Elasticsearch with MySQL without any problems. My server was recently migrated from MySQL to MariaDB, and now the JDBC river just seems to freeze up on even the most basic syncs. Does anyone know if they are compatible?
Here is a sample river definition:
PUT /_river/my_jdbc_river/_meta
{
  "type" : "jdbc",
  "jdbc" : {
    "url" : "jdbc:mysql://HOST/DATABASE",
    "user" : "username",
    "password" : "password",
    "sql" : "select * from table"
  }
}
It just hangs on the following:
[2014-05-19 16:11:49,080][INFO ][cluster.metadata ] [Wade Wilson] [_river] update_mapping [my_jdbc_river] (dynamic)
[2014-05-19 16:11:49,082][INFO ][org.xbib.elasticsearch.river.jdbc.JDBCRiver] [Wade Wilson] [jdbc][my_jdbc_river] starting JDBC river: URL [jdbc:mysql://HOST/DATABASE], strategy [simple], index/type [jdbc/jdbc]
[2014-05-19 16:11:49,083][INFO ][org.xbib.elasticsearch.river.jdbc.strategy.simple.SimpleRiverMouth] waiting for cluster state YELLOW
[2014-05-19 16:11:49,083][INFO ][org.xbib.elasticsearch.river.jdbc.strategy.simple.SimpleRiverMouth] ... cluster state ok
[2014-05-19 16:11:49,094][INFO ][cluster.metadata ] [Wade Wilson] [_river] update_mapping [my_jdbc_river] (dynamic)
[2014-05-19 16:11:49,113][INFO ][cluster.metadata ] [Wade Wilson] [_river] update_mapping [my_jdbc_river] (dynamic)
Download MariaDB's "connector" (see https://mariadb.com/kb/en/about-the-mariadb-java-client/): get mariadb-java-client-1.1.7.jar from https://downloads.mariadb.org/client-java/1.1/, then move mariadb-java-client-1.1.7.jar into /your_path_to_elasticsearch/plugins/jdbc/ like below:
> [root@SpaceConnection elasticsearch-1.3.4]# ll plugins/jdbc/
> -rw-r--r-- 1 root root 280826 Oct 16 22:03 elasticsearch-river-jdbc-1.3.4.0.jar
> -rw-r--r-- 1 root root 380 Oct 16 22:03 log4j2.xml
> -rw-r--r-- 1 root root 234 Oct 16 22:03 log4j.properties
> -rw-r--r-- 1 root root 230704 Mar 29 2014 mariadb-java-client-1.1.7.jar
Then run bin/elasticsearch:
[2014-10-16 23:34:41,712][INFO ][node ] [Apache Kid] version[1.3.4], pid[15632], build[a70f3cc/2014-09-30T09:07:17Z]
[2014-10-16 23:34:41,712][INFO ][node ] [Apache Kid] initializing ...
[2014-10-16 23:34:41,734][INFO ][plugins ] [Apache Kid] loaded [jdbc-1.3.4.0-e13884c], sites []
OpenJDK Server VM warning: You have loaded library /var/www/html/bibi.baonam/elasticsearch-1.3.4/lib/sigar/libsigar-x86-linux.so which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
[2014-10-16 23:34:45,060][INFO ][node ] [Apache Kid] initialized
[2014-10-16 23:34:45,060][INFO ][node ] [Apache Kid] starting ...
[2014-10-16 23:34:45,195][INFO ][transport ] [Apache Kid] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/118.69.197.136:9300]}
[2014-10-16 23:34:45,225][INFO ][discovery ] [Apache Kid] elasticsearch/rn5hDK2YTCKsC53RKt5MMg
[2014-10-16 23:34:48,244][INFO ][cluster.service ] [Apache Kid] new_master [Apache Kid][rn5hDK2YTCKsC53RKt5MMg][SpaceConnection][inet[/118.69.197.136:9300]], reason: zen-disco-join (elected_as_master)
[2014-10-16 23:34:48,286][INFO ][http ] [Apache Kid] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/118.69.197.136:9200]}
[2014-10-16 23:34:48,288][INFO ][node ] [Apache Kid] started
[2014-10-16 23:34:49,087][INFO ][gateway ] [Apache Kid] recovered [2] indices into cluster_state
Note the third log line: [2014-10-16 23:34:41,734][INFO ][plugins ] [Apache Kid] loaded [jdbc-1.3.4.0-e13884c], sites []
Of course, you must first follow the setup steps at https://github.com/jprante/elasticsearch-river-jdbc#how-to-start-the-jdbc-river.
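With the connector jar in place, the river definition from the question should work unchanged, because MariaDB Connector/J also accepts jdbc:mysql:// URLs. If you prefer to be explicit about the driver, a sketch using the mariadb scheme (HOST/DATABASE are placeholders, as in the question):

PUT /_river/my_jdbc_river/_meta
{
  "type" : "jdbc",
  "jdbc" : {
    "url" : "jdbc:mariadb://HOST/DATABASE",
    "user" : "username",
    "password" : "password",
    "sql" : "select * from table"
  }
}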
I am trying to install my bundle and I get the following error:
org.osgi.framework.BundleException: Unresolved constraint in bundle horizon-util [271]: Unable to resolve 271.0: missing requirement [271.0] package; (&(package=org.apache.cxf.jaxrs.client)(version>=2.7.0)(!(version>=3.0.0)))
Bundle ID: 271
These are my bundles:
karaf#root> osgi:list
START LEVEL 100 , List Threshold: 50
ID State Blueprint Spring Level Name
[ 45] [Active ] [ ] [ ] [ 50] geronimo-j2ee-management_1.1_spec (1.0.1)
[ 46] [Active ] [ ] [ ] [ 50] Commons Collections (3.2.1)
[ 47] [Active ] [ ] [ ] [ 50] Apache ServiceMix :: Bundles :: jasypt (1.9.0.1)
[ 48] [Active ] [ ] [ ] [ 50] geronimo-jms_1.1_spec (1.1.1)
[ 49] [Active ] [ ] [ ] [ 50] Commons Pool (1.6.0)
[ 50] [Active ] [ ] [ ] [ 50] Apache ServiceMix :: Bundles :: xpp3 (1.1.0.4c_5)
[ 51] [Active ] [ ] [ ] [ 50] Apache ServiceMix Bundles: dom4j-1.6.1 (1.6.1.2)
[ 52] [Active ] [ ] [ ] [ 50] Commons Lang (2.6)
[ 53] [Active ] [ ] [ ] [ 50] Apache ServiceMix Bundles: oro-2.0.8 (2.0.8.3)
[ 54] [Active ] [ ] [ ] [ 50] Apache ServiceMix :: Specs :: Stax API 1.0 (1.9.0)
[ 55] [Active ] [ ] [ ] [ 50] Apache ServiceMix Bundles: xstream-1.3 (1.3.0.3)
[ 56] [Active ] [ ] [ ] [ 50] Apache ServiceMix :: Bundles :: jdom (1.1.0.4)
[ 57] [Active ] [ ] [ ] [ 50] Apache ServiceMix :: Bundles :: velocity (1.7.0.4)
[ 58] [Active ] [ ] [ ] [ 50] Apache Aries Transaction Manager (0.3.0)
[ 59] [Active ] [ ] [ ] [ 50] kahadb (5.7.0)
[ 60] [Active ] [ ] [ ] [ 50] activemq-pool (5.7.0)
[ 61] [Active ] [ ] [ ] [ 50] activemq-console (5.7.0)
[ 62] [Active ] [ ] [ ] [ 50] activemq-ra (5.7.0)
[ 63] [Active ] [Created ] [ ] [ 50] activemq-core (5.7.0)
Fragments: 68
[ 64] [Active ] [Created ] [ ] [ 50] activemq-karaf (5.7.0)
[ 65] [Active ] [Created ] [ ] [ 50] Apache XBean :: OSGI Blueprint Namespace Handler (3.11.1)
[ 66] [Active ] [ ] [ ] [ 50] Commons JEXL (2.0.1)
[ 67] [Active ] [ ] [ ] [ 50] Apache ServiceMix :: Specs :: Scripting API 1.0 (1.9.0)
[ 68] [Resolved ] [ ] [ ] [ 50] activemq-blueprint (5.7.0)
Hosts: 63
[ 69] [Active ] [Created ] [ ] [ 50] activemq-broker.xml (0.0.0)
[ 83] [Active ] [ ] [ ] [ 50] Joda-Time (1.6.2)
[ 84] [Active ] [ ] [ ] [ 50] Apache XBean :: Spring (3.11.1)
[ 85] [Active ] [ ] [ ] [ 50] activemq-spring (5.7.0)
[ 99] [Active ] [Created ] [ ] [ 50] camel-karaf-commands (2.10.7)
[ 100] [Active ] [ ] [ ] [ 50] camel-core (2.10.7)
[ 102] [Active ] [ ] [ ] [ 50] camel-spring (2.10.7)
[ 103] [Active ] [Created ] [ ] [ 50] camel-blueprint (2.10.7)
[ 106] [Active ] [ ] [ ] [ 50] camel-jms (2.10.7)
[ 107] [Active ] [ ] [ ] [ 50] activemq-camel (5.7.0)
[ 172] [Active ] [ ] [ ] [ 50] Apache CXF Compatibility Bundle Jar (2.6.9)
[ 173] [Active ] [Created ] [ ] [ 50] camel-cxf (2.10.7)
[ 174] [Active ] [ ] [ ] [ 50] camel-cxf-transport (2.10.7)
[ 181] [Resolved ] [ ] [ ] [ 80] simple-camel-blueprint.xml (0.0.0)
[ 182] [Active ] [ ] [ ] [ 50] camel-stream (2.10.7)
[ 188] [Installed ] [ ] [ ] [ 80] ERP-blueprint.xml (0.0.0)
[ 199] [Active ] [ ] [ ] [ 50] camel-sql (2.10.7)
[ 204] [Installed ] [ ] [ ] [ 80] horizon-core (0.0.1)
[ 206] [Active ] [ ] [ ] [ 50] Data mapper for Jackson JSON processor (1.9.10)
[ 207] [Active ] [ ] [ ] [ 50] Jackson JSON processor (1.9.10)
[ 208] [Active ] [ ] [ ] [ 50] camel-jackson (2.10.7)
[ 209] [Active ] [ ] [ ] [ 50] MongoDB Java Driver (2.11.2.RELEASE)
[ 259] [Installed ] [ ] [ ] [ 80] Spring Data MongoDB Support (1.3.3.RELEASE)
[ 269] [Installed ] [ ] [ ] [ 80] horizon-util (0.0.1)
Do I need to update Apache CXF to version 2.7.0? And how do I do that?
I tried to update the bundle, but it did not work.
Thank you for any pointers.
Your horizon-util bundle imports the package org.apache.cxf.jaxrs.client at version 2.7.0 or higher (but below 3.0.0).
So install an appropriate CXF version to resolve the error.
In my case I installed Apache CXF without Camel, so I will give you the steps for that scenario, but it may well work the same way with Camel.
So, to remove the current version, you first have to remove the feature repository:
feature:repo-list
--> gives you the list of repositories (in my case cxf-2.x.x)
feature:repo-remove cxf-2.x.x
--> removes the repository from Karaf
feature:repo-add mvn:org.apache.cxf.karaf/apache-cxf/2.7.10/xml/features
--> Adds the new version of the CXF repository (in this case 2.7.10)
feature:install cxf-jaxrs
--> Installs the part of CXF you need (or cxf instead of cxf-jaxrs if you need all of it)
feature:list | grep cxf
--> shows the list of features related to CXF. [x] means that they are installed
I hope this will help you.
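One caveat: those are Karaf 3.x command names. On Karaf 2.x, which the osgi:list output in the question suggests, the equivalents should be the following (as far as I recall; adjust the 2.6.9 version to whatever features:listurl actually shows on your system):

features:listurl
features:removeurl mvn:org.apache.cxf.karaf/apache-cxf/2.6.9/xml/features
features:addurl mvn:org.apache.cxf.karaf/apache-cxf/2.7.10/xml/features
features:install cxf-jaxrs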