OpenJMS: run basic example - jms

I am trying to set up OpenJMS on my machine and run the basic example from the command line. However, I am not able to figure out how to do it.
This is what I have done so far:
Run OpenJMS
➜ bin ./startup.sh
Using OPENJMS_HOME: /Users/gaurang.shah/Documents/personal/jms/openjms-0.7.7-beta-1
Using JAVA_HOME: /Library/Java/JavaVirtualMachines/jdk1.8.0_161.jdk/Contents/Home
OpenJMS 0.7.7-beta-1
The OpenJMS Group. (C) 1999-2007. All rights reserved.
http://openjms.sourceforge.net
11:46:59.353 INFO [main] - Server accepting connections on tcp://192.168.2.12:3035/
11:46:59.355 INFO [main] - JNDI service accepting connections on tcp://192.168.2.12:3035/
11:46:59.356 INFO [main] - Admin service accepting connections on tcp://192.168.2.12:3035/
11:46:59.453 INFO [main] - Server accepting connections on rmi://192.168.2.12:1099/
11:46:59.453 INFO [main] - JNDI service accepting connections on rmi://192.168.2.12:1099/
11:46:59.454 INFO [main] - Admin service accepting connections on rmi://192.168.2.12:1099/
Start the Sender
➜ basic ./run.sh Sender new_topic 1
Using OPENJMS_HOME: /Users/gaurang.shah/Documents/personal/jms/openjms-0.7.7-beta-1
Using JAVA_HOME: /Library/Java/JavaVirtualMachines/jdk1.8.0_161.jdk/Contents/Home
Using CLASSPATH: ./:/Users/gaurang.shah/Documents/personal/jms/openjms-0.7.7-beta-1/lib/openjms-0.7.7-beta-1.jar
hello
Start the Receiver
➜ basic ./run.sh Receiver new_topic
Using OPENJMS_HOME: /Users/gaurang.shah/Documents/personal/jms/openjms-0.7.7-beta-1
Using JAVA_HOME: /Library/Java/JavaVirtualMachines/jdk1.8.0_161.jdk/Contents/Home
Using CLASSPATH: ./:/Users/gaurang.shah/Documents/personal/jms/openjms-0.7.7-beta-1/lib/openjms-0.7.7-beta-1.jar
However, I am not able to get anything on the receiver side.

In JMS if a message is sent to a topic then all the subscribers on that topic receive the message. If there are no subscribers on the topic then any message sent to the topic is discarded (i.e. the message is not stored). This is basic publish-subscribe semantics.
Therefore, if you send the message before you start your receiver/subscriber then it won't receive the message.
Start the receiver before sending the message and it should receive it.
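The semantics above can be sketched in a few lines of Python (an illustration of non-durable topic behavior, not the actual JMS API; the Topic class here is hypothetical):

```python
class Topic:
    """Minimal model of a non-durable pub/sub topic: a published
    message is delivered only to currently registered subscribers;
    with no subscribers it is simply discarded (never stored)."""

    def __init__(self):
        self.subscribers = []

    def subscribe(self):
        inbox = []                      # each subscriber gets its own inbox
        self.subscribers.append(inbox)
        return inbox

    def publish(self, message):
        for inbox in self.subscribers:  # deliver to current subscribers only
            inbox.append(message)

topic = Topic()
topic.publish("hello")   # no subscribers yet: the message is discarded
sub = topic.subscribe()  # subscribing after the send receives nothing
print(sub)               # -> []
topic.publish("hello")   # subscriber already registered: delivered
print(sub)               # -> ['hello']
```

This is exactly why the Receiver must be started before the Sender in the example above (unless you use a durable subscription, which JMS also supports).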


Running IBM MQ dmpmqcfg results in 'libmqds_r.dylib' error

I am going to answer this myself as a FAQ.
This has been seen on macOS, but it applies to all MQ client-only installations. When running dmpmqcfg in a terminal you see the error:
AMQ8670E: Loading of server module 'libmqds_r.dylib' failed.
The dmpmqcfg command dumps the configuration of IBM MQ queue managers; it needs to connect to a queue manager to do this. dmpmqcfg can run in either bindings mode or client mode. The error occurs when dmpmqcfg is run in bindings mode on a client-only install and cannot find the .dylib file it requests: libmqds_r.dylib is used to make a bindings connection, which only works when dmpmqcfg is co-located with the queue manager on the MQ server.
If you see this error on a client machine, then you need to run dmpmqcfg in client mode:
dmpmqcfg -c
and provide the queue manager connection information on the command line, through a CCDT, or in mqclient.ini.
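As a sketch of what that looks like in mqclient.ini (the channel name, hostname, and port below are placeholders, not values from this setup):

```ini
; mqclient.ini -- CHANNELS stanza pointing the client at the
; queue manager's listener (placeholder channel/host/port)
CHANNELS:
   ServerConnectionParms=SYSTEM.DEF.SVRCONN/TCP/mq.example.com(1414)
```

With this in place (or the equivalent MQSERVER environment variable), running dmpmqcfg -m QMGR -c default should connect in client mode, where QMGR is a placeholder for your queue manager's name.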

Develop a new Opendaylight application which includes nc-mount

I'm new to developing OpenDaylight (ODL) applications. I'm planning to develop an application that interacts with NETCONF devices, so I expect to use nc-mount. However, I can't develop the application yet because of a few problems.
This is what I've tried so far:
I tried the tutorial. I made the example application following this, but I didn't know how to install nc-mount into the startup-archetype.
I tried this tutorial again after Neon was released, but the build failed.
I suspect there is currently some trouble with the repository.
To understand the behavior of nc-mount, I looked at the netconf repository. I checked out release/fluorine-sr2, the build succeeded, and I confirmed that netconf-connector-all exists. However, the Netconf testtool did not work correctly, so I could not confirm the behavior of nc-mount.
Also, I don't know how to import my own application into the ODL controller, even though I've read this document.
My questions are the following:
About application development:
Do you know of a recommended way to develop an application that includes nc-mount?
Or, if you know of the proper documents, please let me know.
About the Netconf testtool:
Have you had the same experience when using the Netconf testtool?
The build succeeded, but the tool probably did not work correctly.
If you have a solution to this problem, please let me know.
The Netconf testtool startup logs and SSH connection logs are as follows:
$ java -jar netconf-testtool-1.7.0-SNAPSHOT-executable.jar &
[1] 13108
15:22:07.155 [main] INFO o.o.n.t.tool.NetconfDeviceSimulator - Starting 1, SSH simulated devices starting on port 17830
15:22:07.199 [main] INFO o.o.n.t.tool.NetconfDeviceSimulator - Custom module loading skipped.
15:22:08.254 [main] INFO o.o.n.t.tool.NetconfDeviceSimulator - using OperationsProvider.
15:22:08.543 [main] INFO o.a.s.c.u.s.b.BouncyCastleSecurityProviderRegistrar - getOrCreateProvider(BC) created instance of org.bouncycastle.jce.provider.BouncyCastleProvider
15:22:08.684 [main] WARN io.netty.bootstrap.ServerBootstrap - Unknown channel option 'SO_BACKLOG' for channel '[id: 0x10ab3fa2]'
15:22:08.875 [main] INFO o.o.n.t.tool.NetconfDeviceSimulator - All simulated devices started successfully from port 17830 to 17830
$ ssh admin@localhost -p 17830 -s netconf
15:22:30.832 [sshd-netconf-ssh-server-nio-group-thread-1] WARN o.a.s.s.session.ServerSessionImpl - exceptionCaught(ServerSessionImpl[null@/0:0:0:0:0:0:0:1:48026])[state=Opened] SshException: sendKexInit() no resolved signatures available
15:22:30.835 [sshd-netconf-ssh-server-nio-group-thread-1] INFO o.a.s.s.session.ServerSessionImpl - Disconnecting(ServerSessionImpl[null@/0:0:0:0:0:0:0:1:48026]): SSH2_DISCONNECT_HOST_KEY_NOT_VERIFIABLE - sendKexInit() no resolved signatures available
Received disconnect from ::1 port 17830:9: sendKexInit() no resolved signatures available
Disconnected from ::1 port 17830
If you need more information to answer my questions, please let me know.
I really want to use OpenDaylight, but it is difficult to develop my own ODL apps, and I'm confused because there are so many documents... However, I'll keep working at ODL app development.
Any help would be greatly appreciated.
For your second question about netconf test tool, you can check this recent thread: https://lists.opendaylight.org/pipermail/netconf-dev/2019-April/002116.html

Cannot produce events to Confluent Kafka deployed on AWS EC2 from local machine

I'm trying to connect from an external client (my laptop) to a broker in a Kafka cluster that I have running on EC2 machines. When I try to connect from my local machine I get the following error:
$ ./kafka-console-producer --broker-list AWS.PRIV.ATE.IP:9092 --topic test
>hi
>[2018-09-20 13:28:53,952] ERROR Error when sending message to topic test with key: null, value: 2 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for test-0: 1519 ms has passed since batch creation plus linger time
The topic exists, because if I run (from my local machine):
$ ./kafka-topics --list --zookeeper AWS.PRIV.ATE.IP:2181
__confluent.support.metrics
__consumer_offsets
_schemas
connect-configs
connect-offsets
connect-status
test
The cluster configuration is from Confluent's AWS quickstart template: https://github.com/aws-quickstart/quickstart-confluent-kafka/blob/master/templates/confluent-kafka.template and I'm running the open source version.
The three broker EC2 instances are visible to my local machine, which I verified by stopping the Kafka broker, starting a simple HTTP server on port 9092, and successfully curling that server using the internal IP address of the EC2 instance.
If I ssh into one of the broker instances I can successfully produce and consume messages across the cluster. The only update I've made to the out-of-the-box configuration provided by the template is changing listeners=PLAINTEXT://ec2-AWS-PUB-LIC-IP.compute-1.amazonaws.com:9092 in server.properties on each machine and then restarting the Kafka server.
I can provide more configuration or debugging info if necessary. I believe the issue is something regarding IP address discoverability/visibility, but I'm not entirely sure what.
You need to set advertised.listeners too.
See https://rmoff.net/2018/08/02/kafka-listeners-explained/ for details.
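As a sketch (reusing the placeholder hostname from the question), server.properties on each broker would look something like:

```properties
# Bind on all interfaces so both internal and external clients can reach the broker
listeners=PLAINTEXT://0.0.0.0:9092
# The address the broker returns to clients in metadata responses;
# it must be resolvable and reachable from the external client
advertised.listeners=PLAINTEXT://ec2-AWS-PUB-LIC-IP.compute-1.amazonaws.com:9092
```

The producer bootstraps from --broker-list, then reconnects to whatever advertised.listeners returns for each partition leader, which is why the initial connection can succeed while the send still times out.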

How to change the "kafka connect" component port?

On port 8083 I am running InfluxDB, for which I even get the GUI at http://localhost:8083
Now to Kafka. Here I am following the setup at https://kafka.apache.org/quickstart
Starting ZooKeeper, which is in the folder /opt/zookeeper-3.4.10, with the command: bin/zkServer.sh start
ZooKeeper is started; now starting Kafka under the /opt/kafka_2.11-1.1.0 folder as:
bin/kafka-server-start.sh config/server.properties
Create a topic named "test" with a single partition and only one replica:
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
The topic is created and can be checked with the command:
bin/kafka-topics.sh --list --zookeeper localhost:2181
Up to this point everything is fine.
Now I need to use the Kafka Connect component to import/export data.
So I am creating some seed data as: echo -e "foo\nbar" > test.txt
Now using the connector configuration for Kafka Connect to work:
bin/connect-standalone.sh config/connect-standalone.properties config/connect-file-source.properties config/connect-file-sink.properties
After running the above command I get: Address already in use
Kafka connect has stopped
I even changed rest.port=8084 in /opt/kafka_2.11-1.1.0/config/connect-distributed.properties so that it doesn't conflict with InfluxDB, which is already on 8083. Still I get the same Address already in use,
Kafka connect has stopped, as shown in the screenshots.
Since you're using Kafka Connect in Standalone mode, you need to change the REST port in config/connect-standalone.properties:
rest.port=18083
To understand more about Standalone vs Distributed you can read the doc here.
Kafka Connect standalone mode uses port 8083 for its REST API by default. Because of this, if something else is already using that port, the process will throw a BindException.
To change the port, open the config/connect-standalone.properties file in the Kafka root directory.
Add the following key-value property to change the port used for the REST API. (Kafka should have included this key in the properties file by default; without it, many developers go nuts trying to find the port mapping used in standalone mode.) Use whatever free port you wish.
rest.port=11133
Kafka 3.0.0
Since Kafka Connect is intended to be run as a service, it also provides a REST API for managing connectors. The REST API server can be configured using the listeners configuration option. This field should contain a list of listeners in the following format: protocol://host:port,protocol2://host2:port2. Currently supported protocols are http and https.
For example: listeners= http://localhost:8080,https://localhost:8443
By default, if no listeners are specified, the REST server runs on port 8083 using the HTTP protocol.
More details: https://kafka.apache.org/documentation/#connect_rest
Change the port definition in config/server.properties:
# The port the socket server listens on
port=9092

Kafka container timeout

I have deployed a Hyperledger Fabric Kafka-based ordering service using Ansible on AWS. Everything was working fine for me until yesterday. Today when I launch a network, the Kafka container is unable to communicate with ZooKeeper. Here are the docker logs of the Kafka containers:
[2017-11-16 08:23:36,075] FATAL Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.I0Itec.zkclient.exception.ZkTimeoutException: Unable to connect to zookeeper server within timeout: 6000
at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:1223)
at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:155)
at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:129)
at kafka.utils.ZkUtils$.createZkClientAndConnection(ZkUtils.scala:89)
at kafka.utils.ZkUtils$.apply(ZkUtils.scala:71)
at kafka.server.KafkaServer.initZk(KafkaServer.scala:278)
at kafka.server.KafkaServer.startup(KafkaServer.scala:168)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:37)
at kafka.Kafka$.main(Kafka.scala:67)
at kafka.Kafka.main(Kafka.scala)
[2017-11-16 08:23:36,077] INFO shutting down (kafka.server.KafkaServer)
[2017-11-16 08:23:36,080] INFO shut down completed (kafka.server.KafkaServer)
[2017-11-16 08:23:36,081] FATAL Fatal error during KafkaServerStartable startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
org.I0Itec.zkclient.exception.ZkTimeoutException: Unable to connect to zookeeper server within timeout: 6000
at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:1223)
at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:155)
at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:129)
at kafka.utils.ZkUtils$.createZkClientAndConnection(ZkUtils.scala:89)
at kafka.utils.ZkUtils$.apply(ZkUtils.scala:71)
at kafka.server.KafkaServer.initZk(KafkaServer.scala:278)
at kafka.server.KafkaServer.startup(KafkaServer.scala:168)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:37)
at kafka.Kafka$.main(Kafka.scala:67)
at kafka.Kafka.main(Kafka.scala)
[2017-11-16 08:23:36,082] INFO shutting down (kafka.server.KafkaServer)
I haven't changed any code or anything else, which is why I am unable to figure out what causes the problem. Any trick to solve this issue?
Finally fixed the issue. It was due to an iptables setting that blocked ICMP packets from being forwarded from the flannel interface to the docker interface, so the Docker containers couldn't communicate with each other. After adding iptables rules, everything works fine for me.
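For reference, rules along these lines allow forwarding between the two interfaces (a sketch only; the interface names flannel.1 and docker0 are assumptions about this particular setup):

```shell
# Allow traffic to be forwarded between the flannel overlay and the Docker bridge
iptables -A FORWARD -i flannel.1 -o docker0 -j ACCEPT
iptables -A FORWARD -i docker0 -o flannel.1 -j ACCEPT
```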
