Hyperledger peer node start error - installation

I am getting the following error while running the command to start the peer node.
Error:
grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 127.0.0.1:5005: getsockopt: connection refused"; Reconnecting to {"127.0.0.1:5005" }
Can anybody help me out?

This happened to me as well, right after setting up a development environment in a Vagrant VM following the instructions from http://hyperledger-fabric.readthedocs.io/en/latest/dev-setup/build/.
The connection to "127.0.0.1:5005" is configured in peer/core.yaml:
# orderer to talk to
orderer: 127.0.0.1:5005
So the peer expects an orderer service listening on that port. The orderer service (https://github.com/hyperledger/fabric/blob/master/orderer/README.md) listens on port 5151 by default, as configured in https://github.com/hyperledger/fabric/blob/master/orderer/orderer.yaml.
Build the orderer with make orderer and start it with orderer. Adjust the port in peer/core.yaml to 5151 (the one the orderer service listens on), rebuild the peer with make peer, and run peer node start. The error message disappears and the peer starts correctly, as the startup log further below shows.
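In shell terms, the whole fix looks roughly like this (a sketch; it assumes you run it from the fabric source tree and that the built binaries end up on your PATH, which depends on your build setup):

# build and start the orderer; it listens on 5151 by default
make orderer
orderer &

# point the peer at the orderer in peer/core.yaml:
#   orderer: 127.0.0.1:5151

# rebuild the peer and start it
make peer
peer node start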
...
09:51:50.430 [chaincode] notify -> DEBU 056 notifying Txid:vscc
09:51:50.430 [chaincode] Launch -> DEBU 057 sending init completed
09:51:50.430 [chaincode] Launch -> DEBU 058 LaunchChaincode complete
09:51:50.430 [sysccapi] RegisterSysCC -> INFO 059 system chaincode %s(%s) registered vscc github.com/hyperledger/fabric/core/system_chaincode/vscc
09:51:50.433 [committer] NewDeliverService -> INFO 05a Creating committer for single noops endorser
09:51:50.437 [nodeCmd] serve -> INFO 05b Starting peer with ID=name:"jdoe" , network ID=dev, address=0.0.0.0:7051, rootnodes=, validator=true
Nil tx from block
Commit success, created a block!

Related

Quarkus gRPC is throwing startup error: Unable to start the gRPC server: java.nio.channels.UnresolvedAddressException

I am trying to start the gRPC server with the property
quarkus.grpc.server.use-separate-server=true
In that case, I am getting the below error during server startup:
2023-01-19 13:12:51,762 WARN [io.qua.grp.run.GrpcServerRecorder] (main) Using legacy gRPC support, with separate new HTTP server instance. Switch to single HTTP server instance usage with quarkus.grpc.server.use-separate-server=false property
2023-01-19 13:12:51,824 INFO [io.qua.grp.run.GrpcServerRecorder] (vert.x-eventloop-thread-0) Registering gRPC reflection service
2023-01-19 13:12:51,934 ERROR [io.qua.grp.run.GrpcServerRecorder] (vert.x-eventloop-thread-0) Unable to start the gRPC server: java.nio.channels.UnresolvedAddressException
at java.base/sun.nio.ch.Net.checkAddress(Net.java:149)
at java.base/sun.nio.ch.Net.checkAddress(Net.java:157)
at java.base/sun.nio.ch.ServerSocketChannelImpl.netBind(ServerSocketChannelImpl.java:330)
at java.base/sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:294)
at io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:141)
But when I start the gRPC server with the property
quarkus.grpc.server.use-separate-server=false
the gRPC server starts, but the client is not able to access the server.
I am getting the below error on the client side:
13:54:28 ERROR line=111 traceId=, parentId=, spanId=, sampled= [qu.ms.of.OfferResource] (executor-thread-0) Exception: UNAVAILABLE: upstream connect error or disconnect/reset before headers. reset reason: connection failure, transport failure reason: delayed connect error: 111: io.grpc.StatusRuntimeException: UNAVAILABLE: upstream connect error or disconnect/reset before headers. reset reason: connection failure, transport failure reason: delayed connect error: 111
at io.grpc.stub.ClientCalls.toStatusRuntimeException(ClientCalls.java:271)
at io.grpc.stub.ClientCalls.getUnchecked(ClientCalls.java:252)
at io.grpc.stub.ClientCalls.blockingUnaryCall(ClientCalls.java:165)
How do we overcome this issue?
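One thing worth checking (an assumption based on Quarkus's documented defaults, not something confirmed in this thread): with use-separate-server=false, gRPC calls are served by the main HTTP server on its port (8080 by default) instead of the dedicated gRPC port 9000, so any client still pointing at 9000 fails exactly like this. A minimal application.properties sketch, where the client name offer is hypothetical:

# server side: serve gRPC over the single HTTP server (port 8080 by default)
quarkus.grpc.server.use-separate-server=false

# client side: point at the HTTP port instead of the old dedicated gRPC port 9000
quarkus.grpc.clients.offer.host=localhost
quarkus.grpc.clients.offer.port=8080

Similarly, the earlier UnresolvedAddressException in separate-server mode is thrown while binding the listen socket, so a quarkus.grpc.server.host value that does not resolve on the machine is worth ruling out.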

Spring Boot application unable to recover after JMS connection failure

We have a Spring Boot application which stops retrying to connect to the Solace queues after 3 connection attempts. We get the below information logged, and then the application just stops responding and we have to restart it:
2021-09-15 16:49:08.021 INFO 4444 --- [recovery-thread] bitronix.tm.recovery.Recoverer : recoverer is already running, abandoning this recovery request
2021-09-15 16:50:04.862 INFO 4444 --- [connect_service] c.s.j.protocol.impl.TcpClientChannel : Connection attempt failed to host '<<hostname>>' ReconnectException com.solacesystems.jcsmp.JCSMPSecurityException: Error performing login to LoginContext (*****) cause: javax.security.auth.login.LoginException: *****
2021-09-15 16:50:07.865 INFO 4444 --- [connect_service] c.s.j.protocol.impl.TcpClientChannel : Connecting to host 'orig=tcp://<<hostname>>:55555, scheme=tcp://, host=<<hostname>>, port=55555' (host 1 of 1, smfclient 2, attempt 3 of 3, this_host_attempt: 1 of 1)
2021-09-15 16:50:07.877 INFO 4444 --- [connect_service] c.s.j.protocol.impl.TcpClientChannel : Connection attempt failed to host '<<hostname>>' ReconnectException com.solacesystems.jcsmp.JCSMPSecurityException: Error performing login to LoginContext (*****) cause: javax.security.auth.login.LoginException: *****
2021-09-15 16:50:10.878 INFO 4444 --- [connect_service] c.s.j.protocol.impl.TcpClientChannel : Stale reconnect task, aborting reconnect.
Below is our configuration for connecting to the Solace queues:
spring.jta.bitronix.connectionfactory.className=com.solacesystems.jms.SolXAConnectionFactoryImpl
spring.jta.bitronix.connectionfactory.driverProperties.host=smf://<<hostname>>:55555
spring.jta.bitronix.connectionfactory.driverProperties.VPN=<<vpn>>
spring.jta.bitronix.connectionfactory.driverProperties.authenticationScheme=AUTHENTICATION_SCHEME_GSS_KRB
spring.jta.bitronix.connectionfactory.driverProperties.KRBServiceName=HOST
In our service class we are just autowiring the JmsTemplate object and publishing messages on the queue.
I went through a few documents and tried adding the below configuration:
spring.jta.bitronix.connectionfactory.ignore-recovery-failures=true
But I am still facing the same issue. Any suggestions?
==== Edit
I face this issue only when I put my laptop in airplane mode and reconnect. If I just disconnect from the VPN and connect back, the Solace connection is reestablished.
The SolXAConnectionFactory interface allows you to tune the connect and reconnect parameters. Docs here.
You'll want to check out these and maybe a few others (see the sketch after the list). I suggest searching the javadoc for "retry" and "retries":
connectRetries
connectRetriesPerHost
connectTimeoutInMillis
reconnectRetries
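For programmatic tuning, those same names map to setters on the connection factory. A minimal sketch, assuming the standard Solace JMS API's SolJmsUtility helper (the Bitronix setup above passes the equivalent values as driverProperties instead):

import com.solacesystems.jms.SolConnectionFactory;
import com.solacesystems.jms.SolJmsUtility;

public class SolaceRetryConfig {
    public static SolConnectionFactory createFactory() throws Exception {
        SolConnectionFactory cf = SolJmsUtility.createConnectionFactory();
        cf.setHost("smf://<<hostname>>:55555"); // placeholder host, as in the question
        cf.setVPN("<<vpn>>");                   // placeholder VPN name
        cf.setConnectRetries(-1);               // -1 = keep retrying the initial connect
        cf.setReconnectRetries(-1);             // -1 = keep retrying after a lost connection
        return cf;
    }
}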
I did more research and found the following helpful; I will try it in my application: https://solace.community/discussion/917/why-won-t-my-solace-enterprise-application-reconnect-after-an-ha-failover To set it via JNDI, I think this should also be configured in SolAdmin -> JMS Administration -> connection factory -> Transport Properties.
After going through the various documents and doing some trial and error, the properties below turned out to be useful. Hope they can help somebody:
spring.jta.bitronix.connectionfactory.driverProperties.reconnectRetries = -1
spring.jta.bitronix.connectionfactory.driverProperties.connectRetries = -1

MongoDB shell 4.0.3 Windows cannot connect to MongoDB replica set: SSLHandshakeFailed: QueryContextAttributes for connection info failed

In Windows 10, I'm not able to connect to MongoDB server with the following errors:
>mongo "mongodb+srv://xxx-dsvlb.mongodb.net/test" --username xxx --verbose
2018-11-01T11:34:19.273+0700 D - [main] User Assertion: DNSHostNotFound: Failed to look up service "":This operation returned because the timeout period expired. C:\data\mci\6411135b04f345f6d01072b56250cba6\src\src\mongo/util/dns_query_windows-impl.h 254
MongoDB shell version v4.0.3
2018-11-01T11:34:30.535+0700 D - [main] User Assertion: DNSHostNotFound: Failed to look up service "":This operation returned because the timeout period expired. C:\data\mci\6411135b04f345f6d01072b56250cba6\src\src\mongo/util/dns_query_windows-impl.h 254
Enter password:
connecting to: mongodb+srv://xxx-dsvlb.mongodb.net/test
2018-11-01T11:35:16.589+0700 D - [js] User Assertion: DNSHostNotFound: Failed to look up service "":This operation returned because the timeout period expired. C:\data\mci\6411135b04f345f6d01072b56250cba6\src\src\mongo/util/dns_query_windows-impl.h 254
2018-11-01T11:35:16.590+0700 D NETWORK [js] creating new connection to:xxx-shard-00-02-dsvlb.mongodb.net.:27017
2018-11-01T11:35:17.356+0700 D - [js] User Assertion: SSLHandshakeFailed: QueryContextAttributes for connection info failed with-2146893055 C:\data\mci\6411135b04f345f6d01072b56250cba6\src\src\mongo/transport/session_asio.h 240
2018-11-01T11:35:17.357+0700 D NETWORK [js] creating new connection to:xxx-shard-00-01-dsvlb.mongodb.net.:27017
2018-11-01T11:35:18.197+0700 D - [js] User Assertion: SSLHandshakeFailed: QueryContextAttributes for connection info failed with-2146893055 C:\data\mci\6411135b04f345f6d01072b56250cba6\src\src\mongo/transport/session_asio.h 240
2018-11-01T11:35:18.198+0700 D NETWORK [js] creating new connection to:xx-shard-00-00-dsvlb.mongodb.net.:27017
2018-11-01T11:35:19.017+0700 D - [js] User Assertion: SSLHandshakeFailed: QueryContextAttributes for connection info failed with-2146893055 C:\data\mci\6411135b04f345f6d01072b56250cba6\src\src\mongo/transport/session_asio.h 240
2018-11-01T11:35:19.018+0700 D - [js] User Assertion: InternalError: couldn't connect to server lakon-shard-00-00-dsvlb.mongodb.net.:27017, connection attempt failed: SSLHandshakeFailed: QueryContextAttributes for connection info failed with-2146893055 src\mongo\scripting\mozjs\mongo.cpp 756
2018-11-01T11:35:19.021+0700 E QUERY [js] Error: couldn't connect to server lakon-shard-00-00-dsvlb.mongodb.net.:27017, connection attempt failed: SSLHandshakeFailed: QueryContextAttributes for connection info failed with-2146893055 :
connect#src/mongo/shell/mongo.js:257:13
#(connect):1:6
2018-11-01T11:35:19.024+0700 D - [js] User Assertion: Location12513: connect failed src\mongo\shell\shell_utils.cpp 343
2018-11-01T11:35:19.024+0700 I QUERY [js] MozJS GC prologue heap stats - total: 4056565 limit: 0
2018-11-01T11:35:19.027+0700 I QUERY [js] MozJS GC epilogue heap stats - total: 421536 limit: 0
2018-11-01T11:35:19.027+0700 I QUERY [js] MozJS GC prologue heap stats - total: 313504 limit: 0
2018-11-01T11:35:19.028+0700 I QUERY [js] MozJS GC epilogue heap stats - total: 131244 limit: 0
2018-11-01T11:35:19.029+0700 D - [main] User Assertion: Location12513: connect failed src\mongo\scripting\mozjs\proxyscope.cpp 300
exception: connect failed
Using MongoDB shell 3.6.2 on Windows 10, I still cannot connect but with a different error (confusing, isn't it?):
>mongo "mongodb+srv://xxx-dsvlb.mongodb.net/test" --username xxx --password xxx
MongoDB shell version v3.6.2
connecting to: mongodb+srv://xxx-dsvlb.mongodb.net/test
MongoDB server version: 3.6.8
2018-11-01T11:01:52.923+0700 E QUERY [thread1] Error: Authentication failed. :
DB.prototype._authOrThrow#src/mongo/shell/db.js:1608:20
#(auth):6:1
#(auth):1:2
exception: login failed
However, with Ubuntu 16.04 I can connect just fine to the exact same server:
⟫ mongo "mongodb+srv://xxx-dsvlb.mongodb.net/test" --username xxx --password xxx
MongoDB shell version v4.0.3
connecting to: mongodb+srv://xxx-dsvlb.mongodb.net/test
2018-11-01T04:27:02.536+0000 I NETWORK [js] Starting new replica set monitor for lakon-shard-0/xxx-shard-00-02-dsvlb.mongodb.net.:27017,xxx-shard-00-00-dsvlb.mongodb.net.:27017,xxx-shard-00-01-dsvlb.mongodb.net.:27017
2018-11-01T04:27:02.561+0000 I NETWORK [ReplicaSetMonitor-TaskExecutor] Successfully connected to xxx-shard-00-02-dsvlb.mongodb.net.:27017 (1 connections now open to xxx-shard-00-02-dsvlb.mongodb.net.:27017 with a 5 second timeout)
2018-11-01T04:27:02.562+0000 I NETWORK [js] Successfully connected to xxx-shard-00-00-dsvlb.mongodb.net.:27017 (1 connections now open to xxx-shard-00-00-dsvlb.mongodb.net.:27017 with a 5 second timeout)
2018-11-01T04:27:02.563+0000 I NETWORK [js] changing hosts to xxx-shard-0/xxx-shard-00-00-dsvlb.mongodb.net:27017,xxx-shard-00-01-dsvlb.mongodb.net:27017,lakon-shard-00-02-dsvlb.mongodb.net:27017 from xxx-shard-0/xxx-shard-00-00-dsvlb.mongodb.net.:27017,xxx-shard-00-01-dsvlb.mongodb.net.:27017,xxx-shard-00-02-dsvlb.mongodb.net.:27017
2018-11-01T04:27:02.570+0000 I NETWORK [ReplicaSetMonitor-TaskExecutor] Successfully connected to xxx-shard-00-00-dsvlb.mongodb.net:27017 (1 connections now open to xxx-shard-00-00-dsvlb.mongodb.net:27017 with a 5 second timeout)
2018-11-01T04:27:02.573+0000 I NETWORK [js] Successfully connected to xxx-shard-00-02-dsvlb.mongodb.net:27017 (1 connections now open to xxx-shard-00-02-dsvlb.mongodb.net:27017 with a 5 second timeout)
Implicit session: session { "id" : UUID("4a6488c7-7a22-44d4-977e-07eb09ef37f6") }
MongoDB server version: 3.6.8
WARNING: shell and server versions do not match
2018-11-01T04:27:02.588+0000 I NETWORK [ReplicaSetMonitor-TaskExecutor] Successfully connected to xxx-shard-00-01-dsvlb.mongodb.net:27017 (1 connections now open to xxx-shard-00-01-dsvlb.mongodb.net:27017 with a 5 second timeout)
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
http://docs.mongodb.org/
Questions? Try the support group
http://groups.google.com/group/mongodb-user
MongoDB Enterprise xxx-shard-0:PRIMARY>
A MongoDB Atlas support staff member told me this is due to a network problem on my side, but I'm sure that is not the root cause, because I can connect to the server from other clients such as Robo 3T on the same Windows 10 computer.
This issue happens ONLY when using the MongoDB shell (both 3.6.2 and 4.0.3) on Windows 10.
Is it perhaps a bug in the MongoDB shell's Windows implementation?
It's a bit late, but here goes: I had this problem when my shell version was 4.0.5, and installing 4.2.11 solved it. I had tried many things with different connection string syntaxes, none of which helped; I stayed stuck at the SSLHandshake error. Since my shell was 4.0.5 and the remote server was 4.2.11, I guessed that matching versions might help and went ahead with the new installation (though I still think it's not really a version problem; I just don't know what is). The problem only happened in the shell: connecting from clients like NoSQLBooster or Spring seemed to work fine. My Robo 3T also had trouble connecting, but only intermittently, sometimes succeeding at once and sometimes only after multiple retries.
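Also worth noting: the verbose log above shows the SRV lookup itself failing (DNSHostNotFound) before the TLS handshake errors appear. A workaround sometimes used on Windows when mongodb+srv:// lookups misbehave is the legacy non-SRV connection string, which lists the hosts explicitly and skips the SRV DNS query. A sketch using the redacted hostnames from the log (the replica set name xxx-shard-0 is inferred from it):

mongo "mongodb://xxx-shard-00-00-dsvlb.mongodb.net:27017,xxx-shard-00-01-dsvlb.mongodb.net:27017,xxx-shard-00-02-dsvlb.mongodb.net:27017/test?ssl=true&replicaSet=xxx-shard-0&authSource=admin" --username xxx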

JMS ActiveMQ Spring Boot FailoverTransport

I am trying to connect to a remote broker URL in ActiveMQ (ActiveMQ is installed in a Unix VM).
I am able to connect from the browser on my laptop.
While running the Spring Boot application I am getting this error:
--- [ActiveMQ Task-1] o.a.a.t.failover.FailoverTransport : Failed to connect to [tcp://http://199.247.18.11:61616] after: 8 attempt(s) continuing to retry.
What could be the issue?
Please remove the http:// from your connection string. Port 61616 is expecting JMS connections.
Your connection string should be tcp://199.247.18.11:61616 or something similar. There is a REST API that (I think) goes through the built-in HTTP server, but it's not going to listen on 61616 and it's going to have a much longer URL. Something like
http://admin:admin@localhost:8161/api/message?destination=queue://myqueue
Still an issue. My yml file:
activemq:
  broker-url: failover:(tcp://http://199.247.18.11:61616)?initialReconnectDelay=1000&maxReconnectDelay=60000&warnAfterReconnectAttempts=2
Error:
2018-05-01 07:41:51.312 WARN 6560 --- [ActiveMQ Task-1] o.a.a.t.failover.FailoverTransport : Failed to connect to [tcp://http://199.247.18.11:61616] after: 2 attempt(s) continuing to retry.
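Note that the yml above still has http:// embedded inside the failover URL, which is exactly what the answer said to remove. A corrected entry would look like this (mirroring the structure shown above; whether the activemq block nests under spring: depends on the application's config layout):

activemq:
  broker-url: failover:(tcp://199.247.18.11:61616)?initialReconnectDelay=1000&maxReconnectDelay=60000&warnAfterReconnectAttempts=2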

Storm - Supervisors launched but not connecting to Nimbus

I have a Storm cluster with 1 Nimbus, 4 Supervisors and 2 Zookeeper nodes. My storm.yaml is as follows:
storm.zookeeper.servers:
  - "storage14"
  - "storage15"
nimbus.seeds: ["storage01"]
#storm.local.hostname: "storage05"
supervisor.supervisors:
  - "storage02"
  - "storage03"
  - "storage04"
  - "storage05"
storm.local.dir: "/tmp/storm"
worker.childopts: "-Xmx%HEAP-MEM%m -XX:+PrintGCDetails -Xloggc:artifacts/gc.log -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=artifacts/heapdump"
This storm.yaml file is used by both Nimbus and the Supervisors. When Nimbus is started, I have storm.local.hostname commented out, as shown above.
However, when starting the Supervisors on their respective nodes, I uncomment storm.local.hostname and set it to the hostname of the node on which the supervisor is being launched. For instance, if I were launching the supervisor on storage05, the storm.yaml file would have the following additional config param:
storm.local.hostname: "storage05"
The problem is that even though Nimbus launches successfully and I can see it in the Storm UI, some supervisors do not seem to be able to connect to it. For instance, of the 4 nodes I start supervisors on, the Storm UI often shows only 2 of them connected. However, if I ssh into these nodes and run jps, I can see that the supervisor process is running on ALL of them.
The supervisors that do end up connecting are not always the same ones, so it is definitely not a problem with specific nodes.
Another thing to note: if I try to execute a topology on whichever nodes did get connected, it does not get registered by the cluster and I cannot see it on the UI either.
What do you think might be causing this erratic behavior?
UPDATE:
Tail end of nimbus.log has the following lines
2017-01-25 00:04:25.216 o.a.s.s.o.a.z.ClientCnxn [WARN] Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:701)
at org.apache.storm.shade.org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at org.apache.storm.shade.org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
2017-01-25 00:04:25.317 o.a.s.s.o.a.z.ClientCnxn [INFO] Opening socket connection to server storage15/192.168.140.195:2181. Will not attempt to authenticate using SASL (unknown error)
2017-01-25 00:04:25.317 o.a.s.s.o.a.z.ClientCnxn [WARN] Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:701)
at org.apache.storm.shade.org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at org.apache.storm.shade.org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
2017-01-25 00:04:25.686 o.a.s.s.o.a.z.ClientCnxn [INFO] Opening socket connection to server storage15/192.168.140.195:2181. Will not attempt to authenticate using SASL (unknown error)
2017-01-25 00:04:25.686 o.a.s.s.o.a.z.ClientCnxn [WARN] Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:701)
at org.apache.storm.shade.org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at org.apache.storm.shade.org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
2017-01-25 00:04:25.787 o.a.s.s.o.a.z.ClientCnxn [INFO] Opening socket connection to server storage14/192.168.140.194:2181. Will not attempt to authenticate using SASL (unknown error)
2017-01-25 00:04:25.787 o.a.s.s.o.a.z.ClientCnxn [WARN] Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:701)
at org.apache.storm.shade.org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at org.apache.storm.shade.org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
Your UPDATE (nimbus log) indicates that your Nimbus cannot connect to the Zookeeper cluster. Please check that the Zookeeper cluster (storage14/storage15) is accessible from storage01: not just that the node is reachable, but also that you can telnet to the Zookeeper client port ("telnet storage14 2181", and likewise for storage15), as sketched below.
When the ZK connectivity issue is gone, please try starting the supervisors again.
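A quick way to run those checks from storage01 (a sketch; nc is an alternative if telnet is not installed, and "ruok" is ZooKeeper's built-in four-letter health probe):

# verify each ZooKeeper node is reachable on the client port
telnet storage14 2181
telnet storage15 2181

# or ask ZooKeeper directly; a healthy server answers "imok"
echo ruok | nc storage14 2181
echo ruok | nc storage15 2181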
