Wildfly clustering with VirtualBox

I am using VirtualBox on a Windows 7 host with two Debian 7.7 guests, deb1 and deb2. Each guest can communicate with the other: from one guest's browser I can see the welcome page of the Wildfly instance running on the other guest. I run each instance in standalone-ha mode, the network interfaces have multicast enabled, and on the Wildfly node named srv1 I can see that the two instances form a cluster:
...
...ISPN000094: Received new cluster view: [srv2/web|3] (2) [srv2/web, srv1/web]
where srv1 and srv2 are the node names of the instances. A tcpdump shows UDP packets crossing the multicast address 230.0.0.4, exactly where JGroups is listening. Despite all this goodness, the HTTP session is not shared; this is my problem.
The application I use is very simple and marked <distributable/>; I have already used it successfully in a multiple-nodes-on-a-single-host scenario.
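For reference, each instance is started roughly like this; a sketch assuming the standard WildFly launch script (the node name, bind address and multicast address below are illustrative):

# on deb1 (repeat on deb2 with srv2 and that guest's own address)
./standalone.sh -c standalone-ha.xml \
  -Djboss.node.name=srv1 \
  -b 192.168.56.101 \
  -u 230.0.0.4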
UPDATE: I made some tests using jgroups's test application McastReceiverTest and McastSenderTest with the following addresses: 230.0.0.4:45688, 230.0.0.4:45700 and 224.0.1.105:23364. Every test worked, on the receiver guest I can read what I sent by the sender guest. I tried to change my application too, I use this one https://github.com/liweinan/cluster-demo but http session is not shared.
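For anyone repeating these multicast tests, they were run along these lines (the jar path is illustrative):

# receiver on deb2
java -cp jgroups.jar org.jgroups.tests.McastReceiverTest -mcast_addr 230.0.0.4 -port 45688
# sender on deb1; whatever is typed here should appear on the receiver
java -cp jgroups.jar org.jgroups.tests.McastSenderTest -mcast_addr 230.0.0.4 -port 45688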

Wildfly works well; I was looking at the problem as if I were still running multiple instances on a single host. As the JBoss forum suggests, I tried with curl, retrieving my JSESSIONID, and I saw the cluster responding as expected. Happy ending.
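For anyone else testing this, the curl check looks roughly like this, a sketch assuming the put.jsp/get.jsp pages of the linked cluster-demo application:

# store something in the session on srv1 and note the JSESSIONID in the Set-Cookie header
curl -i http://deb1:8080/cluster-demo/put.jsp
# replay the same cookie against srv2; the value stored on srv1 should come back
curl -b "JSESSIONID=<value-from-srv1>" http://deb2:8080/cluster-demo/get.jsp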

Related

"not in dispatcher" - issues connecting a validator peer to genesis validator

I have been banging my head for a while on this one.
So, I have successfully (maybe) created a running sawtooth validator with a settings-tp and poet-validator-registry (all containers from scratch).
I created it with a config-genesis.batch, then "proposal create" with poet and a public key PEM etc. for a config.batch, then "poet registration create" for a poet.batch, then "proposal create" again with the additional poet settings, which gives a poet-settings.batch.
Basically, I am copying for the most part the docker-compose for poet default, but now rolled with my own containers from scratch (I want to know how everything pieces together in detail).
Anyway, one of those details concerns keys and auth... it's finally running: the settings-tp and poet-validator-registry are happy with it and communicating normally, and it creates a genesis block as it should.
However, I then try to connect another validator to it as a peer...
"No chain head and not the genesis node: starting in peering mode" - GREAT!
However, when it tries to connect:
[2018-05-10 10:30:10.542 INFO dispatch] Can't send message PING_RESPONSE back to ee58844c071426276de533cadfafbd3c2448604e59fd81f4758edc07b5beea89476a6252e0a2144d43f14e06bf90c57dd2613562221954e3b2eddc6d2fcd9ef6 because connection OutboundConnectionThread-tcp://192.168.1.200:8800 not in dispatcher
[2018-05-10 10:30:10.542 INFO dispatch] Can't send last message AUTHORIZATION_VIOLATION back to ee58844c071426276de533cadfafbd3c2448604e59fd81f4758edc07b5beea89476a6252e0a2144d43f14e06bf90c57dd2613562221954e3b2eddc6d2fcd9ef6 because connection OutboundConnectionThread-tcp://192.168.1.200:8800 not in dispatcher
It's so hard to find explanations for this; the only places I can find anything are the original references in the source code, and I'm not going to reverse engineer that anytime soon.
My settings for the validators on startup are:
- the usual binds to 0.0.0.0
- peering: dynamic
- scheduler: serial
- network: trust
Any help would be so soooo appreciated!
Many thanks in advance :)
Aaron.
The usual cause of
Can't send message PING_RESPONSE back to . . . because connection ... not in dispatcher
is misconfigured peer endpoints:
1) If you are using Ubuntu directly instead of Docker, use the validator's hostname or IP address instead of the default ("validator"), which only works with Docker, or "localhost", which may not be routable.
2) If you are using Docker, make sure the Docker ports are mapped to the Ubuntu OS, and that the OS IP address/port is routable between the two machines. Check the expose: and ports: entries in your docker-compose.yaml or similar file.
3) Verify network connectivity to the remote machine with ping.
4) Verify port connectivity with telnet aremotehostname 8800 (replace aremotehostname with the remote peer's hostname or IP address).
5) Check the peer configuration in your /etc/sawtooth/validator.toml files: the peering and endpoint lines, plus the seeds line (for dynamic peering) or peers line (for static peering). A minimal example follows this list.
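A minimal validator.toml for the second (non-genesis) validator might look like this; the addresses are illustrative, with 192.168.1.200 taken from the log lines above:

# /etc/sawtooth/validator.toml
bind = [
  "network:tcp://0.0.0.0:8800",
  "component:tcp://0.0.0.0:4004"
]
# endpoint must be an address the OTHER machines can route to,
# not 0.0.0.0 and not localhost
endpoint = "tcp://192.168.1.201:8800"
peering = "dynamic"
seeds = ["tcp://192.168.1.200:8800"]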

Using WebSphere MQ 7.5.0 on 2 different virtual machines

I am currently experimenting with WebSphere MQ 7.5.0, which is used to send messages from one machine to another.
I have a server with 2 virtual machines (VM1 and VM2) configured, as well as a standalone laptop. All the machines mentioned above use the same IP range (192.168.0.2-5) and the same subnet, and I turned off the firewall during the experiment.
I followed the IBM website and set up the necessary queue manager, local queues, remote definitions and channels. I succeeded in connecting the laptop to the server, and also the server to VM1.
However, when I try to connect VM1 and VM2, the sender channel stays in retrying status after binding, which means the connection between VM1 and VM2 is not established. I tried to ping VM2 from cmd and received all the packets successfully.
What could be the reason why VM1 and VM2 cannot be connected? Is there any requirement in IBM MQ that at least one of the queue managers must be located on a physical computer?
Thank you everybody in advance!
I would suggest checking the ports used by your VM queue managers, and making sure the listeners for those ports are running successfully and are not being used by another process.
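A quick way to check, a sketch assuming a queue manager named QM1 and the default MQ listener port 1414:

# on each VM, inspect listener and channel status inside runmqsc
runmqsc QM1
DISPLAY LSSTATUS(*)
DISPLAY CHSTATUS(*)
END
# then confirm nothing else owns the listener port
netstat -an | grep 1414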

ElasticSearch Multicast not working in Linode

I have 2 fresh Ubuntu Linodes in the same data centre with the same ES config except different node names. The cluster name is the same. They can each curl to each other's ElasticSearch server and there's no firewall yet in place, but multicast isn't working and I can't figure out why. They both elect themselves as master and nothing is logged about the other node or the cluster.
Is there any reason why multicast wouldn't work in an environment like this?
As Konstantin says in the comments, multicast is typically not supported in a multi-tenant environment, which makes sense, but it still could have been useful for testing. Some more info here: http://blog.killtheradio.net/how-tos/keepalived-haproxy-and-failover-on-the-cloud-or-any-vps-without-multicast/
"The problem with multicast in reality is that most “cloud” (VPS) providers (AWS, Linode, Slicehost, Rackspace, etc) don’t support it on their networks. You can send a multicast message to a group, but your other machines listening on that group won’t hear it."
While there are workarounds, the simplest thing in this case is to switch to unicast.
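For the Elasticsearch versions of that era (zen discovery), the switch looks like this in elasticsearch.yml on both nodes; the IPs are illustrative:

# disable multicast discovery and list the other node(s) explicitly
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["192.0.2.10", "192.0.2.11"]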

Full Clustering in Apache Traffic Server

I followed the steps mentioned in the official documentation for full clustering of multiple ATS instances. I installed 2 instances of ATS on 2 different Ubuntu machines (having the same specs, OS version and hardware), and both act as a reverse proxy for a web service hosted on a Tomcat server on a different machine. I wasn't able to set up the cluster. Here are some of the queries that I have.
They are on the same switch or same VLAN: the two Ubuntu machines on which I installed ATS are connected to the same switch, and they have the same interface mentioned in /etc/network/interfaces. Is this enough, or is there something else that has to be done to get clustering?
Running the command traffic_line -r proxy.process.cluster.nodes: this returned 1 after I ran the traffic_line -x and traffic_line -L commands, but there are no additions or changes in the cluster.config file.
Moreover, when I make a query to one of these ATS instances (I have mapped the URLs in the remap.config file), both of them cache the responses locally; the cache is not shared.
From this information, can anyone tell me if I am doing something wrong? Let me know if any more info is required.
Are these on virtual machines? I almost wasted 2 days trying to figure out what was wrong when I initially set it up on OpenVZ containers. Out of a wild guess, I decided to migrate to 2 physical nodes, and it went well. See Apache Traffic Server Clustering not working
proxy.process.cluster.nodes returning 1
means that it is just a standalone single node, and the second node in the cluster has not been discovered.
Try a tcpdump for multicast and broadcast messages. If the other server's IP does not show up in the discovery packets, it is something at the network level, where the netops might have disabled multicast packet forwarding across switches.
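Something along these lines, assuming eth0 is the cluster interface:

# capture multicast and broadcast traffic on the cluster interface
tcpdump -n -i eth0 multicast or broadcast
# discovery packets should appear with source IPs from BOTH ATS machines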

Starting multiple remote servers with Akka

I'm running into some deployment issues using Akka remoting to implement a small search application.
I want to deploy my ActorSystem on a set of local cluster machines to use them as workers, but I'm a bit confused about what to put into my application.conf to make this happen. For example, I can use:
akka.remote {
  transport = "akka.remote.netty.NettyRemoteTransport"
  netty {
    hostname = "0.0.0.0"
    port = 2552
  }
}
Each worker just runs the ActorSystem at startup.
This allows my worker machines to bind to their address when they start up, but then they refuse to listen to messages:
beaker-24: [ERROR] ... dropping message DaemonMsgWatch for non-local recipient akka://SearchService#beaker-24:2552/remote at akka://SearchService#0.0.0.0:2552
The documentation I've found so far only discusses deployment on localhost, which is not so useful :). I'm hoping there is a way to do this without generating a separate configuration for each host.
Update:
Using an empty string as the hostname allows the host to be contacted via its normal IP address. Addressing via the hostname itself doesn't work at the moment.
Setting “0.0.0.0” as the host name will currently basically disable remoting, because that is not a legal IP to send to. Background: actor references get the configured IP (or host name) inserted into their address part when they leave the local system, and that is exactly their “pointer home” for other systems to send messages back.
There has been an effort by Scott which would enable a system to receive replies at a different address here, but that is not included yet, and we may well choose a different solution to this problem.
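One way to avoid a separate configuration per host, a sketch relying on Typesafe Config's system-property overrides: leave the hostname out of application.conf and inject each machine's own routable name at JVM startup (worker.jar and the use of hostname -f are illustrative):

# each worker advertises its own resolvable hostname instead of 0.0.0.0
java -Dakka.remote.netty.hostname=$(hostname -f) \
     -Dakka.remote.netty.port=2552 \
     -jar worker.jar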
