How can I set up multiple WildFly nodes on a single machine in clustered mode?
Should I create multiple standalone nodes or multiple domain nodes if I want all of these nodes on my one machine?
The reason I want all nodes on one machine is that I am currently learning and validating a few of its capabilities.
Note: I followed http://middlewaremagic.com/jboss/?p=1952 but I keep getting the following error:
ERROR [org.jboss.msc.service.fail] (MSC service thread 1-2) MSC000001: Failed to start service jboss.network.public: org.jboss.msc.service.StartException in service jboss.network.public: JBAS015810: failed to resolve interface public
at org.jboss.as.server.services.net.NetworkInterfaceService.start(NetworkInterfaceService.java:96) [wildfly-server-8.2.0.Final.jar:8.2.0.Final]
at org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1948) [jboss-msc-1.2.2.Final.jar:1.2.2.Final]
at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1881) [jboss-msc-1.2.2.Final.jar:1.2.2.Final]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [rt.jar:1.8.0_25]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [rt.jar:1.8.0_25]
at java.lang.Thread.run(Thread.java:745) [rt.jar:1.8.0_25]
Updated understanding on Standalone vs Domain:
The HA/failover capability provided by multiple nodes in a cluster is the same in Standalone and Domain mode. The difference is that Domain mode lets admins manage and deploy to all the nodes from a single node's (the domain controller's) Admin Console. This URL has a good explanation of it:
Ref: https://docs.jboss.org/author/display/WFLY8/Admin+Guide#AdminGuide-
Update: For now, for learning purposes, I have configured multiple nodes on my machine in Domain mode, following blog.arungupta.me/wildfly-8-clustering-and-session-failover
Run two or more instances on the same machine using an HA profile; choose standalone-ha.xml. For example, on the same machine:
%WILDFLY_HOME_1%/standalone.bat -c standalone-ha.xml -Djboss.node.name=srv1
and
%WILDFLY_HOME_2%/standalone.bat -c standalone-ha.xml -Djboss.node.name=srv2 -Djboss.socket.binding.port-offset=100
and you'll have a cluster of two nodes on the same machine, the first one bound to port 8080 and the second one bound to port 8180. There is no need to configure a domain to have a cluster.
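As for the JBAS015810 "failed to resolve interface public" error from the question: that failure generally means the public interface in the configuration cannot be resolved to an address on the machine. A sketch of one common workaround, assuming 127.0.0.1 is a valid address to bind to on your machine, is to bind both instances explicitly with -b and -bmanagement:
REM Sketch only: bind both instances explicitly so the public interface resolves
%WILDFLY_HOME_1%/standalone.bat -c standalone-ha.xml -b 127.0.0.1 -bmanagement 127.0.0.1 -Djboss.node.name=srv1
%WILDFLY_HOME_2%/standalone.bat -c standalone-ha.xml -b 127.0.0.1 -bmanagement 127.0.0.1 -Djboss.node.name=srv2 -Djboss.socket.binding.port-offset=100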
I am very new to Consul and have been reading about Consul clustering recently. My understanding is that for each node (equivalent to a physical machine or VM) we run one local Consul agent (in client mode), and any microservices running on that node register themselves through this agent. But what happens if this one and only agent goes down? Won't the microservices on that node be unable to register anymore? Or should we run more than one Consul agent (in client mode) per node to handle such a situation?
You are correct. If the Consul agent is down, the services on that host will not be able to register with the agent, and Consul will consider all services which were previously registered against the agent to be unavailable.
A very simple solution is to run Consul under a process manager like systemd, and configure systemd to restart the agent if the process unexpectedly fails. You can find an example systemd unit for this at https://learn.hashicorp.com/tutorials/consul/deployment-guide#configure-systemd. If Consul is installed from the HashiCorp Linux package repo (https://learn.hashicorp.com/tutorials/consul/get-started-install), this systemd unit will be included as part of the installation package.
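For reference, here is a minimal sketch of such a unit; it is not the complete unit from the deployment guide, and the binary and config paths are assumptions that should match your installation:
# /etc/systemd/system/consul.service (sketch)
[Unit]
Description=Consul agent
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/bin/consul agent -config-dir=/etc/consul.d/
ExecReload=/bin/kill -HUP $MAINPID
# Restart=on-failure is the key setting: systemd restarts the agent if the process dies unexpectedly
Restart=on-failure

[Install]
WantedBy=multi-user.target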
I'm trying to use the SiteToSiteProvenance Reporting Task.
The objective is to send provenance data between two dockerized instances of NiFi, one at port 8080 and another at port 9090.
I've created an input port creatively called "IN" on the destination NiFi, and the service configuration on the source NiFi is:
However I'm getting the following error:
Unable to refresh Remote Group's peers due to Unable to communicate with remote NiFi cluster in order to determine which nodes exist in the remote cluster
I've also exposed port 10000 on the destination Docker container.
As mentioned in the comments, it appears there was a networking issue between the containers.
It was finally resolved by the asker by not using containers.
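If you do want to keep both instances in containers, one approach worth trying (a sketch with hypothetical container names, assuming the apache/nifi image and its default web port 8080) is to put both containers on the same user-defined Docker network and address the destination by container name instead of localhost:
# Shared bridge network so the two NiFi containers can resolve each other by name
docker network create nifi-net
docker run -d --name nifi-source --network nifi-net -p 8080:8080 apache/nifi
docker run -d --name nifi-dest --network nifi-net -p 9090:8080 -p 10000:10000 apache/nifi
# The reporting task on the source would then point at http://nifi-dest:8080/nifi,
# with port 10000 reachable inside the network for site-to-site transfers.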
There is a Consul cluster in my local environment, and some developers' local machines as well. Each developer has a Tomcat server which runs some web artifacts in a Docker container, so I want to register these artifacts as services when they are deployed to Tomcat.
Assuming that we have already registered an empty node for each developer's local machine, how can I register/deregister a new service on an existing node? Do I need a Consul agent running on each node?
I know it's possible to add a service when registering a node, but I haven't found any info on how to add services to a node dynamically. I'd prefer the HTTP API if possible (it's much easier to use on local machines).
Do I need a Consul agent running on each node?
Yes. Even though you can also register external services on a remote machine with a curl POST, service discovery will benefit from having the agent running on the nodes as well.
I know it's possible to add a service when registering a node, but I haven't found any info on how to add services to a node dynamically.
Registering a service is fairly easy in Consul, and you can find more details at the following link:
https://www.consul.io/intro/getting-started/services.html
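Since you prefer the HTTP API, here is a sketch using the local agent's API; the service name, ID and port are made up for illustration, and 8500 is the agent's default HTTP port:
# Register a service dynamically against the agent on the developer's machine
curl --request PUT --data '{"ID": "webapp-1", "Name": "webapp", "Port": 8080}' http://localhost:8500/v1/agent/service/register
# Deregister the same service by its ID
curl --request PUT http://localhost:8500/v1/agent/service/deregister/webapp-1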
However, if you wish to give your developers better isolation, I would recommend running the Consul server/client agents in Docker and letting Registrator take care of everything.
Registrator from gliderlabs is a service registry bridge for Docker. It automatically registers and deregisters services for any Docker container by inspecting containers as they come online.
You can find more details here: https://github.com/gliderlabs/registrator
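A minimal sketch of running it alongside a local Consul agent (the Consul address is an assumption; point it at whatever agent the machine runs):
# Registrator watches the Docker socket and registers/deregisters containers as they start and stop
docker run -d --name=registrator --net=host --volume=/var/run/docker.sock:/tmp/docker.sock gliderlabs/registrator:latest consul://localhost:8500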
In my microservices system I plan to use docker swarm and Consul.
In order to ensure the high availability of Consul I'm going to build a cluster of 3 server agents (along with a client agent per node), but this doesn't save me from a local Consul agent failure.
Am I missing something?
If not, how can I configure Swarm to be aware of more than one Consul agent?
Consul is the only service discovery backend that doesn't support multiple endpoints when used with Swarm.
Both ZooKeeper and etcd support the etcd://10.0.0.4,10.0.0.5 format of providing multiple IPs for the "cluster" of discovery backends when using Swarm.
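For comparison, a sketch of what that looks like when starting a classic Swarm manager against etcd or ZooKeeper (the addresses and the /swarm key prefix are placeholders):
# Multiple discovery endpoints are accepted for etcd and ZooKeeper, but not for Consul
docker run -d swarm manage etcd://10.0.0.4:2379,10.0.0.5:2379/swarm
docker run -d swarm manage zk://10.0.0.4,10.0.0.5,10.0.0.6/swarm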
To answer your question of how you can configure Swarm to support more than one Consul server: I don't have a definitive answer, but I can point you in a direction with something you can test (no guarantees):
One suggestion worth testing (not recommended for production) is to use a load balancer that passes requests from the Swarm managers to one of the three Consul servers.
So when starting the Swarm managers you can point them at consul://ip_of_loadbalancer:port
This will, however, make the LB a single point of failure (if it goes down, discovery goes down with it).
I have not tested the above and can't say whether it will work; it is merely a suggestion.
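A sketch of what that could look like with HAProxy, with made-up addresses and, as stated above, untested:
# haproxy.cfg -- forward Consul HTTP traffic to any of the three servers
listen consul
    bind *:8500
    mode tcp
    balance roundrobin
    server consul1 10.0.0.11:8500 check
    server consul2 10.0.0.12:8500 check
    server consul3 10.0.0.13:8500 check
# The Swarm managers would then point at the load balancer:
docker run -d swarm manage consul://10.0.0.10:8500/swarm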
Newbie w/ etcd/zookeeper type services ...
I'm not quite sure how to handle cluster installation for etcd. Should the service be installed on each client or on a group of independent servers? I ask because, if I'm on a client, how would I query the cluster? Every tutorial I've read shows a curl command running against localhost.
For an etcd cluster installation, you can install the service on independent servers and form a cluster. The cluster can be queried by logging onto one of the machines and running curl against localhost, or remotely by specifying the IP address of one of the cluster member nodes.
For more information on how to set it up, follow this article
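As a sketch of what the remote query looks like (10.0.0.21 is a placeholder for a cluster member's address; 2379 is etcd's default client port, and this uses the v2 keys API):
# Set and read a key against a remote cluster member instead of localhost
curl -X PUT http://10.0.0.21:2379/v2/keys/message -d value="hello"
curl http://10.0.0.21:2379/v2/keys/message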