Infinispan 5.3.Final two-node cluster configuration - caching

We want to create a cluster with two nodes. We have followed the steps at https://docs.jboss.org/author/display/ISPN/Infinispan+Server and we use the configuration file standalone/configuration/clustered-two-nodes.xml. We have changed some ports (8080, 9999, 4447) and replaced 127.0.0.1 with the IP of the machine. We tried it, but it didn't work.
Is this the best configuration for a two-node cluster?
Is there any step that we missed?
I saw several threads and this seems to be the correct way, but we tried different configurations and it doesn't work.
Thank you

We start the cluster on 2 different machines and we use TCP in the configuration. I have followed several related questions but we didn't find a solution to our problem.
These are our changes:
IP.IP.IP.IP = our IP.
<subsystem xmlns="urn:jboss:domain:jgroups:1.2" default-stack="${jboss.default.jgroups.stack:tcp}">
<inet-address value="${jboss.bind.address.management:IP.IP.IP.IP}"/>
<inet-address value="${jboss.bind.address:IP.IP.IP.IP}"/>
<socket-binding name="management-native" interface="management" port="${jboss.management.native.port:19999}"/>
<socket-binding name="management-http" interface="management" port="${jboss.management.http.port:9991}"/>
<socket-binding name="http" port="8081"/>
<socket-binding name="remoting" port="4448"/>

Related

How do you configure JBoss to allow port 8080 over HTTPS?

I have a JBoss server (7.0) running an application that uses ServiceWorkers, which requires an HTTPS connection. I was able to update the standalone.xml and Eclipse launch configuration to bind my JBoss server to my local IP (I'll worry about port forwarding later). Connecting to http://192.168.0.197:8080/[application] works just fine, except that ServiceWorkers won't start because it isn't an HTTPS connection. If I try https://192.168.0.197:8080/[application], the connection fails with the browser reporting "unable to connect".
I've researched several documentation sources and can't figure out what needs to be updated. Please forgive any terminology errors - my background is with application programming and networking tends to be the bane of my existence.
This is the pertinent standalone.xml configuration:
<subsystem xmlns="urn:jboss:domain:webservices:2.0">
<wsdl-host>${jboss.bind.address:192.168.0.97}</wsdl-host>
<endpoint-config name="Standard-Endpoint-Config"/>
<endpoint-config name="Recording-Endpoint-Config">
<pre-handler-chain name="recording-handlers" protocol-bindings="##SOAP11_HTTP ##SOAP11_HTTP_MTOM ##SOAP12_HTTP ##SOAP12_HTTP_MTOM">
<handler name="RecordingHandler" class="org.jboss.ws.common.invocation.RecordingServerHandler"/>
</pre-handler-chain>
</endpoint-config>
<client-config name="Standard-Client-Config"/>
</subsystem>
<subsystem xmlns="urn:jboss:domain:weld:4.0"/>
</profile>
<interfaces>
<interface name="management">
<inet-address value="${jboss.bind.address.management:192.168.0.97}"/>
</interface>
<interface name="public">
<inet-address value="${jboss.bind.address:192.168.0.97}"/>
</interface>
</interfaces>
<socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">
<socket-binding name="management-http" interface="management" port="${jboss.management.http.port:9990}"/>
<socket-binding name="management-https" interface="management" port="${jboss.management.https.port:9993}"/>
<socket-binding name="ajp" port="${jboss.ajp.port:8009}"/>
<socket-binding name="http" port="${jboss.http.port:8080}"/>
<socket-binding name="https" port="${jboss.https.port:8443}"/>
<socket-binding name="txn-recovery-environment" port="4712"/>
<socket-binding name="txn-status-manager" port="4713"/>
<outbound-socket-binding name="mail-smtp">
<remote-destination host="localhost" port="25"/>
</outbound-socket-binding>
</socket-binding-group>
And the Eclipse launch configuration:
-mp "C:\JBOSS-EAP70\modules" org.jboss.as.standalone --server-config=standalone.xml -Djboss.server.base.dir=C:\JBOSS-EAP70\standalone
"-Dprogram.name=JBossTools: Red Hat JBoss Enterprise Application Platform 7.0 at localhost" -server -Xms1303m -Xmx1303m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m -Dorg.jboss.resolver.warning=true -Djava.net.preferIPv4Stack=true -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000 -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true "-Dorg.jboss.boot.log.file=C:\JBOSS-EAP70\standalone\log\boot.log" "-Dlogging.configuration=file:C:\JBOSS-EAP70\standalone\configuration\logging.properties" "-Djboss.home.dir=C:\JBOSS-EAP70" -Dorg.jboss.logmanager.nocolor=true
It's there in your configuration:
<socket-binding name="http" port="${jboss.http.port:8080}"/>
<socket-binding name="https" port="${jboss.https.port:8443}"/>
HTTPS is bound to 8443, not 8080, so the TLS endpoint would be https://192.168.0.197:8443/[application]; if you really want HTTPS on 8080 you'd need to move the http binding to another port and then set the https binding to 8080.
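One caveat (an assumption about your setup, since the undertow part of your standalone.xml isn't shown): the https socket binding by itself does not serve TLS. There also has to be an https-listener in the undertow subsystem that references it, backed by a realm with a keystore, roughly:
<server name="default-server">
    <http-listener name="default" socket-binding="http" redirect-socket="https"/>
    <!-- without a listener like this, nothing answers on the https binding at all -->
    <https-listener name="https" socket-binding="https" security-realm="ApplicationRealm"/>
    ...
</server>
and the ApplicationRealm (or whichever realm you point at) needs a <server-identities><ssl> keystore configured in the security-realms section.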

Right configuration for HA cluster ActiveMQ Artemis

I'm new to ActiveMQ Artemis and I'm asking the community to check whether my configuration of an HA cluster of brokers is right, or whether I should configure them another way, as I haven't found a detailed tutorial for my case. All of the brokers run on the same machine.
The scenario:
There is a master node on port 61617 and two slave nodes (slave1, slave2) on ports 61618 and 61619. If the master node dies, one of the slaves becomes active (replication mode).
It's necessary for the consumer to communicate with the cluster as a "black box". By that I mean that a change of master (i.e. when the master dies) shouldn't have any effect on the consumer (i.e. on the way it connects to the cluster).
What I managed to do (as I understand it, for this case we only need to configure the cluster, acceptor, and connector properties, so I attach only this part of the brokers' configuration):
master broker:
<connectors>
<connector name="artemis">tcp://localhost:61617</connector>
</connectors>
<ha-policy>
<replication>
<master/>
</replication>
</ha-policy>
<acceptors>
<acceptor name="artemis">tcp://localhost:61617</acceptor>
</acceptors>
<cluster-user>cluster</cluster-user>
<cluster-password>cluster</cluster-password>
<broadcast-groups>
<broadcast-group name="bg-group1">
<group-address>231.7.7.7</group-address>
<group-port>9876</group-port>
<broadcast-period>5000</broadcast-period>
<connector-ref>artemis</connector-ref>
</broadcast-group>
</broadcast-groups>
<discovery-groups>
<discovery-group name="dg-group1">
<group-address>231.7.7.7</group-address>
<group-port>9876</group-port>
<refresh-timeout>10000</refresh-timeout>
</discovery-group>
</discovery-groups>
<cluster-connections>
<cluster-connection name="my-cluster">
<connector-ref>artemis</connector-ref>
<message-load-balancing>ON_DEMAND</message-load-balancing>
<max-hops>0</max-hops>
<discovery-group-ref discovery-group-name="dg-group1"/>
</cluster-connection>
</cluster-connections>
slave 1 broker (the cluster configuration is the same as the master's; it was auto-generated when creating the node with the --clustered option):
<ha-policy>
<replication>
<slave/>
</replication>
</ha-policy>
<connectors>
<connector name="artemis">tcp://localhost:61618</connector>
<connector name="netty-live-connector">tcp://localhost:61617</connector>
</connectors>
<acceptors>
<acceptor name="artemis">tcp://localhost:61618</acceptor>
</acceptors>
slave 2 broker (the cluster configuration is the same as the master's; it was auto-generated when creating the node with the --clustered option):
<ha-policy>
<replication>
<slave/>
</replication>
</ha-policy>
<connectors>
<connector name="artemis">tcp://localhost:61619</connector>
<connector name="netty-live-connector">tcp://localhost:61617</connector>
</connectors>
<acceptors>
<acceptor name="artemis">tcp://localhost:61619</acceptor>
</acceptors>
JNDI configuration in the consumer:
java.naming.factory.initial=org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory
connectionFactory.ConnectionFactory=(tcp://localhost:61617?ha=true&retryInterval=1000&retryIntervalMultiplier=1.0&reconnectAttempts=10,tcp://localhost:61618?ha=true&retryInterval=1000&retryIntervalMultiplier=1.0&reconnectAttempts=10,tcp://localhost:61619?ha=true&retryInterval=1000&retryIntervalMultiplier=1.0&reconnectAttempts=10)
My configuration works; however, I'm not sure it is the way it should be.
I've also found a similar question which uses static connectors. What do they do? I don't understand how they work. Or maybe that is the right configuration I am looking for?
The first thing to note is that using a single live/backup pair (or even live/backup/backup triplet) with network replication is dangerous due to the risk of "split-brain." I would recommend you use either 1 live/backup pair with shared-storage or 3 live/backup pairs with replication (which will allow the establishment of a proper quorum). Read the documentation about split brain for more details.
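For reference, the shared-storage variant only changes the ha-policy blocks; a minimal sketch, assuming both brokers can reach the same shared journal directory:
<!-- live broker -->
<ha-policy>
   <shared-store>
      <master/>
   </shared-store>
</ha-policy>
<!-- backup broker -->
<ha-policy>
   <shared-store>
      <slave/>
   </shared-store>
</ha-policy>
(The journal, bindings, paging and large-message directories then have to point at the shared location on both brokers.)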
Aside from the risk of split-brain, the broker configuration looks OK. Most (if not all) of the configuration details are covered in the clustering and HA documentation. There is also a wealth of examples which ship with the broker, many of which are specific to clustering and HA.
You could simplify your connection factory URL. Currently you have:
(tcp://localhost:61617?ha=true&retryInterval=1000&retryIntervalMultiplier=1.0&reconnectAttempts=10,tcp://localhost:61618?ha=true&retryInterval=1000&retryIntervalMultiplier=1.0&reconnectAttempts=10,tcp://localhost:61619?ha=true&retryInterval=1000&retryIntervalMultiplier=1.0&reconnectAttempts=10)
However, you could use:
(tcp://localhost:61617,tcp://localhost:61618,tcp://localhost:61619)?ha=true&retryInterval=1000&retryIntervalMultiplier=1.0&reconnectAttempts=10
Static connectors are typically used in environments which don't support UDP multicast. They allow manual configuration of the cluster members, as in the sketch below. If you are in an environment which supports UDP multicast, I recommend you use the discovery/broadcast group configuration rather than static discovery.
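For comparison, a static variant of your cluster-connection would drop the discovery-group-ref and list the other members explicitly, roughly like this (reusing the netty-live-connector you already define on the slaves; which connectors to list on each broker is an assumption):
<cluster-connections>
   <cluster-connection name="my-cluster">
      <connector-ref>artemis</connector-ref>
      <message-load-balancing>ON_DEMAND</message-load-balancing>
      <max-hops>0</max-hops>
      <!-- explicit list of the other cluster members instead of UDP discovery -->
      <static-connectors>
         <connector-ref>netty-live-connector</connector-ref>
      </static-connectors>
   </cluster-connection>
</cluster-connections>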
In general, if everything is working the way you want that indicates your configuration is fine.

How to configure WildFly 11 in HA mode with a preferred master?

I am currently using the default HA configuration in WildFly 11. I would like to know how I can tell it which particular cluster node is preferred when it is available.
I believe I should change the singleton subsystem, but I do not know how.
<subsystem xmlns="urn:jboss:domain:singleton:1.0">
<singleton-policies default="default">
<singleton-policy name="default" cache-container="server">
<simple-election-policy/>
</singleton-policy>
</singleton-policies>
</subsystem>
EDIT
Run ./jboss-cli.sh (jboss-cli.bat on Windows)
Run the command: /subsystem=singleton/singleton-policy=default/election-policy=simple:write-attribute(name=name-preferences,value=[node3,node2,node1])
The standalone-ha.xml was altered to:
<subsystem xmlns="urn:jboss:domain:singleton:1.0">
<singleton-policies default="default">
<singleton-policy name="default" cache-container="server">
<simple-election-policy>
<name-preferences>node3 node2 node1</name-preferences>
</simple-election-policy>
</singleton-policy>
</singleton-policies>
</subsystem>
Now I'd like to know what names to put in place of node3, node2, node1.
How do I define the name of my node?
Step 1: Edit standalone-ha.xml on the master server and add a name attribute to the <server> tag, as below:
<server name="master" xmlns="urn:jboss:domain:5.0">
Step 2: Edit standalone-ha.xml on the slave server and add a name attribute to the <server> tag, as below:
<server name="slave" xmlns="urn:jboss:domain:5.0">
Step 3: Edit the singleton subsystem on both servers as below:
<subsystem xmlns="urn:jboss:domain:singleton:1.0">
<singleton-policies default="default">
<singleton-policy name="default" cache-container="server">
<simple-election-policy>
<name-preferences>master</name-preferences>
</simple-election-policy>
</singleton-policy>
</singleton-policies>
</subsystem>
When the master goes down, the slave takes over; when the master comes back up, it takes over the singleton role again.
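As an alternative to hard-coding the name attribute in standalone-ha.xml, the node name can also be set at startup via the jboss.node.name system property, which is what the name-preferences are matched against; a sketch (paths and names are assumptions):
# on the preferred box
./standalone.sh -c standalone-ha.xml -Djboss.node.name=master
# on the other box
./standalone.sh -c standalone-ha.xml -Djboss.node.name=slave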

docker-maven-plugin from fabric8: connection between tomcat and postgres container

I am using the docker-maven-plugin from fabric8 to set up two containers:
Postgres
tomcat8
Both containers can be set up fine separately; I can connect to both of them from outside (from the host). I am doing this as follows:
<plugin>
<groupId>io.fabric8</groupId>
<artifactId>docker-maven-plugin</artifactId>
<version>0.22.1</version>
<configuration>
<autoCreateCustomNetworks>true</autoCreateCustomNetworks>
<images>
<image>
<alias>database</alias>
<name>postgres:9</name>
<run>
<network>
<name>network</name>
<alias>database</alias>
</network>
<ports>
<port>db-port:5432</port>
</ports>
<wait>
<log>ready to accept connections</log>
</wait>
</run>
</image>
<image>
<alias>container</alias>
<name>inovatrend/tomcat8-java8</name>
<run>
<network>
<name>network</name>
<alias>tomcat</alias>
</network>
<dependsOn>
<container>database</container>
</dependsOn>
<ports>
<port>tomcat-port:8080</port>
</ports>
<wait>
<http>
<url>http://localhost:${tomcat-port}</url>
</http>
</wait>
</run>
</image>
</images>
</configuration>
</plugin>
I am having trouble configuring things so that the tomcat8 container is allowed to connect to the Postgres container.
As you can see, I am declaring a custom network in each image, and the tomcat container depends on the database container.
<network>
<name>network</name>
<alias>database</alias>
</network>
and
<network>
<name>network</name>
<alias>tomcat</alias>
</network>
<dependsOn>
<container>database</container>
</dependsOn>
But I am unable to establish a JDBC connection to localhost:5432 from the tomcat container.
Is this configuration correct? Which IP:PORT should the tomcat8 container use to connect to the database? Ideally, this IP:PORT should not be fixed, so that multiple Maven instances can run concurrently without interfering (useful for simultaneous builds, e.g. on Jenkins).
I ran into the same issue. I actually ended up with the very same docker-maven-plugin configuration as you did by following the documentation, and I also didn't know what URL to use to get from one container to another.
The missing piece was understanding how Docker networking works. Following this tutorial brought the message home.
In short: to access the database from the tomcat container, use database:5432.
When containers are on the same network (e.g. the custom bridge network in this case) they can resolve each other by their network aliases - e.g. database. The containers also publish ports to the host - in this case the database's container port 5432 is mapped to a randomly assigned host port (the db-port property). Inside the Docker network the container ports themselves are used - so 5432. From the outside, for instance from the host, it is the randomly assigned port.
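So with the aliases from the plugin configuration above, a JDBC URL from inside the tomcat container would look something like this (the database name and credentials are assumptions - the official postgres image defaults to a postgres database and user unless POSTGRES_* variables say otherwise):
jdbc:postgresql://database:5432/postgres
From the host you would instead use localhost and the randomly assigned ${db-port}.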

Make WildFly listen on port 443, not 8443

I have added an SSL certificate to my WildFly 9 and it's working, but I want to configure my standalone.xml to listen for https on port 443 instead of the default 8443. When I update the value ${jboss.https.port:8443} to ${jboss.https.port:443}, it generates an error.
This is what I have in my standalone.xml:
<server name="default-server">
<http-listener name="default" socket-binding="http" redirect-socket="https"/>
<https-listener name="httpsServer" socket-binding="https" security-realm="ApplicationRealm"/>
<host name="default-host" alias="localhost">
<location name="/" handler="welcome-content"/>
<location name="/images" handler="ImagesDirHandler"/>
<filter-ref name="server-header"/>
<filter-ref name="x-powered-by-header"/>
</host>
</server>
<socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">
<socket-binding name="management-http" interface="management" port="${jboss.management.http.port:9990}"/>
<socket-binding name="management-https" interface="management" port="${jboss.management.https.port:9993}"/>
<socket-binding name="ajp" port="${jboss.ajp.port:8009}"/>
<socket-binding name="http" port="${jboss.http.port:8080}"/>
<socket-binding name="https" port="${jboss.https.port:8443}"/>
<socket-binding name="txn-recovery-environment" port="4712"/>
<socket-binding name="txn-status-manager" port="4713"/>
<outbound-socket-binding name="mail-smtp">
<remote-destination host="localhost" port="25"/>
</outbound-socket-binding>
</socket-binding-group>
Please provide more details about your environment and the errors.
I had similar needs. Our users access the system through a network where the only ports available are 80 and 443. So when a customer called the system on port 80, WildFly redirected to port 8443 and the user could not connect. The solution was to make WildFly redirect to port 443 instead of 8443. Here are some instructions for anyone looking for help with this issue:
On a Linux based operating system, ports up to 1024 can only be bound with root privileges.
It isn't a great idea to run WildFly or any other web/app server with root privileges on a production server.
On the other hand, trying to run WildFly as a 'regular' user bound directly to port 443 or 80 results in permission-denied errors.
The solution to the problem described above was to bind WildFly to ports 8080/8443 (without root privileges) and have the operating system redirect traffic from port 80 to port 8080 and from port 443 to port 8443. After that, configure WildFly to redirect http requests to https on port 443 instead of 8443.
So, assuming WildFly is running as a service on a Linux based OS, with http on port 8080 and https on port 8443:
1) Stop wildfly: sudo service wildfly stop
2) Add iptables commands to the startup script /etc/init.d/wildfly, like:
if [ $launched -eq 0 ]; then
log_warning_msg "$DESC hasn't started within the timeout allowed"
log_warning_msg "please review file \"$JBOSS_CONSOLE_LOG\" to see the status of the service"
else
iptables -t nat -A PREROUTING -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 8080
iptables -t nat -A PREROUTING -p tcp -m tcp --dport 443 -j REDIRECT --to-ports 8443
fi
Here $launched is a shell variable in the init script that represents whether WildFly started within the timeout.
PS: You add the rules to a table called "nat"; from the iptables man pages:
nat:
This table is consulted when a packet that creates a new connection is encountered.
So, if you requested https://localhost:443 before the rules were created, that connection was already established and the nat table is not applied. Try again from a new device.
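To confirm the rules are in place, the nat PREROUTING chain can be listed with a standard iptables command:
sudo iptables -t nat -L PREROUTING -n --line-numbers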
3) In standalone.xml, create an additional socket-binding entry:
<socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">
<socket-binding name="management-http" interface="management" port="${jboss.management.http.port:9990}"/>
<socket-binding name="management-https" interface="management" port="${jboss.management.https.port:9993}"/>
<socket-binding name="ajp" port="${jboss.ajp.port:8009}"/>
<socket-binding name="http" port="${jboss.http.port:8080}"/>
<socket-binding name="https" port="${jboss.https.port:8443}"/>
<socket-binding name="https-external" port="443"/>
<socket-binding name="txn-recovery-environment" port="4712"/>
<socket-binding name="txn-status-manager" port="4713"/>
...
</socket-binding-group>
Pay attention to the new entry <socket-binding name="https-external" port="443"/>
4) Change the http-listener to redirect to https-external instead of https:
<http-listener name="default" socket-binding="http" redirect-socket="https-external" max-header-size=...
Where the change is redirect-socket="https-external"
5) Restart WildFly: sudo service wildfly start
After WildFly starts, check the console.log file for any error reports.
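A quick external check is a curl call against the redirected port (run it from another machine, since the PREROUTING rules do not apply to locally generated traffic; -k skips certificate validation for a self-signed certificate):
curl -vk https://SERVER.IP/    # SERVER.IP = the machine's address; expect your application or the welcome page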
Then, if your web.xml security constraint enforces confidential transport:
....
<security-constraint>
...
<user-data-constraint>
<transport-guarantee>CONFIDENTIAL</transport-guarantee>
</user-data-constraint>
</security-constraint>
...
WildFly will redirect requests on port 80 or 8080 directly to port 443 instead of 8443.
Obs: It is a good idea to make backup copies of your /etc/init.d/wildfly script and your standalone.xml configuration file before making any changes to them.
