JBoss AS7 cluster can't do session failover

My JBoss cluster can't fail over sessions. The cluster has two nodes, is managed in domain mode, and httpd-2.2 handles the load balancing.
Steps to reproduce:
start the cluster, which includes two nodes named node1 and node2
log in, then observe that the request is forwarded to node1, meaning node1 handles the login and holds the session
shut down node1
continue performing web operations
The web application then reports that the session has expired.
The log of node2 contains an exception like this:
[org.jboss.as.clustering.web.infinispan] JBAS010322: Failed to load session ; java.lang.RuntimeException: Failed to load session attributes for session ;
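For context, AS7 only replicates sessions when the web application is marked distributable and the servers run an HA profile; the relevant web.xml fragment looks roughly like this (the descriptor version shown is only illustrative):

<web-app xmlns="http://java.sun.com/xml/ns/javaee" version="3.0">
    <!-- required so JBoss AS7 replicates HTTP sessions across the cluster -->
    <distributable/>
</web-app>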

Related

How can I run a service in active-passive mode in OpenStack Ansible?

I have a service called workload.
I need to run this service across 3 nodes: node1 is my primary node, and node2 & node3 are secondaries.
The service should run only on node1; if anything goes wrong with it, the service should start on one of the secondary nodes so that there is no interruption to our service.
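To make the layout concrete, the host grouping I have in mind looks roughly like this (the group names are placeholders I made up):

[workload_primary]
node1

[workload_secondary]
node2
node3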

Standalone mesos master behind load balancer EC2

I have a Mesos master behind a load balancer, and a Mesos agent that tries to connect to the master through the load balancer.
Everything works when the agent connects directly to the master via the --master flag, but as soon as I change --master to point to the load balancer (the DNS entry, not the LB IP), I keep getting the following error repeatedly on my agent:
I0223 11:16:55.776448 4945 slave.cpp:1416] Detecting new master
I0223 11:16:55.796245 4947 slave.cpp:6456] Got exited event for master@xx.xx.xx.xx:8082
W0223 11:16:55.796283 4947 slave.cpp:6461] Master disconnected! Waiting for a new master to be elected
I don't see any logs on the master.
mesos master port: 8082
load balancer listener: 8082 -> 8082
mesos agent port: 5052
We use the classic load balancer, which does not preserve the client IP.
I then tried advertising the agent IP & port, but that didn't help either.
I also tried setting --hostname, --advertise_ip & --advertise_port on the master, but that didn't help either.
Has anyone faced this issue? What should the right values for --advertise_ip and --advertise_port be?
FYI, I'm not using the standard Mesos master/agent ports.
At this point I've tried all sorts of combinations:
hostname = DNS name
advertise_ip = the IP the DNS name resolves to
advertise_port = External port
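For reference, that combination corresponds to a master invocation along these lines; the DNS name, IP, and work directory are placeholders, and the flag names are the standard Mesos master flags:

# hostname = the LB's DNS name, advertise_ip = the IP that name resolves to,
# advertise_port = the externally reachable port (all placeholder values)
mesos-master \
  --work_dir=/var/lib/mesos \
  --port=8082 \
  --hostname=master-lb.example.com \
  --advertise_ip=203.0.113.10 \
  --advertise_port=8082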

rethinkdb - 2 nodes on separate servers config file example

I'm trying to set up a two-node system on separate Linux servers. Based on the "start multiple instances" documentation, they run rethinkdb with the bind all switch on the primary server, and on the secondary they use the join switch to point to the primary IP/port.
I would like to use the config file instead.
On my node1 (primary) I have the following:
bind 192.168.1.177
canonical-address 192.168.1.177
On node2 (secondary) I have the following:
bind 192.168.1.178
canonical-address 192.168.1.178
join 192.168.1.177:29015
On startup, node2 doesn't connect to node1. The only way I can get it to work is to also add a join setting on node1 and point it at node2's IP/port. Is that correct? An example of the two configs would be appreciated.
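For clarity, here is the same configuration written in the key=value form that the shipped sample config file uses, assuming the default cluster port of 29015 (file paths are only examples):

# node1 (primary), e.g. /etc/rethinkdb/instances.d/node1.conf
bind=192.168.1.177
canonical-address=192.168.1.177

# node2 (secondary), e.g. /etc/rethinkdb/instances.d/node2.conf
bind=192.168.1.178
canonical-address=192.168.1.178
join=192.168.1.177:29015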

Hortonworks edge node usage

I have a 6-node (2 masters + 4 slaves) production cluster with HA configured.
The current topology is:
Master 1 :
Active HBase Master
Hive Metastore
HiveServer2
HST Server
Knox Gateway
Active NameNode
Oozie Server
Active ResourceManager
WebHCat Server
ZooKeeper Server
HST Agent
JournalNode
Metrics Monitor
Master 2 :
App Timeline Server
Standby HBase Master
History Server
Infra Solr Instance
Metrics Collector
Grafana
Standby NameNode
Standby ResourceManager
Spark2 History Server
Zeppelin Notebook
ZooKeeper Server
HST Agent
JournalNode
Metrics Monitor
Clients
SLAVE 1/2/3 :
DataNode
RegionServer
HST Agent
NodeManager
Metrics Monitor
One of the slave nodes also contains: JournalNode + ZooKeeper Server
Now we are planning to add some edge nodes.
Our plan is:
SQL Edge Node :
HCatalog
HiveServer2
WebHCat
Admin Edge Node
Ambari Server
Ranger
Lineage Edge Node
Job History Server
Spark2 History Server
App Timeline Server
Slider Registry Server
End User Access Edge Node
Hue
Knox Edge Node
Knox Gateway
Scheduling Edge Node
Oozie Server
Falcon
What do you think?
What's the best practice?
Which components should move from the master/slave nodes to the edge nodes?
Thanks
Edge nodes are meant to be clients only. No masters/slaves, and very minimal resources other than maybe disk space for staging files over SCP before using hdfs dfs -put.
The Knox Gateway itself is somewhat self-described as a secure edge-node proxy into the cluster, so it depends on whether you are actually using it.
If you aren't using HBase and Zeppelin, then you could probably remove those from the cluster. If you have the resources available, HBase should sit on its own dedicated servers.
The same goes for ZooKeeper - those should ideally be separated for optimal throughput.
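To illustrate the client-only role, the typical workflow on an edge node is just staging and ingest; a minimal sketch with placeholder hostnames and paths:

# copy a file from a workstation to the edge node, then push it into HDFS
scp ./data.csv user@edge-node:/tmp/data.csv
ssh user@edge-node 'hdfs dfs -put /tmp/data.csv /data/raw/'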

Apache Accumulo role assignment

I'm adding Accumulo to my Cloudera cluster.
How should I assign the roles?
I have 4 servers currently running:
1 Server: HDFS NameNode, HDFS Secondary NameNode, HDFS Balancer, Activity Monitor, Cloudera Management Services, Spark Gateway, Spark History Server, YARN Job History Server, YARN ResourceManager, ZooKeeper Server
3 Servers: HDFS DataNode, Kafka Broker, Spark Gateway, YARN NodeManager, ZooKeeper Server
Cloudera wizard asks for assignment of the following Accumulo roles: Master, Tablet Server, Garbage Collector, Monitor, Tracer, Gateway.
Is it OK if the Tablet Server role is assigned to all HDFS DataNodes and all other roles to the first server?
Does it make sense to assign the Accumulo Gateway to the same nodes as the Tablet Servers?
Yes, running the Accumulo Master, Garbage Collector, Monitor, and Tracer on the first server and running Tablet Servers on the others makes sense.
I'm not sure what the "Accumulo Gateway" is; Apache Accumulo has no component/service called "Gateway".
