How to configure WildFly 11 in HA mode with a preferred master?

I am currently using the default HA configuration in WildFly 11. I would like to know how I can specify which particular node is preferred as the singleton master when it is available.
I believe I should change the singleton subsystem, but I do not know how.
<subsystem xmlns="urn:jboss:domain:singleton:1.0">
<singleton-policies default="default">
<singleton-policy name="default" cache-container="server">
<simple-election-policy/>
</singleton-policy>
</singleton-policies>
</subsystem>
EDIT
Run ./jboss-cli.sh (jboss-cli.bat on Windows) and connect to the running server.
Run the command: /subsystem=singleton/singleton-policy=default/election-policy=simple:write-attribute(name=name-preferences,value=[node3,node2,node1])
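To confirm the change took effect, you can read the attribute back in the same CLI session (a standard read-attribute call):
/subsystem=singleton/singleton-policy=default/election-policy=simple:read-attribute(name=name-preferences)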
The standalone-ha.xml was altered to:
<subsystem xmlns="urn:jboss:domain:singleton:1.0">
<singleton-policies default="default">
<singleton-policy name="default" cache-container="server">
<simple-election-policy>
<name-preferences>node3 node2 node1</name-preferences>
</simple-election-policy>
</singleton-policy>
</singleton-policies>
</subsystem>
Now I'd like to know what names to put in place of node3, node2, node1.
How do I define the name of my node?

Step 1: Edit the standalone-ha.xml of the master server and set the name attribute in the server tag, as below:
<server name="master" xmlns="urn:jboss:domain:5.0">
Step 2: Edit the standalone-ha.xml of the slave server and set the name attribute in the server tag, as below:
<server name="slave" xmlns="urn:jboss:domain:5.0">
Step 3: Edit the subsystem singleton in both servers like below:
<subsystem xmlns="urn:jboss:domain:singleton:1.0">
<singleton-policies default="default">
<singleton-policy name="default" cache-container="server">
<simple-election-policy>
<name-preferences>master</name-preferences>
</simple-election-policy>
</singleton-policy>
</singleton-policies>
</subsystem>
When the master drops, the slave takes over; when the master comes back up, it resumes the master role.
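As an alternative to hard-coding the name attribute in each standalone-ha.xml, the node name can be supplied at startup through the standard jboss.node.name system property; a sketch, assuming the default WildFly start scripts:
# On the preferred (master) node:
./standalone.sh -c standalone-ha.xml -Djboss.node.name=master
# On the backup (slave) node:
./standalone.sh -c standalone-ha.xml -Djboss.node.name=slave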

Related

In an XML file, append an attribute to a node element if that attribute does not exist, using a shell script

I have the below requirement for an XML file, using a shell script.
standalone.xml file:
<?xml version='1.0' encoding='UTF-8'?>
<server xmlns="urn:jboss:domain:11.0">
<profile>
<subsystem xmlns="urn:jboss:domain:deployment-scanner:2.0">
<deployment-scanner path="deployments" relative-to="jboss.server.base.dir" scan-enabled="false" scan-interval="5000" deployment-timeout="600"/>
</subsystem>
...........Other subsystem tags.......
</profile>
</server>
I would like to check whether the <deployment-scanner> node contains the attribute "deployment-timeout" or not:
if it exists, read the value, then:
if the value is < 900,
update the value to 900,
else
leave it as it is;
else
append the attribute with a value, i.e. deployment-timeout="900", to the <deployment-scanner> node at the end, like this:
<deployment-scanner path="deployments" relative-to="jboss.server.base.dir" scan-enabled="false" scan-interval="5000" deployment-timeout="900"/>
Thanks in advance for any help.
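A minimal sketch of one way to script this, assuming xmlstarlet is available (any XPath-aware tool would do; local-name() is used because the subsystem element carries a JBoss namespace):
#!/bin/sh
FILE=standalone.xml
XP="//*[local-name()='deployment-scanner']"
# Read the current deployment-timeout attribute, if any.
current=$(xmlstarlet sel -t -v "$XP/@deployment-timeout" "$FILE" 2>/dev/null)
if [ -n "$current" ]; then
    # Attribute exists: raise it to 900 only if the value is lower.
    [ "$current" -lt 900 ] && xmlstarlet ed -L -u "$XP/@deployment-timeout" -v 900 "$FILE"
else
    # Attribute missing: append deployment-timeout="900" to the element.
    xmlstarlet ed -L -i "$XP" -t attr -n deployment-timeout -v 900 "$FILE"
fi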

JBoss 7.1.0: Add default sender to mail subsystem

I am having some trouble adding a default sender to my mail subsystem in JBoss EAP 7.1.0. I'm a noob with the JBoss CLI ;)
Here is my current standalone.xml:
<subsystem xmlns="urn:jboss:domain:mail:3.0">
<mail-session name="java:jboss/mail/Default" jndi-name="java:jboss/mail/Default">
<smtp-server outbound-socket-binding-ref="mail-smtp"/>
</mail-session>
</subsystem>
What I want (I figured out that the property I need is called mail.smtp.from):
<subsystem xmlns="urn:jboss:domain:mail:3.0">
<mail-session name="java:jboss/mail/Default" jndi-name="java:jboss/mail/Default">
<smtp-server outbound-socket-binding-ref="mail-smtp">
<property name="mail.smtp.from" value="test@test.de"/>
</smtp-server>
</mail-session>
</subsystem>
I tried a lot with autocompletion in the JBoss CLI, but with no success. My current attempt is:
/subsystem=mail/mail-session=java\:jboss\/mail\/Default/smtp-server/property=mail.smtp.from:write-attribute(name=value, value=test@test.de)
This leads to "Node path format is wrong around 'smtp-server'". Hope someone can help. Thanks in advance!
I believe what you need is the following, with the from attribute held by the mail-session element:
<subsystem xmlns="urn:jboss:domain:mail:3.0">
<mail-session name="java:jboss/mail/Default" jndi-name="java:jboss/mail/Default" from="test@test.de">
<smtp-server outbound-socket-binding-ref="mail-smtp"/>
</mail-session>
</subsystem>
You can obtain this from your current configuration by running the following CLI command:
/subsystem=mail/mail-session=java\:jboss\/mail\/Default:write-attribute(name=from, value=test@test.de)
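For completeness, a full session would look roughly like this (the --connect flag and the :reload step to apply the change are standard jboss-cli usage):
$JBOSS_HOME/bin/jboss-cli.sh --connect
/subsystem=mail/mail-session=java\:jboss\/mail\/Default:write-attribute(name=from, value=test@test.de)
:reload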

How to boot from CDROM (ISO image) to install the guest OS using virsh

UPDATE on Oct. 4th, 2017: See my answer below. The credit goes to DanielB as I wouldn't have solved the problem without Daniel's help, so I'll accept his answer instead of my own.
I'm a novice in libvirt as well as system administration so excuse me if I'm asking stupid questions, though I've tried to do as much homework as possible beforehand.
My question is: How to boot up from CDROM to install the guest OS right after creating a VM using virsh?
I'm working on Ubuntu Desktop 14.04, virsh 1.2.2.
When I used 'virt-install' and passed the ISO file path as its '--cdrom' argument, I could successfully bring up the virt-viewer window which allowed me to go through the guest OS installation.
As I know I can also create a VM using an XML definition, I dumped the XML definition of the VM which I created using 'virt-install'. I then expected that the 'virt-viewer' window would be brought up automatically when I started the VM so I could install the guest OS. But it didn't.
Below is the XML definition of my VM.
If I enable the loader line, which I marked as "suspicious" below, I get an error message: "error: internal error: cannot load AppArmor profile 'libvirt-1092d51d-3b66-46a2-bf9b-71e13dc91799'". I added that line because I was trying the example given in libvirt's documentation here.
However, if I disable the "loader" line and run virsh create def_domain_test.xml, the domain is created successfully and shown as 'running', but the virt-viewer window is not brought up, so I can't install the guest OS on the VM.
Could anyone help me with that? I don't understand why 'virt-install' can bring up virt-viewer but my XML definition can't. I probably misconfigured the domain XML definition, but I couldn't figure out which specific part is wrong, even though I've tried to read as much documentation as possible.
Feel free to ask for more details if needed.
<!-- Let's call this file 'def_domain_test.xml' -->
<domain type='kvm'>
<name>vm_c2</name>
<memory unit='KiB'>2097152</memory>
<currentMemory unit='KiB'>2097152</currentMemory>
<vcpu placement='static'>1</vcpu>
<os>
<type arch='x86_64' machine='pc-i440fx-trusty'>hvm</type>
<!-- Next line is suspicious! -->
<!-- <loader readonly='yes' secure='no' type='rom'>/usr/lib/xen-4.4/boot/hvmloader</loader> -->
<boot dev='cdrom'/>
<boot dev='hd'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>restart</on_crash>
<devices>
<emulator>/usr/bin/kvm-spice</emulator>
<!-- Here is the hard drive that doesn't have OS installed. -->
<disk type='file' device='disk'>
<driver name='qemu' type='raw'/>
<source file='/home/me/me/testing/vm/pool/mvs_vol_c2'/>
<target dev='hda' bus='ide'/>
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
<!-- Here is the Ubuntu ISO. -->
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<source file='/home/me/me/testing/vm/ubuntu-14.04.5-server-amd64.iso'/>
<target dev='hdc' bus='ide'/>
<readonly/>
<alias name='ide0-1-0'/>
<address type='drive' controller='0' bus='1' target='0' unit='0'/>
</disk>
<controller type='usb' index='0'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pci-root'/>
<controller type='ide' index='0'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
</controller>
<interface type='network'>
<source network='default'/>
<model type='rtl8139'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<serial type='pty'>
<target port='0'/>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<graphics type='vnc' port='-1' autoport='yes'/>
<video>
<model type='cirrus' vram='9216' heads='1'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</video>
<memballoon model='virtio'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</memballoon>
</devices>
</domain>
virsh is a very low-level tool whose commands map directly to individual libvirt API calls. An installation done by virt-install will make many API calls to accomplish its job, so just taking the final XML of the installed guest and passing it to virsh define is not equivalent.
For a start, virt-install will usually change the XML: it first creates a transient guest with an XML doc suitable for booting off the CDROM, and after that completes it will change the XML to boot off the disk instead. virt-install also manually launches virt-viewer to display the console, which is not something virsh does.
That particular <loader> line should never be used with KVM; it is only relevant for Xen. By using it you've told KVM to run Xen paravirt code as its BIOS instead of SeaBIOS, and this will certainly crash and burn.
If you use the '--debug' arg to virt-install you'll see details of what it does at each step. You could also set LIBVIRT_LOG_FILTERS=1:libvirt and LIBVIRT_LOG_OUTPUTS=1:stderr if you want to see details of every libvirt API call made.
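For example, in the shell that will run virt-install (variable names and values exactly as above):
# Log every libvirt API call to stderr, then run virt-install with --debug:
export LIBVIRT_LOG_FILTERS="1:libvirt"
export LIBVIRT_LOG_OUTPUTS="1:stderr"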
Thanks for DanielB's help! The '--debug' argument of virt-install does reveal the information I need to solve this problem.
First of all, in the XML definition, I don't need the <loader> line. The <os> section should be:
<os>
<type arch='x86_64' machine='pc-i440fx-trusty'>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
</os>
The two <boot> tags already specify the order of booting.
Secondly, virt-install's debug output suggests the desired way to bring up virt-viewer:
Run: virt-viewer --connect=qemu:///system --wait vm_c2
Optionally, you can add '--debug' and '--verbose' to virt-viewer to see more output.
At this moment, the viewer window should be brought up and a message is shown: Waiting for guest domain to be created.
Run: virsh create --file def_domain_test.xml.
The viewer would now be activated and display the output of the VM.
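Putting the two steps together in one shell (commands exactly as above; the trailing & simply backgrounds the viewer):
# Start the viewer first; --wait makes it wait for the domain to appear.
virt-viewer --connect=qemu:///system --wait vm_c2 &
# Now create the transient domain; the viewer activates as it boots.
virsh create --file def_domain_test.xml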
Here is one caveat that got me stuck at the beginning: you don't have to start virt-viewer prior to the VM. However, if you start the VM first and then the viewer, the viewer screen may be entirely black, which may make you think nothing is happening. In this case, click the viewer window to give it input focus, then press the Enter key; the screen may refresh and show what is actually there. (Resizing the window doesn't force a refresh.)
FYI: If you output the virt-viewer's debug messages, you may see a message like this:
(virt-viewer:6296): virt-viewer-DEBUG: Error operation forbidden: read only access prevents virDomainOpenGraphics
This doesn't seem to cause me any problems, but it may hint at other problems if virt-viewer doesn't work correctly for you.

How to connect to a Kerberos-secured Apache Phoenix data source with WildFly?

I have recently spent several weeks trying to get WildFly to successfully connect to a Kerberized Apache Phoenix data source. There is a surprisingly limited amount of documentation on how to do this, but now that I have cracked it, I'm sharing.
Environment:
WildFly 9+. An equivalent JBoss version should also work (but untested). WildFly 8 does not contain the required org.jboss.security.negotiation.KerberosLoginModule class (but you can hack it, see Kerberos sql server datasource in Wildfly 8.2). I used WildFly 10.1.0.Final, and used a standalone deployment.
Apache Phoenix 4.2.0.2.2.4.10. I have not tested any other version.
Kerberos v5. My KDC is running on Windows Active Directory, but this should not make a noticeable difference.
My Hadoop environment is a Hortonworks distribution maintained by Ambari. Ambari ensures that all of the configuration files and Kerberos implementation settings are correct.
Firstly, you'll want to add a system property to WildFly's standalone.xml to specify the location of the Kerberos configuration file:
...
</extensions>
<system-properties>
<property name="java.security.krb5.conf" value="/path/to/krb5.conf"/>
</system-properties>
...
I'm not going to go into the format of the krb5.conf file here, as it is dependent on your own implementation of Kerberos. What is important is that it contains the default realm and network location of the KDC. On Linux you can normally find it at /etc/krb5.conf or /etc/security/krb5.conf. If you're running WildFly on Windows, then make sure you use forward-slashes in your path, e.g. "C:/Source/krb5.conf"
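Equivalently, if you prefer the CLI to editing standalone.xml by hand, the same system property can be added like this (a standard system-property add; the path is a placeholder):
/system-property=java.security.krb5.conf:add(value="/path/to/krb5.conf")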
Secondly, add two new security domains to standalone.xml: one called "Client", which is used by ZooKeeper, and another called "host", which is used by WildFly. Do not ask me why (it caused me so much pain), but the name of the "Client" security domain must match the one defined in ZooKeeper's JAAS client configuration file on the server. If you've set up with Ambari, "Client" is the default name. Also note that you cannot simply provide a jaas.config file as a system property; you must define it here:
<security-domain name="Client" cache-type="default">
<login-module code="com.sun.security.auth.module.Krb5LoginModule" flag="required">
<module-option name="useTicketCache" value="true"/>
<module-option name="debug" value="true"/>
</login-module>
</security-domain>
<security-domain name="host" cache-type="default">
<login-module code="org.jboss.security.negotiation.KerberosLoginModule" flag="required" module="org.jboss.security.negotiation">
<module-option name="useTicketCache" value="true"/>
<module-option name="debug" value="true"/>
<module-option name="refreshKrb5Config" value="true"/>
<module-option name="addGSSCredential" value="true"/>
</login-module>
</security-domain>
The module options will vary depending on your implementation. I'm getting my tickets from the default Java ticket cache, which is defined in the java.security file of your JRE, but you can supply a keytab here if you want. Note that setting storeKey to true broke my implementation. Check the Java documentation for all of the options. Note that each security domain uses a different login module; this is not by accident: Phoenix does not know how to use the org.jboss... version.
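For reference, a keytab-based variant of the module options would look something like the following (standard JAAS options for com.sun.security.auth.module.Krb5LoginModule; the principal and keytab path are placeholders, and recall the storeKey warning above):
<module-option name="useKeyTab" value="true"/>
<module-option name="keyTab" value="/path/to/wildfly.keytab"/>
<module-option name="principal" value="wildfly@EXAMPLE.COM"/>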
Now you need to provide WildFly with the org.apache.phoenix.jdbc.PhoenixDriver class in phoenix-<version>-client.jar. Create the following directory tree under the WildFly directory:
/modules/system/layers/base/org/apache/phoenix/main/
In the main directory, paste the phoenix-<version>-client.jar, which you can find on the server (e.g. /usr/hdp/<version>/phoenix/client/bin), and create a module.xml file:
<?xml version="1.0" ?>
<module xmlns="urn:jboss:module:1.1" name="org.apache.phoenix">
<resources>
<resource-root path="phoenix-<version>-client.jar">
<filter>
<exclude-set>
<path name="javax" />
<path name="org/xml" />
<path name="org/w3c/dom" />
<path name="org/w3c/sax" />
<path name="javax/xml/parsers" />
<path name="com/sun/org/apache/xerces/internal/jaxp" />
<path name="org/apache/xerces/jaxp" />
<path name="com/sun/jersey/core/impl/provider/xml" />
</exclude-set>
</filter>
</resource-root>
<resource-root path=".">
</resource-root>
</resources>
<dependencies>
<module name="javax.api"/>
<module name="sun.jdk"/>
<module name="org.apache.log4j"/>
<module name="javax.transaction.api"/>
<module name="org.apache.commons.logging"/>
</dependencies>
</module>
You also need to paste the hbase-site.xml and core-site.xml from the server into the main directory. These are typically located in /usr/hdp/<version>/hbase/conf and /usr/hdp/<version>/hadoop/conf. If you don't add these, you will get a lot of unhelpful ZooKeeper getMaster errors! If you want the driver to log to the same place as WildFly, then you should also create a log4j.xml file in the main directory. You can find an example elsewhere on the web. The <resource-root path="."></resource-root> element is what adds those xml files to the classpath when deployed by WildFly.
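As a shell sketch, this whole step looks like the following (the <version> segments are placeholders as above, and JBOSS_HOME is assumed to point at your WildFly directory):
cd "$JBOSS_HOME"
mkdir -p modules/system/layers/base/org/apache/phoenix/main
# The Phoenix client jar from the cluster node:
cp /usr/hdp/<version>/phoenix/client/bin/phoenix-*-client.jar modules/system/layers/base/org/apache/phoenix/main/
# The Hadoop/HBase configuration files the driver needs on its classpath:
cp /usr/hdp/<version>/hbase/conf/hbase-site.xml /usr/hdp/<version>/hadoop/conf/core-site.xml modules/system/layers/base/org/apache/phoenix/main/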
Finally, add a new datasource and driver in the <subsystem xmlns="urn:jboss:domain:datasources:2.0"> section. You can do this with the CLI or by directly editing standalone.xml; I did the latter:
<datasource jndi-name="java:jboss/datasources/PhoenixDS" pool-name="PhoenixDS" enabled="true" use-java-context="true">
<connection-url>jdbc:phoenix:first.quorumserver.fqdn,second.quorumserver.fqdn:2181/hbase-secure</connection-url>
<connection-property name="phoenix.connection.autoCommit">true</connection-property>
<driver>phoenix</driver>
<validation>
<check-valid-connection-sql>SELECT 1 FROM SYSTEM.CATALOG LIMIT 1</check-valid-connection-sql>
</validation>
<security>
<security-domain>host</security-domain>
</security>
</datasource>
<drivers>
<driver name="phoenix" module="org.apache.phoenix">
<xa-datasource-class>org.apache.phoenix.jdbc.PhoenixDriver</xa-datasource-class>
</driver>
</drivers>
It's important that you replace first.quorumserver.fqdn,second.quorumserver.fqdn with the correct ZooKeeper quorum string for your environment. You can find this in hbase-site.xml in the HBase configuration directory: hbase.zookeeper.quorum. You don't need to add Kerberos information to the connection URL string!
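A quick way to check that value on the server (a plain grep against the file mentioned above):
grep -A1 'hbase.zookeeper.quorum' /usr/hdp/<version>/hbase/conf/hbase-site.xml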
tl;dr
Make sure that hbase-site.xml and core-site.xml are in your classpath.
Make sure that you have a <security-domain> with a name that ZooKeeper expects (probably "Client"), that uses the com.sun.security.auth.module.Krb5LoginModule.
The Phoenix connection URL must contain the entire ZooKeeper quorum. You can't miss one server out! Make sure it matches the value in hbase-site.xml.
References:
Using Kerberos for Datasource Authentication
Phoenix data source configuration by Mark S

Session clustering with Tomcat + Terracotta on a single server

I want to set up session clustering with Terracotta and two Tomcat instances on a single server.
I am following the instructions from:
http://artur.ejsmont.org/blog/content/how-to-setup-terracotta-session-clustering-and-replication-for-apache-tomcat-6
This is my tc-config.xml:
<?xml version="1.0" encoding="UTF-8"?>
<tc:tc-config xmlns:tc="http://www.terracotta.org/config"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.terracotta.org/schema/terracotta-4.xsd">
<servers>
<server name="nodea" host="localhost">
<data>/home/meruvian/mydatafolder</data>
<logs>/home/meruvian/mylogsfolder</logs>
<l2-group-port>9530</l2-group-port>
</server>
<server name="nodeb" host="localhost">
<data>/home/meruvian/mydatafolder</data>
<logs>/home/meruvian/mylogsfolder</logs>
<l2-group-port>9530</l2-group-port>
</server>
</servers>
<clients>
<logs>/var/log/myclientlogsfolder</logs>
<modules>
<module name="tim-tomcat-6.0" version="2.2.0"/>
</modules>
</clients>
<application>
<dso>
<instrumented-classes>
<include>
<class-expression>*..*</class-expression>
</include>
<exclude>org.apache.coyote..*</exclude>
<exclude>org.apache.catalina..*</exclude>
<exclude>org.apache.jasper..*</exclude>
<exclude>org.apache.tomcat..*</exclude>
</instrumented-classes>
<web-applications>
<web-application>sessionapp</web-application>
</web-applications>
</dso>
</application>
</tc:tc-config>
Then when I try to execute the command:
./start-tc-server.sh -f ~/Terracotta/terracotta-3.6.2/tc-config.xml
I get an error message like the one below:
Fatal Terracotta startup exception:
*******************************************************************************
You have not specified a name for your Terracotta server, and there are 2 servers defined in the Terracotta configuration file. The script can not automatically choose between the following server names: nodea, nodeb. Pass the desired server name to the script using the -n flag.
*******************************************************************************
What is the meaning of
<web-application>sessionapp</web-application>
Is it the context path of my app?
Can anyone help me solve this and get session clustering working with Tomcat + Terracotta?
Thanks
I am by no means an authority on Terracotta, but as I see it there are two problems here:
You have specified two servers running on localhost with no port specifications. Both will try to take port 9510 (the default DSO port), which will cause a conflict. You need to specify different DSO ports; see the sketch after this list.
Assuming you fix the port configuration, you still have both servers running on localhost, so Terracotta needs to know which server you are trying to start.
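A sketch of the <servers> section with distinct ports per node (element names per the Terracotta 3.x schema; the port numbers are arbitrary examples, and the separate data/logs directories per node are an added precaution, not from your original config). Note that the duplicated l2-group-port 9530 in your config would clash on one host as well:
<servers>
<server name="nodea" host="localhost">
<data>/home/meruvian/mydatafolder/nodea</data>
<logs>/home/meruvian/mylogsfolder/nodea</logs>
<dso-port>9510</dso-port>
<jmx-port>9520</jmx-port>
<l2-group-port>9530</l2-group-port>
</server>
<server name="nodeb" host="localhost">
<data>/home/meruvian/mydatafolder/nodeb</data>
<logs>/home/meruvian/mylogsfolder/nodeb</logs>
<dso-port>9511</dso-port>
<jmx-port>9521</jmx-port>
<l2-group-port>9531</l2-group-port>
</server>
</servers>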
Use this command to start nodea:
./start-tc-server.sh -f ~/Terracotta/terracotta-3.6.2/tc-config.xml -n nodea
Similarly for nodeb.
See if that helps.
