H2 database console: how does the setting -webAllowOthers work?

I'm starting the H2 console from Spring:
<spring:bean id="H2WebServer" class="org.h2.tools.Server"
scope="singleton" factory-method="createWebServer" init-method="start"
destroy-method="stop">
<spring:constructor-arg value="-web,-webAllowOthers,true,-webPort,8082" />
</spring:bean>
H2 version is 1.3.160
I do not want the console to be accessible from other computers in my network.
The documentation says to use the webAllowOthers setting to allow or deny access from other computers.
But if I set "-webAllowOthers,false", the console is still available on my local network.
I also checked the .h2.server.properties file.
How is the setting supposed to work?

If you don't want to allow other computers, remove the -webAllowOthers option entirely:
Like:
<spring:constructor-arg value="-web,-webPort,8082" />
Otherwise, keep just -webAllowOthers, without a true/false value:
<spring:constructor-arg value="-web,-webAllowOthers,-webPort,8082" />
With the option removed, the console responds to remote machines with "Sorry, remote connections are disabled on this server".
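The same applies if you create the web server programmatically rather than through Spring; a minimal sketch using H2's org.h2.tools.Server API (same port as in the question):

import java.sql.SQLException;
import org.h2.tools.Server;

public class LocalOnlyH2Console {
    public static void main(String[] args) throws SQLException {
        // No -webAllowOthers: the console accepts connections from localhost only
        Server console = Server.createWebServer("-webPort", "8082").start();
        System.out.println("H2 console running at " + console.getURL());
        // console.stop(); // call when shutting down
    }
}

Add "-webAllowOthers" to the argument list only if you do want other machines to connect.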

You can always use firewall rules as well ;) iptables on Linux and such.

Related

HTTP service is not getting discovered in OpenNMS for Tomcat 8.5 without sendReasonPhrase

The HttpMonitor config was working fine with Tomcat 7 and the HTTP service was being detected. However, since we updated to Tomcat 8.5, the HTTP service is not discovered unless we set the connector property sendReasonPhrase="true". The difference in the curl response between Tomcat 7 and Tomcat 8.5 (without the sendReasonPhrase parameter set) is that
Tomcat 7 includes "OK" in its status line and Tomcat 8.5 does not.
However, the sendReasonPhrase option is deprecated and will be removed in Tomcat 9, so it will not be available there at all (https://tomcat.apache.org/tomcat-8.5-doc/config/http.html).
I am not sure why the HttpMonitor in OpenNMS cannot detect the HTTP service, even though the snmpwalk output shows port 80 as available (both with and without sendReasonPhrase).
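To show exactly what the monitor receives, here is a minimal sketch (the host name is a placeholder) that prints the raw status line returned by the connector; Tomcat 7 typically answers "HTTP/1.1 200 OK" while Tomcat 8.5 without sendReasonPhrase answers just "HTTP/1.1 200":

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class StatusLineCheck {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("tomcat.example.com", 80)) {
            OutputStream out = socket.getOutputStream();
            out.write("GET / HTTP/1.1\r\nHost: tomcat.example.com\r\nConnection: close\r\n\r\n"
                    .getBytes(StandardCharsets.US_ASCII));
            out.flush();
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream(), StandardCharsets.US_ASCII));
            String statusLine = in.readLine();
            System.out.println(statusLine);
            // A check that insists on a reason phrase fails against Tomcat 8.5:
            System.out.println("strict match: " + statusLine.matches("HTTP/1\\.[01] 200 OK.*"));
        }
    }
}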
The HTTP service definition in poller-configuration.xml looks like this:
<service name="HTTP" interval="300000" user-defined="false" status="on">
<parameter key="retry" value="1"/>
<parameter key="timeout" value="3000"/>
<parameter key="port" value="80"/>
<parameter key="url" value="/"/>
<parameter key="rrd-repository" value="/var/lib/opennms/rrd/response"/>
<parameter key="rrd-base-name" value="http"/>
<parameter key="ds-name" value="http"/>
</service>
We even tried setting
<parameter key="response-text" value="~\bOK\b"/>
and
<parameter key="response" value="200"/>
however, it did not help. I guess the above parameters only come into play once the service has been discovered, whereas the problem here is that the service is not getting discovered at all (unless sendReasonPhrase is turned on): the HTTP service on Tomcat 8.5 is only detected if sendReasonPhrase is enabled in the connector definition inside server.xml.
Kindly help me understand this behaviour and suggest a possible solution that does not require any change on the client side.
OpenNMS version info:

How to connect to a Kerberos-secured Apache Phoenix data source with WildFly?

I have recently spent several weeks trying to get WildFly to successfully connect to a Kerberized Apache Phoenix data source. There is a surprisingly limited amount of documentation on how to do this, but now that I have cracked it, I'm sharing.
Environment:
WildFly 9+. An equivalent JBoss version should also work (untested). WildFly 8 does not contain the required org.jboss.security.negotiation.KerberosLoginModule class (but you can hack it, see Kerberos sql server datasource in Wildfly 8.2). I used WildFly 10.1.0.Final with a standalone deployment.
Apache Phoenix 4.2.0.2.2.4.10. I have not tested any other version.
Kerberos v5. My KDC is running on Windows Active Directory, but this should not make a noticeable difference.
My Hadoop environment is a HortonWorks version, and maintained by Ambari. Ambari ensures that all of the configuration files and Kerberos implementation settings are correct.
Firstly, you'll want to add a system property to WildFly's standalone.xml to specify the location of the Kerberos configuration file:
...
</extensions>
<system-properties>
<property name="java.security.krb5.conf" value="/path/to/krb5.conf"/>
</system-properties>
...
I'm not going to go into the format of the krb5.conf file here, as it is dependent on your own implementation of Kerberos. What is important is that it contains the default realm and network location of the KDC. On Linux you can normally find it at /etc/krb5.conf or /etc/security/krb5.conf. If you're running WildFly on Windows, then make sure you use forward-slashes in your path, e.g. "C:/Source/krb5.conf"
Secondly, add two new security domains to standalone.xml - one called "Client" which is used by ZooKeeper, and another called "host", which is used by WildFly. Do not ask me why (it caused me so much pain) but the name of the "Client" security domain must match that defined in Zookeeper's JAAS client configuration file on the server. If you've set up with Ambari, "Client" is the default name. Also note that you cannot simply provide a jaas.config file as a system property, you must define it here:
<security-domain name="Client" cache-type="default">
<login-module code="com.sun.security.auth.module.Krb5LoginModule" flag="required">
<module-option name="useTicketCache" value="true"/>
<module-option name="debug" value="true"/>
</login-module>
</security-domain>
<security-domain name="host" cache-type="default">
<login-module code="org.jboss.security.negotiation.KerberosLoginModule" flag="required" module="org.jboss.security.negotiation">
<module-option name="useTicketCache" value="true"/>
<module-option name="debug" value="true"/>
<module-option name="refreshKrb5Config" value="true"/>
<module-option name="addGSSCredential" value="true"/>
</login-module>
</security-domain>
The module options will vary depending on your implementation. I'm getting my tickets from the default Java ticket cache, which is defined in the java.security file of your JRE, but you can supply a keytab here if you want. Note that setting storeKey to true broke my implementation. Check the Java documentation for all of the options. Note that each security domain uses a different login module: this is not by accident - Phoenix does not know how to use the org.jboss... version.
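If you want to verify the Kerberos side in isolation before involving WildFly, a small standalone JAAS check can help; a minimal sketch, where the file paths and the jaas.conf entry name ("Client") are assumptions matching the setup above:

import javax.security.auth.login.LoginContext;

public class KerberosLoginCheck {
    public static void main(String[] args) throws Exception {
        // Point the JVM at the same Kerberos and JAAS configuration WildFly will use
        System.setProperty("java.security.krb5.conf", "/path/to/krb5.conf");
        System.setProperty("java.security.auth.login.config", "/path/to/jaas.conf");
        // jaas.conf is assumed to contain:
        // Client { com.sun.security.auth.module.Krb5LoginModule required useTicketCache=true; };
        LoginContext lc = new LoginContext("Client");
        lc.login();
        System.out.println("Authenticated principals: " + lc.getSubject().getPrincipals());
        lc.logout();
    }
}

If this fails, fix your krb5.conf or ticket cache before debugging the WildFly security domains.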
Now you need to provide WildFly with the org.apache.phoenix.jdbc.PhoenixDriver class in phoenix-<version>-client.jar. Create the following directory tree under the WildFly directory:
/modules/system/layers/base/org/apache/phoenix/main/
In the main directory, paste the phoenix-<version>-client.jar, which you can find on the server (e.g. /usr/hdp/<version>/phoenix/client/bin), and create a module.xml file:
<?xml version="1.0" ?>
<module xmlns="urn:jboss:module:1.1" name="org.apache.phoenix">
<resources>
<resource-root path="phoenix-<version>-client.jar">
<filter>
<exclude-set>
<path name="javax" />
<path name="org/xml" />
<path name="org/w3c/dom" />
<path name="org/w3c/sax" />
<path name="javax/xml/parsers" />
<path name="com/sun/org/apache/xerces/internal/jaxp" />
<path name="org/apache/xerces/jaxp" />
<path name="com/sun/jersey/core/impl/provider/xml" />
</exclude-set>
</filter>
</resource-root>
<resource-root path=".">
</resource-root>
</resources>
<dependencies>
<module name="javax.api"/>
<module name="sun.jdk"/>
<module name="org.apache.log4j"/>
<module name="javax.transaction.api"/>
<module name="org.apache.commons.logging"/>
</dependencies>
</module>
You also need to paste the hbase-site.xml and core-site.xml from the server into the main directory. These are typically located in /usr/hdp/<version>/hbase/conf and /usr/hdp/<version>/hadoop/conf. If you don't add these, you will get a lot of unhelpful ZooKeeper getMaster errors! If you want the driver to log to the same place as WildFly, then you should also create a log4j.xml file in the main directory. You can find an example elsewhere on the web. The <resource-root path="."></resource-root> element is what adds those xml files to the classpath when deployed by WildFly.
Finally, add a new datasource and driver in the <subsystem xmlns="urn:jboss:domain:datasources:2.0"> section. You can do this with the CLI or by directly editing standalone.xml; I did the latter:
<datasource jndi-name="java:jboss/datasources/PhoenixDS" pool-name="PhoenixDS" enabled="true" use-java-context="true">
<connection-url>jdbc:phoenix:first.quorumserver.fqdn,second.quorumserver.fqdn:2181/hbase-secure</connection-url>
<connection-property name="phoenix.connection.autoCommit">true</connection-property>
<driver>phoenix</driver>
<validation>
<check-valid-connection-sql>SELECT 1 FROM SYSTEM.CATALOG LIMIT 1</check-valid-connection-sql>
</validation>
<security>
<security-domain>host</security-domain>
</security>
</datasource>
<drivers>
<driver name="phoenix" module="org.apache.phoenix">
<xa-datasource-class>org.apache.phoenix.jdbc.PhoenixDriver</xa-datasource-class>
</driver>
</drivers>
It's important that you replace first.quorumserver.fqdn,second.quorumserver.fqdn with the correct ZooKeeper quorum string for your environment. You can find this in hbase-site.xml in the HBase configuration directory: hbase.zookeeper.quorum. You don't need to add Kerberos information to the connection URL string!
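Once the datasource deploys cleanly, a quick way to sanity-check it from application code is a plain JNDI lookup; a minimal sketch, assuming the JNDI name and validation query defined above:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import javax.naming.InitialContext;
import javax.sql.DataSource;

// Run from inside a deployed component (e.g. a servlet or startup bean)
public class PhoenixDsCheck {
    public void check() throws Exception {
        InitialContext ctx = new InitialContext();
        DataSource ds = (DataSource) ctx.lookup("java:jboss/datasources/PhoenixDS");
        try (Connection conn = ds.getConnection();
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1 FROM SYSTEM.CATALOG LIMIT 1")) {
            if (rs.next()) {
                System.out.println("Phoenix datasource is reachable");
            }
        }
    }
}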
tl;dr
Make sure that hbase-site.xml and core-site.xml are in your classpath.
Make sure that you have a <security-domain> with a name that ZooKeeper expects (probably "Client"), that uses the com.sun.security.auth.module.Krb5LoginModule.
The Phoenix connection URL must contain the entire ZooKeeper quorum. You can't miss one server out! Make sure it matches the value in hbase-site.xml.
References:
Using Kerberos for Datasource Authentication
Phoenix data source configuration by Mark S

Polling directory with multiple sub-directories

I am trying to build a simple utility that will copy files from multiple directories on one SFTP server to another server.
I tried using the SFTP outbound gateway to poll a single high-level directory with the "mget" command, but it did not work. So I thought of writing two inbound adapters (not a good solution, but I still badly wanted to get this done!).
<int-sftp:inbound-channel-adapter
id="pdbInbound"
session-factory="sftpSessionFactory"
auto-create-local-directory="true" delete-remote-files="true"
filename-pattern="*.*" remote-directory="${remote.pdb.directory}"
local-directory="${local.pdb.directory}">
<int:poller fixed-rate="5000"/>
</int-sftp:inbound-channel-adapter>
<int-sftp:inbound-channel-adapter
id="galaxyInbound"
session-factory="sftpSessionFactory"
auto-create-local-directory="true" delete-remote-files="true"
filename-pattern="*.*" remote-directory="${remote.galaxy.directory}"
local-directory="${local.galaxy.directory}" >
<int:poller fixed-rate="5000"/>
</int-sftp:inbound-channel-adapter>
The above code works perfectly fine and files are copied to the local directories as expected.
The problem appears when I need to transfer these files to a remote directory with the same directory structure as the source directory. I could not achieve this using the SFTP outbound gateway with command="mput" and command-options="-R", so I tried to write two outbound adapters as below. But only one directory is written to the remote server.
Any idea what is going wrong here?
<int:service-activator input-channel="pdbInbound" output-channel="pdbOutbound" expression="payload"/>
<int:service-activator input-channel="galaxyInbound" output-channel="galaxyOutbound" expression="payload"/>
<int-sftp:outbound-channel-adapter id="sftPdbOutboundAdapter" auto-create-directory="true"
session-factory="sftpSessionFactory"
auto-startup="true"
channel="pdbOutbound"
charset="UTF-8"
remote-file-separator="/"
remote-directory="${remote.out.pdb.directory}"
mode="REPLACE">
</int-sftp:outbound-channel-adapter>
<int-sftp:outbound-channel-adapter id="sftpGalaxyOutboundAdapter" auto-create-directory="true"
auto-startup="true"
session-factory="sftpSessionFactory"
channel="galaxyOutbound"
charset="UTF-8"
remote-file-separator="/"
remote-directory="${remote.out.galaxy.directory}"
mode="REPLACE">
</int-sftp:outbound-channel-adapter>
<int:poller default="true" fixed-delay="50"/>
Note: I am using the same SFTP server (but different directories) for inbound and outbound files, for testing purposes.
You need to explain your issues in more detail - "did not work" is woefully inadequate and you won't get much help here with such a question. You need to show what you tried and what you observed.
There are test cases for both recursive mget and recursive mput.
The directory structure for the tests is shown in a comment at the top of that file.
I suggest you compare those with what you tried and come back here if you have a specific question/observation. The best thing to do to solve these issues is to turn on DEBUG logging, including for JSch.
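For the JSch part specifically, you can route its internal logging to your own output if your logging configuration does not already capture it; a minimal sketch:

import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Logger;

public class JschDebugLogging {
    public static void enable() {
        // Route JSch's internal messages to stdout (adapt to slf4j/log4j as needed)
        JSch.setLogger(new Logger() {
            @Override
            public boolean isEnabled(int level) {
                return true; // log everything while debugging
            }
            @Override
            public void log(int level, String message) {
                System.out.println("JSCH [" + level + "] " + message);
            }
        });
    }
}

Call JschDebugLogging.enable() once at startup, before the session factory opens any connections.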

ActiveMQ - delete/purge all queues via command line

Is there a way to delete / purge all queues in ActiveMQ via the command line (win/linux)?
I could only find the commands for a specific queue.
Or maybe there's a way to do this via the ActiveMQ admin? Again, I only found how to delete/purge the queues one by one, which can be very tedious.
Thanks!
You can tweak your activemq.xml a bit:
<broker deleteAllMessagesOnStartup="true" ...>
This works with KahaDB message stores (it has problems with JDBC message stores): all your messages get deleted and the queues are subsequently cleared.
Since you want all queues to be deleted, restarting the broker is not a costly way to clean everything up.
Note that the purge will happen on every restart.
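If restarting the broker is not an option, a small JMX client can purge every queue in place instead; a minimal sketch, assuming the broker exposes JMX at the default service URL and is named "localhost" (adjust both to your setup):

import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class PurgeAllQueues {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            // Queue MBean pattern for ActiveMQ 5.8+; older brokers use
            // "org.apache.activemq:BrokerName=localhost,Type=Queue,Destination=*"
            ObjectName pattern = new ObjectName(
                    "org.apache.activemq:type=Broker,brokerName=localhost,"
                    + "destinationType=Queue,destinationName=*");
            Set<ObjectName> queues = mbsc.queryNames(pattern, null);
            for (ObjectName queue : queues) {
                mbsc.invoke(queue, "purge", new Object[0], new String[0]);
                System.out.println("Purged " + queue.getKeyProperty("destinationName"));
            }
        }
    }
}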
I developed my own ActiveMQ command line utility (activemq-cli) to do this. You can find it here: https://github.com/antonwierenga/activemq-cli (command 'purge-all-queues' or 'remove-all-queues').
As of version 5.0, it looks like this can be done using the CLI provided with ActiveMQ itself:
$ ActiveMQ/bin/activemq purge
1. Go to the AMQ bin folder, in my case:
cd /opt/amq/bin
2. Run the AMQ client:
./client
3. Run purge on the desired queue:
activemq:purge <QUEUE NAME HERE>
Another possibility is to deploy a small Camel route in a container (e.g. Apache ServiceMix), or simply to execute a Java program that contains the route.
For example, here is the route I currently use on my development computer, where I also have ServiceMix installed:
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:ext="http://aries.apache.org/blueprint/xmlns/blueprint-ext/v1.0.0"
xmlns:cm="http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.1.0"
xsi:schemaLocation="
http://www.osgi.org/xmlns/blueprint/v1.0.0 http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd
http://camel.apache.org/schema/blueprint http://camel.apache.org/schema/blueprint/camel-blueprint.xsd
http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.1.0 http://aries.apache.org/schemas/blueprint-cm/blueprint-cm-1.1.0.xsd">
<cm:property-placeholder persistent-id="amq.cleanup" update-strategy="reload">
<cm:default-properties>
<cm:property name="amq.local.url" value="tcp://localhost:61616" />
</cm:default-properties>
</cm:property-placeholder>
<camelContext xmlns="http://camel.apache.org/schema/blueprint">
<onException useOriginalMessage="true">
<exception>java.lang.Exception</exception>
<handled>
<constant>true</constant>
</handled>
<to uri="activemq:queue:CLEANUP_DLQ" />
</onException>
<route id="drop-all-queues" autoStartup="true">
<from uri="activemq:queue:*.>" />
<stop/>
</route>
</camelContext>
<bean id="activemq" class="org.apache.activemq.camel.component.ActiveMQComponent">
<property name="brokerURL" value="${amq.local.url}" />
</bean>
</blueprint>
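If you prefer the plain Java program mentioned above over a blueprint deployment, here is a minimal sketch of the same idea (the broker URL is assumed to be the default, and the DLQ handling from the blueprint is omitted):

import org.apache.activemq.camel.component.ActiveMQComponent;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.main.Main;

public class DropAllQueues {
    public static void main(String[] args) throws Exception {
        Main main = new Main();
        // Register the ActiveMQ component under the name used in the route URI
        main.bind("activemq", ActiveMQComponent.activeMQComponent("tcp://localhost:61616"));
        main.addRouteBuilder(new RouteBuilder() {
            @Override
            public void configure() {
                // Consume from every matching queue and drop the messages
                from("activemq:queue:*.>").stop();
            }
        });
        main.run(); // blocks; stop with Ctrl+C once the queues are drained
    }
}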

Spring RMI: handleRemoteConnectFailure

I have an RMI client/server configuration created with Spring 3.0.
When client and server run on the same machine at the url:
rmi://localhost:1099/myService
everything is ok. When I run the client on a different machine (the server now runs on 192.168.1.67) and the client "points" to:
rmi://192.168.1.67:1099/myService
I can see this error message from the client:
org.spring...RmiClientInterceptor handleRemoteConnectFailure.
Could not connect to Rmi Service [rmi://192.168.1.67:1099/myService]
The server is configured in this way:
<bean id="myService" class="org.springframework.remoting.rmi.RmiServiceExporter">
<property name="service" ref="myService"/>
<property name="serviceInterface" value="org.myapp.MyService"/>
<property name="serviceName" value="myService"/>
<property name="alwaysCreateRegistry" value="true"/>
</bean>
<bean id="myService" class="org.myapp.MyServiceImpl" />
and the client:
RmiProxyFactoryBean rpfb = new RmiProxyFactoryBean();
rpfb.setServiceInterface(MyService.class);
rpfb.setLookupStubOnStartup(true);
rpfb.setRefreshStubOnConnectFailure(true);
RMICustomClientSocketFactory socketFactory = new RMICustomClientSocketFactory();
socketFactory.setTimeout(5000);
rpfb.setRegistryClientSocketFactory(socketFactory);
rpfb.setServiceUrl(getRmiUrl(address, port));
rpfb.afterPropertiesSet();
I checked port 1099 on the server with a sniffer, and when the client starts its process I can see some data "dispatched" on the server side:
JRMI..K
...192.168.1.65..
..192.168.1.65....
P....w"..........................D.M...;.t..myService
Q....w.....e...7B+#5..s}.....5org.springframework.remoting.rmi.RmiInvocationHandlerpxr..java.lang.reflect.Proxy.'. ..C....L..ht.%Ljava/lang/reflect/InvocationHandler;pxpsr.-java.rmi.server.RemoteObjectInvocationHandler...........pxr..java.rmi.server.RemoteObject.a...a3....pxpw2.
UnicastRef..127.0.1.1..../.T~.X.....e...7B+#5...x
R
S
T...e...7B+#5..
My question is: why does everything work when the client and server run on the same machine, but fail when they run on different machines? And how do I fix it?
I ran the server on Windows and the client on Linux (Ubuntu) and everything was OK.
When I ran the server on Linux and the client on Windows, I got the problem.
To fix it on Linux, just run the server with -Djava.rmi.server.hostname=192.168.1.67. (The sniffer output above shows the stub advertising 127.0.1.1, the loopback address Ubuntu maps to the local hostname, which is not reachable from other machines; the property makes the exported stub advertise the real address instead.)
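The same can be done programmatically, as long as the property is set before the RmiServiceExporter exports the service; a minimal sketch (the class name and context bootstrap are placeholders for however you start the server):

public class RmiServerBootstrap {
    public static void main(String[] args) {
        // Must be set before the service is exported, so generated stubs
        // advertise a reachable address instead of 127.0.1.1
        System.setProperty("java.rmi.server.hostname", "192.168.1.67");
        // ... then start the Spring context that defines the RmiServiceExporter
    }
}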
