Remoting channel-creation-options values not set on Channel - jboss-eap-7

I am running EAP 7.1.0 CR2 and have seen something odd with the channel creation options for remoting connections used by remote EJB calls. The configured options are not being applied at the XNIO layer: the TCP_NODELAY property is not set on the Channel to the value I specified when it is created.
In my standalone-full.xml, the XNIO properties for my remoting connection are set as follows:
<remote connector-ref="http-remoting-connector" thread-pool-name="ejbWorker">
    <channel-creation-options>
        <option name="READ_TIMEOUT" value="${prop.remoting-connector.read.timeout:20}" type="xnio"/>
        <option name="TCP_NODELAY" value="false" type="xnio"/>
        <option name="WORKER_READ_THREADS" value="2" type="xnio"/>
        <option name="WORKER_WRITE_THREADS" value="2" type="xnio"/>
        <option name="MAX_INBOUND_MESSAGES" value="300" type="remoting"/>
        <option name="MAX_OUTBOUND_MESSAGES" value="300" type="remoting"/>
    </channel-creation-options>
</remote>
I'd expect the remoting subsystem to set these, but TRACE logging on XNIO shows that it does not. Here are the log messages:
2017-11-22 19:42:20,170 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0025: JBoss EAP 7.1.0.GA (WildFly Core 3.0.3.Final-redhat-1) started in 265750ms - Started 1371 of 1652 services (502 services are lazy, passive or on-demand)
2017-11-22 19:43:22,937 TRACE [org.jboss.remoting.remote.connection] (ejbWorker I/O-2) Initialized connection from /192.168.0.4:48861 to /192.168.0.4:8084 with options {org.xnio.Options.TCP_NODELAY=>true,org.jboss.remoting3.RemotingOptions.SASL_PROTOCOL=>remote,org.xnio.Options.REUSE_ADDRESSES=>true}
Looking at the code in the wildfly-core remoting subsystem, the org.jboss.as.remoting.HttpConnectorAdd.launchServices method uses org.jboss.as.remoting.ConnectorUtils.getFullOptions to obtain a map of options, which defaults to this:
{org.jboss.remoting3.RemotingOptions.SASL_PROTOCOL=>remote,org.xnio.Options.TCP_NODELAY=>true,org.xnio.Options.REUSE_ADDRESSES=>true}
I would expect this OptionMap to be updated with the settings in the channel-creation-options. Am I wrong to expect this?
Regards,
Jeremy

The EJB subsystem's remote/channel-creation-options settings don't go through org.jboss.as.remoting.HttpConnectorAdd. They are passed to org.jboss.as.ejb3.remote.EJBRemoteConnectorService, which passes them to a remoting Endpoint.registerService call. AFAICT the only options read from the OptionMap passed to that method are:
RemotingOptions.TRANSMIT_WINDOW_SIZE
RemotingOptions.MAX_OUTBOUND_MESSAGES
RemotingOptions.RECEIVE_WINDOW_SIZE
RemotingOptions.MAX_INBOUND_MESSAGES
RemotingOptions.MAX_OUTBOUND_MESSAGE_SIZE
RemotingOptions.MAX_INBOUND_MESSAGE_SIZE
This is done by org.jboss.remoting3.remote.RemoteReadListener as it sets up a Channel after a channel-open request is received on the connection.
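For illustration only (this is a rough sketch of the relevant XNIO/Remoting types, not the EAP code itself), the channel-creation-options effectively end up in an OptionMap like the one below, and only the RemotingOptions entries listed above are consulted when the channel is opened; XNIO socket options such as TCP_NODELAY are not read at that point:

import org.jboss.remoting3.RemotingOptions;
import org.xnio.OptionMap;

// Rough sketch: roughly what the remoting-related channel-creation-options
// become. Only window/message RemotingOptions are read by RemoteReadListener
// when the channel is set up; TCP_NODELAY is not consulted here.
public class ChannelOptionsSketch {

    public static OptionMap channelCreationOptions() {
        return OptionMap.builder()
                .set(RemotingOptions.MAX_INBOUND_MESSAGES, 300)
                .set(RemotingOptions.MAX_OUTBOUND_MESSAGES, 300)
                .getMap();
    }
}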

The TCP_NODELAY socket option cannot be disabled, as doing so would interfere with the correct operation of the Remoting protocol.
I did explore using TCP_CORK instead on Linux to improve throughput without sacrificing latency, and it did seem to work well; however, it would require a finished and well-tested native XNIO implementation, which is (at best) a side experiment at the moment.

Related

Connecting ActiveMQ Web-Console to an existing broker (instead of starting a new one)

Having deployed the activemq-web-console war into an embedded Tomcat application, how can one make it connect to an existing broker rather than create a new one?
The war comes with a set of predefined configurations; in particular, WEB-INF/activemq.xml contains a configuration for the BrokerService:
<broker brokerName="web-console" useJmx="true" xmlns="http://activemq.apache.org/schema/core">
    <persistenceAdapter>
        <kahaDB directory="target/kahadb"/>
    </persistenceAdapter>
    <transportConnectors>
        <transportConnector uri="tcp://localhost:12345"/>
    </transportConnectors>
</broker>
used from webconsole-embedded.xml in the following manner:
<bean id="brokerService" class="org.apache.activemq.xbean.BrokerFactoryBean">
<property name="config" value="/WEB-INF/activemq.xml"/>
</bean>
This configuration creates a new instance of BrokerService and tries to start the broker.
It is reported that the web console can be used to monitor an existing broker service rather than creating a new one. For this, one should set the following properties somewhere:
webconsole.type=properties
webconsole.jms.url=tcp://localhost:61616
webconsole.jmx.url=service:jmx:rmi:///jndi/rmi://localhost:1099/karaf-trun
The question is: where does one set these properties within the embedded Tomcat app, and which XML changes to the above are needed for them to be used? I cannot find any sensible explanation of how to configure this, and a BrokerService instance seems to be required by the remaining Spring config.
Any ideas?
Please do not suggest using hawtio instead!
I had the same problem today. You can start the web console in "properties" mode, which gives you the opportunity to connect over JMX.
I added the following Java arguments to our JBoss 6.1 and it worked immediately. I didn't change any of the XML files (works out of the box)...
Example:
-Dwebconsole.type=properties -Dwebconsole.jms.url=tcp://<hostname>:61616 -Dwebconsole.jmx.url=service:jmx:rmi:///jndi/rmi://<hostname>:1090/jmxrmi -Dwebconsole.jmx.user=admin -Dwebconsole.jmx.password=123456
Also discussed here: https://svn.apache.org/repos/infra/websites/production/activemq/content/5.7.0/web-console.html
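If passing JVM arguments is not convenient in your embedded setup, a hypothetical alternative (my own sketch, using the same property names as above) is to set the same system properties programmatically before the web console's Spring context is created, for example from early initialization code:

// Hypothetical sketch: set the web console properties as system properties
// before the web console's Spring context starts (e.g. from a
// ServletContextListener registered ahead of it). Host/port values are examples.
public class WebConsoleProperties {

    public static void apply() {
        System.setProperty("webconsole.type", "properties");
        System.setProperty("webconsole.jms.url", "tcp://localhost:61616");
        System.setProperty("webconsole.jmx.url",
                "service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
    }
}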

What are the differences between these Infinispan cache factories for the Hibernate second-level cache?

I'm trying to figure out the difference between these factories, used in the hibernate.cache.region.factory_class property.
Example:
<property name="hibernate.cache.region.factory_class" value="org.hibernate.cache.infinispan.JndiInfinispanRegionFactory" />
<property name="hibernate.cache.infinispan.cachemanager" value="java:jboss/infinispan/container/hibernate" />
There are four possible options.
The two options that I know something about are:
org.hibernate.cache.infinispan.InfinispanRegionFactory: for standalone applications (not in a cluster, I think).
org.hibernate.cache.infinispan.JndiInfinispanRegionFactory: this one is bound to a JNDI name via the hibernate.cache.infinispan.cachemanager property.
And I don't have any idea about these two:
org.jboss.as.jpa.hibernate5.infinispan.SharedInfinispanRegionFactory: ?
org.jboss.as.jpa.hibernate5.infinispan.InfinispanRegionFactory: ?
We have a cluster configured on WildFly 10.1.0 using domain mode. We want to share the entity cache among the nodes, and we have some doubts about how to do that.
If you're using WildFly, you don't have to worry about setting the region factory class, because WildFly uses Infinispan as the second-level cache provider by default. It's all explained here.
All you have to do is enable hibernate.cache.use_second_level_cache and you're good to go. See the examples in the docs.
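Not from the linked docs, just a minimal sketch of what the application side then looks like, assuming a JPA shared-cache-mode of ENABLE_SELECTIVE so that only annotated entities are cached (the Book entity is hypothetical):

import javax.persistence.Cacheable;
import javax.persistence.Entity;
import javax.persistence.Id;

// Hypothetical entity: with the second-level cache enabled, @Cacheable opts
// this entity into the WildFly-managed Infinispan entity cache.
@Entity
@Cacheable
public class Book {

    @Id
    private Long id;

    private String title;

    // getters and setters omitted for brevity
}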
I agree with Galder, +1!
Regarding the purpose of [org.jboss.as.jpa.hibernate5.infinispan.SharedInfinispanRegionFactory][1] + [org.jboss.as.jpa.hibernate5.infinispan.InfinispanRegionFactory][2]: these classes extend the Hibernate ORM [hibernate-infinispan][3] implementation classes, the purpose being to start the internal WildFly Infinispan cache services used for JPA second-level caching. They also deal with configuration. The links below may become outdated over time, as I think the [3] code might move to the Infinispan project (eventually).
A little more of the related code is at [HibernateSecondLevelCache.java][4], which backs up what Galder said. You can see that the WildFly JPA container automatically sets the region factory class for you (if caching is enabled) via [HibernatePersistenceProviderAdaptor.java][5].
I'm not sure if the code links are helpful to you; I thought they might be. :)
As a stackoverflow newbie, I am not allowed to post with more than 2 links, which is why [3] - [5] are invalid links.
Scott
[1] https://github.com/wildfly/wildfly/blob/master/jpa/hibernate5/src/main/java/org/jboss/as/jpa/hibernate5/infinispan/SharedInfinispanRegionFactory.java
[2] https://github.com/wildfly/wildfly/blob/master/jpa/hibernate5/src/main/java/org/jboss/as/jpa/hibernate5/infinispan/InfinispanRegionFactory.java
[3] github.com/hibernate/hibernate-orm/tree/master/hibernate-infinispan
[4] github.com/wildfly/wildfly/blob/master/jpa/hibernate5/src/main/java/org/jboss/as/jpa/hibernate5/HibernateSecondLevelCache.java
[5] github.com/wildfly/wildfly/blob/master/jpa/hibernate5/src/main/java/org/jboss/as/jpa/hibernate5/HibernatePersistenceProviderAdaptor.java#L91
The configuration involves Infinispan, Hibernate and JGroups.
Using domain mode on WildFly 10, you'll need this configuration in your application EAR:
<property name="hibernate.cache.use_second_level_cache" value="true"/>
Your server group needs to use a profile that has the HA (high availability) resources, such as the full-ha or ha profiles. These profiles have the default Infinispan and JGroups configuration.
Then you need to configure the 'private' network interface on ALL hosts that share the cache; JGroups uses the private interface. Edit domain/configuration/host.xml or use the WildFly admin console to add this configuration (200.0.0.171 needs to be replaced by the server's IP):
<interfaces>
    ...
    <interface name="private">
        <inet-address value="${jboss.bind.address.private:200.0.0.171}"/>
    </interface>
    <!-- .... -->
</interfaces>
For example, suppose you have host controllers HC1 (with server-1 and server-2) and HC2 (with server-3 and server-4).
After starting all the servers and host controllers, you'll see in your server.log:
INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-4) ISPN000078: Starting JGroups channel hibernate
....
....
Received new cluster view for channel hibernate: [HC1:server-1, HC1:server-2, HC2:server-3, HC2:server-4]

Oracle DB connections not releasing from connection pool in Tomcat 8

We are migrating Tomcat 6, Java 6 and Oracle 10g web applications to Tomcat 8, Java 8 and Oracle 10g. Our applications work fine after the migration, but the initial connections (initialSize="5") in the connection pool are not released after Tomcat shuts down. When Tomcat is started a second time, it creates 5 more initial connections in the pool. I am using the resource configuration below in server.xml:
<Resource name="TestAppDataSource"
auth="Container"
factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
type="javax.sql.DataSource"
driverClassName="oracle.jdbc.OracleDriver"
initialSize="5"
maxActive="40"
maxIdle="40"
minIdle="5"
timeBetweenEvictionRunsMillis="30000"
minEvictableIdleTimeMillis="30000"
maxWait="10000"
testWhileIdle="true"
testOnBorrow="true"
testOnReturn="false"
validationQuery="SELECT 1 from dual"
validationInterval="30000"
logAbandoned="true"
removeAbandonedTimeout="30"
removeAbandonedOnBorrow="true"
removeAbandonedOnMaintenance="true"
suspectTimeout="300"
maxAge="60000"
url="jdbc:oracle:thin:#//IP_ADDRESS:1521/SCHEMA_NAME"
username="USER_NAME"
password="PASSWORD" />
And the resource link configuration below in the application's META-INF/context.xml:
<ResourceLink name="APP_TEST"
              global="TestAppDataSource"
              type="javax.sql.DataSource" />
I am using ojdbc7.jar for the Oracle driver. Please help me figure out whether I have missed any configuration.
Try with the following option:
removeAbandoned = true
(boolean) Flag to remove abandoned connections if they exceed the removeAbandonedTimeout. If set to true, a connection is considered abandoned and eligible for removal if it has been in use longer than the removeAbandonedTimeout. Setting this to true can recover db connections from applications that fail to close a connection. See also logAbandoned. The default value is false.
Tomcat now uses the JDBC Connection Pool (org.apache.tomcat.jdbc.pool), which is a replacement for, or an alternative to, the Apache Commons DBCP connection pool.
removeAbandoned is an option for the JDBC Connection Pool:
https://tomcat.apache.org/tomcat-8.0-doc/jdbc-pool.html
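For context, here is a minimal sketch (mine, not from the question) of borrowing and properly closing a pooled connection through the resource link defined above; connections that are never closed like this are exactly what removeAbandoned/logAbandoned are meant to catch:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class PooledConnectionSketch {

    static void queryExample() throws Exception {
        // Look up the pool through the resource link declared in context.xml
        DataSource ds = (DataSource) new InitialContext()
                .lookup("java:comp/env/APP_TEST");

        // try-with-resources returns the connection to the pool even on error;
        // connections that are never closed are what removeAbandoned reclaims
        try (Connection con = ds.getConnection();
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT 1 FROM dual")) {
            while (rs.next()) {
                System.out.println(rs.getInt(1));
            }
        }
    }
}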
You have to add closeMethod="close" to your JDBC resource in context.xml. That way, Tomcat properly releases pending connections to the database.
Nitpick: when Tomcat is shut down, the JVM is shut down, hence all of its resources are "released" too and there is no more connection pool. What you mean is that the connections are not being shut down properly, so the DB is not notified that they are closed, and hence the sessions there are not ended.
This is either because the pool is not getting a shutdown command, or because something else is hanging in Tomcat during shutdown, so it never gets to the point of shutting down the pool and is force-killed by the shutdown script after the wait timeout expires. You can take thread dumps during shutdown to see what it is waiting on, and look at catalina.out for messages about leaked threads (...has started a thread ... that's not been shut down ...).
It is a common problem that webapps launch long-running threads without daemonizing them; such webapps need to implement a ServletContextListener that stops this thread/resource when the ServletContext is stopped, as sketched below.
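For illustration, a minimal sketch of such a listener (the worker thread and class names are made up for the example):

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;

// Illustrative listener: starts a worker thread with the webapp and makes sure
// it is stopped when the context is destroyed, so Tomcat can shut down cleanly.
@WebListener
public class WorkerShutdownListener implements ServletContextListener {

    private Thread worker;
    private volatile boolean running;

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        running = true;
        worker = new Thread(() -> {
            while (running) {
                // ... periodic work ...
                try {
                    Thread.sleep(1000L);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        }, "app-worker");
        worker.start();
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        running = false;
        worker.interrupt();
        try {
            worker.join(5000L);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}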

Default taskScheduler bean - Spring Integration 2.2.0 vs 3.0.2 with Spring 3.2.9

I have a standalone application that uses a file inbound channel adapter to poll for a file from a specified location at a certain interval.
I don't have a taskScheduler instance defined.
When running the application with both Spring Integration 2.2.0 and 3.0.2, I see that 10 threads are created with the name task-scheduler-x after a certain amount of time. I believe this is the default behavior.
When I removed the file inbound channel adapter configuration from my application and re-ran it, I noticed the following behavior:
In 3.0.2, 10 threads are created with the name task-scheduler-x.
In 2.2.0, though a taskScheduler instance is created (I can see the message about the bean creation in the logs), I don't see any threads created with the name task-scheduler-x.
Why does this behavior differ between the two versions? What should I do if I don't want to create a taskScheduler instance, or don't want to create any threads for task scheduling?
Thanks for the help.
The framework now has a built-in component (header channel registry) that uses the taskScheduler.
It doesn't really use many resources, although it does have the side effect of instantiating the scheduler thread pool.
We'll look at adding an option to disable it if you don't need/use it. In the meantime, you can revert to the pre-3.0 behavior by adding this bean to your context:
<bean id="integrationHeaderChannelRegistry" class="org.springframework.integration.channel.DefaultHeaderChannelRegistry">
<property name="autoStartup" value="false" />
</bean>
I opened a JIRA Issue for this.

Logging a consistent request thread id with log4j when using CXF, Spring and DBCP

I am using the pretty standard stack of CXF and Spring for JAX-RS REST endpoints, along with DBCP for database connection pooling. Everything runs inside a Tomcat container as a standard web app. The issue I am trying to solve is logging a consistent thread id on every log line, so that I can track a request end to end.
But the problem seems to be that Spring/DBCP is switching to a different thread, so the log lines have thread ids all jumbled up. I can't track a single request end-to-end as a result.
Below is an example:
2014-07-22 18:55:47,224 INFO ajp-bio-8034-exec-8 <log_line_omitted>
2014-07-22 18:55:47,226 INFO pool-10-thread-77 <log_line_omitted>
2014-07-22 18:55:47,299 INFO ajp-bio-8034-exec-8 <log_line_omitted>
As you can see, the thread id has changed from ajp-bio-8034-exec-8 to pool-10-thread-77 and back again.
Most likely the thread id is switching because of the database connection pooling done by DBCP.
What can I do to get a consistent view of the request end-to-end?
The log4j properties file uses the following layout:
log4j.appender.R.layout=org.apache.log4j.PatternLayout
log4j.appender.R.layout.ConversionPattern=%d %p %t %c - %m%n
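The thread name itself will always change across the CXF and pool executors, so %t alone cannot tie the lines together. One common approach (not mentioned in this thread, just a sketch) is to tag each request with an id in the log4j MDC and print it with %X, e.g. a pattern like %d %p %t %X{requestId} %c - %m%n together with a hypothetical servlet filter such as:

import java.io.IOException;
import java.util.UUID;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import org.apache.log4j.MDC;

// Hypothetical filter: puts a per-request id into the MDC so every log line
// written on the request thread can include it via %X{requestId}.
public class RequestIdFilter implements Filter {

    @Override
    public void init(FilterConfig filterConfig) {
    }

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        MDC.put("requestId", UUID.randomUUID().toString());
        try {
            chain.doFilter(req, res);
        } finally {
            MDC.remove("requestId");
        }
    }

    @Override
    public void destroy() {
    }
}

Note that the MDC is thread-local, so lines logged from the separate pool threads still will not carry the id; only work logged on the request thread will.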

Resources