Spring Boot 2 Tomcat SSL handshake caching

With nginx it is possible to cache the SSL session like this (https://docs.nginx.com/nginx/admin-guide/security-controls/terminating-ssl-tcp/):
ssl_session_cache shared:SSL:20m;
ssl_session_timeout 4h;
Is it possible to do the same thing in Spring Boot 2 with Tomcat?
I'm asking because during our performance tests we see errors like the following (using Gatling):
j.n.ConnectException: handshake timed out
If not, are there any other options that can help me?
Thanks

Java's JSSE already has a default session cache of 20480 entries with a timeout of 24 hours.
If you want to change these values:
on the global level, you can set the system property javax.net.ssl.sessionCacheSize to the desired cache size (cf. Customizing JSSE),
in an external Tomcat, you can use the attributes sessionCacheSize and sessionTimeout of the <SSLHostConfig> element (see the server.xml sketch below the code),
when you use the embedded Tomcat server, you can define a TomcatConnectorCustomizer, e.g.
import org.apache.catalina.connector.Connector;
import org.apache.tomcat.util.net.SSLHostConfig;
import org.springframework.boot.web.embedded.tomcat.TomcatConnectorCustomizer;
import org.springframework.stereotype.Component;

@Component
public class SSLSessionCustomizer implements TomcatConnectorCustomizer {

    @Override
    public void customize(Connector connector) {
        for (final SSLHostConfig hostConfig : connector.findSslHostConfigs()) {
            hostConfig.setSessionCacheSize(40960);
            hostConfig.setSessionTimeout(2 * 24 * 60 * 60);
        }
    }
}
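For the external Tomcat case, a minimal server.xml sketch (port, keystore path, and password are placeholders; the cache values mirror the customizer above):
<!-- Values are illustrative; sessionTimeout is in seconds. -->
<Connector port="8443" protocol="org.apache.coyote.http11.Http11NioProtocol"
           SSLEnabled="true">
    <SSLHostConfig sessionCacheSize="40960" sessionTimeout="172800">
        <Certificate certificateKeystoreFile="conf/keystore.jks"
                     certificateKeystorePassword="changeit" />
    </SSLHostConfig>
</Connector>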
Remark: You can use openssl s_client -reconnect to test the session resumption. However, in recent versions of Java, JSSE aborts session resumption if the TLS extension extended_master_secret is missing (cf. the release notes). Older clients, like those based on OpenSSL 1.0, do not support this extension. If compatibility is important for you, you can set the system property:
jdk.tls.useExtendedMasterSecret=false
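If you prefer to set these properties from code rather than on the java command line, here is a minimal sketch (the MyApplication class name is hypothetical; the properties must be set before the first TLS handshake):
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class MyApplication {

    public static void main(String[] args) {
        // Grow the JSSE session cache beyond the default 20480 entries.
        System.setProperty("javax.net.ssl.sessionCacheSize", "40960");
        // Only needed for resumption with old OpenSSL 1.0-based clients.
        System.setProperty("jdk.tls.useExtendedMasterSecret", "false");
        SpringApplication.run(MyApplication.class, args);
    }
}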

Related

Spring Data Gemfire: TTL expiration annotation is not working; how to set TTL using annotations?

In Gfsh, I was able to do: create region --name=employee --type=REPLICATE --enable-statistics=true --entry-time-to-live-expiration=900.
We have a requirement to create the Region in Java using the @EnableEntityDefinedRegions annotation. When I use describe in Gfsh, the Regions show up, but the entry time-to-live expiration (TTL) is not set by any of the ways below.
Any idea how to set TTL in Java?
Spring Boot 2.5.x and spring-gemfire-starter 1.2.13.RELEASE are used in the app.
@EnableStatistics
@EnableExpiration(policies = {
    @EnableExpiration.ExpirationPolicy(regionNames = "Employee", timeout = 60, action = ExpirationActionType.DESTROY)
})
@EnableEntityDefinedRegions
public class BaseApplication {
    // ...
}

@Region("Employee")
public class Employee {
or
@EnableStatistics
@EnableExpiration
public class BaseApplication {
    // ...
}

@Region("Employee")
@TimeToLiveExpiration(timeout = "60", action = "DESTROY")
@Expiration(timeout = "60", action = "DESTROY")
public class Employee {
    // ...
}
or
Using the bean creation way also does not work; I get the error "operation is not supported on a client cache":
@EnableEntityDefinedRegions
// @PeerCacheApplication for a peer cache; the Region is not created in PCC GemFire
public class BaseApplication {
    // ...
}

@Bean(name = "employee")
PartitionedRegionFactoryBean<String, Employee> getEmployee(final GemFireCache cache,
        RegionAttributes<String, Employee> peopleRegionAttributes) {
    PartitionedRegionFactoryBean<String, Employee> getEmployee = new PartitionedRegionFactoryBean<>();
    getEmployee.setCache(cache);
    getEmployee.setAttributes(peopleRegionAttributes);
    getEmployee.setClose(false);
    getEmployee.setName("Employee");
    getEmployee.setPersistent(false);
    getEmployee.setDataPolicy(DataPolicy.PARTITION);
    getEmployee.setStatisticsEnabled(true);
    getEmployee.setEntryTimeToLive(new ExpirationAttributes(60));
    return getEmployee;
}

@Bean
@SuppressWarnings("unchecked")
RegionAttributesFactoryBean employeeAttributes() {
    RegionAttributesFactoryBean employeeAttributes = new RegionAttributesFactoryBean();
    employeeAttributes.setKeyConstraint(String.class);
    employeeAttributes.setValueConstraint(Employee.class);
    return employeeAttributes;
}
First, Spring Boot for Apache Geode (SBDG) 1.2.x is already EOL because Spring Boot 2.2.x is EOL (see details on support). SBDG follows Spring Boot's support lifecycle and policies.
Second, SBDG 1.2.x is based on Spring Boot 2.2.x. See the Version Compatibility Matrix for further details. We do not support mismatched dependency versions: while they may work in certain cases (mileage varies depending on your use case), version combinations not explicitly stated in the Version Compatibility Matrix are nonetheless unsupported. Also see the documentation on this matter.
Now, regarding your problem with TTL Region entry expiration policies...
SBDG auto-configuration creates an Apache Geode ClientCache instance by default (see docs). You cannot create a PARTITION Region using a ClientCache instance.
If your Spring Boot application is intended to be a peer Cache instance in an Apache Geode cluster (server-side), then you must explicitly declare your intention by overriding SBDG's auto-configuration, like so:
@PeerCacheApplication
@SpringBootApplication
class MySpringBootApacheGeodePeerCacheApplication {
    // ...
}
TIP: See the documentation on creating peer Cache applications using SBDG.
Keep in mind that when you override SBDG's auto-configuration, you may implicitly become responsible for other aspects of Apache Geode's configuration, e.g. security! Heed the warning.
On the other hand, if your intent is truly to enable your Spring Boot/SBDG application as a cache "client" (i.e. a ClientCache instance, the default), then TTL Region entry expiration policies do not make sense on client PROXY Regions, which is the default DataPolicy (EMPTY) for client Regions when using the Spring Data for Apache Geode (SDG) @EnableEntityDefinedRegions annotation (see Javadoc). This is because Apache Geode client PROXY Regions do not store any data locally; all data access operations are forwarded to the server/cluster.
Even if you alter the configuration to use client CACHING_PROXY Regions, the TTL Region expiration policies will only take effect locally. You must configure your corresponding server/cluster Regions separately (e.g. using Gfsh).
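For illustration, a minimal client-side sketch of that local-only TTL setup, assuming SDG's annotation-based expiration (values are illustrative):
import org.apache.geode.cache.client.ClientRegionShortcut;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.data.gemfire.config.annotation.EnableEntityDefinedRegions;
import org.springframework.data.gemfire.config.annotation.EnableExpiration;

// The TTL applies only to entries cached locally on the client.
@SpringBootApplication
@EnableEntityDefinedRegions(clientRegionShortcut = ClientRegionShortcut.CACHING_PROXY)
@EnableExpiration
public class ClientApplication {
    // Employee stays annotated with @Region("Employee") and
    // @TimeToLiveExpiration(timeout = "60", action = "DESTROY"), as above.
}
The matching server/cluster Region still needs its own expiration policy, e.g. in Gfsh: create region --name=Employee --type=PARTITION --enable-statistics=true --entry-time-to-live-expiration=60.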
Also, even though you can push cluster configuration from the client using SDG's @EnableClusterConfiguration (doc, Javadoc), or alternatively and preferably, SBDG's @EnableClusterAware annotation (doc, Javadoc; which is meta-annotated with SDG's @EnableClusterConfiguration), this functionality only pushes Region and Index configuration to the cluster, not expiration policies.
See the SBDG documentation on expiration for further details. This doc also leads to SDG's documentation on expiration, and specifically Annotation-based expiration configuration.
I see that the SBDG docs are not very clear on the matter of expiration, so I have filed an issue ticket in SBDG to make this clearer.

Liquibase in Spring Boot application keeps 10 connections open

I'm working on a Spring Boot application with Liquibase integration to set up the database. We use a different user for the database changes, which we configured using the application.properties file:
liquibase.user=abc
liquibase.password=xyz
liquibase.url=jdbc:postgresql://something.eu-west-1.rds.amazonaws.com:5432/app?ApplicationName=${appName}-liquibase
liquibase.enabled=true
liquibase.contexts=dev,postgres
We have at this moment 3 different microservices in deployment, and we noticed that for every running instance, Liquibase opens 10 connections and never closes them unless we stop the application. This basically means that in development we regularly hit the connection limit of our Amazon RDS instance.
Right now, in development, 40 of 74 active connections are occupied by Liquibase. If we ever want to go to production with this, having autoscaling enabled for all the microservices, that would mean we'll have to over-scale the database in order not to hit any connection limits.
Is there a way to
tell Liquibase to not use a connection pool of 10 connections
tell Liquibase to stop or close the connections
So far I found no documentation on how to do this.
Thanks to the response of Slava, I managed to fix the problem with the following data source configuration class:
import javax.sql.DataSource;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.autoconfigure.liquibase.LiquibaseDataSource;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class LiquibaseDataSourceConfiguration {

    private static final Logger LOG = LoggerFactory.getLogger(LiquibaseDataSourceConfiguration.class);

    @Autowired
    private LiquibaseDataSourceProperties liquibaseDataSourceProperties;

    @LiquibaseDataSource
    @Bean
    public DataSource liquibaseDataSource() {
        DataSource ds = DataSourceBuilder.create()
                .username(liquibaseDataSourceProperties.getUser())
                .password(liquibaseDataSourceProperties.getPassword())
                .url(liquibaseDataSourceProperties.getUrl())
                .driverClassName(liquibaseDataSourceProperties.getDriver())
                .build();
        if (ds instanceof org.apache.tomcat.jdbc.pool.DataSource) {
            org.apache.tomcat.jdbc.pool.DataSource tomcatDs = (org.apache.tomcat.jdbc.pool.DataSource) ds;
            tomcatDs.setInitialSize(1);
            tomcatDs.setMaxActive(2);
            tomcatDs.setMaxAge(1000);
            tomcatDs.setMinIdle(0);
            tomcatDs.setMinEvictableIdleTimeMillis(60000);
        } else {
            // warnings or exceptions, whatever you prefer
        }
        LOG.info("Initialized a datasource for {}", liquibaseDataSourceProperties.getUrl());
        return ds;
    }
}
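The LiquibaseDataSourceProperties class referenced above is not part of Spring Boot; a minimal hypothetical sketch that binds the liquibase.* entries from application.properties could look like this:
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.stereotype.Component;

@Component
@ConfigurationProperties(prefix = "liquibase")
public class LiquibaseDataSourceProperties {

    private String user;
    private String password;
    private String url;
    private String driver;

    public String getUser() { return user; }
    public void setUser(String user) { this.user = user; }

    public String getPassword() { return password; }
    public void setPassword(String password) { this.password = password; }

    public String getUrl() { return url; }
    public void setUrl(String url) { this.url = url; }

    public String getDriver() { return driver; }
    public void setDriver(String driver) { this.driver = driver; }
}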
The documentation of the properties can be found on the site of Tomcat: https://tomcat.apache.org/tomcat-7.0-doc/jdbc-pool.html
initialSize: The initial number of connections that are created when the pool is started
maxActive: The maximum number of active connections that can be allocated from this pool at the same time
minIdle: The minimum number of established connections that should be kept in the pool at all times
maxAge: Time in milliseconds to keep this connection. When a connection is returned to the pool, the pool will check to see if the now - time-when-connected > maxAge has been reached, and if so, it closes the connection rather than returning it to the pool. The default value is 0, which implies that connections will be left open and no age check will be done upon returning the connection to the pool.
minEvictableIdleTimeMillis: The minimum amount of time an object may sit idle in the pool before it is eligible for eviction.
So it does not appear to be a connection leak; it's just the default configuration of the data source, which is not optimal for Liquibase if you use a dedicated data source. I don't expect this to be a problem if the Liquibase data source is your primary data source.
Update: This has been fixed in 2.5.0-M2 and Liquibase now uses a SimpleDriverDataSource without a connection pool.
Original answer: This change to connection pool management was introduced in Spring Boot version 2.0.6.RELEASE and only takes effect if you use Spring Boot Actuator. There is an actuator endpoint (enabled by default) which allows you to get the change sets applied by Liquibase. For this to work, Liquibase keeps its database connections open. You can disable the endpoint with management.endpoint.liquibase.enabled=false, in which case the connection pool used by Liquibase will be shut down after the initial run.
GitHub issue related to this change: https://github.com/spring-projects/spring-boot/issues/13832
Spring Boot Actuator (see 12. Liquibase): https://docs.spring.io/spring-boot/docs/2.0.6.RELEASE/actuator-api/html/
I don't know why Liquibase doesn't close the connections; maybe it's a bug and you should create an issue for it.
To set the connection pool for Liquibase, you have to create a custom data source and mark it with the @LiquibaseDataSource annotation.
Related issues provide more details:
Possibility to specify custom dataSource configuration for liquibase only
Add LiquibaseDataSource annotation
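Note that the question uses the Spring Boot 1.x liquibase.* property prefix; in Spring Boot 2.x the equivalent properties moved to the spring.liquibase.* prefix, e.g.
spring.liquibase.user=abc
spring.liquibase.password=xyz
spring.liquibase.enabled=true
spring.liquibase.contexts=dev,postgres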

Spring Boot Artemis embedded broker behaviour

Morning all,
I've been struggling lately with the spring-boot-artemis-starter.
My understanding of its spring-boot support was the following:
set spring.artemis.mode=embedded and, like Tomcat, Spring Boot will instantiate a broker reachable through TCP (server mode). The following command should be successful: nc -zv localhost 61616
set spring.artemis.mode=native and Spring Boot will only configure the JMS template according to the spring.artemis.* properties (client mode).
The client mode works just fine with a standalone artemis server on my machine.
Unfortunately, I could never manage to reach the TCP port in server mode.
I would be grateful if somebody could confirm my understanding of the embedded mode.
Thank you for your help
After some digging I noted that the implementation provided out of the box by spring-boot-starter-artemis uses an org.apache.activemq.artemis.core.remoting.impl.invm.InVMAcceptorFactory acceptor. I'm wondering if that's not the root cause (again, I'm by no means an expert).
But it appears that there is a way to customize artemis configuration.
Therefore I tried the following configuration without any luck:
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.autoconfigure.jms.artemis.ArtemisConfigurationCustomizer;
import org.springframework.boot.autoconfigure.jms.artemis.ArtemisProperties;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class MyBroker {

    public static void main(String[] args) throws Exception {
        SpringApplication.run(MyBroker.class, args);
    }

    @Autowired
    private ArtemisProperties artemisProperties;

    @Bean
    public ArtemisConfigurationCustomizer artemisConfigurationCustomizer() {
        return configuration -> {
            try {
                configuration.addAcceptorConfiguration("netty", "tcp://localhost:" + artemisProperties.getPort());
            } catch (Exception e) {
                throw new RuntimeException("Failed to add netty transport acceptor to artemis instance", e);
            }
        };
    }
}
You just have to add a Connector and an Acceptor to your Artemis Configuration. With the Spring Boot Artemis starter, Spring creates a Configuration bean which will be used for the EmbeddedJMS configuration. You can see this in the ArtemisEmbeddedConfigurationFactory class, where an InVMAcceptorFactory will be set for the configuration. You can edit this bean and change Artemis behaviour through a custom ArtemisConfigurationCustomizer bean, which will be picked up by Spring auto-configuration and applied to the Configuration.
An example config class for your Spring Boot application:
import org.apache.activemq.artemis.api.core.TransportConfiguration;
import org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptorFactory;
import org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnectorFactory;
import org.springframework.boot.autoconfigure.jms.artemis.ArtemisConfigurationCustomizer;
import org.springframework.context.annotation.Configuration;
@Configuration
public class ArtemisConfig implements ArtemisConfigurationCustomizer {

    @Override
    public void customize(org.apache.activemq.artemis.core.config.Configuration configuration) {
        configuration.addConnectorConfiguration("nettyConnector", new TransportConfiguration(NettyConnectorFactory.class.getName()));
        configuration.addAcceptorConfiguration(new TransportConfiguration(NettyAcceptorFactory.class.getName()));
    }
}
My coworker and I had the exact same problem, as the documentation on this link (chapter Artemis Support) says nothing about adding an individual ArtemisConfigurationCustomizer, which is sad because we realized that without this customizer our Spring Boot app would start and act as if everything was okay, but actually it wouldn't do anything.
We also realized that without the customizer the application.properties file is not being loaded, so no matter what host or port you mentioned there, it would not count.
After adding the customizer as stated by the two examples, it worked without a problem.
Here are some results that we figured out:
It only loaded the application.properties after configuring an ArtemisConfigurationCustomizer
You don't need the broker.xml anymore with an embedded Spring Boot Artemis client
Many examples showing the use of Artemis use an "in-vm" protocol, while we just wanted to use the netty TCP protocol, so we needed to add it to the configuration
For me the most important parameter was pub-sub-domain, as I was using topics and not queues. If you are using topics, this parameter needs to be set to true or the JmsListener won't read the messages.
See this page: stackoverflow jmslistener-usage-for-publish-subscribe-topic
When using a @JmsListener it uses a DefaultMessageListenerContainer, which extends JmsDestinationAccessor, which by default has the pubSubDomain set to false. When this property is false it is operating on a queue. If you want to use topics you have to set this property's value to true.
In application.properties:
spring.jms.pub-sub-domain=true
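For illustration, a minimal topic listener under that setting (the destination name and class are hypothetical; Spring Boot auto-configures the listener container):
import org.springframework.jms.annotation.JmsListener;
import org.springframework.stereotype.Component;

@Component
public class ChatTopicListener {

    // With spring.jms.pub-sub-domain=true this subscribes to the "chat" topic;
    // with the default (false) it would consume from a queue of the same name.
    @JmsListener(destination = "chat")
    public void onMessage(String body) {
        System.out.println("Received: " + body);
    }
}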
If anyone is interested in the full example I have uploaded it to my github:
https://github.com/CorDharel/SpringBootArtemisServerExample
The embedded mode starts the broker as part of your application. There is no network protocol available with such a setup; only InVM calls are allowed. The auto-configuration exposes the necessary pieces you can tune, though I am not sure you can actually have a TCP/IP channel with the embedded mode.

Spring Boot JDBC with password lease/renew (as in Vault)

Hashicorp's Vault can be set up to provide database passwords on demand; each password can be used for a certain "lease" period (say 1 hour) before being renewed, and a maximum use period can be set after which the password has to be trashed and a new one obtained.
In Spring Boot, the JDBC connection is configured at application start, and it is assumed that the JDBC password is coded in the application.properties file (or, alternatively, obtained at application bootstrap time via Spring Cloud Config or equivalent) and used forever.
QUESTION: How might I implement a way in Spring Boot to reset the JDBC password by accessing Vault when a connection attempt fails due to an expired password?
Is there a way to set some sort of handler that gets invoked when the connection fails due to an old password, and resets it to a new value?
Check out this open source project available on GitHub; I think it might be just what you are looking for. Note: from the looks of it, this is currently a Spring Cloud Incubator project (it has the potential of becoming an officially Spring-endorsed open source library in the future), and there are only three contributors. You would have to see if it is "reliable enough" to suit your needs.
https://github.com/spring-cloud-incubator/spring-cloud-vault-config
--- Here's a quick summary of useful information ---
Add the following dependency to pom.xml:
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-vault-starter-config</artifactId>
<version>x.y.z</version>
</dependency>
Create a standard Spring Boot application - the provided example is just a main application class:
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
public class Application {

    @RequestMapping("/")
    public String home() {
        return "Hello World!";
    }

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}
When it runs, it will pick up the external configuration from the default local Vault server on port 8200 if it is running. To modify the startup behavior, you can change the location of the Vault server using bootstrap.properties (like application.properties, but for the bootstrap phase of an application context), e.g.
bootstrap.yml:
spring.cloud.vault:
    host: localhost
    port: 8200
    scheme: http
    connection-timeout: 5000
    read-timeout: 15000
host sets the hostname of the Vault host. The host name will be used for SSL certificate validation
port sets the Vault port
scheme setting the scheme to http will use plain HTTP. Supported schemes are http and https.
connection-timeout sets the connection timeout in milliseconds
read-timeout sets the read timeout in milliseconds
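For the actual question of rotating JDBC credentials, the project also provides a database secret backend integration that obtains short-lived credentials from Vault and injects them into the data source properties. A hedged sketch in bootstrap.yml (the role name and backend mount path are illustrative; lease renewal behavior depends on the library version):
spring.cloud.vault:
    database:
        enabled: true
        role: readonly
        backend: database
        username-property: spring.datasource.username
        password-property: spring.datasource.password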

Grizzly / Glassfish - Can't establish WebSockets handshake

I'm trying to get WebSockets working on top of Grizzly / Glassfish. I've cloned the sample WebSockets chat application, built it, and deployed it to Glassfish 3.1.2. However, I cannot get WebSockets to connect. The WebSockets handshake is failing because I'm getting a 405 (Method Not Allowed) response. This makes sense because of what is in the servlet:
public class WebSocketsServlet extends HttpServlet {

    private final ChatApplication app = new ChatApplication();

    @Override
    public void init(ServletConfig config) throws ServletException {
        WebSocketEngine.getEngine().register(app);
    }

    @Override
    public void destroy() {
        WebSocketEngine.getEngine().unregister(app);
    }
}
There is no doGet method specified, so I'm wondering if there is more configuration required somewhere, or if you need to implement the handshake logic in the servlet doGet method yourself?
I was trying to use grizzly-websockets-chat-2.1.9.war on GlassFish 3.1.2 and getting the same error.
I followed the advice from this page http://www.java.net/forum/topic/glassfish/glassfish/websocket-connection-not-establishing-glasshfish-server-how-fix-it-0
which states to use the version found here (I would think the version number indicates it being older; however, the timestamps on the files are Jan 30 2012):
WAR
https://maven.java.net/service/local/artifact/maven/redirect?r=releases&g=com.sun.grizzly.samples&a=grizzly-websockets-chat&v=1.9.46&e=war
SOURCES
https://maven.java.net/service/local/artifact/maven/redirect?r=releases&g=com.sun.grizzly.samples&a=grizzly-websockets-chat&v=1.9.46&e=jar&c=sources
By using that war everything works.
For those that like using the GlassFish web console, you can enable WebSockets via:
Configurations > server-config > Network Config > Protocols > http-listener-1, then the HTTP tab > scroll to the bottom and check Websockets Support.
Note: Configurations > default-config > ... also has the same options.
You might find this more convenient than keeping a terminal around.
It appears you haven't enabled WebSocket support (it is disabled by default).
Issue the following command and then restart the server:
asadmin set configs.config.server-config.network-config.protocols.protocol.http-listener-1.http.websockets-support-enabled=true
You can replace http-listener-1 with whatever http-listener you wish to enable WS support for.
