Spring Boot config server - Spring

I have been trying to get a grip on the Spring Boot config server located here: https://github.com/spring-cloud/spring-cloud-config and, after reading the documentation more thoroughly, I was able to work through most of my issues. I did, however, have to write an additional class for a file-based PropertySourceLocator:
/*
* Copyright 2013-2014 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.config.client;
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Iterator;
import java.util.Properties;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.core.env.PropertiesPropertySource;
import org.springframework.core.env.PropertySource;
import org.springframework.util.StringUtils;
/**
* @author Al Dispennette
*
*/
@ConfigurationProperties("spring.cloud.config")
public class ConfigServiceFilePropertySourceLocator implements PropertySourceLocator {
private Logger logger = LoggerFactory.getLogger(ConfigServiceFilePropertySourceLocator.class);
private String env = "default";
@Value("${spring.application.name:'application'}")
private String name;
private String label = name;
private String basedir = System.getProperty("user.home");
@Override
public PropertySource<?> locate() {
try {
return getPropertySource();
} catch (IOException e) {
logger.error("An error ocurred while loading the properties.",e);
}
return null;
}
/**
* @throws IOException
*/
private PropertySource getPropertySource() throws IOException {
Properties source = new Properties();
Path path = Paths.get(getUri());
if(Files.isDirectory(path)){
Iterator<Path> itr = Files.newDirectoryStream(path).iterator();
String fileName = null!=label||StringUtils.hasText(label)?label:name+".properties";
logger.info("Searching for {}",fileName);
while(itr.hasNext()){
Path tmpPath = itr.next();
if(tmpPath.getFileName().getName(0).toString().equals(fileName)){
logger.info("Found file: {}",fileName);
source.load(Files.newInputStream(tmpPath));
}
}
}
return new PropertiesPropertySource("configService",source);
}
public String getUri() {
StringBuilder bldr = new StringBuilder(basedir)
.append(File.separator)
.append(env)
.append(File.separator)
.append(name);
logger.info("loading properties directory: {}",bldr.toString());
return bldr.toString();
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
public String getEnv() {
return env;
}
public void setEnv(String env) {
this.env = env;
}
public String getLabel() {
return label;
}
public void setLabel(String label) {
this.label = label;
}
public String getBasedir() {
return basedir;
}
public void setBasedir(String basedir) {
this.basedir = basedir;
}
}
Then I added this to the ConfigServiceBootstrapConfiguration.java
@Bean
public PropertySourceLocator configServiceFilePropertySource(
ConfigurableEnvironment environment) {
ConfigServiceFilePropertySourceLocator locator = new ConfigServiceFilePropertySourceLocator();
String[] profiles = environment.getActiveProfiles();
if (profiles.length==0) {
profiles = environment.getDefaultProfiles();
}
locator.setEnv(StringUtils.arrayToCommaDelimitedString(profiles));
return locator;
}
In the end this did what I wanted.
Now I'm curious to know whether this is what I should have done, or whether I am still missing something and this was already handled and I just missed it.
*****Edit for info asked for by Dave******
If I take out the file property source loader and update the bootstrap.yml with
uri: file://${user.home}/resources
the sample application throws the following error on start up:
ConfigServiceBootstrapConfiguration : Could not locate PropertySource: Object of class [sun.net.www.protocol.file.FileURLConnection] must be an instance of class java.net.HttpURLConnection
This is why I thought the additional class would be needed. As far as the test case goes, I believe you are talking about SpringApplicationEnvironmentRepositoryTests.java, and I agree that creating the environment works, but as a whole the application does not seem to be operating as expected when the uri protocol is 'file'.
******Additional Edits*******
This is how I understand this is working:
The sample project has a dependency on the spring-cloud-config-client artifact and therefore has a transitive dependency on the spring-cloud-config-server artifact.
The ConfigServiceBootstrapConfiguration.java in the client artifact creates a property source locator bean of type ConfigServicePropertySourceLocator.
The ConfigServicePropertySourceLocator.java in the config client artifact has the annotation @ConfigurationProperties("spring.cloud.config")
And the property uri exists in said class, hence the setting of spring.cloud.config.uri in the bootstrap.yml file.
I believe this is reinforced by the following statement in the quickstart.adoc:
When it runs it will pick up the external configuration from the
default local config server on port 8888 if it is running. To modify
the startup behaviour you can change the location of the config server
using bootstrap.properties (like application.properties but for
the bootstrap phase of an application context), e.g.
spring.cloud.config.uri: http://myconfigserver.com
At this point, somehow the JGitEnvironmentRepository bean is getting used and looking for a connection to GitHub.
I assumed that since uri was the property being set in the ConfigServicePropertySourceLocator, any valid URI protocol would work for pointing to a location.
That is why I used the 'file://' protocol, thinking that the server would pick up the NativeEnvironmentRepository.
So at this point I'm sure I'm either missing some step or the file system property source locator needs to be added.
I hope that is a little clearer.
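For reference, a minimal sketch of the client-side bootstrap.yml for the two variants discussed above (the application name is a placeholder):
spring:
  application:
    name: sample          # placeholder
  cloud:
    config:
      uri: http://localhost:8888             # HTTP endpoint: this is what ConfigServicePropertySourceLocator drives via RestTemplate
      # uri: file://${user.home}/resources   # the variant that triggers the HttpURLConnection error shown below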
The full stack trace:
java.lang.IllegalArgumentException: Object of class [sun.net.www.protocol.file.FileURLConnection] must be an instance of class java.net.HttpURLConnection
at org.springframework.util.Assert.isInstanceOf(Assert.java:339)
at org.springframework.util.Assert.isInstanceOf(Assert.java:319)
at org.springframework.http.client.SimpleClientHttpRequestFactory.openConnection(SimpleClientHttpRequestFactory.java:182)
at org.springframework.http.client.SimpleClientHttpRequestFactory.createRequest(SimpleClientHttpRequestFactory.java:140)
at org.springframework.http.client.support.HttpAccessor.createRequest(HttpAccessor.java:76)
at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:541)
at org.springframework.web.client.RestTemplate.execute(RestTemplate.java:506)
at org.springframework.web.client.RestTemplate.exchange(RestTemplate.java:448)
at org.springframework.cloud.config.client.ConfigServicePropertySourceLocator.locate(ConfigServicePropertySourceLocator.java:68)
at org.springframework.cloud.bootstrap.config.ConfigServiceBootstrapConfiguration.initialize(ConfigServiceBootstrapConfiguration.java:70)
at org.springframework.boot.SpringApplication.applyInitializers(SpringApplication.java:572)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:303)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:952)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:941)
at sample.Application.main(Application.java:20)

I read this thread yesterday and it was missing a vital piece of information.
If you don't want to use git as a repository, then you need to configure the Spring Cloud config server with spring.profiles.active=native.
Check out the spring-cloud-config-server code to understand it:
org.springframework.cloud.config.server.NativeEnvironmentRepository
spring:
  application:
    name: configserver
  jmx:
    default_domain: cloud.config.server
  profiles:
    active: native
  cloud:
    config:
      server:
        file:
          url: <path to config files>

I just came across the same issue. I wanted the configuration server to load properties from the local file system instead of a git repository. The following configuration works for me on Windows.
spring:
  profiles:
    active: native
  cloud:
    config:
      server:
        native:
          searchLocations: file:C:/springbootapp/properties/
Suppose the property file is under C:/springbootapp/properties/
For more information please refer to Spring Cloud Documentation and Configuring It All Out
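Once the server starts with this native configuration, a quick sanity check is the standard /{application}/{profile} endpoint; for example, assuming a file named myapp.properties exists under that search location (the application name is a placeholder):
curl http://localhost:8888/myapp/default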

I think I have the final solution based on your last comments
In the configserver.yml I added
spring.profiles.active: file
spring.cloud.config.server.uri: file://${user.home}/resources
In the ConfigServerConfiguration.java I added
@Configuration
@Profile("file")
protected static class SpringApplicationConfiguration {
@Value("${spring.cloud.config.server.uri}")
String locations;
@Bean
public SpringApplicationEnvironmentRepository repository() {
SpringApplicationEnvironmentRepository repo = new SpringApplicationEnvironmentRepository();
repo.setSearchLocations(locations);
return repo;
}
}
And I was able to view the properties with:
curl localhost:8888/bar/default
curl localhost:8888/foo/development

how to override an application property programatically in Quarkus

I've recently started using Testcontainers for unit/integration testing of database operations in my Quarkus web app. It works fine, except I cannot figure out a way to dynamically set the MySQL port in the quarkus.datasource.url application property. Currently I'm using the deprecated withPortBindings method to force the container to bind the exposed MySQL port to port 11111, but the right way is to let Testcontainers pick a random free port and override the quarkus.datasource.url property.
My unit test class
@Testcontainers
@QuarkusTest
public class UserServiceTest {
@Container
private static final MySQLContainer MY_SQL_CONTAINER = (MySQLContainer) new MySQLContainer()
.withDatabaseName("userServiceDb")
.withUsername("foo")
.withPassword("bar")
.withUrlParam("serverTimezone", "UTC")
.withExposedPorts(3306)
.withCreateContainerCmdModifier(cmd ->
((CreateContainerCmd) cmd).withHostName("localhost")
.withPortBindings(new PortBinding(Ports.Binding.bindPort(11111), new ExposedPort(3306))) // deprecated, let testcontainers pick random free port
);
@BeforeAll
public static void setup() {
// TODO: use the return value from MY_SQL_CONTAINER.getJdbcUrl()
// to set %test.quarkus.datasource.url
LOGGER.info(" ********************** jdbc url = {}", MY_SQL_CONTAINER.getJdbcUrl());
}
// snip...
}
my application.properties:
%test.quarkus.datasource.url=jdbc:mysql://localhost:11111/userServiceDb?serverTimezone=UTC
%test.quarkus.datasource.driver=com.mysql.cj.jdbc.Driver
%test.quarkus.datasource.username=foo
%test.quarkus.datasource.password=bar
%test.quarkus.hibernate-orm.dialect=org.hibernate.dialect.MySQL8Dialect
The Quarkus guide to configuring an app describes how to programmatically read an application property:
String databaseName = ConfigProvider.getConfig().getValue("database.name", String.class);
but not how to set it. This tutorial on using Testcontainers with Quarkus implies it should be possible:
// Below should not be used - Function is deprecated and for simplicity of test , You should override your properties at runtime
SOLUTION:
As suggested in the accepted answer, I don't have to specify host and port in the datasource property. So the solution is to simply replace the two lines in application.properties:
%test.quarkus.datasource.url=jdbc:mysql://localhost:11111/userServiceDb
%test.quarkus.datasource.driver=com.mysql.cj.jdbc.Driver
with
%test.quarkus.datasource.url=jdbc:tc:mysql:///userServiceDb
%test.quarkus.datasource.driver=org.testcontainers.jdbc.ContainerDatabaseDriver
(and remove the unnecessary withExposedPorts and withCreateContainerCmdModifier method calls)
Please read the documentation carefully. The port can be omitted.
https://www.testcontainers.org/modules/databases/jdbc/
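As a hedged variation on that fix, the Testcontainers JDBC URL can also pin a specific image tag; the 8.0 tag below is an assumption, so check the linked documentation for the exact syntax your Testcontainers version supports:
%test.quarkus.datasource.url=jdbc:tc:mysql:8.0:///userServiceDb
%test.quarkus.datasource.driver=org.testcontainers.jdbc.ContainerDatabaseDriver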
Now (Quarkus version 19.03.12) it can be a bit simpler.
Define a test component that starts the container and overrides the JDBC props:
import io.quarkus.test.common.QuarkusTestResourceLifecycleManager;
import org.testcontainers.containers.PostgreSQLContainer;
public class PostgresDatabaseResource implements QuarkusTestResourceLifecycleManager {
public static final PostgreSQLContainer<?> DATABASE = new PostgreSQLContainer<>("postgres:10.5")
.withDatabaseName("test_db")
.withUsername("test_user")
.withPassword("test_password")
.withExposedPorts(5432);
@Override
public Map<String, String> start() {
DATABASE.start();
return Map.of(
"quarkus.datasource.jdbc.url", DATABASE.getJdbcUrl(),
"quarkus.datasource.db-kind", "postgresql",
"quarkus.datasource.username", DATABASE.getUsername(),
"quarkus.datasource.password", DATABASE.getPassword());
}
@Override
public void stop() {
DATABASE.stop();
}
}
Use it in the test:
import io.quarkus.test.common.QuarkusTestResource;
import io.quarkus.test.junit.QuarkusTest;
import org.junit.jupiter.api.Test;
import javax.ws.rs.core.MediaType;
import java.util.UUID;
import java.util.stream.Collectors;
import static io.restassured.RestAssured.given;
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.*;
@QuarkusTest
@QuarkusTestResource(PostgresDatabaseResource.class)
public class MyControllerTest {
@Test
public void myAwesomeControllerTestWithDb() {
// whatever you want to test here. Quarkus will use Container DB
given().contentType(MediaType.APPLICATION_JSON).body(blaBla)
.when().post("/create-some-stuff").then()
.statusCode(200).and()
.extract()
.body()
.as(YourBean.class);
}
}

Problem with @PropertySource and Map binding

I have this specific problem with my yml configuration file.
I have a multi-module maven project as follows:
app
|-- core
|-- web
|-- app
I have this configuration file in core project
@Configuration
@PropertySource("core-properties.yml")
public class CoreConfig {
}
And this mapping:
@Component
@ConfigurationProperties(prefix = "some.key.providers.by")
@Getter
@Setter
public class ProvidersByMarket {
private Map<String, List<String>> market;
}
Here is my core-properties.yml:
some.key.providers:
  p1: 'NAME1'
  p2: 'NAME2'
some.key.providers.by.market:
  de:
    - ${some.key.providers.p1}
    - ${some.key.providers.p2}
  gb:
    - ${some.key.providers.p1}
When I load the file via profile activation, for example by renaming the file to application-core-properties.yml and running with -Dspring.profiles.active=core-properties, it does work. However, when I try to load the file via @PropertySource("core-properties.yml") it does not, and I get the following error:
Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.
2019-03-27 10:07:36.397 -ERROR 13474|| --- [ restartedMain] o.s.b.d.LoggingFailureAnalysisReporter :
***************************
APPLICATION FAILED TO START
***************************
Description:
Failed to bind properties under 'some.key.providers.by.market' to java.util.Map<java.lang.String, java.util.List<java.lang.String>>:
Reason: No converter found capable of converting from type [java.lang.String] to type [java.util.Map<java.lang.String, java.util.List<java.lang.String>>]
Action:
Update your application's configuration
Process finished with exit code 1
Because you don't have an equivalent properties structure. For example:
spring:
  profiles: test
name: test-YAML
environment: test
servers:
  - www.abc.test.com
  - www.xyz.test.com
---
spring:
  profiles: prod
name: prod-YAML
environment: production
servers:
  - www.abc.com
  - www.xyz.com
And the config class should be:
@Configuration
@EnableConfigurationProperties
@ConfigurationProperties
public class YAMLConfig {
private String name;
private String environment;
private List<String> servers = new ArrayList<>();
// standard getters and setters
}
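Applied to the ProvidersByMarket mapping from the question, the nested equivalent of the flat dotted keys would look roughly like this (a sketch only; whether @PropertySource picks up a YAML file at all is a separate issue, addressed in the next answer):
some:
  key:
    providers:
      p1: 'NAME1'
      p2: 'NAME2'
      by:
        market:
          de:
            - ${some.key.providers.p1}
            - ${some.key.providers.p2}
          gb:
            - ${some.key.providers.p1}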
I have resolved the issue by implementing the following PropertySourceFactory, described in detail here:
import java.io.FileNotFoundException;
import java.io.IOException;
import java.util.Properties;
import org.springframework.beans.factory.config.YamlPropertiesFactoryBean;
import org.springframework.core.env.PropertiesPropertySource;
import org.springframework.core.env.PropertySource;
import org.springframework.core.io.support.EncodedResource;
import org.springframework.core.io.support.PropertySourceFactory;
import org.springframework.lang.Nullable;
public class YamlPropertySourceFactory implements PropertySourceFactory {
@Override
public PropertySource<?> createPropertySource(@Nullable String name, EncodedResource resource) throws IOException {
Properties propertiesFromYaml = loadYamlIntoProperties(resource);
String sourceName = name != null ? name : resource.getResource().getFilename();
return new PropertiesPropertySource(sourceName, propertiesFromYaml);
}
private Properties loadYamlIntoProperties(EncodedResource resource) throws FileNotFoundException {
try {
YamlPropertiesFactoryBean factory = new YamlPropertiesFactoryBean();
factory.setResources(resource.getResource());
factory.afterPropertiesSet();
return factory.getObject();
} catch (IllegalStateException e) {
// for ignoreResourceNotFound
Throwable cause = e.getCause();
if (cause instanceof FileNotFoundException)
throw (FileNotFoundException) e.getCause();
throw e;
}
}
}
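With that factory in place, the original @PropertySource from the question can point at the YAML file; a minimal sketch (the classpath: prefix is an assumption about where core-properties.yml lives):
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.PropertySource;

@Configuration
@PropertySource(value = "classpath:core-properties.yml", factory = YamlPropertySourceFactory.class)
public class CoreConfig {
}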
I had a similar problem and found a workaround like this:
diacritic:
  isEnabled: true
  chars:          # -> I wanted this to be parsed to a map but it didn't work
    ą: a
    ł: l
    ę: e
And my solution so far:
diacritic:
  isEnabled: true
  chars[ą]: a     # -> these ones could be parsed to Map<String, String>
  chars[ł]: l
  chars[ę]: e
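For completeness, a minimal sketch of a binding class for that workaround (the class name is hypothetical, and the isEnabled flag is omitted for brevity):
import java.util.HashMap;
import java.util.Map;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.stereotype.Component;

@Component
@ConfigurationProperties(prefix = "diacritic")
public class DiacriticProperties {
    // with the chars[ą]: a style keys above, each entry binds into this map
    private Map<String, String> chars = new HashMap<>();
    public Map<String, String> getChars() { return chars; }
    public void setChars(Map<String, String> chars) { this.chars = chars; }
}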

How to integrate a Spring RMI server with a pure Java RMI client which is a non-spring Swing GUI?

I'm migrating a J2EE EJB application to Spring services. It's a desktop application which has a Swing GUI, and it uses RMI to communicate with the J2EE server. I have created a simple Spring service with Spring Boot which exports a service using Spring Remoting's RmiServiceExporter. The client is a rich client with a complicated architecture, so I'm trying to make minimal changes to it to call the Spring RMI service.
So in summary I have a plain RMI client and a Spring RMI server. I have learned that Spring RMI abstracts pure Java RMI, so in my case they don't interoperate.
I will show the code below, but the current error is this. Note that my current project uses "remote://". After I got this error I also tried "rmi://", but in both cases it gives this error:
javax.naming.CommunicationException: Failed to connect to any server. Servers tried: [rmi://yyy:1099 (No connection provider for URI scheme "rmi" is installed)]
at org.jboss.naming.remote.client.HaRemoteNamingStore.failOverSequence(HaRemoteNamingStore.java:244)
at org.jboss.naming.remote.client.HaRemoteNamingStore.namingStore(HaRemoteNamingStore.java:149)
at org.jboss.naming.remote.client.HaRemoteNamingStore.namingOperation(HaRemoteNamingStore.java:130)
at org.jboss.naming.remote.client.HaRemoteNamingStore.lookup(HaRemoteNamingStore.java:272)
at org.jboss.naming.remote.client.RemoteContext.lookupInternal(RemoteContext.java:104)
at org.jboss.naming.remote.client.RemoteContext.lookup(RemoteContext.java:93)
at org.jboss.naming.remote.client.RemoteContext.lookup(RemoteContext.java:146)
at javax.naming.InitialContext.lookup(InitialContext.java:417)
at com.xxx.ui.common.communication.JbossRemotingInvocationFactory.getRemoteObject(JbossRemotingInvocationFactory.java:63)
at com.xxx.gui.comm.CommManager.initializeSpringEJBz(CommManager.java:806)
at com.xxx.gui.comm.CommManager.initializeEJBz(CommManager.java:816)
at com.xxx.gui.comm.CommManager.initializeAndLogin(CommManager.java:373)
at com.xxx.gui.comm.CommManager$2.doInBackground(CommManager.java:273)
at javax.swing.SwingWorker$1.call(SwingWorker.java:295)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at javax.swing.SwingWorker.run(SwingWorker.java:334)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
I have searched for how to make Spring RMI and plain/pure Java RMI interoperate, and I read several answers from similar questions on Stack Overflow and the web, but I couldn't find anything useful that fits my case, because even the best-matched answer only says that they don't interoperate.
I thought that maybe I need to turn my Swing GUI client into a Spring application using Spring Boot, but I couldn't be sure about the application context, since I don't want to break existing client code. So I looked for something like a partial Spring context, so that maybe I can put only my CommManager.java client code into it and have Spring manage only this file.
And then I thought that maybe I need to change my RMI server to force Spring to create some kind of plain/pure Java RMI instead of the default Spring RMI thing. I say thing because I read that Spring RMI is an abstraction over RMI and we can force it to create a standard RMI stub.
While I was searching for a solution I encountered Spring Integration, but I couldn't really understand it since it looks like another abstraction, although it also says something about adapters. Since I saw "adapter", maybe it is used for this kind of integration/legacy migration case. But I couldn't get any further.
Client Side:
CommManager.java
private boolean initializeEJBz(String userName, String password) throws Exception {
...
ri = RemoteInvocationFactory.getRemoteInvocation(user, pass);
if (ri != null) {
return initializeEJBz(ri);
} else {
return false;
}
}
RemoteInvocationFactory.java
package com.xxx.ui.common.communication;
import javax.naming.NamingException;
public final class RemoteInvocationFactory {
private static final CommunicationProperties cp = new CommunicationProperties();
public static synchronized RemoteInvocation getRemoteInvocation(
byte[] userName, byte[] password) throws NamingException {
String url = System.getProperty("rmi://xxx.com:1099");
if (url != null) {
return new JbossRemotingInvocationFactory(userName, password, url);
}
return null;
}
...
JbossRemotingInvocationFactory.java
package com.xxx.ui.common.communication;
...
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
...
import java.util.Hashtable;
import java.util.concurrent.TimeUnit;
public class JbossRemotingInvocationFactory implements RemoteInvocation {
private final byte[] userName, password;
private final String providerURL;
private volatile InitialContext initialContext;
private final SecretKey secretKey;
private static final String SSL_ENABLED = "jboss.naming.client.connect.options.org.xnio.Options.SSL_ENABLED";
private static final String SSL_STARTTLS = "jboss.naming.client.connect.options.org.xnio.Options.SSL_STARTTLS";
private static final String TIMEOUT = "jboss.naming.client.connect.timeout";
private long timeoutValue;
private final boolean startSsl;
@SuppressWarnings("unchecked")
public JbossRemotingInvocationFactory(byte[] userName, byte[] password, String providerURL) {
try {
KeyGenerator keyGenerator = KeyGenerator.getInstance("AES");
keyGenerator.init(128);
secretKey = keyGenerator.generateKey();
this.providerURL = providerURL;
startSsl = Boolean.valueOf(System.getProperty(SSL_ENABLED));
String property = System.getProperty("myproject.connect.timeout");
if (property != null) {
try {
timeoutValue = TimeUnit.MILLISECONDS.convert(Long.parseLong(property), TimeUnit.SECONDS);
} catch (Exception e) {
timeoutValue = TimeUnit.MILLISECONDS.convert(10, TimeUnit.SECONDS);
}
}
Hashtable jndiProperties = new Hashtable();
this.userName = encrypt(userName);
addOptions(jndiProperties);
jndiProperties.put(Context.SECURITY_CREDENTIALS, new String(password, UTF_8));
initialContext = new InitialContext(jndiProperties);
this.password = encrypt(password);
} catch (NamingException | NoSuchAlgorithmException ne) {
throw new RuntimeException(ne);
}
}
@Override
@SuppressWarnings("unchecked")
public <T> T getRemoteObject(Class<T> object, String jndiName) throws NamingException {
if (initialContext != null) {
T value = (T) initialContext.lookup(jndiName);
initialContext.removeFromEnvironment(Context.SECURITY_CREDENTIALS);
initialContext.removeFromEnvironment(Context.SECURITY_PRINCIPAL);
return value;
} else {
throw new IllegalStateException();
}
}
@Override
public <T> T getRemoteObject(Class<T> object) throws NamingException {
throw new IllegalAccessError();
}
...
private void addOptions(Hashtable jndiProperties) {
jndiProperties.put(Context.INITIAL_CONTEXT_FACTORY, "org.jboss.naming.remote.client.InitialContextFactory");
jndiProperties.put("jboss.naming.client.ejb.context", "true");
jndiProperties.put("jboss.naming.client.connect.options.org.xnio.Options.SASL_POLICY_NOANONYMOUS", "false");
jndiProperties.put("jboss.naming.client.connect.options.org.xnio.Options.SASL_POLICY_NOPLAINTEXT", "false");
jndiProperties.put(SSL_STARTTLS, "false");
jndiProperties.put(TIMEOUT, Long.toString(timeoutValue));
if (startSsl) {
jndiProperties.put("jboss.naming.client.remote.connectionprovider.create.options.org.xnio.Options.SSL_ENABLED", "true");
jndiProperties.put(SSL_ENABLED, "true");
}
jndiProperties.put("jboss.naming.client.connect.options.org.xnio.Options.SASL_DISALLOWED_MECHANISMS", "JBOSS-LOCAL-USER");
jndiProperties.put(Context.PROVIDER_URL, providerURL);
jndiProperties.put(Context.SECURITY_PRINCIPAL, new String(decrypt(userName), UTF_8));
}
@Override
public void reconnect() {
try {
Hashtable jndiProperties = new Hashtable();
addOptions(jndiProperties);
jndiProperties.put(Context.SECURITY_CREDENTIALS, new String(decrypt(password), UTF_8));
initialContext = new InitialContext(jndiProperties);
} catch (NamingException ignore) {
}
}
}
CommManager.java
private boolean initializeEJBz(RemoteInvocation remoteInvocation) throws Exception {
cs = remoteInvocation.getRemoteObject(CustomerService.class, JNDINames.CUSTOMER_SERVICE_REMOTE);
...
// here is the integration point. try to get RMI service exported.
myService = remoteInvocation.getRemoteObject(HelloWorldRMI.class, JNDINames.HELLO_WORLD_REMOTE);
return true;
}
public static final String CUSTOMER_SERVICE_REMOTE = getRemoteBean("CustomerServiceBean", CustomerService.class.getName());
public static final String HELLO_WORLD_REMOTE = getRemoteBean("HelloWorldRMI", HelloWorldRMI.class.getName());
...
private static final String APPLICATION_NAME = "XXX";
private static final String MODULE_NAME = "YYYY";
...
protected static String getRemoteBean(String beanName, String interfaceName) {
return String.format("%s/%s/%s!%s", APPLICATION_NAME, MODULE_NAME, beanName, interfaceName);
}
Server Side:
HelloWorldRMI.java:
package com.example.springrmiserver.service;
public interface HelloWorldRMI {
public String sayHelloRmi(String msg);
}
HelloWorldRMIImpl:
package com.example.springrmiserver.service;
import java.util.Date;
public class HelloWorldRMIimpl implements HelloWorldRMI {
@Override
public String sayHelloRmi(String msg) {
System.out.println("================Server Side ========================");
System.out.println("Inside Rmi IMPL - Incoming msg : " + msg);
return "Hello " + msg + " :: Response time - > " + new Date();
}
}
Config.java:
package com.example.springrmiserver;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.remoting.rmi.RmiServiceExporter;
import org.springframework.remoting.support.RemoteExporter;
import com.example.springrmiserver.service.HelloWorldRMI;
import com.example.springrmiserver.service.HelloWorldRMIimpl;
@Configuration
public class Config {
@Bean
RemoteExporter registerRMIExporter() {
RmiServiceExporter exporter = new RmiServiceExporter();
exporter.setServiceName("helloworldrmi");
//exporter.setRegistryPort(1190);
exporter.setServiceInterface(HelloWorldRMI.class);
exporter.setService(new HelloWorldRMIimpl());
return exporter;
}
}
SpringServerApplication.java:
package com.example.springrmiserver;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import java.util.Collections;
@SpringBootApplication
public class SpringRmiServerApplication {
public static void main(String[] args)
{
//SpringApplication.run(SpringRmiServerApplication.class, args);
SpringApplication app = new SpringApplication(SpringRmiServerApplication.class);
app.setDefaultProperties(Collections.singletonMap("server.port", "8084"));
app.run(args);
}
}
So, my problem is: how do I make a pure/plain/standard Java RMI client, which lives in a Swing GUI, interoperate with a Spring RMI server?
Edit #1:
By the way, if you can provide further explanations or links about the internal details of Spring RMI stub creation and why they don't interoperate, I will be happy. Thanks indeed.
And also, if you look at my getRemoteBean method, which is from legacy code, how does this lookup string work? I mean, where does the RMI registry (or whatever it is) reside on the server, is this the default format, and can I customize it?
Edit #2:
I have also tried this kind of lookup in the client:
private void initializeSpringEJBz(RemoteInvocation remoteInvocation) throws Exception {
HelloWorldRMI helloWorldService = (HelloWorldRMI) Naming.lookup("rmi://xxx:1099/helloworldrmi");
System.out.println("Output" + helloWorldService.sayHelloRmi("hello "));
//hw = remoteInvocation.getRemoteObject(HelloWorldRMI.class, "helloworldrmi");
}
Edit #3:
While I was searching I found that someone on a Spring forum suggested that, to force Spring to create a plain Java RMI stub, we have to make some changes on the server side, so I tried this:
import java.rmi.server.RemoteObject;
public interface HelloWorldRMI extends **Remote** {
public String sayHelloRmi(String msg) throws **RemoteException**;
...
}
...
public class HelloWorldRMIimpl extends **RemoteObject** implements HelloWorldRMI {
...
}
Is the code above on the right path to solve the problem?
Besides that, the first problem is the connection setup, as you can see at the beginning of the question. Why am I getting this error? What is the difference between "rmi://" and "remote://"?
While I was trying to figure this out, I was able to find a solution. It's true that Spring RMI and Java RMI do not interoperate, but currently I don't have enough knowledge to explain the cause. I couldn't find any complete explanation of the internals of this mismatch yet.
The solution is to use plain Java RMI in the Spring backend by using java.rmi.* (Remote, RemoteException and java.rmi.server.UnicastRemoteObject).
java.rmi.server.UnicastRemoteObject is used for exporting a remote object with the Java Remote Method Protocol (JRMP) and obtaining a stub that communicates with the remote object.
Edit:
I think this post is closely related to this interoperability issue: Java Spring RMI Activation
Spring doesn't support RMI activation. Spring includes an RmiServiceExporter for calling remote objects that contains nice improvements over standard RMI, such as not requiring that services extend java.rmi.Remote.
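For contrast, a Spring-side client would normally consume an RmiServiceExporter endpoint through RmiProxyFactoryBean rather than java.rmi.Naming; a minimal sketch (the host, port and service name mirror the exporter configuration from the question and are assumptions about the deployment):
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.remoting.rmi.RmiProxyFactoryBean;

@Configuration
public class RmiClientConfig {
    @Bean
    public RmiProxyFactoryBean helloWorldProxy() {
        RmiProxyFactoryBean proxy = new RmiProxyFactoryBean();
        proxy.setServiceUrl("rmi://localhost:1099/helloworldrmi"); // placeholder host/port
        proxy.setServiceInterface(HelloWorldRMI.class);
        return proxy;
    }
}
Injecting that bean as a HelloWorldRMI lets Spring handle the lookup and invocation, which is why a non-Spring client cannot simply Naming.lookup the same export.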
Solution:
This is the interface that server exports:
package com.xxx.ejb.interf;
import java.rmi.Remote;
import java.rmi.RemoteException;
public interface HelloWorldRMI extends Remote {
public String sayHelloRmi(String msg) throws RemoteException;
}
and this is the implementation of exported class:
package com.xxx.proxyserver.service;
import com.xxx.ejb.interf.HelloWorldRMI;
import java.rmi.RemoteException;
import java.rmi.server.UnicastRemoteObject;
import java.util.Date;
public class HelloWorldRMIimpl extends UnicastRemoteObject implements HelloWorldRMI {
public HelloWorldRMIimpl() throws RemoteException{
super();
}
@Override
public String sayHelloRmi(String msg) {
System.out.println("================Server Side ========================");
System.out.println("Inside Rmi IMPL - Incoming msg : " + msg);
return "Hello " + msg + " :: Response time - > " + new Date();
}
}
and the RMI Registry is:
package com.xxx.proxyserver;
import com.xxx.proxyserver.service.CustomerServiceImpl;
import com.xxx.proxyserver.service.HelloWorldRMIimpl;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.util.Collections;
@SpringBootApplication
public class ProxyServerApplication {
public static void main(String[] args) throws Exception
{
Registry registry = LocateRegistry.createRegistry(1200); // this line automatically creates a new RMI registry. An existing one can also be reused.
System.out.println("Registry created !");
registry.rebind("just_an_alias",new HelloWorldRMIimpl());
registry.rebind("path/to/service_as_registry_key/CustomerService", new CustomerServiceImpl());
SpringApplication app = new SpringApplication(ProxyServerApplication.class);
app.setDefaultProperties(Collections.singletonMap("server.port", "8084")); // Service port
app.run(args);
}
}
Client:
...
HelloWorldRMI helloWorldService = (HelloWorldRMI)Naming.lookup("rmi://st-spotfixapp1:1200/just_an_alias");
System.out.println("Output" + helloWorldService.sayHelloRmi("hello from client ... "));
...

Attaching AWS DocumentDB to a Spring Boot application

I've recently tried using the new AWS DocumentDB service as my DB in a Spring application.
The cluster has been created in the same VPC as the EKS on which I deploy my application. Security groups allow connections between all nodes in the VPC.
AWS exposes a mongo URI like this for my DB cluster:
mongodb://<my-user>:<insertYourPassword>@<my-cluster-endpoint>:27017/?ssl_ca_certs=rds-combined-ca-bundle.pem&replicaSet=rs0
My question:
How do I make my Spring code work with this kind of connection?
I have tried adding the following to my application.properties file:
spring.data.mongodb.uri=mongodb://<my-user>:<insertYourPassword>@<my-cluster-endpoint>:27017/admin?ssl_ca_certs=rds-combined-ca-bundle.pem&replicaSet=rs00
spring.data.mongodb.database=admin
server.ssl.key-store=classpath:rds-combined-ca-bundle.pem
And placing the PEM file in /src/main/resources
However the code still fails to connect to the DB cluster.
I get this message as an error: No server chosen by com.mongodb.client.internal.MongoClientDelegate
Followed by an Exception in monitor thread while connecting to server ...
And finally a timeout exception: com.mongodb.MongoSocketReadTimeoutException: Timeout while receiving message
It looks kind of like a security group issue but I have no problem connecting with mongo shell from the same EC2 running the Spring application Pod.
Any ideas?
As mentioned in the documentation,
By design, you access Amazon DocumentDB (with MongoDB compatibility) resources from an Amazon EC2 instance within the same Amazon VPC as the Amazon DocumentDB resources. However, suppose that your use case requires that you or your application access your Amazon DocumentDB resources from outside the cluster's Amazon VPC. In that case, you can use SSH tunneling (also known as "port forwarding") to access your Amazon DocumentDB resources.
Connect from outside VPC
Your Amazon DocumentDB cluster should be running in your default virtual private cloud (VPC). To interact with your Amazon DocumentDB cluster, you must launch an Amazon Elastic Compute Cloud (Amazon EC2) instance into your default VPC, in the same AWS Region where you created your Amazon DocumentDB cluster.
Follow the guide to connect to the cluster
AWS DocumentDB cluster
GitHub Reference: spring-boot-aws-documentdb
Update:
To connect through SSL, use the logic below, setting SSL_CERTIFICATE to point to the AWS region-specific intermediate certificate.
This can be downloaded from SSL certs and copied to the base directory.
Alternatively, you can provide an absolute path in the SSL_CERTIFICATE variable.
private static final String SSL_CERTIFICATE = "rds-ca-2015-us-east-1.pem";
private static final String KEY_STORE_TYPE = "JKS";
private static final String KEY_STORE_PROVIDER = "SUN";
private static final String KEY_STORE_FILE_PREFIX = "sys-connect-via-ssl-test-cacerts";
private static final String KEY_STORE_FILE_SUFFIX = ".jks";
private static final String DEFAULT_KEY_STORE_PASSWORD = "changeit";
public static void main(String[] args) {
SSLContextHelper.setSslProperties();
SpringApplication.run(Application.class, args);
}
protected static class SSLContextHelper{
/**
* This method sets the SSL properties which specify the key store file, its type and password:
* @throws Exception
*/
private static void setSslProperties() {
try {
System.setProperty("javax.net.ssl.trustStore", createKeyStoreFile());
} catch (Exception e) {
e.printStackTrace();
}
System.setProperty("javax.net.ssl.trustStoreType", KEY_STORE_TYPE);
System.setProperty("javax.net.ssl.trustStorePassword", DEFAULT_KEY_STORE_PASSWORD);
}
private static String createKeyStoreFile() throws Exception {
return createKeyStoreFile(createCertificate()).getPath();
}
/**
* This method generates the SSL certificate
* @return
* @throws Exception
*/
private static X509Certificate createCertificate() throws Exception {
CertificateFactory certFactory = CertificateFactory.getInstance("X.509");
URL url = new File(SSL_CERTIFICATE).toURI().toURL();
if (url == null) {
throw new Exception();
}
try (InputStream certInputStream = url.openStream()) {
return (X509Certificate) certFactory.generateCertificate(certInputStream);
}
}
/**
* This method creates the Key Store File
* @param rootX509Certificate - the SSL certificate to be stored in the KeyStore
* @return
* @throws Exception
*/
private static File createKeyStoreFile(X509Certificate rootX509Certificate) throws Exception {
File keyStoreFile = File.createTempFile(KEY_STORE_FILE_PREFIX, KEY_STORE_FILE_SUFFIX);
try (FileOutputStream fos = new FileOutputStream(keyStoreFile.getPath())) {
KeyStore ks = KeyStore.getInstance(KEY_STORE_TYPE, KEY_STORE_PROVIDER);
ks.load(null);
ks.setCertificateEntry("rootCaCertificate", rootX509Certificate);
ks.store(fos, DEFAULT_KEY_STORE_PASSWORD.toCharArray());
}
return keyStoreFile;
}
}
connection output:
2019-01-17 13:33:22.316 INFO 3598 --- [onaws.com:27017] org.mongodb.driver.cluster : Canonical address mongodb.cktoiipu3bbd.us-east-1.docdb.amazonaws.com:27017 does not match server address. Removing mongodb.cluster-cktoiipu3bbd.us-east-1.docdb.amazonaws.com:27017 from client view of cluster
2019-01-17 13:33:22.401 INFO 3598 --- [onaws.com:27017] org.mongodb.driver.connection : Opened connection [connectionId{localValue:2}] to mongodb.cktoiipu3bbd.us-east-1.docdb.amazonaws.com:27017
2019-01-17 13:33:22.403 INFO 3598 --- [onaws.com:27017] org.mongodb.driver.cluster : Monitor thread successfully connected to server with description ServerDescription{address=mongodb.cktoiipu3bbd.us-east-1.docdb.amazonaws.com:27017, type=REPLICA_SET_PRIMARY, state=CONNECTED, ok=true, version=ServerVersion{versionList=[3, 6, 0]}, minWireVersion=0, maxWireVersion=6, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=null, roundTripTimeNanos=2132149, setName='rs0', canonicalAddress=mongodb.cktoiipu3bbd.us-east-1.docdb.amazonaws.com:27017, hosts=[mongodb.cktoiipu3bbd.us-east-1.docdb.amazonaws.com:27017], passives=[], arbiters=[], primary='mongodb.cktoiipu3bbd.us-east-1.docdb.amazonaws.com:27017', tagSet=TagSet{[]}, electionId=7fffffff0000000000000001, setVersion=null, lastWriteDate=Thu Jan 17 13:33:21 UTC 2019, lastUpdateTimeNanos=516261208876}
2019-01-17 13:33:22.406 INFO 3598 --- [onaws.com:27017] org.mongodb.driver.cluster : Discovered replica set primary mongodb.cktoiipu3bbd.us-east-1.docdb.amazonaws.com:27017
2019-01-17 13:33:22.595 INFO 3598 --- [ main] com.barath.app.CustomerService : Saving the customer with customer details com.barath.app.Customer#6c130c45
2019-01-17 13:33:22.912 INFO 3598 --- [ main] org.mongodb.driver.connection : Opened connection [connectionId{localValue:3}] to mongodb.cktoiipu3bbd.us-east-1.docdb.amazonaws.com:27017
2019-01-17 13:33:23.936 INFO 3598 --- [ main] pertySourcedRequestMappingHandlerMapping : Mapped URL path [/v2/api-docs] onto method [public org.springframework.http.ResponseEntity<springfox.documentation.spring.web.json.Json> springfox.documentation.swagger2.web.Swagger2Controller.getDocumentation(java.lang.String,javax.servlet.http.HttpServletRequest)]
The answer provided by @Sunny Pelletier worked for me with a mashup of @Frank's answer in our Java setup.
So for me, I wanted a solution that worked for our local docker setup and for any of our AWS environments that have active profiles and other env vars set in our environment via the CDK.
I first started with a simple configuration POJO to set up my properties outside the spring.data.mongo.* paradigm. You don't have to do this and can just let Spring handle it as it normally does to create the MongoClient.
My default local dev application.yml and corresponding config class.
mongo:
  user: mongo
  password: mongo
  host: localhost
  port: 27017
  database: my-service
@Data
@Configuration
@ConfigurationProperties(prefix = "mongo")
public class MongoConnectConfig {
private int port;
private String host;
private String user;
private String database;
private String password;
}
Then, I created two AbstractMongoClientConfiguration child classes; one for local and one for non-local. The key here is that I didn't create my own MongoClient. The reason is that I want all the good Spring Boot initialization stuff that you get with the framework, for example the auto-registration of all the converters and such.
Instead, I leveraged the customization hook provided by AbstractMongoClientConfiguration.configureClientSettings(MongoClientSettings.Builder builder) to aggregate the custom settings like the .pem piece.
The other part is that I leveraged profiles to enable/disable the configurations to make it "seamless" for local developers; we don't use any profiles other than default for local development, so it's easier to get set up without having to "know" so much from the start.
import com.mongodb.ConnectionString;
import com.mongodb.MongoClientSettings;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import org.springframework.data.mongodb.config.AbstractMongoClientConfiguration;
import org.springframework.data.mongodb.repository.config.EnableMongoRepositories;
@Slf4j
@Configuration
@RequiredArgsConstructor
@Profile({"!dev && !qa && !prod"})
@EnableMongoRepositories(basePackages = "co.my.data.repositories")
public class LocalDevMongoConfig extends AbstractMongoClientConfiguration {
private final MongoConnectConfig config;
@Override
public String getDatabaseName() {
return config.getDatabase();
}
@Override
protected void configureClientSettings(MongoClientSettings.Builder builder) {
log.info("Applying Local Dev MongoDB Configuration");
builder.applyConnectionString(new ConnectionString(getConnectionString()));
}
//mongodb://${mongo.user}:${mongo.password}@${mongo.host}:${mongo.port}/${mongo.database}?authSource=admin
private String getConnectionString() {
return String.format("mongodb://%s:%s@%s:%s/%s?authSource=admin",
config.getUser(),
config.getPassword(),
config.getHost(),
config.getPort(),
config.getDatabase()
);
}
}
import com.mongodb.ConnectionString;
import com.mongodb.MongoClientSettings;
import lombok.RequiredArgsConstructor;
import lombok.SneakyThrows;
import lombok.extern.slf4j.Slf4j;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import org.springframework.core.io.ClassPathResource;
import org.springframework.data.mongodb.config.AbstractMongoClientConfiguration;
import org.springframework.data.mongodb.repository.config.EnableMongoRepositories;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;
import java.io.ByteArrayInputStream;
import java.io.File;
import java.nio.file.Files;
import java.security.KeyStore;
import java.security.cert.CertificateFactory;
import java.util.Arrays;
import java.util.stream.Collectors;
@Slf4j
@Configuration
@RequiredArgsConstructor
@Profile({"dev || qa || prod"})
@EnableMongoRepositories(basePackages = "co.my.data.repositories")
public class DocumentDbMongoConfig extends AbstractMongoClientConfiguration {
private final MongoConnectConfig config;
@Override
public String getDatabaseName() {
return config.getDatabase();
}
@SneakyThrows
@Override
protected void configureClientSettings(MongoClientSettings.Builder builder) {
log.info("Applying AWS DocumentDB Configuration");
builder.applyConnectionString(new ConnectionString(getConnectionString()));
var endOfCertificateDelimiter = "-----END CERTIFICATE-----";
File resource = new ClassPathResource("certs/rds-combined-ca-bundle.pem").getFile();
String pemContents = new String(Files.readAllBytes(resource.toPath()));
var allCertificates = Arrays.stream(pemContents
.split(endOfCertificateDelimiter))
.filter(line -> !line.isBlank())
.map(line -> line + endOfCertificateDelimiter)
.collect(Collectors.toUnmodifiableList());
var certificateFactory = CertificateFactory.getInstance("X.509");
var keyStore = KeyStore.getInstance(KeyStore.getDefaultType());
// This allows us to use an in-memory key-store
keyStore.load(null);
for (int i = 0; i < allCertificates.size(); i++) {
var certString = allCertificates.get(i);
var caCert = certificateFactory.generateCertificate(new ByteArrayInputStream(certString.getBytes()));
keyStore.setCertificateEntry(String.format("AWS-certificate-%s", i), caCert);
}
var trustManagerFactory = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
trustManagerFactory.init(keyStore);
var sslContext = SSLContext.getInstance("TLS");
sslContext.init(null, trustManagerFactory.getTrustManagers(), null);
builder.applyToSslSettings(ssl -> {
ssl.enabled(true).context(sslContext);
});
}
/**
* Partly based on the AWS Console "Connectivity & security " section in the DocumentDB Cluster View.
* Since we register the pem above, we don't need to add the ssl & sslCAFile piece
* mongodb://${user}:${password}@${host}:${port}/?replicaSet=rs0&readPreference=secondaryPreferred&retryWrites=false
*/
private String getConnectionString() {
return String.format("mongodb://%s:%s#%s:%s/%s?replicaSet=rs0&readPreference=secondaryPreferred&retryWrites=false",
config.getUser(),
config.getPassword(),
config.getHost(),
config.getPort(),
config.getDatabase()
);
}
}
Lastly, we place the rds-combined-ca-bundle.pem in the src/main/resources/certs/ folder.
Side Notes:
Again, I believe you should be able to get away with using the default spring.data* properties and your MongoClient should have used them.
Ignore the @SneakyThrows here; I just did that for code brevity purposes. Handle your checked exceptions as you see fit.
I guess we can see why Kotlin syntax can be considered "cleaner" huh? :)
I can confirm the solution provided by @Barath allows you to secure the AWS DocumentDB TLS connection inside the Java application itself. This is a much cleaner approach compared to the one described by AWS on https://docs.aws.amazon.com/documentdb/latest/developerguide/connect_programmatically.html which requires you to run a script on your server which is more complicated and difficult for automated deploys etc.
To further set up the connection itself in the Spring application I used the following #Configuration class, which allows you to connect to a local MongoDB for testing during development, and the AWS one once deployed with settings from the properties file.
import com.mongodb.ConnectionString;
import com.mongodb.MongoClientSettings;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import org.springframework.core.convert.converter.Converter;
import org.springframework.data.mongodb.config.AbstractMongoClientConfiguration;
import org.springframework.data.mongodb.core.convert.MongoCustomConversions;
import org.springframework.data.mongodb.repository.config.EnableMongoRepositories;
@Configuration
@EnableMongoRepositories(basePackages = "YOUR.PACKAGE.WITH.repository")
public class MongoDbConfig extends AbstractMongoClientConfiguration {
#Value("${spring.profiles.active}")
private String activeProfile;
#Value("${mongodb.host:localhost}")
private String dbUri;
#Value("${mongodb.port:27017}")
private int dbPort;
#Value("${mongodb.database.name:YOUR_DOCUMENTDB_NAME}")
private String dbName;
#Value("${mongodb.username:}")
private String dbUser;
#Value("${mongodb.password:}")
private String dbPassword;
#Override
public String getDatabaseName() {
return dbName;
}
#Override
public MongoClient mongoClient() {
ConnectionString connectionString = new ConnectionString(getConnectionString());
MongoClientSettings mongoClientSettings = MongoClientSettings.builder()
.applyConnectionString(connectionString)
.build();
return MongoClients.create(mongoClientSettings);
}
private String getConnectionString() {
if (activeProfile.contains("local")) {
return String.format("mongodb://%s:%s/%s", dbUri, dbPort, dbName);
}
return String.format("mongodb://%s:%s#%s:%s/%s?ssl=true&replicaSet=rs0&readpreference=secondaryPreferred&retrywrites=false",
dbUser, dbPassword, dbUri, dbPort, dbName);
}
}
I actually faced the same issue as you did, but now AWS uses rds-combined-ca-bundle.pem, which combines many certificates into one.
If you don't want to create a trust-store using their outdated documentation, you can do it yourself and ship the rds-combined-ca-bundle.pem with your application, generating the key-store at runtime.
I managed to get this to work with this code sample. It has been tested with Spring 2.4, mongo-driver 4.1.1 and DocumentDB using MongoDB 4.0 compatibility.
val endOfCertificateDelimiter = "-----END CERTIFICATE-----"
// rds-combined-ca-bundle.pem contains more than one certificate. We need to add them all to the trust-store independantly.
val allCertificates = ClassPathResource("certificates/rds-combined-ca-bundle.pem").file.readText()
.split(endOfCertificateDelimiter)
.filter { it.isNotBlank() }
.map { it + endOfCertificateDelimiter }
val certificateFactory = CertificateFactory.getInstance("X.509")
val keyStore = KeyStore.getInstance(KeyStore.getDefaultType())
keyStore.load(null) // This allows us to use an in-memory key-store
allCertificates.forEachIndexed { index, certificate ->
val caCert = certificateFactory.generateCertificate(certificate.byteInputStream()) as X509Certificate
keyStore.setCertificateEntry("AWS-certificate-$index", caCert)
}
val trustManagerFactory = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm())
trustManagerFactory.init(keyStore)
val sslContext = SSLContext.getInstance("TLS")
sslContext.init(null, trustManagerFactory.trustManagers, null)
builder.applyToSslSettings {
it.enabled(true)
.context(sslContext)
}
Here is a solution that worked for me: just call the setSslProperties method before you connect to your DocumentDB.
/**
* This method sets the SSL properties which specify the key store file, its type and password.
*
* @throws Exception
*/
private static void setSslProperties() throws Exception {
System.setProperty("javax.net.ssl.trustStore", createKeyStoreFile());
System.setProperty("javax.net.ssl.trustStoreType", KEY_STORE_TYPE);
System.setProperty("javax.net.ssl.trustStorePassword", DEFAULT_KEY_STORE_PASSWORD);
}
/**
* This method returns the path of the Key Store File needed for the SSL verification during the IAM Database Authentication to
* the db instance.
*
* @return
* @throws Exception
*/
private static String createKeyStoreFile() throws Exception {
return createKeyStoreFile(createCertificate()).getPath();
}
/**
* This method generates the SSL certificate.
*
* @return
* @throws Exception
*/
private static X509Certificate createCertificate() throws Exception {
final CertificateFactory certFactory = CertificateFactory.getInstance("X.509");
final ClassLoader classLoader = MyClass.class.getClassLoader();
final InputStream is = classLoader.getResourceAsStream(SSL_CERTIFICATE);
return (X509Certificate) certFactory.generateCertificate(is);
}
/**
* This method creates the Key Store File.
*
* @param rootX509Certificate - the SSL certificate to be stored in the KeyStore
* @return
* @throws Exception
*/
private static File createKeyStoreFile(final X509Certificate rootX509Certificate) throws Exception {
final File keyStoreFile = File.createTempFile(KEY_STORE_FILE_PREFIX, KEY_STORE_FILE_SUFFIX);
try (final FileOutputStream fos = new FileOutputStream(keyStoreFile.getPath())) {
final KeyStore ks = KeyStore.getInstance(KEY_STORE_TYPE, KEY_STORE_PROVIDER);
ks.load(null);
ks.setCertificateEntry("rootCaCertificate", rootX509Certificate);
ks.store(fos, DEFAULT_KEY_STORE_PASSWORD.toCharArray());
}
return keyStoreFile;
}
Here are the constants.
public static final String SSL_CERTIFICATE = "rds-ca-2019-root.pem";
public static final String KEY_STORE_TYPE = "JKS";
public static final String KEY_STORE_PROVIDER = "SUN";
public static final String KEY_STORE_FILE_PREFIX = "sys-connect-via-ssl-test-cacerts";
public static final String KEY_STORE_FILE_SUFFIX = ".jks";
public static final String DEFAULT_KEY_STORE_PASSWORD = "changeit";
Here is the link for the rds-ca-2019-root.pem file; place that file under the resources folder.
Let me know if this works for you.
Here is a sample
setSslProperties();
final MongoCredential credential = MongoCredential.createCredential(userName, mongoProps.getDatabaseName(), password.toCharArray());
final MongoClientSettings settings = MongoClientSettings.builder()
.credential(credential)
.readPreference(ReadPreference.secondaryPreferred())
.retryWrites(false)
.applyToSslSettings(builder -> builder.enabled(true))
.applyToConnectionPoolSettings(connPoolBuilder ->
ConnectionPoolSettings.builder().
maxSize(1).build())
.applyToClusterSettings(builder ->
builder.hosts(Arrays.asList(new ServerAddress(clusterEndPoint, 27017))))
.build();
mongoClient = MongoClients.create(settings);
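A quick way to verify that this client actually reaches the cluster is to list the collections (a sketch reusing the mongoClient and mongoProps variables from the sample above; requires com.mongodb.client.MongoDatabase):
MongoDatabase db = mongoClient.getDatabase(mongoProps.getDatabaseName());
for (String collectionName : db.listCollectionNames()) {
    System.out.println("Found collection: " + collectionName); // proves the TLS handshake and authentication succeeded
}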
As pointed out by @mmr25 in comments to @Barath's answer, that solution only works when the service needs to connect only to DocumentDB. You start getting "PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target" for other HTTPS requests.
To address this issue we need to enable the SSL context only for DocumentDB connections. To do so, we can use Netty as the transport for the MongoDB connections. To enable Netty we need to add the following Maven dependency to the Spring Boot project:
<dependency>
<groupId>io.netty</groupId>
<artifactId>netty-tcnative-boringssl-static</artifactId>
<version>2.0.53.Final</version>
</dependency>
and put your pem file in your resources folder and define the following beans in a class annotated with @Configuration.
@Slf4j
@Configuration
public class MongoDbConfiguration {
private static final String AWS_PUBLIC_KEY_NAME = "rds-ca-2019-root.pem";
private final String mongoConnectionUri;
private final String databaseName;
public MongoDbConfiguration(@Value("${spring.data.mongodb.uri}") String mongoConnectionUri, @Value("${spring.data.mongodb.database}") String databaseName) {
this.mongoConnectionUri = mongoConnectionUri;
this.databaseName = databaseName;
}
@Bean
@Primary
@SneakyThrows
@Profile("!default")
public MongoClient mongoClient() {
SslContext sslContext = SslContextBuilder.forClient()
.sslProvider(SslProvider.OPENSSL)
.trustManager(new ClassPathResource(AWS_PUBLIC_KEY_NAME).getInputStream())
.build();
ConnectionString connectionString = new ConnectionString(mongoConnectionUri);
return MongoClients.create(
MongoClientSettings.builder()
.applyConnectionString(connectionString)
.applyToSslSettings(builder -> {
builder.enabled((null == connectionString.getSslEnabled()) ? false : connectionString.getSslEnabled());
builder.invalidHostNameAllowed((null == connectionString.getSslInvalidHostnameAllowed()) ? false : connectionString.getSslInvalidHostnameAllowed());
})
.streamFactoryFactory(NettyStreamFactoryFactory.builder()
.sslContext(sslContext)
.build())
.build());
}
}
Import Statements:
import com.mongodb.ConnectionString;
import com.mongodb.MongoClientSettings;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.connection.netty.NettyStreamFactoryFactory;
import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.SslContextBuilder;
import io.netty.handler.ssl.SslProvider;
import lombok.SneakyThrows;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.context.annotation.Profile;
import org.springframework.core.io.ClassPathResource;
import org.springframework.data.mongodb.MongoDatabaseFactory;
import org.springframework.data.mongodb.MongoTransactionManager;
Now you should be able to connect to your documentdb and other http connection should also work as expected.
Reference: https://www.mongodb.com/docs/drivers/java/sync/current/fundamentals/connection/tls/#customize-tls-ssl-configuration-through-the-netty-sslcontext
The simple solution is to remove the TLS (SSL) option in AWS; then you can remove "ssl_ca_certs=rds-combined-ca-bundle.pem" from your connection string. But if the application requires SSL DB connectivity, then you can use the
AWS Guide

How to use a custom ssh key location with Spring Cloud Config

I am trying to set up a Spring Cloud Config server that uses a custom location for the SSH private key.
The reason I need to specify a custom location for the key is that the user running the application has no home directory, so there is no way for me to use the default ~/.ssh directory for my key.
I know that there is the option of creating a read-only account and providing the user/password in the configuration, but the SSH way seems cleaner. Is there a way I can set this up?
After reading a lot more code... I found a relatively simple workaround that allows you to set whatever SSH keys you want.
First: Create a class as follows:
/**
* @file FixedSshSessionFactory.java
*
* @date Aug 23, 2016 2:16:11 PM
* @author jzampieron
*/
import org.eclipse.jgit.transport.JschConfigSessionFactory;
import org.eclipse.jgit.transport.OpenSshConfig.Host;
import org.eclipse.jgit.util.FS;
import com.jcraft.jsch.JSch;
import com.jcraft.jsch.JSchException;
import com.jcraft.jsch.Session;
/**
* Short Desc Here.
*
* @author jzampieron
*
*/
public class FixedSshSessionFactory extends JschConfigSessionFactory
{
protected String[] identityKeyPaths;
/**
* @param identityKeyPaths
*/
public FixedSshSessionFactory( String... identityKeyPaths )
{
this.identityKeyPaths = identityKeyPaths;
}
/* (non-Javadoc)
* @see org.eclipse.jgit.transport.JschConfigSessionFactory#configure(org.eclipse.jgit.transport.OpenSshConfig.Host, com.jcraft.jsch.Session)
*/
@Override
protected void configure( Host hc, Session session )
{
// nothing special needed here.
}
/* (non-Javadoc)
* @see org.eclipse.jgit.transport.JschConfigSessionFactory#getJSch(org.eclipse.jgit.transport.OpenSshConfig.Host, org.eclipse.jgit.util.FS)
*/
@Override
protected JSch getJSch( Host hc, FS fs ) throws JSchException
{
JSch jsch = super.getJSch( hc, fs );
// Clean out anything 'default' - any encrypted keys
// that are loaded by default before this will break.
jsch.removeAllIdentity();
for( final String identKeyPath : identityKeyPaths )
{
jsch.addIdentity( identKeyPath );
}
return jsch;
}
}
Then register it with jgit:
...
import org.eclipse.jgit.transport.SshSessionFactory;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.config.server.EnableConfigServer;
@SpringBootApplication
@EnableConfigServer
public class ConfigserverApplication
{
public static void main(String[] args) {
URL res = ConfigserverApplication.class.getClassLoader().getResource( "keys/id_rsa" );
String path = res.getPath();
SshSessionFactory.setInstance( new FixedSshSessionFactory( path ) );
SpringApplication.run(ConfigserverApplication.class, args);
}
}
For this example I'm storing the keys in the src/main/resources/keys folder and I'm using the class loader to get at them.
The removeAllIdentity call is important because JSch was loading my default SSH key before the one I specified, and then Spring Cloud was crashing out because it's encrypted.
This allowed me to successfully authenticate with Bitbucket.
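For completeness, the config server still points at the repository over SSH in the usual way; a sketch of the relevant application.yml section (the repository URL is a placeholder):
spring:
  cloud:
    config:
      server:
        git:
          uri: git@bitbucket.org:myorg/config-repo.git   # placeholder repo; the key registered above is used for this connection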
The FixedSshSessionFactory solution of @Jeffrey Zampieron is good. However, it won't work when packaging the Spring Boot app as a fat jar.
Polished a bit to work with a fat jar:
/**
* @file FixedSshSessionFactory.java
* @date Aug 23, 2016 2:16:11 PM
* @author jzampieron
*/
import com.jcraft.jsch.JSch;
import com.jcraft.jsch.JSchException;
import com.jcraft.jsch.Session;
import lombok.extern.slf4j.Slf4j;
import org.eclipse.jgit.transport.JschConfigSessionFactory;
import org.eclipse.jgit.transport.OpenSshConfig.Host;
import org.eclipse.jgit.util.FS;
import org.springframework.util.StreamUtils;
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
/**
* Short Desc Here.
*
* @author jzampieron
*/
@Slf4j
public class FixedSshSessionFactory extends JschConfigSessionFactory {
protected URL[] identityKeyURLs;
/**
* @param identityKeyURLs
*/
public FixedSshSessionFactory(URL... identityKeyURLs) {
this.identityKeyURLs = identityKeyURLs;
}
/* (non-Javadoc)
* @see org.eclipse.jgit.transport.JschConfigSessionFactory#configure(org.eclipse.jgit.transport.OpenSshConfig.Host, com.jcraft.jsch.Session)
*/
@Override
protected void configure(Host hc, Session session) {
// nothing special needed here.
}
/* (non-Javadoc)
* @see org.eclipse.jgit.transport.JschConfigSessionFactory#getJSch(org.eclipse.jgit.transport.OpenSshConfig.Host, org.eclipse.jgit.util.FS)
*/
@Override
protected JSch getJSch(Host hc, FS fs) throws JSchException {
JSch jsch = super.getJSch(hc, fs);
// Clean out anything 'default' - any encrypted keys
// that are loaded by default before this will break.
jsch.removeAllIdentity();
int count = 0;
for (final URL identityKey : identityKeyURLs) {
try (InputStream stream = identityKey.openStream()) {
jsch.addIdentity("key" + ++count, StreamUtils.copyToByteArray(stream), null, null);
} catch (IOException e) {
log.error("Failed to load identity " + identityKey.getPath());
}
}
return jsch;
}
}
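Wiring the URL-based variant in mirrors the earlier main() method; a minimal sketch (the keys/id_rsa classpath location is carried over from the first answer):
URL key = ConfigserverApplication.class.getClassLoader().getResource("keys/id_rsa");
SshSessionFactory.setInstance(new FixedSshSessionFactory(key));
SpringApplication.run(ConfigserverApplication.class, args);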
I am having a similar problem because my default SSH key is encrypted with a password and therefore doesn't "just work", which makes sense because this is a head-less setup.
I went source-diving into Spring Cloud Config, org.eclipse.jgit and eventually ended up in com.jcraft.jsch. The short answer is that neither JGit nor Spring Cloud expose an obvious way to do this.
JSch clearly supports this feature within a JSch() instance, but you can't get at it from the Spring Cloud level. At least not that I could find in an hour or so of looking.
