Autowiring MongoClient and MongoClientSettings without explicitly specifying a connection string in Spring

I am upgrading the MongoDB driver which requires moving away from the older MongoClientOptions to the newer MongoClientSettings.
In the older implementation, the following configuration was used within a @Configuration class, with the ConnectionString inferred from spring.data.mongodb.uri and an @Autowired MongoTemplate:
@Bean
public MongoClientOptions mongoOptions() {
    MongoClientOptions.Builder clientOptionsBuilder = MongoClientOptions.builder();
    // Timeout configurations
    if (sslIsEnabled) {
        clientOptionsBuilder.sslEnabled(true);
        // Other SSL options
    }
    return clientOptionsBuilder.build();
}
In the newer implementation, a ConnectionString parameter is expected explicitly, and the spring.data.mongodb.uri property is not picked up automatically. As a result, I have specified the connection string using the @Value annotation; not doing so results in the program inferring localhost:27017 as the connection source.
@Value("${spring.data.mongodb.uri}")
String connectionString;

@Bean
public MongoClient mongoClient() {
    MongoClientSettings.Builder clientSettingsBuilder = MongoClientSettings.builder()
            .applyToSocketSettings(builder -> {
                // Timeout configurations
            })
            .applyConnectionString(new ConnectionString(connectionString));
    if (sslIsEnabled) {
        clientSettingsBuilder.applyToSslSettings(builder -> {
            builder.enabled(sslIsEnabled);
            // Other SSL settings
        });
    }
    return MongoClients.create(clientSettingsBuilder.build());
}
While the documentation and other Stack Overflow posts mention that a MongoClientSettings bean overrides the property-file entries, is there a way to retrieve/infer the MongoClientSettings from the property files and then append other custom configurations to it?
I am using Spring Boot 2.6 and the Spring Boot starter dependency for MongoDB:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-mongodb</artifactId>
</dependency>

I found a similar question asked on GitHub earlier:
Modify MongoClientSettings while using auto configuration with mongodb #20195
Replacing the @Bean configuration that produced MongoClientOptions with a MongoClientSettingsBuilderCustomizer solved the problem.
@Bean
public MongoClientSettingsBuilderCustomizer mongoDBDefaultSettings()
        throws KeyManagementException, NoSuchAlgorithmException {
    return builder -> {
        builder.applyToSocketSettings(bldr -> {
            // Apply any custom socket settings
        });
        builder.applyToSslSettings(blockBuilder -> {
            // Apply SSL settings
        });
        // Apply other settings to the builder.
    };
}
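With this approach, Spring Boot applies the spring.data.mongodb.* properties (including the URI) to the same builder via its own customizer, so no @Value injection is needed. A minimal sketch of the full configuration class, assuming a hypothetical app.mongodb.ssl-enabled property for the SSL flag:

@Configuration
public class MongoConfig {

    // Hypothetical property name for illustration; any boolean source works
    @Value("${app.mongodb.ssl-enabled:false}")
    private boolean sslIsEnabled;

    @Bean
    public MongoClientSettingsBuilderCustomizer mongoDBDefaultSettings() {
        // The connection string from spring.data.mongodb.uri is applied
        // by the auto-configuration; this customizer only adds extras
        return builder -> {
            builder.applyToSocketSettings(socket -> {
                // Timeout configurations
            });
            if (sslIsEnabled) {
                builder.applyToSslSettings(ssl -> ssl.enabled(true));
            }
        };
    }
}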

Related

Leverage Spring Boot Redis auto-configuration logic for RedisConnectionFactory

Spring Boot auto-configures a RedisConnectionFactory if spring-data-redis exists on the classpath, and the RedisConnectionFactory is initialized in LettuceConnectionConfiguration if lettuce-core is available on the classpath.
I have only one Redis store as of now, so I am leveraging Spring Boot auto-configuration.
Now I am adding two Redis stores: one is used as the default, and the other is used when specified with the parameter cacheManager = "secondaryCacheManager" in the @Cacheable annotation, so the application should be able to cache and read from both Redis stores.
To configure both Redis stores, we have to configure both the primary and secondary RedisConnectionFactory and cacheManager using custom configuration (because Spring does not auto-configure a RedisConnectionFactory if one already exists in any custom configuration).
The above is custom configuration, and it misses a lot of the logic that happens while configuring the RedisConnectionFactory in LettuceConnectionConfiguration.
The auto-configuration logic in LettuceConnectionConfiguration is package-private, so it cannot be called directly from custom configuration.
We would like to leverage the auto-configuration logic in LettuceConnectionConfiguration while configuring the custom RedisConnectionFactory for both the primary and secondary Redis caches.
Is there a way to achieve this? The reason is that we would like to keep the Redis connection configuration the same as what Spring Boot auto-configuration produces.
Currently I am using the code below to configure both the primary and secondary RedisConnectionFactory with pool configuration, with some code copied from the LettuceConnectionConfiguration class.
public static LettuceConnectionFactory buildLettuceConnectionFactory(RedisProperties properties, ClientResources clientResources) {
    RedisStandaloneConfiguration standaloneConfiguration = new RedisStandaloneConfiguration(properties.getHost(), properties.getPort());
    standaloneConfiguration.setDatabase(properties.getDatabase());
    if (properties.getPassword() != null) {
        standaloneConfiguration.setPassword(RedisPassword.of(properties.getPassword()));
    }
    if (properties.getUsername() != null) {
        standaloneConfiguration.setUsername(properties.getUsername());
    }
    LettucePoolingClientConfiguration poolingClientConfiguration = LettucePoolingClientConfiguration.builder()
            .poolConfig(buildGenericObjectPoolConfig(properties))
            .shutdownTimeout(properties.getLettuce().getShutdownTimeout())
            .clientOptions(createClientOptions(properties))
            .clientResources(clientResources)
            .build();
    LettuceConnectionFactory lettuceConnectionFactory = new LettuceConnectionFactory(
            standaloneConfiguration, poolingClientConfiguration);
    lettuceConnectionFactory.afterPropertiesSet();
    return lettuceConnectionFactory;
}

private static GenericObjectPoolConfig buildGenericObjectPoolConfig(RedisProperties properties) {
    RedisProperties.Pool pool = properties.getLettuce().getPool();
    GenericObjectPoolConfig poolConfig = new GenericObjectPoolConfig();
    if (Objects.nonNull(pool)) {
        poolConfig.setMaxIdle(pool.getMaxIdle());
        poolConfig.setMinIdle(pool.getMinIdle());
        poolConfig.setMaxTotal(pool.getMaxActive());
        poolConfig.setMaxWaitMillis(pool.getMaxWait().toMillis());
    }
    return poolConfig;
}
private static ClientOptions createClientOptions(RedisProperties properties) {
    ClientOptions.Builder builder = initializeClientOptionsBuilder(properties);
    Duration connectTimeout = properties.getConnectTimeout();
    if (connectTimeout != null) {
        builder.socketOptions(SocketOptions.builder().connectTimeout(connectTimeout).build());
    }
    return builder.timeoutOptions(TimeoutOptions.enabled()).build();
}

private static ClientOptions.Builder initializeClientOptionsBuilder(RedisProperties properties) {
    if (properties.getCluster() != null) {
        ClusterClientOptions.Builder builder = ClusterClientOptions.builder();
        RedisProperties.Lettuce.Cluster.Refresh refreshProperties = properties.getLettuce().getCluster().getRefresh();
        ClusterTopologyRefreshOptions.Builder refreshBuilder = ClusterTopologyRefreshOptions.builder()
                .dynamicRefreshSources(refreshProperties.isDynamicRefreshSources());
        if (refreshProperties.getPeriod() != null) {
            refreshBuilder.enablePeriodicRefresh(refreshProperties.getPeriod());
        }
        if (refreshProperties.isAdaptive()) {
            refreshBuilder.enableAllAdaptiveRefreshTriggers();
        }
        return builder.topologyRefreshOptions(refreshBuilder.build());
    }
    return ClientOptions.builder();
}
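For reference, a minimal sketch of how this helper might be wired up for both stores, assuming a hypothetical spring.redis-secondary prefix for the second store's properties (the prefix, bean names, and the locally declared ClientResources are illustrative; once a custom RedisConnectionFactory exists, auto-configuration backs off and no longer supplies ClientResources):

@Configuration
public class RedisStoreConfiguration {

    // Auto-configuration backs off once a RedisConnectionFactory exists,
    // so the ClientResources bean is declared here as well
    @Bean(destroyMethod = "shutdown")
    public ClientResources clientResources() {
        return DefaultClientResources.create();
    }

    // Binds the standard spring.redis.* properties for the default store
    @Bean
    @Primary
    @ConfigurationProperties(prefix = "spring.redis")
    public RedisProperties primaryRedisProperties() {
        return new RedisProperties();
    }

    // Hypothetical prefix for the second store's properties
    @Bean
    @ConfigurationProperties(prefix = "spring.redis-secondary")
    public RedisProperties secondaryRedisProperties() {
        return new RedisProperties();
    }

    @Bean
    @Primary
    public LettuceConnectionFactory primaryRedisConnectionFactory(ClientResources clientResources) {
        return buildLettuceConnectionFactory(primaryRedisProperties(), clientResources);
    }

    @Bean
    public LettuceConnectionFactory secondaryRedisConnectionFactory(ClientResources clientResources) {
        return buildLettuceConnectionFactory(secondaryRedisProperties(), clientResources);
    }
}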

ReadOnlyKeyValueStore from a KTable using spring-kafka

I am migrating a Kafka Streams implementation that uses the plain Kafka APIs to spring-kafka instead, as it is incorporated in a Spring Boot application.
Everything works fine: the stream, GlobalKTable, and branching that I have all work perfectly, but I am having a hard time incorporating a ReadOnlyKeyValueStore. I am working from the spring-kafka documentation here: https://docs.spring.io/spring-kafka/docs/2.6.10/reference/html/#streams-spring
It says:
If you need to perform some KafkaStreams operations directly, you can access that internal KafkaStreams instance by using StreamsBuilderFactoryBean.getKafkaStreams(). You can autowire StreamsBuilderFactoryBean bean by type, but you should be sure to use the full type in the bean definition.
Based on that, I tried to incorporate it into my example in the following fragments:
@Bean(name = KafkaStreamsDefaultConfiguration.DEFAULT_STREAMS_CONFIG_BEAN_NAME)
public KafkaStreamsConfiguration defaultKafkaStreamsConfig() {
    Map<String, Object> props = defaultStreamsConfigs();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "quote-stream");
    props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
    props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, SpecificAvroSerde.class);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "stock-quotes-stream-group");
    return new KafkaStreamsConfiguration(props);
}

@Bean(name = KafkaStreamsDefaultConfiguration.DEFAULT_STREAMS_BUILDER_BEAN_NAME)
public StreamsBuilderFactoryBean defaultKafkaStreamsBuilder(KafkaStreamsConfiguration defaultKafkaStreamsConfig) {
    return new StreamsBuilderFactoryBean(defaultKafkaStreamsConfig);
}
...
final GlobalKTable<String, LeveragePrice> leverageBySymbolGKTable = streamsBuilder
        .globalTable(KafkaConfiguration.LEVERAGE_PRICE_TOPIC,
                Materialized.<String, LeveragePrice, KeyValueStore<Bytes, byte[]>>as("leverage-by-symbol-table")
                        .withKeySerde(Serdes.String())
                        .withValueSerde(leveragePriceSerde));

leveragePriceView = myKStreamsBuilder.getKafkaStreams().store("leverage-by-symbol-table", QueryableStoreTypes.keyValueStore());
But adding the StreamsBuilderFactoryBean definition (which seems to be needed to get a reference to KafkaStreams) causes an error:
The bean 'defaultKafkaStreamsBuilder', defined in class path resource [com/resona/springkafkastream/repository/KafkaConfiguration.class], could not be registered. A bean with that name has already been defined in class path resource [org/springframework/kafka/annotation/KafkaStreamsDefaultConfiguration.class] and overriding is disabled.
The issue is that I don't want to control the lifecycle of the stream, which is what I get with the plain Kafka APIs; I would like a reference to the default one so that Spring manages it, but whenever I try to expose the bean I get the error. Any ideas on the correct approach using spring-kafka?
P.S. I am not interested in solutions using spring-cloud-stream; I am looking for spring-kafka implementations.
You don't need to define any new beans; something like this should work...
spring.application.name=quote-stream
spring.kafka.streams.properties.default.key.serde=org.apache.kafka.common.serialization.Serdes$StringSerde
spring.kafka.streams.properties.default.value.serde=org.apache.kafka.common.serialization.Serdes$StringSerde
@SpringBootApplication
@EnableKafkaStreams
public class So69669791Application {

    public static void main(String[] args) {
        SpringApplication.run(So69669791Application.class, args);
    }

    @Bean
    GlobalKTable<String, String> leverageBySymbolGKTable(StreamsBuilder sb) {
        return sb.globalTable("gkTopic",
                Materialized.<String, String, KeyValueStore<Bytes, byte[]>>as("leverage-by-symbol-table"));
    }

    private ReadOnlyKeyValueStore<String, String> leveragePriceView;

    @Bean
    StreamsBuilderFactoryBean.Listener afterStart(StreamsBuilderFactoryBean sbfb,
            GlobalKTable<String, String> leverageBySymbolGKTable) {
        StreamsBuilderFactoryBean.Listener listener = new StreamsBuilderFactoryBean.Listener() {
            @Override
            public void streamsAdded(String id, KafkaStreams streams) {
                leveragePriceView = streams.store("leverage-by-symbol-table", QueryableStoreTypes.keyValueStore());
            }
        };
        sbfb.addListener(listener);
        return listener;
    }

    @Bean
    KStream<String, String> stream(StreamsBuilder builder) {
        KStream<String, String> stream = builder.stream("someTopic");
        stream.to("otherTopic");
        return stream;
    }
}
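Once the listener has populated leveragePriceView, it can be queried like any ReadOnlyKeyValueStore; a minimal sketch (the accessor name is illustrative, and the field is null until the streams instance has started):

// Hypothetical accessor on the same class; guards against querying too early
public String getLeverageFor(String symbol) {
    return leveragePriceView == null ? null : leveragePriceView.get(symbol);
}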

How to disable schema introspection in graphql-spqr-spring-boot-starter

I have integrated my Spring Boot application with graphql-spqr-spring-boot-starter (https://github.com/leangen/graphql-spqr-spring-boot-starter). I need a way to disable GraphQL schema introspection, since it is a security concern in production.
I am using graphql-spqr 0.9.9 and graphql-spqr-spring-boot-starter 0.0.4, but the code base changed for graphql-spqr 0.10. I'll try to cover both cases, but keep in mind you might have to tweak the code snippets a bit.
In graphql-spqr-spring-boot-starter, GraphQLSchemaGenerator is the bean used to generate the GraphQLSchema. It is defined in io.leangen.graphql.spqr.spring.autoconfigure.BaseAutoConfiguration (v0.10) or io.leangen.graphql.spqr.spring.autoconfigure.SpqrAutoConfiguration (v0.9).
You need to provide your own GraphQLSchemaGenerator bean that sets the GraphqlFieldVisibility for the introspection query. According to this issue (cached by Google: https://webcache.googleusercontent.com/search?q=cache:8VV29F3ovZsJ:https://github.com/leangen/graphql-spqr/issues/305), there are two different ways to set the field visibility:
Graphql-spqr 0.9
@Bean
public GraphQLSchemaGenerator graphQLSchemaGenerator(SpqrProperties spqrProperties) {
    GraphQLSchemaGenerator schemaGenerator = new GraphQLSchemaGenerator();
    schemaGenerator.withSchemaProcessors((schemaBuilder, buildContext) -> {
        schemaBuilder.fieldVisibility(new NoIntrospectionGraphqlFieldVisibility());
        return schemaBuilder;
    });
    // Other GraphQLSchemaGenerator configuration
    return schemaGenerator;
}
Graphql-spqr 0.10
@Bean
public GraphQLSchemaGenerator graphQLSchemaGenerator(SpqrProperties spqrProperties) {
    GraphQLSchemaGenerator schemaGenerator = new GraphQLSchemaGenerator();
    schemaGenerator.withSchemaProcessors((schemaBuilder, buildContext) -> {
        buildContext.codeRegistry.fieldVisibility(NoIntrospectionGraphqlFieldVisibility.NO_INTROSPECTION_FIELD_VISIBILITY);
        return schemaBuilder;
    });
    // Other GraphQLSchemaGenerator configuration
    return schemaGenerator;
}
You can take inspiration from the default implementation to set up the GraphQLSchemaGenerator properly.
This seems to work; there is a bean in the SpqrAutoConfiguration class that generates the GraphQLSchema from the generator object:
@Bean
public GraphQLSchema graphQLSchema(GraphQLSchemaGenerator schemaGenerator) {
    schemaGenerator.withSchemaProcessors((schemaBuilder, buildContext) -> {
        schemaBuilder.fieldVisibility(new NoIntrospectionGraphqlFieldVisibility());
        return schemaBuilder;
    });
    return schemaGenerator.generate();
}
Note that schemaBuilder.fieldVisibility is deprecated. For graphql-spqr 0.10:
@Bean
public GraphQLSchema graphQLSchema(GraphQLSchemaGenerator schemaGenerator) {
    schemaGenerator.withSchemaProcessors((schemaBuilder, buildContext) -> {
        schemaBuilder.codeRegistry(
                buildContext.codeRegistry
                        .fieldVisibility(NoIntrospectionGraphqlFieldVisibility.NO_INTROSPECTION_FIELD_VISIBILITY)
                        .build());
        return schemaBuilder;
    });
    return schemaGenerator.generate();
}
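To check that the visibility actually took effect, one can run an introspection query against the generated schema and expect it to be rejected; a minimal sketch using the graphql-java API:

// Sketch: with introspection disabled, this query should produce errors
// instead of schema metadata
GraphQL graphQL = GraphQL.newGraphQL(graphQLSchema).build();
ExecutionResult result = graphQL.execute("{ __schema { queryType { name } } }");
System.out.println(result.getErrors()); // expected: non-empty validation errors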

How can I solve an LDAP socket closed error in Spring LDAP?

I made a sample program for my project; it uses LDAP with Spring Boot.
I tested it in my development environment, where it works well. However, when I test it in the deployment environment, it throws a socket closed error.
The only differences are the LDAP URL and the password (I could not create an admin password containing special characters, e.g. @, #).
So I tried to access LDAP using ldapsearch in the deployment environment and got some errors there as well, but when I searched for this error I could not find a suitable solution.
This is my Spring configuration for accessing LDAP:
@Bean
public ContextSource contextSource() {
    LdapContextSource contextSource = new LdapContextSource();
    contextSource.setUrl("ldap://192.168.113.12");
    contextSource.setBase("dc=test,dc=test");
    contextSource.setUserDn("cn=admin,dc=test,dc=test");
    contextSource.setPassword("test2019!#");
    contextSource.afterPropertiesSet();
    // for development
    // contextSource.setUrl("ldap://192.168.0.192");
    // contextSource.setPassword("test2019");
    PoolingContextSource pcs = new PoolingContextSource();
    pcs.setDirContextValidator(new DefaultDirContextValidator());
    pcs.setContextSource(contextSource);
    TransactionAwareContextSourceProxy proxy = new TransactionAwareContextSourceProxy(pcs);
    return proxy;
}

@Bean
public LdapTemplate ldapTemplate() {
    return new LdapTemplate(contextSource());
}
(Screenshots of the errors when accessing LDAP via Spring LDAP and via ldapsearch were attached here.)
Help me.
P.S. I don't know how the LDAP server is implemented, because it was installed by another team...
I would say the port is missing ;)
contextSource.setUrl("ldap://192.168.113.12:389");
By the way, in my opinion a nicer approach is to set the properties like this:
application.yml (or application.properties):
ldap:
  contextSource:
    url: ldap://192.168.113.12:389 # Local
    base: dc=test,dc=test
    userDn: cn=admin,dc=test,dc=test
    password: test2019!#
and in the config class:
@Configuration
public class LdapConfiguration {

    @Bean
    @ConfigurationProperties(prefix = "ldap.context-source")
    public LdapContextSource contextSource() {
        return new LdapContextSource();
    }

    @Bean
    public ContextSource poolingLdapContextSource() {
        PoolingContextSource pcs = new PoolingContextSource();
        pcs.setDirContextValidator(new DefaultDirContextValidator());
        pcs.setContextSource(contextSource());
        TransactionAwareContextSourceProxy proxy = new TransactionAwareContextSourceProxy(pcs);
        return proxy;
    }

    // other configs like LdapTemplate
}
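For completeness, the LdapTemplate from the question would then be wired against the pooled proxy rather than the raw context source; a minimal sketch:

// Sketch: point the template at the pooled, transaction-aware proxy
@Bean
public LdapTemplate ldapTemplate(ContextSource poolingLdapContextSource) {
    return new LdapTemplate(poolingLdapContextSource);
}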

Custom property loader with Spring Cloud Config

I'm using Spring Cloud Config in my Spring Boot application, and I need to write some custom code so that properties are read from my corporate password vault whenever a property is flagged as such. I know Spring Cloud supports HashiCorp Vault, but that is not the product in this case.
I don't want to hard-code specific properties to be retrieved from a different source. For example, I would have a properties file for application app1 with profile dev with values:
spring.datasource.url=jdbc:mysql://localhost/test
spring.datasource.username=dbuser
spring.datasource.password=dbpass
spring.datasource.driver-class-name=com.mysql.jdbc.Driver
but for some other profiles such as prod, I would have:
spring.datasource.url=jdbc:mysql://localhost/test
spring.datasource.username=prod-user
spring.datasource.password=[[vault]]
spring.datasource.driver-class-name=com.mysql.jdbc.Driver
So I need the custom property loader to intercept the loaded properties whenever it finds a value equal to [[vault]] (or some other flag) and query the corporate vault instead. In this case, my custom property loader would fetch the value of spring.datasource.password from the corporate password vault. All other properties would still be returned as-is from the values loaded by the standard Spring Cloud Config client.
I would like to do that using annotated code only, no XML configuration.
You can implement your own PropertySourceLocator and add an entry to spring.factories in the META-INF directory.
# spring.factories
org.springframework.cloud.bootstrap.BootstrapConfiguration=\
foo.bar.MyPropertySourceLocator
Then you can refer to keys in your corporate password vault like normal properties in Spring.
spring.datasource.url=jdbc:mysql://localhost/test
spring.datasource.username=prod-user
spring.datasource.password=${loaded.password.from.corporate.vault}
spring.datasource.driver-class-name=com.mysql.jdbc.Driver
Implementation by HashiCorp: VaultPropertySourceLocatorSupport
While trying to solve the identical problem, I came up with a workaround that may be acceptable.
Here is my solution below.
public class JBossVaultEnvironmentPostProcessor implements EnvironmentPostProcessor {

    @Override
    public void postProcessEnvironment(ConfigurableEnvironment environment, SpringApplication application) {
        MutablePropertySources propertySources = environment.getPropertySources();
        // Collect all property values flagged with the VAULT:: prefix
        Map<String, String> sensitiveProperties = propertySources.stream()
                .filter(propertySource -> propertySource instanceof EnumerablePropertySource)
                .map(propertySource -> (EnumerablePropertySource<?>) propertySource)
                .map(propertySource -> {
                    Map<String, String> vaultProperties = new HashMap<>();
                    String[] propertyNames = propertySource.getPropertyNames();
                    for (String propertyName : propertyNames) {
                        String propertyValue = propertySource.getProperty(propertyName).toString();
                        if (propertyValue.startsWith("VAULT::")) {
                            vaultProperties.put(propertyName, propertyValue);
                        }
                    }
                    return vaultProperties;
                })
                .reduce(new HashMap<>(), (m1, m2) -> {
                    m1.putAll(m2);
                    return m1;
                });
        // Resolve each flagged value from the vault and register the results
        // as the highest-precedence property source
        Map<String, Object> vaultProperties = new HashMap<>();
        sensitiveProperties.keySet().stream()
                .forEach(key -> vaultProperties.put(key, VaultReader.readAttributeValue(sensitiveProperties.get(key))));
        propertySources.addFirst(new MapPropertySource("vaultProperties", vaultProperties));
    }
}
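For this EnvironmentPostProcessor to run, it must be registered in META-INF/spring.factories as well (the package name here is illustrative):

# spring.factories
org.springframework.boot.env.EnvironmentPostProcessor=\
com.example.JBossVaultEnvironmentPostProcessor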
