Hi, I am new to Spring Boot (but have been using Spring in my apps for a while). I am trying to use a custom SSM PropertyPlaceholderConfigurer, based on my SSM client, that reads my properties from AWS SSM in addition to the properties in my normal application.properties.
This code works fine in my pre-Spring-Boot application. However, in the new application, I see that it overrides application.properties, and this seems to be a well-documented problem.
So I decided to include the application.properties file in my custom PropertyPlaceholderConfigurer and load all the properties together, but it still does not resolve any ${...} placeholders in application.properties against values from my custom location. What more do I need to do?
As an alternative, I tried loading the properties I need from SSM via an EnvironmentPostProcessor, but it was unable to connect to AWS SSM at that point in the startup process (not sure why).
The answer is to use an EnvironmentPostProcessor. It works perfectly. See the code below:
import com.amazonaws.ClientConfiguration;
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.env.EnvironmentPostProcessor;
import org.springframework.core.env.ConfigurableEnvironment;
import org.springframework.core.env.MapPropertySource;
import java.util.HashMap;
import java.util.Map;

/**
 * Loads SSM parameters based on region and environment and adds them to the
 * properties that are already set.
 * Needs to be added to spring.factories so that it is invoked, as follows:
 * org.springframework.boot.env.EnvironmentPostProcessor=<full package>.SSMEnvironmentPostProcessor
 */
public class SSMEnvironmentPostProcessor implements EnvironmentPostProcessor {

    @Override
    public void postProcessEnvironment(ConfigurableEnvironment environment, SpringApplication application) {
        // SSMClient is our own wrapper around the AWS SSM SDK
        SSMClient ssmClient = new SSMClient(DefaultAWSCredentialsProviderChain.getInstance(),
                System.getProperty("env.region"), new ClientConfiguration());
        ssmClient.init();
        Map<String, Object> parameters = new HashMap<>();
        ssmClient.getParametersByPath("/" + System.getProperty("env"), true)
                .forEach(parameters::put);
        // register the SSM parameters with the lowest precedence so they do not
        // override values coming from application.properties
        MapPropertySource mapPropertySource = new MapPropertySource("ssm", parameters);
        environment.getPropertySources().addLast(mapPropertySource);
    }
}
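The registration mentioned in the Javadoc goes into src/main/resources/META-INF/spring.factories; the package below is a placeholder for your own:
org.springframework.boot.env.EnvironmentPostProcessor=com.example.config.SSMEnvironmentPostProcessor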
At the moment, I have the following HttpSecurityPolicy which is invoked on every request:
import io.quarkus.security.identity.SecurityIdentity;
import io.quarkus.vertx.http.runtime.security.HttpSecurityPolicy;
import io.smallrye.mutiny.Uni;
import io.vertx.ext.web.RoutingContext;
import javax.enterprise.context.ApplicationScoped;
@ApplicationScoped
public class SecurityHandler implements HttpSecurityPolicy {

    @Override
    public Uni<CheckResult> checkPermission(
            final RoutingContext request,
            final Uni<SecurityIdentity> identity,
            final AuthorizationRequestContext requestContext) {
        return ...
    }
}
According to the HttpSecurityPolicy documentation, if I create a named policy, it can then be referenced in the application.properties path-matching rules, which allows the policy to be applied only to specific requests. The same thing is explained in the quarkus.http.auth.permission.-permissions-.policy documentation.
So then, is it possible to create a named mypolicy policy that I can later reference like:
quarkus.http.auth.permission.mypermission.policy=mypolicy
I've tried with @Named but it is not working. I cannot use HttpSecurityPolicyBuildItem because I'm not developing an extension. It seems that Quarkus reads all available policies in HttpSecurityProcessor, but I have no idea how to add my policy to that policyMap.
I have mounted a volume that contains a username and password inside my pod. If I do:
kubectl exec -it my-app -- cat /mnt/secrets-store/git-token
{"USERNAME":"usernameofgit","PASSWORD":"dhdhfhehfhel"}
I want to read this USERNAME and PASSWORD using Spring Boot.
Assuming:
the file (git-token) format is fixed (JSON), and
the file may not have an extension suffix (.json),
...we have some problems!
I tried 2.3.5. Importing Extensionless Files like:
spring.config.import=/mnt/secrets-store/git-token[.json]
But it currently works only with YAML/.properties! (tested with spring-boot:2.6.1)
Same applies to 2.8. Type-safe Configuration Properties. ;(;(
In Spring Boot we can, out of the box, provide JSON config only via the SPRING_APPLICATION_JSON environment variable/command-line property, and it has to be the JSON string itself; it cannot be a path or file (yet).
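For completeness, that built-in mechanism looks like this (the keys and values here are placeholders; the nested JSON is flattened into git.username and git.password):
SPRING_APPLICATION_JSON='{"git":{"username":"someuser","password":"secret"}}' java -jar demo.jar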
The proposed (Baeldung) article shows ways to "enable JSON properties", but it is long, very detailed, shows a lot of code, and has some gaps and outdated parts (@Component on @ConfigurationProperties is rather "unconventional").
I tried the following (on local machine, under the mentioned assumptions):
package com.example.demo;
import com.fasterxml.jackson.annotation.JsonProperty;
import lombok.Data;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
@SpringBootApplication
public class DemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }

    @Value("""
        #{@jacksonObjectMapper.readValue(
            T(java.nio.file.Files).newInputStream(
                T(java.nio.file.Path).of('/mnt/secrets-store/git-token')),
            T(com.example.demo.GitInfo)
        )}""" // watch out with @Value and text blocks! (otherwise: No converter found capable of converting from type [com.example.demo.GitInfo] to type [java.lang.String])
    )
    GitInfo gitInfo;

    @Bean
    CommandLineRunner runner() {
        return (String... args) -> {
            System.out.println(gitInfo.getUsername());
            System.out.println(gitInfo.getPassword());
        };
    }
}

@Data
class GitInfo {
    @JsonProperty("USERNAME")
    private String username;
    @JsonProperty("PASSWORD")
    private String password;
}
With (only) spring-boot-starter-web and lombok on board, it prints the expected output.
Solution outline:
a POJO for this
the upper case is a little problematic, but can be handled as shown
a (crazy) @Value (Spring) expression, involving:
the (hopefully) auto-configured @jacksonObjectMapper bean (alternatively: a custom one)
ObjectMapper#readValue (alternatives possible)
java.nio.file.Files#newInputStream (alternatives possible)
java.nio.file.Path#of
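If the SpEL expression feels too exotic, the same result can be achieved with a plain @Bean method that reads the file with the auto-configured ObjectMapper, replacing the @Value field above (a minimal sketch under the same assumptions: fixed path, GitInfo POJO as above):
// inside DemoApplication; additional imports needed:
// com.fasterxml.jackson.databind.ObjectMapper, java.io.IOException,
// java.io.InputStream, java.nio.file.Files, java.nio.file.Path
@Bean
GitInfo gitInfo(ObjectMapper objectMapper) throws IOException {
    // read the mounted secret file once at startup and bind it to the POJO
    try (InputStream in = Files.newInputStream(Path.of("/mnt/secrets-store/git-token"))) {
        return objectMapper.readValue(in, GitInfo.class);
    }
}
The bean can then be injected wherever the credentials are needed (for example into the CommandLineRunner above).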
Once you have your volume mounted, all you need to do is read the JSON file from the Spring Boot application. I recommend reading Load Spring Boot Properties From a JSON File.
In short, you can create a class corresponding to your JSON file, something like this one.
@Component
@PropertySource("file:/mnt/secrets-store/git-token")
@ConfigurationProperties
public class GitToken {
    private String username;
    private String password;
    // getters and setters
}
Then you need to make sure it is picked up by component scanning, and you can autowire the class.
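One caveat: out of the box, @PropertySource only parses .properties (and XML) resources, so for a JSON file the linked article combines it with a custom PropertySourceFactory, roughly along these lines (a sketch, assuming a flat JSON object and Jackson on the classpath; the class name is made up):
import java.io.IOException;
import java.util.Map;
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.springframework.core.env.MapPropertySource;
import org.springframework.core.env.PropertySource;
import org.springframework.core.io.support.EncodedResource;
import org.springframework.core.io.support.PropertySourceFactory;

public class JsonPropertySourceFactory implements PropertySourceFactory {

    @Override
    public PropertySource<?> createPropertySource(String name, EncodedResource resource) throws IOException {
        // parse the JSON file into a flat map and expose it as a Spring property source
        Map<String, Object> values = new ObjectMapper()
                .readValue(resource.getInputStream(), new TypeReference<Map<String, Object>>() {});
        return new MapPropertySource("git-token", values);
    }
}
It is then referenced as @PropertySource(value = "file:/mnt/secrets-store/git-token", factory = JsonPropertySourceFactory.class).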
Spring Cloud Config Server accepts multiple profiles and returns the properties for all of them when I access the /env endpoint of the application. The response lists the properties specific to each profile. If the same property is present in two different property files, the one that is defined last takes precedence. Is there a way to get the final list of property keys and values that will be used by the application?
For Cloud Config Client Application
I've tried different ways and found the following (accidentally):
GET /env/.* returns the full list of configuration properties
For Cloud Config Server Application
It turns out this is already implemented, but not well documented. All you need to do is request json, yml or properties according to these patterns:
/{application}-{profile}.{ext}
/{label}/{application}-{profile}.{ext}
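For example, against a server running on localhost:8888, GET http://localhost:8888/myapp-production.properties (or .yml / .json) returns the flattened, merged key/value view for the production profile of myapp; the application and profile names here are placeholders.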
import java.util.Properties;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.core.env.AbstractEnvironment;
import org.springframework.core.env.CompositePropertySource;
import org.springframework.core.env.Environment;

public class MyClass {

    @Autowired
    private Environment env;

    Properties getProperties() {
        Properties props = new Properties();
        CompositePropertySource bootstrapProperties =
                (CompositePropertySource) ((AbstractEnvironment) env).getPropertySources().get("bootstrapProperties");
        for (String propertyName : bootstrapProperties.getPropertyNames()) {
            props.put(propertyName, bootstrapProperties.getProperty(propertyName));
        }
        return props;
    }
}
Sorry... this is my first time answering a question here. I created an account specifically to answer this question because I came upon it while researching the same issue. I found a solution that worked for me and decided to share it.
Here is an explanation of what was done:
I initialize a new Properties object (it could be a HashMap or whatever else you want).
I look up the property source for "bootstrapProperties", which is a CompositePropertySource object. This property source contains all of the application properties that were loaded.
I loop through all the property names returned from the getPropertyNames method on the CompositePropertySource object and create a new property entry for each.
I return the Properties object.
This seems to be an intentional limitation of the Spring Framework.
See here
You could hack it and inject the PropertySources interface, then loop over all the individual PropertySource objects, but you'd have to know what properties you're looking for.
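A minimal sketch of that "hack" (here injecting ConfigurableEnvironment, which exposes the same PropertySources; class and method names are illustrative):
import java.util.HashMap;
import java.util.Map;
import org.springframework.core.env.ConfigurableEnvironment;
import org.springframework.core.env.EnumerablePropertySource;
import org.springframework.core.env.PropertySource;

public class PropertyDumper {

    private final ConfigurableEnvironment environment;

    public PropertyDumper(ConfigurableEnvironment environment) {
        this.environment = environment;
    }

    /** Collects every enumerable property key with its effectively resolved value. */
    public Map<String, Object> dump() {
        Map<String, Object> result = new HashMap<>();
        for (PropertySource<?> source : environment.getPropertySources()) {
            if (source instanceof EnumerablePropertySource) {
                for (String name : ((EnumerablePropertySource<?>) source).getPropertyNames()) {
                    // environment.getProperty applies the normal precedence across all sources
                    result.putIfAbsent(name, environment.getProperty(name));
                }
            }
        }
        return result;
    }
}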
Externalized Configuration
Spring Boot allows you to externalize your configuration so you can work with the same application code in different environments. You can use properties files, YAML files, environment variables and command-line arguments to externalize configuration. Property values can be injected directly into your beans using the @Value annotation, accessed via Spring’s Environment abstraction or bound to structured objects via @ConfigurationProperties.
Spring Boot uses a very particular PropertySource order that is designed to allow sensible overriding of values. Properties are considered in the following order:
Devtools global settings properties on your home directory (~/.spring-boot-devtools.properties when devtools is active).
@TestPropertySource annotations on your tests.
@SpringBootTest#properties annotation attribute on your tests.
Command line arguments.
Properties from SPRING_APPLICATION_JSON (inline JSON embedded in an environment variable or system property).
ServletConfig init parameters.
ServletContext init parameters.
JNDI attributes from java:comp/env.
Java System properties (System.getProperties()).
OS environment variables.
A RandomValuePropertySource that only has properties in random.*.
Profile-specific application properties outside of your packaged jar (application-{profile}.properties and YAML variants).
Profile-specific application properties packaged inside your jar (application-{profile}.properties and YAML variants).
Application properties outside of your packaged jar (application.properties and YAML variants).
Application properties packaged inside your jar (application.properties and YAML variants).
@PropertySource annotations on your @Configuration classes.
Default properties (specified using SpringApplication.setDefaultProperties).
The program below prints the properties from the Spring Boot environment.
import org.springframework.beans.BeansException;
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ApplicationObjectSupport;
import org.springframework.core.env.Environment;
import org.springframework.core.env.MapPropertySource;
import org.springframework.core.env.MutablePropertySources;
import org.springframework.core.env.PropertySource;
import org.springframework.stereotype.Component;
import org.springframework.web.context.support.StandardServletEnvironment;
@Component
public class EnvironmentLogger extends ApplicationObjectSupport {

    @Override
    protected void initApplicationContext(ApplicationContext context) throws BeansException {
        Environment environment = context.getEnvironment();
        String[] profiles = environment.getActiveProfiles();
        if (profiles != null && profiles.length > 0) {
            for (String profile : profiles) {
                System.out.println(profile);
            }
        } else {
            System.out.println("Setting default profile");
        }
        // Print the properties of every map-backed property source
        if (environment instanceof StandardServletEnvironment) {
            StandardServletEnvironment env = (StandardServletEnvironment) environment;
            MutablePropertySources mutablePropertySources = env.getPropertySources();
            for (PropertySource<?> propertySource : mutablePropertySources) {
                if (propertySource instanceof MapPropertySource) {
                    MapPropertySource mapPropertySource = (MapPropertySource) propertySource;
                    System.out.println(propertySource.getName());
                    for (String propertyName : mapPropertySource.getPropertyNames()) {
                        Object val = mapPropertySource.getProperty(propertyName);
                        System.out.println(propertyName + " = " + val);
                    }
                }
            }
        }
    }
}
I configured Spring Cloud Config server like this:
@SpringBootApplication
@EnableAutoConfiguration
@EnableConfigServer
public class ConfigServer {

    public static void main(String[] args) {
        SpringApplication.run(ConfigServer.class, args);
    }
}
I'm using 'native' profile so properties are picked up from the file system:
server.port=8888
spring.profiles.active=native
spring.cloud.config.server.native.search-locations: classpath:/global
Now the tricky part is that some properties contain environment variables. Properties in 'global/application-production.properties' are configured like this:
test=${DOCKER_HOST}
When I start up the Config Server, everything works fine. However, when I access http://localhost:8888/testapp/production I see this:
{
name: "testapp",
profiles: [
"production"
],
label: null,
version: null,
propertySources: [
{
name: "classpath:/global/application-production.properties",
source: {
test: "${DOCKER_HOST}"
}
}
]
}
So the value from the ENV variable is not replacing ${DOCKER_HOST}; it is returned as is.
But if I access http://localhost:8888/application-production.properties, the result is not JSON but plain text:
test: tcp://192.168.99.100:2376
Spring documentation says:
The YAML and properties representations have an additional flag (provided as a boolean query parameter resolvePlaceholders) to signal that placeholders in the source documents, in the standard Spring ${…} form, should be resolved in the output where possible before rendering. This is a useful feature for consumers that don’t know about the Spring placeholder conventions.
For some reason resolvePlaceholders is not applied to the JSON representation, so config clients need to be aware of all ENV variables configured on the server.
Is it possible to force the JSON representation to resolve placeholders the same way the plain-text (properties) representation does?
I faced the same issue. After looking into the Spring Cloud Config repository, I found the following commit:
Omit system properties and env vars from placeholders in config
It looks like such behavior is not supported.
You can try the Property Overrides feature to override properties from the git Environment Repository.
To override a property foo at runtime, just set a system property or an environment variable named spring.cloud.config.server.overrides.foo before starting the config server.
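For example, setting spring.cloud.config.server.overrides.foo=bar on the config server (foo and bar being placeholder names) should make every client see foo=bar regardless of what the backing repository contains.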
There was an update to accomplish this in the following merge, where I found an implementation for resolvePlaceholders. That gave me the idea of simply creating a new controller that delegates to the EnvironmentController. This lets you resolve the configuration and is a good starting point.
import com.fasterxml.jackson.databind.ObjectMapper;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.config.server.environment.EnvironmentController;
import org.springframework.cloud.config.server.environment.EnvironmentRepository;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;
@RestController
@RequestMapping(method = RequestMethod.GET, path = "resolved/${spring.cloud.config.server.prefix:}")
public class ReplacedEnvironmentController {

    private EnvironmentController environmentController;

    @Autowired
    public ReplacedEnvironmentController(EnvironmentRepository repository) {
        environmentController = new EnvironmentController(repository, new ObjectMapper());
    }

    public ReplacedEnvironmentController(EnvironmentRepository repository, ObjectMapper objectMapper) {
        environmentController = new EnvironmentController(repository, objectMapper);
    }

    @RequestMapping("/{name}/{profiles:.*[^-].*}")
    public ResponseEntity<String> resolvedDefaultLabel(@PathVariable String name,
            @PathVariable String profiles) throws Exception {
        return resolvedLabelled(name, profiles, null);
    }

    @RequestMapping("/{name}/{profiles}/{label:.*}")
    public ResponseEntity<String> resolvedLabelled(@PathVariable String name, @PathVariable String profiles,
            @PathVariable String label) throws Exception {
        return environmentController.labelledJsonProperties(name, profiles, label, true);
    }
}
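With that controller in place, GET /resolved/{name}/{profiles} (optionally with /{label}) returns the properties rendered as JSON with ${...} placeholders resolved, since the last argument passed to labelledJsonProperties is the resolvePlaceholders flag.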
I am trying to connect to two different buckets in Couchbase using Spring Boot, but in a single Spring Boot application the database configuration only accepts a single bucket name.
Is it possible to connect to more than one couchbase bucket in spring-boot?
So it seems you want to use Spring Data Couchbase from within a Spring Boot application, and have (at least) two different repositories backed by two different Buckets?
You'll have to customize your Spring Data configuration programmatically (as opposed to letting Spring Boot do all the heavy lifting), but that's possible.
Spring Boot creates a CouchbaseConfigurer through which it creates default Cluster and Bucket (as tuned in the properties file).
If you have a CouchbaseRepository on your classpath, it'll also attempt to configure Spring Data by instantiating a SpringBootCouchbaseDataConfiguration class.
You can customize that by extending the SpringBootCouchbaseDataConfiguration above in your project and marking it as @Configuration.
Once you're ready to customize the Spring Data configuration programmatically, what you need is to create a second Bucket bean, a second CouchbaseTemplate that uses that bucket, and then instruct Spring Data Couchbase on which template to use with which Repository.
To that end, there is a configureRepositoryOperationsMapping(...) method. You can use the parameter of this method as a builder to:
link a specific Repository interface to a CouchbaseTemplate: map
say that any repo with a specific entity type should use a given template: mapEntity
even redefine the default template to use (initially the one created by Spring Boot): setDefault.
This second part is explained in the Spring Data Couchbase documentation.
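Putting those pieces together, a rough sketch of such a configuration might look like the following. It is written against Spring Data Couchbase 2.x (matching the SDK used elsewhere on this page); the host, bucket names, entity class and the exact AbstractCouchbaseConfiguration helper methods used here (couchbaseCluster(), couchbaseClusterInfo(), mappingCouchbaseConverter(), translationService()) are assumptions to verify against your version:
import java.util.Collections;
import java.util.List;
import com.couchbase.client.java.Bucket;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.couchbase.config.AbstractCouchbaseConfiguration;
import org.springframework.data.couchbase.core.CouchbaseTemplate;
import org.springframework.data.couchbase.repository.config.RepositoryOperationsMapping;

@Configuration
public class MultiBucketCouchbaseConfig extends AbstractCouchbaseConfiguration {

    @Override
    protected List<String> getBootstrapHosts() {
        return Collections.singletonList("127.0.0.1");      // placeholder host
    }

    @Override
    protected String getBucketName() {
        return "defaultBucket";                             // placeholder, backs the default template
    }

    @Override
    protected String getBucketPassword() {
        return "";                                          // placeholder
    }

    // a second Bucket opened on the same cluster
    @Bean
    public Bucket secondBucket() throws Exception {
        return couchbaseCluster().openBucket("secondBucket", "");   // placeholder name/password
    }

    // a CouchbaseTemplate bound to the second bucket
    @Bean
    public CouchbaseTemplate secondTemplate() throws Exception {
        return new CouchbaseTemplate(couchbaseClusterInfo(), secondBucket(),
                mappingCouchbaseConverter(), translationService());
    }

    // route entities (or repositories) to the right template via map / mapEntity / setDefault
    @Override
    public void configureRepositoryOperationsMapping(RepositoryOperationsMapping baseMapping) {
        try {
            baseMapping.mapEntity(SecondBucketEntity.class, secondTemplate());   // placeholder entity
        } catch (Exception e) {
            throw new IllegalStateException("Could not map entity to second template", e);
        }
    }
}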
Probably what you are trying to say is that Spring Boot provides pre-defined properties that you can modify, such as couchbase.cluster.bucket, which takes a single value, and you want to connect to two or more buckets.
In case you do not find a better solution, I can point you to a slightly different approach: set up your own Couchbase connection manager that you can inject anywhere you need it.
Here is an example of such a @Service that provides two connections to different buckets.
You can modify it to suit your needs; it is very small.
import java.util.ArrayList;
import java.util.List;
import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
import org.apache.log4j.Logger;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Service;
import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.Cluster;
import com.couchbase.client.java.CouchbaseCluster;
import com.couchbase.client.java.env.CouchbaseEnvironment;
import com.couchbase.client.java.env.DefaultCouchbaseEnvironment;
@Service
public class CouchbaseConnectionManager {

    private static final int TIMEOUT = 100000;

    @Value("#{configProp['couchbase.nodes']}")
    private List<String> nodes = new ArrayList<String>();

    @Value("#{configProp['couchbase.binary.bucketname']}")
    private String binaryBucketName;

    @Value("#{configProp['couchbase.nonbinary.bucketname']}")
    private String nonbinaryBucketName;

    @Value("#{configProp['couchbase.password']}")
    private String password;

    private Bucket binaryBucket;
    private Bucket nonbinaryBucket;
    private Cluster cluster;

    private static final Logger log = Logger.getLogger(CouchbaseConnectionManager.class);

    @PostConstruct
    public void createSession() {
        if (nodes != null && nodes.size() != 0) {
            try {
                CouchbaseEnvironment env = DefaultCouchbaseEnvironment.builder().connectTimeout(TIMEOUT).build();
                cluster = CouchbaseCluster.create(env, nodes);
                binaryBucket = cluster.openBucket(binaryBucketName, password);
                nonbinaryBucket = cluster.openBucket(nonbinaryBucketName, password);
                log.info(GOT_A_CONNECTION_TO_COUCHBASE_BUCKETS + binaryBucket + " " + nonbinaryBucket);
            } catch (Exception e) {
                log.warn(UNABLE_TO_GET_CONNECTION_TO_COUCHBASE_BUCKETS);
            }
        } else {
            log.warn(COUCH_NOT_CONFIGURED);
        }
    }

    @PreDestroy
    public void preDestroy() {
        if (cluster != null) {
            cluster.disconnect();
            log.info(SUCCESSFULLY_DISCONNECTED_FROM_COUCHBASE);
        }
    }

    public Bucket getBinaryBucket() {
        return binaryBucket;
    }

    public Bucket getNonbinaryBucket() {
        return nonbinaryBucket;
    }

    private static final String SUCCESSFULLY_DISCONNECTED_FROM_COUCHBASE = "Successfully disconnected from couchbase";
    private static final String GOT_A_CONNECTION_TO_COUCHBASE_BUCKETS = "Got a connection to couchbase buckets: ";
    private static final String COUCH_NOT_CONFIGURED = "COUCH not configured!!";
    private static final String UNABLE_TO_GET_CONNECTION_TO_COUCHBASE_BUCKETS = "Unable to get connection to couchbase buckets";
}
I followed Simon's approach and extended org.springframework.data.couchbase.config.AbstractCouchbaseConfiguration for the @Configuration instead of SpringBootCouchbaseDataConfiguration.
Also, a point worth mentioning: for some reason having separate Repository packages, each with its own @Configuration, doesn't really work. I struggled a great deal to make it work and eventually settled on having all the Repositories in a single package, ending up with something like the below to map the Entities and Templates.
baseMapping.mapEntity(Prime.class, noSQLSearchDBTemplate())
.mapEntity(PrimeDetailsMaster.class, noSQLSearchDBTemplate())
.mapEntity(HostDetailsMaster.class, noSQLSearchDBTemplate())
.mapEntity(Events.class, eventsTemplate())
.mapEntity(EventRulesMaster.class, eventsTemplate());