How to read secret key and value from Kubernetes volume mount using Spring Boot

I have mounted a volume that contains a username and password inside the pod. If I do:
kubectl exec -it my-app -- cat /mnt/secrets-store/git-token
{"USERNAME":"usernameofgit","PASSWORD":"dhdhfhehfhel"}
I want to read this USERNAME and PASSWORD using Spring Boot.

Assuming:
the file (git-token) format is fixed (JSON).
the file may not have an extension suffix (.json).
... we have some problems!
I tried 2.3.5. Importing Extensionless Files like:
spring.config.import=/mnt/secrets-store/git-token[.json]
but it currently only works with YAML/.properties (tested with spring-boot:2.6.1).
The same applies to 2.8. Type-safe Configuration Properties.
In Spring Boot we can (out of the box) provide JSON config only as the SPRING_APPLICATION_JSON environment/command-line property, and it has to be the JSON string itself; it cannot be a path or a file (yet).
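To make that limitation concrete (an illustration, not part of the original setup), the secret's content itself would have to be inlined into the environment, e.g. as
SPRING_APPLICATION_JSON={"USERNAME":"usernameofgit","PASSWORD":"dhdhfhehfhel"}
rather than pointing at the mounted file.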
The proposed (Baeldung) article shows ways to "enable JSON properties", but it is a long article with many details, it shows a lot of code, and it has notable gaps and outdated parts (@Component on @ConfigurationProperties is rather "unconventional").
I tried the following (on local machine, under the mentioned assumptions):
package com.example.demo;

import com.fasterxml.jackson.annotation.JsonProperty;
import lombok.Data;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class DemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }

    @Value("""
            #{@jacksonObjectMapper.readValue(
                T(java.nio.file.Files).newInputStream(
                    T(java.nio.file.Path).of('/mnt/secrets-store/git-token')),
                T(com.example.demo.GitInfo)
            )}""" // watch out with @Value and text blocks! (otherwise: No converter found capable of converting from type [com.example.demo.GitInfo] to type [java.lang.String])
    )
    GitInfo gitInfo;

    @Bean
    CommandLineRunner runner() {
        return (String... args) -> {
            System.out.println(gitInfo.getUsername());
            System.out.println(gitInfo.getPassword());
        };
    }
}

@Data
class GitInfo {
    @JsonProperty("USERNAME")
    private String username;
    @JsonProperty("PASSWORD")
    private String password;
}
With (only) spring-boot-starter-web and lombok on board, it prints the expected output.
Solution outline:
a POJO for this
the upper-case keys are a little problematic, but can be handled as shown (@JsonProperty)
a (crazy) @Value (Spring) expression, involving:
the (hopefully) auto-configured @jacksonObjectMapper bean (alternatively: a custom one)
ObjectMapper#readValue (alternatives possible)
java.nio.file.Files#newInputStream (alternatives possible)
java.nio.file.Path#of
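As a plainer alternative to the @Value/SpEL expression above, here is a minimal sketch (same GitInfo POJO and file path as before; the bean method name is arbitrary) that binds the file once at startup:

// Alternative sketch: declare this inside DemoApplication instead of the @Value field.
// Additional imports needed: com.fasterxml.jackson.databind.ObjectMapper,
// java.io.IOException, java.io.InputStream, java.nio.file.Files, java.nio.file.Path
@Bean
GitInfo gitInfo(ObjectMapper objectMapper) throws IOException {
    // read and bind the mounted JSON secret once, at startup
    try (InputStream in = Files.newInputStream(Path.of("/mnt/secrets-store/git-token"))) {
        return objectMapper.readValue(in, GitInfo.class);
    }
}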

Once you have your volume mounted, all you need to do is read the JSON file from the Spring Boot application. I recommend reading Load Spring Boot Properties From a JSON File.
In short, you can create a class corresponding to your JSON file, something like this one.
@Component
@PropertySource("file:/mnt/secrets-store/git-token")
@ConfigurationProperties
public class GitToken {

    private String username;
    private String password;

    // getters and setters
}
Then you need to make sure it is covered by component scanning, and you can autowire your class, as sketched below.
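For example, a minimal sketch of a consumer (GitClient is a made-up name; it only assumes the GitToken bean above is picked up by component scanning):

import org.springframework.stereotype.Service;

@Service
public class GitClient {

    private final GitToken gitToken;

    // GitToken is injected like any other bean via constructor injection
    public GitClient(GitToken gitToken) {
        this.gitToken = gitToken;
    }

    public void printCredentials() {
        System.out.println(gitToken.getUsername());
        System.out.println(gitToken.getPassword());
    }
}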

Related

How to create a named HttpSecurityPolicy that can be used with quarkus.http.auth

At the moment, I have the following HttpSecurityPolicy which is invoked on every request:
import io.quarkus.security.identity.SecurityIdentity;
import io.quarkus.vertx.http.runtime.security.HttpSecurityPolicy;
import io.smallrye.mutiny.Uni;
import io.vertx.ext.web.RoutingContext;
import javax.enterprise.context.ApplicationScoped;

@ApplicationScoped
public class SecurityHandler implements HttpSecurityPolicy {

    @Override
    public Uni<CheckResult> checkPermission(
            final RoutingContext request,
            final Uni<SecurityIdentity> identity,
            final AuthorizationRequestContext requestContext) {
        return ...
    }
}
According to the HttpSecurityPolicy documentation, if I create a named policy, it can then be referenced in the application.properties path-matching rules, which allows the policy to be applied to specific requests. The same thing is explained in the quarkus.http.auth.permission.-permissions-.policy documentation.
So then, is it possible to create a named mypolicy policy that I can later reference like:
quarkus.http.auth.permission.mypermission.policy=mypolicy
I've tried with @Named but it is not working. I cannot use HttpSecurityPolicyBuildItem because I'm not developing an extension. It seems that Quarkus reads all available policies in HttpSecurityProcessor, but I have no idea how I can add my policy to that policyMap.
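For reference, the path-matching rule such a named policy would plug into looks roughly like this (a sketch based on the documented quarkus.http.auth.permission.* properties; /api/* is just a placeholder path):

quarkus.http.auth.permission.mypermission.paths=/api/*
quarkus.http.auth.permission.mypermission.policy=mypolicy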

Share a org.eclipse.microprofile.graphql.GraphQLApi in a JAR File

I'm writing a GraphQL API in Java. I would like to provide this GraphQL API in a JAR file so that the API implementation can be consumed/reused in other Java EE applications running on OpenLiberty 22. This is my API implementation.
import org.eclipse.microprofile.graphql.Description;
import org.eclipse.microprofile.graphql.GraphQLApi;
import org.eclipse.microprofile.graphql.Name;
import org.eclipse.microprofile.graphql.NonNull;
import org.eclipse.microprofile.graphql.Query;
...

@GraphQLApi
@RequestScoped
public class SystemStatusGraphQL {

    @Inject
    private DbAdapter databaseAdapter;

    @Query("system")
    @NonNull
    @Description("Gets status information about the system")
    public SystemStatus getSystemStatus(@Name("name") String name) {
        return databaseAdapter.getCurrentStatus(name);
    }
}
I deployed this JAR file as a Maven package and consumed it in my target application, and I have two separate problems or questions now.
How do I reuse this API so that @GraphQLApi is recognized by OpenLiberty? I tried to inherit from the API class, but OpenLiberty does not load the GraphQL endpoint.
public class MySystemStatusGraphQL extends com.test.stystem.status.api.SystemStatusGraphQL {
}
When I paste in all the GraphQL code shown at the top and only try to reuse the model class SystemStatus, Jandex cannot resolve the object types. When starting the OpenLiberty server, this error occurs: Class [com.mylib.system.status.database.model.SystemStatus] is not indexed in Jandex. Can not scan Object Type, might not be mapped correctly. The error persists even if I create a Jandex index file at build time and add it as part of the JAR file. The SystemStatus class contains all type definitions, as you can see:
#Type("SystemStatus")
#Description("Describes current state of system.")
public class SystemStatus{
#NonNull
#Name("_id")
private String id;
public String getId() {
return id;
}
public void setId(String id) {
this.id = id;
}
#NonNull
#Name("serial")
private String serial;
public String getSerial() {
return this.serial;
}
}
I would prefer to reuse the whole API, which brings up the issue mentioned in question 1. If this is not possible, how can I solve the issue mentioned in question 2?

Warning about config property after migrated to Quarkus 2.8.0

I have migrated an extension from Quarkus 2.7.5 to Quarkus 2.8.0.
After the migration, when I run mvn clean install, the console shows me weird warnings about all (maybe 100) config properties (some of them are defined by me, others not, like java.specification.version):
[WARNING] [io.quarkus.config] Unrecognized configuration key "my.property" was provided; it will be ignored; verify that the dependency extension for this configuration is set or that you did not make a typo
I think my integration-tests module causes this issue.
Here is my class in runtime folder:
import java.util.Optional;

import io.quarkus.runtime.annotations.ConfigItem;
import io.quarkus.runtime.annotations.ConfigPhase;
import io.quarkus.runtime.annotations.ConfigRoot;

@ConfigRoot(phase = ConfigPhase.RUN_TIME, prefix = "", name = "myApp")
public class MyAppConfig {

    @ConfigItem(defaultValue = "defaultValue")
    String firstProperty;

    @ConfigItem(defaultValue = "")
    Optional<String> secondProperty;

    @ConfigItem(defaultValue = "defaultValue")
    String thirdProperty;

    // Getters ...
}
Here is my test:
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.eclipse.microprofile.config.inject.ConfigProperty;
import org.junit.jupiter.api.Test;

import io.quarkus.test.junit.QuarkusTest;

@QuarkusTest
public class MyAppIntegrationTest {

    @ConfigProperty(name = "myApp.first-property")
    String firstProperty;

    @ConfigProperty(name = "myApp.second-property")
    String secondProperty;

    @ConfigProperty(name = "myApp.third-property")
    String thirdProperty;

    @Test
    public void testConfig() {
        assertEquals("firstValue", firstProperty);
        assertEquals("secondValue", secondProperty);
        assertEquals("thirdValue", thirdProperty);
    }
}
Can someone help me with this? For instance, do I need a BuildItem for that?
Thanks for your help.
After spending hours on those warnings, I found that they were caused by the empty prefix field of the @ConfigRoot annotation.
Setting the name field to "" and the prefix to "myApp" fixed the issue.
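In other words, a minimal sketch of the fixed annotation (same MyAppConfig class and imports as above; only the prefix and name values are swapped):

// prefix is no longer empty; properties keep resolving as myApp.first-property, etc.
@ConfigRoot(phase = ConfigPhase.RUN_TIME, prefix = "myApp", name = "")
public class MyAppConfig {
    // config items unchanged
}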

Spring cloud config server. Environment variables in properties

I configured Spring Cloud Config server like this:
@SpringBootApplication
@EnableAutoConfiguration
@EnableConfigServer
public class ConfigServer {

    public static void main(String[] args) {
        SpringApplication.run(ConfigServer.class, args);
    }
}
I'm using 'native' profile so properties are picked up from the file system:
server.port=8888
spring.profiles.active=native
spring.cloud.config.server.native.search-locations: classpath:/global
Now the tricky part is that some properties contain environment variables. Properties in 'global/application-production.properties' are configured like this:
test=${DOCKER_HOST}
When I start up the Config Server, everything works fine. However, when I access http://localhost:8888/testapp/production I see this:
{
    "name": "testapp",
    "profiles": [
        "production"
    ],
    "label": null,
    "version": null,
    "propertySources": [
        {
            "name": "classpath:/global/application-production.properties",
            "source": {
                "test": "${DOCKER_HOST}"
            }
        }
    ]
}
So the value from the ENV variable does not replace ${DOCKER_HOST}; it is returned as is.
But if I access http://localhost:8888/application-production.properties then the result is not JSON but rather plain text:
test: tcp://192.168.99.100:2376
Spring documentation says:
The YAML and properties representations have an additional flag (provided as a boolean query parameter resolvePlaceholders) to signal that placeholders in the source documents, in the standard Spring ${…​} form, should be resolved in the output where possible before rendering. This is a useful feature for consumers that don’t know about the Spring placeholder conventions.
For some reason resolvePlaceholders is not working for the JSON representation, so config server clients need to be aware of all ENV variables configured on the server.
Is it possible to force the JSON representation to resolve placeholders the same way as the plain text (properties) representation?
I faced the same issue. After looking into Spring Cloud Config Repository I have found the following commit:
Omit system properties and env vars from placeholders in config
It looks like such behavior is not supported.
You can try the Property Overrides feature to override properties from the Git Environment Repository.
To override property foo at runtime, just set a system property or an environment variable spring.cloud.config.server.overrides.foo before starting the config server.
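For example (a sketch; foo and bar are placeholder names/values), in the config server's own configuration:

spring.cloud.config.server.overrides.foo=bar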
There was an update in order to accomplish this, in the following merge. I found an implementation for resolvePlaceholders, which gave me the idea of simply creating a new controller that delegates to the EnvironmentController. This allows you to resolve the configuration and is a good starting point.
import com.fasterxml.jackson.databind.ObjectMapper;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.config.server.environment.EnvironmentController;
import org.springframework.cloud.config.server.environment.EnvironmentRepository;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping(method = RequestMethod.GET, path = "resolved/${spring.cloud.config.server.prefix:}")
public class ReplacedEnvironmentController {

    private EnvironmentController environmentController;

    @Autowired
    public ReplacedEnvironmentController(EnvironmentRepository repository) {
        environmentController = new EnvironmentController(repository, new ObjectMapper());
    }

    public ReplacedEnvironmentController(EnvironmentRepository repository,
            ObjectMapper objectMapper) {
        environmentController = new EnvironmentController(repository, objectMapper);
    }

    @RequestMapping("/{name}/{profiles:.*[^-].*}")
    public ResponseEntity<String> resolvedDefaultLabel(@PathVariable String name,
            @PathVariable String profiles) throws Exception {
        return resolvedLabelled(name, profiles, null);
    }

    @RequestMapping("/{name}/{profiles}/{label:.*}")
    public ResponseEntity<String> resolvedLabelled(@PathVariable String name, @PathVariable String profiles,
            @PathVariable String label) throws Exception {
        return environmentController.labelledJsonProperties(name, profiles, label, true);
    }
}
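With this controller in place, the placeholder-resolved JSON should be available under the extra prefix, e.g. something like http://localhost:8888/resolved/testapp/production (the path follows from the request mappings above; the host, port and app/profile names are taken from the question's example).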

Is it possible to connect to two different buckets of couchbase in spring boot

I am trying to connect to two different buckets in couchbase using spring boot. But in a single spring boot application the database config only takes a single bucket name.
Is it possible to connect to more than one couchbase bucket in spring-boot?
So it seems you want to use Spring Data Couchbase from within a Spring Boot application, and have (at least) two different repositories backed by two different Buckets?
You'll have to customize your Spring Data configuration programmatically (as opposed to letting Spring Boot do all the heavy lifting), but that's possible.
Spring Boot creates a CouchbaseConfigurer through which it creates default Cluster and Bucket (as tuned in the properties file).
If you have a CouchbaseRepository on your classpath, it'll also attempt to configure Spring Data by instantiating a SpringBootCouchbaseDataConfiguration class.
You can customize that by extending the SpringBootCouchbaseDataConfiguration above in your project, marking it as @Configuration.
Once you're ready to customize the Spring Data configuration programmatically, what you need is to create a second Bucket bean, a second CouchbaseTemplate that uses that bucket, and then instruct Spring Data Couchbase on which template to use with which Repository.
To that end, there is a configureRepositoryOperationsMapping(...) method. You can use the parameter of this method as a builder to:
link a specific Repository interface to a CouchbaseTemplate: map
say that any repo with a specific entity type should use a given template: mapEntity
even redefine the default template to use (initially the one created by Spring Boot): setDefault.
This second part is explained in the Spring Data Couchbase documentation; a sketch of what it can look like follows below.
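A minimal sketch of such a configuration, assuming the Spring Data Couchbase 2.x-era AbstractCouchbaseConfiguration API (bucket names, credentials and the SecondBucketEntity placeholder are made up, not taken from the question):

import java.util.Arrays;
import java.util.List;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.annotation.Id;
import org.springframework.data.couchbase.config.AbstractCouchbaseConfiguration;
import org.springframework.data.couchbase.core.CouchbaseTemplate;
import org.springframework.data.couchbase.core.mapping.Document;
import org.springframework.data.couchbase.repository.config.RepositoryOperationsMapping;

import com.couchbase.client.java.Bucket;

@Configuration
public class MultiBucketConfig extends AbstractCouchbaseConfiguration {

    @Override
    protected List<String> getBootstrapHosts() {
        return Arrays.asList("127.0.0.1"); // placeholder node
    }

    @Override
    protected String getBucketName() {
        return "firstBucket"; // default bucket used by Spring Data
    }

    @Override
    protected String getBucketPassword() {
        return ""; // placeholder
    }

    // Second bucket, opened on the same cluster bean
    @Bean
    public Bucket secondBucket() throws Exception {
        return couchbaseCluster().openBucket("secondBucket", "");
    }

    // Second template, bound to the second bucket
    @Bean
    public CouchbaseTemplate secondTemplate() throws Exception {
        return new CouchbaseTemplate(couchbaseClusterInfo(), secondBucket(),
                mappingCouchbaseConverter(), translationService());
    }

    // Route entities to the right template (repositories follow their entity type)
    @Override
    public void configureRepositoryOperationsMapping(RepositoryOperationsMapping baseMapping) {
        try {
            baseMapping
                    .mapEntity(SecondBucketEntity.class, secondTemplate()) // this entity lives in the second bucket
                    .setDefault(couchbaseTemplate());                      // everything else stays on the default bucket
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }
}

// Placeholder entity, only here to illustrate the mapping above
@Document
class SecondBucketEntity {
    @Id
    private String id;
}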
Probably what you are trying to say is that Spring Boot provides pre-defined properties that you can modify, such as couchbase.cluster.bucket, which takes a single value, while you want to connect to two or more buckets.
In case you do not find a better solution, I can point you to a slightly different approach: set up your own Couchbase connection manager that you can inject anywhere you need it.
Here is an example of such a @Service that will provide you with two connections to different buckets.
You can modify it to suit your needs; it is very small.
import java.util.ArrayList;
import java.util.List;

import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;

import org.apache.log4j.Logger;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Service;

import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.Cluster;
import com.couchbase.client.java.CouchbaseCluster;
import com.couchbase.client.java.env.CouchbaseEnvironment;
import com.couchbase.client.java.env.DefaultCouchbaseEnvironment;

@Service
public class CouchbaseConnectionManager {

    private static final int TIMEOUT = 100000;

    @Value("#{configProp['couchbase.nodes']}")
    private List<String> nodes = new ArrayList<String>();

    @Value("#{configProp['couchbase.binary.bucketname']}")
    private String binaryBucketName;

    @Value("#{configProp['couchbase.nonbinary.bucketname']}")
    private String nonbinaryBucketName;

    @Value("#{configProp['couchbase.password']}")
    private String password;

    private Bucket binaryBucket;
    private Bucket nonbinaryBucket;
    private Cluster cluster;

    private static final Logger log = Logger.getLogger(CouchbaseConnectionManager.class);

    @PostConstruct
    public void createSession() {
        if (nodes != null && nodes.size() != 0) {
            try {
                CouchbaseEnvironment env = DefaultCouchbaseEnvironment.builder().connectTimeout(TIMEOUT).build();
                cluster = CouchbaseCluster.create(env, nodes);
                binaryBucket = cluster.openBucket(binaryBucketName, password);
                nonbinaryBucket = cluster.openBucket(nonbinaryBucketName, password);
                log.info(GOT_A_CONNECTION_TO_COUCHBASE_BUCKETS + binaryBucket + " " + nonbinaryBucket);
            } catch (Exception e) {
                log.warn(UNABLE_TO_GET_CONNECTION_TO_COUCHBASE_BUCKETS);
            }
        } else {
            log.warn(COUCH_NOT_CONFIGURED);
        }
    }

    @PreDestroy
    public void preDestroy() {
        if (cluster != null) {
            cluster.disconnect();
            log.info(SUCCESSFULLY_DISCONNECTED_FROM_COUCHBASE);
        }
    }

    public Bucket getBinaryBucket() {
        return binaryBucket;
    }

    public Bucket getNonbinaryBucket() {
        return nonbinaryBucket;
    }

    private static final String SUCCESSFULLY_DISCONNECTED_FROM_COUCHBASE = "Successfully disconnected from couchbase";
    private static final String GOT_A_CONNECTION_TO_COUCHBASE_BUCKETS = "Got a connection to couchbase buckets: ";
    private static final String COUCH_NOT_CONFIGURED = "COUCH not configured!!";
    private static final String UNABLE_TO_GET_CONNECTION_TO_COUCHBASE_BUCKETS = "Unable to get connection to couchbase buckets";
}
I followed Simon's approach and extended org.springframework.data.couchbase.config.AbstractCouchbaseConfiguration for the @Configuration instead of SpringBootCouchbaseDataConfiguration.
Also, a point worth mentioning: for some reason, having separate repository packages, each with its own @Configuration, doesn't really work. I struggled a great deal trying to make that work and eventually settled on having all the repositories in a single package, ending up with something like the below to map the entities and templates.
baseMapping.mapEntity(Prime.class, noSQLSearchDBTemplate())
        .mapEntity(PrimeDetailsMaster.class, noSQLSearchDBTemplate())
        .mapEntity(HostDetailsMaster.class, noSQLSearchDBTemplate())
        .mapEntity(Events.class, eventsTemplate())
        .mapEntity(EventRulesMaster.class, eventsTemplate());
