Referencing Variables in Micronaut Caches

I was wondering how to include path variables in the Micronaut caching annotations.
Example:
@Get("/test/{name}")
public String getName(@PathVariable String name) {
    return "Hello " + name;
}
Now imagine this were some computationally expensive operation for a user. Obviously, I have to include the user name in the cache key to be able to retrieve or invalidate it. Sadly, I haven't found any docs on this, so maybe someone here has a clue.

If you really want to cache the controller endpoint itself (as opposed to a service it is delegating to), you could configure the cache in application.yml:
micronaut:
  application:
    name: yourapp
  caches:
    somecachename:
      expire-after-write: 30m
      maximum-size: 10
Then use that cache:
import io.micronaut.cache.annotation.Cacheable;
import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Get;
import io.micronaut.http.annotation.PathVariable;

@Controller("/demo")
public class DemoController {

    @Get("/test/{name}")
    @Cacheable("somecachename")
    public String getName(@PathVariable String name) {
        System.out.println("Here we are " + name);
        return "Hello " + name;
    }
}
You will need a dependency on whatever cache you are using, for example, to use caffeine:
implementation("io.micronaut.cache:micronaut-cache-caffeine")
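If the expensive work is done in a service the controller delegates to, the same annotations can be applied there. Here is a minimal sketch (not from the original answer; the service class, method names and the jakarta.inject import are illustrative assumptions). Micronaut's default key generator derives the cache key from the method parameters, so the name is automatically part of the key and can be used for targeted invalidation:
import io.micronaut.cache.annotation.CacheInvalidate;
import io.micronaut.cache.annotation.Cacheable;
import jakarta.inject.Singleton; // javax.inject.Singleton on older Micronaut versions

@Singleton
public class GreetingService {

    // The cache key is built from the "name" parameter, so each user gets their own entry.
    @Cacheable(cacheNames = "somecachename", parameters = {"name"})
    public String expensiveGreeting(String name) {
        // imagine the computationally expensive operation here
        return "Hello " + name;
    }

    // Evicts only the entry whose key was built from the same name.
    @CacheInvalidate(cacheNames = "somecachename", parameters = {"name"})
    public void evictGreeting(String name) {
    }
}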

Related

Share a org.eclipse.microprofile.graphql.GraphQLApi in a JAR File

I'm writing a GraphQL API in Java. I would like to provide this GraphQL API in a JAR file so that the implementation can be consumed/reused in other Java EE applications running on Open Liberty 22. This is my API implementation.
import org.eclipse.microprofile.graphql.Description;
import org.eclipse.microprofile.graphql.GraphQLApi;
import org.eclipse.microprofile.graphql.Name;
import org.eclipse.microprofile.graphql.NonNull;
import org.eclipse.microprofile.graphql.Query;
...

@GraphQLApi
@RequestScoped
public class SystemStatusGraphQL {

    @Inject
    private DbAdapter databaseAdapter;

    @Query("system")
    @NonNull
    @Description("Gets status information about the system")
    public SystemStatus getSystemStatus(@Name("name") String name) {
        return databaseAdapter.getCurrentStatus(name);
    }
}
I deployed this JAR file as a Maven package and consumed it in my target application, and I now have two separate problems or questions.
How can I reuse this API so that @GraphQLApi is recognized by Open Liberty? I tried to inherit from the API class, but Open Liberty does not load the GraphQL endpoint.
public class MySystemStatusGraphQL extends com.test.stystem.status.api.SystemStatusGraphQL {
}
When I paste all the GraphQL stuff provided at the top and only try to reuse the model class SystemStatus, Jandex cannot resolve the object types. When starting the Open Liberty server, this error occurs: Class [com.mylib.system.status.database.model.SystemStatus] is not indexed in Jandex. Can not scan Object Type, might not be mapped correctly. This error persists even if I create a Jandex index file at build time and include it in the JAR file. The SystemStatus class contains all type definitions, as you can see:
@Type("SystemStatus")
@Description("Describes current state of system.")
public class SystemStatus {

    @NonNull
    @Name("_id")
    private String id;

    public String getId() {
        return id;
    }

    public void setId(String id) {
        this.id = id;
    }

    @NonNull
    @Name("serial")
    private String serial;

    public String getSerial() {
        return this.serial;
    }
}
I would prefer to reuse the whole API, which brings up the issue mentioned in question 1. If this is not possible, how can I solve the issue mentioned in question 2?

How to read secret key and value from Kubernetes volume mount using Spring Boot

I have mounted a volume containing a username and password inside the pod. If I do:
kubectl exec -it my-app -- cat /mnt/secrets-store/git-token
{"USERNAME":"usernameofgit","PASSWORD":"dhdhfhehfhel"}
I want to read this USERNAME and PASSWORD using Spring Boot.
Assuming:
the file (git-token) format is fixed (JSON).
the file may not have an extension suffix (.json).
... we have some problems!
I tried 2.3.5. Importing Extensionless Files like:
spring.config.import=/mnt/secrets-store/git-token[.json]
But it only works with YAML/.properties so far (tested with spring-boot:2.6.1).
The same applies to 2.8. Type-safe Configuration Properties.
In Spring Boot we can (out of the box) provide JSON config (only) as the SPRING_APPLICATION_JSON environment/command-line property, and it has to be the JSON string itself; it cannot (yet) be a path or file.
The proposed (Baeldung) article shows ways to "enable JSON properties", but it is a long article with many details, shows a lot of code and is partly outdated (@Component on @ConfigurationProperties is rather "unconventional").
I tried the following (on local machine, under the mentioned assumptions):
package com.example.demo;

import com.fasterxml.jackson.annotation.JsonProperty;
import lombok.Data;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class DemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }

    @Value("""
        #{@jacksonObjectMapper.readValue(
            T(java.nio.file.Files).newInputStream(
                T(java.nio.file.Path).of('/mnt/secrets-store/git-token')),
            T(com.example.demo.GitInfo)
        )}""" // watch out with @Value and text blocks! (otherwise: No converter found capable of converting from type [com.example.demo.GitInfo] to type [java.lang.String])
    )
    GitInfo gitInfo;

    @Bean
    CommandLineRunner runner() {
        return (String... args) -> {
            System.out.println(gitInfo.getUsername());
            System.out.println(gitInfo.getPassword());
        };
    }
}

@Data
class GitInfo {

    @JsonProperty("USERNAME")
    private String username;

    @JsonProperty("PASSWORD")
    private String password;
}
With (only) spring-boot-starter-web and Lombok on board, it prints the expected output.
Solution outline:
a POJO for this
the upper case is a little problematic, but can be handled as shown
a (crazy) @Value (Spring) expression, involving:
the (hopefully) auto-configured @jacksonObjectMapper bean (alternatively: a custom one)
ObjectMapper#readValue (alternatives possible)
java.nio.file.Files#newInputStream (alternatives possible)
java.nio.file.Path#of
When you have your volume mounted, all you need to do is read a JSON file from the Spring Boot application. I recommend reading Load Spring Boot Properties From a JSON File.
In short, you can create a class corresponding to your JSON file, something like this one.
@Component
@PropertySource("file:/mnt/secrets-store/git-token")
@ConfigurationProperties
public class GitToken {
    private String username;
    private String password;
    // getters and setters
}
Then you need to add it to component scan and you can autowire your class.
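Note that @PropertySource on its own only understands .properties/.xml resources; the linked article pairs it with a JSON-aware PropertySourceFactory. Below is a minimal sketch of that idea, assuming Jackson is on the classpath (the class name JsonPropertySourceFactory is illustrative, not from the original post):
import java.io.IOException;
import java.util.Map;

import com.fasterxml.jackson.databind.ObjectMapper;
import org.springframework.core.env.MapPropertySource;
import org.springframework.core.env.PropertySource;
import org.springframework.core.io.support.EncodedResource;
import org.springframework.core.io.support.PropertySourceFactory;

public class JsonPropertySourceFactory implements PropertySourceFactory {

    @Override
    public PropertySource<?> createPropertySource(String name, EncodedResource resource) throws IOException {
        // Parse the mounted secret file as a flat JSON object and expose its entries as properties.
        Map<String, Object> values = new ObjectMapper()
                .readValue(resource.getResource().getInputStream(), Map.class);
        return new MapPropertySource(name != null ? name : "git-token", values);
    }
}
It would then be plugged in via @PropertySource(value = "file:/mnt/secrets-store/git-token", factory = JsonPropertySourceFactory.class) on the class above.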

Where does the filter for Ehcache 3 simple web page caching call the cache?

I am trying to cache a simple web page in Ehcache. Thanks to some help from another SO post I discovered that I need to implement my own filter based on Ehcache 2 code. When I look at the filter I don't understand it. Where does it ever call the cache to return a value? Here is my implementation (quite possibly wrong):
package com.sentiment360.pulse.cache;

import java.util.logging.Level;
import java.util.logging.Logger;
import javax.xml.bind.Element;
import org.ehcache.Cache;
import org.ehcache.CacheManager;
import org.ehcache.config.Configuration;
import static org.ehcache.config.builders.CacheManagerBuilder.newCacheManager;
import org.ehcache.core.Ehcache;
import org.ehcache.event.CacheEvent;
import org.ehcache.event.CacheEventListener;
import org.ehcache.xml.XmlConfiguration;
import javax.servlet.http.HttpServletRequest;

public class SimplePageCachingFilter implements CachingFilter {

    public static final String DEFAULT_CACHE_NAME = "SimplePageCachingFilter";
    private Logger LOG = Logger.getLogger(this.getClass().getName());
    private String cacheName = "basicCache";

    protected String getCacheName() {
        if (cacheName != null && cacheName.length() > 0) {
            LOG.log(Level.INFO, "Using configured cacheName of {0}.", cacheName);
            return cacheName;
        } else {
            LOG.log(Level.INFO, "No cacheName configured. Using default of {0}.", DEFAULT_CACHE_NAME);
            return DEFAULT_CACHE_NAME;
        }
    }

    protected CacheManager getCacheManager() {
        return CacheManager.getInstance();
    }

    protected String calculateKey(HttpServletRequest httpRequest) {
        StringBuffer stringBuffer = new StringBuffer();
        stringBuffer.append(httpRequest.getMethod()).append(httpRequest.getRequestURI()).append(httpRequest.getQueryString());
        return stringBuffer.toString();
    }
}
It is in the super class.
But you do implement CachingFilter?! Where is that interface? It looks like you were trying to "copy" the previous Ehcache 2 SimplePageCachingFilter, right? You would also need to port that abstract super class (and maybe read a little about javax.servlet.Filter, in case these aren't entirely clear...).
Now, you may also want to ping the dev team on the Ehcache Dev Google group about this. They should be able to provide pointers and then help with the implementation. Looks like a good idea for a future pull request! :)
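For orientation, here is a rough sketch (not the actual Ehcache 2 CachingFilter, and simplified to a String-based body capture) of where such a filter consults the cache inside doFilter(); the class name, cache alias and sizing are illustrative assumptions:
import java.io.CharArrayWriter;
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpServletResponseWrapper;
import org.ehcache.Cache;
import org.ehcache.CacheManager;
import org.ehcache.config.builders.CacheConfigurationBuilder;
import org.ehcache.config.builders.CacheManagerBuilder;
import org.ehcache.config.builders.ResourcePoolsBuilder;

public class SketchPageCachingFilter implements Filter {

    private Cache<String, String> pageCache;

    @Override
    public void init(FilterConfig cfg) {
        // For the sketch, build a small heap cache programmatically; a real filter would
        // more likely look one up from an XML-configured CacheManager.
        CacheManager cacheManager = CacheManagerBuilder.newCacheManagerBuilder()
                .withCache("pages", CacheConfigurationBuilder.newCacheConfigurationBuilder(
                        String.class, String.class, ResourcePoolsBuilder.heap(100)))
                .build(true);
        pageCache = cacheManager.getCache("pages", String.class, String.class);
    }

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;
        String key = request.getMethod() + request.getRequestURI() + request.getQueryString();

        String page = pageCache.get(key);                  // <-- here the cache is consulted
        if (page == null) {
            CharArrayWriter buffer = new CharArrayWriter();
            HttpServletResponseWrapper wrapper = new HttpServletResponseWrapper(response) {
                @Override
                public PrintWriter getWriter() {
                    return new PrintWriter(buffer);        // capture the body instead of streaming it
                }
            };
            chain.doFilter(request, wrapper);              // let the page render normally
            page = buffer.toString();
            pageCache.put(key, page);                      // <-- and here the rendered page is stored
        }
        response.getWriter().write(page);                  // serve from cache (headers omitted for brevity)
    }

    @Override
    public void destroy() {
    }
}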

Spring cloud config server. Environment variables in properties

I configured Spring Cloud Config server like this:
@SpringBootApplication
@EnableAutoConfiguration
@EnableConfigServer
public class ConfigServer {

    public static void main(String[] args) {
        SpringApplication.run(ConfigServer.class, args);
    }
}
I'm using 'native' profile so properties are picked up from the file system:
server.port=8888
spring.profiles.active=native
spring.cloud.config.server.native.search-locations: classpath:/global
Now the tricky part is that some properties contain environmental variables. Properties in 'global/application-production.properties' are configured like this:
test=${DOCKER_HOST}
When I start up Config Server - everything works fine. However when I access http://localhost:8888/testapp/production I see this:
{
  name: "testapp",
  profiles: [
    "production"
  ],
  label: null,
  version: null,
  propertySources: [
    {
      name: "classpath:/global/application-production.properties",
      source: {
        test: "${DOCKER_HOST}"
      }
    }
  ]
}
So the value from the ENV variable does not replace ${DOCKER_HOST} but is rather returned as-is.
But if I access http://localhost:8888/application-production.properties then the result is not JSON but plain text:
test: tcp://192.168.99.100:2376
Spring documentation says:
The YAML and properties representations have an additional flag (provided as a boolean query parameter resolvePlaceholders) to signal that placeholders in the source documents, in the standard Spring ${…​} form, should be resolved in the output where possible before rendering. This is a useful feature for consumers that don’t know about the Spring placeholder conventions.
For some reason resolvePlaceholders is not applied to the JSON representation, so config server clients need to be aware of all ENV variables configured on the server.
Is it possible to force the JSON representation to resolve placeholders the same way as the plain-text (properties) representation?
I faced the same issue. After looking into Spring Cloud Config Repository I have found the following commit:
Omit system properties and env vars from placeholders in config
It looks like such behavior is not supported.
You can try the Property Overrides feature to override properties from git Environment Repository.
To override property foo at runtime, just set a system property or an environment variable spring.cloud.config.server.overrides.foo before starting the config server.
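For instance (the value is purely illustrative), starting the config server with
spring.cloud.config.server.overrides.test=tcp://192.168.99.100:2376
as a system property (or in the server's own configuration) would serve test with that fixed value to every client.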
There was an update to accomplish this in the following merge, where I found an implementation for resolvePlaceholders. That gave me the idea of simply creating a new controller which delegates to the EnvironmentController. This allows you to resolve configuration placeholders and is a good bootstrap.
import com.fasterxml.jackson.databind.ObjectMapper;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.config.server.environment.EnvironmentController;
import org.springframework.cloud.config.server.environment.EnvironmentRepository;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping(method = RequestMethod.GET, path = "resolved/${spring.cloud.config.server.prefix:}")
public class ReplacedEnvironmentController {

    private EnvironmentController environmentController;

    @Autowired
    public ReplacedEnvironmentController(EnvironmentRepository repository) {
        environmentController = new EnvironmentController(repository, new ObjectMapper());
    }

    public ReplacedEnvironmentController(EnvironmentRepository repository, ObjectMapper objectMapper) {
        environmentController = new EnvironmentController(repository, objectMapper);
    }

    @RequestMapping("/{name}/{profiles:.*[^-].*}")
    public ResponseEntity<String> resolvedDefaultLabel(@PathVariable String name,
            @PathVariable String profiles) throws Exception {
        return resolvedLabelled(name, profiles, null);
    }

    @RequestMapping("/{name}/{profiles}/{label:.*}")
    public ResponseEntity<String> resolvedLabelled(@PathVariable String name, @PathVariable String profiles,
            @PathVariable String label) throws Exception {
        return environmentController.labelledJsonProperties(name, profiles, label, true);
    }
}

Is it possible to connect to two different buckets of couchbase in spring boot

I am trying to connect to two different buckets in couchbase using spring boot. But in a single spring boot application the database config only takes a single bucket name.
Is it possible to connect to more than one couchbase bucket in spring-boot?
So it seems you want to use Spring Data Couchbase from within a Spring Boot application, and have (at least) two different repositories backed by two different Buckets?
You'll have to customize your Spring Data configuration programmatically (as opposed to letting Spring Boot do all the heavy lifting), but that's possible.
Spring Boot creates a CouchbaseConfigurer through which it creates the default Cluster and Bucket (as tuned in the properties file).
If you have a CouchbaseRepository on your classpath, it'll also attempt to configure Spring Data by instantiating a SpringBootCouchbaseDataConfiguration class.
You can customize that by extending the SpringBootCouchbaseDataConfiguration above in your project and marking it as @Configuration.
Once you're ready to customize the Spring Data configuration programmatically, what you need is to create a second Bucket bean, a second CouchbaseTemplate that uses that bucket, and then instruct Spring Data Couchbase on which template to use with which Repository.
To that end, there is a configureRepositoryOperationsMapping(...) method. You can use the parameter of this method as a builder to:
link a specific Repository interface to a CouchbaseTemplate: map
say that any repo with a specific entity type should use a given template: mapEntity
even redefine the default template to use (initially the one created by Spring Boot): setDefault.
This second part is explained in the Spring Data Couchbase documentation.
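Putting those pieces together, a minimal sketch of such a programmatic configuration might look like the following (Spring Data Couchbase 2.x era; the bucket names, empty passwords and the OrderEntity entity class are purely illustrative, and it extends AbstractCouchbaseConfiguration as mentioned in the comment further below rather than the Boot-specific class):
import java.util.Collections;
import java.util.List;

import com.couchbase.client.java.Bucket;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.couchbase.config.AbstractCouchbaseConfiguration;
import org.springframework.data.couchbase.core.CouchbaseTemplate;
import org.springframework.data.couchbase.repository.config.RepositoryOperationsMapping;

@Configuration
public class MultiBucketCouchbaseConfig extends AbstractCouchbaseConfiguration {

    @Override
    protected List<String> getBootstrapHosts() {
        return Collections.singletonList("127.0.0.1");
    }

    @Override
    protected String getBucketName() {
        return "bucketA"; // default bucket used by the default CouchbaseTemplate
    }

    @Override
    protected String getBucketPassword() {
        return "";
    }

    // Second bucket, opened on the same cluster
    @Bean
    public Bucket bucketB() throws Exception {
        return couchbaseCluster().openBucket("bucketB", "");
    }

    // Template bound to the second bucket
    @Bean
    public CouchbaseTemplate bucketBTemplate() throws Exception {
        return new CouchbaseTemplate(couchbaseClusterInfo(), bucketB(),
                mappingCouchbaseConverter(), translationService());
    }

    @Override
    public void configureRepositoryOperationsMapping(RepositoryOperationsMapping baseMapping) {
        try {
            // Entities of the illustrative type OrderEntity go to bucketB; everything else
            // keeps using the default template/bucket.
            baseMapping.mapEntity(OrderEntity.class, bucketBTemplate());
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }
}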
Probably what you mean is that Spring Boot provides pre-defined properties you can modify, such as couchbase.cluster.bucket, which takes a single value, while you want to connect to two or more buckets.
In case you don't find a better solution, I can point you to a slightly different approach: set up your own Couchbase connection manager that you can inject anywhere you need.
Here is an example of such a @Service that provides you with connections to two different buckets.
You can modify it to suit your needs; it is very small.
import java.util.ArrayList;
import java.util.List;
import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
import org.apache.log4j.Logger;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Service;
import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.Cluster;
import com.couchbase.client.java.CouchbaseCluster;
import com.couchbase.client.java.env.CouchbaseEnvironment;
import com.couchbase.client.java.env.DefaultCouchbaseEnvironment;

@Service
public class CouchbaseConnectionManager {

    private static final int TIMEOUT = 100000;

    @Value("#{configProp['couchbase.nodes']}")
    private List<String> nodes = new ArrayList<String>();

    @Value("#{configProp['couchbase.binary.bucketname']}")
    private String binaryBucketName;

    @Value("#{configProp['couchbase.nonbinary.bucketname']}")
    private String nonbinaryBucketName;

    @Value("#{configProp['couchbase.password']}")
    private String password;

    private Bucket binaryBucket;
    private Bucket nonbinaryBucket;
    private Cluster cluster;

    private static final Logger log = Logger.getLogger(CouchbaseConnectionManager.class);

    @PostConstruct
    public void createSession() {
        if (nodes != null && nodes.size() != 0) {
            try {
                CouchbaseEnvironment env = DefaultCouchbaseEnvironment.builder().connectTimeout(TIMEOUT).build();
                cluster = CouchbaseCluster.create(env, nodes);
                binaryBucket = cluster.openBucket(binaryBucketName, password);
                nonbinaryBucket = cluster.openBucket(nonbinaryBucketName, password);
                log.info(GOT_A_CONNECTION_TO_COUCHBASE_BUCKETS + binaryBucket + " " + nonbinaryBucket);
            } catch (Exception e) {
                log.warn(UNABLE_TO_GET_CONNECTION_TO_COUCHBASE_BUCKETS);
            }
        } else {
            log.warn(COUCH_NOT_CONFIGURED);
        }
    }

    @PreDestroy
    public void preDestroy() {
        if (cluster != null) {
            cluster.disconnect();
            log.info(SUCCESSFULLY_DISCONNECTED_FROM_COUCHBASE);
        }
    }

    public Bucket getBinaryBucket() {
        return binaryBucket;
    }

    public Bucket getNonbinaryBucket() {
        return nonbinaryBucket;
    }

    private static final String SUCCESSFULLY_DISCONNECTED_FROM_COUCHBASE = "Successfully disconnected from couchbase";
    private static final String GOT_A_CONNECTION_TO_COUCHBASE_BUCKETS = "Got a connection to couchbase buckets: ";
    private static final String COUCH_NOT_CONFIGURED = "COUCH not configured!!";
    private static final String UNABLE_TO_GET_CONNECTION_TO_COUCHBASE_BUCKETS = "Unable to get connection to couchbase buckets";
}
I followed Simon's approach but extended org.springframework.data.couchbase.config.AbstractCouchbaseConfiguration for the @Configuration instead of SpringBootCouchbaseDataConfiguration.
Also, a point worth mentioning: for some reason, having separate repository packages, each with its own @Configuration, doesn't really work. I struggled a great deal trying to make that work and eventually settled on having all the repositories in a single package, ending up with something like the below to map the entities to templates.
baseMapping.mapEntity(Prime.class, noSQLSearchDBTemplate())
        .mapEntity(PrimeDetailsMaster.class, noSQLSearchDBTemplate())
        .mapEntity(HostDetailsMaster.class, noSQLSearchDBTemplate())
        .mapEntity(Events.class, eventsTemplate())
        .mapEntity(EventRulesMaster.class, eventsTemplate());
