Issue: Spring Boot Redis, multiple applications consuming data

I have two applications, both using Spring Boot and Redis.
Both applications produce data to Redis.
ISSUE: The data produced by Spring Boot Redis Application 1 is not available to Redis Application 2, and vice versa.
Redis is running locally.
The application YAML is the same for both applications:
spring:
  redis:
    host: localhost
    port: 6379
Model class:
@RedisHash(timeToLive = 300, value = "alerts")
@Data
@NoArgsConstructor
@AllArgsConstructor
public class RedisModel {
    @Id
    private String id;
    private String message;

    public RedisModel(String message) {
        this.message = message;
    }

    @Override
    public String toString() {
        return "RedisModel{" +
                "id='" + id + '\'' +
                ", message='" + message + '\'' +
                '}';
    }
}
Are there some parameters that I have missed?
Please let me know in case of any queries.
Spring Boot version: 2.2.0.

I think the Redis cache configuration created by Spring Boot namespaces the cache keys with a prefix, so the two applications' entries won't be visible to each other. To get around this, you will have to define your own RedisCacheConfiguration, disable the default prefix via disableKeyPrefix(), and then set your own shared prefix with computePrefixWith(...). Here is an example:
@Bean
public RedisCacheManager cacheManager(RedisConnectionFactory redisConnectionFactory,
        ResourceLoader resourceLoader) {
    RedisCacheManager.RedisCacheManagerBuilder builder = RedisCacheManager
            .builder(redisConnectionFactory)
            .cacheDefaults(determineConfiguration(resourceLoader.getClassLoader()));
    List<String> cacheNames = this.cacheProperties.getCacheNames();
    if (!cacheNames.isEmpty()) {
        builder.initialCacheNames(new LinkedHashSet<>(cacheNames));
    }
    return builder.build();
}
private RedisCacheConfiguration determineConfiguration(ClassLoader classLoader) {
    if (this.redisCacheConfiguration != null) {
        return this.redisCacheConfiguration;
    }
    CacheProperties.Redis redisProperties = this.cacheProperties.getRedis();
    RedisCacheConfiguration config = RedisCacheConfiguration.defaultCacheConfig();
    // Build the mapper ourselves because GenericJackson2JsonRedisSerializer
    // registers some internal modules on the mapper it is given.
    ObjectMapper mapper = new Jackson2ObjectMapperBuilder()
            .modulesToInstall(new SimpleModule().addSerializer(new NullValueSerializer(null)))
            .failOnEmptyBeans(false)
            .build();
    mapper.enableDefaultTyping(ObjectMapper.DefaultTyping.NON_FINAL, JsonTypeInfo.As.PROPERTY);
    GenericJackson2JsonRedisSerializer serializer = new GenericJackson2JsonRedisSerializer(mapper);
    config = config.serializeValuesWith(RedisSerializationContext.SerializationPair.fromSerializer(serializer));
    if (redisProperties.getTimeToLive() != null) {
        config = config.entryTtl(redisProperties.getTimeToLive());
    }
    if (redisProperties.getKeyPrefix() != null) {
        config = config.prefixKeysWith(redisProperties.getKeyPrefix());
    }
    if (!redisProperties.isCacheNullValues()) {
        config = config.disableCachingNullValues();
    }
    if (!redisProperties.isUseKeyPrefix()) {
        config = config.disableKeyPrefix();
        config = config.computePrefixWith(cacheName -> cacheName + "::");
    }
    return config;
}
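If you would rather keep Spring Boot's auto-configuration, a lighter-weight sketch (an assumption on my part, relying on the standard spring.cache.redis.* properties that the code above reads) is to give both applications identical explicit cache settings in application.yml:
spring:
  cache:
    type: redis
    redis:
      use-key-prefix: true
      key-prefix: "shared::"
With the same key-prefix on both sides, both applications read and write entries under the same Redis keys.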

Actually, Redis is not very well suited to such a situation. A better solution would be to use Infinispan.

Related

Spring: No "Access-Control-Allow-Origin" after repackaging

I did a simple repackaging, and that caused an Access-Control-Allow-Origin issue with my S3 cloud.
I have a local S3-compatible server to store videos and, using Spring, I am streaming the videos directly from my local cloud.
Everything was working as expected until I tried to repackage my classes.
I had one package, com.example.video, with the following classes:
S3Config.java, which contains the AmazonS3Client
User.java, a model class
VideoController.java, a simple controller
VideoStreamingServiceApplication.java, the application class
When I created a new package, com.example.s3, and moved both User.java and S3Config.java into it, I had an autowiring issue, which was fixed by using component scan as this answer suggested.
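For reference, the component-scan fix looks roughly like this (a sketch; the package names are taken from the question):
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication(scanBasePackages = "com.example")
public class VideoStreamingServiceApplication {
    public static void main(String[] args) {
        SpringApplication.run(VideoStreamingServiceApplication.class, args);
    }
}
Scanning the common parent package com.example picks up beans in both com.example.video and com.example.s3.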
Even after the autowiring issue was fixed, I get an error when I try to stream:
Access to XMLHttpRequest at 'http://localhost:9999/recordings/a.m3u8' from origin 'null' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
Although I do have the header set in my response. Here is my VideoController.java:
@RestController
@RequestMapping("/cloud")
@ConfigurationProperties(prefix = "amazon.credentials")
public class VideoController {
    @Autowired
    private S3Config s3Client;
    private String bucketName = "recordings";
    Logger log = LoggerFactory.getLogger(VideoController.class);
    @Autowired
    User userData;

    @GetMapping(value = "/recordings/{fileName}", produces = { MediaType.APPLICATION_OCTET_STREAM_VALUE })
    public ResponseEntity<StreamingResponseBody> streamVideo(HttpServletRequest request, @PathVariable String fileName) {
        try {
            long rangeStart = 0;
            long rangeEnd;
            AmazonS3 s3client = s3Client.getAmazonS3Client();
            String uri = request.getRequestURI();
            System.out.println("Fetching " + uri);
            S3Object object = s3client.getObject("recordings", fileName);
            long size = object.getObjectMetadata().getContentLength();
            S3ObjectInputStream finalObject = object.getObjectContent();
            final StreamingResponseBody body = outputStream -> {
                int numberOfBytesToWrite = 0;
                byte[] data = new byte[(int) size];
                while ((numberOfBytesToWrite = finalObject.read(data, 0, data.length)) != -1) {
                    outputStream.write(data, 0, numberOfBytesToWrite);
                }
                finalObject.close();
            };
            rangeEnd = size - 1;
            return ResponseEntity.status(HttpStatus.OK)
                    .header("Content-Type", "application/vnd.apple.mpegurl")
                    .header("Accept-Ranges", "bytes")
                    // HERE IS THE ACCESS-CONTROL-ALLOW-ORIGIN HEADER
                    .header("Access-Control-Allow-Origin", "*")
                    .header("Content-Length", String.valueOf(size))
                    .header("display", "staticcontent_sol, staticcontent_sol")
                    .header("Content-Range", "bytes" + " " + rangeStart + "-" + rangeEnd + "/" + size)
                    .body(body);
            //return new ResponseEntity<StreamingResponseBody>(body, HttpStatus.OK);
        } catch (Exception e) {
            System.err.println("Error " + e.getMessage());
            return new ResponseEntity<StreamingResponseBody>(HttpStatus.BAD_REQUEST);
        }
    }
}
If I restore the packages to one, as it was before, everything works fine.
MY QUESTION: Why did repackaging cause this issue, and any idea how to fix it?

How to configure proxy credentials for google pub sub gRPC calls?

I am trying to connect to Google Cloud Platform Pub/Sub from behind a proxy.
I am using the Spring library "org.springframework.cloud:spring-cloud-gcp-starter-pubsub", which uses the Google Pub/Sub client, which in turn makes the pull call for the subscription via gRPC.
To set the proxy I can use the GRPC_PROXY_EXP environment variable, but I also need credentials to go through this proxy.
I've tried several approaches, including configuring the org.springframework.cloud.gcp.pubsub.support.SubscriberFactory similar to https://medium.com/google-cloud/accessing-google-cloud-apis-though-a-proxy-fe46658b5f2a :
@Bean
fun inboundQuotationsChannelAdapter(
    @Qualifier("inboundQuotationsMessageChannel") quotationsChannel: MessageChannel,
    mpProperties: ConfigurationProperties,
    defaultSubscriberFactory: SubscriberFactory
): PubSubInboundChannelAdapter {
    Authenticator.setDefault(ProxyAuthenticator("ala", "bala"))
    val proxySubscriberFactory: DefaultSubscriberFactory = defaultSubscriberFactory as DefaultSubscriberFactory
    proxySubscriberFactory.setCredentialsProvider(ProxyCredentialsProvider(getCredentials()))
    val headers = mutableMapOf(Pair("Proxy-Authorization", getBasicAuth()))
    proxySubscriberFactory.setChannelProvider(
        SubscriberStubSettings.defaultGrpcTransportProviderBuilder()
            .setHeaderProvider(FixedHeaderProvider.create(headers)).build()
    )
    val proxySubscriberTemplate = PubSubSubscriberTemplate(proxySubscriberFactory)
    val adapter = PubSubInboundChannelAdapter(proxySubscriberTemplate, mpProperties.gcp.quotationSubscription)
    adapter.outputChannel = quotationsChannel
    adapter.ackMode = AckMode.MANUAL
    adapter.payloadType = ActivityStateChanged::class.java
    return adapter
}

@Throws(IOException::class)
fun getCredentials(): GoogleCredentials {
    val httpTransportFactory = getHttpTransportFactory(
        "127.0.0.1", 3128, "ala", "bala"
    )
    return GoogleCredentials.getApplicationDefault(httpTransportFactory)
}

fun getHttpTransportFactory(
    proxyHost: String?,
    proxyPort: Int,
    proxyUsername: String?,
    proxyPassword: String?
): HttpTransportFactory? {
    val proxyHostDetails = HttpHost(proxyHost, proxyPort)
    val httpRoutePlanner: HttpRoutePlanner = DefaultProxyRoutePlanner(proxyHostDetails)
    val credentialsProvider: CredentialsProvider = BasicCredentialsProvider()
    credentialsProvider.setCredentials(
        AuthScope(proxyHostDetails.hostName, proxyHostDetails.port),
        UsernamePasswordCredentials(proxyUsername, proxyPassword)
    )
    val httpClient: HttpClient = ApacheHttpTransport.newDefaultHttpClientBuilder()
        .setRoutePlanner(httpRoutePlanner)
        .setProxyAuthenticationStrategy(ProxyAuthenticationStrategy.INSTANCE)
        .setDefaultCredentialsProvider(credentialsProvider)
        .setDefaultRequestConfig(
            RequestConfig.custom()
                .setAuthenticationEnabled(true)
                .setProxy(proxyHostDetails)
                .build()
        )
        .addInterceptorLast(HttpRequestInterceptor { request, context ->
            request.addHeader(
                BasicHeader(
                    "Proxy-Authorization",
                    getBasicAuth()
                )
            )
        })
        .build()
    val httpTransport: HttpTransport = ApacheHttpTransport(httpClient)
    return HttpTransportFactory { httpTransport }
}
I also tried using @GRpcGlobalInterceptor from LogNet (https://github.com/LogNet/grpc-spring-boot-starter):
@Bean
@GRpcGlobalInterceptor
fun globalServerInterceptor(): ServerInterceptor {
    return GrpcServerInterceptor(configurationProperties)
}

@Bean
@GRpcGlobalInterceptor
fun globalClientInterceptor(): ClientInterceptor {
    return GrpcClientInterceptor(configurationProperties)
}
with
class GrpcClientInterceptor(private val configurationProperties: ConfigurationProperties) :
    ClientInterceptor {
    private val proxyUsername = configurationProperties.http.proxy.username
    private val proxyPassword = configurationProperties.http.proxy.password
    private val proxyHeaderKey = Metadata.Key.of("Proxy-Authorization", Metadata.ASCII_STRING_MARSHALLER)

    private fun getBasicAuth(): String {
        val usernameAndPassword = "$proxyUsername:$proxyPassword"
        val encoded = Base64.getEncoder().encodeToString(usernameAndPassword.toByteArray())
        return "Basic $encoded"
    }

    override fun <ReqT, RespT> interceptCall(
        method: MethodDescriptor<ReqT, RespT>?,
        callOptions: CallOptions?, next: Channel
    ): ClientCall<ReqT, RespT>? {
        return object : SimpleForwardingClientCall<ReqT, RespT>(next.newCall(method, callOptions)) {
            override fun start(responseListener: Listener<RespT>?, headers: Metadata) {
                headers.put(proxyHeaderKey, getBasicAuth())
                super.start(object : SimpleForwardingClientCallListener<RespT>(responseListener) {
                    override fun onHeaders(headers: Metadata) {
                        super.onHeaders(headers)
                    }
                }, headers)
            }
        }
    }
}

class GrpcServerInterceptor(private val configurationProperties: ConfigurationProperties) :
    ServerInterceptor {
    private val proxyUsername = configurationProperties.http.proxy.username
    private val proxyPassword = configurationProperties.http.proxy.password

    override fun <ReqT : Any?, RespT : Any?> interceptCall(
        call: ServerCall<ReqT, RespT>?,
        headers: io.grpc.Metadata?,
        next: ServerCallHandler<ReqT, RespT>?
    ): ServerCall.Listener<ReqT> {
        val proxyHeaderKey = Metadata.Key.of("Proxy-Authorization", Metadata.ASCII_STRING_MARSHALLER)
        if (!headers!!.containsKey(proxyHeaderKey))
            headers.put(proxyHeaderKey, getBasicAuth())
        return next!!.startCall(call, headers)
    }

    private fun getBasicAuth(): String {
        val usernameAndPassword = "$proxyUsername:$proxyPassword"
        val encoded = Base64.getEncoder().encodeToString(usernameAndPassword.toByteArray())
        return "Basic $encoded"
    }
}
(I also tried the annotation directly at class level; of course that did not work.)
I also tried using @GrpcGlobalServerInterceptor and @GrpcGlobalClientInterceptor from https://github.com/yidongnan/grpc-spring-boot-starter/tree/v2.12.0.RELEASE, but this dependency crashed the app entirely.
Here is an example of how to set the proxy credentials, from the Java API documentation on configuring a proxy:
public CloudTasksClient getService() throws IOException {
    TransportChannelProvider transportChannelProvider =
        CloudTasksStubSettings.defaultGrpcTransportProviderBuilder()
            .setChannelConfigurator(
                new ApiFunction<ManagedChannelBuilder, ManagedChannelBuilder>() {
                    @Override
                    public ManagedChannelBuilder apply(ManagedChannelBuilder managedChannelBuilder) {
                        return managedChannelBuilder.proxyDetector(
                            new ProxyDetector() {
                                @Nullable
                                @Override
                                public ProxiedSocketAddress proxyFor(SocketAddress socketAddress)
                                        throws IOException {
                                    return HttpConnectProxiedSocketAddress.newBuilder()
                                        .setUsername(PROXY_USERNAME)
                                        .setPassword(PROXY_PASSWORD)
                                        .setProxyAddress(new InetSocketAddress(PROXY_HOST, PROXY_PORT))
                                        .setTargetAddress((InetSocketAddress) socketAddress)
                                        .build();
                                }
                            });
                    }
                })
            .build();
    CloudTasksSettings cloudTasksSettings =
        CloudTasksSettings.newBuilder()
            .setTransportChannelProvider(transportChannelProvider)
            .build();
    return CloudTasksClient.create(cloudTasksSettings);
}
Take into consideration the note in the documentation saying that gRPC proxy support is currently experimental.
There are two clients that communicate with Google: one using HTTP and one using gRPC (HTTPS instead of HTTP is also possible).
The solution Dan posted covers only gRPC.
Here is my solution for HTTP, using an Apache HTTP client:
try (InputStream jsonCredentialsFile = /* your JSON credentials file as an InputStream */) {
    GoogleCredentials credentials = GoogleCredentials
        .fromStream(jsonCredentialsFile, new HttpTransportFactory() {
            public HttpTransport create() {
                DefaultHttpClient httpClient = new DefaultHttpClient();
                httpClient.getParams().setParameter(ConnRoutePNames.DEFAULT_PROXY, new HttpHost("myProxyHost", myProxyPort, "http"));
                // DefaultHttpClient is deprecated, but the recommended replacement does not work in this context:
                // org.apache.http.impl.client.InternalHttpClient.getParams() always throws UnsupportedOperationException.
                // HttpClientBuilder builder = HttpClientBuilder.create();
                // if (StringUtils.isBlank(proxyServer) || proxyport < 0) {
                //     builder.setProxy(new HttpHost(proxyServer, proxyport, "http"));
                // }
                // CloseableHttpClient httpClient = builder.build();
                return new ApacheHttpTransport(httpClient);
            }
        })
        .createScoped(Lists.newArrayList("https://www.googleapis.com/auth/cloud-platform"));
    DocumentProcessorServiceSettings.Builder dpssBuilder = DocumentProcessorServiceSettings.newBuilder()
            .setEndpoint(endpoint)
            .setCredentialsProvider(FixedCredentialsProvider.create(credentials));
    // transportProvider: a TransportChannelProvider configured elsewhere, e.g. as in the gRPC example above
    dpssBuilder.setTransportChannelProvider(transportProvider);
    DocumentProcessorServiceClient client = DocumentProcessorServiceClient.create(dpssBuilder.build());
    // use the client
}

Problem with starting a @KafkaListener (Spring)

What is needed
I'm writing an application (Spring + Kotlin) that consumes data from Kafka. If I set autoStartup = "true" when declaring a @KafkaListener, the app works fine, but only if the broker is available. When the broker is unavailable, the application crashes on start. That is undesirable behavior; the application must keep working and perform its other functions.
What I tried to do
To keep the application from crashing on start, somebody on this site in another topic advised setting autoStartup = "false" when declaring the @KafkaListener. That really did prevent the crash on start, but now I cannot successfully start the KafkaListener manually. In other examples I saw auto-wiring of KafkaListenerEndpointRegistry, but when I try to do it:
@Service
class KafkaConsumer @Autowired constructor(
    private val kafkaListenerEndpointRegistry: KafkaListenerEndpointRegistry
) {
IntelliJ IDEA warns:
Could not autowire. No beans of 'KafkaListenerEndpointRegistry' type found.
When I try to use KafkaListenerEndpointRegistry without autowiring and run this code:
@Service
class KafkaConsumer {
    private val logger = LoggerFactory.getLogger(this::class.java)
    private val kafkaListenerEndpointRegistry = KafkaListenerEndpointRegistry()

    @Scheduled(fixedDelay = 10000)
    fun startCpguListener() {
        val container = kafkaListenerEndpointRegistry.getListenerContainer("consumer1")
        if (!container.isRunning)
            try {
                logger.info("Kafka Consumer is not running. Trying to start...")
                container.start()
            } catch (e: Exception) {
                logger.error(e.message)
            }
    }

    @KafkaListener(
        id = "consumer1",
        topics = ["cpgdb.public.user"],
        autoStartup = "false"
    )
    private fun listen(it: ConsumerRecord<JsonNode, JsonNode>, qwe: Consumer<Any, Any>) {
        val pay = it.value().get("payload")
        val after = pay.get("after")
        val id = after["id"].asInt()
        val receivedUser = CpguUser(
            id = id,
            name = after["name"].asText()
        )
        logger.info("received user with id = $id")
    }
}
kafkaListenerEndpointRegistry.getListenerContainer("consumer1") always returns null. I guess that's because I didn't auto-wire kafkaListenerEndpointRegistry. How can I do that? Or, if another solution to my problem exists, I'd appreciate any help! Thanks!
Here is the Kafka config:
@Configuration
@EnableConfigurationProperties(KafkaProperties::class)
class KafkaConfiguration(private val props: KafkaProperties) {
    @Bean
    fun kafkaListenerContainerFactory(): ConcurrentKafkaListenerContainerFactory<Any, Any> {
        val factory = ConcurrentKafkaListenerContainerFactory<Any, Any>()
        factory.consumerFactory = consumerFactory()
        factory.setConcurrency(1)
        factory.setMessageConverter(MessagingMessageConverter())
        factory.setStatefulRetry(true)
        val retryTemplate = RetryTemplate()
        retryTemplate.setRetryPolicy(AlwaysRetryPolicy())
        retryTemplate.setBackOffPolicy(ExponentialBackOffPolicy())
        factory.setRetryTemplate(retryTemplate)
        val handler = SeekToCurrentErrorHandler()
        handler.isAckAfterHandle = false
        factory.setErrorHandler(handler)
        factory.containerProperties.isMissingTopicsFatal = false
        return factory
    }

    @Bean
    fun consumerFactory(): ConsumerFactory<Any, Any> {
        return DefaultKafkaConsumerFactory(consumerConfigs())
    }

    @Bean
    fun consumerConfigs(): Map<String, Any> {
        return mapOf(
            ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG to props.bootstrap.address,
            ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG to JsonDeserializer::class.java,
            ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG to JsonDeserializer::class.java,
            ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG to listOf(MonitoringConsumerInterceptor::class.java),
            ConsumerConfig.CLIENT_ID_CONFIG to props.receiver.clientId,
            ConsumerConfig.GROUP_ID_CONFIG to props.receiver.groupId,
            ConsumerConfig.AUTO_OFFSET_RESET_CONFIG to "earliest",
            ConsumerConfig.ISOLATION_LEVEL_CONFIG to "read_committed",
            ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG to true
        )
    }
}
Spring Boot version: 2.3.0
spring-kafka version: 2.5.3
kafka-clients version: 2.5.0
Just ignore IntelliJ's warning about the auto-wiring; the bean does exist. It's just that IntelliJ can't detect it.
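To make the auto-wiring concrete, here is a minimal sketch in Java of what the answer suggests (the container id "consumer1" comes from the question; the class name is illustrative):
import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.kafka.listener.MessageListenerContainer;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Service;

@Service
public class KafkaListenerStarter {
    private final KafkaListenerEndpointRegistry registry;

    // Constructor injection works despite the IDE warning; Spring registers
    // KafkaListenerEndpointRegistry as an infrastructure bean.
    public KafkaListenerStarter(KafkaListenerEndpointRegistry registry) {
        this.registry = registry;
    }

    @Scheduled(fixedDelay = 10000)
    public void startIfNeeded() {
        // Looks up the container declared with @KafkaListener(id = "consumer1")
        MessageListenerContainer container = registry.getListenerContainer("consumer1");
        if (container != null && !container.isRunning()) {
            container.start();
        }
    }
}
Because the registry is the Spring-managed bean (not one created with new), getListenerContainer("consumer1") returns the real container instead of null.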

Spring Boot with Redis

I'm working with Spring Boot and Redis for caching. I can cache the data I fetch from the database (Oracle) using @Cacheable(key = "{#input,#page,#size}", value = "on_test").
But when I try to fetch the data for key "on_test::0,0,10" with RedisTemplate, the result is 0.
Why?
Redis Config:
@Configuration
public class RedisConfig {
    @Bean
    JedisConnectionFactory jedisConnectionFactory() {
        RedisStandaloneConfiguration redisStandaloneConfiguration = new RedisStandaloneConfiguration("localhost", 6379);
        redisStandaloneConfiguration.setPassword(RedisPassword.of("admin#123"));
        return new JedisConnectionFactory(redisStandaloneConfiguration);
    }

    @Bean
    public RedisTemplate<String, Objects> redisTemplate() {
        RedisTemplate<String, Objects> template = new RedisTemplate<>();
        template.setStringSerializer(new StringRedisSerializer());
        template.setValueSerializer(new StringRedisSerializer());
        template.setConnectionFactory(jedisConnectionFactory());
        return template;
    }
}
// service
@Override
@Cacheable(key = "{#input,#page,#size}", value = "on_test")
public Page<?> getAllByZikaConfirmedClinicIs(Integer input, int page, int size) {
    try {
        Pageable newPage = PageRequest.of(page, size);
        String fromCache = controlledCacheService.getFromCache();
        if (fromCache == null && input != null) {
            log.info("cache is empty, let's initialize it!!!");
            Page<DataSet> all = dataSetRepository.getAllByZikaConfirmedClinicIs(input, newPage);
            List<DataSet> d = redisTemplate.opsForHash().values("on_test::0,0,10");
            System.out.print(d);
            return all;
        }
        return null;
    } catch (Exception e) {
        // (catch body not shown in the original snippet)
        return null;
    }
}
The whole point of using @Cacheable is that you don't need to use RedisTemplate directly. You just need to call getAllByZikaConfirmedClinicIs() (from outside of the class it is defined in) and Spring will automatically check first whether a cached result is available and return that instead of calling the function.
If that's not working, have you annotated one of your Spring Boot configuration classes with @EnableCaching to enable caching?
You might also need to set spring.cache.type=REDIS in application.properties, or spring.cache.type: REDIS in application.yml, to ensure Spring is using Redis and not some other cache provider.
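A minimal sketch of such a configuration class (the class name is illustrative):
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableCaching
public class CachingConfig {
    // With spring.cache.type=redis set, Spring Boot auto-configures a
    // RedisCacheManager to back @Cacheable methods.
}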

How to make AuditorAware work with Spring Data Mongo Reactive

Spring Security 5 provides a ReactiveSecurityContextHolder to fetch the SecurityContext from a Reactive context, but when I implement AuditorAware to get auditing working automatically, it does not work. Currently I cannot find a Reactive variant of AuditorAware.
@Bean
public AuditorAware<Username> auditor() {
    return () -> ReactiveSecurityContextHolder.getContext()
            .map(SecurityContext::getAuthentication)
            .log()
            .filter(a -> a != null && a.isAuthenticated())
            .map(Authentication::getPrincipal)
            .cast(UserDetails.class)
            .map(auth -> new Username(auth.getName()))
            .switchIfEmpty(Mono.empty())
            .blockOptional();
}
I have added @EnableMongoAuditing on my Boot application class.
On the Mongo document class, I added the auditing-related annotations:
@CreatedDate
private LocalDateTime createdDate;
@CreatedBy
private Username author;
When I add a post, createdDate is filled in, but author is null:
{"id":"5a49ccdb9222971f40a4ada1","title":"my first post","content":"content of my first post","createdDate":"2018-01-01T13:53:31.234","author":null}
The complete code is here, based on Spring Boot 2.0.0.M7.
Update: Spring Boot 2.4.0-M2 / Spring Data Commons 2.4.0-M2 / Spring Data Mongo 3.1.0-M2 include a ReactiveAuditorAware. Check this new sample. Note: use @EnableReactiveMongoAuditing to activate it.
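For illustration, a sketch of the reactive variant (assuming Spring Data Commons 2.4+ and the Username wrapper from the question):
import org.springframework.context.annotation.Bean;
import org.springframework.data.domain.ReactiveAuditorAware;
import org.springframework.security.core.Authentication;
import org.springframework.security.core.context.ReactiveSecurityContextHolder;
import org.springframework.security.core.context.SecurityContext;

@Bean
public ReactiveAuditorAware<Username> reactiveAuditorAware() {
    // Returns a Mono instead of an Optional, so no blocking is needed.
    return () -> ReactiveSecurityContextHolder.getContext()
            .map(SecurityContext::getAuthentication)
            .filter(Authentication::isAuthenticated)
            .map(auth -> new Username(auth.getName()));
}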
I am posting another solution, which takes the incoming id into account in order to support update operations:
@Component
@RequiredArgsConstructor
public class AuditCallback implements ReactiveBeforeConvertCallback<AuditableEntity> {
    private final ReactiveMongoTemplate mongoTemplate;

    private Mono<?> exists(Object id, Class<?> entityClass) {
        if (id == null) {
            return Mono.empty();
        }
        return mongoTemplate.findById(id, entityClass);
    }

    @Override
    public Publisher<AuditableEntity> onBeforeConvert(AuditableEntity entity, String collection) {
        var securityContext = ReactiveSecurityContextHolder.getContext();
        return securityContext
                .zipWith(exists(entity.getId(), entity.getClass()))
                .map(tuple2 -> {
                    var auditableEntity = (AuditableEntity) tuple2.getT2();
                    auditableEntity.setLastModifiedBy(tuple2.getT1().getAuthentication().getName());
                    auditableEntity.setLastModifiedDate(Instant.now());
                    return auditableEntity;
                })
                .switchIfEmpty(Mono.zip(securityContext, Mono.just(entity))
                        .map(tuple2 -> {
                            var auditableEntity = (AuditableEntity) tuple2.getT2();
                            String principal = tuple2.getT1().getAuthentication().getName();
                            Instant now = Instant.now();
                            auditableEntity.setLastModifiedBy(principal);
                            auditableEntity.setCreatedBy(principal);
                            auditableEntity.setLastModifiedDate(now);
                            auditableEntity.setCreatedDate(now);
                            return auditableEntity;
                        }));
    }
}
Deprecated: see the updated solution in the original post.
Before the official ReactiveAuditorAware was provided, there was an alternative: implement this via the Spring Data Mongo specific ReactiveBeforeConvertCallback.
Do not use @EnableMongoAuditing.
Implement your own ReactiveBeforeConvertCallback; here I use a PersistentEntity interface for those entities that need to be audited.
public class PersistentEntityCallback implements ReactiveBeforeConvertCallback<PersistentEntity> {
    @Override
    public Publisher<PersistentEntity> onBeforeConvert(PersistentEntity entity, String collection) {
        var user = ReactiveSecurityContextHolder.getContext()
                .map(SecurityContext::getAuthentication)
                .filter(it -> it != null && it.isAuthenticated())
                .map(Authentication::getPrincipal)
                .cast(UserDetails.class)
                .map(userDetails -> new Username(userDetails.getUsername()))
                .switchIfEmpty(Mono.empty());
        var currentTime = LocalDateTime.now();
        if (entity.getId() == null) {
            entity.setCreatedDate(currentTime);
        }
        entity.setLastModifiedDate(currentTime);
        return user
                .map(u -> {
                    if (entity.getId() == null) {
                        entity.setCreatedBy(u);
                    }
                    entity.setLastModifiedBy(u);
                    return entity;
                })
                .defaultIfEmpty(entity);
    }
}
Check the complete code here.
To have the createdBy attribute filled, you need to link your auditorAware bean with the @EnableMongoAuditing annotation.
In your MongoConfig class, define your bean:
@Bean(name = "auditorAware")
public AuditorAware<String> auditor() {
    ....
}
and use it in the annotation :
@Configuration
@EnableMongoAuditing(auditorAwareRef = "auditorAware")
class MongoConfig {
    ....
}
