How to mask sensitive information while logging in spring integration framework - spring-boot

I have a requirement to mask sensitive information while logging. We are using the wire-tap provided by the integration framework for logging, and we have many interfaces already designed that log using wire-tap. We are currently using Spring Boot 2.1 and Spring Integration.

I hope that all your integration flows log via the mentioned single global wire-tap.
That wire-tap is just the start of another integration flow anyway: it is not limited to a channel with a logger on it. You really can build a wire-tapped flow of any complexity.
My point is that you can add a transformer before the logging-channel-adapter and mask the payload and/or headers any way you need. The logger will then receive already-masked data.
Another way is to use some masking functionality in the log-expression. There you can call a bean or a static utility for masking: https://docs.spring.io/spring-integration/reference/html/#logging-channel-adapter
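For example, a rough Java DSL sketch of the first approach (the channel name "wireTapChannel" and the MaskingService bean with a Message<?> mask(Message<?>) method are hypothetical, just to show the idea):
@Bean
public IntegrationFlow maskedLoggingFlow(MaskingService maskingService) {
    return IntegrationFlows.from("wireTapChannel")
            // mask payload and/or headers before anything is logged
            .transform(Message.class, m -> maskingService.mask(m))
            // the logger only ever sees the already-masked message
            .log(LoggingHandler.Level.INFO, "com.example.masked")
            .nullChannel();
}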

I don't know if this is a fancy approach, but I ended up implementing a sort of "error message filter" to mask headers when the sensitive one is present (this can be extended to multiple header names, but it gives the idea):
@Component
public class ErrorMessageFilter {

    private static final String SENSITIVE_HEADER_NAME = "sensitive_header";

    public Throwable filterErrorMessage(Throwable payload) {
        if (payload instanceof MessagingException) {
            Message<?> failedMessage = ((MessagingException) payload).getFailedMessage();
            if (failedMessage != null && failedMessage.getHeaders().containsKey(SENSITIVE_HEADER_NAME)) {
                MessageHeaderAccessor headerAccessor = new MessageHeaderAccessor(failedMessage);
                headerAccessor.setHeader(SENSITIVE_HEADER_NAME, "XXX");
                return new MessagingException(withPayload(failedMessage.getPayload())
                        .setHeaders(headerAccessor)
                        .build());
            }
        }
        return payload;
    }
}
Then, in the @Configuration class, I added a way to wire my filter into Spring Integration's LoggingHandler:
@Autowired
public void setLoggingHandlerLogExpression(LoggingHandler loggingHandler, ErrorMessageFilter messageFilter) {
    loggingHandler.setLogExpression(new FunctionExpression<Message<?>>((m) -> {
        if (m instanceof ErrorMessage) {
            return messageFilter.filterErrorMessage(((ErrorMessage) m).getPayload());
        }
        return m.getPayload();
    }));
}
This also gave me the flexibility to reuse my filter in other components where I handle error messages (e.g.: send error notifications to Zabbix, etc.).
P.S.: sorry about all the instanceof checks and ifs, but at a certain layer the dirty code has to start.

Related

How can transactions be implemented in Spring WebFlux without an R2DBC driver

General problem description
Due to compatibility issues with the provided database I cannot use the provided R2DBC driver for it. The only possible option is using the standard JDBC driver, but I have faced some issues getting transactions to work in the Spring WebFlux / Project Reactor context.
Transactions with JDBC usually rely on the connection being thread-local. In Project Reactor's Flux/Mono it is not guaranteed that each step of the pipeline is executed on the same thread. What's more, I assume one of the major benefits of reactive programming is the ability to switch threads without having to worry about it. For this reason the standard Spring JDBC TransactionManager cannot be used, while for R2DBC a ReactiveTransactionManager is implemented. As I am using JDBC in this case, I can neither use the JdbcTransactionManager nor is a ReactiveTransactionManager available.
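For illustration, this is the kind of usage a ReactiveTransactionManager would enable (a rough sketch of what I am aiming for; reactiveTransactionManager, repository and records are placeholders):
// wrap a whole reactive pipeline in one transaction, regardless of thread hops
TransactionalOperator transactionalOperator = TransactionalOperator.create(reactiveTransactionManager);
Mono<Void> transactionalWork = repository.saveAll(records)
        .as(transactionalOperator::transactional) // commit on success, rollback on error
        .then();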
First of all: is there a simple solution to this problem?
"Hacky" solution
I will now elaborate on the steps I have already taken to solve this issue. My idea was to implement a custom ReactiveTransactionManager based on the provided JdbcTransactionManager. My assumption was that it would be possible to wrap a transaction around a Mono/Flux this way. The problem is that I did not take the issue described above into account: it currently works only in a ThreadLocal context, as the underlying JDBC transactions still rely on it. Because of this, the inner transactions are committed or rolled back individually if the thread changes in between.
The following class is the implementation of my custom transaction manager to be included in a reactive stream.
public class JdbcReactiveTransactionManager implements ReactiveTransactionManager {

    // Jdbc or connection based transaction manager
    private final DataSourceTransactionManager transactionManager;

    // ReactiveTransaction delegates everything to TransactionStatus.
    static class JdbcReactiveTransaction implements ReactiveTransaction {

        public JdbcReactiveTransaction(TransactionStatus transactionStatus) {
            this.transactionStatus = transactionStatus;
        }

        private final TransactionStatus transactionStatus;

        public TransactionStatus getTransactionStatus() {
            return transactionStatus;
        }

        // [...]
    }

    @Override
    public @NonNull Mono<ReactiveTransaction> getReactiveTransaction(TransactionDefinition definition)
            throws TransactionException {
        return Mono.just(transactionManager.getTransaction(definition)).map(JdbcReactiveTransaction::new);
    }

    @Override
    public @NonNull Mono<Void> commit(@NonNull ReactiveTransaction transaction) throws TransactionException {
        if (transaction instanceof JdbcReactiveTransaction t) {
            transactionManager.commit(t.getTransactionStatus());
            return Mono.empty();
        } else {
            return Mono.error(new IllegalTransactionStateException("Illegal ReactiveTransaction type used"));
        }
    }

    @Override
    public @NonNull Mono<Void> rollback(@NonNull ReactiveTransaction transaction) throws TransactionException {
        if (transaction instanceof JdbcReactiveTransaction t) {
            transactionManager.rollback(t.getTransactionStatus());
            return Mono.empty();
        } else {
            return Mono.error(new IllegalTransactionStateException("Illegal ReactiveTransaction type used"));
        }
    }
}
The implemented solution works in all scenarios where the thread does not change. But a fixed thread is not what one usually wants to achieve using reactive approaches. Therefore the thread must be pinned using publishOn and subscribeOn. This is all very hacky and I myself do not consider it a good solution, but I do not see a better alternative currently. As this is only required for one use case right now, I can probably live with it, but I would really like to find a better solution.
Pinning the Thread
The example below shows that I need to use both publishOn and subscribeOn to pin the thread. If I omit either one of these, some statements won't be executed on the same thread. My current assumption is that Netty executes the body parsing on a separate thread (or event loop), therefore the additional publishOn is required.
public Mono<ServerResponse> allocateFlows(ServerRequest request) {
    final val single = Schedulers.newSingle("AllocationService-allocateFlows");
    return request.bodyToMono(FlowsAllocation.class)
            .publishOn(single) // Why do I need this although I execute subscribeOn later?
            .flatMapMany(this::someProcessingLogic)
            .concatMapDelayError(this::someOtherProcessingLogic)
            .as(transactionalOperator::transactional)
            .subscribeOn(single, false)
            .then(ServerResponse.ok().build());
}

Spring Reactor and consuming websocket messages

I'm creating a Spring Reactor application to consume messages from a WebSocket server, transform them, and later save them to Redis and an SQL database; saving to Redis and the SQL database is also reactive. Also, before being written to Redis and the SQL database, messages will be windowed (with different timespans) and aggregated.
I'm not sure whether the way I've accomplished this is proper reactive-wise, that is, whether I'm losing any of the reactive benefits (performance).
First, let me show you what I got:
@Service
class WebSocketsConsumer {

    public ConnectableFlux<String> webSocketFlux() {
        return Flux.<String>create(emitter -> {
            createWebSocketClient()
                    .execute(URI.create("wss://some-url-goes-here.com"), session -> {
                        WebSocketMessage initialMessage = session.textMessage("SOME_MSG_HERE");
                        Flux<String> flux = session.send(Mono.just(initialMessage))
                                .thenMany(session.receive())
                                .map(WebSocketMessage::getPayloadAsText)
                                .doOnNext(emitter::next);
                        Flux<String> sessionStatus = session.closeStatus()
                                .switchIfEmpty(Mono.just(CloseStatus.GOING_AWAY))
                                .map(CloseStatus::toString)
                                .doOnNext(emitter::next)
                                .flatMapMany(Flux::just);
                        return flux
                                .mergeWith(sessionStatus)
                                .then();
                    })
                    .subscribe(); // 1: highlighted by IntelliJ IDEA: "Calling 'subscribe' in non-blocking context"
        })
        .publish();
    }

    private ReactorNettyWebSocketClient createWebSocketClient() {
        return new ReactorNettyWebSocketClient(
                HttpClient.create(),
                () -> WebsocketClientSpec.builder().maxFramePayloadLength(131072 * 100)
        );
    }
}
And
@Service
class WebSocketMessageDispatcher {

    private final WebSocketsConsumer webSocketsConsumer;
    private final Consumer<String> reactiveRedisConsumer;
    private final Consumer<String> reactiveJdbcConsumer;

    private Disposable webSocketsDisposable;

    WebSocketMessageDispatcher(WebSocketsConsumer webSocketsConsumer, Consumer<String> redisConsumer, Consumer<String> dbConsumer) {
        this.webSocketsConsumer = webSocketsConsumer;
        this.reactiveRedisConsumer = redisConsumer;
        this.reactiveJdbcConsumer = dbConsumer;
    }

    @EventListener(ApplicationReadyEvent.class)
    public void onReady() {
        ConnectableFlux<String> messages = webSocketsConsumer.webSocketFlux();
        messages.subscribe(reactiveRedisConsumer);
        messages.subscribe(reactiveJdbcConsumer);
        webSocketsDisposable = messages.connect();
    }

    @PreDestroy
    public void onDestroy() {
        if (webSocketsDisposable != null) webSocketsDisposable.dispose();
    }
}
Questions:
Is it a proper use of reactive streams? Maybe the Redis and database writes should be done in flatMap; however, IMO they can't, as I want them to happen in the background and they will also aggregate messages over different time windows (see the rough sketch after these questions). Also note comment 1 in the code above, where IDEA lints my code; the code works, but I wonder what this lint may result in? Maybe I should use doOnNext not to call emitter::next but to invoke some dispatcher of messages there, with some function like doOnNext(dispatcher::dispatchMessage)?
I want the WebSocket client to start immediately after the application is ready and to stop consuming messages when the application shuts down. Are the @EventListener(ApplicationReadyEvent.class) and @PreDestroy annotations and the code shown above a proper way to handle this scenario in the reactive world?
As I said, saving to Redis and the SQL database is also reactive, i.e. those saves also produce Mono<T>. Is subscribing to those Monos inside the subscribe of the WebSocket flux OK, or should it be accomplished some other way (comments 2 and 3 in the code above)?
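For reference, a rough sketch of the flatMap-based alternative I have in mind (reactiveRedisTemplate is only a placeholder for my actual reactive writer, and the window size is arbitrary):
ConnectableFlux<String> messages = webSocketsConsumer.webSocketFlux();
messages
        .window(Duration.ofSeconds(10))                           // aggregate per time window
        .flatMap(window -> window.collectList()
                .filter(batch -> !batch.isEmpty())
                .flatMap(batch -> reactiveRedisTemplate.opsForList()
                        .rightPushAll("ws:messages", batch)))     // reactive write instead of a blocking Consumer
        .subscribe();
webSocketsDisposable = messages.connect();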

Spring Cloud Stream: resolving the input channel dynamically based on the message

I need a way of resolving an inbound channel dynamically based on the type of the incoming message.
I am not looking for any header-based solution, which is already mentioned in this link:
https://cloud.spring.io/spring-cloud-static/spring-cloud-stream/3.0.0.M1/spring-cloud-stream.html#_using_streamlistener_for_content_based_routing
The resolution has to happen based on the type of the message. If there is a custom binding that can be done at application startup to be able to do this, that should be OK. Please give me some samples of how this can be achieved.
There is no such support in Spring Cloud Stream.
The underlying Spring for Apache Kafka project does have support for such scenarios.
See @KafkaListener on a Class.
It requires the payload to have been deserialized by the Kafka deserializer; then the method called depends on the payload type.
It also supports a fallback "default" method.
@KafkaListener(id = "multi", topics = "myTopic")
static class MultiListenerBean {

    @KafkaHandler
    public void listen(String foo) {
        ...
    }

    @KafkaHandler
    public void listen(Integer bar) {
        ...
    }

    @KafkaHandler(isDefault = true)
    public void listenDefault(Object object) {
        ...
    }
}

Spring Boot auto-configured metrics not arriving to Librato

I am using Spring Boot with auto-configuration enabled (@EnableAutoConfiguration) and trying to send my Spring MVC metrics to Librato. Right now only the metrics I created myself are arriving in Librato; the auto-configured metrics (CPU, file descriptors, etc.) are not sent to my reporter.
If I access a metric endpoint I can see the info generated there, for instance http://localhost:8081/actuator/metrics/system.cpu.count
I based my code on this post for the ConsoleReporter, so I have this:
public static MeterRegistry libratoRegistry() {
    MetricRegistry dropwizardRegistry = new MetricRegistry();
    String libratoApiAccount = "xx";
    String libratoApiKey = "yy";
    String libratoPrefix = "zz";
    LibratoReporter reporter = Librato
            .reporter(dropwizardRegistry, libratoApiAccount, libratoApiKey)
            .setPrefix(libratoPrefix)
            .build();
    reporter.start(60, TimeUnit.SECONDS);
    DropwizardConfig dropwizardConfig = new DropwizardConfig() {
        @Override
        public String prefix() {
            return "myprefix";
        }
        @Override
        public String get(String key) {
            return null;
        }
    };
    return new DropwizardMeterRegistry(dropwizardConfig, dropwizardRegistry, HierarchicalNameMapper.DEFAULT, Clock.SYSTEM) {
        @Override
        protected Double nullGaugeValue() {
            return null;
        }
    };
}
and in my main function I added Metrics.addRegistry(SpringReporter.libratoRegistry());
For the Librato library I am using compile("com.librato.metrics:metrics-librato:5.1.2") in my build.gradle. Documentation here. I used this library before without any problem.
If I use the ConsoleReporter as in this post the same thing happens, only my own created metrics are printed to the console.
Any thoughts on what I am doing wrong, or what I am missing?
Also, I enabled debug mode to see the "CONDITIONS EVALUATION REPORT" printed in the console but not sure what to look for in there.
Try to make your MeterRegistry for the Librato reporter a Spring @Bean and let me know whether it works.
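For example, a rough sketch (the factory method body is the same as your libratoRegistry() above; the bean method name is arbitrary):
@Bean
public MeterRegistry libratoMeterRegistry() {
    // as a bean, Spring Boot applies its auto-configured MeterBinders (JVM, CPU, etc.) to this registry
    return SpringReporter.libratoRegistry();
}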
UPDATE:
I tested with the ConsoleReporter you mentioned and confirmed it's working with a sample. Note that the sample is on the console-reporter branch, not the master branch. See the sample for details.

Spring Cloud Stream RabbitMQ

I am trying to understand why I would want to use Spring Cloud Stream with RabbitMQ. I've had a look at the RabbitMQ Spring tutorial 4 (https://www.rabbitmq.com/tutorials/tutorial-four-spring-amqp.html), which is basically what I want to do. It creates a direct exchange with 2 queues attached, and depending on the routing key a message is routed either to Q1 or to Q2.
The whole process is pretty straightforward if you look at the tutorial: you create all the parts, bind them together, and you're ready to go.
I was wondering what benefit I would gain in using Spring Cloud Stream and whether that is even the use case for it. It was easy to create a simple exchange, and even defining destination and group was straightforward with Stream. So I thought why not go further and try to handle the tutorial case with Stream.
I have seen that Stream has a BinderAwareChannelResolver which seems to do the same thing. But I am struggling to put it all together to achieve the same as in the RabbitMQ Spring tutorial. I am not sure if it is a dependency issue, but I seem to misunderstand something fundamental here; I thought something like:
spring.cloud.stream.bindings.output.destination=myDestination
spring.cloud.stream.bindings.output.group=consumerGroup
spring.cloud.stream.rabbit.bindings.output.producer.routing-key-expression='key'
should do the trick.
Is there anyone with a minimal example for a source and sink which basically creates a direct exchange, binds 2 queues to it and depending on routing key routes to either one of those 2 queues like in https://www.rabbitmq.com/tutorials/tutorial-four-spring-amqp.html?
EDIT:
Below is a minimal set of code which demonstrates how to do what I asked. I did not attach the build.gradle as it is straightforward (but if anyone is interested, let me know).
application.properties: set up the producer
spring.cloud.stream.bindings.output.destination=tut.direct
spring.cloud.stream.rabbit.bindings.output.producer.exchangeType=direct
spring.cloud.stream.rabbit.bindings.output.producer.routing-key-expression=headers.type
Sources.class: set up the producer's channel
public interface Sources {

    String OUTPUT = "output";

    @Output(Sources.OUTPUT)
    MessageChannel output();
}
StatusController.class: Respond to rest calls and send message with specific routing keys
/**
 * Status endpoint for the health-check service.
 */
@RestController
@EnableBinding(Sources.class)
public class StatusController {

    private int index;
    private int count;
    private final String[] keys = {"orange", "black", "green"};

    private Sources sources;
    private StatusService status;

    @Autowired
    public StatusController(Sources sources, StatusService status) {
        this.sources = sources;
        this.status = status;
    }

    /**
     * Service available, service returns "OK".
     * @return The status of the service.
     */
    @RequestMapping("/status")
    public String status() {
        String status = this.status.getStatus();
        StringBuilder builder = new StringBuilder("Hello to ");
        if (++this.index == 3) {
            this.index = 0;
        }
        String key = keys[this.index];
        builder.append(key).append(' ');
        builder.append(Integer.toString(++this.count));
        String payload = builder.toString();
        log.info(payload);
        // add kv pair - routingkeyexpression (which matches 'type') will then evaluate
        // and add the value as routing key
        Message<String> msg = new GenericMessage<>(payload, Collections.singletonMap("type", key));
        sources.output().send(msg);
        // return rest call
        return status;
    }
}
consumer side of things, properties:
spring.cloud.stream.bindings.input.destination=tut.direct
spring.cloud.stream.rabbit.bindings.input.consumer.exchangeType=direct
spring.cloud.stream.rabbit.bindings.input.consumer.bindingRoutingKey=orange
spring.cloud.stream.bindings.inputer.destination=tut.direct
spring.cloud.stream.rabbit.bindings.inputer.consumer.exchangeType=direct
spring.cloud.stream.rabbit.bindings.inputer.consumer.bindingRoutingKey=black
Sinks.class:
public interface Sinks {

    String INPUT = "input";

    @Input(Sinks.INPUT)
    SubscribableChannel input();

    String INPUTER = "inputer";

    @Input(Sinks.INPUTER)
    SubscribableChannel inputer();
}
ReceiveStatus.class: Receive the status:
@EnableBinding(Sinks.class)
public class ReceiveStatus {

    @StreamListener(Sinks.INPUT)
    public void receiveStatusOrange(String msg) {
        log.info("I received a message. It was orange number: {}", msg);
    }

    @StreamListener(Sinks.INPUTER)
    public void receiveStatusBlack(String msg) {
        log.info("I received a message. It was black number: {}", msg);
    }
}
Spring Cloud Stream lets you develop event-driven microservice applications by enabling them to connect (via @EnableBinding) to external messaging systems using the Spring Cloud Stream binder implementations (Kafka, RabbitMQ, JMS binders, etc.). Apparently, Spring Cloud Stream uses Spring AMQP for the RabbitMQ binder implementation.
The BinderAwareChannelResolver provides dynamic destination binding support on the producer side (a rough sketch of its use is shown below), and I think in your case it is rather about configuring the exchange and binding the consumers to that exchange.
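For reference, a minimal sketch of sending through the BinderAwareChannelResolver (field injection and the destination name are only for illustration):
@Autowired
private BinderAwareChannelResolver resolver;

public void sendTo(String destination, String payload) {
    // resolves (and binds, if necessary) an output channel for the destination at runtime
    resolver.resolveDestination(destination)
            .send(MessageBuilder.withPayload(payload).build());
}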
For instance, you need to have 2 consumers with the appropriate bindingRoutingKey set based on your criteria, and a single producer with the properties (routing-key-expression, destination) you mentioned above (except the group). I noticed that you have configured group for the outbound channel. The group property is applicable only to consumers (hence inbound).
You might also want to check this one: https://github.com/spring-cloud/spring-cloud-stream-binder-rabbit/issues/57 as I see some discussion around using routing-key-expression. Specifically, check this one on using the expression value.
