Spring-Kafka Sending custom record instead of failed record using DeadLetterPublishingRecoverer to a DLT - spring-boot

I am using DeadLetterPublishingRecoverer to automatically send failed records to a DLT. I would like to send a custom record instead of the failed record to the DLT. Is this possible, and if so, how should I configure it? My DeadLetterPublishingRecoverer config is below.
@Bean
DeadLetterPublishingRecoverer deadLetterPublishingRecoverer(KafkaTemplate<String, byte[]> byteArrayTemplate) {
    return new DeadLetterPublishingRecoverer([(byte[].class): byteArrayTemplate])
}

Create a subclass of DeadLetterPublishingRecoverer and override the createProducerRecord() method.
/**
 * Subclasses can override this method to customize the producer record to send to the
 * DLQ. The default implementation simply copies the key and value from the consumer
 * record and adds the headers. The timestamp is not set (the original timestamp is in
 * one of the headers). IMPORTANT: if the partition in the {@link TopicPartition} is
 * less than 0, it must be set to null in the {@link ProducerRecord}.
 * @param record the failed record
 * @param topicPartition the {@link TopicPartition} returned by the destination
 * resolver.
 * @param headers the headers - original record headers plus DLT headers.
 * @param data the value to use instead of the consumer record value.
 * @param isKey true if key deserialization failed.
 * @return the producer record to send.
 * @see KafkaHeaders
 */
protected ProducerRecord<Object, Object> createProducerRecord(ConsumerRecord<?, ?> record,
        TopicPartition topicPartition, Headers headers, @Nullable byte[] data, boolean isKey) {
In the upcoming 2.7 release, this is changed to
/**
 * Subclasses can override this method to customize the producer record to send to the
 * DLQ. The default implementation simply copies the key and value from the consumer
 * record and adds the headers. The timestamp is not set (the original timestamp is in
 * one of the headers). IMPORTANT: if the partition in the {@link TopicPartition} is
 * less than 0, it must be set to null in the {@link ProducerRecord}.
 * @param record the failed record
 * @param topicPartition the {@link TopicPartition} returned by the destination
 * resolver.
 * @param headers the headers - original record headers plus DLT headers.
 * @param key the key to use instead of the consumer record key.
 * @param value the value to use instead of the consumer record value.
 * @return the producer record to send.
 * @see KafkaHeaders
 */
protected ProducerRecord<Object, Object> createProducerRecord(ConsumerRecord<?, ?> record,
        TopicPartition topicPartition, Headers headers, @Nullable byte[] key, @Nullable byte[] value) {
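Putting that together, a minimal sketch of such a subclass against the pre-2.7 signature might look like the following (the replacement payload `"my custom value"` is only a placeholder; substitute whatever record you actually want on the DLT):

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.header.Headers;
import org.springframework.kafka.core.KafkaOperations;
import org.springframework.kafka.listener.DeadLetterPublishingRecoverer;
import org.springframework.lang.Nullable;

public class CustomDeadLetterPublishingRecoverer extends DeadLetterPublishingRecoverer {

    public CustomDeadLetterPublishingRecoverer(KafkaOperations<?, ?> template) {
        super(template);
    }

    @Override
    protected ProducerRecord<Object, Object> createProducerRecord(ConsumerRecord<?, ?> record,
            TopicPartition topicPartition, Headers headers, @Nullable byte[] data, boolean isKey) {
        // Publish a custom payload instead of the failed record's value,
        // keeping the original key and the DLT headers.
        byte[] customValue = "my custom value".getBytes(); // placeholder payload
        // Per the Javadoc above: a negative partition must become null in the ProducerRecord.
        Integer partition = topicPartition.partition() < 0 ? null : topicPartition.partition();
        return new ProducerRecord<>(topicPartition.topic(), partition, record.key(), customValue, headers);
    }
}
```

Register an instance of this subclass as the recoverer bean in place of the plain DeadLetterPublishingRecoverer.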


Spring security 6.0 AuthorizationFilter - questionable default for shouldFilterAllDispatcherTypes

I spent a few hours today on a migration issue to Spring Security 6.0, replacing the deprecated authorizeRequests() method with authorizeHttpRequests(). I learned that, under the hood, this implies replacing the FilterSecurityInterceptor with the new AuthorizationFilter in the security chain.
However, I got some unexpected results for my unauthenticated register endpoint, which uses a bean-validated @Valid request body and also answers with BadRequest = 400 if you try to register a user that already exists in the database.
When moving to AuthorizationFilter, a valid register request still worked as expected, but the error cases (validation failure as well as an already existing user) both replied with Unauthorized = 401, which is not acceptable for an unauthenticated endpoint...
I could solve this (eventually !) by chaining
.shouldFilterAllDispatcherTypes(false)
to authorizeHttpRequests().
But now I started to wonder whether the new default behaviour makes sense...
The rather unspectacular code snippets are:
The controller-mapped call, where the service can throw a UserAlreadyExistsException annotated with @ResponseStatus(HttpStatus.BAD_REQUEST):
@PostMapping("/api/register")
public ResponseEntity<Void> registerUser(@Valid @RequestBody UserDto userDto) {
    service.registerUser(mapper.toEntity(userDto));
    return ok().build();
}
The relevant part of the SecurityFilterChain bean:
@Bean
public SecurityFilterChain securityFilterChain(HttpSecurity http,
        AuthenticationManager authenticationManager) throws Exception {
    http.authenticationManager(authenticationManager)
        //.authorizeRequests() <-- deprecated, but working, using FilterSecurityInterceptor
        .authorizeHttpRequests()
        .shouldFilterAllDispatcherTypes(false) // without this line weird behavior since default is true
        .requestMatchers(HttpMethod.POST, "/api/register").permitAll();
    // ... more requestMatchers and other stuff
    return http.build();
}
So I dug deeper into the AuthorizationFilter - and there the Javadoc is already contradictory, if you look at the following snippet from AuthorizationFilter of Spring Security 6.0.1. The default stated for the first, new method contradicts the defaults of the three methods below it:
/**
 * Sets whether to filter all dispatcher types.
 * @param shouldFilterAllDispatcherTypes should filter all dispatcher types. Default
 * is {@code true}
 * @since 5.7
 */
public void setShouldFilterAllDispatcherTypes(boolean shouldFilterAllDispatcherTypes) {
    this.observeOncePerRequest = !shouldFilterAllDispatcherTypes;
    this.filterErrorDispatch = shouldFilterAllDispatcherTypes;
    this.filterAsyncDispatch = shouldFilterAllDispatcherTypes;
}
//...
/**
 * Sets whether this filter apply only once per request. By default, this is
 * <code>true</code>, meaning the filter will only execute once per request. Sometimes
 * users may wish it to execute more than once per request, such as when JSP forwards
 * are being used and filter security is desired on each included fragment of the HTTP
 * request.
 * @param observeOncePerRequest whether the filter should only be applied once per
 * request
 */
public void setObserveOncePerRequest(boolean observeOncePerRequest) {
    this.observeOncePerRequest = observeOncePerRequest;
}

/**
 * If set to true, the filter will be applied to error dispatcher. Defaults to false.
 * @param filterErrorDispatch whether the filter should be applied to error dispatcher
 */
public void setFilterErrorDispatch(boolean filterErrorDispatch) {
    this.filterErrorDispatch = filterErrorDispatch;
}

/**
 * If set to true, the filter will be applied to the async dispatcher. Defaults to
 * false.
 * @param filterAsyncDispatch whether the filter should be applied to async dispatch
 */
public void setFilterAsyncDispatch(boolean filterAsyncDispatch) {
    this.filterAsyncDispatch = filterAsyncDispatch;
}
Even worse, there seems to be a related vulnerability allowing authorization to be bypassed, as described in the link below, if you use the default. So I am wondering whether default=true for shouldFilterAllDispatcherTypes makes sense - or am I missing a point here?
https://security.snyk.io/vuln/SNYK-JAVA-ORGSPRINGFRAMEWORKSECURITY-3092126
I'm not sure this answers your question, but you are calling a method on AuthorizationManagerRequestMatcherRegistry while you are looking at AuthorizationFilter. Check the source here:
source
/**
 * Sets whether all dispatcher types should be filtered.
 * @param shouldFilter should filter all dispatcher types. Default is {@code true}
 * @return the {@link AuthorizationManagerRequestMatcherRegistry} for further
 * customizations
 * @since 5.7
 */
public AuthorizationManagerRequestMatcherRegistry shouldFilterAllDispatcherTypes(boolean shouldFilter) {
    this.shouldFilterAllDispatcherTypes = shouldFilter;
    return this;
}

Is there a way to call a transformer method to change the payload in a Spring Integration flow

In the code below I'm transforming the payload into a different payload and sending it as the request body of a POST API call. I want to move that transformation out into an external method call. Is that possible?
@Bean
public IntegrationFlow flow3() {
    return integrationFlowDefinition -> integrationFlowDefinition
        .channel(c -> c.executor(Executors.newCachedThreadPool())).log()
        // .split("payload.employee")
        // .transform(Transformers.toJson()).log()
        .transform(Transformers.fromJson(Map.class)).log("json payload to Map object")
        .<Map<String, String>, Map<String, String>>transform(
            payload -> {
                payload.put("name", "Somnath Mukhopadhyay");
                payload.put("company", "xyz");
                // payload.put("salary", "20000");
                return payload;
            }
        ).log("Modifying the payload")
        .transform(Transformers.toJson()).log("modified Map object to JSON")
        .enrichHeaders(headerEnricherSpec -> headerEnricherSpec.header("ContentType", "application/json"))
        .handle(Http.outboundGateway("http://localhost:8888/Employee")
            .httpMethod(HttpMethod.POST)
            .expectedResponseType(String.class)
        )
        .log("Getting response back from flow3");
}
There is this one for you:
/**
 * Populate the {@code MessageTransformingHandler} for the {@link MethodInvokingTransformer}
 * to invoke the service method at runtime.
 * @param service the service to use.
 * @param methodName the method to invoke.
 * @return the current {@link BaseIntegrationFlowDefinition}.
 * @see MethodInvokingTransformer
 */
public B transform(Object service, String methodName) {
Read the javadocs for these DSL operators.
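A minimal sketch of how that could look against the flow in the question (the `EmployeeEnricher` bean and its `enrich` method are illustrative names, not part of the question's code):

```java
import java.util.Map;

import org.springframework.stereotype.Component;

// Illustrative service bean carrying the transformation logic
// previously written as an inline lambda in the flow.
@Component
public class EmployeeEnricher {

    public Map<String, String> enrich(Map<String, String> payload) {
        payload.put("name", "Somnath Mukhopadhyay");
        payload.put("company", "xyz");
        return payload;
    }
}

// In flow3(), the inline lambda can then be replaced with the service/method pair;
// a MethodInvokingTransformer calls enrich() on each message at runtime:
// .transform(employeeEnricher, "enrich").log("Modifying the payload")
```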

Handle "source cannot be null" with @RequestBody in Spring Boot Controller

I am new to Spring Boot and I have trouble figuring out how to handle the java.lang.IllegalArgumentException: source cannot be null exception in my login controller.
My app keeps crashing whenever I try to log in with a non-existing email address, and I do not know where to look.
My login controller is:
@PostMapping(value = "/login", consumes = "application/json", produces = "text/plain")
public ResponseEntity<String> loginClient(@RequestBody ClientDto clientDto) {
    ClientDto client = modelMapper.map(repo.findByEmail(clientDto.getEmail()), ClientDto.class);
    Client clientDB = repo.findByEmail(client.getEmail());
    return new ResponseEntity<String>("Vous ĂȘtes connectĂ©" + clientDB.getNom(), HttpStatus.CREATED);
    //return new ResponseEntity<String>("Fail", HttpStatus.NOT_FOUND);
}
I tried to add an if (clientDto == null) check, but it did not help.
I have also tried @RequestBody(required = false), and the app still crashes.
When I read the doc of @RequestBody, they mention this behaviour, but I do not really understand how to configure it.
@Target(ElementType.PARAMETER)
@Retention(RetentionPolicy.RUNTIME)
@Documented
public @interface RequestBody {

    /**
     * Whether body content is required.
     * <p>Default is {@code true}, leading to an exception thrown in case
     * there is no body content. Switch this to {@code false} if you prefer
     * {@code null} to be passed when the body content is {@code null}.
     * @since 3.2
     */
    boolean required() default true;
}
What does this mean, and how am I supposed to switch to this:
Switch this to {@code false} if you prefer
* {@code null} to be passed when the body content is {@code null}.
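Note that the exception most likely does not come from @RequestBody at all: for a non-existing email, repo.findByEmail(...) returns null, and modelMapper.map(null, ...) is what throws "source cannot be null". Guarding the lookup before mapping avoids the crash; a sketch based on the controller above (the status codes chosen here are illustrative):

```java
@PostMapping(value = "/login", consumes = "application/json", produces = "text/plain")
public ResponseEntity<String> loginClient(@RequestBody ClientDto clientDto) {
    Client clientDB = repo.findByEmail(clientDto.getEmail());
    if (clientDB == null) {
        // Unknown email address: answer 404 instead of letting ModelMapper throw.
        return new ResponseEntity<>("Fail", HttpStatus.NOT_FOUND);
    }
    return new ResponseEntity<>("Vous ĂȘtes connectĂ© " + clientDB.getNom(), HttpStatus.OK);
}
```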

InResponseToField of the Response doesn't correspond to sent message: SAML error, Spring Security 4.2.13.RELEASE

My web application is deployed on Amazon ECS behind an ALB, and I access the application from a bastion host. I am using Okta for SSO. The login page is redirected successfully to Okta, and after authentication, when the request comes back to the application server, I get the following error -
Caused by: org.opensaml.common.SAMLException: InResponseToField of the Response doesn't correspond to sent message a491gda80cgh3a2b5bb3j8ebd515d2
at org.springframework.security.saml.websso.WebSSOProfileConsumerImpl.processAuthenticationResponse(WebSSOProfileConsumerImpl.java:139)
I am using a CustomSAMLContextProvider and setting the MessageStorageFactory to EmptyStorageFactory as suggested in other answers.
I am not sure why this check is still happening.
Here is my custom SAMLContextProviderImpl class -
public class SAMLMultipleEndpointContextProvider extends SAMLContextProviderImpl {

    /**
     * Creates a SAMLContext with local entity values filled. LocalEntityId is set to server name of the request. Also
     * request and response must be stored in the context as message transports.
     *
     * @param request request
     * @param response response
     * @return context
     * @throws MetadataProviderException in case of metadata problems
     */
    @Override
    public SAMLMessageContext getLocalEntity(HttpServletRequest request, HttpServletResponse response) throws MetadataProviderException {
        SAMLMessageContext context = new SAMLMessageContext();
        populateGenericContext(request, response, context);
        populateLocalEntityId(context, request.getServerName());
        populateLocalContext(context);
        return context;
    }

    /**
     * Creates a SAMLContext with local entity and peer values filled. LocalEntityId is set to server name of the
     * request. Also request and response must be stored in the context as message transports. Should be used when both
     * local entity and peer entity can be determined from the request.
     *
     * @param request request
     * @param response response
     * @return context
     * @throws MetadataProviderException in case of metadata problems
     */
    @Override
    public SAMLMessageContext getLocalAndPeerEntity(HttpServletRequest request, HttpServletResponse response) throws MetadataProviderException {
        SAMLMessageContext context = new SAMLMessageContext();
        populateGenericContext(request, response, context);
        populateLocalEntityId(context, request.getServerName());
        populateLocalContext(context);
        populatePeerEntityId(context);
        populatePeerContext(context);
        return context;
    }

    /**
     * Populate LocalEntityId with retrieved entityId from metadata manager using given localAlias parameter value.
     */
    @Override
    public void populateLocalEntityId(SAMLMessageContext context, String localAlias) throws MetadataProviderException {
        String entityId = metadata.getEntityIdForAlias(localAlias);
        QName localEntityRole = SPSSODescriptor.DEFAULT_ELEMENT_NAME;
        if (entityId == null) {
            throw new MetadataProviderException("No local entity found for alias " + localAlias + ", verify your configuration.");
        } else {
            logger.debug("Using SP {} specified in request with alias {}", entityId, localAlias);
        }
        context.setLocalEntityId(entityId);
        context.setLocalEntityRole(localEntityRole);
    }

    /**
     * Disable the check for InResponseToField from SSO message response.
     */
    @Override
    public void setStorageFactory(SAMLMessageStorageFactory storageFactory) {
        super.setStorageFactory(new EmptyStorageFactory());
    }
}
In order to comply with the rules defined in the SAML spec, the SAML response has to be validated against the SAML AuthNRequest in the SP-initiated SSO flow. By default, Spring SAML stores the SAML AuthNRequest in memory, hence the HTTP POST request containing the SAML response as payload MUST hit the same JVM where the AuthNRequest was created. If the LB cannot guarantee stickiness, then you need to implement a message store (org.springframework.security.saml.storage.SAMLMessageStorage, org.springframework.security.saml.storage.SAMLMessageStorageFactory) that can share the messages between multiple instances. Make sure that you delete the message from the store after consumption to prevent replay attacks, as SAML responses are meant for one-time usage.
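A rough sketch of such a store, assuming the storeMessage/retrieveMessage signatures of spring-security-saml's SAMLMessageStorage interface; the static map here only stands in for storage that is genuinely shared across instances (a database or distributed cache):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import javax.servlet.http.HttpServletRequest;

import org.opensaml.xml.XMLObject;
import org.springframework.security.saml.storage.SAMLMessageStorage;
import org.springframework.security.saml.storage.SAMLMessageStorageFactory;

// Sketch only: replace the map with a cross-instance store in production.
public class SharedSAMLMessageStorage implements SAMLMessageStorage {

    private static final Map<String, XMLObject> MESSAGES = new ConcurrentHashMap<>();

    @Override
    public void storeMessage(String messageID, XMLObject message) {
        MESSAGES.put(messageID, message);
    }

    @Override
    public XMLObject retrieveMessage(String messageID) {
        // Remove on consumption so a replayed SAML response no longer matches.
        return MESSAGES.remove(messageID);
    }
}

class SharedSAMLMessageStorageFactory implements SAMLMessageStorageFactory {

    @Override
    public SAMLMessageStorage getMessageStorage(HttpServletRequest request) {
        return new SharedSAMLMessageStorage();
    }
}
```

The factory would then be set on the context provider instead of EmptyStorageFactory, which disables the check entirely.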

Inbound http message validation with JSR303

I'm using Spring Integration to receive an HTTP message, put it in a channel and do some transformations.
I read the documentation (https://docs.spring.io/spring-integration/reference/html/http.html), and the gateway looks like:
@Bean
public HttpRequestHandlingMessagingGateway inbound() {
    HttpRequestHandlingMessagingGateway gateway =
            new HttpRequestHandlingMessagingGateway(true);
    gateway.setRequestMapping(mapping());
    gateway.setRequestPayloadType(SomeBean.class);
    gateway.setRequestChannelName("httpRequest");
    return gateway;
}
I want to validate the payload using JSR-303 bean validation (https://beanvalidation.org/1.0/spec/). Is that possible? What is the best way?
Thanks in advance!
There is a dedicated paragraph about validation: https://docs.spring.io/spring-integration/reference/html/http.html#http-validation. So, you just need to use the setValidator() of that gateway:
/**
 * Specify a {@link Validator} to validate a converted payload from request.
 * @param validator the {@link Validator} to use.
 * @since 5.2
 */
public void setValidator(Validator validator) {
    this.validator = validator;
}
The validation API comes from Spring Framework: https://docs.spring.io/spring/docs/5.2.4.RELEASE/spring-framework-reference/core.html#validation
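Wiring a JSR-303-backed validator into the gateway from the question might look like this (a sketch; LocalValidatorFactoryBean adapts the Bean Validation provider found on the classpath to Spring's Validator interface):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.integration.http.inbound.HttpRequestHandlingMessagingGateway;
import org.springframework.validation.Validator;
import org.springframework.validation.beanvalidation.LocalValidatorFactoryBean;

public class InboundGatewayConfig {

    @Bean
    public Validator validator() {
        // Bridges the JSR-303 (Bean Validation) provider to Spring's Validator interface.
        return new LocalValidatorFactoryBean();
    }

    @Bean
    public HttpRequestHandlingMessagingGateway inbound(Validator validator) {
        HttpRequestHandlingMessagingGateway gateway = new HttpRequestHandlingMessagingGateway(true);
        gateway.setRequestMapping(mapping());
        gateway.setRequestPayloadType(SomeBean.class);
        gateway.setRequestChannelName("httpRequest");
        // Constraint violations on SomeBean are rejected before the message reaches the channel.
        gateway.setValidator(validator);
        return gateway;
    }
}
```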
