Is there a way to call a transformer method to change the payload in a Spring Integration flow - spring

In the code below I'm transforming the payload into a different payload and sending it as the request body of a POST API call. I want to move that transformation out into an external method call. Is that possible?
@Bean
public IntegrationFlow flow3() {
    return integrationFlowDefinition -> integrationFlowDefinition
            .channel(c -> c.executor(Executors.newCachedThreadPool())).log()
            // .split("payload.employee")
            // .transform(Transformers.toJson()).log()
            .transform(Transformers.fromJson(Map.class)).log("json payload to Map object")
            .<Map<String, String>, Map<String, String>>transform(
                    payload -> {
                        payload.put("name", "Somnath Mukhopadhyay");
                        payload.put("company", "xyz");
                        // payload.put("salary", "20000");
                        return payload;
                    }
            ).log("Modifying the payload")
            .transform(Transformers.toJson()).log("modified Map object to JSON")
            .enrichHeaders(headerEnricherSpec -> headerEnricherSpec.header("ContentType", "application/json"))
            .handle(Http.outboundGateway("http://localhost:8888/Employee")
                    .httpMethod(HttpMethod.POST)
                    .expectedResponseType(String.class)
            )
            .log("Getting response back from flow3");
}

There is this DSL operator for you:
/**
 * Populate the {@code MessageTransformingHandler} for the {@link MethodInvokingTransformer}
 * to invoke the service method at runtime.
 * @param service the service to use.
 * @param methodName the method to invoke.
 * @return the current {@link BaseIntegrationFlowDefinition}.
 * @see MethodInvokingTransformer
 */
public B transform(Object service, String methodName) {
Read javadocs for those DSL operators.
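So the inline lambda from the question can be moved into a plain bean method and referenced by name. A minimal sketch, assuming a hypothetical `EmployeeEnricher` bean (the class and method names are made up, not part of the question):

```java
// Hypothetical bean holding the transformation logic; now unit-testable on its own.
@Component
public class EmployeeEnricher {

    public Map<String, String> enrich(Map<String, String> payload) {
        payload.put("name", "Somnath Mukhopadhyay");
        payload.put("company", "xyz");
        return payload;
    }
}

// The flow then delegates to it via the method-invoking transform(service, methodName):
@Bean
public IntegrationFlow flow3(EmployeeEnricher enricher) {
    return f -> f
            .transform(Transformers.fromJson(Map.class)).log("json payload to Map object")
            .transform(enricher, "enrich").log("Modifying the payload")
            .transform(Transformers.toJson()).log("modified Map object to JSON")
            .handle(Http.outboundGateway("http://localhost:8888/Employee")
                    .httpMethod(HttpMethod.POST)
                    .expectedResponseType(String.class));
}
```

Under the hood this wraps the bean in a MethodInvokingTransformer, exactly as the quoted javadoc describes.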

Related

Handle "source cannot be null" with @RequestBody in Spring Boot Controller

I am new to Spring Boot and I am having trouble figuring out how to handle the java.lang.IllegalArgumentException: source cannot be null exception in my login controller.
My app keeps crashing whenever I try to log in with a non-existent email address and I do not know where to look.
My login controller is:
@PostMapping(value = "/login", consumes = "application/json", produces = "text/plain")
public ResponseEntity<String> loginClient(@RequestBody ClientDto clientDto) {
    ClientDto client = modelMapper.map(repo.findByEmail(clientDto.getEmail()), ClientDto.class);
    Client clientDB = repo.findByEmail(client.getEmail());
    return new ResponseEntity<String>("Vous ĂȘtes connectĂ©" + clientDB.getNom(), HttpStatus.CREATED);
    // return new ResponseEntity<String>("Fail", HttpStatus.NOT_FOUND);
}
I tried adding an if (clientDto == null) check but it did not help.
I have also tried @RequestBody(required = false) and the app still crashes.
When I read the docs for @RequestBody they mention this exception, but I do not really understand how to configure it:
@Target(ElementType.PARAMETER)
@Retention(RetentionPolicy.RUNTIME)
@Documented
public @interface RequestBody {

    /**
     * Whether body content is required.
     * <p>Default is {@code true}, leading to an exception thrown in case
     * there is no body content. Switch this to {@code false} if you prefer
     * {@code null} to be passed when the body content is {@code null}.
     * @since 3.2
     */
    boolean required() default true;
}
What does this mean, and how am I supposed to switch to this:
Switch this to {@code false} if you prefer
{@code null} to be passed when the body content is {@code null}.
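The dump does not include an answer here, but for context: required = false only controls whether an empty HTTP body is allowed; it is unrelated to this exception. modelMapper.map(source, ...) throws "source cannot be null" when its source is null, which happens when repo.findByEmail() finds no client. A sketch (not from the original thread, reusing the question's names) that guards the lookup before mapping:

```java
@PostMapping(value = "/login", consumes = "application/json", produces = "text/plain")
public ResponseEntity<String> loginClient(@RequestBody ClientDto clientDto) {
    // findByEmail returns null for an unknown address; check it before any
    // mapping, because modelMapper.map(null, ...) throws "source cannot be null".
    Client clientDB = repo.findByEmail(clientDto.getEmail());
    if (clientDB == null) {
        return new ResponseEntity<>("Unknown email address", HttpStatus.NOT_FOUND);
    }
    return new ResponseEntity<>("Vous ĂȘtes connectĂ©" + clientDB.getNom(), HttpStatus.CREATED);
}
```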

Spring-Kafka Sending custom record instead of failed record using DeadLetterPublishingRecoverer to a DLT

I am using DeadLetterPublishingRecoverer to automatically send failed records to a DLT. I am trying to send a custom record instead of the failed record to the DLT. Is it possible to do this? Please help me with the configuration. My DeadLetterPublishingRecoverer config is below.
@Bean
DeadLetterPublishingRecoverer deadLetterPublishingRecoverer(KafkaTemplate<String, byte[]> byteArrayTemplate) {
    return new DeadLetterPublishingRecoverer(
            Collections.singletonMap(byte[].class, byteArrayTemplate));
}
Create a subclass of DeadLetterPublishingRecoverer and override the createProducerRecord() method.
/**
 * Subclasses can override this method to customize the producer record to send to the
 * DLQ. The default implementation simply copies the key and value from the consumer
 * record and adds the headers. The timestamp is not set (the original timestamp is in
 * one of the headers). IMPORTANT: if the partition in the {@link TopicPartition} is
 * less than 0, it must be set to null in the {@link ProducerRecord}.
 * @param record the failed record
 * @param topicPartition the {@link TopicPartition} returned by the destination
 * resolver.
 * @param headers the headers - original record headers plus DLT headers.
 * @param data the value to use instead of the consumer record value.
 * @param isKey true if key deserialization failed.
 * @return the producer record to send.
 * @see KafkaHeaders
 */
protected ProducerRecord<Object, Object> createProducerRecord(ConsumerRecord<?, ?> record,
        TopicPartition topicPartition, Headers headers, @Nullable byte[] data, boolean isKey) {
In the upcoming 2.7 release, this is changed to
/**
 * Subclasses can override this method to customize the producer record to send to the
 * DLQ. The default implementation simply copies the key and value from the consumer
 * record and adds the headers. The timestamp is not set (the original timestamp is in
 * one of the headers). IMPORTANT: if the partition in the {@link TopicPartition} is
 * less than 0, it must be set to null in the {@link ProducerRecord}.
 * @param record the failed record
 * @param topicPartition the {@link TopicPartition} returned by the destination
 * resolver.
 * @param headers the headers - original record headers plus DLT headers.
 * @param key the key to use instead of the consumer record key.
 * @param value the value to use instead of the consumer record value.
 * @return the producer record to send.
 * @see KafkaHeaders
 */
protected ProducerRecord<Object, Object> createProducerRecord(ConsumerRecord<?, ?> record,
        TopicPartition topicPartition, Headers headers, @Nullable byte[] key, @Nullable byte[] value) {
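A minimal sketch of such a subclass against the pre-2.7 signature (the replacement payload and class name are made up for illustration, not from the original question):

```java
public class CustomDeadLetterPublishingRecoverer extends DeadLetterPublishingRecoverer {

    public CustomDeadLetterPublishingRecoverer(KafkaOperations<?, ?> template) {
        super(template);
    }

    @Override
    protected ProducerRecord<Object, Object> createProducerRecord(ConsumerRecord<?, ?> record,
            TopicPartition topicPartition, Headers headers, @Nullable byte[] data, boolean isKey) {
        // Publish a custom value instead of the failed record's value.
        byte[] customValue = "my custom DLT payload".getBytes(StandardCharsets.UTF_8);
        // Per the javadoc above: a negative partition must become null in the ProducerRecord.
        Integer partition = topicPartition.partition() < 0 ? null : topicPartition.partition();
        return new ProducerRecord<>(topicPartition.topic(), partition,
                record.key(), customValue, headers);
    }
}
```

Register this subclass as the recoverer bean instead of the plain DeadLetterPublishingRecoverer.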

InResponseToField of the Response doesn't correspond to sent message: SAML error SpringSecurity - 4.2.13-RELEASE

My web application is deployed on Amazon ECS behind an ALB, and I access it from a bastion host. I am using Okta for SSO. The login page is redirected to Okta successfully, and after authentication, when the request comes back to the application server, I get the following error -
Caused by: org.opensaml.common.SAMLException: InResponseToField of the Response doesn't correspond to sent message a491gda80cgh3a2b5bb3j8ebd515d2
at org.springframework.security.saml.websso.WebSSOProfileConsumerImpl.processAuthenticationResponse(WebSSOProfileConsumerImpl.java:139)
I am using a CustomSAMLContextProvider and setting the MessageStorageFactory to EmptyStorageFactory as suggested in other answers.
I am not sure why this check is still happening.
Here is my custom SAMLContextProviderImpl class -
public class SAMLMultipleEndpointContextProvider extends SAMLContextProviderImpl {

    /**
     * Creates a SAMLContext with local entity values filled. LocalEntityId is set to server name of the request. Also
     * request and response must be stored in the context as message transports.
     *
     * @param request request
     * @param response response
     * @return context
     * @throws MetadataProviderException in case of metadata problems
     */
    @Override
    public SAMLMessageContext getLocalEntity(HttpServletRequest request, HttpServletResponse response) throws MetadataProviderException {
        SAMLMessageContext context = new SAMLMessageContext();
        populateGenericContext(request, response, context);
        populateLocalEntityId(context, request.getServerName());
        populateLocalContext(context);
        return context;
    }

    /**
     * Creates a SAMLContext with local entity and peer values filled. LocalEntityId is set to server name of the
     * request. Also request and response must be stored in the context as message transports. Should be used when both
     * local entity and peer entity can be determined from the request.
     *
     * @param request request
     * @param response response
     * @return context
     * @throws MetadataProviderException in case of metadata problems
     */
    @Override
    public SAMLMessageContext getLocalAndPeerEntity(HttpServletRequest request, HttpServletResponse response) throws MetadataProviderException {
        SAMLMessageContext context = new SAMLMessageContext();
        populateGenericContext(request, response, context);
        populateLocalEntityId(context, request.getServerName());
        populateLocalContext(context);
        populatePeerEntityId(context);
        populatePeerContext(context);
        return context;
    }

    /**
     * Populate LocalEntityId with retrieved entityId from metadata manager using given localAlias parameter value.
     */
    @Override
    public void populateLocalEntityId(SAMLMessageContext context, String localAlias) throws MetadataProviderException {
        String entityId = metadata.getEntityIdForAlias(localAlias);
        QName localEntityRole = SPSSODescriptor.DEFAULT_ELEMENT_NAME;
        if (entityId == null) {
            throw new MetadataProviderException("No local entity found for alias " + localAlias + ", verify your configuration.");
        } else {
            logger.debug("Using SP {} specified in request with alias {}", entityId, localAlias);
        }
        context.setLocalEntityId(entityId);
        context.setLocalEntityRole(localEntityRole);
    }

    /**
     * Disable the check for InResponseToField from SSO message response.
     */
    @Override
    public void setStorageFactory(SAMLMessageStorageFactory storageFactory) {
        super.setStorageFactory(new EmptyStorageFactory());
    }
}
In order to comply with the rules defined in the SAML spec, the SAML response has to be validated against the SAML AuthnRequest in an SP-initiated SSO flow. By default, Spring SAML stores the AuthnRequest in memory, hence the HTTP POST request containing the SAML response as payload MUST hit the same JVM where the AuthnRequest was created. If the load balancer cannot guarantee stickiness, then you need to implement a message store (org.springframework.security.saml.storage.SAMLMessageStorage, org.springframework.security.saml.storage.SAMLMessageStorageFactory) that can share the messages between instances. Make sure that you delete a message from the store after consumption to prevent replay attacks, as SAML responses are meant for one-time use.
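A rough sketch of such a storage/factory pair, backed here by a static in-memory map purely to illustrate the contract (a real multi-instance deployment would back this with Redis or a database, which a local map cannot replace):

```java
// Illustration only: the static map stands in for a store shared by all instances.
public class SharedSAMLMessageStorage implements SAMLMessageStorage {

    private static final Map<String, XMLObject> STORE = new ConcurrentHashMap<>();

    @Override
    public void storeMessage(String messageId, XMLObject message) {
        STORE.put(messageId, message);
    }

    @Override
    public XMLObject retrieveMessage(String messageId) {
        // Remove on retrieval: SAML responses are one-time-use (replay protection).
        return STORE.remove(messageId);
    }
}

public class SharedSAMLMessageStorageFactory implements SAMLMessageStorageFactory {

    @Override
    public SAMLMessageStorage getMessageStorage(HttpServletRequest request) {
        return new SharedSAMLMessageStorage();
    }
}
```

Wire the factory into the context provider via setStorageFactory() instead of the EmptyStorageFactory, so the InResponseTo check keeps working across instances rather than being disabled.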

Inbound http message validation with JSR303

I'm using Spring Integration to receive an http message and then put it in a channel and do some transformations.
I read the documentation (https://docs.spring.io/spring-integration/reference/html/http.html) and it looks like this:
@Bean
public HttpRequestHandlingMessagingGateway inbound() {
    HttpRequestHandlingMessagingGateway gateway =
            new HttpRequestHandlingMessagingGateway(true);
    gateway.setRequestMapping(mapping());
    gateway.setRequestPayloadType(SomeBean.class);
    gateway.setRequestChannelName("httpRequest");
    return gateway;
}
I want to validate the payload using JSR 303 bean validation (https://beanvalidation.org/1.0/spec/), is it possible? What is the best way?
Thanks in advance!
There is a dedicated paragraph about validation: https://docs.spring.io/spring-integration/reference/html/http.html#http-validation. So, you just need to use the setValidator() of that gateway:
/**
 * Specify a {@link Validator} to validate a converted payload from request.
 * @param validator the {@link Validator} to use.
 * @since 5.2
 */
public void setValidator(Validator validator) {
    this.validator = validator;
}
The validation API comes from Spring Framework: https://docs.spring.io/spring/docs/5.2.4.RELEASE/spring-framework-reference/core.html#validation
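Putting it together with the gateway from the question, a sketch: Spring's LocalValidatorFactoryBean adapts JSR-303 bean validation to the org.springframework.validation.Validator contract that setValidator() expects (the constraint fields on SomeBean are made up for illustration):

```java
@Bean
public Validator validator() {
    // Bridges JSR-303 (javax.validation) annotations to Spring's Validator API.
    return new LocalValidatorFactoryBean();
}

@Bean
public HttpRequestHandlingMessagingGateway inbound(Validator validator) {
    HttpRequestHandlingMessagingGateway gateway =
            new HttpRequestHandlingMessagingGateway(true);
    gateway.setRequestMapping(mapping());
    gateway.setRequestPayloadType(SomeBean.class);
    gateway.setRequestChannelName("httpRequest");
    gateway.setValidator(validator); // the converted SomeBean payload is validated here
    return gateway;
}

// SomeBean then just carries standard JSR-303 annotations, e.g.:
public class SomeBean {

    @NotNull
    @Size(min = 1, max = 50)
    private String name;

    // getters/setters omitted
}
```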

Validate request param names for optional fields - spring boot

If you have an endpoint described like this :
@GetMapping("/hello")
public String sayHello(@RequestParam(name = "my_id", required = false) String myId,
                       @RequestParam(name = "my_name") String myName) {
    // return something
}
Issue:
The consumers of this endpoint could send invalid param names in the request;
/hello?my_name=adam&a=1&b=2 and it will work normally.
Goal:
Be able to validate optional request parameters names, using a proper way to do it.
Actual solution:
I've implemented a solution (to me it does not feel like the right one), where I extended HandlerInterceptorAdapter and registered it in the WebMvcConfigurer:
/**
 * Interceptor for unknown request parameters
 */
public class MethodParamNamesHandler extends HandlerInterceptorAdapter {

    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {
        ArrayList<String> queryParams = Collections.list(request.getParameterNames());
        MethodParameter[] methodParameters = ((HandlerMethod) handler).getMethodParameters();
        Map<String, String> methodParametersMap = new HashMap<>();
        if (Objects.nonNull(methodParameters)) {
            methodParametersMap = Arrays.stream(methodParameters)
                    .map(m -> m.getParameter().getName())
                    .collect(Collectors.toMap(Function.identity(), Function.identity()));
        }
        Set<String> unknownParameters = collectUnknownParameters(methodParametersMap, queryParams);
        if (!CollectionUtils.isEmpty(unknownParameters)) {
            throw new InvalidParameterNameException("Invalid parameters names", unknownParameters);
        }
        return super.preHandle(request, response, handler);
    }

    /**
     * Extract unknown properties from a list of parameters
     * @param allParams the declared method parameter names
     * @param queryParam the query parameter names from the request
     * @return the request parameter names that match no method parameter
     */
    private Set<String> collectUnknownParameters(Map<String, String> allParams, List<String> queryParam) {
        return queryParam.stream()
                .filter(param -> !allParams.containsKey(param))
                .collect(Collectors.toSet());
    }
}
Drawbacks:
The drawback of this solution is that it is based on the Java method parameter names:
it will see the name myId instead of my_id. This could be worked around by converting camel case to snake case, but that is not a good solution either, because you can have something like this: sayHello(@RequestParam(name = "my_id", required = false) String anotherName).
Question:
Is there a proper way to achieve this ?
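Not an answer from the thread, but one way to remove that drawback is to read the names from the @RequestParam annotations themselves rather than from the Java parameter names. A sketch (class name made up; reuses the question's InvalidParameterNameException):

```java
public class AnnotationBasedParamNamesHandler extends HandlerInterceptorAdapter {

    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) {
        if (!(handler instanceof HandlerMethod)) {
            return true; // only inspect controller methods
        }
        Set<String> declaredNames = Arrays.stream(((HandlerMethod) handler).getMethodParameters())
                .map(p -> {
                    RequestParam ann = p.getParameterAnnotation(RequestParam.class);
                    if (ann != null) {
                        // name() and value() are aliases; on the raw annotation only one
                        // of them is populated, so check both before falling back.
                        String declared = !ann.name().isEmpty() ? ann.name() : ann.value();
                        if (!declared.isEmpty()) {
                            return declared; // e.g. "my_id", not "myId"
                        }
                    }
                    return p.getParameter().getName();
                })
                .collect(Collectors.toSet());

        Set<String> unknown = Collections.list(request.getParameterNames()).stream()
                .filter(name -> !declaredNames.contains(name))
                .collect(Collectors.toSet());
        if (!unknown.isEmpty()) {
            throw new InvalidParameterNameException("Invalid parameters names", unknown);
        }
        return true;
    }
}
```

Because this compares against the declared annotation names, renaming the Java parameter (the anotherName case above) no longer breaks the check.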
