Order in which filters are applied to inbound FTP adapters in Spring Integration

I am using a Spring Integration inbound channel adapter, as below:
inboundAdapter(csf).preserveTimestamp(true)//
        .remoteDirectory(feed.getRemoteDirectory())//
        .regexFilter(feed.getRegexFilter())// regex expression
        .filter(ftpRemoteFileFilter)// remote filter
        .deleteRemoteFiles(feed.getDeleteRemoteFiles())
So I am using a custom remote filter and the out-of-the-box regex filter. I wanted to know the order in which the regex filter and the remote filter are applied. From initial analysis it looks like the regex filter comes first; can someone tell me the class where this decision is made, so I can be sure?
If there is no way of knowing, the only other alternative will be to use a CompositeFileListFilter.

The code you are looking for is in the FtpInboundChannelAdapterSpec and looks like this:
@Override
public FtpInboundChannelAdapterSpec regexFilter(String regex) {
    return filter(composeFilters(new FtpRegexPatternFileListFilter(regex)));
}

@SuppressWarnings("unchecked")
private CompositeFileListFilter<FTPFile> composeFilters(FileListFilter<FTPFile> fileListFilter) {
    CompositeFileListFilter<FTPFile> compositeFileListFilter = new CompositeFileListFilter<>();
    compositeFileListFilter.addFilters(fileListFilter,
            new FtpPersistentAcceptOnceFileListFilter(new SimpleMetadataStore(), "ftpMessageSource"));
    return compositeFileListFilter;
}
So, as you can see, when you declare a regexFilter it is composed together with an FtpPersistentAcceptOnceFileListFilter into a CompositeFileListFilter, where the regex filter is definitely first. And it is first because the FtpPersistentAcceptOnceFileListFilter is persistent, and it wouldn't be good to store files which might not match the regex afterwards.
If you need some more complicated logic, you really should go the CompositeFileListFilter way and inject it via the filter() option only. I mean you have to combine your regex filter into the CompositeFileListFilter yourself, instead of using regexFilter().
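For example, here is a minimal sketch of that approach, reusing the csf, feed and ftpRemoteFileFilter names from the question (the Ftp factory class is assumed from the Spring Integration FTP Java DSL):
import org.apache.commons.net.ftp.FTPFile;
import org.springframework.integration.file.filters.CompositeFileListFilter;
import org.springframework.integration.ftp.dsl.Ftp;
import org.springframework.integration.ftp.filters.FtpRegexPatternFileListFilter;

// Build the composite explicitly, so the order is under your control:
// the regex filter is added first, the custom remote filter second.
CompositeFileListFilter<FTPFile> composite = new CompositeFileListFilter<>();
composite.addFilters(
        new FtpRegexPatternFileListFilter(feed.getRegexFilter()),
        ftpRemoteFileFilter);

Ftp.inboundAdapter(csf)
        .preserveTimestamp(true)
        .remoteDirectory(feed.getRemoteDirectory())
        .filter(composite) // the single filter() entry point; no regexFilter() call
        .deleteRemoteFiles(feed.getDeleteRemoteFiles());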
Note: after the Java DSL was moved into the core project in version 5.0, the .filter() option looks like this:
public S filter(FileListFilter<F> filter) {
    this.synchronizer.setFilter(filter);
    return _this();
}
It overrides any previously provided filters, including the regex one. That is done to avoid confusion with chains of .filter() calls, in favor of a CompositeFileListFilter or ChainFileListFilter configured externally.
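If you want short-circuit semantics rather than the intersection a CompositeFileListFilter performs, a ChainFileListFilter sketch under the same assumed names could look like this:
import org.apache.commons.net.ftp.FTPFile;
import org.springframework.integration.file.filters.ChainFileListFilter;
import org.springframework.integration.ftp.filters.FtpRegexPatternFileListFilter;

// ChainFileListFilter evaluates its filters strictly left to right and only
// passes a file to the next filter if the previous one accepted it.
ChainFileListFilter<FTPFile> chain = new ChainFileListFilter<>();
chain.addFilter(new FtpRegexPatternFileListFilter(feed.getRegexFilter()));
chain.addFilter(ftpRemoteFileFilter);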

Related

Using ReactiveSecurityContextHolder inside a Kotlin Flow

I'm working on a Spring Boot (2.2) project using Kotlin, with CouchDB as a (reactive) database, and in consequence async DAOs (either suspend functions, or functions returning a Flow). I'm trying to set up WebFlux in order to have async controllers too (again, I want to return Flows, not Flux). But I'm having trouble retrieving my security context from ReactiveSecurityContextHolder.
From what I've read, unlike SecurityContextHolder, which uses a ThreadLocal to store it, ReactiveSecurityContextHolder relies on the fact that Spring, while subscribing to my reactive chain, also stores that context inside the chain, thus allowing me to call ReactiveSecurityContextHolder.getContext() from within the chain.
The problem is that I have to transform my Mono<SecurityContext> into a Flow at some point, which makes me lose my SecurityContext. So my question is: is there a way to have a Spring Boot controller returning a Flow while retrieving the security context from ReactiveSecurityContextHolder inside my logic? Basically, after simplification, it should look like this:
@GetMapping
fun getArticles(): Flow<String> {
    return ReactiveSecurityContextHolder.getContext().flux().asFlow() // returns nothing
}
Note that if I return the Flux directly (skipping the .asFlow()), or add a .single() or .toList() at the end (hence using a suspend fun), then it works fine and my security context is returned, but again that's not what I want. I guess the solution would be to transfer the context from the Flux (the initial reactive chain from ReactiveSecurityContextHolder) to the Flow, but it doesn't seem to be done by default.
Edit: here is a sample project showcasing the problem: https://github.com/Simon3/webflux-kotlin-sample
What you are really trying to achieve is accessing your ReactorContext from inside a Flow.
One way to do this is to relax the need for returning a Flow and return a Flux instead. This allows you to recover the ReactorContext and pass it to the Flow you are going to use to generate your data.
@ExperimentalCoroutinesApi
@GetMapping("/flow")
fun flow(): Flux<Map<String, String>> = Mono.subscriberContext().flatMapMany { reactorCtx ->
    flow {
        val ctx = coroutineContext[ReactorContext.Key]?.context?.get<Mono<SecurityContext>>(SecurityContext::class.java)?.asFlow()?.single()
        emit(mapOf("user" to ((ctx?.authentication?.principal as? User)?.username ?: "<NONE>")))
    }.flowOn(reactorCtx.asCoroutineContext()).asFlux()
}
In the case where you need to access the ReactorContext from a suspend method, you can simply get it back from the coroutineContext with no further artifice:
@ExperimentalCoroutinesApi
@GetMapping("/suspend")
suspend fun suspend(): Map<String, String> {
    val ctx = coroutineContext[ReactorContext.Key]?.context?.get<Mono<SecurityContext>>(SecurityContext::class.java)?.asFlow()?.single()
    return mapOf("user" to ((ctx?.authentication?.principal as? User)?.username ?: "<NONE>"))
}

How to modify Prometheus-exposed metric names using Actuator in Spring Boot 2

I am using Actuator in Spring Boot 2 to expose the /actuator/prometheus endpoint, from which a Prometheus instance will pull metrics.
Everything works perfectly, except that I need to tweak the metric names. I don't mean the suffixes (_count, _total, _bucket, ...), which are meaningful for Prometheus, but something like:
http_server_requests_seconds_count -> http_server_requests_count
http_server_requests_seconds_max -> latency_seconds_max
http_server_requests_seconds_sum -> latency_seconds_sum
http_server_requests_seconds_bucket -> latency_seconds_bucket
Is there any better approach to this?
P.S.
I know I can use
management.metrics.web.server.requests-metric-name=different
to get
different_seconds_count
different_seconds_max
different_seconds_sum
different_seconds_bucket
but it would be difficult to:
1. remove the _seconds suffix
2. use a different base name for only one of them
I am guessing I could write an alternative PrometheusRenameFilter, but I am not sure how to register it with the default registry.
You can override the name method of NamingConvention and update the naming convention:
import java.util.Arrays;
import java.util.Objects;
import java.util.stream.Collectors;
import io.micrometer.core.instrument.Meter;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.config.NamingConvention;
import org.springframework.boot.actuate.autoconfigure.metrics.MeterRegistryCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MetricsConfig {

    @Bean
    MeterRegistryCustomizer<MeterRegistry> configurer() {
        return (registry) -> registry.config().namingConvention(new NamingConvention() {
            @Override
            public String name(String name, Meter.Type type, String baseUnit) {
                // join the dot-separated meter name with underscores and add a prefix
                return "PREFIX" + Arrays.stream(name.split("\\."))
                        .filter(Objects::nonNull)
                        .collect(Collectors.joining("_"));
            }
        });
    }
}
Now I know how I can customize the global registry, e.g. to set a custom meter filter:
@Configuration
public class MetricsConfig {

    @Bean
    MeterRegistryCustomizer<MeterRegistry> metricsConfig() {
        return registry -> registry.config().meterFilter(new CustomRenameFilter());
    }
}
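The CustomRenameFilter itself is not shown here; a minimal sketch of what such a MeterFilter could look like follows, where the http.server.requests and latency names are assumptions taken from the earlier examples:
import io.micrometer.core.instrument.Meter;
import io.micrometer.core.instrument.config.MeterFilter;

// Hypothetical rename filter: it maps the base meter name before the
// registry's naming convention and the Prometheus suffixes are applied.
public class CustomRenameFilter implements MeterFilter {

    @Override
    public Meter.Id map(Meter.Id id) {
        if ("http.server.requests".equals(id.getName())) {
            return id.withName("latency");
        }
        return id;
    }
}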
However, setting a custom rename filter in the registry only allows renaming the base metric name.
It does not act on the suffixes, nor does it allow acting on a specific metric belonging to a set, e.g. those generated by the summary.
With a custom NamingConvention I can add suffixes to the convention base name... I could even alter existing suffixes or replace the convention base name.
Finally, please note that the Prometheus histogram metric type expects the creation of
<basename>_bucket
<basename>_sum
<basename>_count
with those specific names, so it might be incorrect to tweak the component in the way I want, because that would be a different component.

Is paging broken with Spring Data Solr when using group fields?

I currently use the Spring Data Solr library and implement its repository interfaces. I'm trying to add functionality to one of my custom queries that uses a SolrTemplate with a SimpleQuery. It currently uses paging, which appears to be working well. However, I want to use a group field so sibling products are only counted once, at their first occurrence. I have set the group field on the query and it works well; however, it still seems to use the ungrouped number of documents when constructing the page attributes.
Is there a known workaround for this?
The query syntax provides the following parameter for this purpose, but it would seem that Spring Data Solr isn't taking advantage of it: &group.ngroups=true should return the number of groups in the result and thus give correct page numbering.
Any other info would be appreciated.
There are actually two ways to add this parameter.
Queries are converted to the Solr format using QueryParsers, so it would be possible to register a modified one.
QueryParser modifiedParser = new DefaultQueryParser() {

    @Override
    protected void appendGroupByFields(SolrQuery solrQuery, List<Field> fields) {
        super.appendGroupByFields(solrQuery, fields);
        solrQuery.set(GroupParams.GROUP_TOTAL_COUNT, true);
    }
};
solrTemplate.registerQueryParser(Query.class, modifiedParser);
Using a SolrCallback would be a less intrusive option:
final Query query = //...whatever query you have.
List<DomainType> result = solrTemplate.execute(new SolrCallback<List<DomainType>>() {

    @Override
    public List<DomainType> doInSolr(SolrServer solrServer) throws SolrServerException, IOException {
        SolrQuery solrQuery = new QueryParsers().getForClass(query.getClass()).constructSolrQuery(query);
        // add missing params
        solrQuery.set(GroupParams.GROUP_TOTAL_COUNT, true);
        return solrTemplate.convertQueryResponseToBeans(solrServer.query(solrQuery), DomainType.class);
    }
});
Please feel free to open an issue.

Use camel case serialization only for specific actions

I've used Web API for a while, and generally set it to use camel-case JSON serialization, which is now rather common and well documented everywhere.
Recently, however, working on a much larger project, I came across a more specific requirement: we need to use camel-case JSON serialization, but because of backward-compatibility issues with our client scripts, I only want it to happen for specific actions, to avoid breaking other parts of the (extremely large) website.
I figure one option is to have a custom content type, but that then requires the client code to specify it.
Is there any other option?
Thanks!
Try this:
using System.Net.Http;
using System.Net.Http.Formatting;
using System.Web.Http.Filters;
using Newtonsoft.Json.Serialization;

public class CamelCasingFilterAttribute : ActionFilterAttribute
{
    private JsonMediaTypeFormatter _camelCasingFormatter = new JsonMediaTypeFormatter();

    public CamelCasingFilterAttribute()
    {
        _camelCasingFormatter.SerializerSettings.ContractResolver = new CamelCasePropertyNamesContractResolver();
    }

    public override void OnActionExecuted(HttpActionExecutedContext actionExecutedContext)
    {
        ObjectContent content = actionExecutedContext.Response.Content as ObjectContent;
        if (content != null)
        {
            // Only rewrite responses that were going to be serialized as JSON
            if (content.Formatter is JsonMediaTypeFormatter)
            {
                actionExecutedContext.Response.Content = new ObjectContent(content.ObjectType, content.Value, _camelCasingFormatter);
            }
        }
    }
}
Apply this [CamelCasingFilter] attribute to any action you want to camel-case. It will take any JSON response you were about to send back and convert it to use camel casing for the property names instead.

Jersey and OData key path param format

I have a RESTful API using Jersey right now, and am converting it to be OData standard compliant. There are a few things I have not converted yet, but I will get there; that is not important at this moment. One of the things I do need to convert that is important is the key path params. OData has a standard of wrapping the key in parentheses. So in this example, myapi.com/product(1) is the OData call to get a product whose id is 1. Currently that is possible in my system with myapi.com/product/1.
When I add the parentheses to the path parameter I get a 404 error. My class-level path is @Path("/product") and my method-level path is @Path("({id})"), which used to be @Path("/{id}"). I've tried adding the parentheses as part of the variable, planning to strip them off in the method, and I've tried formatting the id with some regex, @Path("{id : regex stuff}"), and neither works.
If I make my method path parameter like this, @Path("/({id})"), so the call is myapi.com/product/(1), it works fine. The parentheses are obviously not the issue. It seems that Jersey splits the URI into chunks using the forward slashes for routing, and since there is no forward slash between the id and the root resource name, nothing is found. It makes sense.
Is there a way to change Jersey's method of matching URI strings with some regex or something? Has anyone used Jersey with OData? I would rather not use odata4j just for the resolution of this issue; it seems like there should be a way to get this to work.
What I did:
Based on Pavel Bucek's answer, I implemented a ContainerRequestFilter independently of the filter I use for security. In my case I didn't check whether the parentheses existed; I just tried to do the replace.
try
{
    String uriString = request.getRequestUri().toString();
    // replace "(" and ")" (plus an optional trailing "/") with "/"
    uriString = uriString.replaceAll("(\\(|\\)/?)", "/");
    request.setUris(request.getBaseUri(), new URI(uriString));
} catch (final Exception e)
{
}
return request;
I think that the easiest way to handle this "protocol" would be introducing a ContainerRequestFilter which rewrites a trailing "(…)" into "/…" in the incoming URI. So you will be able to serve OData and standard REST requests in one app.
See http://jersey.java.net/nonav/apidocs/1.11/jersey/com/sun/jersey/spi/container/ContainerRequestFilter.html
Simple filter I used to test this case:
rc.getProperties().put(ResourceConfig.PROPERTY_CONTAINER_REQUEST_FILTERS, new ContainerRequestFilter() {

    @Override
    public ContainerRequest filter(ContainerRequest request) {
        try {
            if (request.getRequestUri().toString().endsWith("(1)")) {
                request.setUris(
                        request.getBaseUri(),
                        new URI(request.getRequestUri().toString().replace("(1)", "/1")));
            }
        } catch (Exception e) {
        }
        return request;
    }
});
Both
curl "http://localhost:9998/helloworld(1)"
curl "http://localhost:9998/helloworld/1"
now hit the same resource method. (Obviously you'll need to improve the current filter to be able to handle various values, but it should work for you; see the sketch below.)
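As a sketch of that improvement, the hard-coded "(1)" can be generalized to rewrite any trailing "(value)" key (the regex and variable names here are illustrative assumptions, not part of the original answer):
rc.getProperties().put(ResourceConfig.PROPERTY_CONTAINER_REQUEST_FILTERS, new ContainerRequestFilter() {

    @Override
    public ContainerRequest filter(ContainerRequest request) {
        try {
            String uri = request.getRequestUri().toString();
            // rewrite a trailing "(value)" into "/value", e.g. product(42) -> product/42
            String rewritten = uri.replaceAll("\\(([^)/]+)\\)$", "/$1");
            if (!rewritten.equals(uri)) {
                request.setUris(request.getBaseUri(), new URI(rewritten));
            }
        } catch (Exception e) {
            // on any failure, fall through and keep the original URI
        }
        return request;
    }
});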
