Is there a way to process files by detecting ACK files - spring

I have an application, and periodically files arrive in one folder.
There are always two files: one named ACK + the name of the file, which is empty, and one with just the file name (this one is the data file).
I heard from some people that there is a way in Camel to process my file by detecting the ACK.
What I'm currently doing is detecting the ACK file and then triggering a process that fetches the data file and processes it. But with this approach I can't write working unit tests for my code.
If possible, I'd prefer to have a route that detects the ACK but triggers the process with the data file.
Is this possible?
Here is my actual route:
@Component
public class MyRoute extends RouteBuilder {

    public static final String ROUTE_NAME = "myRoute";

    private final Processor myProcessor;

    @Autowired
    public MyRoute(@Qualifier("my.processor") Processor myProcessor) {
        this.myProcessor = myProcessor;
    }

    @Override
    public void configure() throws Exception {
        from("file://{{data.input.dir}}?moveFailed=errors&delete=true&include=ACK.*").routeId(ROUTE_NAME)
            .choice()
                .when(header("CamelFileName").startsWith("ACK"))
                    .process(myProcessor)
            .end();
    }
}
EDIT:
Found the solution using the doneFileName option

As you found out by yourself, Camel can handle this automatically with the doneFileName option.
You don't have to process the ACK file at all.
But as a consequence: if an ACK file is missing, the data file is not processed, since Camel treats a data file without a done file as still being transferred/written.
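A sketch of what such a route could look like, assuming the ACK file name is the data file name prefixed with ACK (the {{data.input.dir}} placeholder, error options, and processor wiring are taken from the original route):

```java
@Component
public class MyRoute extends RouteBuilder {

    public static final String ROUTE_NAME = "myRoute";

    private final Processor myProcessor;

    @Autowired
    public MyRoute(@Qualifier("my.processor") Processor myProcessor) {
        this.myProcessor = myProcessor;
    }

    @Override
    public void configure() throws Exception {
        // doneFileName=ACK${file:name} tells Camel to pick up a data file only
        // once the matching ACK<file name> file exists; the ACK file itself is
        // removed automatically and never enters the route.
        from("file://{{data.input.dir}}?moveFailed=errors&delete=true&doneFileName=ACK${file:name}")
            .routeId(ROUTE_NAME)
            .process(myProcessor);
    }
}
```

With this, the exchange body is always the data file, so the processor can be unit tested without any ACK-detection logic.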


Is there a way to create an annotation that extends @ConditionalOn, or is interpreted the same way in Spring, like the @Component annotation?

We have several implementations that use the same code base but that we want to do different things: i.e. REST access, administration via REST, indexing, archiving, and queue-based processing.
In our infrastructure build-out we want certain things to be accessible and others not. For example, in our administration/REST/indexing and archiving build-outs we don't want to spin up threads to monitor and handle queue requests, and in our indexing and archiving build-outs we want those processes but not the REST or queue build-out.
So I was wondering if there is a way to "extend" @ConditionalOnExpression with something like an @ConditionalOnRest, so we don't have to repeat the expression on each Component/RestController. Repeating it would mean changing it in a bunch of places and could easily go wrong; a dedicated annotation would be effectively compile-time checked and DRY.
You can achieve this in two ways.
Custom Annotation
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@ConditionalOnExpression() // add your expression
public @interface ConditionalOnRest {
}
Custom Condition
class RestCondition implements Condition {

    @Override
    public boolean matches(ConditionContext context, AnnotatedTypeMetadata metadata) {
        // your condition logic here
        return true;
    }
}
Then you use it as follows:
@Conditional(RestCondition.class)
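For illustration, both variants applied to hypothetical controllers (the controller class names are made up):

```java
// Variant 1: the custom meta-annotation
@ConditionalOnRest
@RestController
public class RestAccessController {
}

// Variant 2: the custom Condition
@Conditional(RestCondition.class)
@RestController
public class AdminRestController {
}
```

Either way, the bean is only registered when the condition evaluates to true, and the condition itself lives in exactly one place.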

Quarkus Logging transaction id

My application has several JAX-RS APIs, all of which receive a transaction id as a header. Is there a way to make the transaction id available to the JBoss Logger? We tried MDC, but that did not help. Basically I am looking for an efficient way to add the transaction id to each log statement.
You did not mention how you actually do the logging: explicit log statements in the code, some CDI/JAX-RS interceptors...
A common way to achieve the desired functionality is to define a filter/interceptor on the boundary layer (JAX-RS in your case) that extracts the relevant request data and stores it in a context that is available to the logger during execution of that request. Which is exactly what JAX-RS filters and MDC are for.
A simple example:
@Provider
public class TransactionLoggingFilter implements ContainerRequestFilter, ContainerResponseFilter {

    @Context
    HttpServerRequest request;

    @Override
    public void filter(ContainerRequestContext context) {
        MDC.put("transactionId", request.getHeader("transactionId"));
    }

    @Override
    public void filter(ContainerRequestContext requestContext, ContainerResponseContext responseContext) throws IOException {
        MDC.remove("transactionId");
    }
}
With that, you will store the value of your transactionId header in MDC scope before each HTTP request is processed and remove it after processing is complete.
Note: if you have other JAX-RS filters, you might need to configure priorities correctly (for example, so that your logging extraction filter runs before the others); see the documentation.
MDC scope is bound to the thread that executes the request (be careful if you use Quarkus reactive; make sure it is propagated correctly) and is passed to the logger implementation with every log invocation.
To actually print out the value from MDC in your logs, you need to modify the Quarkus log format via:
quarkus.log.console.format=%d{HH:mm:ss} %-5p %X{transactionId} [%c{2.}] (%t) %s%e%n
You can access any MDC variable with the expression %X{var_name}.
See Quarkus documentation on logging for more info.

guava eventbus post after transaction/commit

I am currently playing around with Guava's EventBus in Spring, and while the general functionality is working fine so far, I came across the following problem:
When a user wants to change data on a "Line" entity, this is handled as usual in a backend service. In this service the data is persisted via JPA first, and after that I create a "NotificationEvent" with a reference to the changed entity. Via the EventBus I send the reference of the line to all subscribers.
public void notifyUI(String lineId) {
EventBus eventBus = getClientEventBus();
eventBus.post(new LineNotificationEvent(lineId));
}
The EventBus itself is simply created with new EventBus() in the background.
Now, in this case my subscribers are on the frontend side, outside of the @Transactional realm. So when I change my data, post the event, and let the subscribers fetch all necessary updates from the database, the actual transaction is not committed yet, which makes the subscribers fetch the old data.
The only quick fix I can think of is handling it asynchronously and waiting a second or two. But is there another way to post the events using Guava AFTER the transaction has been committed?
I don't think Guava is "aware" of Spring at all, and in particular not of its @Transactional stuff.
So you need a creative solution here. One solution I can think of is to move this code to a place where you're sure that the transaction has finished.
One way to achieve that is using TransactionSynchronizationManager:
TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronization() {
    @Override
    public void afterCommit() {
        // do what you want to do after commit
        // in this case call the notifyUI method
    }
});
Note that if the transaction fails (rolls back), afterCommit won't be called; in that case you'll probably need the afterCompletion method. See the documentation.
Another possible approach is refactoring your application to something like this:
@Service
public class NonTransactionalService {

    @Autowired
    private ExistingService existing;

    public void entryPoint() {
        String lineId = existing.invokeInTransaction(...);
        // now you know for sure that the transaction has been committed
        notifyUI(lineId);
    }
}

@Service
public class ExistingService {

    @Transactional
    public String invokeInTransaction(...) {
        // do your stuff that you've done before
    }
}
One last thing I would like to mention here is that Spring itself provides an events mechanism that you might use instead of Guava's.
See this tutorial for example.
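Spring's mechanism even has built-in support for the after-commit case via @TransactionalEventListener. A sketch reusing the LineNotificationEvent from the question (the service and listener names are made up):

```java
@Service
public class LineService {

    private final ApplicationEventPublisher publisher;

    public LineService(ApplicationEventPublisher publisher) {
        this.publisher = publisher;
    }

    @Transactional
    public void changeLine(String lineId) {
        // ... persist the Line changes via JPA ...
        // The event is only dispatched to AFTER_COMMIT listeners once this
        // transaction commits successfully.
        publisher.publishEvent(new LineNotificationEvent(lineId));
    }
}

@Component
public class LineNotificationListener {

    @TransactionalEventListener(phase = TransactionPhase.AFTER_COMMIT)
    public void onLineChanged(LineNotificationEvent event) {
        // notify the UI; subscribers now see the committed data
    }
}
```

With this, the publishing code stays inside the transactional service, and Spring takes care of deferring delivery until after the commit.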

Spring Boot - Camel - Tracking an exchange all the way through

We are trying to set up a very simple auditing database table for a very complex Spring Boot Camel application with many routes (mostly internal routes using seda://). The idea is that we record each route's processing outcome in the database table. Then, when issues arise, we can log in to the database, query the table, and pinpoint exactly where the issue happened.
I thought I could just use the exchange id as the unique tracking identifier, but quickly learned that all the seda:// routes create new exchanges, or at least that's what I'm seeing (Camel version 2.24.3). Frankly, I don't care what we use for the unique identifier... I can generate a UUID easily enough and then use exchange.setProperty("id-unique", UUID).
I did manage to get something working using exchange.setProperty("id-exchange", exchange.getExchangeId()) and have it persist the unique identifier through the routes (I did read that certain route prefixes such as jms:// will not persist exchange properties, though). The idea is that the very first Processor places the exchange id (unique id) on the exchange properties, and my tracking logic lives in a processor that I can include as part of the route's definition:
@Override
public void configure() throws Exception {
    // EVENTS : Collect statistics from Camel events
    this.getContext().getManagementStrategy().addEventNotifier(this.camelEventNotifier);

    // INITIAL : ${body} exchange coming from a simple URL endpoint
    //           POST request with an XML Message...simulates an MQ
    //           message from Central MQ. The Web/UI service places the
    //           message onto the camel route using producerTemplate.
    from("direct:" + Globals.ROUTEID_LBR_INTAKE_MQ)
        .routeId(Globals.ROUTEID_LBR_INTAKE_MQ)
        .description("Loss Backup Reports MQ XML inbound messages")
        .autoStartup(false)
        .process(processor)
        .process(getTrackingProcessor())
        .to("seda:" + Globals.ROUTEID_LBR_VALIDATION)
        .end();
}
This proof-of-concept (POC) allowed me to at least get things tracking like we want... note the multiple rows with the same unique identifier:
ID_ROW  ID_EXCHANGE                           PROCESS_GROUP        PROCESS_STEP    RESULTS_STEP  RESULTS_MESSAGE
1       ID-LIBP45P-322256M-1603188596161-4-6  Loss Backup Reports  lbr-intake-mq   add           lbr-intake-mq
2       ID-LIBP45P-322256M-1603188596161-4-6  Loss Backup Reports  lbr-validation  add           lbr-intake-mq
The thing is, this POC is proving to be rigid, and it is difficult to record outcomes such as SUCCESS versus EXCEPTION.
My question is, has anyone done anything like this? And if so, how was it implemented? Or is there a fancy way in Camel to handle this that I just couldn't find on the web?
My other ideas were :
Create an old-fashioned abstract TrackerProcessor class that all my tracked processors extend, with a handful of methods in there to create, update, etc. Each processor then just calls the inherited methods to create and manage the audit entries. The advantage here is that the exchange, with all the data to store in the database table, is readily available.
@Component
public abstract class ProcessorAbstractTracker implements Processor {

    @Override
    abstract public void process(Exchange exchange) throws Exception;

    public void createTracker(Exchange exchange) {
    }

    public void updateTracker(Exchange exchange, String theResultsMessage, String theResultsStep) {
    }
}
Define an @Autowired bean that every tracked Camel processor wires in, and put the tracking logic in the bean. This seems simple and clean. My only concern/question here is how to scope the bean (maybe prototype): since many routes would be using the bean concurrently, is there any chance we get mixed processing values?
@Autowired
ProcessorTracker tracker;
Other ideas?
tia, adym

Possible bug in ResourceHttpMessageConverter

I've been experiencing a strange problem using the ResourceHttpMessageConverter in the latest Spring version, 3.2.4. I have an annotated controller that returns a Resource, specifically a UrlResource. This UrlResource is nothing more than a request to another remote server that serves a PDF file. Usually the PDF is a small file (less than 1 MB), but under some circumstances it is larger. When the file is large, the client that connects to my controller can't download the file, resulting in a connection closed error. The code I am using is the following:
@Controller
@PreAuthorize(value = "isAuthenticated()")
public class TestController {

    @ResponseBody
    @RequestMapping(value = "/report/", method = RequestMethod.GET,
            produces = "application/pdf")
    public Resource getReport() {
        // Ignore the getResource method, it is not the problem;
        // this method returns an object of type UrlResource
        return this.getResource();
    }
}
Although there is a workaround using the StreamUtils class to copy the InputStream from the UrlResource to the OutputStream of the HttpServletResponse, I wanted to know whether there is anything else I could do to avoid that and rely on the Spring MessageConverter infrastructure rather than reimplementing the same logic in my controller. Is there a Spring developer around who can point me in the right direction if this is possible? Or, if this is a bug, let me know so I can report it. Thanks!
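For reference, the StreamUtils workaround mentioned above might look roughly like this (a sketch only; getReport and getResource are the methods from the original controller, and this does not address the underlying converter behavior):

```java
@Controller
@PreAuthorize(value = "isAuthenticated()")
public class TestController {

    @RequestMapping(value = "/report/", method = RequestMethod.GET,
            produces = "application/pdf")
    public void getReport(HttpServletResponse response) throws IOException {
        Resource resource = this.getResource();
        response.setContentType("application/pdf");
        try (InputStream in = resource.getInputStream()) {
            // Streams the PDF in chunks instead of relying on the
            // ResourceHttpMessageConverter to write the response body
            StreamUtils.copy(in, response.getOutputStream());
        }
    }
}
```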
