Apache Camel ftp component sortBy file:modified - sorting

I'm using the Apache Camel plugin for Grails, consuming an FTP endpoint, and I want to process files by modified date. This is not working as expected with the "...&sortBy=file:modified" URL parameter: it ignores the date and sorts by filename. I've tried several variants such as "reverse:file:modified" and "date:file:yyyyMMddmmssSSS". The platform is Grails 2.3.5 running on Linux.
TIA,
Eric

"sortBy=file:modified;file:name" works fine if you do not use "maxMessagesPerPoll=1". ;)
Thanks.

If you want the oldest modified file first, use sortBy=file:modified
If you want the most recently modified file first, use sortBy=reverse:file:modified
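For illustration, a minimal Java DSL route showing the sorted FTP consumer (host, credentials, and the target endpoint are placeholders; per the comment above, avoid combining this with maxMessagesPerPoll=1):

import org.apache.camel.builder.RouteBuilder;

public class FtpSortedRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Consume files ordered by last-modified time (newest first);
        // drop the "reverse:" prefix to process the oldest file first.
        from("ftp://user@ftp.example.com/inbox?password=secret"
                + "&sortBy=reverse:file:modified")
            .to("direct:processFile");
    }
}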

Related

Mule ESB: How to get the number of files in a FTP directory using MEL?

I am looking to get the number of files in an FTP directory, which I am able to do using Java in a Groovy script. I tried message.inboundAttachments.size(), but that did not help.
Is there a way to get that using MEL (Mule Expression Language)?
I am not sure about MEL, but you can write a Java class to do this.
For help, refer to the link below; a rough sketch of the same idea follows it.
http://www.codejava.net/java-se/networking/ftp/java-ftp-example-calculate-total-sub-directories-files-and-size-of-a-directory
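If it helps, a rough Java sketch along the lines of the linked example, using Apache Commons Net (host, credentials, and directory are placeholders):

import org.apache.commons.net.ftp.FTPClient;
import org.apache.commons.net.ftp.FTPFile;

public class FtpFileCounter {
    public static int countFiles(String host, String user, String password, String dir) throws Exception {
        FTPClient ftp = new FTPClient();
        ftp.connect(host);
        ftp.login(user, password);
        try {
            int count = 0;
            // listFiles returns both files and subdirectories, so filter on isFile()
            for (FTPFile f : ftp.listFiles(dir)) {
                if (f.isFile()) {
                    count++;
                }
            }
            return count;
        } finally {
            ftp.logout();
            ftp.disconnect();
        }
    }
}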
With an open-source, non-Mule FTP connector you can list directories. The result is a collection, so you can use #[payload.size()].
See https://github.com/rbutenuth/ftp-client-connector

How can I batch Kafka reads to Elasticsearch

I'm not too familiar with Kafka, but I would like to know the best way to read data from Kafka in batches so I can use the Elasticsearch Bulk API to load the data faster and more reliably.
By the way, I am using Vert.x for my Kafka consumer.
Thank you,
I cannot tell if this is the best approach or not, but when I started looking for similar functionality I could not find any readily available frameworks. I found this project:
https://github.com/reachkrishnaraj/kafka-elasticsearch-standalone-consumer/tree/branch2.0
and started contributing to it, as it was not doing everything I wanted and was not easily scalable. The 2.0 version is now quite reliable, and we use it in production at our company, processing/indexing 300M+ events per day.
This is not self-promotion :) - just sharing how we do the same type of work. There may be other options by now as well, of course.
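Independent of that project, the core pattern is simply: poll a batch of records from Kafka, then submit them in one Elasticsearch bulk request. A rough Java sketch, assuming a plain Kafka consumer and the Elasticsearch high-level REST client (7.x-style API); topic and index names are placeholders, and the Vert.x Kafka client exposes the same idea through handlers:

import java.time.Duration;
import java.util.Collections;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.common.xcontent.XContentType;

public class KafkaToEsBulk {
    public static void run(KafkaConsumer<String, String> consumer, RestHighLevelClient es) throws Exception {
        consumer.subscribe(Collections.singletonList("events"));   // placeholder topic
        while (true) {
            // One poll returns up to max.poll.records messages - that is the "batch"
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            if (records.isEmpty()) {
                continue;
            }
            BulkRequest bulk = new BulkRequest();
            for (ConsumerRecord<String, String> record : records) {
                // Assumes the Kafka message value is already a JSON document
                bulk.add(new IndexRequest("events-index").source(record.value(), XContentType.JSON));
            }
            es.bulk(bulk, RequestOptions.DEFAULT);
            // Commit offsets only after the bulk request succeeded, for at-least-once delivery
            consumer.commitSync();
        }
    }
}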
https://github.com/confluentinc/kafka-connect-elasticsearch
Or you can try this project:
https://github.com/reachkrishnaraj/kafka-elasticsearch-standalone-consumer
Running as a standard JAR:
1. Download the code into a $INDEXER_HOME directory.
2. cp $INDEXER_HOME/src/main/resources/kafka-es-indexer.properties.template /your/absolute/path/kafka-es-indexer.properties and update all relevant properties as explained in the comments.
3. cp $INDEXER_HOME/src/main/resources/logback.xml.template /your/absolute/path/logback.xml, specify the directory you want to store logs in, and adjust the max sizes and number of log files as needed.
4. Build the app JAR (make sure you have Maven installed):
cd $INDEXER_HOME
mvn clean package
The kafka-es-indexer-2.0.jar will be created in $INDEXER_HOME/bin. All dependencies are placed in $INDEXER_HOME/bin/lib and are linked via the kafka-es-indexer-2.0.jar manifest.
5. Edit your $INDEXER_HOME/run_indexer.sh script: make it executable if needed (chmod a+x $INDEXER_HOME/run_indexer.sh) and update the properties marked with "CHANGE FOR YOUR ENV" comments according to your environment.
6. Run the app (use JDK 1.8):
./run_indexer.sh
I used Spark Streaming, and it was quite a simple implementation using Scala.

hbase custom filter not working

I'm trying to create a custom filter on HBase 0.98.1 in standalone mode on Ubuntu 14.04.
I created a class extending FilterBase and put the jar in HBASE_HOME/lib. Looking at the logs, I can see that my jar is on the path.
Then I have a Java client that first does a get with a ColumnPrefixFilter, then a get with my custom filter. The ColumnPrefixFilter works perfectly fine. With my filter, nothing happens: the client freezes for 10 minutes and then closes the connection.
I don't see anything in the logs.
Could you please give me some hints on what and where to check?
regards,
EDIT:
It turned out to be a protoc version conflict. I had generated the Java classes from the .proto file with protoc 2.4.0, while my filter was using protobuf-java 2.5.0.
I aligned both to 2.5.0 and it now works fine.
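For anyone hitting the same symptom: in HBase 0.98 a custom filter must ship its own protobuf-based serialization, which is exactly where a protoc/protobuf-java mismatch bites. A rough sketch of the relevant parts (MyFilterProtos is a hypothetical class generated from your .proto file with the same protobuf version as the protobuf-java jar on the classpath; the filtering logic itself is omitted):

import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.exceptions.DeserializationException;
import org.apache.hadoop.hbase.filter.FilterBase;

import com.google.protobuf.ByteString;
import com.google.protobuf.InvalidProtocolBufferException;

public class MyCustomFilter extends FilterBase {
    private final byte[] prefix;

    public MyCustomFilter(byte[] prefix) {
        this.prefix = prefix;
    }

    @Override
    public ReturnCode filterKeyValue(Cell cell) {
        // Filtering logic goes here; INCLUDE keeps the cell, SKIP drops it.
        return ReturnCode.INCLUDE;
    }

    // Called on the client side when the filter is serialized and shipped to the region server.
    @Override
    public byte[] toByteArray() {
        return MyFilterProtos.MyCustomFilter.newBuilder()
                .setPrefix(ByteString.copyFrom(prefix))
                .build()
                .toByteArray();
    }

    // Called on the region server side; a version mismatch between the generated
    // MyFilterProtos classes and protobuf-java tends to show up here as a silent failure.
    public static MyCustomFilter parseFrom(byte[] bytes) throws DeserializationException {
        try {
            MyFilterProtos.MyCustomFilter proto = MyFilterProtos.MyCustomFilter.parseFrom(bytes);
            return new MyCustomFilter(proto.getPrefix().toByteArray());
        } catch (InvalidProtocolBufferException e) {
            throw new DeserializationException(e);
        }
    }
}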

Embedded MongoDB instance?

I've been using MongoDB for a little tool that I'm building, but I have two problems that I don't know if I can "solve". Those problems are mainly related to having to start a MongoDB server (mongod).
The first is that I have to run two commands every time I want to use it (mongod and my app's command), and the other is testing. For now, I'm using different collections for "production" and "test", but it would be better to have an embedded / self-contained instance that I can start and drop whenever I want.
Is that possible? Or should I just use something else, like SQLite?
Thanks!
Another similar project is https://github.com/Softmotions/ejdb.
Its query syntax is similar to MongoDB's.
We use this at work - https://github.com/flapdoodle-oss/embedmongo.flapdoodle.de - to fire up an embedded Mongo for integration tests. It has worked really well.
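Roughly, a test can spin up and tear down an instance like this (a minimal sketch following the older flapdoodle 1.x/2.x-style API, which differs in newer versions; the port is a placeholder):

import de.flapdoodle.embed.mongo.MongodExecutable;
import de.flapdoodle.embed.mongo.MongodProcess;
import de.flapdoodle.embed.mongo.MongodStarter;
import de.flapdoodle.embed.mongo.config.MongodConfigBuilder;
import de.flapdoodle.embed.mongo.config.Net;
import de.flapdoodle.embed.mongo.distribution.Version;
import de.flapdoodle.embed.process.runtime.Network;

public class EmbeddedMongoExample {
    public static void main(String[] args) throws Exception {
        MongodStarter starter = MongodStarter.getDefaultInstance();
        // Downloads (if needed) and prepares a mongod binary bound to a local port
        MongodExecutable executable = starter.prepare(new MongodConfigBuilder()
                .version(Version.Main.PRODUCTION)
                .net(new Net(27018, Network.localhostIsIPv6()))   // placeholder port
                .build());
        MongodProcess mongod = executable.start();
        try {
            // ... run tests against mongodb://localhost:27018 ...
        } finally {
            mongod.stop();       // drop the instance when done
            executable.stop();
        }
    }
}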
I haven't tried it, but I just found this Ruby implementation of an embedded MongoDB: https://github.com/gdb/embedded-mongo

Change trace file in H2

My application uses H2 and already has a log file (e.g. abc.log).
Now I'm trying to make H2 also write its logs/errors to that file (abc.log), so that if something goes wrong a user only has one file to send me (not abc.log AND the abc.db.trace file).
Is there a way to achieve that?
You can configure H2 to use SLF4J as follows:
jdbc:h2:~/test;TRACE_LEVEL_FILE=4
The logger name is h2database.
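In other words, the application only needs to open its connections with that URL and have an SLF4J binding on the classpath. A minimal sketch (database path and credentials are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;

public class H2TraceToSlf4j {
    public static void main(String[] args) throws Exception {
        // TRACE_LEVEL_FILE=4 routes H2's trace output to SLF4J (logger name "h2database")
        // instead of a separate .trace.db file.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:h2:~/test;TRACE_LEVEL_FILE=4", "sa", "")) {
            // ... use the connection as usual ...
        }
    }
}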
OK, the solution was too simple for me to believe, but the only thing I had to do was add
slf4j-api-1.7.2.jar
and
slf4j-jdk14-1.7.2.jar
to my app's classpath.
Since SLF4J will (first search for and then) discover by itself which underlying logging framework to use, it is simply a matter of putting the right implementation on the classpath.
One warning: it seems that SLF4J cannot use more than one framework at a time, so this solution works ONLY if you have a single existing logging framework.
