How to route based on content with high performance? - apache-nifi

In NiFi, I am listening to Kafka on a single topic and, based on routing logic, calling the respective process group.
However, if I give the RouteOnContent processor a regular expression to check for the occurrence of a string, will that affect performance? How can I achieve good performance while routing based on a condition?

Would it be more efficient to do the split at the KSQL / stream-processing level into different topics, and have NiFi read from the different topics?

Running a regex on the content of each message is an inefficient approach; consider whether you can change to one of the following:
Have your producers write the necessary metadata into a Kafka header, which the much more efficient RouteOnAttribute processor in NiFi can then use. This is still message-at-a-time, which has throughput limitations.
If your messages conform to a schema, use the more efficient record-based Kafka processors in NiFi (e.g. ConsumeKafkaRecord) with a QueryRecord approach, which will significantly boost throughput.
If you cannot modify the source data and the regex logic is involved, it may be more efficient to use a small Kafka Streams app to split the topic before processing the data further downstream (see the sketch below).
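A minimal sketch of that last option, using the Kafka Streams branch API, could look like the following; the topic names, the matching condition, and the broker address are assumptions for illustration only:

import java.util.Properties;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class TopicSplitter {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "content-splitter");   // hypothetical app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // assumed broker address

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> source = builder.stream("incoming-topic");    // hypothetical source topic

        // Apply the content check once, here, instead of in RouteOnContent.
        KStream<String, String>[] branches = source.branch(
                (key, value) -> value.contains("TYPE_A"),   // assumed routing condition
                (key, value) -> true);                      // everything else

        branches[0].to("topic-type-a");   // hypothetical output topics that NiFi
        branches[1].to("topic-other");    // would then consume separately

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
    }
}

Each NiFi flow would then consume from its own topic and skip content inspection entirely.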

Related

In NiFi, does usage of the EvaluateJsonPath processor have a performance impact because of attribute creation?

I'm trying to integrate the NiFi REST APIs with my application. By mapping inputs and outputs from my application, I call the NiFi REST API for flow creation. In my use case, most of the time I will extract JSON values and apply expression language to them.
To simplify all the use cases, I am using the EvaluateJsonPath processor to fetch all attributes via JsonPath and then apply expression language functions on them in an extract processor. Below is the flow diagram for that.
Is this the right approach? For JSON-to-JSON manipulation with 30 keys this is the simplest way, and since I am integrating the NiFi REST APIs with my application, I cannot generate JOLT transformation logic dynamically based on the user mapping.
So, in this case, does usage of the EvaluateJsonPath processor create any performance issues for about 50 use cases with different transformation logic? The documentation says heavy attribute usage creates performance (memory) issues.
Your concern about having too many attributes in memory should not be an issue here; having 30 attributes per flowfile is higher than usual, but if these are all strings of up to roughly 100-200 characters, there should be minimal impact. If you start extracting kilobytes of data from the flowfile content into the attributes of each flowfile, you will see increased heap usage, but the framework should still be able to handle this until you reach very high throughput (thousands of flowfiles per second on commodity hardware like a modern laptop).
You may want to investigate ReplaceTextWithMapping, as that processor can load from a definition file and handle many replace operations using a single processor.
It is usually a flow design "smell" to have multiple copies of the same flow process with different configuration values (with the occasional exception of database interaction). Rather, see if there is a way you can genericize the process and populate the relevant values for each flowfile using variable population (from the incoming flowfile attributes, the variable registry, environment variables, etc.).
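For context, the kind of per-flowfile extraction EvaluateJsonPath performs is conceptually similar to this sketch using the Jayway JsonPath library; the JSON shape and the paths are invented for illustration, and this is not NiFi's internal code:

import com.jayway.jsonpath.DocumentContext;
import com.jayway.jsonpath.JsonPath;

public class JsonPathSketch {
    public static void main(String[] args) {
        // Hypothetical flowfile content.
        String json = "{\"order\":{\"id\":\"A-100\",\"customer\":{\"name\":\"Jane\"},\"total\":42.5}}";

        // Parse once, then evaluate several paths, roughly what happens when
        // EvaluateJsonPath is configured with multiple attribute properties.
        DocumentContext doc = JsonPath.parse(json);
        String orderId  = doc.read("$.order.id");
        String customer = doc.read("$.order.customer.name");
        Double total    = doc.read("$.order.total");

        // In NiFi these values would become small string attributes on the flowfile.
        System.out.println(orderId + " / " + customer + " / " + total);
    }
}

Extracted values of this size stay in the range the answer above describes as having minimal heap impact.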

How can NiFi handle burst data?

If the data submitted to NiFi does not come in a steady flow (but in bursts), how can NiFi handle it? Does it use a message broker to buffer it? I haven't seen anything like this in its documentation.
NiFi connections (the links between processors) have the capability of buffering FlowFiles (the unit of data that NiFi handles, basically content + metadata about that content), and NiFi also has the feature of backpressure, the ability of a processor to "tell" the upstream flow that it cannot handle any more data at a particular time. The relevant discussion from the User Guide is here.
Basically, you can set up connection(s) to be as "wide" as you expect the burst to be, or, if that is not prudent, you can set a more appropriate value and NiFi will take a sort of "leaky bucket with notification" approach: it will handle what data it can, and the framework will schedule upstream processors based on whether they would be able to do their job.
If you are getting data from a source system that does not buffer data, then you can suffer data loss when backpressure is applied; however, that is because the source system must push data when NiFi cannot prudently accept it, so the alterations should be made on the source system rather than in the NiFi flow.
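As a loose analogy only (plain JDK code, not NiFi's implementation), a connection with an object-count backpressure threshold behaves somewhat like a bounded queue whose producer is held back while the queue is full:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BackpressureAnalogy {
    public static void main(String[] args) throws InterruptedException {
        // Analogy for a connection's "back pressure object threshold".
        BlockingQueue<String> connection = new ArrayBlockingQueue<>(10_000);

        Thread producer = new Thread(() -> {
            for (int i = 0; i < 100_000; i++) {
                try {
                    // put() blocks while the queue is full, i.e. the "upstream
                    // processor" is not run again until there is room.
                    connection.put("flowfile-" + i);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    connection.take();   // downstream pulls at its own pace
                    Thread.sleep(1);     // simulate slow processing
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.interrupt();
    }
}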

Logstash/not logstash for kafka-elasticsearch integration?

I read that Elasticsearch rivers/river plugins are deprecated, so we cannot have direct Kafka-Elasticsearch integration. If we want to do this, we need some Java (or other language) layer in between that puts the data from Kafka into Elasticsearch using its APIs.
On the other hand, with kafka-logstash-elasticsearch we get rid of the above middle layer and achieve the same through Logstash with just configuration. But I am not sure whether having Logstash in between is an overhead or not.
And is my understanding right?
Thanks in advance for the inputs.
Regards,
Priya
Your question is quite general. It would be good to understand your architecture, its purpose and assumptions you made.
Kafka, as it is stated in its documentation, is a massively scalable publish-subscribe messaging system. My assumption would be that you use it as a data broker in your architecture.
Elasticsearch on the other hand, is a search engine, hence I assume that you use it as a data access/searching/aggregation layer.
These two separate systems require connectors to create a proper data-pipeline. That's where Logstash comes in. It allows you to create data streaming connection between, in your case, Kafka and Elasticsearch. It also allows you to mutate the data on the fly, depending on your needs.
Ideally, Kafka uses raw data events. Elasticsearch stores documents which are useful to your data consumers (web or mobile applications, other systems etc.), so they can be quite different from the raw data format. If you need to modify the data between its raw form and the ES document, that's where Logstash might be handy (see the filters stage).
Another approach could be to use Kafka Connectors, building custom tools e.g. based on Kafka Streams or Consumers, but it really depends on the concepts of your architecture - purpose, stack, data requirements and more.
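To make the trade-off concrete, the hand-rolled "middle layer" mentioned in the question could look roughly like the sketch below: a plain Kafka consumer that posts each message to Elasticsearch's HTTP API. The broker address, topic, index name and endpoint path are assumptions, the messages are assumed to already be JSON, and Logstash would replace this code with a few lines of configuration.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class KafkaToElasticsearchBridge {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // assumed broker
        props.put("group.id", "es-indexer");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("raw-events"));   // hypothetical topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    indexDocument(record.value());
                }
            }
        }
    }

    // POSTs one JSON document to Elasticsearch; the index name and endpoint are
    // hypothetical and differ between Elasticsearch versions.
    static void indexDocument(String jsonDoc) throws Exception {
        URL url = new URL("http://localhost:9200/events/_doc");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(jsonDoc.getBytes(StandardCharsets.UTF_8));
        }
        if (conn.getResponseCode() >= 300) {
            System.err.println("Indexing failed with HTTP " + conn.getResponseCode());
        }
        conn.disconnect();
    }
}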

Storm: how to set up various metrics for the same data source

I'm trying to set up Storm to aggregate a stream, but with various (DRPC-available) metrics on the same stream.
For example, the stream consists of messages that have a sender, a recipient, the channel through which the message arrived and a gateway through which it was delivered. I'm having trouble deciding how to organize one or more topologies that could give me e.g. the total count of messages by gateway and/or by channel. And besides the totals, counts per minute would be nice too.
The basic idea is to have a spout that will accept messaging events, and from there aggregate the data as needed. Currently I'm playing around with Trident and DRPC and I've come up with two possible topologies that solve the problem at this stage. I can't decide which approach is better, if either.
The entire source is available at this gist.
It has three classes:
- RandomMessageSpout
  - used to emit the messaging data
  - simulates the real data source
- SeparateTopology
  - creates a separate DRPC stream for each metric needed
  - also a separate query state is created for each metric
  - they all use the same spout instance
- CombinedTopology
  - creates a single DRPC stream with all the metrics needed
  - creates a separate query state for each metric
  - each query state extracts the desired metric and groups results for it
Now, for the problems and questions:
- SeparateTopology
  - is it necessary to use the same spout instance, or can I just say new RandomMessageSpout() each time?
  - I like the idea that I don't need to persist data grouped by all the metrics, but just by the groupings we need to extract later
  - is the data emitted by the spout actually processed by all the state/query combinations, and not just by the first one that comes?
  - would this also later enable dynamic addition of new state/query combinations at runtime?
- CombinedTopology
  - I don't really like the idea that I need to persist data grouped by all the metrics, since I don't need all the combinations
  - it came as a surprise that all the metrics always return the same data
    - e.g. channel and gateway inquiries return status metrics data
    - I found that this was always the data grouped by the first field in the state definition
    - this topic explains the reasoning behind this behaviour
    - but I'm wondering if this is a good way of doing things in the first place (and will find a way around this issue if need be)
- SnapshotGet vs TupleCollectionGet in stateQuery
  - with SnapshotGet things tended to work, but not always; only TupleCollectionGet solved the issue
  - any pointers as to what is the correct way of doing that?
I guess this is a longish question / topic, but any help is really appreciated!
Also, if I missed the architecture entirely, suggestions on how to accomplish this would be most welcome.
Thanks in advance :-)
You can't actually split a stream in SeparateTopology by invoking newStream() using the same spout instance, since that would create new instances of the same RandomMessageSpout spout, which would result in duplicate values being emitted to your topology by multiple, separate spout instances. (Spout parallelization is only possible in Storm with partitioned spouts, where each spout instance processes a partition of the whole dataset -- a Kafka partition, for example).
The correct approach here is to modify the CombinedTopology to split the stream into multiple streams, one for each metric you need (see below), and then do a groupBy() on that metric's field and a persistentAggregate() on each newly branched stream.
From the Trident FAQ,
"each" returns a Stream object, which you can store in a variable. You can then run multiple eaches on the same Stream to split it, e.g.:
Stream s = topology.each(...).groupBy(...).aggregate(...)
Stream branch1 = s.each(...)
Stream branch2 = s.each(...)
See this thread on Storm's mailing list, and this one for more information.
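Put together, a sketch of that branched approach (modeled on the standard Trident word-count examples) might look as follows; the field names, the in-memory state, the DRPC function names and the org.apache.storm package names are assumptions for illustration, not taken from the gist:

import org.apache.storm.LocalDRPC;
import org.apache.storm.topology.IRichSpout;
import org.apache.storm.trident.Stream;
import org.apache.storm.trident.TridentState;
import org.apache.storm.trident.TridentTopology;
import org.apache.storm.trident.operation.builtin.Count;
import org.apache.storm.trident.operation.builtin.MapGet;
import org.apache.storm.trident.testing.MemoryMapState;
import org.apache.storm.tuple.Fields;

public class MetricsTopologySketch {

    // spout: e.g. the question's RandomMessageSpout, assumed here to be a plain IRichSpout
    public static void build(TridentTopology topology, IRichSpout spout, LocalDRPC drpc) {
        // One stream from a single spout; branch it once per metric instead of
        // persisting data grouped by every field combination.
        Stream messages = topology.newStream("messages", spout);

        // Branch 1: message count per gateway.
        TridentState byGateway = messages
                .groupBy(new Fields("gateway"))
                .persistentAggregate(new MemoryMapState.Factory(), new Count(), new Fields("count"));

        // Branch 2: message count per channel.
        TridentState byChannel = messages
                .groupBy(new Fields("channel"))
                .persistentAggregate(new MemoryMapState.Factory(), new Count(), new Fields("count"));

        // One DRPC query stream per metric; the function names are made up.
        topology.newDRPCStream("countByGateway", drpc)
                .groupBy(new Fields("args"))
                .stateQuery(byGateway, new Fields("args"), new MapGet(), new Fields("count"));

        topology.newDRPCStream("countByChannel", drpc)
                .groupBy(new Fields("args"))
                .stateQuery(byChannel, new Fields("args"), new MapGet(), new Fields("count"));
    }
}

Each branch persists only its own grouping, which matches the preference above for not persisting data grouped by every metric combination; counts per minute could be handled the same way, e.g. by adding a truncated-timestamp field to the groupBy.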

Talend Open Studio for ESB 5.2 Route to Job Optimisation/Performance Issue

Using Talend ESB 5.2.0, I want to create a mediation route that will call a processing job on the payload of an inbound request to a CXF messaging endpoint; however, my current implementation suffers from performance issues with large payloads.
I’ve investigated the issue and found that the bottleneck is in marshalling my inbound XML payload from the tRouteInput component to the internal row structure for processing, using a tXMLMap.
Is it possible, using a built-in type converter in the route, to marshal the internal row structure from the route and stream through POJOs or transport objects that are cheaper to process in the job? Or is there a better way to marshal XML to Talend’s internal row structure from a route using a less expensive transform?
Any thoughts would be welcome.
Cheers,
mids
It turns out that the issue was caused by the format of the inbound XML payload: having more than one loop element mapping to separate output flows from the tXMLMap generates relative links for each item in each output flow, enabling more advanced processing involving the loops if required.
This caused the large memory overhead that led to poor throughput.
Since we did not require that more advanced processing in the XML-to-Talend-row conversion, we overcame the issue by splitting the payload into its distinct loop elements using tReplicate and tExtractXMLField components before mapping out of the XML in separate tXMLMaps, avoiding the auto-generation of those links. - mids
