NiFi DistributedMapCacheServer and flowfiles on different nodes

Hi, I am using the NiFi DistributedMapCacheServer to keep track of processed files in my flow. The issue is that we are working in a cluster, and to leverage it we use load balancing on queues, so FlowFiles are not all on the same node. When they arrive at Put/FetchDistributedMapCache, which uses a DistributedMapCacheClient configured with the fixed hostname of one of the nodes, it only works when the arriving FlowFile is on the same node as the one specified in the DistributedMapCacheClient. For the others we get:
FetchDistributedMapCache[id=d4713096-5ae5-1cb4-b777-202948e39e50] Unable to communicate with cache when processing StandardFlowFileRecord[uuid=5b1e8092-5bc5-4213-97a3-fa023691973f,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1587393798960-14, container=default, section=14], offset=983015, length=5996],offset=0,name=bf15d684-4100-4aa5-9fb5-fa0ddb21b140,size=5996] due to No route to host: java.net.NoRouteToHostException: No route to host
Is there any way to set up the DMC server/client to work in such a case, or can I somehow route all FlowFiles to an explicitly given node?

This means the hostname/IP address that you specified in the DistributedMapCacheClient for the location of the server is unreachable from the other nodes in your cluster. Your nodes must already be able to communicate with each other, since you have a cluster, so you just need to set this property to a value that every node can reach.
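For reference, a minimal sketch of the two controller services involved; nifi-node1.example.com is a placeholder for whichever node you pick to host the cache:

DistributedMapCacheServer (controller service; started on every node, but only the addressed one is ever used):
    Port: 4557

DistributedMapCacheClientService (referenced by Put/FetchDistributedMapCache on all nodes):
    Server Hostname: nifi-node1.example.com   (must resolve and be reachable from every cluster node)
    Server Port: 4557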

Related

Elasticsearch - two out of three nodes instant shutdown case

We have a small Elasticsearch cluster of 3 nodes: two in one datacenter and one in another, for disaster recovery reasons. However, if the first two nodes fail simultaneously, the third one won't work either; it will just throw "master not discovered or elected yet".
I understand that this is intended; this is how an Elasticsearch cluster should work. But is there some additional configuration that I don't know about to keep the third node working on its own, even if only in read-only mode?
Nope, there's not. As you mentioned, it's designed that way.
You're probably not doing yourselves a lot of favours by running things across datacentres like that. Network issues are not kind to Elasticsearch, due to its distributed nature.
Elasticsearch runs in distributed mode by default. Nodes assume that they are, or will be, part of a cluster, and during setup they try to join the cluster automatically.
If you want Elasticsearch to be available on a single node, without the need to communicate with other Elasticsearch nodes, it works similarly to a standalone server. To do this, tell Elasticsearch to work locally only (disable the network):
open your elasticsearch/config/elasticsearch.yml and set:
node.local: true

Apache Nifi - Flowfiles are stuck in queue

The flow files are stuck in a queue (load balanced by attribute) and are not read by the next downstream processor (MergeRecord with CSVReader and CSVRecordSetWriter). In the NiFi UI it appears that flow files are in the queue, but listing the queue says "Queue has no flow files", and attempting to empty the queue gives the same message. The NiFi logs contain no exceptions related to the processor. There are around 80 flow files in the queue.
I have tried the action items below, all in vain:
Restarting the downstream and upstream (ConvertRecord) processors.
Disabling and re-enabling the CSVReader and CSVRecordSetWriter.
Disabling load balancing.
Setting the flow file expiration to 3 sec.
Screenshots (not reproduced here): FlowFile, MergeRecord properties, CSVReader service, CSVRecordSetWriter.
Your MergeRecord processor is running only on the primary node, and likely all the files are on other nodes (since you are load balancing). NiFi is not aware enough to notice that the downstream processor only runs on the primary, so it does not automatically rebalance everything to the primary node. Simply changing MergeRecord to run on all nodes will allow the files to pass through; see the sketch below.
Alas, I have not found a way to get all flow files back onto the primary node; you can use the "Single node" load-balance strategy to get all the files onto the same node, but it will not necessarily be the primary.
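Concretely, the fix suggested above is a single scheduling change on the processor (tab and property names approximate those in the NiFi UI):

MergeRecord -> Configure -> Scheduling tab:
    Execution: All Nodes   (instead of Primary Node)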
This is probably because the content of the flowfile was deleted; however, the entry for it is still present in the flowfile repository.
If you have a dockerized NiFi setup and you don't have a heavy production flow, you can stop your NiFi instance and delete everything in the repository folders (flowfile_repository, content_repository, etc.), provided you have all your directories mounted and no other data is at risk of loss.
Let me know if you need further assistance.
You have a misconfiguration in the way you load-balance your FlowFiles. To check that, stop your MergeRecord processor so you can list and view what's inside your queue.
In the dialog displayed you can check on which node your FlowFiles are waiting. It is highly probable that your FlowFiles are in fact on one of the other nodes, but since MergeRecord is running only on the primary node, it has nothing in its queue.

ElasticSearch Clusters Setting

Does anyone know how to tell Elasticsearch to stop node-to-node communication and then restart it? In my system I would like to tell it to stop until a certain condition is met, and then restart the communication (synchronize data).
By node-to-node communication, do you mean data synchronization and shard relocations?
If yes, you can do it by setting cluster.routing.allocation.enable to none using the cluster settings API.
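A minimal example with the cluster settings API (set it back to "all" when you want allocation to resume):

PUT _cluster/settings
{
  "transient" : {
    "cluster.routing.allocation.enable" : "none"
  }
}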
If you don't mean data synchronization, you can achieve this by blocking port 9300 (or whichever port ES is using for internal communication).
Please note that any node leaving the cluster will cause Elasticsearch to rebalance the shards and replicas. The overall cluster load increases when a node is lost, since the cluster needs to fulfil the shard and replica settings by copying existing data to the remaining nodes. Therefore, if the operation happens often, considerable extra space will be consumed by the additional shards and replicas.
If you fully understand the impact, you can try shard allocation filtering. For example, to exclude the host IP 10.0.0.1 from the cluster:
PUT _cluster/settings
{
  "transient" : {
    "cluster.routing.allocation.exclude._ip" : "10.0.0.1"
  }
}
Other than the IP, you can use the node name or host name to exclude a node as well.
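To re-admit the node later, clear the filter by setting it back to null, e.g.:

PUT _cluster/settings
{
  "transient" : {
    "cluster.routing.allocation.exclude._ip" : null
  }
}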
You can find the full documentation here: https://www.elastic.co/guide/en/elasticsearch/reference/current/allocation-filtering.html

How to setup Elasticsearch client nodes?

I have a couple of Elasticsearch questions regarding client nodes:
Can I say that any node, as long as its HTTP port is open, can be treated as a "client" node, because we can search/index through it?
Actually we treat a node as a client node when master=false and data=false. If I set up 10 client nodes, do I need to do the routing on my client side? I mean, if I specify clientOne:9200 in my code as the ES portal, would clientOne forward HTTP requests to the other client nodes? Otherwise clientOne would be under very high pressure, i.e. do the client nodes communicate with each other?
When I specify client nodes in an ES cluster, should I close the other nodes' HTTP ports, since we should only query the client nodes?
Do you think it's necessary to set up both a data node and a client node on the same machine, or is it enough for the data node to act as a client node as well, since it's on the same machine anyway?
If the ES cluster will be heavily/frequently indexed but searched less, then I don't have to set up client nodes, because client nodes are mainly good for gathering (search) results, right?
For general search/index purposes, should I use the HTTP port or the TCP port? What's the difference from a client's perspective?
Yes, you can send queries via HTTP to any node that has port 9200 open.
With node.data: false and node.master: false, you get a "client node". These are useful for offloading indexing and search traffic from your data nodes. If you have 10 of them, you would want to put a load balancer in front of them.
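For example, the elasticsearch.yml of such a client node might look like this (a minimal sketch; the node name is a placeholder, and http.enabled applies to the pre-5.x versions discussed here):

node.name: client-01
node.master: false   # never eligible to become master
node.data: false     # holds no shards
http.enabled: true   # serves client traffic on port 9200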
Closing the data nodes' HTTP port (http.enabled: false) would keep them from serving client requests (probably good), though it would also prevent you from curling them directly for stats, etc.
Client nodes are useful (see #2), so I wouldn't route traffic directly to your data nodes. Whether you run both a client node and a data node on the same piece of hardware depends on the configuration of that machine (do you have sufficient RAM, etc.).
Client nodes are also useful for indexing, because they know which data node should receive the data for storage. If you sent an indexing request to a random data node instead, the odds would be high that it would have to redirect the request to another node. That's a waste of time and resources if you can create client nodes.
Having your clients join the cluster might give them access to more information about the cluster, but using HTTP gives them a more generic "black box" interface. With HTTP, you also don't have to keep your clients at the same version as your ES nodes.
Hope that helps.

How to handle url change when a node dies?

I am new to Elasticsearch. I have a cluster with 3 nodes on the same machine. To access each node I have a separate URL, since the port changes (localhost:9200, localhost:9201, localhost:9202).
Now the question I have is: suppose my node 1 (i.e. the master node) dies. The Elasticsearch engine handles the situation very well and makes node 2 the master node, but how does my application know that a node died and that it should now hit node 2 on port 9201?
Is there a way whereby I always hit a single URL and it internally figures out which node to hit?
Thanks,
Pratz
The client discovers the search nodes with a discovery module. The name of the cluster in your client's configuration is important to get this working.
With a correct configuration (on both the client and the cluster) you can bring a single node down without any negative effect on your client.
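As a minimal sketch, the relevant client settings (for the native Java transport client of that era) would be the cluster name plus sniffing, so the client keeps its own list of live nodes; my_cluster is a placeholder:

cluster.name: my_cluster         # must match the name configured on the cluster
client.transport.sniff: true     # client periodically rediscovers the live nodes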
See the following links:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-discovery.html
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-discovery-zen.html
