NiFi ingestion with 10,000+ sensors? - apache-nifi

I am planning to use NiFi to ingest data from more than 10,000 sensors. There are 50-100 types of sensors, each of which will send a specific metric to NiFi.
I am pondering whether I should assign one port to listen to all the sensors, or one port per type of sensor to facilitate my data pipeline. Which is the better option?
Is there an upper limit on the number of ports that NiFi can listen on?

NiFi is such a powerful tool. You can do either of your ideas, but I would recommend doing whatever is easier for you. If you have sensor data sources that need different data flows, use different ports. However, if you can fire everything at a single port, I would do that: it is easier to implement, consistent, easier to support later, and easier to scale.
In a large-scale, highly available NiFi deployment, you may want a load balancer to handle the inbound data. The sensors would push their data toward a single host:port on the LB appliance, which then directs it to a NiFi cluster of 3, 5, 10, or more nodes.

I agree with the other answer that once scaling comes into play, an external load balancer in front of NiFi would be helpful.
Regarding the flow design, I would suggest using a single exposed port to ingest all the data, and then using RouteOnAttribute or RouteOnContent processors to direct specific sensor inputs into different flow segments.
One of the strengths of NiFi is the generic nature of flows given sufficient parameterization, so taking advantage of flowfile attributes to handle different data types dynamically scales and performs better than duplicating a lot of flow segments to statically handle slightly differing data.
Running multiple ingestion ports carries substantial overhead compared to a single port with routed flowfiles, so the single-port approach will give you a large performance improvement. You can also organize your flow segments into hierarchical nested groups using the Process Group feature, both to keep different flow segments cleanly organized and to enforce access controls.
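For illustration, a RouteOnAttribute processor with its Routing Strategy set to "Route to Property name" could carry one dynamic property per sensor family, each of which becomes its own outgoing relationship. The attribute name sensor.type below is a hypothetical example of something you would set on the flowfile when the data is received, not an attribute NiFi creates for you:

temperature = ${sensor.type:equals('temperature')}
humidity    = ${sensor.type:equals('humidity')}
vibration   = ${sensor.type:startsWith('vib')}

Each of those relationships can then be connected to its own process group.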
2020-06-02 Edit to answer questions in comments
Yes, you would have a lot of relationships coming out of the initial RouteOnAttribute processor at the ingestion port. However, you can segment these (route all flowfiles with X attribute in "family" X here, Y here, etc.) and send each to a different process group which encapsulates more specific logic.
Think of it like a physical network: at a large organization, you don't buy 1000 external network connections and hook each individual user's machine directly to the internet. Instead, you obtain one (plus redundancy/backup) large connection to the internet and use a router internally to direct the traffic to the appropriate endpoint. This has management benefits as well as cost, scalability, etc.
The overhead of multiple ingestion ports is that you have additional network requirements (S2S is very efficient when communicating, but there is overhead on a connection basis), multiple ports to be opened and monitored, and CPU to schedule & run each port's ingestion logic.
I've observed this pattern in practice at scale in multinational commercial and government organizations, and the performance improvement was significant when switching to a "single port; route flowfiles" pattern vs. "input port per flow" design. It is possible to accomplish what you want with either design, but I think this will be much more performant and easier to build & maintain.

Related

Highload data update architecture

I'm developing a parcel tracking system and thinking about how to improve its performance.
Right now we have one table in postgres named parcels containing things like id, last known position, etc.
Every day, about 300,000 new parcels are added to this table. The parcel data is fetched from an external API. We need to track all parcel positions as accurately as possible and reduce the time between API calls for any specific parcel.
Given such requirements what could you suggest about project architecture?
Right now the only solution I can think of is a producer-consumer pattern: one process selects all records from the parcels table in an infinite loop and then distributes the data-fetching tasks with something like Celery.
Major downsides of this solution are:
possible deadlocks, as fetches for the same parcel can be executed at the same time on different machines;
the need to control the queue size.
This is a very broad topic, but I can give you a few pointers. Once you reach the limits of vertical scaling (scaling by picking more powerful machines), you have to scale horizontally (scaling by adding more machines for the same task). So to be able to design scalable architectures, you have to learn about distributed systems. Here are some topics to look into:
Infrastructure & processes for hosting distributed systems, such as Kubernetes, Containers, CI/CD.
Scalable forms of persistence. For example different forms of distributed NoSQL like key-value stores, wide-column stores, in-memory databases and novel scalable SQL stores.
Scalable forms of data flows and processing. For example event driven architectures using distributed message- / event-queues.
For your specific problem with parcels, I would recommend considering a key-value store for your position data. Such stores can scale to billions of insertions and retrievals per day (when querying by key).
It also sounds like your data is somewhat temporary and could be kept in in-memory hot storage while the parcel has not been delivered yet (and archived afterwards). A distributed in-memory DB could scale even further in terms of insertions and queries.
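As a rough illustration of the key-value approach, here is a minimal Go sketch using the go-redis client; the key layout parcel:<id>, the JSON payload, and the 30-day expiry are assumptions, not requirements:

package main

import (
	"context"
	"fmt"
	"time"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

	// Store the last known position of a parcel under a key derived from its ID.
	err := rdb.Set(ctx, "parcel:12345", `{"lat":52.52,"lon":13.40}`, 30*24*time.Hour).Err()
	if err != nil {
		panic(err)
	}

	// Retrieve it by key; lookups like this are what key-value stores scale best at.
	pos, err := rdb.Get(ctx, "parcel:12345").Result()
	if err != nil {
		panic(err)
	}
	fmt.Println(pos)
}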
Also, you probably want to decouple data extraction (through your API) from processing and persistence. For that you could consider introducing stream-processing systems.
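To make the decoupling concrete, here is a minimal in-process Go sketch in which a buffered channel stands in for a distributed queue (in production this would be Kafka, RabbitMQ, or similar, and fetchPosition is a placeholder for the external API call):

package main

import (
	"fmt"
	"sync"
)

type PositionUpdate struct {
	ParcelID string
	Position string
}

// fetchPosition is a stand-in for the call to the external tracking API.
func fetchPosition(parcelID string) PositionUpdate {
	return PositionUpdate{ParcelID: parcelID, Position: "52.52,13.40"}
}

func main() {
	updates := make(chan PositionUpdate, 100) // stand-in for a message queue
	var wg sync.WaitGroup

	// Consumers: persist updates independently of how fast they are produced.
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for u := range updates {
				fmt.Printf("persist %s -> %s\n", u.ParcelID, u.Position)
			}
		}()
	}

	// Producer: the extraction step only publishes; it never touches the database.
	for _, id := range []string{"p1", "p2", "p3"} {
		updates <- fetchPosition(id)
	}
	close(updates)
	wg.Wait()
}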

Use case of service bus in microservice architecture

I am trying to learn how to architect a business application that adheres to microservices fundamentals and its considerations. I have come across a question about which I am a bit confused.
In a microservice architecture with multiple microservices, each with their own DB, if data needs to be shared among them, what is the preferred way: a service bus, or calling them via HttpClient?
I know that with a message queue over a service bus, whenever a message needs to be shared, one microservice can publish it and all subscribers can then retrieve it. But if that information also needs to be stored in the other microservices' databases, wouldn't that become redundant data?
So isn't it enough to simply read the data via HttpClient whenever it is needed?
Looking forward to the replies; thanks in advance for the help.
It depends upon other factors like latency, redundancy, and availability. Both options work: keeping redundant data, or making a REST call whenever you need the data.
Points that work against direct HTTP client calls:
It impacts availability: the overall availability of the system is reduced.
It impacts performance and latency. Suppose there is an operation in service A that needs data from service B, and the operation is invoked very frequently. In that case it reduces performance and increases latency as well as response time.
It doesn't support JOINs, so you have to combine the data yourself, which also impacts performance.
Points that work against the message bus / event-driven approach:
Duplicate data, which increases the complexity of the system needed to keep the copies in sync.
It reduces the consistency of the system; the system is now eventually consistent.
In system design, no option is incorrect. All options have some pros and some cons, so choose wisely according to your requirements and your system.
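To make the trade-off concrete, here is a minimal Go sketch of the two styles; the service-b URL, the CustomerChanged handler, and the in-memory map are hypothetical placeholders for the real endpoint and for the consuming service's own database:

package main

import (
	"fmt"
	"io"
	"net/http"
)

// Style 1: synchronous call. Service A asks service B every time it needs the data,
// so A's availability and latency now depend on B.
func getCustomerViaHTTP(id string) (string, error) {
	resp, err := http.Get("http://service-b/customers/" + id) // hypothetical endpoint
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return string(body), err
}

// Style 2: event driven. Service A keeps its own (redundant) copy and updates it
// whenever service B publishes a CustomerChanged event; reads stay local but
// are only eventually consistent.
var localCustomers = map[string]string{} // stand-in for service A's own DB

func onCustomerChanged(id, payload string) {
	localCustomers[id] = payload
}

func main() {
	onCustomerChanged("42", `{"name":"Alice"}`)
	fmt.Println(localCustomers["42"])
}

The first function couples every read to service B's availability and latency; the second keeps reads local at the cost of a redundant, eventually consistent copy.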

Load Balancing to Maximize Local Server Cache

I have a single-server system that runs all kinds of computations on user data, accessible via REST API. The computations require that large chunks of the user data are in memory during the computation. To do this efficiently, the system includes an in-memory cache, so that multiple requests on the same data chunks will not need to re-read the chunks from storage.
I'm now trying to scale the system out, since one large server is not enough, and I also want to achieve active/active high availability. I'm looking for the best practice to load balance between the multiple servers, while maximizing the efficiency of the local cache already implemented.
Each REST call includes a parameter that identifies which chunk of data should be accessed. I'm looking for a way to tell a load balancer to route the request to a server that has that chunk in cache, if such a server exists - otherwise just use a regular algorithm like round robin (and update the routing table such that the next requests for the same chunk will be routed to the selected server).
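To make this concrete, the behaviour I am describing might look roughly like the following Go sketch (ChunkRouter and the server addresses are hypothetical placeholders; in practice this logic would live in, or right next to, the load balancer):

package main

import "fmt"

// ChunkRouter pins each chunk to the first server that handled it and
// falls back to round robin for chunks it has not seen yet.
type ChunkRouter struct {
	servers []string
	assign  map[string]string
	next    int
}

func NewChunkRouter(servers []string) *ChunkRouter {
	return &ChunkRouter{servers: servers, assign: map[string]string{}}
}

func (r *ChunkRouter) Route(chunkID string) string {
	if s, ok := r.assign[chunkID]; ok {
		return s // this server already has the chunk in its cache
	}
	s := r.servers[r.next%len(r.servers)] // plain round robin for new chunks
	r.next++
	r.assign[chunkID] = s
	return s
}

func main() {
	r := NewChunkRouter([]string{"10.0.0.1", "10.0.0.2", "10.0.0.3"})
	fmt.Println(r.Route("chunk-17")) // first request picks a server
	fmt.Println(r.Route("chunk-17")) // later requests stick to it
}

A real version would also need locking, handling of server failures, and eviction when a chunk leaves a server's cache.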
A bit more input to consider:
The number of data chunks is in the thousands, potentially tens of thousands. The number of servers is in the low dozens.
I'd rather not move to a centralized cache on another server, e.g. Redis. I have a lot of spare memory on the existing machines that I'd like to utilize, since the computations are mostly CPU-bound. Also, I'd prefer not to re-implement another custom caching layer.
My servers are on AWS so a way to implement this in ELB is fine with me, but open to other cloud-agnostic solutions. I could in theory implement a system that updates rules on an AWS application load balancer, but it could potentially grow to thousands of rules (one per chunk) and I'm not sure that will be efficient.
Since requests using the same data chunk can come from multiple sources, session-based stickiness is not enough. Some of these operations are write operations, and I'd really not want to deal with cross-server synchronization. All the operations on a single chunk should be routed to the single server that has that chunk in memory.
Any ideas are welcome! Thanks!

How to distribute data and computation to maximize locality?

Please bear with me, this is a basic architectural question for my first attempt at a "big data" project, but I believe your answers will be of general interest to anyone who is starting out in this field.
I've googled and read the high-level descriptions of Kafka, Storm, Memcached, MongoDB, etc., but now that I'm ready to dig in to start designing my app, I still need some further insight on how in fact the data should be distributed and shared.
The performance of my app is critical, so one objective is to somehow maximize the locality of the data in the RAM of the machines doing the distributed calculations. I need advice for this part of the design.
If my app had some clear criteria for a priori sharding the data and distributing the calculations (such as geographical regions or company divisions) then the solution would be obvious. But unfortunately my app's data access patterns are dynamic and depend on the results of previous calculations.
My app is an analysis program with distinct stages. In the first stage, all the data is accessed once and a metric is calculated for each data object. In the second stage, a subset of the data objects may be accessed, with the probability of access being proportional to each data object's metric that was calculated in the previous stage. In the final stage, a relatively small subset of data objects will be accessed many times for many calculations.
At all stages, it is required that the calculations be distributed across several servers. The calculations are embarrassingly parallel, and each distributed calculation only needs to access a few data objects. It is also required that the number of servers can be specified before the app runs (for example, run on one server, or run on fifty servers).
It seems to me that I need some mechanism that distributes the appropriate data objects to the appropriate compute servers, as opposed to just blindly fetching the data from some database service (whether centralized or distributed). Also, it seems to me that some sort of smart caching system might be appropriate, since the data access pattern depends on the previous calculations and cannot be predicted a priori. But as far as I can tell, Memcached is not such a system because the sharding is determined a priori.
I've read many times that the operating system cache performs better than any monkeying around that we may try. I think the ideal solution is that each compute server's RAM cache somehow captures the data objects' dynamic access patterns, but it's not clear to me how this would work with a NoSQL or Memcached service.
Thanks for bearing with me this far. I realize this is a basic question, but the answer eludes me so far. I can't resolve the dynamic access patterns of my app with the a priori sharding of the NoSQL/Memcached packages. Any advice would be greatly appreciated.
I recommend taking a look at http://tarantool.org. Shard to maximize locality for the most common data access pattern, use Lua for local computations, and net.box to issue a remote RPC when a calculation needs to continue on another node. All data is stored in RAM, and if you write your computation code carefully it can take advantage of the just-in-time compiler.
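Tarantool and Lua specifics aside, the core placement idea, sending each computation to the node that already holds the data in RAM, can be sketched like this in Go (node names and object IDs are hypothetical):

package main

import (
	"fmt"
	"hash/fnv"
)

// nodeFor maps an object ID to one of N nodes; every caller agrees on the
// placement, so computations can be shipped to where the data already lives.
func nodeFor(objectID string, nodes []string) string {
	h := fnv.New32a()
	h.Write([]byte(objectID))
	return nodes[h.Sum32()%uint32(len(nodes))]
}

func main() {
	nodes := []string{"node-1", "node-2", "node-3"}
	for _, id := range []string{"obj-a", "obj-b", "obj-c"} {
		fmt.Printf("%s -> %s\n", id, nodeFor(id, nodes))
	}
}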

Determine Request Latency

I'm working on creating a version of Pastry natively in Go. From the design [PDF]:
It is assumed that the application provides a function that allows each Pastry node to determine the "distance" of a node with a given IP address to itself. A node with a lower distance value is assumed to be more desirable. An application is expected to implement this function depending on its choice of a proximity metric, using network services like traceroute or Internet subnet maps, and appropriate caching and approximation techniques to minimize overhead.
I'm trying to figure out what the best way to determine the "proximity" (i.e., network latency) between two EC2 instances programmatically from Go. Unfortunately, I'm not familiar enough with low-level networking to be able to differentiate between the different types of requests I could use. Googling did not turn up any suggestions for measuring latency from Go, and general latency techniques always seem to be Linux binaries, which I'm hoping to avoid in the name of fewer dependencies. Any help?
Also, I note that the latency should be on the scale of 1ms between two EC2 instances. While I plan to use the implementation on EC2, it could hypothetically be used anywhere. Is latency generally so bad that I should expend the effort to ensure the network proximity of two nodes? Keep in mind that most Pastry requests can be served in log base 16 of the number of servers in the cluster (so for 10,000 servers, it would take approximately 3 requests, on average, to find the key being searched for). Is the latency from, for example, EC2's Asia-Pacific region to EC2's US-East region enough to justify the increased complexity and the overhead introduced by the latency checks when adding nodes?
A common distance metric in networking is to count the number of hops (node-hops in-between) a packet needs to reach its destination. This metric was also mentioned in the text you quoted. This could give you adequate distance values even for the low-latency environment you mentioned (EC2 “local”).
For the Go logic itself, one would think the net package is what you are looking for. And indeed, for latency tests (ICMP ping) you could use it to create an IP connection
conn, err := net.Dial("ip4:icmp", "127.0.0.1")
then create your ICMP packet structure and data, and send it. (See the Wikipedia page on ICMP; IPv6 needs a different format.) Unfortunately you can't create an ICMP connection directly, like you can with TCP and UDP, so you will have to handle the packet structure yourself.
As conn of type Conn is a Writer, you can then pass it your data, i.e. the ICMP data you defined.
In the ICMP Type field you can specify the message type. Values 8, 0, and 30 are the ones you are looking for: 8 is your echo request, the reply will be of type 0 (echo reply), and maybe type 30 (traceroute) gives you some more information.
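As a concrete starting point, the extension package golang.org/x/net/icmp (a separate module, not part of the standard net package) takes care of the ICMP packet marshalling for you; a minimal round-trip-time probe could look like the sketch below. It needs raw-socket privileges, and the peer address 10.0.0.2 is a placeholder:

package main

import (
	"fmt"
	"net"
	"os"
	"time"

	"golang.org/x/net/icmp"
	"golang.org/x/net/ipv4"
)

func main() {
	// Raw ICMP socket; needs root (or the equivalent capability) on most systems.
	c, err := icmp.ListenPacket("ip4:icmp", "0.0.0.0")
	if err != nil {
		panic(err)
	}
	defer c.Close()

	wm := icmp.Message{
		Type: ipv4.ICMPTypeEcho, Code: 0, // type 8: echo request
		Body: &icmp.Echo{
			ID:   os.Getpid() & 0xffff,
			Seq:  1,
			Data: []byte("ping"),
		},
	}
	wb, err := wm.Marshal(nil)
	if err != nil {
		panic(err)
	}

	start := time.Now()
	if _, err := c.WriteTo(wb, &net.IPAddr{IP: net.ParseIP("10.0.0.2")}); err != nil { // placeholder peer
		panic(err)
	}

	rb := make([]byte, 1500)
	n, _, err := c.ReadFrom(rb)
	if err != nil {
		panic(err)
	}
	rtt := time.Since(start)

	rm, err := icmp.ParseMessage(1, rb[:n]) // protocol 1 = ICMPv4
	if err != nil {
		panic(err)
	}
	if rm.Type == ipv4.ICMPTypeEchoReply { // type 0: echo reply
		fmt.Printf("round trip to peer: %v\n", rtt)
	}
}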
Unfortunately, for counting the network hops, you will need the IP packet header fields. This means you will have to construct your own IP packets, which net does not seem to allow.
Checking the source of Dial(), it uses internetSocket, which is not exported/public. I'm not really sure if I'm missing something, but it seems there is no simple way to construct your own IP packets to send with customizable header values. You'd have to check further how DialIP sends packets with internetSocket and duplicate and adapt that code/concept. Alternatively, you could use cgo and a system library to construct your own packets (this would add yet more complexity, though).
If you are planning on using IPv6, you will (also) have to look into ICMPv6. Both packet formats differ from their v4 versions.
So I'd suggest using simple latency (a timed ping) as the first, simpler implementation, and adding node-hop counting later if you need it. If you have both in place, you may also want to combine the two (fewer hops does not automatically mean better; think of long overseas cables, etc.).
