I'm currently evaluating whether Apache Storm is usable as a stream processing framework for my needs. It looks really nice, but what worries me is the scaling.
As far as I understand it, scaling is done by rebalancing.
For example, if I want to add a new server to the cluster, I have to increase the number of workers. But when I do so with
storm rebalance storm_example -n 4
all the bolts and spouts stop working while the rebalance is in progress. What I want is more like:
add the server, start a new worker on it, and when new data arrives, have this new worker also considered for working off the data.
Am I just not getting the idea of Storm, or is this not possible with it?
I had a similar requirement, and as per my research it is not possible. In my case we ended up creating a new Storm cluster without disturbing the existing one. We were (and are) trying to assign servers/workers to Storm based on load, in order to avoid AWS costs.
It would be interesting to know if we can do so.
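For reference, the same rebalance CLI that pauses the topology can also raise the executor count of individual components along with the worker count (mybolt below is a hypothetical component name; -w is the number of seconds the topology stays deactivated before work is redistributed):
storm rebalance storm_example -w 30 -n 4 -e mybolt=8
Adding workers alone does not create more executors for a component whose original parallelism hint is already saturated, which is why the -e form matters when scaling out.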
We have producers that are sending the following to Kafka:
topic=syslog, ~25,000 events per day
topic=nginx, ~5,000 events per day
topic=zeek.xxx.log, ~100,000 events per day (total). In this last case there are 20 distinct zeek topics, such as zeek.conn.log and zeek.http.log
kafka-connect-elasticsearch instances function as consumers to ship data from Kafka to Elasticsearch. The hello-world Sink configuration for kafka-connect-elasticsearch might look like this:
# elasticsearch.properties
name=elasticsearch-sink
connector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
tasks.max=24
topics=syslog,nginx,zeek.broker.log,zeek.capture_loss.log,zeek.conn.log,zeek.dhcp.log,zeek.dns.log,zeek.files.log,zeek.http.log,zeek.known_services.log,zeek.loaded_scripts.log,zeek.notice.log,zeek.ntp.log,zeek.packet_filtering.log,zeek.software.log,zeek.ssh.log,zeek.ssl.log,zeek.status.log,zeek.stderr.log,zeek.stdout.log,zeek.weird.log,zeek.x509.log
topic.creation.enable=true
key.ignore=true
schema.ignore=true
...
This can be invoked with bin/connect-standalone.sh. I realize that running (or attempting to run) tasks.max=24 when all the work is performed in a single process is not ideal. I know that using distributed mode would be a better alternative, but I'm unclear on the performance-optimal way to submit connectors in distributed mode. Namely:
In distributed mode, would I still want to submit just a single elasticsearch.properties through a single API call? Or would it be best to break this up into multiple configs + connectors (e.g. one for syslog, one for nginx, one for zeek.**) and submit them separately?
I understand that the number of tasks should equal the number of topics x the number of partitions, but what dictates the number of workers?
Is there anywhere in the documentation that walks through best practices for a situation such as this where there is a noticeable imbalance of throughput for different topics?
In distributed mode, would I still want to submit just a single elasticsearch.properties through a single API call?
It'd be a JSON file, but yes.
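For example, a minimal sketch of that JSON (topics list shortened here; the remaining settings from the .properties file carry over one-for-one into "config"), which you would POST to the /connectors endpoint of any worker (port 8083 by default):
{
  "name": "elasticsearch-sink",
  "config": {
    "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
    "tasks.max": "24",
    "topics": "syslog,nginx,zeek.conn.log,zeek.dns.log,zeek.http.log",
    "key.ignore": "true",
    "schema.ignore": "true"
  }
}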
what dictates the number of workers?
Up to you. JVM usage is one factor that you can monitor and scale on.
There isn't really any documentation that I am aware of.
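For context, the worker count is simply how many connect-distributed.sh processes you start with the same group.id; those workers form one Connect cluster and share the connector's tasks among themselves. A minimal worker config sketch, using the stock property names from connect-distributed.properties:
# connect-distributed.properties
bootstrap.servers=localhost:9092
# workers sharing this group.id form one Connect cluster
group.id=connect-cluster
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
# internal topics used to store connector configs, offsets, and task status
config.storage.topic=connect-configs
offset.storage.topic=connect-offsets
status.storage.topic=connect-status
config.storage.replication.factor=1
offset.storage.replication.factor=1
status.storage.replication.factor=1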
Our Apache Storm topology listens for messages from Kafka using KafkaSpout and, after doing a lot of mapping/reducing/enrichment/aggregation etc., finally inserts the data into Cassandra. There is another Kafka input on which we receive user queries for data; if the topology finds a response, it sends it to a third Kafka topic. Now we want to write an E2E test using JUnit in which we can programmatically insert data into the topology and then, by inserting a user-query message, assert at the third point that the response received for our query is correct.
To achieve this, we thought of starting EmbeddedKafka and CassandraUnit, replacing the actual Kafka and Cassandra with them, and then starting the topology in the context of this single JUnit test.
But this approach doesn't fit well with JUnit because it makes the tests too bulky. Starting Kafka, Cassandra and the topology is all time-consuming and uses a lot of resources. Is there anything in Apache Storm that can support the kind of testing we are planning to write?
There are a number of options you have here, depending on what kind of slowdown you can live with:
As you mentioned, you can start Kafka, Cassandra and the topology. This is the slowest option, and the "most realistic".
Start Kafka and Cassandra once, and reuse them for all the tests. You can do the same with the Storm LocalCluster. It is likely faster to clear Kafka/Cassandra between each test (e.g. deleting all topics) instead of restarting them.
Replace the Kafka spouts/bolts and the Cassandra bolt with stubs in the test. Storm has a number of tools built in for stubbing bolts and spouts, e.g. the FixedTupleSpout, FeederSpout, and the tracked-topology and completable-topology functionality in LocalCluster. This way you can insert some fixed tuples into the topology, and make assertions about which tuples were sent to the Cassandra bolt stub. There are examples of some of this functionality here and here.
Finally, you can of course unit test individual bolts. This is the fastest kind of test. You can use Testing.testTuple to create test tuples to pass to the bolt; a sketch follows below.
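A minimal sketch of such a test, assuming a hypothetical MyEnrichmentBolt and using Mockito to stand in for the collector (Testing.testTuple and MkTupleParam come from storm-core's testing utilities; package names are the org.apache.storm ones):
import org.apache.storm.Testing;
import org.apache.storm.task.OutputCollector;
import org.apache.storm.testing.MkTupleParam;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;
import org.junit.Test;

import static org.mockito.Mockito.*;

public class MyEnrichmentBoltTest {

    @Test
    public void enrichesAndAcksTuple() {
        // Build a fake input tuple with named fields, as if it came from the spout
        MkTupleParam param = new MkTupleParam();
        param.setFields("user", "event");
        Tuple input = Testing.testTuple(new Values("alice", "login"), param);

        // A mocked collector lets the test assert on what the bolt emits and acks
        OutputCollector collector = mock(OutputCollector.class);
        MyEnrichmentBolt bolt = new MyEnrichmentBolt(); // hypothetical bolt under test
        bolt.prepare(null, null, collector);

        bolt.execute(input);

        verify(collector).emit(eq(input), anyList()); // assumes the bolt anchors its output
        verify(collector).ack(input);
    }
}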
According to NiFi's homepage, it "supports powerful and scalable directed graphs of data routing, transformation, and system mediation logic".
I've been playing with NiFi for the last couple of months and can't help wondering why not also use it for scheduling batch processes.
Let's say I have a use case in which data flows into Hadoop, is processed by a series of Hive/MapReduce jobs, and is then exported to some external NoSQL database to be used by some system.
Using NiFi in order to ingest and flow data into Hadoop is a use case that NiFi was built for.
However, using NiFi in order to schedule the jobs on Hadoop ("Oozie-like") is a use case I haven't encountered others implementing, and since it seems completely possible to implement, I'm trying to understand whether there are reasons not to do so.
The gain of doing it all in NiFi is that one gets a visual representation of the entire course of the data, from source to destination, in one place. For complicated flows this is very important for maintenance.
In other words, my question is: are there reasons not to use NiFi as a scheduler/coordinator for batch processes? If so, what problems may arise in such a use case?
PS - I've read this: "Is Nifi having batch processing?" - but my question aims at a different sense of "batch processing in NiFi" than the one raised in the attached question.
You are correct that you would have a concise visual representation of all the steps in the process if the schedule triggers were present on the flow canvas, but NiFi was not designed as a scheduler/coordinator. Here is a comparison of some scheduler options.
Using NiFi to control scheduling feels like a "hammer" solution in search of a problem. It will reduce the ease of defining those schedules programmatically or interacting with them from external tools. Theoretically, you could define the schedule format, read the schedules into NiFi from a file, datasource, endpoint, etc., and use ExecuteStreamCommand, ExecuteScript, or InvokeHTTP processors to kick off the batch process. This feels like introducing an unnecessary intermediate step, however. If consolidation & visualization is what you are going for, you could have a monitoring flow segment ingest those schedule definitions from their native format (Oozie, XML, etc.) and display them in NiFi without making NiFi responsible for defining and executing the scheduling.
As part of the development of streamparse, we have a BatchingBolt that processes tuples in batches. It's intended for use with things like databases that are more performant when you send things in batches.
I've recently proposed switching our BatchingBolt implementation over from a timer/thread approach to tick tuples; however, one of my fellow devs pointed out that with our current approach the final batch will definitely get processed when a topology is shut down (and is in the inactive state), whereas that behaviour isn't explicitly documented anywhere for tick tuples.
Therefore, my question is this: does Storm continue sending tick tuples to bolts after a kill/deactivate has been issued, while it is in the waiting/inactive period? The topology lifecycle docs don't make it clear.
http://mail-archives.apache.org/mod_mbox/storm-user/201506.mbox/%3CCAF5108ijGpdMeax1LaKQ1MG6MSZQF=YM=vO8AacmN0RUiNfNkQ#mail.gmail.com%3E
AFAIK, "setup-tick!" is called from the start of the executor (which schedules a tick timer for each executor), and tick tuples will be emitted unless the worker is going to be shut down.
In short, your fellow dev is correct.
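For context, a tick-driven batching bolt typically looks something like the sketch below (a hypothetical class, not streamparse's actual BatchingBolt; Storm 2.x package names). Note that the cleanup() flush is only guaranteed in local mode, which is exactly why the shutdown behaviour above matters for the final batch.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.storm.Config;
import org.apache.storm.Constants;
import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Tuple;

public class TickBatchingBolt extends BaseRichBolt {
    private OutputCollector collector;
    private List<Tuple> batch;

    @Override
    public Map<String, Object> getComponentConfiguration() {
        // Ask Storm to send this bolt a tick tuple every 5 seconds
        Map<String, Object> conf = new HashMap<>();
        conf.put(Config.TOPOLOGY_TICK_TUPLE_FREQ_SECS, 5);
        return conf;
    }

    @Override
    public void prepare(Map<String, Object> conf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
        this.batch = new ArrayList<>();
    }

    @Override
    public void execute(Tuple tuple) {
        boolean isTick = Constants.SYSTEM_COMPONENT_ID.equals(tuple.getSourceComponent())
                && Constants.SYSTEM_TICK_STREAM_ID.equals(tuple.getSourceStreamId());
        if (isTick) {
            flush();          // write the buffered batch and ack its members
        } else {
            batch.add(tuple); // hold the tuple until the next tick
        }
    }

    private void flush() {
        // ... write `batch` to the external store in one call ...
        for (Tuple t : batch) {
            collector.ack(t);
        }
        batch.clear();
    }

    @Override
    public void cleanup() {
        // Called on shutdown in local mode; NOT guaranteed on a cluster
        flush();
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // no output streams in this sketch
    }
}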
I'm trying to figure out how to restore the state of a Storm bolt instance during failover. I can persist the state externally (DB or file system); however, once the bolt instance is restarted, I need to point it to the specific state of that bolt instance to recover it.
The prepare method of a bolt receives a context, documented here http://nathanmarz.github.io/storm/doc/backtype/storm/task/TopologyContext.html
What is not clear to me is: is there any piece of this context that uniquely identifies the specific bolt instance, so I can work out which persistent state to point to? Is that ID preserved during failover? Alternatively, is there any variable/object I can set for the specific bolt/instance that is preserved during failover? Any help appreciated!
br
Sib
P.S.
New to Stack Overflow, so please bear with me...
You can probably look at Trident. It's basically an abstraction built on top of Storm. The documentation says:
Trident has first-class abstractions for reading from and writing to stateful sources. The state can either be internal to the topology – e.g., kept in-memory and backed by HDFS – or externally stored in a database like Memcached or Cassandra
In the case of failover, it says:
Trident manages state in a fault-tolerant way so that state updates are idempotent in the face of retries and failures.
You can go through the documentation for any further clarification.
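For reference, a minimal sketch along the lines of the word-count example in the Trident docs: the state sits behind a StateFactory, so the in-memory map used here can be swapped for a Memcached- or Cassandra-backed factory in production while updates remain idempotent across replays.
import org.apache.storm.trident.TridentTopology;
import org.apache.storm.trident.operation.builtin.Count;
import org.apache.storm.trident.testing.FixedBatchSpout;
import org.apache.storm.trident.testing.MemoryMapState;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Values;

public class TridentStateExample {
    public static void main(String[] args) {
        // In-memory spout just for the sketch; a Kafka spout would be used in practice
        FixedBatchSpout spout = new FixedBatchSpout(new Fields("word"), 3,
                new Values("storm"), new Values("trident"), new Values("storm"));
        spout.setCycle(true);

        TridentTopology topology = new TridentTopology();
        topology.newStream("words", spout)
                .groupBy(new Fields("word"))
                // Trident tracks txids, so these state updates stay idempotent on replay
                .persistentAggregate(new MemoryMapState.Factory(), new Count(), new Fields("count"));
        // topology.build() returns a StormTopology that can be submitted as usual
    }
}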
Thanks (and credit) to the Storm user group!
http://mail-archives.apache.org/mod_mbox/storm-user/201312.mbox/%3C74083558E4509844944FF5CF2BA7B20F1060FD0E#ESESSMB305.ericsson.se%3E
In the original (core) Storm, both spouts and bolts are stateless. Storm can restart failed nodes, but it takes some effort to restore the state of those nodes. There are two solutions that I can think of:
If a message fails to process, Storm will replay it from the ROOT of the topology, and the replay logic has to be implemented by the user. So in this case I would put more state information (e.g. the ID of some external state storage and the ID of this task) in the messages.
Or you can use Trident. It provides a txid for each transaction and simplifies the storage process.
I'm OK with the first solution because my app doesn't require transactional operations and I have a better understanding of the original Storm (Storm generates simpler logs than Trident does).
You can use the task ID.
Task ids are assigned at topology creation and are static. If a task dies/restarts or gets reassigned somewhere else, it will still have the same id.
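For example, a bolt could build a stable key from its component id and task id in prepare() and use it to locate its persisted state. In this sketch, getThisComponentId()/getThisTaskId() are real TopologyContext methods, while StateStore and the "word" field are hypothetical placeholders for your own DB or file-system persistence:
import java.util.HashMap;
import java.util.Map;

import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Tuple;

public class RestorableBolt extends BaseRichBolt {
    private OutputCollector collector;
    private String stateKey;
    private Map<String, Long> counts;

    @Override
    public void prepare(Map<String, Object> conf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
        // Stable across restarts/reassignments, so the same task finds its own state
        this.stateKey = context.getThisComponentId() + "-" + context.getThisTaskId();
        this.counts = StateStore.load(stateKey);   // hypothetical external store (DB, file, ...)
        if (this.counts == null) {
            this.counts = new HashMap<>();
        }
    }

    @Override
    public void execute(Tuple tuple) {
        String word = tuple.getStringByField("word");
        counts.merge(word, 1L, Long::sum);
        StateStore.save(stateKey, counts);          // persist after each update (or batch it)
        collector.ack(tuple);
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // no output streams in this sketch
    }
}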