There is a problem that puzzles me: how should I count the execution times of bolts and spouts in Storm? I have tried using a ConcurrentHashMap (considering multithreading), but that cannot work across multiple machines. Can you help me solve this problem?
Considering your question, I think you are trying to keep track of the number of tuples executed, not the amount of time a bolt or spout takes to execute one tuple.
You can use metrics with Graphite for visualisation; it gives you time-series data.
A database can also be used for the same purpose.
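For example, Storm's built-in metrics API fits this well: each task keeps its own counter and periodically publishes it to a metrics consumer, which can forward the values to Graphite or a database, so no cross-machine shared map is needed. A minimal sketch, assuming 1.x-style org.apache.storm package names (the bolt and metric name are illustrative):

```java
import java.util.Map;

import org.apache.storm.metric.api.CountMetric;
import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Tuple;

public class CountingBolt extends BaseRichBolt {
    private transient CountMetric executeCount;
    private OutputCollector collector;

    @Override
    public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
        // Per-task counter, published every 60 seconds; a registered metrics
        // consumer aggregates values from all tasks across the cluster.
        executeCount = context.registerMetric("execute_count", new CountMetric(), 60);
    }

    @Override
    public void execute(Tuple tuple) {
        executeCount.incr();   // one more tuple executed on this task
        // ... actual processing goes here ...
        collector.ack(tuple);
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // this sketch emits nothing downstream
    }
}
```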
I want to use Apache Storm in one of my projects. I have a concern regarding its parallelism technique: by definition, we can give hints on how many instances of the components we want to run.
For example, if there are 4 executors running the same spout, which is itself supposed to read data from an external source and transform it into tuples, how does Storm ensure that no two (or more) spout instances get the same data?
Help would be appreciated.
I am a beginner with Storm. Storm's creator came up with a very impressive method for tracking every tuple through the bolts of a topology, which is using XOR.
But I have started wondering why he did not just use a counter: when a bolt successfully processes a tuple, the counter is decremented, and when the counter reaches 0, the whole task is complete.
Thanks
I believe one can reason about why counters are not only inefficient but also an incorrect acker-tracking mechanism in an always-running topology.
A Storm tuple topology can in itself be a complex DAG. When a bolt receives acks from multiple downstream sources, what is it to do with the counters? Should it increment them, should it always decrement them? In what order?
Storm tuples have random message IDs, and counters are finite. A topology runs forever, emitting billions of tuples. How would you map the 673,686,557th tuple to a counter ID? With XOR, you only have a single state to maintain and broadcast.
XOR operations are hardware instructions that execute extremely efficiently. Counters are longs, which require large amounts of storage when kept per tuple. They have overflow problems and defeat the original requirement of a solution with a low space overhead.
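To make the XOR idea concrete, here is a toy sketch of the bookkeeping (not Storm's actual acker code): every tuple ID is XORed into a single 64-bit value once when the tuple is anchored and once when it is acked, so the value returns to 0 exactly when the whole tree has been acknowledged.

```java
import java.util.Random;

public class XorAckerSketch {
    public static void main(String[] args) {
        Random rng = new Random();
        long ackVal = 0L;

        // Spout emits the root tuple.
        long rootId = rng.nextLong();
        ackVal ^= rootId;

        // A bolt anchors two child tuples to the root, then acks the root;
        // all three IDs are folded in with one XOR update.
        long child1 = rng.nextLong();
        long child2 = rng.nextLong();
        ackVal ^= child1 ^ child2 ^ rootId;

        // Downstream bolts ack the children, in any order.
        ackVal ^= child2;
        ackVal ^= child1;

        // 0 means the whole tuple tree completed, with no per-tuple counters.
        System.out.println("tree complete? " + (ackVal == 0));
    }
}
```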
I have already read the related material about Storm parallelism, but some things are still unclear to me. Suppose we take tweet processing as an example. Generally, what we are doing is retrieving the tweet stream, counting the number of words in each tweet, and writing the counts to a local file.
My question is how to understand the parallelism value of spouts as well as bolts. Within builder.setSpout and builder.setBolt we can assign the parallelism value. But in the case of counting tweet words, is it correct that only one spout should be set? Would more than one spout just be copies of the first, with identical tweets flowing into each of them? If so, what is the value of setting more than one spout?
Another unclear point is how work is assigned to bolts. Is the parallelism mechanism achieved by Storm finding a currently available bolt instance to process the next emitted tuple? I revised the basic tweet-counting code so that the final counts are written to a specific directory, but all the results actually end up combined in one file on Nimbus. Does that mean that after the supervisors process the data, all results are sent back to Nimbus? If so, what is the communication mechanism between Nimbus and the supervisors?
I really want to figure these out! Any help is appreciated!
Setting the parallelism of a spout larger than one requires that the user code do different things in different instances; otherwise (as you mentioned already), data is just sent through the topology multiple times. For example, you can have a list of ports you want to listen to (or a list of different Kafka topics). You then need to ensure that different instances listen to different ports or topics. This can be achieved in the open(...) method by looking at topology metadata such as the instance's own task ID and the degree of parallelism. As each instance has a unique ID, you can partition your ports/topics so that each instance picks different entries from the overall list.
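A minimal sketch of that partitioning (the port list is a made-up example; getThisTaskIndex() and getComponentTasks(...) are the relevant TopologyContext lookups):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;

import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichSpout;

public class PartitionedSpout extends BaseRichSpout {
    // Hypothetical overall list of sources; could equally be Kafka topics.
    private static final List<Integer> ALL_PORTS = Arrays.asList(9000, 9001, 9002, 9003);
    private List<Integer> myPorts;

    @Override
    public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
        int taskIndex = context.getThisTaskIndex(); // 0 .. numTasks-1
        int numTasks = context.getComponentTasks(context.getThisComponentId()).size();

        // Round-robin partition of the source list over all spout instances,
        // so no two instances ever listen to the same port.
        myPorts = new ArrayList<>();
        for (int i = taskIndex; i < ALL_PORTS.size(); i += numTasks) {
            myPorts.add(ALL_PORTS.get(i));
        }
        // ... open connections for myPorts here ...
    }

    @Override
    public void nextTuple() {
        // ... poll myPorts and emit tuples here ...
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // ... declare the fields of emitted tuples here ...
    }
}
```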
About parallelism: this depends on the connection pattern you use when plugging your topology together. For example, using shuffleGrouping results in a round-robin distribution of the emitted tuples over the consuming bolt instances. In this case, Storm does not "look" at whether a bolt instance is available for processing; tuples are simply transferred and buffered at the receiver if necessary.
Furthermore, Nimbus and the supervisors only exchange metadata. There is no dataflow (i.e., no flow of tuples) between them.
In some cases, such as Kafka's consumer groups, you get queue behaviour, which means that if one consumer reads a message from the queue, the other consumers will read different messages from it.
This distributes the read load from the queue across all workers.
In those cases you can have multiple spouts reading from the queue.
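As a sketch of that setup using the storm-kafka-client module (the broker address, topic, and group ID are placeholders), multiple KafkaSpout instances that share one consumer group each receive a disjoint subset of the topic's partitions:

```java
import org.apache.storm.kafka.spout.KafkaSpout;
import org.apache.storm.kafka.spout.KafkaSpoutConfig;
import org.apache.storm.topology.TopologyBuilder;

public class KafkaGroupExample {
    public static void main(String[] args) {
        // Broker address, topic, and group ID below are placeholders.
        KafkaSpoutConfig<String, String> spoutConfig =
            KafkaSpoutConfig.builder("kafka:9092", "tweets")
                .setProp("group.id", "storm-tweets")  // shared consumer group
                .build();

        TopologyBuilder builder = new TopologyBuilder();
        // Four spout instances in one group: Kafka assigns each a disjoint
        // set of the topic's partitions, so no message is read twice.
        builder.setSpout("kafka-spout", new KafkaSpout<>(spoutConfig), 4);
    }
}
```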
I want to fire multiple web requests in parallel and then aggregate the data in a Storm topology. Which of the following ways is preferred?
1) Create multiple threads within a bolt.
2) Create multiple bolts and a merging bolt to aggregate the data.
I would like to create multiple threads within a bolt, because merging data in another bolt is not a simple process. But I see there are some concerns around that, which I found on the internet:
https://mail-archives.apache.org/mod_mbox/storm-user/201311.mbox/%3CCAAYLz+pUZ44GNsNNJ9O5hjTr2rZLW=CKM=FGvcfwBnw613r1qQ#mail.gmail.com%3E
but I didn't get a clear reason why not to create multiple threads. Any pointers will help.
On a side note, does that mean I should not use Java 8's parallel streams either, as mentioned in https://docs.oracle.com/javase/tutorial/collections/streams/parallelism.html?
Increase the number of tasks for the bolt; it is like spawning multiple instances of the same bolt. Also increase the number of executors (threads) so the tasks are handled evenly.
Make sure #executors <= #tasks; Storm will do the rest for you.
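For illustration, the wiring might look like this (UrlSpout and FetchBolt are hypothetical stand-ins for your own components):

```java
import org.apache.storm.Config;
import org.apache.storm.topology.TopologyBuilder;

public class ParallelismWiring {
    public static void main(String[] args) {
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("urls", new UrlSpout());            // UrlSpout is hypothetical
        builder.setBolt("fetch", new FetchBolt(), 4)         // 4 executors (threads)
               .setNumTasks(8)                               // 8 bolt instances (tasks)
               .shuffleGrouping("urls");                     // round-robin distribution

        Config conf = new Config();
        conf.setNumWorkers(2); // spread the executors over two worker JVMs
        // StormSubmitter.submitTopology("fetch-topology", conf, builder.createTopology());
    }
}
```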
As per the given link, the capacity of a bolt is the percentage of time spent executing. Therefore this value should always be smaller than 1, but in my topology I have observed it going over 1 in some cases. How is this possible, and what does it mean?
http://i.stack.imgur.com/rwuRP.png
It means that your bolt is running over capacity, and your topology will fall behind in processing if the bolt is unable to catch up.
When you see a bolt that is running over (or close to) capacity, that is your clue that you need to start tuning performance and tweaking parallelism.
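As for how the number can exceed 1: as far as I know, the Storm UI derives capacity as roughly the executed count times the average execute latency, divided by the length of the measurement window, and the executed count is extrapolated from sampled statistics (topology.stats.sample.rate defaults to 0.05), so the estimate can overshoot. A small sketch of the arithmetic with made-up numbers:

```java
public class CapacitySketch {
    public static void main(String[] args) {
        long executed = 1_300_000;       // tuples executed in the window
                                         // (extrapolated from ~5% sampling)
        double executeLatencyMs = 0.5;   // average execute latency in ms
        long windowMs = 10 * 60 * 1000;  // 10-minute reporting window

        // Fraction of the window the bolt spent executing; sampling error
        // in `executed` is what lets this nudge past 1.0.
        double capacity = (executed * executeLatencyMs) / windowMs;
        System.out.printf("capacity = %.2f%n", capacity); // prints 1.08
    }
}
```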
Some things you can do:
Increase the parallelism of the bolt by increasing the number of executors & tasks.
Do some simple profiling within your slow bolts to see if you have a performance problem.
You can get more detail about what happens in your bolts using Storm metrics:
https://storm.apache.org/documentation/Metrics.html
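For example, registering the bundled LoggingMetricsConsumer (a sketch assuming a 1.x-style classpath) makes all built-in and custom metrics appear in the worker logs:

```java
import org.apache.storm.Config;
import org.apache.storm.metric.LoggingMetricsConsumer;

public class MetricsSetup {
    public static Config configWithMetrics() {
        Config conf = new Config();
        // Route every built-in and custom metric to the worker logs;
        // the second argument is the consumer's parallelism.
        conf.registerMetricsConsumer(LoggingMetricsConsumer.class, 1);
        return conf;
    }
}
```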