We have a requirement to disable a spout for a specific interval (9:00 p.m. to 9:00 a.m.) every day. Currently we have written code in the spout that checks whether the current time lies in that window and, if so, does nothing, but with this approach the nextTuple method is still called continuously. Is there a better way to do this (using config, etc.)?
There is no better way. Even if the spout is called over and over again, Storm applies a sleep penalty whenever a nextTuple() call emits no output, so a busy-wait situation is avoided.
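For reference, a minimal sketch of that time-window check (the window bounds match the question; the helper class name is illustrative):

import java.time.LocalTime;

// Helper used at the top of nextTuple(): returns true between 9:00 p.m. and 9:00 a.m.
public final class PauseWindow {
    private static final LocalTime PAUSE_START = LocalTime.of(21, 0);
    private static final LocalTime PAUSE_END = LocalTime.of(9, 0);

    public static boolean isPaused(LocalTime now) {
        // The window wraps around midnight, so it is the union of two ranges.
        return !now.isBefore(PAUSE_START) || now.isBefore(PAUSE_END);
    }
}

nextTuple() would then start with if (PauseWindow.isPaused(LocalTime.now())) return; and emit nothing, letting the wait strategy sleep as described above.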
If you want to tune the waiting penalty, you can implement your own ISpoutWaitStrategy and register it for a topology via the parameter topology.spout.wait.strategy (see defaults.yaml).
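As an illustration, here is a minimal sketch of a custom wait strategy. It assumes the ISpoutWaitStrategy interface as it exists in Storm 1.x (prepare plus emptyEmit); the package and interface changed in later Storm versions, so check your version before copying this.

import java.util.Map;
import org.apache.storm.spout.ISpoutWaitStrategy;

// Sleeps longer the more consecutive empty nextTuple() calls there have been.
public class BackoffSpoutWaitStrategy implements ISpoutWaitStrategy {

    @Override
    public void prepare(Map conf) {
        // read any custom settings from the topology config here
    }

    @Override
    public void emptyEmit(long streak) {
        try {
            // back off up to one second; tune as needed
            Thread.sleep(Math.min(1000L, streak * 10L));
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}

You would then register it in the topology config, e.g. conf.put("topology.spout.wait.strategy", BackoffSpoutWaitStrategy.class.getName());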
What Matthias has suggested will work well. Alternatively, you can consider deactivating the topology for this duration. The Nimbus client can be used to deactivate the topology programmatically; nextTuple is not called on a spout while the topology is deactivated. However, this turns off all the spouts in the topology, which you may not want.
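A hedged sketch of that approach using the Nimbus Thrift client (the topology name is a placeholder, and this assumes the Storm config on the classpath points at your Nimbus; package names are as in Storm 1.x/2.x):

import java.util.Map;
import org.apache.storm.generated.Nimbus;
import org.apache.storm.utils.NimbusClient;
import org.apache.storm.utils.Utils;

public class TopologySwitch {
    public static void main(String[] args) throws Exception {
        Map conf = Utils.readStormConfig();                 // reads storm.yaml plus defaults
        Nimbus.Client nimbus = NimbusClient.getConfiguredClient(conf).getClient();
        String topologyName = "my-topology";                // placeholder
        if (args.length > 0 && "activate".equals(args[0])) {
            nimbus.activate(topologyName);                  // resume nextTuple calls
        } else {
            nimbus.deactivate(topologyName);                // nextTuple stops being called
        }
    }
}

Something like this could be triggered by a scheduler at 9:00 p.m. and 9:00 a.m.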
Related
Say I have a simple Apache Storm topology with a spout (set to a parallelism of 2) running on two separate nodes. How can I write a method that will be run once, and only once, at the start of the topology, before any processing of tuples has begun?
Any implementation of a singleton/static class, or synchronized method alone will not work, as the two instances are running on separate nodes.
Perhaps there are some Storm methods that I can use to decide if I'm the first Spout to be instantiated, and run only then? I tried playing around with the getThisTaskId() & getThisWorkerTasks() methods, but was unsuccessful.
NOTE: The parallelism of 2 is to keep things simple. A solution should work for any number of nodes/workers.
Edit: Thought of an easier solution. I'll leave the original answer below in case it is helpful.
You can use TopologyContext.getThisTaskIndex to do this. If you make your spout's open method run the code only when TopologyContext.getThisTaskIndex() == 0, your code will run only once, before any tuples are emitted.
If the worker that ran this code crashes, the code will be run again when the spout instance with task index 0 is restarted. To fix this, you can use Zookeeper to store state that should carry over across restarts, e.g. put a flag in Zookeeper once the only-once code has run, and have the spout's open method check that the flag is not set before running the code.
You can use TopologyContext.getStormId to get a constant unique string to identify the topology, so you can tell whether the flag was set by this topology or a previous deployment.
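A minimal sketch of that check (runOnceSetup is a hypothetical placeholder for whatever must run exactly once):

import java.util.Map;
import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichSpout;

public class MySpout extends BaseRichSpout {

    @Override
    public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
        if (context.getThisTaskIndex() == 0) {
            // Only the spout task with index 0 runs this, so it runs once per deployment.
            // Guard with a flag in Zookeeper (keyed by context.getStormId()) if it must
            // also survive worker restarts without re-running.
            runOnceSetup();
        }
        // ... normal open logic ...
    }

    private void runOnceSetup() {
        // hypothetical placeholder for the code that must run exactly once
    }

    @Override
    public void nextTuple() { /* ... */ }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) { /* ... */ }
}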
Original answer:
The easiest way to run some code only once on deployment of a topology is to call it when you submit the topology. You can call the only-once code at the same time as you wire your topology with TopologyBuilder. This will only get run once. The downside is that it will run on the machine you're calling storm jar from.
If for some reason you can't do it this way, or you need to run the code from one of the worker nodes, there isn't anything built into Storm to let you do this. The reason there isn't such a mechanism is that it requires extra coordination between the worker JVMs, and I don't think anyone has needed something like this.
The best option for you would probably be to look at Zookeeper/Curator to do this coordination (see https://curator.apache.org/curator-recipes/index.html). This should allow you to make only one worker in the cluster run your code. You'll have to consider what should happen if the worker chosen to run your code crashes/stalls.
Storm already uses Zookeeper for coordination, so you can just connect to that cluster.
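A hedged sketch of that coordination using Curator's LeaderLatch recipe (the Zookeeper connect string, latch path, and timeout are placeholders; you could read the connect string from the storm config instead of hard-coding it):

import java.util.concurrent.TimeUnit;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.leader.LeaderLatch;
import org.apache.curator.retry.ExponentialBackoffRetry;

public final class RunOnceCoordinator {

    // Runs onlyOnceCode on the single task that wins the leader election.
    public static void runAsLeader(String zkConnect, String latchPath, Runnable onlyOnceCode) throws Exception {
        CuratorFramework client = CuratorFrameworkFactory.newClient(zkConnect, new ExponentialBackoffRetry(1000, 3));
        client.start();
        try (LeaderLatch latch = new LeaderLatch(client, latchPath)) {
            latch.start();
            if (latch.await(30, TimeUnit.SECONDS)) {
                // This task became leader; the other tasks time out and skip the code.
                onlyOnceCode.run();
            }
        } finally {
            client.close();
        }
    }
}

Note that if the leader releases the latch quickly, another task could still acquire it within its own wait window, so in practice you'd also record a "done" flag (e.g. in Zookeeper) and check it before running the code.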
I am reading the Storm Applied book and found the following code snippet in it:
LocalCluster lc = new LocalCluster();
lc.submitTopology("GitHub-commit-count-topology", config, topology);
Utils.sleep(TEN_MINUTES);
lc.killTopology("GitHub-commit-count-topology");
lc.shutdown();
So this code submits the topology for execution, waits a fixed 10 minutes, and then kills the topology. But this seems odd. How can I have submitTopology wait for the topology to complete, and only kill and shut down once it has completed?
In Akka Streams, for example, we get a Future[Done] and just wait for that future to complete (rather than waiting a fixed 10 minutes).
You can do this with https://github.com/apache/storm/blob/master/storm-server/src/main/java/org/apache/storm/Testing.java#L376.
The reason this isn't used in some cases is that it requires every spout in the topology to implement the CompletableSpout interface https://github.com/apache/storm/blob/4137328b75c06771f84414c3c2113e2d1c757c08/storm-client/src/jvm/org/apache/storm/testing/CompletableSpout.java.
Most Storm spouts never reach a point where they're "done" (since it's a stream processing framework, not a batch processing framework), so there's no way to tell when the topology is finished. For example, if you're consuming messages from a Kafka topic, the producers may at any point add more messages to the topic, so how will the consumer determine it is finished consuming?
CompletableSpout exists mostly to ease testing, because it's then possible for a spout to say whether it is done. The completeTopology method I linked can then use this extra feature to tell whether all spouts in the topology are "done", and can stop the topology after that.
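For reference, a rough sketch of how completeTopology is typically used in a test. The spout/bolt ids and mock data here are illustrative, and exact class locations can differ between Storm versions, so treat this as a sketch rather than a copy-paste recipe:

import java.util.Map;
import org.apache.storm.Config;
import org.apache.storm.ILocalCluster;
import org.apache.storm.Testing;
import org.apache.storm.generated.StormTopology;
import org.apache.storm.testing.CompleteTopologyParam;
import org.apache.storm.testing.MkClusterParam;
import org.apache.storm.testing.MockedSources;
import org.apache.storm.testing.TestJob;
import org.apache.storm.tuple.Values;

public class TopologyIT {

    public void runTopologyToCompletion(final StormTopology topology) {
        MkClusterParam clusterParam = new MkClusterParam();
        clusterParam.setSupervisors(2);

        Testing.withSimulatedTimeLocalCluster(clusterParam, new TestJob() {
            @Override
            public void run(ILocalCluster cluster) throws Exception {
                // Feed the (completable) spout a fixed set of tuples.
                MockedSources mockedSources = new MockedSources();
                mockedSources.addMockData("spout-id", new Values("commit-1"), new Values("commit-2"));

                CompleteTopologyParam completeParam = new CompleteTopologyParam();
                completeParam.setMockedSources(mockedSources);
                completeParam.setStormConf(new Config());

                // Blocks until all spouts report completion, then returns the emitted tuples.
                Map result = Testing.completeTopology(cluster, topology, completeParam);
                // e.g. assert on Testing.readTuples(result, "bolt-id")
            }
        });
    }
}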
If the spout you're using in a test doesn't implement CompletableSpout (which most spouts don't), there's no way to tell when the topology is finished in general. In many cases you can still do better than the example you linked, e.g. if my topology is supposed to write 10 messages to a queue in the test, I can make the test end once 10 messages have been written to the queue.
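As an illustration of that last point, a small helper a test could use to wait for an expected number of output records instead of sleeping a fixed time (countRecords stands in for however you query your queue or sink):

import java.util.concurrent.TimeUnit;
import java.util.function.LongSupplier;

public final class TestAwait {

    // Polls until the sink reports at least `expected` records, or fails after `timeoutMs`.
    public static void awaitRecords(LongSupplier countRecords, long expected, long timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (countRecords.getAsLong() >= expected) {
                return;                       // the topology has produced everything we expect
            }
            TimeUnit.MILLISECONDS.sleep(100); // poll interval
        }
        throw new AssertionError("Expected " + expected + " records within " + timeoutMs + " ms");
    }
}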
To relate this to Akka Streams: I'm not really familiar with them, but looking at the introductory documentation, you could consider CompletableSpouts to be similar to bounded Sources (e.g. a Source(1 to 100)), while "normal" spouts are unbounded Sources (e.g. a Source.repeat(1)).
We have a use case where we do not want to run a Storm topology continuously. Instead, there is a set of inputs (10K+) that should be processed at a specified time; the spout continuously emits these inputs, which are processed by the rest of the bolts in the topology. Once all the inputs are processed, there is nothing left to emit from nextTuple in my spout.
At that point we want the topology to go to sleep and restart the process every night at 12:00 a.m.
Is there any property in the Storm config to run the topology once a day, sleep after processing is done, and start again at the specified time?
I'm not aware of a feature like what you're asking for. Storm isn't a batch processing system; it's meant to run continuously. Consider whether Storm is a good fit for this use case.
That said, you should be able to implement what you want. You could put in an "I'm done" message at the end of your spout input. When the spout hits that message and all other pending messages are acked, it could use the Nimbus client to kill or deactivate the topology (depending on whether you want to kill or deactivate), see https://stackoverflow.com/a/37134473/8845188. Then the final step would be using your favorite scheduling software to resubmit/reactivate the topology every day at midnight.
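A rough sketch of the spout side of that idea (everything here is illustrative: loadTodaysInput() is a placeholder, failed tuples are not handled, and whether to kill or deactivate is up to you):

import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;
import org.apache.storm.Config;
import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichSpout;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Values;
import org.apache.storm.utils.NimbusClient;

public class DailyBatchSpout extends BaseRichSpout {
    private SpoutOutputCollector collector;
    private Map stormConf;
    private Iterator<String> input;                       // the finite batch for this run
    private final AtomicLong pending = new AtomicLong();

    @Override
    public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
        this.collector = collector;
        this.stormConf = conf;
        this.input = loadTodaysInput();                   // hypothetical loader
    }

    @Override
    public void nextTuple() {
        if (input.hasNext()) {
            String record = input.next();
            pending.incrementAndGet();
            collector.emit(new Values(record), record);   // the record doubles as the message id
        }
    }

    @Override
    public void ack(Object msgId) {
        // Once the input is exhausted and every emitted tuple is acked, stop the topology.
        if (pending.decrementAndGet() == 0 && !input.hasNext()) {
            try {
                NimbusClient.getConfiguredClient(stormConf).getClient()
                        .killTopology((String) stormConf.get(Config.TOPOLOGY_NAME));
            } catch (Exception e) {
                // log and retry as appropriate
            }
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("record"));
    }

    private Iterator<String> loadTodaysInput() {
        throw new UnsupportedOperationException("placeholder");
    }
}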
This could be determined by checking whether a particular bolt is still processing, whether it still has tuples queued up to process, or something along those lines.
What I want, in summary, is to know, in any way, whether a topology has finished its work or not.
I know it sounds contradictory, since a topology should never really be "done", but I'm using it for tests, and instead of a non-stop stream of data I start with a finite amount of data.
To check the running topologies and their statuses you can run:
{dir/to/storm}/bin/storm list
You can also navigate to the running Storm UI and check the topologies/logs from there.
If you want to check whether work has been performed on a tuple, you can add your own logging. I have added some logic to print out how many tuples are processed each second, which I find useful.
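For example, a minimal sketch of a bolt that logs its processing rate roughly once per second (using SLF4J, which Storm workers already provide; the class name is illustrative):

import java.util.Map;
import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Tuple;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class RateLoggingBolt extends BaseRichBolt {
    private static final Logger LOG = LoggerFactory.getLogger(RateLoggingBolt.class);

    private OutputCollector collector;
    private long count = 0;
    private long windowStart = System.currentTimeMillis();

    @Override
    public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void execute(Tuple tuple) {
        // ... actual processing would go here ...
        count++;
        long now = System.currentTimeMillis();
        if (now - windowStart >= 1000) {
            LOG.info("Processed {} tuples in the last second", count);
            count = 0;
            windowStart = now;
        }
        collector.ack(tuple);
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // no output streams in this example
    }
}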
You can check it from the Storm UI, which I think is the easiest way.
Right now Storm spouts have an open method to configure them and bolts have a prepare method. Is there any way to make all the spout instances wait until all the prepare methods of the bolts listening to them have finished?
I have a case where I would like to pass some config info to the bolts on the fly (since this config info changes all the time). I've read in some places that we should use Zookeeper or an in-memory key-value store like Redis to do this. My worry, though, is: what happens if the bolts aren't ready to process data from the spouts yet, and the spouts start emitting tuples? Is there a way to make the spouts wait for an update from the bolts saying they're ready?
I found a slightly more elegant solution for this (I think). The problem was that certain bolts needed config info in order to process incoming tuples. I made use of Storm's ability to replay tuples, so now my bolts listen for updates from one spout and tuples from the other. As long as I don't receive updates, I keep failing the tuples and having the spout replay them after a configurable amount of time.
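A minimal sketch of that pattern (the component id "config-spout", the field names, and the config type are placeholders; the data spout must be reliable, i.e. emit tuples with message ids, for the replay to work):

import java.util.Map;
import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Tuple;

public class ConfigAwareBolt extends BaseRichBolt {
    private OutputCollector collector;
    private volatile Map<String, String> currentConfig;   // null until the first update arrives

    @Override
    public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void execute(Tuple tuple) {
        if ("config-spout".equals(tuple.getSourceComponent())) {
            // Update tuple from the configuration spout.
            currentConfig = (Map<String, String>) tuple.getValueByField("config");
            collector.ack(tuple);
            return;
        }
        if (currentConfig == null) {
            // Not ready yet: fail the data tuple so its spout replays it later.
            collector.fail(tuple);
            return;
        }
        // ... process the data tuple using currentConfig ...
        collector.ack(tuple);
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) { }
}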
Yes, you can use Redis to store your configuration then read it from the prepare method.
The prepare method is invoked by the worker process, which starts processing tuples after it finishes. Actually, I think no tuples are emitted until all components of a worker process are ready. See http://nathanmarz.github.io/storm/doc-0.8.1/index.html
Finally, you can have an additional spout that looks for configuration changes. Then, if a newer configuration is available, it is sent to your bolts via named streams.
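A hedged sketch of that configuration spout (the stream name, poll interval, and fetchLatestConfig() helper are placeholders):

import java.util.Map;
import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichSpout;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Values;
import org.apache.storm.utils.Utils;

public class ConfigSpout extends BaseRichSpout {
    private SpoutOutputCollector collector;
    private Map<String, String> lastSeen;

    @Override
    public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void nextTuple() {
        Map<String, String> latest = fetchLatestConfig();   // e.g. read from Redis or Zookeeper
        if (latest != null && !latest.equals(lastSeen)) {
            lastSeen = latest;
            // Emit on a dedicated named stream so bolts can tell config apart from data.
            collector.emit("config", new Values(latest));
        }
        Utils.sleep(5000);   // poll every few seconds
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declareStream("config", new Fields("config"));
    }

    private Map<String, String> fetchLatestConfig() {
        return null;   // placeholder for the actual lookup
    }
}

When wiring the topology, the bolt would then subscribe to both streams, e.g. with .allGrouping("config-spout", "config") in addition to its normal grouping on the data spout.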
You don't have to worry about this. The Storm framework loads bolts before spouts, and it loads the bolts in reverse order: bolts towards the end of the topology are loaded before the bolts in the middle, and the spout is loaded last.