Worker node execution in Apache Storm

A Storm topology is deployed using the storm command on machine X, while the worker nodes run on machine Y. Once the topology has been deployed, it is ready to process tuples, and the workers are processing requests and responses.
Can anyone explain how a worker node identifies its work and data? I don't understand how a worker node can have access to code that was never deployed to it directly by the developer.
If the topology code is accessible to the worker nodes, where is it located, and how do the worker nodes execute it?

First off, you're asking a fairly complex question. I've been using Storm for a while and still don't understand much about how it works internally. Here is a good article talking about the internals of Storm. It's over two years old but should still be highly relevant. I believe that Netty is now used as the internal messaging transport; it's mentioned as experimental in the article.
As far as code being run on worker nodes goes, there is a configuration setting in storm.yaml,
storm.local.dir
When you upload the topology, I believe Storm copies the jar to that location, so every worker machine ends up with the necessary jar in its configured storm.local.dir. Even though you upload to only one machine, Storm distributes the jar to the necessary workers. (That's from memory and I'm not in a spot to test it at the moment.)
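For reference, the setting looks something like this in storm.yaml on each machine (the path below is just an example; use whatever directory suits your install):

    # storm.yaml -- Storm's local working directory for jars, configs, etc.
    storm.local.dir: "/var/storm"

If memory serves, the uploaded jar then ends up under a supervisor/stormdist/<topology-id>/ subdirectory of that path on each worker machine.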

Related

Deploy 2 different topologies on a single Nimbus with 2 different hardware profiles

I have 2 sets of Storm topologies in use today: one is up 24/7 and does its own work.
The other is deployed on demand and handles much bigger loads of data.
As of today, we have N supervisor instances, all on the same type of hardware (CPU/RAM). I'd like my on-demand topology to run on stronger hardware, but as far as I know, there's no way to control which supervisor is assigned to which topology.
So if I can't control it, it's possible that the 24/7 topology would assign one of the stronger workers to itself.
Any ideas if there is such a way?
Thanks in advance
Yes, you can control which topologies go where. This is the job of the scheduler.
You very likely want either the isolation scheduler or the resource aware scheduler. See https://storm.apache.org/releases/2.0.0-SNAPSHOT/Storm-Scheduler.html and https://storm.apache.org/releases/2.0.0-SNAPSHOT/Resource_Aware_Scheduler_overview.html.
The isolation scheduler lets you prevent Storm from running any other topologies on the machines you use to run the on-demand topology. The resource aware scheduler lets you set the resource requirements for the on-demand topology and preferentially assign the strong machines to it. See the priority section at https://storm.apache.org/releases/2.0.0-SNAPSHOT/Resource_Aware_Scheduler_overview.html#Topology-Priorities-and-Per-User-Resource.
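For example, the isolation scheduler is enabled in the Nimbus storm.yaml along these lines (the topology name and machine count below are made-up placeholders):

    # storm.yaml on the Nimbus node
    storm.scheduler: "org.apache.storm.scheduler.IsolationScheduler"
    # reserve 4 machines exclusively for the on-demand topology
    isolation.scheduler.machines:
        "my-on-demand-topology": 4

Topologies not listed under isolation.scheduler.machines share whatever machines are left over.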

Can the same ZooKeeper instance be used by a number of services?

Is one ZooKeeper installation good enough to be used by Hadoop, Kafka, and Storm clusters?
I want to deploy all of them in one test environment and try playing with those technologies.
Can I use one ZooKeeper installation for that? Could a dedicated znode be used for each of the services?
Yes, you can use a single ZooKeeper installation to support more than one cluster, and indeed different types of clusters. This has been the case for a long time - here's a link to a good discussion on it from 2009: http://zookeeper-user.578899.n2.nabble.com/Multiple-ZK-clusters-or-a-single-shared-cluster-td3277547.html
For testing this is fine (even running it on a single ZK server). For production use, though, you'll want at least a 3-node ensemble. And you should think carefully about running everything off a single cluster.
The reason is that if you run multiple Hadoop, Storm, and Kafka clusters off a single ZK cluster, that one set of servers becomes a single point of failure for all of your distributed systems. You can beef up the ZK setup with more than 3 servers (let's say 7) so that it can handle multiple failures, but if someone were to accidentally bring ZK down, all your distributed environments would come down too.
Some argue that you would be better off with more isolation between systems. I think it varies by use case, but I'd be careful about putting all of your eggs in one ZK basket.
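On the "dedicated znode per service" part of the question: the usual pattern is to give each service its own root path (chroot) in the shared ensemble. A minimal sketch, with example hostnames and paths:

    # Kafka (server.properties): chroot suffix on the connect string
    zookeeper.connect=zkhost:2181/kafka

    # Storm (storm.yaml): dedicated root znode
    storm.zookeeper.servers:
        - "zkhost"
    storm.zookeeper.root: "/storm"

Each service then keeps its znodes under its own subtree, so they can share the ensemble without stepping on each other.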

how YARN manages endless jobs like Storm

A couple of days ago Yahoo posted about the Storm-on-YARN project http://developer.yahoo.com/blogs/ydn/storm-yarn-released-open-source-143745133.html, which makes it possible to run Storm on YARN.
That's a big improvement; however, I have two questions regarding running tasks like Storm with YARN. Tasks like Storm don't have any limit on execution time: when you run Storm, you expect it to work for days or months, listening to a queue or whatever.
In other words, there is a class of tasks with no limit on execution time (which would effectively report 0% progress forever).
1) What about timeouts? A regular M/R job is killed when it hangs; how is that prevented here? I walked through the code but didn't find anything special.
2) Also, MR1 had a queue where jobs waited for execution: when the cluster finished one job, it picked up the next job from the queue. What about YARN? If I push an endless Storm-like job A and then job B, will job B ever be executed?
Sorry if my questions seem ridiculous; maybe I'm missing or misunderstanding something.
Hadoop's JobTracker was (and is) responsible for both cluster resources and the application lifecycle. YARN is only responsible for managing cluster resources; the application lifecycle is the responsibility of the application itself (its ApplicationMaster). As I understand it, that's why there's no built-in timeout to work around: YARN doesn't track progress or decide when an application is finished, so a Storm-like application can simply run forever, while the YARN scheduler queues and runs other applications alongside it within the resources it manages.
This change means that YARN can be used to manage any distributed paradigm. MR2 is of course the initial implementation (map/reduce over YARN), but you can see other implementations like the Storm-on-YARN project you mentioned, or Hortonworks' intention to integrate SQL into Hadoop, etc.
You can take a look at a library called Weave from Continuuity that provides a simple API for building distributed apps on YARN (the project later moved to Apache and was renamed Twill).
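To make question 2 concrete: whether job B runs alongside an endless job A is a matter of scheduler configuration, not of YARN killing anything. With the CapacityScheduler, for instance, you could put long-running and batch applications in separate queues so neither starves the other. A rough capacity-scheduler.xml sketch (queue names and percentages are made up):

    <!-- two top-level queues: long-running apps vs. batch jobs -->
    <property>
      <name>yarn.scheduler.capacity.root.queues</name>
      <value>longrun,batch</value>
    </property>
    <property>
      <name>yarn.scheduler.capacity.root.longrun.capacity</name>
      <value>50</value>
    </property>
    <property>
      <name>yarn.scheduler.capacity.root.batch.capacity</name>
      <value>50</value>
    </property>

With a setup like this, an endless Storm-like application submitted to the longrun queue gets its queue's share, and jobs submitted to batch still get theirs (the exact behavior depends on how elasticity and preemption are configured).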

Spring Batch Parallel Job Scaling

I'm currently working on a Spring Batch POC and have got a pretty good handle on most of the actual Spring Batch features. I've currently got a program that uses Spring Integration to receive an HttpRequest and use message channels to eventually send the job executions to the job launcher in a queue. What we'd really like to do is implement some kind of "scheduler/load balancer" (not quite sure what to call it) in front of the job launcher that will look at the currently running worker nodes and the size of the input file and decide how many worker nodes the job should be allowed. We would probably also want to be able to change the number of worker nodes a job has while it is running, to allow more jobs to run.
The idea is that we'd have a server running that could accept many job requests at any time, and a large cluster of machines that jobs will be partitioned onto. We'd like to be able to scale horizontally, so whenever the server isn't busy it can make full use of the hardware, as well as being able to make sure that small jobs don't get constantly blocked by larger jobs.
From my research it seems like we'd have to implement another framework to do this (do GridGain and Hadoop allow this?), but I figured I'd ask to see what people recommended to do something like this, and if there's a way to do it without implementing another large framework.
Sorry if anything is unclear or confusing, I'm just a lowly intern who started learning Spring and Spring Batch last month and I'm far from completely understanding everything, especially this scaling stuff. Just ask and I'll try to clear things up.
Thanks for any help!
Take a look at the 'spring-batch-integration' project under the spring-batch-admin umbrella project: https://github.com/SpringSource/spring-batch-admin
It has a number of examples of using Spring Integration to distribute work to other nodes. In particular, see the chunk and partition packages. Just swap out the Spring Integration channels for JMS channel adapters. By distributing work partitions via JMS, you can scale out the number of worker nodes as needed. There's a rough sketch of the wiring below.
There are a number of threads on this subject in the Spring Integration forum; search for 'PartitionHandler'.
Hope that helps.
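To give a flavor of that partition-handler wiring, here is a rough Java sketch (the channel objects, step name, and grid size are placeholders; in a real setup the request and reply channels would be bridged to JMS channel adapters):

    // Sketch only: distributing partitioned step executions to remote
    // workers via spring-batch-integration's MessageChannelPartitionHandler.
    import org.springframework.batch.integration.partition.MessageChannelPartitionHandler;
    import org.springframework.integration.channel.DirectChannel;
    import org.springframework.integration.channel.QueueChannel;
    import org.springframework.integration.core.MessagingTemplate;

    public class PartitionHandlerConfig {

        public MessageChannelPartitionHandler partitionHandler() {
            // Outbound side: partition requests go onto this channel
            // (bridge it to a JMS outbound adapter in real wiring).
            MessagingTemplate template = new MessagingTemplate();
            template.setDefaultChannel(new DirectChannel());

            MessageChannelPartitionHandler handler = new MessageChannelPartitionHandler();
            handler.setMessagingOperations(template);
            // Inbound side: workers' replies arrive on this pollable channel.
            handler.setReplyChannel(new QueueChannel());
            handler.setStepName("workerStep"); // the step each worker executes
            handler.setGridSize(8);            // number of partitions (placeholder)
            return handler;
        }
    }

The grid size here is fixed; a dynamic "scheduler/load balancer" like the one described in the question could compute it per job from the input-file size and the number of live workers before launching.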

Can Hadoop be used as a distributed queue server?

I'm thinking of learning Hadoop but am not sure it'll solve my problem. Basically I have a job with a queue and a bunch of workers. Each worker does a small amount of work and then either saves the results (if successful) or sends it back to the queue for further processing.
My problem is scalable but is limited by network bandwidth (on EC2), which will never keep up with multiple CPUs crunching the data. I thought maybe I could run my jobs in Java on a Hadoop cluster and have Hadoop distribute the work via a queue. Would this be a better approach? Am I correct in assuming Hadoop can manage a queue and try to run jobs as locally as possible, to minimize bandwidth usage and maximize CPU usage?
My program is very CPU-bound, but most of my recent performance problems are related to passing work over the network (I want to keep the work as local as possible). The difference between the Hadoop tutorials I've seen and my problem is that in the tutorials all the work is known in advance, while my program is constantly generating new work for itself (until it's finally done). Would this work, and would it help me reduce the impact of passing messages over a network?
Sorry, I'm new to Hadoop and wanted to know if it could solve my problem.
Hadoop is all about running jobs in a batch-like mode over a large data set. It's hard to get it to have some sort of queue-like behavior, but not impossible. There is Apache ZooKeeper, which will give you the synchronization primitives to build a queue if you need one.
There are plenty of tools to solve the problem it looks like you are trying to solve. I suggest taking a look at RabbitMQ. If you use Python, Celery is quite fantastic.
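Since the jobs here are in Java, a RabbitMQ worker loop is only a few lines. A hedged sketch (the queue name, broker host, and process step are placeholders), showing the "do a unit of work, requeue follow-up work" pattern from the question:

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;
    import com.rabbitmq.client.DeliverCallback;
    import java.nio.charset.StandardCharsets;

    public class Worker {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost");            // placeholder broker host
            Connection conn = factory.newConnection();
            Channel channel = conn.createChannel();
            channel.queueDeclare("work", true, false, false, null);
            channel.basicQos(1);                     // one task in flight per worker

            DeliverCallback onDeliver = (tag, delivery) -> {
                String task = new String(delivery.getBody(), StandardCharsets.UTF_8);
                String followUp = process(task);     // your CPU-bound step
                if (followUp != null) {
                    // Not finished: push the follow-up work back on the queue.
                    channel.basicPublish("", "work", null,
                            followUp.getBytes(StandardCharsets.UTF_8));
                }
                channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
            };
            channel.basicConsume("work", false, onDeliver, tag -> {});
        }

        // Placeholder: returns follow-up work, or null when the task is done.
        private static String process(String task) { return null; }
    }

Note that this doesn't address data locality by itself; the payload still crosses the network, so you'd want tasks to carry references to data rather than the data itself.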
