What is an AppHarbor worker... exactly?

The AppHarbor pricing page defines a worker as something you increase to "Improve the reliability and responsiveness of your website". But in trying to compare prices with other providers such as AWS, I am having a hard time pinning down what exactly a worker is.
Anyone have a better definition than "more is better"?

From this thread:
AppHarbor is a multitenant platform and we're running multiple applications on each application server. A worker is an actual worker process that is limited in terms of the amount of resources it can consume.
...
2 workers will always be on two different machines. We're probably going to reuse machines when you scale to more than that and increase process limits instead (this could yield better performance, as you need to populate fewer local caches, etc.)

Related

Microservices interdependency

One of the benefits of microservice architecture is that you can scale heavily used parts of the application without scaling the other parts, which supposedly provides cost benefits.
However, my question is: if a heavily used microservice depends on another microservice to do its work, wouldn't you have to scale that other service as well, seemingly defeating the purpose? And if a microservice calls another microservice in real time to do its job, does that mean the microservice boundaries are not established correctly?
There's no rule of thumb for that.
Scaling usually depends on some metrics: when certain thresholds are reached, new instances are created, and the same goes for removing them once they are no longer needed.
Some services do simple, fast tasks, like taking an input and writing it to the database, while others run longer tasks that can take any amount of time.
If a service that needs to scale is calling a service that can easily handle heavy loads in a reliable way, then there is no need to scale that second service.
The idea behind scaling is to scale up when needed in order to support the load, and then to scale down once the load returns to its regular range in order to reduce costs.
There are two topics to discuss here.
First, it is usually not good practice for two microservices to communicate synchronously, because you are coupling them in time: one service has to wait for the other to finish its task. It is normally a better approach to use a message queue to decouple the producer and the consumer; that way the load on one service doesn't affect the other.
However, there are situations in which synchronous communication between two services is necessary, and even then it doesn't mean both have to scale the same way. For example, if one service has to make several calls to other services, run database queries, or do other heavy computational work, while the service it calls only sorts an array, the first service will probably have to scale much more than the second to process the same number of requests, because its threads are occupied for much longer.
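To make the first point concrete, here is a minimal sketch of the queue-based decoupling, written in Python with the pika RabbitMQ client purely as an illustration (the queue name, message shape, and broker address are all made up):

```python
import json
import pika

# --- Producer side: publish the request and move on, without waiting ---
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="orders", durable=True)  # hypothetical queue name

channel.basic_publish(
    exchange="",
    routing_key="orders",
    body=json.dumps({"order_id": 42}),
    properties=pika.BasicProperties(delivery_mode=2),  # persist the message
)
connection.close()

# --- Consumer side: runs in a separate service, scaled independently ---
def handle(ch, method, properties, body):
    order = json.loads(body)
    # ... do the actual work for the order here ...
    ch.basic_ack(delivery_tag=method.delivery_tag)  # ack only after success

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="orders", durable=True)
channel.basic_qos(prefetch_count=1)  # at most one unacked message per consumer
channel.basic_consume(queue="orders", on_message_callback=handle)
channel.start_consuming()
```

The producer returns as soon as the message is persisted, so a spike in producer traffic only grows the queue; how many consumer instances drain it is a separate scaling decision.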

Distributed calculation on Cloud Foundry with the help of auto-scaling

I have a computation-intensive, long-running task. It can easily be split into sub-tasks, and it would also be fairly easy to aggregate the results later on; for example, Map/Reduce would work well.
I have to solve this on Cloud Foundry, and there I want to take advantage of auto-scaling, that is, the creation of additional instances due to high CPU load. Normally I use Spring Boot for developing my CF apps.
Any ideas are welcome on how to divide and conquer in an elastic way on CF. It would be great to have as many instances created as CF will provide, without needing to configure the number of available application instances in the application. I also need to trigger the creation of instances by loading the CPUs enough to provoke auto-scaling.
I have to solve this on Cloud Foundry
It sounds like you're on the right track here. The main thing is that you need to write your app so that it can coexist with multiple instances of itself (or perhaps break it into a primary node that coordinates work and multiple worker apps). However you architect the app, being able to scale up instances is critical. You can then simply cf scale to add or remove nodes and increase capacity.
If you wanted to get clever, you could set up a pipeline to run your jobs. Step one would be to scale up the worker nodes of your app, step two would be to schedule the work to run, step three would be to clean up and scale down your nodes.
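A minimal sketch of that pipeline, assuming a Python driver script and the standard cf scale APP -i N command (the app name and instance counts are placeholders, and run_job stands in for however you actually kick off and wait on the batch work):

```python
import subprocess

APP = "matrix-worker"  # hypothetical app name

def scale(instances: int) -> None:
    """Adjust the number of application instances via the cf CLI."""
    subprocess.run(["cf", "scale", APP, "-i", str(instances)], check=True)

def run_job() -> None:
    """Placeholder: schedule the work and block until it has finished."""
    ...

if __name__ == "__main__":
    scale(10)      # step one: scale up the worker nodes
    try:
        run_job()  # step two: schedule the work to run
    finally:
        scale(1)   # step three: clean up and scale back down
```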
I'm suggesting this because manual scaling is going to be the simplest path forward (please read on for why).
and there I want to take advantage of auto-scaling, that is, the creation of additional instances due to high CPU load.
As to autoscaling, I think it's possible, but I also think it's making the problem more complicated than it needs to be. Autoscaling by CPU on Cloud Foundry is not as simple as it seems. The way Linux reports CPU usage, you can exceed 100%: it's 100% per CPU core. Pair this with the fact that you may not know how many CPU cores are on your Cells (for example, if you're using a public CF provider) and that the number of cores could change over time (if your provider changes hardware), and it becomes difficult to know at what point you should scale your application.
If you must autoscale, I would suggest autoscaling on some other metric. Which metrics are available will depend on the autoscaler tool you are using. The best option would be a custom metric, so you could use work-queue length or something else relevant to your application. If custom metrics are not supported, you could always hack together your own autoscaler that works with metrics relevant to your application (you can scale up and down by adjusting the instance count of your app using the CF API).
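A home-grown autoscaler along those lines can be quite small. The sketch below is Python, with a made-up queue_length() function standing in for whatever metric your application actually exposes; it shells out to cf scale, but you could equally call the Cloud Controller API directly:

```python
import math
import subprocess
import time

APP = "worker-app"        # hypothetical app name
JOBS_PER_INSTANCE = 50    # assumed throughput of a single instance
MIN_INSTANCES, MAX_INSTANCES = 1, 20

def queue_length() -> int:
    """Stand-in for your real metric, e.g. a RabbitMQ or database queue depth."""
    raise NotImplementedError

def scale_to(instances: int) -> None:
    subprocess.run(["cf", "scale", APP, "-i", str(instances)], check=True)

while True:
    backlog = queue_length()
    target = math.ceil(backlog / JOBS_PER_INSTANCE)
    target = max(MIN_INSTANCES, min(MAX_INSTANCES, target))
    scale_to(target)
    time.sleep(60)  # re-evaluate once a minute
```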
You might also be able to hack together a solution based on the metrics that your autoscaler does provide. For example, you could artificially inflate a metric that your autoscaler does support in proportion to the workload you need to process.
You could also just scale up when your work day starts and scale down at the end of the day. It's not dynamic, but it's simple and it will get you some efficiency improvements.
Hope that helps!

Heroku, RabbitMQ and many workers. What is the best architecture?

I am looking for the best approach to handle the following scenario:
I have multiple edge devices publishing sensor data to a RabbitMQ broker. The broker will experience an overall workload of ~500 messages per second. Then there is a Python worker dyno that consumes one sensor reading at a time, applies a filter to it (which can take 5-15 ms), and publishes the result to another topic.
Of course one worker is not enough to serve all requests, so I need proper scaling. I use a queue to make sure each sensor reading is consumed only once!
My questions are:
Do I scale horizontally and just start as many dynos as necessary to handle all requests in the RabbitMQ queue? Seems simple but more expensive.
Or would it be better to have fewer dynos but more threads running on each dyno, using e.g. Celery?
Or is there a load balancer that consumes one item from the queue and schedules a dyno dynamically?
Something totally different?
Options 1 or 2 are your best bets.
I don't think option 3 exists without tying directly into the Heroku API and writing a ton of code for yourself... that is overkill for your needs, IMO.
Between 1 and 2, the choice depends on whether or not you want to grow the ability to handle more messages without re-deploying your code.
Option 1 is generally my preference because I can just add a new dyno instance and be done. Takes 10 seconds.
Option 2 might work if you don't mind adjusting your code and redeploying. It adds extra time and effort in exchange for lower cost.
But at some point option 2 will need to turn into option 1 anyway, as you can only do so much work on a single dyno. You will run into thread limits on dynos, and then you'll be scaling out with dynos after all.
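If you do go with option 2, here is a rough sketch of what a multi-threaded dyno worker could look like. It's Python with pika and a plain thread pool rather than Celery, purely as an illustration; the queue names, prefetch count, and thread count are all placeholders you'd tune for your ~500 msg/s load:

```python
import functools
from concurrent.futures import ThreadPoolExecutor
import pika

POOL = ThreadPoolExecutor(max_workers=10)  # threads per dyno (assumption)

def apply_filter(reading: bytes) -> bytes:
    """Stand-in for the 5-15 ms filtering step."""
    return reading

def publish_and_ack(channel, delivery_tag, result):
    # Runs on the connection's own thread, where pika calls are safe.
    channel.basic_publish(exchange="", routing_key="filtered-readings", body=result)
    channel.basic_ack(delivery_tag=delivery_tag)

def on_message(channel, method, properties, body, connection):
    def work():
        result = apply_filter(body)
        connection.add_callback_threadsafe(
            functools.partial(publish_and_ack, channel, method.delivery_tag, result)
        )
    POOL.submit(work)  # hand the reading to a worker thread

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="sensor-readings", durable=True)
channel.queue_declare(queue="filtered-readings", durable=True)
channel.basic_qos(prefetch_count=10)  # keep all threads fed
channel.basic_consume(
    queue="sensor-readings",
    on_message_callback=functools.partial(on_message, connection=connection),
)
channel.start_consuming()
```

Each reading is acked only after its result has been handed off for publishing, so the queue still ensures nothing is lost if a dyno restarts mid-batch.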
It seems that with GuvScale you can scale the workers consuming messages from RabbitMQ.

Finish Sidekiq queues much quicker

I have reached a point where it is taking too long for a queue to finish, because new jobs keep being added to it.
What are the best options to overcome this problem?
I already use 50 processes, but I noticed that if I open more, it takes longer for jobs to finish.
My setup:
nginx,
unicorn,
ruby-on-rails 4,
postgresql
Thank you
You need to measure where you are constrained by resources.
If you're seeing things slow down as you add more workers, you're likely blocked by your database server. Have you upgraded your Redis server to handle this amount of load? Where are you storing the scraped data? Can that system handle the increased write load?
If you were only blocked on CPU or I/O, you would see the amount of work flowing through the system scale linearly as you add more workers. Since things slow down when you scale out, you should measure where your problem actually is. I'd recommend instrumenting your worker processes with New Relic and measuring where the time is being spent.
My guess would be that your Redis instance can't handle the load to manage the work queue with 50 worker processes.
EDIT
Based on your comment, it sounds like you're entirely I/O-bound doing web scraping. In that case, you should increase the concurrency for each Sidekiq process using the -c option to spawn more threads. Having more threads will allow you to continue processing scraping jobs even while some scrapers are blocked on network I/O.
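For example, assuming Sidekiq is started from the command line (the thread count and queue name here are only placeholders to tune for your workload), raising the per-process thread count looks like this:

```
bundle exec sidekiq -c 25 -q scraping
```

The same value can also be set with the concurrency setting in Sidekiq's YAML config file.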

Windows Azure: Parallelization of the code

I have some matrix multiplication operations, and I want to parallelize the execution of those operations across multiple processors. This can be done on a high-performance computing cluster using MPI (Message Passing Interface).
Likewise, can I do some parallelization in the cloud using multiple worker roles? Is there any means of doing that?
The June release of the Azure SDK and Tools, version 1.2, supports .NET 4, so you can now take advantage of the parallel extensions included with .NET 4, such as Parallel.ForEach() and Parallel.For().
The Task Parallel Library (TPL) will only help you on a single VM - it's not going to help divide your work across multiple VMs. So if you set up, say, a 2-, 4-, or 8-core VM, you should really see significant performance gains with parallel execution.
Now, if you want to divide work up across instances, you'll need a way of assigning work to each instance. For example: set up one worker role as a coordinator VM and another worker role, with n instances, as the computation VM. Then have the coordinator VM enumerate all instances of the computation VM and divide the work n ways, sending 1/n of the work messages to each instance over WCF calls on an internal endpoint. Each VM instance processes its work messages (potentially with the TPL as well) and stores its result in either blob or table storage, accessible to all instances.
In addition to message passing, Azure Queues are a great fit here, since each worker role can simply read work items from the queue rather than having work pushed to it. This is a much less brittle approach, as the number of workers may change dynamically as you scale.
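The answers above are about .NET worker roles, but the division of labour itself is easy to sketch in any language. The Python below is only an illustration of the pattern (a local queue.Queue and dict stand in for the Azure queue and blob/table storage; the block size and matrix shapes are arbitrary):

```python
import queue
import numpy as np

# Stand-ins for the cloud pieces: in the real setup each worker role instance
# would poll an Azure queue for work items and write results to blob/table storage.
work_queue: "queue.Queue[tuple[int, int]]" = queue.Queue()
results: dict = {}

A = np.random.rand(1000, 800)
B = np.random.rand(800, 600)
BLOCK = 250  # rows of A per work item (arbitrary)

# Coordinator: split the multiplication into independent row blocks.
for start in range(0, A.shape[0], BLOCK):
    work_queue.put((start, min(start + BLOCK, A.shape[0])))

# Worker: repeatedly pull a block, compute it, store the partial result.
def worker():
    while True:
        try:
            start, end = work_queue.get_nowait()
        except queue.Empty:
            return
        results[(start, end)] = A[start:end] @ B

worker()  # a single local "instance", for demonstration only

# Aggregation: stitch the row blocks back together in order.
C = np.vstack([results[key] for key in sorted(results)])
assert np.allclose(C, A @ B)
```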
