I'm evaluating SignalR for an app hosted at AppHarbor running on 2+ instances (web workers), but from what I've read it looks like this won't work:
The SignalR wiki says that scaling in a web farm is still in development (and 2+ web workers sounds like a web farm to me). Another question here on Stack Overflow says it won't work on more than one IIS server.
On the other hand, the AppHarbor support site says it works great, though without giving much info (they didn't answer questions like the number of simultaneous connections, load balancer limits, etc.).
Can someone confirm if SignalR is the right path to take on AppHarbor?
Thanks!
David Fowler is working on a Redis message store for SignalR. The code is on GitHub, and I believe it is what will let SignalR apps scale across multiple AppHarbor instances.
Updated links for 2014:
SignalR's Redis page on GitHub, which is currently just a URL to a Redis doc on the ASP.NET website.
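For reference, here is a minimal sketch of what wiring up the released Redis backplane package (Microsoft.AspNet.SignalR.Redis) looks like; the host name, password, and app name are placeholders for your own values:

    using Microsoft.AspNet.SignalR;
    using Owin;

    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            // Every web worker points at the same Redis server, so a message
            // published on one instance is relayed to clients on all the others.
            GlobalHost.DependencyResolver.UseRedis("your-redis-host", 6379, "your-password", "YourAppName");
            app.MapSignalR();
        }
    }

With the backplane in place, each AppHarbor instance behaves as one node of the farm, and Redis fans messages out to all of them.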
My Heroku app is being used by other people, on websites other than mine.
Is there a way to ensure that only my site can use the app?
I have a small site, so I use a free account; this way my free dynos are used up very quickly.
You have some options...
If your app or API is being used by JavaScript web apps in the browser, then setting a CORS header specifying your top-level domain should do the trick.
If your app or API is being consumed by other servers or non-browser-based processes, then specifying an authentication scheme such as HTTP Basic (user/password) should restrict access to the set of clients that you control. If your service is successful then congratulations! Maybe you should scale up and start charging? A sketch of both approaches follows below.
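Here is a minimal sketch of both ideas, assuming for illustration an ASP.NET Core app (the domain, user, and password are placeholders; the same idea applies in any framework, and real CORS handling would also need to cover preflight requests):

    using System.Text;

    var app = WebApplication.CreateBuilder(args).Build();

    // 1) CORS: tell browsers that only pages served from your own domain may call this API.
    app.Use(async (ctx, next) =>
    {
        ctx.Response.Headers["Access-Control-Allow-Origin"] = "https://www.my-site.example";
        await next();
    });

    // 2) HTTP Basic auth: reject any caller without the expected credentials.
    //    Only do this over HTTPS, since the credentials are merely base64-encoded.
    var expected = "Basic " + Convert.ToBase64String(Encoding.UTF8.GetBytes("user:password"));
    app.Use(async (ctx, next) =>
    {
        if (ctx.Request.Headers["Authorization"].ToString() != expected)
        {
            ctx.Response.StatusCode = 401;
            ctx.Response.Headers["WWW-Authenticate"] = "Basic";
            return;
        }
        await next();
    });

    app.MapGet("/", () => "hello");
    app.Run();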
It seems like your goal is to stay in the free tier at Heroku.
Heroku starts your dynos when requests come in through its router mesh. This means every authentication or blocking technique inside your application will still cause the dynos to be started (that includes CORS).
Heroku itself doesn't give you configuration options for its routing in the free/low-price tiers. If you pay for it, there are Private Spaces.
One possible solution is to put another layer in front of your app that handles the authentication.
For example, this could be:
Cloudflare
Amazon CloudFront (not sure; perhaps combined with the Web Application Firewall)
another CDN
These will likely have a free tier that's enough for you, but they can also be rather complex to set up for a beginner.
I hope this helps a little.
I've attended a full-day Google Cloud "training" and I still don't see whether there is any REAL advantage in using it for my Laravel applications.
Currently I'm using a VPS from a local hosting provider, and I can deploy a Laravel application in 5 SSH commands. In a few minutes it is live.
I've searched through the options for doing the same on Google Cloud. All of them require a minimum of 2 hours of tedious reading/clicking, and none offers deployment straight from a git repository, hence no continuous integration.
Please help me understand the advantage of paying 5 times more and spending 20 times longer configuring Google Cloud versus a VPS.
Thanks
It all depends on what you are trying to achieve by deploying your Laravel app. If you expect little traffic of a constant nature, your local hosting provider is fine. Do you expect your app to scale to millions of requests per second? Would you value resilience? In that case, you are better off doing a little extra work and deploying in the cloud. You can gather more detail from the "Building Scalable and Resilient Web Applications on Google Cloud Platform" online document.
While reading a blog I found that ASP.NET Web API can be self-hosted. There are loads of links telling how to self-host Web API, but I could not find any explaining when it makes sense over IIS. Could someone please point me to a couple of scenarios where self-hosting Web API is better suited than IIS hosting?
Thanks,
Ravi
Generally speaking, you would use a self-hosted Web API to get better performance and to get rid of the unnecessary pipeline of IIS. Additionally, you get finer control over handling HTTP requests, configuration, and so forth. Since you have fewer dependencies on other applications, your deployment and troubleshooting get easier and less complicated.
Having said that, you have to write code to handle everything, even simple things (such as returning static files) that IIS simply does for you. A minimal sketch of a self-hosted service follows below.
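Here is a minimal self-host sketch using the classic Web API self-host package (Microsoft.AspNet.WebApi.SelfHost); the port and ValuesController are placeholders:

    using System;
    using System.Web.Http;
    using System.Web.Http.SelfHost;

    class Program
    {
        static void Main()
        {
            // The base address your console (or Windows service) process listens on.
            var config = new HttpSelfHostConfiguration("http://localhost:8080");
            config.Routes.MapHttpRoute(
                "Default", "api/{controller}/{id}",
                new { id = RouteParameter.Optional });

            using (var server = new HttpSelfHostServer(config))
            {
                server.OpenAsync().Wait();
                Console.WriteLine("Web API self-hosted at http://localhost:8080 - press Enter to quit.");
                Console.ReadLine();
            }
        }
    }

    // GET http://localhost:8080/api/values
    public class ValuesController : ApiController
    {
        public string Get()
        {
            return "hello from the self-hosted Web API";
        }
    }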
Thanks to ASP.NET Core, you can host your apps on Linux, macOS, and Windows. So going cross-platform would be another reason to use self-hosted apps.
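For comparison, a sketch of the same idea on ASP.NET Core, self-hosted on Kestrel, which runs unchanged on all three platforms (the port and endpoint are placeholders):

    var app = WebApplication.CreateBuilder(args).Build();

    // A minimal endpoint; in a real app you would map your controllers here.
    app.MapGet("/api/ping", () => "pong");

    app.Run("http://localhost:5000");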
In an answer to another question, it's noted that "Apps deployed to the hosted servers with 'meteor deploy' do not yet have any guarantees or SLAs about scaling." So that rules out the possibility of using their hosted servers if I want to be sure I can fully scale, now.
The answer further notes that "A server bundle generated with 'meteor bundle' is basically a single process app. It is up to you to wire it up to multiple instances, or however you want to implement auto-scaling."
After reading that, I'm still very unclear on the question of scaling. On Heroku, I assume I can run "meteor bundle" single-process apps in dynos. But if I use many dynos, each running a Meteor server bundle, is Meteor designed so that the instances can be wired up to stay synchronized with the same data (even if there's a lag)?
Answering my own question: the Meteor team has announced a roadmap that includes its scalability plans, for inclusion in Meteor 1.0.
Meteor is still a very young platform. Before scalability, I would personally put the question of security, as Meteor right now has no security model in the public release. There is also no mention of security in the Meteor docs, but the Meteor team has confirmed that they are working on it and that a future release will have it. Have a look here: https://stackoverflow.com/questions/10100813/when-can-we-expect-data-validation-and-security-in-meteor
So I think you and I (for the security implementation) have to wait for more releases, and perhaps before 1.0 scalability will be handled internally, or at least they should have documentation explaining how to do it.
To get some idea of how scalability will be handled and to get a better picture of it, I think someone from the Meteor team should answer the scalability question.
You can deploy Meteor apps to Heroku, but you need to stick with 1 dyno, because Heroku does not support WebSockets or sticky sessions.
So you need to find another PaaS provider; Nodejitsu is a good option. If you want to scale to multiple instances, you need to find a way to sync write operations between the instances.
Then you'll need Meteor Cluster - http://goo.gl/2aHJ2
I recently asked a similar question (Which PaaS would be best for a Meteor JS app that needs to be scalable?), and one of the answers explained the Heroku situation very well (I thought) - see https://stackoverflow.com/a/16468418/2311632. It is also pointed out (https://stackoverflow.com/a/16468609/2311632) that one could deploy on meteor.com. While scaling is still on the roadmap, presumably they have addressed or are addressing some scaling issues 'in-house', or can otherwise keep their service at the cutting edge of what's possible in scaling for Meteor apps. Otherwise, you could go with EC2 and scale vertically (boost the power of a single instance) until Meteor hits the mark with official scaling solutions. Getting set up with EC2 is new to me, but this answer (https://stackoverflow.com/a/16468826/2311632) looks like a good starting point. I haven't tried it yet, but likely will soon.
Maybe I am missing something about Scrapy, but here is what I am about to do:
I have created a website based on information I am crawling from the Internet using Scrapy crawl spiders. However, I am stuck on how to get my website live. I am considering web hosting, but most of the service providers do not allow installing those scripts on their servers. Of course I can rent a server, but it is too expensive for me at the moment. Could anyone please shed some light on this if you have similar experience? The website is based on ASP.NET, so the web host will need to support MS SQL and ASP.NET as well as Scrapy. Is there something in Scrapy so I can get the spiders running without installing it? Much appreciated.
Cheers,
Ray
You would need a hosting service where you can install the scrapyd service, so that you can automate your screen scraping. I've never done it, as I am just getting started with Scrapy, but here is the information on scrapyd: http://readthedocs.org/docs/scrapy/en/latest/topics/scrapyd.html
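Once scrapyd is running, your ASP.NET site can queue a crawl through scrapyd's HTTP JSON API. A minimal sketch in C# (the project and spider names, and the host/port, are placeholders for your own deployment):

    using System;
    using System.Collections.Generic;
    using System.Net.Http;
    using System.Threading.Tasks;

    class ScheduleSpider
    {
        static async Task Main()
        {
            using (var client = new HttpClient())
            {
                // scrapyd listens on port 6800 by default; schedule.json queues a spider run.
                var form = new FormUrlEncodedContent(new Dictionary<string, string>
                {
                    ["project"] = "myproject",
                    ["spider"] = "myspider",
                });
                var response = await client.PostAsync("http://localhost:6800/schedule.json", form);
                // scrapyd replies with JSON containing the job id.
                Console.WriteLine(await response.Content.ReadAsStringAsync());
            }
        }
    }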
You might want to look at virtual dedicated servers for hosting, as they are cheaper than co-located or dedicated servers but give you more control than shared hosting.
I have had a lot of success deploying and running my spiders periodically on Heroku, for free. You can read about the steps here.
Alternatively, you could use Scrapyd to host your spiders and actually send requests, along with ScrapydWeb.