So far, I have been able to run my scrapyRT project on Heroku (a cloud platform). The next step is running scrapyRT in parallel on a single Heroku node (1 dyno), as I want to use my computational resources effectively.
One option is using gunicorn, as it can run several WSGI worker processes in parallel. Unfortunately, scrapyRT is based on Twisted, and it does not support WSGI natively. As far as I know, Twisted can serve WSGI applications (please check
https://twistedmatrix.com/documents/10.0.0/web/howto/web-in-60/wsgi.html), but I would like to know whether following that guide is a good way to adapt scrapyRT into a WSGI application, or whether I will run into some design restriction.
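For reference, here is a minimal sketch of the approach described in that guide: Twisted hosting a plain WSGI application on its thread pool. The trivial app, port, and names below are placeholders, not scrapyRT itself (scrapyRT exposes Twisted resources directly, so whether it can be wrapped this way is exactly the open question):

from twisted.internet import reactor
from twisted.web.server import Site
from twisted.web.wsgi import WSGIResource

def simple_app(environ, start_response):
    # Stand-in WSGI application; any WSGI callable works here.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'hello from Twisted + WSGI\n']

# WSGIResource runs the WSGI app on the reactor's thread pool,
# so Twisted's event loop is not blocked by the app's synchronous code.
resource = WSGIResource(reactor, reactor.getThreadPool(), simple_app)
reactor.listenTCP(8080, Site(resource))
reactor.run()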
I am building a small application with Dash and Flask, where users can upload a CSV/Excel file and look at the graphs that get generated.
I expect each Excel file to be around 50 MB at most, roughly one per week.
I have zero knowledge of servers, deployment, etc. Can anyone guide or enlighten me in this area? Also, this application is just for internal use, so the budget is tight.
My random Google searches gave me options like:
1. AWS
2. Heroku
Which would be the right option and why, considering price and ease of use?
Thanks!
I will share some of my web dev knowledge. In my company we use Flask for all server development, together with many of its companion libraries (marshmallow, SQLAlchemy, etc.), and we make improvements to them. Flask offers you a lot of flexibility and fast development, but its built-in server handles concurrent requests poorly, so I highly recommend running it under a production WSGI server; the most popular one for Flask is Gunicorn, which is easy to set up and use. For the HTTP server we use Nginx; it's like Apache, but it makes working with WebSockets easier, and to use it with Gunicorn you just set up a reverse proxy. For hosting we use AWS, and it works very well for both big and small applications, but your application is small and so is your budget, so I recommend PythonAnywhere; it's easy to use and optimized for Python web servers. For the frontend we use the Vue.js framework, which makes our pages nicer and faster to develop.
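To make the Gunicorn part concrete, here is a minimal sketch; the module name (app.py), worker count, and port are just assumptions for illustration:

from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # Stand-in view; in your app this would serve the Dash pages/graphs.
    return "hello"

# Instead of running the Flask dev server, start several worker processes:
#   gunicorn --workers 4 --bind 127.0.0.1:8000 app:app
# and have Nginx proxy_pass requests to 127.0.0.1:8000.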
How many requests can Heroku's "Vegur" HTTP proxy handle for a simple "hello world" before hitting limits (if any)?
Will setting up nginx on an EC2 micro instance, serving the same index.html, allow more throughput?
Does Heroku throttle requests per dyno?
Heroku Dynos are all small processes running on EC2 machines behind the scenes. Therefore, it will almost always be more performant to run identical code on an EC2 server directly as opposed to Heroku, because when you're using Heroku you're sharing a server with other developers.
With that said, Heroku isn't really about having the fastest server -- it's about simplifying your entire development and deployment stack as much as possible to:
Avoid downtime.
Force you to architect code properly.
Make it easier to scale your project as it grows.
etc.
I'm setting up my front-end application to use continuous integration on CircleCI. Unit tests work fine, but end-to-end tests do not.
The problem is that they require the backend (API) server to be running, and ours lives in a completely different application. So, what is the best way to set up this backend server (with CI in mind)?
I thought about deploying it to Heroku, but then I'd have to keep manually updating the code via git. Another option was to download the code onto the CI VM and run the server directly there, but that is just too much work (installing Ruby, Postgres, gems...), and it doesn't seem like the best option at all.
Has anyone been through the same situation? How do you usually deal with this kind of situation?
I ended up doing everything inside the CI. I made some custom scripts that configure the backend project every time the test suite is run. I also cached the folder with the backend code and the gems (which were taking ~2 min to install).
The configuring part now adds ~20 seconds to the total time, so it wasn't a big deal. Although I still think that this is probably not the best way to do this, it has some advantages, such as not worrying about updating the backend code (it pulls from master automatically) or its database (it runs rake db:reset after updating the code).
Assuming the API server is running somewhere, configure the front-end application to point there while in the test/CI environment, at least to start out. If there are multiple API environments, choose the one that most closely matches the front-end environment (e.g. dev, staging, etc.).
It gets more complicated if/when you need to run the e2e tests each time the API is built or match up specific build versions of the front-end and the API. In that case you will have to run the API server as part of the test.
I am very new to Ruby, and I wonder: is it possible to have my Ruby script deployed on a server?
Or do I have to use Rails?
As I understand it, Rails is not part of the core Ruby language, and Ruby has server functionality even without Rails (as in Java, PHP, etc.).
EDIT:
I have a Ruby script that acts as a command-line program, and I want to deploy it to an external (or even internal) server the way CGI scripts/programs used to be deployed.
Yes, you can deploy any Ruby application, not just Rails apps obviously. Take a look at Capistrano.
Deployment and serving are two different things, however. If you're looking for Ruby HTTP servers, look at Unicorn, Thin, WEBrick, or Puma.
If you want a fully-fledged solution try Heroku which handles both the deployment and web serving parts.
There are many tools to deploy Ruby projects, but you can do it pretty much manually.
I also found it very hard to find a ready-to-go solution, and I think this is a very annoying gap in the RoR ecosystem.
I've been working on a solution to deploy a project to a server using Git, like the Heroku Toolbelt (Google it; it's a really nice tool). The main concept is: you use Git to push your project and the server does everything else! You can see my project here: https://github.com/sentient06/RDH/.
But please, don't focus on that. Instead, read how I arrived at all this information in the wiki: https://github.com/sentient06/RDH/wiki.
It is a bit outdated, but I can summarize it for you here:
First, set up your server. This is the most boring part: you must do all the configuration, security measures, remote access, etc.
If you don't have a server, you can rent one specifically for RoR applications. There are a few good ones out there, and each has a different deployment workflow. But supposing you decide to set it up yourself:
I suggest any Linux or Unix system, server edition. Then install Ruby Version Manager, then Ruby, and then Rails. Then install an application server. I suggest Thin, but lots of people use Unicorn, Apache, or other servers. Dig around a little on the internet and find an easy-to-use solution. If you do not use Apache, though, you will also need a "reverse proxy" so you can redirect all requests on ports 80, 8080, etc., to your applications. I suggest Nginx (I don't like Apache; I think it's overkill).
Now, with everything set up, the deploy process goes more or less like this:
1 - Commit and push everything so that your files are updated on the server;
2 - On the server, cd to your application's directory and execute these commands:
$ bundle package
$ bundle install --deployment
$ RAILS_ENV=production rake db:migrate
$ rake assets:precompile
3 - Restart the server and, if necessary, the reverse proxy.
Dig around on the internet to understand each command. These will pretty much force your application into production mode, reduce the space used by your JavaScript and CSS, migrate your production database, and install the bundled gems. Production RoR is not so different from development RoR; it is just more compact and faster.
I do hope this information is useful.
Good luck!
Update:
I forgot to mention: check the Ruby Toolbox; it has some really useful statistics and information on how often Rails technologies are being updated. It has many categories; this one is on deployment automation, so give it a look: https://www.ruby-toolbox.com/categories/deployment_automation.
Cheers!
Is Node.js quicker and more scalable than Apache? Are there any performance figures to back up Node.js's performance for a web application over Apache?
UPDATE: OK, maybe my question (above) is confusing because I am a little confused as to how Node.js sits within a web stack. Under what circumstances should I consider using Node.js instead of a more traditional stack like PHP, MySQL and Apache - or does Node.js play its part within that stack?
Node.js is a framework particularly well suited for writing high performance web applications without having to understand how to implement concurrency at a low level. It is a framework for writing server-side JavaScript apps using non-blocking IO: passing continuations to IO calls rather than waiting on results. Node.js provides a system API (filesystem access, network access, etc.) where all of the API calls take a continuation which the runtime will execute later with the result, rather than block and return the result to the original caller.
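Node.js code is JavaScript, but purely to illustrate the continuation-passing style described above, here is a sketch in Python using Twisted (mentioned elsewhere on this page), which uses the same non-blocking, callback-driven model; the URL and callback names are arbitrary:

from twisted.internet import reactor
from twisted.web.client import Agent, readBody

agent = Agent(reactor)

def on_body(body):
    # Continuation invoked later with the response body, instead of blocking for it.
    print("got %d bytes" % len(body))
    reactor.stop()

def on_response(response):
    # Headers have arrived; request the body and register the next continuation.
    d = readBody(response)
    d.addCallback(on_body)
    return d

# Kick off a non-blocking HTTP request and attach the continuation to its Deferred.
d = agent.request(b"GET", b"http://example.com/")
d.addCallback(on_response)
reactor.run()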
You can use it by itself, if you like. But you might want a dedicated reverse proxy in front of Node.js: something like Apache, Nginx, lighttpd, etc. Or, for clustering a bigger app, you might want something like HAProxy in front of multiple running Node.js app servers.
There is a recent (July 28th, published 30th) Google Tech Talk about Node.js that includes a few performance numbers and also covers scaling.