I'm thinking of building an HTTP upload server in Ruby for my project. So far, I'm looking at setting up a Rack server to run with Rainbows! or a Sinatra server with Rack middleware. The server is required to support HTTP uploads with multipart and chunking. Is this a good choice?
I'd love to see some examples of how to set up a simple HTTP upload server, but I couldn't find any on the 'net.
Since a file upload can take a while, an important concern when uploading files in Ruby is that the process blocks while a file is being uploaded. You might want to look into projects based on EventMachine and/or Goliath to achieve non-blocking processing of HTTP requests. Some ideas here (a short sketch follows the links):
http://isurfsoftware.com/blog/2011/05/12/2011-experimenting-with-goliath-and-EventMachine/
https://gist.github.com/rweald/969981
http://www.bigfastblog.com/rubys-eventmachine-part-3-thin
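As a concrete starting point, here is a minimal sketch of a non-blocking upload endpoint built on Goliath's streaming request API (the file and class names are made up, and details may differ between Goliath versions):

    # upload.rb -- run with: ruby upload.rb -sv -p 9000
    require 'goliath'
    require 'tempfile'

    # Goliath parses requests on the EventMachine reactor and streams the
    # request body to the API chunk by chunk, so a slow upload does not
    # block other requests.
    class Upload < Goliath::API
      def on_headers(env, headers)
        env['upload'] = Tempfile.new('upload')
      end

      def on_body(env, data)
        # Called once per chunk of the body as it arrives.
        env['upload'].write(data)
      end

      def response(env)
        size = env['upload'].size
        env['upload'].close
        [200, { 'Content-Type' => 'text/plain' }, "Received #{size} bytes\n"]
      end
    end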
According to the definition, Puma is a kind of web server and Rack is an interface between a web server and an application server.
But lots of videos mention that Rack is an interface between a web framework and a web server. Can I interpret that as: we use a web framework to build our application, so Rack is the interface between the web framework and the web server?
Another question: if Puma is a kind of web server, can I use Apache or Nginx to replace it?
Puma is an application server, more specifically a Rack app server. (There are more than just Puma: Unicorn, Passenger, etc. There are also application servers for different interfaces; for example, Tomcat and JBoss are Java application servers.) An application server accepts an HTTP request, parses it into a structure in the application's language, hands it off to the application, and awaits a response object, which it then returns to the client.
Nginx/Apache are general-purpose web servers. Apache does not know how to serve Rack applications, and Puma doesn't know how to do a bunch of other things Nginx/Apache do (e.g. CGI scripts, URL rewriting, proxying, load balancing, blacklisting...).
Rack is a Ruby library that accepts parsed HTTP requests from an app server, funnels them through a configurable stack of middleware (such as session handling), passes the request object to a handler, and returns the response object to the app server, making web development in Ruby easy. You can execute a Rack app directly (or rather, with the very simple server that is installed with Rack), but that is not recommended outside development. This is where "proper" application servers come in: they know how to keep your app alive, restart it if it dies, guarantee that a predetermined number of threads is running, things like that.
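To make the interface concrete, here is a minimal sketch of a Rack app (the file name and handler are illustrative; run it with rackup in development, or point an app server such as Puma at the same file):

    # config.ru -- run with: rackup   (or: puma config.ru)
    require 'rack'

    # A Rack application is any object that responds to #call(env) and
    # returns a [status, headers, body] triple.
    app = lambda do |env|
      request = Rack::Request.new(env)
      [200, { 'content-type' => 'text/plain' }, ["Hello from #{request.path}\n"]]
    end

    # Middleware wraps the app: each layer sees the request on the way in
    # and the response on the way out.
    use Rack::CommonLogger
    run app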
Thus, typically, you have your web server accept connections and then, using a simple reverse proxy, pass the appropriate requests to your Rack application, which is executing inside the Rack app server. This gives you the benefits of all the involved pieces.
What is the correct way to test the performance of file uploads and downloads using the JMeter tool?
The HTTP and FTP protocols are totally different; if your application supports both, you need to load test both methods, as they are handled differently on the server side. Your load test needs to simulate what real users are (or will be) doing, so check your application requirements prior to building the test plan.
For simulating HTTP uploads and downloads, use the HTTP Request sampler (the sketch after this list shows the kind of request it issues).
For simulating FTP uploads and downloads, go for the FTP Request sampler. The FTP protocol provides some more ways of manipulating files and folders (moving, deleting, listing contents, etc.), so you may also need to perform these operations using the Apache Commons Net libraries and the JSR223 Sampler; check out the Load Testing FTP and SFTP Servers Using JMeter guide for a comprehensive explanation and example code snippets.
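For what it's worth, a file upload via the HTTP Request sampler is just an ordinary multipart/form-data POST; in plain Ruby the equivalent request looks roughly like this (the URL, field name, and file name are placeholders):

    require 'net/http'
    require 'uri'

    uri = URI('http://example.com/upload') # placeholder endpoint

    File.open('report.pdf', 'rb') do |file|
      request = Net::HTTP::Post.new(uri)
      # Passing an IO to set_form builds a multipart/form-data body.
      request.set_form([['file', file, { filename: 'report.pdf' }]],
                       'multipart/form-data')
      response = Net::HTTP.start(uri.host, uri.port) { |http| http.request(request) }
      puts "#{response.code} #{response.message}"
    end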
I want to run a simple webserver in Go doing some basic authorisation and routing to multiple apps.
Is it possible to have the webserver running as a standalone executable and pass the response writer and http request to other executables?
The idea is that the app binaries can hopefully be compiled and deployed independently of the webserver.
Memory areas of running applications are isolated: a process cannot just read or write another application's memory (Wikipedia: Process isolation).
So just passing the response writer and the HTTP request is not so easy. And even if you implemented it (e.g. by serializing them into binary or text data, sending them over somehow, and reconstructing them on the other side), serving an HTTP request involves more than just interacting with the ResponseWriter and Request objects: it involves reading from and writing to the underlying TCP connection, so you would also have to "pass" the TCP connection or create a bridge between the real HTTP client and the application you forward to.
Another option would be to send a redirect back to the client (an HTTP 3xx status code) after performing the authentication and routing logic. With this solution you could keep authentication and certain routing logic in your app, but you would lose further routing possibilities, because subsequent requests would go directly to the designated host.
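A sketch of that authenticate-and-redirect pattern, written as a Rack app in Ruby for consistency with the rest of this page (in Go you would reach for http.Redirect instead); the host names and the auth check are placeholders:

    # config.ru -- run with: rackup
    run lambda { |env|
      request = Rack::Request.new(env)

      # Placeholder auth check; a real app would validate credentials.
      unless request.get_header('HTTP_AUTHORIZATION')
        next [401, { 'content-type' => 'text/plain' }, ["Unauthorized\n"]]
      end

      # Route by path prefix, then hand the client off with a 302.
      # Subsequent requests go straight to the chosen host.
      host = request.path.start_with?('/app1') ? 'http://app1.example' : 'http://app2.example'
      [302, { 'location' => host + request.fullpath }, []]
    }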
Essentially, what you are trying to create is the functionality of a proxy server, of which there are plenty of implementations out there. Given the complexity of a good proxy server, it is probably not feasible to reproduce one.
I suggest either utilizing an existing proxy server or "refactoring" your architecture to avoid this kind of segmentation.
This is sort of a theoretical question; however, I need to add file-sharing capabilities to my WebSocket-powered chat application. I could use a service like Amazon S3 and share a file by posting a link to it, but that involves uploading a file that may already be accessible over the local network (sharing a file between co-workers, for example).
So I had the idea that it might be possible to somehow tunnel the upload/download/transfer through the already existing WebSocket connection. However, I don't know enough about HTTP file transfer to know how to implement it. Is there a limitation of WebSockets that would prevent this from being possible?
I'm using Ruby and EventMachine for my current web socket implementation. If you were able to provide a high level overview to get me started, that would be very much appreciated.
Here's an example of a project that uses only WebSockets and the JavaScript File API for transferring files: http://www.github.com/thirtysixthspan/waterunderice
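At a high level, the server can treat the file as opaque binary frames and relay them over the existing connection. A rough sketch with the em-websocket gem (the onbinary/send_binary calls are taken from its documented binary-frame support; this is not production code):

    require 'em-websocket'

    # Relay binary frames (file contents) between connected chat clients.
    # The browser side would read the file with the JavaScript File API
    # and send it as one or more binary frames.
    EM.run do
      clients = []

      EM::WebSocket.run(host: '0.0.0.0', port: 8080) do |ws|
        ws.onopen  { clients << ws }
        ws.onclose { clients.delete(ws) }

        ws.onbinary do |data|
          # Fan each chunk out to everyone except the sender.
          (clients - [ws]).each { |client| client.send_binary(data) }
        end
      end
    end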
To allow files to be shared without the need to upload them to the server (e.g. between coworkers), you can now use the WebRTC DataChannel API to create a peer-to-peer connection.
I'm writing a simple chat application. The only "front-end" required is a single html file, a javascript file, and a few stylesheets. The majority of the application is the server-side EventMachine WebSocket server.
I'm also trying to host this on Heroku.
I currently have a Sinatra app that just serves the static files, and a separate app that serves the WebSocket server (on a different port).
Is there a way I can combine these so that I can start up one application which serves/responds to port 80 (for the static files) and another port for the WebSocket server?
It's probably not a good idea to have your WebSocket server run on a different port. WebSockets run on port 80 specifically because that port is not blocked on most networks. If you use a different port, you will find users behind some firewalls unable to use your application.
Running your event server separate from your web server is probably the best way to go.
If you want something a bit more experimental, Goliath has WebSocket support in the master branch and can also serve the needed resources. If you look in the examples directory, there is a WebSocket server that also serves its HTML page.
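For reference, a rough sketch of that API, loosely based on the example in the Goliath repository (details may have changed since):

    # ws.rb -- run with: ruby ws.rb -sv -p 9000
    require 'goliath'
    require 'goliath/websocket'

    # Subclass Goliath::WebSocket and implement the frame callbacks.
    # Everything runs on one EventMachine reactor, so a single process
    # can answer both the WebSocket upgrade and plain HTTP requests.
    class Ws < Goliath::WebSocket
      def on_open(env)
        env.logger.info('client connected')
      end

      def on_message(env, msg)
        env.stream_send(msg) # echo the frame back to the client
      end

      def on_close(env)
        env.logger.info('client disconnected')
      end
    end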