Do Rack-based web servers implement the FastCGI protocol? - ruby

I've read that CGI/FastCGI is a protocol for interfacing external applications to web servers.
So the web server (like Apache or Nginx) sends environment information and the page request itself to a FastCGI process over a socket; responses are returned by FastCGI to the web server over the same connection, and the web server subsequently delivers that response to the end user.
Now I'm confused between this and Rack, which is used by almost all Ruby web frameworks and libraries. It provides an interface for developing web applications in Ruby by wrapping HTTP requests and responses.
So, do Rack-based web servers like Unicorn, Thin, Passenger, or Puma represent the same FastCGI approach? Can I say that Unicorn is a Ruby implementation of FastCGI?

As you say:
FastCGI is a protocol
Rack is an API
So these are actually two quite different things, though they could be used together.
FastCGI specifies how two different processes should talk to each other
FastCGI, as a protocol, specifies how two different processes (nominally a web server and an application server or "FastCGI server") should talk to each other over a network connection. The specification defines records of data in a particular format that are sent and received by the two processes.
Exactly what the programs that send and receive these messages look like is not specified, and could be anything. On one side you might have a C program that assembles data in memory and then makes system calls to have the OS send the data, and on the other side you might have a Ruby program that opens a socket, reads the data into arrays, parses those data, and builds a new object encapsulating the request.
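For concreteness, here is a rough Ruby sketch of what the receiving side of that conversation might look like; the 8-byte record header layout (version, type, request id, content length, padding length, reserved) comes from the FastCGI specification, while the function name and surrounding structure are made up for illustration:

```ruby
require "socket"

# Minimal sketch: read one FastCGI record from a connected socket.
# The 8-byte header fields are defined by the FastCGI specification;
# everything else here is illustrative, not a complete implementation.
def read_fcgi_record(sock)
  version, type, request_id, content_length, padding_length, _reserved =
    sock.read(8).unpack("CCnnCC") # bytes, then two big-endian 16-bit ints
  content = sock.read(content_length)
  sock.read(padding_length) # discard padding bytes
  { version: version, type: type, request_id: request_id, content: content }
end
```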
Rack specifies what Ruby objects and methods must be made available to higher-level software
On the other hand, Rack, being a Ruby API specification, specifies precisely what Ruby objects and methods must be made available to higher-level software implementing some sort of web application, and how those objects and methods must behave, from the point of view of the application. (Don't be confused by the use of the word "protocol" in the document linked above. Here it's used not in the sense of data formats as sent over a communications link, but in the object-oriented programming sense of the conceptual "messages" exchanged between objects to express program behavior, though this is actually at various levels and times implemented as function calls.)
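Concretely, the contract is small: a Rack application is any Ruby object that responds to call(env) and returns a three-element array of status, headers, and a body that responds to each. A minimal conforming application:

```ruby
# A complete Rack application: an object responding to call(env) that
# returns [status, headers, body], where the body responds to #each.
app = lambda do |env|
  [200, { "Content-Type" => "text/plain" }, ["You requested #{env['PATH_INFO']}"]]
end
```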
Being an API specification, the user of the Rack API ought at least to behave as if it has no idea what's going on underneath the hood when it calls methods on the various objects an implementation of Rack presents. (Frequently it will have no idea.) It could be the case that the library actually has set up communication with a separate process acting as a web server, via FastCGI or some other protocol, and reads messages from the other process and sends messages back to it, based on what the application using the API implementation does. But on the other hand, you could equally (at least in theory) drop in a completely different implementation of the API that itself has Ruby code to run a web server, and the very same process that ran Ruby code for the web application would be running additional Ruby code to talk the HTTP protocol directly with a client web browser or whatever.
You can't say that Unicorn (or any other implementation of the Rack API) is a "Ruby implementation of FastCGI"
The question does not apply in the way that you asked it, because the whole point of the Rack API specification is that you explicitly avoid thinking about the actual implementation of the services provided through that API. It could well be that some implementations are using FastCGI, but your application should work equally well with one that's not, and you really don't want to care about what's going on underneath the hood.

Related

Does an application developed on HTTP/1.x require modifications to run on HTTP/2?

In the HTTP/2 chapter in the High Performance Browser Networking book, there is a mention that
all existing applications can be delivered without modification.
Existing applications using HTTP/1.1 use a request line, like POST /upload HTTP/1.1.
I would think some code in our application has to move the request line into a headers frame instead. That means our application should account for the change from request line to headers frame, and deal with data frames as well.
Isn't that a modification?
This assumes your application is abstracted away from the raw HTTP implementation - as most applications are.
For example, let's assume you have created a Javascript web application (e.g. Angular), that talks through REST APIs to a web server (e.g. Apache), which proxies requests to a backend server (e.g. Node.js).
By adding HTTP/2 to the Apache web server, your website and web application should benefit from HTTP/2 and be downloaded faster without any changes to your code. This is despite the fact that your application (both the front-end Angular app and the back-end Node.js server) likely uses a lot of HTTP semantics (e.g. methods like GET and POST, headers, etc.). The web browser and web server should handle all the HTTP/2 details for you, with no requirement to change your code or even to realise that you are using a different version of HTTP than the one you originally wrote this app for, because fundamentally the concepts of HTTP have not changed even if the delivery method on the wire has.
Of course if you are writing at a lower level, and directly reading the bytes from the HTTP request then this may not be the case. But in most cases, unless you are writing a web server or browser, you will be going through a library and not looking at the direct request line and/or HTTP/2 frames.
Additionally, as the author of that book states above and below this quote, while changes will not be necessary in most cases to keep the application functioning, getting the most out of HTTP/2 may require changes: dropping HTTP/1.1-era optimisations and learning new HTTP/2 optimisations.
To elaborate on BazzaDP's answer, from an application standpoint running on a web application framework, nothing changes. Those frameworks provide an abstraction of the underlying HTTP innards.
For example ASP.NET running on IIS provides a Request and Response object to the developer.
You can still use those objects' properties like Request.Headers or Response.Body, even though in HTTP/2 jargon those are named and transported differently than in HTTP/1.x.
The web server itself or the application framework running on top of it will handle the translation from network traffic to that object's properties and vice versa.
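The same pattern holds outside .NET. As a hypothetical Ruby illustration, a Rack handler reads headers through the env hash, and whether those headers arrived as an HTTP/1.1 request line plus header lines or as an HTTP/2 HEADERS frame is the server's concern, not the application's:

```ruby
# The handler sees the same abstraction regardless of wire protocol:
# the server translates HTTP/1.x header lines or HTTP/2 HEADERS frames
# into the same env keys before calling the application.
app = lambda do |env|
  agent = env["HTTP_USER_AGENT"].to_s # same key under HTTP/1.x and HTTP/2
  [200, { "Content-Type" => "text/plain" }, ["Your browser: #{agent}"]]
end
```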

How do Web Applications communicate with a server?

Let's say I wanted to create a collaborative web application where users can simultaneously work on e.g. a diagram. I have my interactive web site (HTML/CSS/JavaScript) on the client side, and let's say a Java application synchronizing a centralized model on the server side. How would client and server exchange messages? To get a better idea of how the system is supposed to work, here are some details and a small illustration:
Requirements
The underlying technology should support authorized communication to show/hide sensitive information.
The model of the diagram should only exist on the server (centralized model, no copies), and changes should be broadcast to other users (other browsers currently editing the same document).
After receiving changes to the model, the server shall perform more computationally demanding tasks such as consistency checks, and inform the user if e.g. an action is invalid.
Constraints
The system must not poll or set flags (it should be a direct communication).
The website must not be reloaded during use.
Software running on the client must be restricted to JavaScript.
Technology I've found
Here are some of the solutions I've looked at, ranked from (imho) most suitable to least, whereby I'm not sure if the list is even complete and all my assumptions are correct.
Java Web Service using javax.jws - The service would offer an API described by WSDL and be capable of responding with SOAP messages using the HTTP protocol, but can JavaScript process these messages?
Servlets & Java Server Pages (JSPs) - Would, as far as I know, require a page reload to fetch new content (isn't this pretty much the same as PHP? Both are executed on the server to generate HTML pages...)
I've already found this question which, however, does not provide concrete technologies in the answer. Thank you for any hints and clarification, I dearly appreciate it!
As you have already found out, there are many approaches to this common problem.
SOAP (sometimes called "Web Services" or "WS-*") is a standard that was developed for application-to-application communication. Its strengths are precise specifications (and therefore many good libraries), interface sharing for client and server (through WSDL), and discoverability.
However, SOAP has very limited, if any, support in browsers.
The most common modern approach is probably RESTful web services based on XML or JSON. When used in a browser as client, JSON is preferred, as the unmarshalling is trivial in JavaScript. Many frameworks and libraries like Angular and jQuery take on the tedious cross-browser AJAX request crafting. Some boilerplate is required to wrap the data in HTML, however. This downside can also be an upside if you consider the cleaner architecture and maintainability.
Since you're using Java on your server, you might also want to look into JSF's AJAX capabilities. It doesn't really use REST (no well-defined resources), but loads server-side rendered chunks of your HTML page asynchronously (read: AJAX) and re-renders them in the DOM. This can be seen as a compromise between real REST and full page reloading. The good side of it is you don't have to define any resources but your backing beans. The downsides are: possible overhead, since you're downloading HTML instead of pure data; and bad testability, since you cannot access the rendered chunks outside the JSF context.
Wholly different issues are (1) the synchronization of concurrent requests and (2) server-to-client communication. For the latter you can either look into Web Sockets (the next big thing) or good old polling.
If concurrent edits of the same data are a concern to you, you'll have to implement a sort of strategy to tell whether edits can be merged or whether one has to be declined. The client will need to handle such situations and offer the user reviewing, overriding or reverting etc.
EDIT: more on Java servlets and JSP
Servlets are nothing but Java classes that handle specific HTTP requests, usually bound to certain URL paths. They're comparable to PHP pages: in PHP the web server gets a request and searches for a corresponding PHP file. Then it parses the PHP script and returns the response. Servlets do the same, only the syntax is very different. JSP is mostly an attempt at bringing more PHP-ish syntax to Java. Behind the curtains, JSPs are compiled into plain Java servlets.
Java servlets have a long history which is older than AJAX, but many frameworks have taken on the challenge of implementing flexible resource handling on the server. Most notably Jersey and RESTEasy, which are implementations of the Java EE standard JAX-RS; and Spring, which is a request-based framework that competes well against Java EE standards. Then there is also JSF, which is sometimes called a successor of JSP. To the user it's little different than JSP, but the syntax is nicer and IMHO it is more feature-rich than JSP. Especially considering extensions like PrimeFaces. In contrast to JSP, JSF embraces AJAX and many built-in components are capable of communicating with the server using AJAX, including calling Java functions and re-rendering parts of the page.
I would recommend you try websockets, since connections are persistent between client and server and either side can send a message at any time.
You could implement a websocket server in Java on the server and use JavaScript's built-in websocket functionality on the clients.
Take a look at https://www.websocket.org/ to learn more about websockets.
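If you'd rather prototype the server side in Ruby than Java, a rough sketch using the em-websocket gem could look like the following; the broadcast scheme, port, and messages are illustrative assumptions, not a finished design:

```ruby
require "em-websocket" # gem install em-websocket

# Sketch of a broadcasting websocket server: every message a client
# sends (e.g. a diagram change) is pushed to all other connected clients.
EM.run do
  clients = []
  EM::WebSocket.run(host: "0.0.0.0", port: 8080) do |ws|
    ws.onopen    { clients << ws }
    ws.onclose   { clients.delete(ws) }
    ws.onmessage { |msg| (clients - [ws]).each { |other| other.send(msg) } }
  end
end
```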

web2py - do I need another server for a real time application?

In the web2py examples there is a websocket example that uses Tornado, here:
gluon/contrib/websocket_messaging.py, and it requires another server to be started, namely Tornado. My question is: do I need another server? Should I have only one server to handle both the websocket traffic and the normal HTTP requests?
Also, it seems Tornado is the server of choice for the second server; could that be something different?
I'm a bit of a newbie to websockets (and webapp development) so any comments/links that would help me better understand this would be appreciated.
Python WSGI based frameworks such as web2py are typically served via threaded web servers. A typical HTTP request occupies one of the server threads only very briefly in order to receive the incoming request and deliver the response, then freeing the thread to serve another incoming request.
Websockets (and long polling), on the other hand, require a long-lived connection between the client (i.e., browser) and the web server. A websocket connection will therefore occupy a thread indefinitely, so you can only have as many connections as you have threads, thus limiting the application to a relatively small number of concurrent users.
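To see why, consider this purely illustrative plain-Ruby sketch of a threaded server with a fixed pool; each long-lived connection parks one pool thread until the client disconnects:

```ruby
require "socket"

# Toy threaded server with a fixed pool. An ordinary HTTP request frees
# its thread almost immediately; a websocket-style connection keeps the
# handler running, so N long-lived connections exhaust an N-thread pool.
POOL_SIZE = 8
server = TCPServer.new(8000)

def handle(client)
  client.gets # placeholder: a websocket handler would loop here instead
end

POOL_SIZE.times.map do
  Thread.new do
    loop do
      client = server.accept
      handle(client) # returns quickly for HTTP, not for a websocket
      client.close
    end
  end
end.each(&:join)
```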
In order to enable many simultaneous websocket connections, it is therefore best to serve websockets via a server that features non-blocking network I/O, such as Tornado. For more details, see http://www.tornadoweb.org/en/stable/guide/async.html.
Another option is to use Gevent with monkey patching, which can be used in the context of a WSGI application as described here. Keep in mind, though, that any libraries you use that involve network I/O (such as database drivers) must be compatible with this approach (either via monkey patching or code explicitly designed for coroutines).
If realtime/server-push functionality is a major aspect of your application, and especially if you are new to web development, you might instead consider a framework built for this specific use case, such as Meteor.

Why should I avoid using CGI?

I was trying to create my website using CGI and ERB, but when I search on the web, I see people saying I should always avoid using CGI, and always use Rack.
I understand CGI will fork a lot of Ruby processes, but if I use FastCGI, only one persistent process will be created, and that approach is used by PHP websites too. Plus, the FastCGI interface creates only one object per request and has very good performance, as opposed to Rack, which creates 7 objects at once.
Is there any specific reason I should not use CGI? Or is that just a false assumption, and is it entirely OK to use CGI/FastCGI?
CGI, by which I mean both the interface and the common programming libraries and practices around it, was written in a different time. It has a view of request handlers as distinct processes connected to the webserver via environment variables and standard I/O streams.
This was state-of-the-art in its day, when there were not really "web frameworks" and "embedded server modules" as we think of them today. Thus...
CGI tends to be slow
Again, the CGI model spawns one new process per connection. While spawning processes per se is cheap these days, heavy web app initialization — reading and parsing scores of modules, making database connections, etc. — makes this quite expensive.
CGI tends toward too-low-level (IMHO) design
Again, the CGI model explicitly mentions environment variables and standard input as the interface between request and handler. But ... who cares? That's much lower level than the app designer should generally be thinking about. If you look at libraries and code based on CGI, you'll see that the bulk of it encourages "business logic" right alongside form parsing and HTML generation, which is now widely seen as a dangerous mixing of concerns.
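A classic Ruby CGI script shows the style; the stdlib CGI library hands you the parsed environment and standard input, and the script prints the response, with form handling, logic, and HTML generation all in one place (the "name" parameter is just an example):

```ruby
#!/usr/bin/env ruby
require "cgi"

# Classic CGI: the library reads the environment variables and stdin,
# and the script writes the HTTP response to stdout, mixing form
# parsing and HTML generation in a single file.
cgi  = CGI.new
name = CGI.escapeHTML(cgi["name"].to_s)
puts cgi.header("type" => "text/html")
puts "<html><body><p>Hello, #{name}!</p></body></html>"
```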
Contrast with something like Rack::Builder, where right away the coder is thinking of mapping a namespace to an action, and what that means for the broader web application. (Suddenly we are free to argue about the semantic web and the virtues of REST and this and that, because we're not thinking about generating radio buttons based off user-supplied input.)
Yes, something like Rack::Builder could be implemented on top of CGI, but that's the point: it would have to be a layer of abstraction built on top of CGI.
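For contrast, a small Rack::Builder sketch (the /hello mapping is just an example) starts from the routing level rather than from environment variables:

```ruby
require "rack"

# With Rack::Builder the first thing you write is a mapping from a URL
# namespace to a handler, not environment-variable and form parsing.
app = Rack::Builder.new do
  map "/hello" do
    run ->(env) { [200, { "Content-Type" => "text/plain" }, ["hi\n"]] }
  end
end
```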
CGI tends to be sneeringly dismissed
Despite CGI working perfectly well within its limitations, despite it being simple and widely understood, CGI is often dismissed out of hand. You, too, might be dismissed out of hand if CGI is all you know.
Don't use CGI. Please. It's not worth it. Back in the 1990s when nobody knew better it seemed like a good idea, but that was when scripts were infrequent, used for special cases like handling form submissions, not driving entire sites.
FastCGI is an attempt at a "better CGI" but it's still deficient in a large number of ways, especially because you have to manage your FastCGI worker processes.
Rack is a much better system, and it works very well. If you use Rack, you have a wide variety of hosting systems to choose from, even Passenger which is really simple and reliable.
I don't know what you mean when you say Rack creates "7 objects at once", unless you mean there are 7 different Rack processes running somehow, or you've made a mistake in your implementation.
I can't think of a single instance where CGI would be better than a Rack equivalent.
There exists a lot of confusion about what CGI, Rack, etc. really are. As I describe here, Rack is an API and FastCGI is a protocol. CGI is also a protocol, but in its narrow sense also an implementation, and in the sense you're speaking of it is not at all the same thing as FastCGI. So let's start with the background.
Back in the early 90s, web servers simply read files (HTML, images, whatever) off the disk and sent them to the client. People started to want to do some processing at the time of the request, and the early solution that came out was to run a program that would produce the result sent back to the client, rather than just reading the file. The "protocol" for this was for the web server to be given a URL that it was configured to execute as a program (e.g., /cgi-bin/my-script), where the web server would then set up a set of environment variables with various information about the request and run the program with the body of the request on the standard input. This was referred to as the "Common Gateway Interface."
Given that this forks off a new process for every request, it's clearly inefficient, and you almost certainly don't want to use this style of dynamic request handling on high-volume web sites. (Starting a whole new process is relatively expensive in computational resources.)
One solution to making this more efficient is to, rather than starting a new process, send the request information to an existing process that's already running. This is what FastCGI is all about; it maintains a very similar interface to CGI (you have a set of variables with most of the request information, and a stream of data for the body of the request). But instead of setting actual Unix environment variables and starting a new process with the body on stdin, it sends a request similar to an HTTP request to an FCGI server already running on the machine where it specifies the values of these variables and the request body contents.
If the web server can have the program code embedded in it somehow, this becomes even more efficient because it just runs the code itself. Two classic examples of how you might do this would be:
Have PHP embedded in Apache, so that the "Apache server code" just calls the "PHP server code" that's part of the same process; and
Not run Apache at all, but have the web server be written in Ruby (or Python, or whatever) and load and run more Ruby code that's been custom-written to handle the request.
So where does Rack come into this? Rack is an API that lets code that handles web requests receive them in a common way, regardless of the web server. So given some Ruby code to process a request that uses the Rack API, the web server might:
Be a Ruby web server that simply makes function calls in its own process to the Rack-compliant code that it loaded;
Be a web server (written in any language) that uses the FastCGI protocol to talk to another process with FastCGI server code that, again, makes function calls to the Rack-compliant code that handles the request; or
Be a server that starts a brand new process that interprets the CGI environment variables and standard input passed to it and then calls the Rack-compliant code.
So whether you're using CGI, FastCGI, another inter-process protocol, or an intra-process protocol makes no difference; you can do any of those using Rack, so long as the server knows about it or is talking to a process that can understand CGI, FastCGI or whatever and call Rack-compliant code based on that request.
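As a concrete illustration, the same one-line config.ru can be served unchanged by Puma, Unicorn, Thin, or a FastCGI-backed handler; switching among them is a deployment decision, not a code change:

```ruby
# config.ru: the application only ever sees the Rack API, so the server
# behind it (in-process Ruby, FastCGI, or otherwise) is interchangeable.
run ->(env) { [200, { "Content-Type" => "text/plain" }, ["Hello\n"]] }
```

For instance, rackup config.ru starts it under whatever Rack handler rackup finds.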
So:
For performance scaling, you definitely don't want to be using CGI; you want to be using FastCGI, a similar protocol (such as Tomcat's AJP), or direct in-process calling of the code.
If you use the Rack API, you don't need to worry at the early stages about which protocol you're using between your web server and your program, because the whole point of APIs like Rack is that you can change it later.

Handling XMLHttpRequest to call external application

I need a simple way to use XMLHttpRequest as a way for a web client to access applications in an embedded device. I'm getting confused trying to figure out how to make something thin and light that handles the XMLHttpRequests coming to the web server and can translate those to application calls.
The situation:
The web client using Ajax (ExtJS specifically) needs to send and receive asynchronously to an existing embedded application. This isn't just about having a thick client/thin server; the client needs to run background checking on the application status.
The application can expose a socket interface, with a known set of commands, events, and configuration values. Configuration could probably be transmitted as XML since it comes from a SQLite database.
In between the client and app is a lighttpd web server running something that somehow handles the translation. This something is the problem.
What I think I want:
Lighttpd can use FastCGI to route all XMLHttpRequest to an external process. This process will understand HTML/XML, and translate between that and the application's language. It will have custom logic to simulate pushing notifications to the client (receive XMLHttpRequest, don't respond until the next notification is available).
C/C++. I'd really like to avoid installing Java/PHP/Perl on an embedded device. So I'll need more low level understanding.
How do I do this?
Are there good C++ libraries for interpreting the CGI headers and HTML so that I don't have to do any syntax processing, I can just deal with the request/response contents?
Are there any good references to exactly what goes on, server side, when handling the XMLHttpRequest and CGI interfaces?
Is there any package that does most of this job already, or will I have to build the non-HTTP/CGI stuff from scratch?
If I understand correctly, the way I would approach this problem is a 3-tier design (don't get too hung up on the 3-tier buzzword we've all heard about):
JavaScript (ExtJS) in the browser talks HTTP to the web server (Lighttpd, Apache, ...) via Ajax using XmlHttpRequest; whether the XHR is raw (naked) or wrapped doesn't really matter.
Since the app on the embedded device can talk via sockets, the web server would use a socket to talk to the embedded device.
You can decide to put more business logic in the JavaScript, and keep the Apache/Lighttpd code really thin so it won't time out.
In this way, you can leverage technologies you're already familiar with: Ajax between tiers 1 and 2 is nothing new, and sockets are used between tiers 2 and 3.
I did not mean that you did not know sockets. I just took a description of a problem where I heard a lot of words (XML/HTML/Ajax/XmlHttpRequest/Java/PHP/Perl/C++/CGI and more) and proposed a way to simplify it into smaller, better-understood problems. Let me clarify:
If you want to ultimately retrieve data from the embedded device and render it in the browser, then have the browser make a request to the web server, and have the web server use a socket to talk to the embedded device. How the data is passed between browser and server is normal HTTP, no more, no less. The same goes between web server and embedded device, except over a plain socket instead of HTTP.
Take a simple problem, like the addition of 2 numbers. The 2 input numbers get passed to the web server, and the web server passes them to the embedded device, where the addition is carried out. The result gets passed back to the web server, then back to the browser for rendering. If you can do that much, you can already make the data flow everywhere you want.
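To make that middle hop concrete, here is a rough Ruby sketch; the ADD/SUM line protocol, host name, and port are invented purely for illustration, and the same logic ports directly to C/C++ socket calls:

```ruby
require "socket"

# Hypothetical middle hop: the web server process forwards two numbers
# to the embedded device over a plain TCP socket and relays the answer.
# Wire format invented for illustration: send "ADD a b\n", read "SUM n\n".
def add_on_device(a, b, host: "device.local", port: 9000)
  TCPSocket.open(host, port) do |sock|
    sock.puts("ADD #{a} #{b}")
    reply = sock.gets.to_s            # e.g. "SUM 7\n"
    Integer(reply[/-?\d+/] || 0)      # extract the result for the HTTP response
  end
end
```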
How to parse the data depends on how you design its structure, which might include a container wrapping a payload.
"... whatever HTTP is coming to the server into usable bits of information, and generate the proper HTTP response"
...but that's not any different than how you handle the HTTP request on the server using your server-side language.
...how to implement a backend process in C/C++, instead of installing a package like PHP
If the embedded device is programmed in C/C++, you are required to know how to do socket programming in C/C++. On your web server, you also have to know how to do socket programming, except that will be in your server-side language.
Hope this helps.
