FastCGI shell script - shell

I would like to use FastCGI with shell scripts.
I have found several tutorials about writing CGI scripts in shell, but nothing about FastCGI, and I guess this is not the same thing.
Is it possible, and how?
Thank you
Edit: Ignacio: Thank you, but that link is 14 years old and says this is not currently supported. Is it still unsupported?

The whole point of FastCGI is to avoid spawning a new process for each incoming connection. By the very nature of the language, a shell script will spawn many processes during its execution, unless you want to restrict yourself greatly (no cat, awk, sed, grep, and so on). So from the start, if you're going to use a shell script, you may as well use regular CGI instead of FastCGI.
If you're determined anyway, the first big hurdle is that you have to accept() connections on a listening socket provided by the webserver. As far as I know, there is no UNIX tool that does this. Now, you could write one in some other language, and it could run your shell script once for each incoming connection. But this is exactly what normal CGI does, and I guarantee it does it better than the custom program you or I would write. So again, stick with normal CGI if you want to use shell scripts.
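For comparison, the plain-CGI route recommended here really is just a few lines of shell. A minimal sketch (the web server only needs to be configured to execute it as a CGI program):

    #!/bin/sh
    # Minimal CGI script: the web server starts this once per request,
    # passes request metadata in environment variables such as
    # REQUEST_METHOD and QUERY_STRING, and sends its stdout to the client.

    # HTTP headers first, then a blank line, then the body.
    printf 'Content-Type: text/plain\r\n\r\n'

    echo "Hello from a shell CGI script"
    echo "Method: ${REQUEST_METHOD:-unknown}"
    echo "Query:  ${QUERY_STRING:-(none)}"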

"If you're determined anyway, the first big hurdle is that you have to accept() connections on a listening socket provided by the webserver. As far as I know, (...)" there is one C program doing nearly this: exec_with_piped.c
(it's using pipes, not sockets, but the C code should be easily adapted for your purpose)
Look at "Writing agents in sh: conversing through a pipe"
http://okmij.org/ftp/Communications.html
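To give a flavour of the pipe-based approach described there, here is my own rough sketch of a long-running shell agent (the fifo paths are made up, and this is not the exec_with_piped code itself):

    #!/bin/sh
    # A long-running shell "agent" that reads requests from one named pipe
    # and writes replies to another. A small front end (e.g. a C program or
    # the web server itself) would feed requests into request.fifo and read
    # answers from reply.fifo.

    mkfifo /tmp/request.fifo /tmp/reply.fifo 2>/dev/null

    while true; do
        # read blocks until a writer opens the fifo and sends a line
        if read -r line < /tmp/request.fifo; then
            printf 'you said: %s\n' "$line" > /tmp/reply.fifo
        fi
    done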
Kalou

No
I apologize in advance if this is a dumb question but is it
conceivable to use a mere shell script (sh or ksh) as a
FastCGI program and if so, how?
You cannot use a simple shell script as a FastCGI program. Since a
shell script cannot persist across multiple HTTP requests, it cannot be
used as a FastCGI application. For a program to handle multiple HTTP
requests in its own lifetime (i.e. not just handle one request and die, like
CGI applications do), it needs some means of communicating with the web server
to receive a request and send the reply back to the server after handling
it. This communication is accomplished via the FCGI library, which implements
the above and currently supports only a subset of programming languages,
such as C, Perl, Tcl, Java... In short, it does NOT support shell.
Hope that cleared it up a bit.
Stanley.

From http://www.fastcgi.com/:
FastCGI is simple because it is actually CGI with only a few extensions.
Also:
Like CGI, FastCGI is also language-independent.
So you can use FastCGI with shell scripts or any other kind of scripts, just like CGI.
Tutorials for CGI are useful to learn FastCGI too, except maybe for the particularities of setting up the web server.
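In practice, the easiest way to put an ordinary CGI-style shell script behind a FastCGI-speaking web server is often a small FastCGI-to-CGI bridge. As a hedged sketch, assuming the commonly packaged fcgiwrap and spawn-fcgi tools are available (neither is mentioned in the answers above; they are just one possible combination):

    # Start fcgiwrap as a persistent FastCGI server on a Unix socket.
    # It speaks FastCGI to the web server and runs the requested script
    # as a plain CGI program for each request.
    spawn-fcgi -s /tmp/fcgiwrap.sock -- /usr/sbin/fcgiwrap

    # Then point the web server's FastCGI handler at /tmp/fcgiwrap.sock
    # (for example via nginx's fastcgi_pass directive) and at the shell
    # script to execute; the script itself stays a normal CGI script.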

Related

How do I get the hostname of the server end of a Windows pipe, from a file handle to the client end?

I want the reverse of WinAPI's GetNamedPipeClientComputerName.
Using WinAPI, preferably. (I'm trying to avoid having to talk to .NET, make expensive system() calls to Powershell, or write all the infrastructure necessary to consume WMI/MI or COM.)
Edit: Yes, I know if I control both ends and it's duplex I can write logic to have the server tell me its hostname. Let's skip that hack.

How is FastCGI implemented under Windows?

The official FastCGI documentation says that stdin is repurposed as a listening socket when a FastCGI module is started. That's great on Linux, where stdin and sockets are all ints, but I don't think it could work on Windows, where stdin is a FILE* and a socket is a HANDLE.
Since Windows servers do support FastCGI, someone has either found a way to make them compatible, or redefined the system for that OS. My Google-fu doesn't seem to be up to locating how though. Where can I find documentation on it?
FastCGI defines only the message exchange protocol, but the people behind FastCGI also provide an implementation of that protocol for C++. In this implementation your app must use the provided FCGX_Request object to rewire the three provided FCGX_Stream objects to the usual ones (cin, cout, cerr). But I suspect that you don't have to rewire the streams and can use them directly. Check out this FastCGI Hello World to see how it's done.
So, your app does not see HANDLE or FILE*. It sees instead fcgi_streambuf, which inherits from std::streambuf. The way the previously mentioned protocol is implemented is just a detail that you're not supposed to be concerned with. The implementation gets hold of a stream of bytes and provides it to the app, and also the other way around.

Reliable IPC With Shell Scripts

Okay, so I have two shell scripts, a server and a client, where the server is always run as root, and the clients can be run as standard users in order to communicate with the server.
The method I've chosen to do this is with a world-accessible directory containing named pipes (fifos), one of which is world-writable (to enable initial connection requests) and the others are created per-user and writable only by them (allowing my server script to know who sent a message).
This seems to work fine but it feels like it may be over-engineered or missing a more suitable alternative. It also lacks any means of determining whether the server is currently running, besides searching for its name in the output of ps. This is somewhat problematic as it means that writing to the connection fifo will hang if the server script isn't available to read from it.
Are there better ways to do something like this for a shell script? Of course I know I could use an actual program to get access to more capabilities, but this is really just intended to provide secure access to a root service for non-root users (i.e. it's a wrapper for something else).
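For reference, the scheme currently looks roughly like this on the server side (a simplified sketch with made-up paths, not the actual script; no input validation or error handling):

    #!/bin/sh
    # Run as root.
    dir=/var/run/myservice                    # hypothetical location
    mkdir -p "$dir"
    [ -p "$dir/connect" ] || mkfifo -m 622 "$dir/connect"   # world-writable: connection requests

    while read -r user < "$dir/connect"; do
        fifo="$dir/$user"
        if [ ! -p "$fifo" ]; then
            # per-user fifo, writable only by that user, so anything read
            # from it is known to come from them
            mkfifo -m 200 "$fifo" && chown "$user" "$fifo"
        fi
        read -r request < "$fifo" && echo "request from $user: $request"
    done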
You could use Unix domain sockets instead of fifos. Domain sockets can be created with nc -lU /path/to/wherever and connected to with nc -U /path/to/wherever. This creates a persistent object in the filesystem (like a fifo, but different). The server should be able to maintain multiple simultaneous connections over the same socket.
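For example (a rough sketch; the socket path is made up, and option support varies between netcat implementations):

    # Server: listen on a Unix domain socket and print whatever arrives.
    # -l listens, -U selects a Unix domain socket, and -k (OpenBSD netcat)
    # keeps listening after each connection closes.
    nc -klU /tmp/myservice.sock

    # Client: connect to the same socket and send one line.
    echo "hello from $USER" | nc -U /tmp/myservice.sock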
If you're willing to write in C (or some other "real" programming language), it's also possible to pass credentials over Unix domain sockets, unlike fifos. This makes it possible for the server to authenticate its clients without needing to rely on filesystem permissions or other indirect means. Unfortunately, I'm not aware of any widely-supported interface for doing this in a shell script.

Using gevent and multiprocessing together to communicate with a subprocess

Question:
Can I use the multiprocessing module together with gevent on Windows in an efficient way?
Scenario:
I have a gevent based Python application doing asynchronous I/O on Windows. The application is mostly I/O bound, but there are spikes of higher CPU load as well. This application would need to control a console application via its stdin and stdout. I cannot modify this console application, and the user will be able to use his own custom one; only the text (line) based communication protocol is fixed.
I have a working implementation using subprocess and threads, but I would rather move the whole subprocess based communication code together with those threads into a separate process to turn the main application back to single-threaded. I plan to use the multiprocessing module for this.
Prior reading:
I have been searching the Web a lot and read some source code, so I know that the multiprocessing module uses a Pipe implementation based on named pipes on Windows. A pair of multiprocessing.Queue objects would be used to communicate with the second Python process. These queues are based on that Pipe implementation, i.e. the IPC would be done via named pipes.
The key question is whether calling the incoming Queue's get method would block gevent's main loop or not. There's a timeout for that method, so I could turn it into a loop with a small timeout, but that's not a good solution, since it would still block gevent for short periods, hurting its low I/O latency.
I'm also open to suggestions on how to circumvent the whole problem of using pipes on Windows, which is known to be hard and sometimes fragile. I'm not sure whether shared memory based IPC is possible on Windows or not. Maybe I could wrap the console application in a way which would allow communicating with the child process using network sockets, which is known to work well with gevent.
Please don't question my primary use case, if possible. Thanks.
The Queue's get method really is blocking. Using it with a timeout could potentially solve your problem, but it definitely won't be the cleanest solution and, most importantly, it will introduce extra latency for no good reason. Even if it weren't blocking, that wouldn't be a good solution either, because being non-blocking by itself is not enough: a good asynchronous call/API should integrate smoothly into the I/O framework in use, be that gevent for Python, libevent for C, or Boost ASIO for C++.
The easiest solution would be to use simple I/O by spawning your console application and attaching to its console in and out descriptors. There are two major factors to consider:
It will be extremely easy for your clients to write client applications. They will not have to work with any kind of IPC, sockets or other code, which could be a very hard thing for many. With this approach, the application will just read from stdin and write to stdout.
It will be extremely easy to test console applications using this approach as you can manually start them, enter text into console and see results.
Gevent is a perfect fit for async read/write here.
However, the downside is that you will have to start this application, there will be no support for concurrent communication with it, and there will be no support for communication over a network. There is even a good example for starters.
To keep it simple but more flexible, you can use TCP/IP sockets, provided both client and server are running on the same machine. A good operating system will use local IPC as the underlying implementation, so it will be fast. And if you are worried about the performance of this approach, you probably should not be using Python at all and should look at other technologies.
An even fancier solution: use ZeroC ICE. It is a very modern technology allowing almost seamless inter-process communication. It is a CORBA killer and very easy to use. It is heavily used by many, proven to be fastest in its class, and rock stable. The beauty of this solution is that you can seamlessly integrate programs in many different languages, like Python, Java, C++, etc. But it will require some of your time to get familiar with the concept. If you decide to go this way, just spend a day reading through the documentation.
Hope it helps. Good luck!
Your question is already quite old. Nevertheless, I would like to recommend http://gehrcke.de/gipc which, I believe, would tackle the outlined challenge in a very straightforward fashion. Basically, it allows you to integrate multiprocessing-based child processes anywhere in your application (also on Windows). Interaction with Process objects (such as calling join()) is gevent-cooperative. Via its pipe management, it allows for cooperatively blocking inter-process communication. However, on Windows, IPC is currently much less efficient than on POSIX-compliant systems (since non-blocking I/O is imitated through a thread pool). Depending on the IPC messaging volume of your application, this may or may not be significant.

Interactive communication between console programs (like client-server) on Windows

I have two console programs (e.g. the first is a client, the second a server).
Does Windows have a command or facility to connect them?
The client asks a question, and the server answers.
Has anyone encountered this problem? (Windows only)
Client-server is a programming model. Since you are referring to programs and not scripts, this means you have two applications, and you simply have two corresponding console windows instead of a UI. You should look into inter-process communication, or a TCP server/client, etc., in whichever language you use.
//if I understood fully what you mean :)
Windows has pipes:
dir | sort
| sends data from one program to another via standard I/O.
Bidirectional transfers are not that simple, unfortunately.
If you need bidirectional communication, you'll have to mess around with sockets and such.
