The documentation for mod_spdy says it is not compatible with mod_php, as mod_php is not thread-safe:
https://developers.google.com/speed/spdy/mod_spdy/php
Much like the Apache Worker MPM, mod_spdy is a multithreaded module,
and processes multiple SPDY requests from the same connection
simultaneously. This poses a problem for other Apache modules that may
not be thread-safe, such as mod_php. Fortunately, it is fairly easy to
adjust your Apache configuration to make your existing PHP code safe
to use with mod_spdy (and with the Worker MPM as well).
I have tried using SPDY with mod_php and I haven't had any issues. What is the danger of doing this?
The PHP core has been thread-safe since PHP 5. However, many of the extensions, and the libraries those extensions use, are not.
If you're not using those extensions, you're probably not going to run into any problems. If you do, you might get segfaults, other memory-access violations, or just strange behavior.
A partial list is available on the PHP site. Unfortunately, there doesn't seem to be a conclusive list of thread-safe and thread-unsafe extensions.
Does rdflib (e.g., SPARQL / SPARQL Update) work well with gevent in an asynchronous web framework setting (gunicorn/pyramid, Flask, ...)? One of the goals is to offload longer-running tasks (but not so long as to require external message queues) into their own greenlet workers.
I can't find any definite pointers either way, and checking it out myself is not a reliable option, as figuring out the subtle effects would require an advanced testing setup.
UPDATE: I've found a Flask application for provenance visualization which does socket.io and uses gevent/gunicorn and rdflib. So, I guess, the combination is compatible. Not sure if there are caveats.
I have run into trouble with Socket.io regarding memory leaks and scaling issues lately. My decision to use Socket.io was made over a year ago when it was undoubtedly the best library to use.
Now that Socket.io causes much trouble, I spent time looking for alternatives that became available in the meantime and think that both Engine.io and SockJS are generally well suited for me. However, in my opinion both have some disadvantages and I am not sure which one to choose.
Engine.io is basically the perfect lightweight version of Socket.io that does not contain all the features I do not require anyway. I have already written my own reconnection and heartbeat logic for Socket.io, because I was not satisfied with the default logic, and I never intended to use rooms or other features that Socket.io offers.
But, in my opinion, the major disadvantage of Engine.io is the way connections are established. Clients start with slower JSONP polling and are upgraded if they support better transports. The fact that clients which support WebSockets natively (a steadily increasing number) are penalized with a longer and less stable connection procedure than clients on outdated browsers contradicts my sense of how it should be handled.
SockJS on the other hand handles the connections exactly as I would like to. From what I have read it seems to be pretty stable while Engine.io has some issues at this time.
My app is running behind an Nginx router on a single domain, therefore I do not need the cross-domain functionality SockJS offers. Because it provides this functionality, however, SockJS does not expose the client's cookie data at all. So far I have had two-factor authorization with Socket.io via a cookie AND a query-string token, and this would not be possible with SockJS (with Engine.io it would).
I have read pretty much everything available about the pros and cons of both, but it seems there is not much being discussed or published so far, especially about Engine.io (there are only 8 questions tagged with engine.io here).
Which of the 2 libraries do you prefer and for which reason? Do you use them in production?
Which one will likely be maintained more actively and could have a major advantage over the other in the future?
Have you looked at Primus? It offers the cookie requirements you mention, it supports all of the major 'real-time'/websocket libraries available and is a pretty active project. To me it also sounds like vendor lock-in could be a concern for you and Primus would address that.
The fact that it uses a plugin system should also make it easier for you to extend if needed, and there may already be a community plugin that does what you need.
Which of the 2 libraries do you prefer and for which reason? Do you use them in production?
I have only used SockJS via the Vert.x API and it was for an internal project that I would consider 'production', but not a production facing consumer app. That said, it performed very well.
Which one will likely be maintained more actively and could have a major advantage over the other in the future?
Just looking over the commit history of Engine.io and SockJS, and the fact that Automattic is supporting Engine.io, makes me inclined to think that it will be more stable for a longer period of time, but of course that's debatable. Looking at the issues for Engine.io and SockJS is another good place to evaluate, but since they're both split over multiple repos it should be taken with a grain of salt. I'm not sure where/how Automattic is using Engine.io/Socket.io, but if it's in WordPress.com or one of their plugins, it has substantial production-at-scale battle testing.
edit: change answer to reflect cookie support confirmed by Primus author in comments below
I'd like to redirect you to this (quite detailed) discussion thread about SockJS and Engine.io
https://groups.google.com/forum/#!topic/sockjs/WSIdcY14ciI
Basically,
SockJS detects working transports before marking the connection as open. Engine.io will immediately open the connection and upgrade it later.
Flash, one of the Engine.io fallbacks (and not present in SockJS), loads slowly, and in environments behind proxies takes 3 seconds to time out. SockJS doesn't use flash and therefore doesn't need to work around this issue.
SockJS does the upgrade on start. After that you have a consistent experience: you send what you send, you receive what you receive.
Also, as far as I can tell, engine.io-client (the client-side library for engine.io) does not support requirejs builds, so that's another negative point. (SockJS builds perfectly.)
You may also consider node-walve: a complete, basic WebSocket implementation that is extremely performant, as it is fully stream-based.
Example of how to use:
var walve = require('walve');   // assuming the module is required as 'walve'
var server = require('http').createServer().listen(8080); // plain HTTP server to attach to

walve.createServer(function(wsocket) {
  wsocket.on('incoming', function(incoming) {
    // stream each incoming WebSocket message straight to stdout
    incoming.pipe(process.stdout, { end: false });
  });
}).listen(server);
It may not be the best choice if you don't feel comfortable in the Node.js environment (e.g. with extending prototypes for API sugar) or with contributing to the project (though the code is more readable than socket.io's).
I'm looking into building a service to run in the background that allows clients to connect and send commands, and get data back. I'm planning on writing the service in Ruby (as a gem) but wanted to know what the best method would be to allow clients to connect to the API?
I figured a socket connection would make sense, like you'd connect with Redis or something, but I'm not sure where to start!
Any tips would be much appreciated :)
Yep, you're on the right path. A socket is just a bidirectional communication channel that allows two programs to exchange bytes. If both endpoints are on the same machine, UNIX sockets are the obvious choice; otherwise, you'll need a TCP socket to communicate over the network. The principle is the same in either case.
On top of the socket, you'll have to define your own protocol, or you could use an existing one (such as HTTP) if it applies to your situation.
A random sockets tutorial.
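To make that concrete, here is a minimal Ruby sketch of a TCP service with a hand-rolled, line-based command protocol; the port and the PING/TIME commands are made up for illustration, not part of any existing API:

require 'socket'

# Toy line-based protocol: one command per line, one reply per line.
server = TCPServer.new('127.0.0.1', 4000)

loop do
  client = server.accept
  Thread.new(client) do |conn|
    while (line = conn.gets)
      case line.strip
      when 'PING' then conn.puts 'PONG'
      when 'TIME' then conn.puts Time.now.to_i
      else conn.puts 'ERR unknown command'
      end
    end
    conn.close
  end
end

A client can then simply open a TCPSocket to port 4000, write a line like PING, and read the one-line reply.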
Since you ask for tips: my advice is that building a service container is hard work. Since you don't actually need to build one yourself, there being lots of awesome service containers already, you should probably use one of those.
I would recommend something behind HTTP, which gives you a whole lot of advantages around existing tooling, message framing, content negotiation, scaling your service, and deployment and upgrade models.
If you want to avoid external dependencies, using something like WEBrick or Mongrel that is pure Ruby is a fine way to avoid needing to wrap Apache or Nginx around your system.
This also allows you to separate out the concerns in your project: work on building the actual service layer first, handling commands and returning responses. Run that under any web server, and get it going.
Then when you have time, focus separately on how to build the service container to meet your needs: because you know that the underlying service layer works fine, you can focus on only solving the container problems.
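As a rough sketch of that first step, a pure-Ruby WEBrick server that accepts commands over HTTP could look like this (the /command endpoint and the JSON response shape are placeholders, not a prescribed design):

require 'webrick'
require 'json'

server = WEBrick::HTTPServer.new(Port: 8000)

# Placeholder endpoint: the real service layer would dispatch on the command name.
server.mount_proc '/command' do |req, res|
  res['Content-Type'] = 'application/json'
  res.body = { command: req.query['name'], status: 'ok' }.to_json
end

trap('INT') { server.shutdown }
server.start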
If you really do want to build your own container, I strongly recommend you use something higher level than a socket. Tools like 0mq provide framing and other message layer features that you don't get from a socket, and make it much easier to focus on defining the interesting parts of your problem space - the commands - rather than low level details like parsing a wire format and protocol.
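If you go the 0mq route, a request/reply pair using the ffi-rzmq gem might look roughly like the sketch below; the endpoint address and the echo-style handling are arbitrary choices for illustration:

require 'ffi-rzmq'

context = ZMQ::Context.new
socket  = context.socket(ZMQ::REP)
socket.bind('tcp://127.0.0.1:5555')

loop do
  request = ''
  socket.recv_string(request)   # 0mq delivers a whole message; no wire-format parsing needed
  socket.send_string("handled: #{request}")
end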
I'm using a Ruby/Rails app with Redis running in the background on an EC2 server (Amazon Web Services AWS). This is the Ubuntu build I found to be easiest to work with:
Linux version 2.6.32-341-ec2 (buildd@crested) (gcc version 4.4.3 (Ubuntu 4.4.3-4ubuntu5) ) #42-Ubuntu SMP Tue Dec 6 14:56:13 UTC 2011
In my main .rb file, which does most of the polling/searching, I have these rubygems required; you should definitely check them out:
require 'aws'      # AWS client gem for talking to EC2/S3 etc.
require 'redis'    # Redis client
require 'timeout'  # standard library: bound slow network calls with Timeout.timeout
require 'json'     # standard library: JSON parsing/serialization
Let me know what you are specifically trying to do if that doesn't help you enough. Good luck!
I've built a couple of daemons with EventMachine in the past. It is efficient and powerful, supports TCP, HTTP and everything else. People even write web servers on top of it.
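For reference, a bare-bones EventMachine TCP daemon looks something like the sketch below; the port and the echo-style handler are just illustrative:

require 'eventmachine'

module CommandHandler
  # EventMachine calls this whenever bytes arrive on the connection.
  def receive_data(data)
    send_data("you sent: #{data}")
  end
end

EventMachine.run do
  EventMachine.start_server('127.0.0.1', 8081, CommandHandler)
end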
I can't find this on Google (so maybe it doesn't exist), but I'd basically like to install something on a web server so that I can run a site on Scheme. PHP is starting to annoy me and I want to get rid of it. What I want is:
Run Scheme sources towards UTF-8 output (duh)
Support for SXML, SXSLT et cetera; I plan to compose the damned thing in SXML and transform it to the normal representation at the end.
Ability to read other files from the server, write them, set permissions et cetera
Also some things to, for instance, determine the file size of files, the height of images, MIME types and all that mumbo-jumbo
(optionally) connect to a database, but for what I want to do storing the entire database in S-expressions itself is feasible enough
I don't need any fancy libraries or other things that come with it, like CMSes and what-not, except the support for SXML; but I'm sure I can just find a lib for that anyway that I can load.
Spark-Scheme has a full web server. If you don't need that, it also has a FastCGI interface so that you can serve Scheme scripts from web servers like Apache, Lighttpd etc. Spark-Scheme also seems to meet your requirements for database support, UTF-8, file handling and SXML. See the Spark-Scheme Programming Guide (pdf) for more information.
mod_lisp and FastCGI are the only two Apache modules I'm aware of that might work at this time. mod_lisp provides Scheme support because its architecture is similar to FastCGI: CGI-like parameters are sent over a socket to a second process, which remains running as the Scheme backend to the web server. Basically you use one or the other to send CGI-like parameters across a socket to a running Scheme backend.
You can find some information about these solutions here. There was another FastCGI-like effort called SCGI, which demoed a simple SCGI receiver written in Gambit Scheme. That code is probably not maintained anymore, but the Scheme receiver might be useful.
Back in the Apache 2.0 days, there were more projects playing with scheme and clisp bindings. I don't believe that mod_scheme ever released anything, but if they did, odds are it is not compatible with the modern releases of Apache.
Did you come across Fermion (http://vijaymathew.wordpress.com/2009/08/19/fermion-the-scheme-web-server/)?
If you're looking for a lispy language to develop web applications in, I'd recommend looking into Clojure. Clojure is a lisp variant that's fairly close to scheme; here is a list of some of the differences.
Clojure runs on the Java virtual machine and integrates well with Java libraries, and there's a great webapp framework available called Compojure.
Check out Chicken Scheme's Eggs Unlimited. I think what you want is a combination of the sxml- packages coupled with the fastcgi package.
PLT Scheme has a web application server here: http://docs.plt-scheme.org/web-server/index.html
Is there a gem for threadpooling anyone can recommend?
From my experience, forking/process pooling is much more effective than thread pooling in Ruby (assuming you do not need much in terms of thread communication). Some time ago I created a gem called process_pool, which is a very basic process pool with a file-based job queue (you can check it out here: http://github.com/psyho/process_pool).
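To illustrate the forking approach in plain Ruby (this is not the process_pool gem's API, just the underlying idea):

jobs = (1..20).to_a

# Split the jobs across 4 child processes and wait for all of them to finish.
# Note: fork is Unix-only.
jobs.each_slice(5) do |chunk|
  fork do
    chunk.each { |job| puts "worker #{Process.pid} handling job #{job}" }
  end
end

Process.waitall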
I would try https://github.com/ruby-concurrency/concurrent-ruby/ .
It's basically a port of the java.util.concurrent abstractions (including thread pools) to Ruby -- except that if you install it under JRuby, it'll use the java.util.concurrent stuff directly. So you can write code that'll work and do the same thing semantically (not necessarily with the same performance) under any Ruby platform.
It also offers Futures, a higher level abstraction which may be more convenient to use than thread pools.
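For example (a minimal sketch; the pool size and the jobs are arbitrary):

require 'concurrent'

pool = Concurrent::FixedThreadPool.new(4)

10.times do |i|
  pool.post { puts "job #{i} running on thread #{Thread.current.object_id}" }
end

pool.shutdown
pool.wait_for_termination

# A Future runs on a background thread; #value blocks until the result is ready.
future = Concurrent::Future.execute { 6 * 7 }
puts future.value   # => 42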