Does rdflib's query machinery (e.g., SPARQL/SPARQL Update) work well with gevent in an asynchronous web framework setting (gunicorn/Pyramid, Flask, ...)? One of the goals is to offload longer-running tasks (not so long that they would require an external message queue) into their own greenlet workers.
I can't find any definitive pointers either way. And testing it myself is not a reliable option, as uncovering subtle effects would require an advanced test setup.
UPDATE: I've found a Flask application for provenance visualization that uses socket.io together with gevent/gunicorn and rdflib. So I guess the combination is compatible; I'm just not sure whether there are caveats.
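For what it's worth, rdflib's in-memory stores are pure Python and do no socket I/O of their own, so a greenlet offload is mostly a question of CPU-bound blocking rather than outright incompatibility. A minimal sketch of the pattern (the file name and query are placeholders):

import gevent
from rdflib import Graph

graph = Graph()
graph.parse("data.ttl", format="turtle")  # placeholder dataset

def run_query(q):
    # Runs inside its own greenlet. Note that rdflib's query evaluation
    # is CPU-bound pure Python, so a long query will still monopolize
    # the event loop until it finishes; gevent does not preempt.
    return list(graph.query(q))

job = gevent.spawn(run_query, "SELECT ?s WHERE { ?s ?p ?o } LIMIT 10")
rows = job.get()  # wait for the greenlet and fetch its result

This is only a sketch; how much you gain depends on whether the store does real I/O (e.g. a SPARQLStore talking to an endpoint over HTTP would cooperate nicely with gevent's monkey-patched sockets, while a purely in-memory store does not yield at all).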
The documentation for mod_spdy says it is not compatible with mod_php, as mod_php is not thread-safe:
https://developers.google.com/speed/spdy/mod_spdy/php
Much like the Apache Worker MPM, mod_spdy is a multithreaded module, and processes multiple SPDY requests from the same connection simultaneously. This poses a problem for other Apache modules that may not be thread-safe, such as mod_php. Fortunately, it is fairly easy to adjust your Apache configuration to make your existing PHP code safe to use with mod_spdy (and with the Worker MPM as well).
I have tried using SPDY with mod_php and I haven't had any issues. What is the danger of doing this?
The PHP core has been thread-safe since PHP 5. However, many of the extensions, and the libraries those extensions use, are not.
If you're not using those extensions, you're probably not going to run into any problems. If you are, you might get segfaults, other memory access violations, or just strange behavior.
A partial list is available on the PHP site. Unfortunately, there doesn't seem to be a conclusive list of thread-safe and thread-unsafe extensions.
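If you want to stay on the safe side regardless, the usual fix (and the one Google's mod_spdy page points at) is to move PHP out of Apache's threaded workers entirely and run it over FastCGI. A rough sketch using mod_fcgid; the module path and php-cgi location are assumptions for your particular system:

# Serve .php through external FastCGI processes instead of in-process
# mod_php, so Apache's threads never execute PHP code themselves.
LoadModule fcgid_module modules/mod_fcgid.so

<FilesMatch "\.php$">
    SetHandler fcgid-script
    Options +ExecCGI
</FilesMatch>

# Path to the PHP CGI binary is system-specific (an assumption here).
FcgidWrapper /usr/bin/php-cgi .php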
Our requirements for a real-time web framework include:
lightweight framework
Scala support on the server side
flexible communication mechanism: may be Ajax, Server-Sent Events, or WebSocket
relatively few changes required to the client HTML:
e.g. using a WebSocket JS library is fine
introducing significant compile-time/server-side page processing is not, e.g. Play routing annotations are not acceptable
must have working examples for both:
web clients
server-to-server communications
fully functional build; preferably sbt, but maven may be acceptable
I have evaluated the following frameworks, and each of them has one or more drawbacks that make its use within our application less than desirable.
Play: somewhat heavy, but more importantly it introduces custom annotations/processing into the HTML page. We need VANILLA HTML pages.
Spray: closer to the mark. But although I found a number of example applications, the actor-based communication does not work in those examples. The SimpleServer example has a built-in "cases" counter (fed from SimpleClient) that does not work as given; it could certainly be made to work... eventually.
Atmosphere: lacking examples.
Jetty, Netty: lacked fully functional examples buildable within sbt or maven.
Socko: the project's markdown essentially stipulates using Eclipse/Scala-IDE for running tests and doing development. That is a non-starter for us (an IntelliJ shop). It was unclear how to run the examples and/or start their servers from sbt or the command line.
I ended up writing a fair amount of custom code wrapped around Netty. After it is in better shape I may drop it on GitHub.
http://xitrum-framework.github.io/ is actively developed and includes SockJS support. It is rather lightweight; you can annotate routes directly on actors and they become exposed on the web.
I have run into trouble with Socket.io regarding memory leaks and scaling issues lately. My decision to use Socket.io was made over a year ago when it was undoubtedly the best library to use.
Now that Socket.io is causing so much trouble, I have spent time looking for alternatives that have become available in the meantime, and I think both Engine.io and SockJS are generally well suited for me. However, in my opinion both have some disadvantages, and I am not sure which one to choose.
Engine.io is basically the perfect lightweight version of Socket.io: it does not contain all the features I do not require anyway. I have already written my own reconnection and heartbeat logic for Socket.io, because I was not satisfied with the defaults, and I never intended to use rooms or the other features Socket.io offers.
But, in my opinion, the major disadvantage of Engine.io is the way connections are established. Clients start with slower JSONP polling and are upgraded if they support better transports. The fact that clients which natively support WebSockets (a steadily increasing number) are penalized with a longer and less stable connection procedure compared to clients on outdated browsers contradicts my sense of how this should be handled.
SockJS, on the other hand, handles connections exactly as I would like. From what I have read it seems to be pretty stable, while Engine.io has some issues at this time.
My app runs behind an Nginx router on a single domain, so I do not need the cross-domain functionality SockJS offers. Because it provides this functionality, however, SockJS does not expose the client's cookie data at all. So far I have had two-factor authorization with Socket.io via a cookie AND a query-string token, and this would not be possible with SockJS (with Engine.io it would).
I have read pretty much everything available about the pros and cons of both, but not much seems to have been discussed or published so far, especially about Engine.io (there are only 8 questions tagged with engine.io here).
Which of the 2 libraries do you prefer and for which reason? Do you use them in production?
Which one will likely be maintained more actively and could have a major advantage over the other in the future?
Have you looked at Primus? It meets the cookie requirements you mention, supports all of the major 'real-time'/WebSocket libraries available, and is a pretty active project. It also sounds to me like vendor lock-in could be a concern for you, and Primus would address that.
The fact that it uses a plugin system should also (a) make it easier for you to extend if needed, and (b) mean there may already be a community plugin that does what you need.
Which of the 2 libraries do you prefer and for which reason? Do you use them in production?
I have only used SockJS via the Vert.x API, and it was for an internal project that I would consider 'production', but not a production-facing consumer app. That said, it performed very well.
Which one will likely be maintained more actively and could have a major advantage over the other in the future?
Just looking over the commit histories of Engine.io and SockJS, and given that Automattic is backing Engine.io, I am inclined to think it will be more stable for a longer period of time, but of course that's debatable. Looking at the issues for Engine.io and SockJS is another good way to evaluate them, but since both projects are split over multiple repos, that should be taken with a grain of salt. I'm not sure where or how Automattic is using Engine.io/Socket.io, but if it's in WordPress.com or one of their plugins, it has had substantial production-at-scale battle testing.
Edit: changed the answer to reflect the cookie support confirmed by the Primus author in the comments below.
I'd like to redirect you to this (quite detailed) discussion thread about SockJS and Engine.io
https://groups.google.com/forum/#!topic/sockjs/WSIdcY14ciI
Basically:
SockJS detects working transports before marking the connection as open. Engine.io will immediately open the connection and upgrade it later.
Flash, one of the Engine.io fallbacks (not present in SockJS), loads slowly and, in environments behind proxies, takes 3 seconds to time out. SockJS doesn't use Flash and therefore doesn't need to work around this issue.
SockJS does the upgrade on start. After that you have a consistent experience: you send what you send, you receive what you receive.
Also, as far as I can tell, engine.io-client (the client-side library for engine.io) does not support RequireJS builds, so that's another negative point. (SockJS builds perfectly.)
You may also consider node-walve: complete WebSocket basics, and extremely performant since it is fully stream-based.
Example of how to use:
var walve = require('walve');

// The snippet passes an existing Node HTTP server to .listen(),
// so create one here (the port is arbitrary).
var server = require('http').createServer().listen(8080);

walve.createServer(function(wsocket) {
  wsocket.on('incoming', function(incoming) {
    // Stream each incoming WebSocket frame to stdout without ending it.
    incoming.pipe(process.stdout, { end: false });
  });
}).listen(server);
It may not be the best choice if you don't feel at home in the Node.js environment (it e.g. extends prototypes for API sugar) or in contributing to the project (though the code is more readable than socket.io's).
It is unbelievable that ZeroMQ uses select() on Windows; I didn't know that until I had completed my code and started performance testing. They should present this information on their website in a big red font.
Is there any way to replace ZeroMQ's select()?
IOCP follows the proactor model and can't easily be integrated into ZeroMQ. What about WSAEventSelect? That is also a reactor model and has performance close to poll().
Another choice for me is http://nanomsg.org/, but it is still alpha.
One of the main objectives of ZeroMQ is to provide a consistent API for communication between threads, processes, nodes, and clusters. Protocol-specific optimization is outside that scope because of the ways it can affect other areas of communication. For example, shared memory would be a better form of IPC, but UNIX domain sockets make a consistent API easier. It would also be nice to know when an endpoint disconnects, but how would you implement such behavior between threads?
Their main goal is to allow every pattern to work the same way regardless of topology, protocol, system, or language, to the point that any mixture can be used, however odd it may seem (Node.js WebSockets communicating with C# brokers passing messages to Ruby and PHP workers which share work with Java threads, etc.).
Each of its features would be greatly enhanced if optimised for each specific protocol and system, but that would also make uniform patterns close to impossible.
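To make that uniformity concrete, here is a minimal sketch in pyzmq (an assumption: the question is presumably about the C/C++ API, but the pattern is identical in any binding). The only thing that changes between a thread-to-thread and a host-to-host PUSH/PULL pipeline is the endpoint string:

import zmq

ctx = zmq.Context.instance()

receiver = ctx.socket(zmq.PULL)
sender = ctx.socket(zmq.PUSH)

# Swap this for "ipc:///tmp/work" or "tcp://127.0.0.1:5555" and the
# rest of the code stays byte-for-byte identical.
endpoint = "inproc://work"
receiver.bind(endpoint)   # inproc requires bind before connect
sender.connect(endpoint)

sender.send(b"job-1")
print(receiver.recv())    # b"job-1"

A select()-free, IOCP-tuned TCP path would be invisible at this level, which is why they treat it as an internal optimization problem rather than an API one.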
BTW, they might accept a patch if you could find a way to implement IOCP while still maintaining this versatility and neutrality.
P.S.: nanomsg is made by one of the main original developers of ZeroMQ. Crossroads I/O is a direct fork of ZeroMQ, also by original ZeroMQ developers, including some developers of nanomsg. If I'm not mistaken, nanomsg will likely become the core of Crossroads when complete.
I am looking into using ZeroMQ as the messaging/transport layer for a fairly large distributed system, mainly targeting monitoring and data collection (many producers, a few consumers).
As far as I can see there are currently two different implementations of the same concept: ZeroMQ and Crossroads I/O, the latter being a fork of ZeroMQ (in 2012?).
I am trying to figure out which one to use and wonder about the differences between them, but have so far not found much information regarding this.
For example:
Are they compatible on the wire?
Are they API compatible, i.e. some kind of common base API, possibly with different add-ons?
Do they both implement support for ZMTP (ZeroMQ Message Transport Protocol)?
Do they share some kind of common understanding of future development or will they continue in two separate and possible different directions?
What are the pros/cons in relation to the other?
Basically, how does one choose one over the other?
Crossroads I/O has been pretty dead since Martin Sustrik started on a new stack, in C, called nanomsg: https://github.com/250bpm/nanomsg
Crossroads I/O does not, AFAIK, implement ZMTP/1.0 or ZMTP/2.0, but its own version of the protocol.
Nano has pluggable transports, and we'll probably make a ZMTP transport for it. Nano is really nice, a rethinking of the original libzmq library, and if it's successful it would make a good new kernel.
Ideally, Nano would interoperate at both the API and the protocol level, so it would be a pluggable replacement for libzmq. It does have quite a long way to go, though.
Note that there are now several rewrites of libzmq emerging, including JeroMQ (Java) and NetMQ (C#). These two do implement ZMTP/1.0 and ZMTP/2.0 properly. There are also other libraries like Axon (https://github.com/visionmedia/axon) which are heavily inspired by 0MQ but not compatible.
Based on experience, users value interoperability more than almost anything else, so it's quite likely that different 0MQ-like stacks will end up speaking the same protocols.