Apple Push Notification in Erlang (or improved in Ruby?)

I currently have an Apple Push Notification running on my server in Ruby. I'd like to get one going in Erlang as I'd like to use a supervisor to keep watch over it. Does anyone have any code that they could help me with?
Here's my Ruby code. One thing I don't like about the current implementation is that it doesn't seem to stay connected - it disconnects 2-3 times a day, and after I reconnect the first push seems not to go through:
context = OpenSSL::SSL::SSLContext.new
context.cert = OpenSSL::X509::Certificate.new(File.read(cert))
context.key = OpenSSL::PKey::RSA.new(File.read(cert)) # the same PEM file holds both cert and key

def connect_sockets(server, context)
  sock = TCPSocket.new(server, 2195) # 2195 = APNs binary-protocol gateway port
  ssl = OpenSSL::SSL::SSLSocket.new(sock, context)
  ssl.connect
  return sock, ssl
end
sock, ssl = connect_sockets(server, context) # this is called to initially connect and also reconnect whenever disconnected.
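One way to soften the reconnect problem described above is to treat every write as potentially hitting a stale socket: rescue the error, reconnect, and resend once. A minimal sketch in the same style (the send_push name and the one-retry policy are my additions, not from the original code):

# Hypothetical wrapper: reconnect and resend once if the connection went stale.
def send_push(payload, server, context)
  @sock, @ssl = connect_sockets(server, context) if @ssl.nil?
  @ssl.write(payload)
rescue Errno::EPIPE, Errno::ECONNRESET, OpenSSL::SSL::SSLError
  @ssl.close rescue nil
  @sock, @ssl = connect_sockets(server, context)
  @ssl.write(payload) # retry exactly once on a fresh connection
end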
If Erlang Push isn't doable then I wouldn't mind sticking to my Ruby one as long as I can keep my connections alive, and perhaps supervise it through Erlang. Does anyone know if any of this is possible?

This related question on Apple Push Notifications with Erlang might also be useful.

The HTTP client (with SSL support) that ships with Erlang (httpc in the inets application) works reasonably well, though I can't say I have battle-tested it. The relevant documentation is in the inets section of the Erlang docs.
1) Don't forget to call inets:start() in your application before attempting any HTTP calls.
2) In my (limited) experience, starting inets can be a bit tricky: don't try starting it from within your supervisor module or your servers won't work. I usually call inets:start() in the first server module of my application, before any other servers that need HTTP.
3) To perform the 'push' operation, I guess you would need to use the stream option.

You might also check out the apn_on_rails project.
If you come up with an Erlang implementation, please consider sharing it with us :).

Related

Automatic reconnect in case of network failures

I am testing the .NET version of ZeroMQ to understand how to handle network failures. I put the server (PUB socket) on an external machine and am debugging the client (SUB socket) locally. If I drop my local Wi-Fi connection for a few seconds, ZeroMQ automatically recovers and I even receive the remaining values. However, if I disable Wi-Fi for longer, say a minute, it just gets stuck waiting on a frame. How can I configure the period within which ZeroMQ is still able to recover? How can I reconnect manually after, say, several minutes? And how can I tell that the socket is stuck and I need to close and reopen it?
Q: "How can I configure this ...?"
A: Use the .NET equivalents of the zmq_setsockopt() parameter settings, the family of link-management parameters like ZMQ_RECONNECT_IVL, ZMQ_RCVTIMEO and the like.
All other questions depend on your code.
If you use the blocking forms of the .recv() methods, you can easily throw yourself into an unsalvageable deadlock; best never to block your own code (why would one ever deliberately give up control of one's own code's domain?).
If you need to understand the low-level link-management details, do not hesitate to use the zmq_socket_monitor() instrumentation (if it is not available in the .NET binding, you can still use another language to watch the link-state and related events a monitor instance reports).
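To make the option names concrete, here is roughly what that tuning looks like from Ruby with the ffi-rzmq binding; this is only a sketch, and I am assuming the constant names carry over one-for-one to the .NET binding (the values are arbitrary):

require 'ffi-rzmq'

context = ZMQ::Context.new
sub = context.socket(ZMQ::SUB)
sub.setsockopt(ZMQ::SUBSCRIBE, '')            # subscribe to everything
sub.setsockopt(ZMQ::RECONNECT_IVL, 100)       # first reconnect attempt after 100 ms
sub.setsockopt(ZMQ::RECONNECT_IVL_MAX, 5000)  # back off up to 5 s between attempts
sub.setsockopt(ZMQ::RCVTIMEO, 1000)           # recv errors out after 1 s instead of blocking forever
sub.connect('tcp://remote-host:5556')         # placeholder endpoint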
I was able to find an answer on their GitHub: https://github.com/zeromq/netmq/issues/845. It seems the behavior is by design, as I got the same result with the native zmq lib via a .NET binding.

Best practice for updating Go web application

I am wondering what the best practice would be for deploying updates to an (MVC) Go web application. Imagine the following scenario:
1) Code and test some changes for my Go web application.
2) Deploy the update without interrupting anyone currently using the previous version.
I don't know how to make sure point 2) is covered: when somebody sends a request to the server and I rebuild/restart it at exactly that moment, they get an error - even if the request only touches code I did not change or that is backwards-compatible, or if I merely added a new request handler.
Maybe I'm missing something trivial or a well-known pattern, as I am just learning Go; my previous web applications were ASP.NET or PHP applications, where this was not an issue because I did not need to restart the web server on code changes.
This is not just an issue with Go; in general we can divide the problem into two separate ones:
Making sure current requests do not get terminated and affect user experience.
Making sure there is no down-time in which new requests cannot be handled.
The first one is easier to tackle: you don't kill your server violently, but tell it to exit, entering a "drain phase" in which it accepts no new requests, finishes the currently running ones, and then exits. This can be done by listening for signals, for example, and putting the app into a special state; a toy sketch of the pattern follows below.
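Language aside, the drain phase is just a flag flipped by a signal handler. Here is a bare-bones illustration in Ruby (the port and the one-second poll interval are arbitrary choices of mine):

require 'socket'

server = TCPServer.new(8080)
draining = false
workers = []
trap('TERM') { draining = true }   # enter the drain phase on SIGTERM

until draining
  # Poll so the accept loop can notice the flag instead of blocking forever.
  next unless IO.select([server], nil, nil, 1)
  client = server.accept
  workers << Thread.new(client) do |c|
    c.write("HTTP/1.1 200 OK\r\n\r\nhello")
    c.close
  end
end

server.close            # refuse new connections...
workers.each(&:join)    # ...but let in-flight requests finish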
With Go specifically it is not trivial, since the default http server does not support being shut down; but you can start a server with a net.Listener, keep a reference to it, and close it when the time is due. (Newer Go releases have since added http.Server.Shutdown, which does exactly this.)
Now, doing only approach one and then starting the service again means new requests are not accepted while the restart is in progress, and we all know that can take a number of seconds in extreme cases.
So what we need is another instance of the server already running with the new code, the instant the old one is not responding to new requests, right? That can be done in several ways:
Having more than one server, and a load-balancer on top of them, allowing one (or more) server to take the load while we restart another. That's the simplest way, and the way most people do it. If you need N servers to take the load of your users, just keep N+1 and restart one at a time.
Using socket-sharing tricks. On newer Linux kernels, many processes can listen and accept on the same port. What you do is simply start the new instance and then tell the old one to finish and exit; this way there is no pause. This is done by setting SO_REUSEPORT on the listening socket, as in the sketch below.
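A minimal Ruby illustration of the SO_REUSEPORT mechanism (the port is a placeholder; run the script twice and both processes will accept connections, so the old one can drain while the new one serves):

require 'socket'

# Both the old and the new instance run this; the kernel (Linux 3.9+)
# load-balances incoming connections across all listeners on the port.
sock = Socket.new(:INET, :STREAM)
sock.setsockopt(:SOCKET, :REUSEPORT, true)
sock.bind(Addrinfo.tcp('0.0.0.0', 8080))
sock.listen(128)

loop do
  client, _addr = sock.accept
  client.write("served by pid #{Process.pid}\n")
  client.close
end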
This approach can be automated with ready-to-ship solutions like Einhorn, which deals with all the details for you; see https://github.com/stripe/einhorn
Another approach is documented in this blog post: http://blog.nella.org/?p=879

Reusing connections between threads in Ruby / replacement for Net::HTTP::Persistent

I'm running a multithreaded daemon where an instance of Ruby Mechanize (which contains a Net::HTTP::Persistent object) might be used and run by any one of many threads. I'm running into tons of problems because Net::HTTP::Persistent opens a new connection for each thread that runs it, so if I have 50 threads, I end up opening 50 times more connections than I need! I've tried subclassing and patching Net::HTTP::Persistent to store its connection information on the class instead of in Thread.current, but then I keep getting
too many connection resets (due to Broken pipe - Errno::EPIPE)
all over the place. Any thoughts? Does anyone know an alternative library to Net::HTTP::Persistent that I could use, and hopefully patch Mechanize with easily?
The problem is that if you access a Net::HTTP::Persistent object from another thread while that object is in the middle of something, the thread either has to block (stop execution and wait for the object to finish what it is doing) or create a new object and work with that. With threading, you could be in the middle of an HTTP request (forgive me, I'm making assumptions here) when, all of a sudden, another thread wants to make an HTTP request over the same connection, which breaks everything (probably why you got the connection-reset errors).
If you really want threading, your options are to keep however many connections open, or to wait for a connection to become free so you can use it.
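The second option is essentially a connection pool. Here is a minimal sketch built on Ruby's thread-safe Queue (the HttpPool class, its size, and the block API are my own, purely illustrative):

require 'net/http'

# Hypothetical fixed-size pool: `size` connections shared by any number of threads.
class HttpPool
  def initialize(host, port, size)
    @pool = Queue.new
    size.times do
      http = Net::HTTP.new(host, port)
      http.start          # open the TCP connection once, up front
      @pool << http
    end
  end

  # A thread blocks here until a connection is free, so no connection
  # is ever used by two threads at the same time.
  def with_connection
    http = @pool.pop
    yield http
  ensure
    @pool << http if http
  end
end

pool = HttpPool.new('example.com', 80, 5)
threads = 50.times.map do
  Thread.new { pool.with_connection { |http| http.get('/') } }
end
threads.each(&:join)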
Reverting to Mechanize 1.0.0 solved the problem. Persistent connections are handled in a more reliable, multithreading-friendly way in 1.0 than in Net::HTTP::Persistent, which Mechanize 2+ uses. My advice: stick with Mechanize 1.0; it's more reliable, hits fewer errors, and doesn't have these crazy problems with multithreaded code.
Note: Unlike some of the comments may suggest, Mechanize 1.0 DOES implement persistent connections: take a look at the source code, or verify with Wireshark.

How can I create a reliable and fast network daemon with Ruby?

I am trying to create a Ruby daemon process which clients will be able to connect to.
I need to ensure that the remote Ruby process always remains up and available for connection, so I need to detect network outages or unreachable errors.
I was thinking of having a heartbeat mechanism at the application level between clients and the server, and a timeout in the client if the connection fails.
I was told the select method in Ruby could help here as well, but I'm not sure.
Can anyone share any good links/resources or impart some general wisdom to create reliable and fast daemon processes in Ruby?
I think a lot of people would use EventMachine for this type of application. At its core it uses epoll (which is similar to select) to decide which socket to deal with next. There are lots of gems that build on EventMachine to let you run different types of servers; one example is em-websocket.
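As a starting point, the server side of an application-level heartbeat can be very small with EventMachine. In this sketch the PING/PONG protocol and the port are invented for illustration; the client would reconnect whenever a PONG fails to arrive within its timeout:

require 'eventmachine'

# Toy heartbeat protocol: the client sends "PING\n", the server answers "PONG\n".
module HeartbeatServer
  def receive_data(data)
    send_data("PONG\n") if data.start_with?('PING')
  end
end

EM.run do
  EM.start_server('0.0.0.0', 9000, HeartbeatServer)
  puts "daemon listening on 9000 (pid #{Process.pid})"
end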

Creating proxy between application queries and Internet

Is it possible (for example with C++, but it does not really matter) to create a bridge/proxy application to capture the data requested by another application? To be more specific, I'm talking about an Adobe AIR based game. (I want to create a report with stats based on the data acquired, but that is not actually part of this question.)
Rather than a simple yes/no answer, please provide a link to an example or documentation. Thanks.
It is certainly possible, though depending on your target operating system it may require a fair amount of effort, which begs the question: is there a reason you cannot use Fiddler or some packet-sniffing software for your target OS?
You can write a proxy by hand; in Python it can be quite easy. All you have to do is set localhost as the proxy, then forward each request and pass the response back to the calling socket.
I started writing something like this some time ago; the idea was to write a simple replacement for DansGuardian.
I've uploaded it to GitHub, so you can give it a look and see if it helps.
I don't remember it well (I started writing it last year), but maybe with some modification it can fit your requirements.
Conceptually, this is your configuration:
app_client -> [app_channel] -> proxy -> [server_channel] -> app_server
Your proxy starts a server socket, the app_client connects to it. This is our app_channel. Now your proxy creates a connection to the app_server. This is your server_channel.
Now start two threads: one reads from the app_channel and writes to the server_channel, while the other reads from the server_channel and writes to the app_channel.
This creates a transparent connection to the app_server via your proxy, and you can extract the data as you wish (a bare-bones sketch follows). If the data is encrypted, though, there's very little you can actually do by way of analysis.
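For concreteness, here is that two-thread pump as a minimal Ruby sketch. The listen port and upstream address are placeholders, it handles one client at a time, and a real proxy would add error handling; to log or analyze traffic for your stats report, replace IO.copy_stream with a read/inspect/write loop:

require 'socket'

LISTEN_PORT = 8888                          # where the app_client connects
UPSTREAM    = ['app.example.com', 80]       # the real app_server (placeholder)

proxy = TCPServer.new(LISTEN_PORT)
loop do
  app_channel    = proxy.accept             # app_client -> proxy
  server_channel = TCPSocket.new(*UPSTREAM) # proxy -> app_server

  # One thread per direction, exactly as described above.
  up = Thread.new do
    IO.copy_stream(app_channel, server_channel)
    server_channel.close_write rescue nil
  end
  down = Thread.new do
    IO.copy_stream(server_channel, app_channel)
    app_channel.close_write rescue nil
  end

  [up, down].each(&:join)
  [app_channel, server_channel].each { |s| s.close rescue nil }
end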
