Golang, net.TCPConn, SetReadTimeout? - Windows

I've created a simple Go application on a Mac for writing and reading data to and from a TCP connection. I used the GAE Go version. Later, I ported that program to Windows, and I got this error:
Connection.SetReadTimeout undefined (type *net.TCPConn has no field or method SetReadTimeout)
I guess the net package information on the Golang website describes the package only for the GAE version. How would I properly set the timeout in a non-GAE Go version?

With the latest weekly (a.k.a. Go 1 RC2) one has to use the various Set*Deadline methods of the net.Conn type. Note that the old timeouts were relative to some event, while deadlines are absolute points in time. The background for this change is roughly: setting a [relative] timeout of 1 s seems like a good idea in some scenarios, but it applied to every event, such as receiving a single byte, thus allowing crafted transfers to avoid the timeout forever (with the corresponding DoS risk close by).
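A minimal sketch of the deadline approach (the address, buffer size and timeout below are placeholders, not values from the question):

package main

import (
	"log"
	"net"
	"time"
)

func main() {
	// Placeholder address; replace with the real host:port.
	conn, err := net.Dial("tcp", "example.com:80")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Deadlines are absolute points in time, so re-arm the deadline
	// before each read if you want a rolling per-read timeout.
	if err := conn.SetReadDeadline(time.Now().Add(5 * time.Second)); err != nil {
		log.Fatal(err)
	}

	buf := make([]byte, 4096)
	n, err := conn.Read(buf)
	if err != nil {
		// If the deadline passed, err is a net.Error with Timeout() == true.
		log.Fatal(err)
	}
	log.Printf("read %d bytes", n)
}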

Related

Automatic reconnect in case of network failures

I am testing the .NET version of ZeroMQ to understand how to handle network failures. I put the server (pub socket) on one external machine and am debugging the client (sub socket). If I stop my local Wi-Fi connection for a few seconds, ZeroMQ automatically recovers and I even get the remaining values. However, if I disable Wi-Fi for a longer time, like a minute, it just gets stuck waiting for a frame. How can I configure the period during which ZeroMQ is still able to recover? And how can I reconnect manually after, say, several minutes? How can I tell that the socket is locked up and I need to close and reopen it?
Q: "How can I configure this ... ?"
A: Use the .NET equivalents of the zmq_setsockopt() parameter settings - the family of link-management parameters such as ZMQ_RECONNECT_IVL, ZMQ_RCVTIMEO and the like.
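The question is about the .NET binding, but to keep the illustration in Go like the other sketches here (the underlying socket options are the same in every binding), a rough sketch with the pebbe/zmq4 package could look like this; the endpoint and the concrete intervals are placeholder assumptions:

package main

import (
	"log"
	"time"

	zmq "github.com/pebbe/zmq4"
)

func main() {
	sub, err := zmq.NewSocket(zmq.SUB)
	if err != nil {
		log.Fatal(err)
	}
	defer sub.Close()

	// Link-management options; the values are placeholders to tune for your network.
	sub.SetReconnectIvl(500 * time.Millisecond) // ZMQ_RECONNECT_IVL
	sub.SetReconnectIvlMax(10 * time.Second)    // ZMQ_RECONNECT_IVL_MAX
	sub.SetRcvtimeo(5 * time.Second)            // ZMQ_RCVTIMEO: Recv returns an error instead of blocking forever

	sub.SetSubscribe("")
	if err := sub.Connect("tcp://server.example.com:5556"); err != nil { // placeholder endpoint
		log.Fatal(err)
	}

	for {
		msg, err := sub.Recv(0)
		if err != nil {
			// With ZMQ_RCVTIMEO set, a stalled link surfaces here, and you can
			// decide whether to retry, reconnect, or tear the socket down.
			log.Println("recv:", err)
			continue
		}
		log.Println("got:", msg)
	}
}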
All other questions depend on your code.
If you use the blocking forms of the .recv() methods, you can easily throw yourself into unsalvageable deadlocks; it is best never to block your own code (why would one ever deliberately give up one's own code's domain of control?).
If you indeed need to understand the low-level internal link-management details, do not hesitate to use the zmq_socket_monitor() instrumentation (if it is not available in the .NET binding, you can still use another language to see the details the monitor instance reports about link state and related events).
I was able to find an answer on their GitHub: https://github.com/zeromq/netmq/issues/845. It seems the behavior is by design, as I got the same result with the native zmq library via a .NET binding.

Golang Cronjob vs time.Ticker use case

I need to implement a service for my web server that refreshes an access token from some outside REST API, because that token has a 10-minute expiration time. (This is not an access token that my server produces; it is a token I receive from an outside API that allows me to use their services for a limited time.)
For implementing timed functions in Go I've come across both cron jobs and functions using time.Ticker; however, I haven't come across any posts on the advantages/disadvantages of using one over the other and would like to know which would be the better fit for my situation.
If there is another option I'd be open to exploring it as well.
Thank you
time.Ticker is included with the Go standard library. No "cron" library is. So you reduce your external dependencies by using time.Ticker.
Cron is designed to run jobs on a specified schedule. Usually these jobs are run outside the Go program by the operating system. This isn't quite what you want. There are other job runners, and libraries called "cron" which are actually job runners, but again they're third party libraries.
A time.Ticker inside a goroutine is very simple and you can just have a nice infinite loop that fetches an API token every few minutes and sends it down a channel to wherever it's needed. That's maybe eight lines of code.
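A minimal sketch of that idea, assuming a hypothetical fetchToken standing in for the call to the outside REST API and a placeholder refresh interval:

package main

import (
	"log"
	"time"
)

// fetchToken is a stand-in for the real call to the outside REST API.
func fetchToken() (string, error) {
	return "dummy-token", nil
}

// refreshTokens fetches a fresh token immediately and then on every tick,
// sending each one down the channel to whoever needs it.
func refreshTokens(interval time.Duration, out chan<- string) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		tok, err := fetchToken()
		if err != nil {
			log.Println("token refresh failed:", err)
		} else {
			out <- tok
		}
		<-ticker.C
	}
}

func main() {
	tokens := make(chan string)
	// Refresh well before the 10-minute expiry; the interval is a placeholder.
	go refreshTokens(8*time.Minute, tokens)
	for tok := range tokens {
		log.Println("new token:", tok)
		// Store it wherever the request handlers can read it.
	}
}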

Profiling Golang server

I want to profile a simple webserver that I wrote in Go. It accepts requests, maps each request to an Avro object and sends it to Kafka in a goroutine. The requirement is that it answers immediately and sends the object to Kafka later. Locally it answers in under 1 ms on average. I have been trying to profile it by starting the program with the davecheney/profile package and sending test requests with jmeter. I can see in the output that the profile file is generated, but it remains empty no matter how long jmeter keeps sending requests. I'm running it on a Mac with El Capitan. I read that there were some issues with profiling on Mac, but that it should work on El Capitan. Do you have any tips?
Firstly, I'm not sure whether you're trying to do latency profiling. If so, be aware that Go's CPU profiler only reports time spent by a function executing on the CPU and doesn't include time spent sleeping, etc. If CPU Profiling really is what you're looking for, read on.
If you're running a webserver, just include the following in your imports (in the file where main() is) and rebuild:
import _ "net/http/pprof"
Then, while applying load through jmeter, run:
go tool pprof /path/to/webserver/binary http://<server>/debug/pprof/profile
The net/http/pprof package provides profiling hooks that allow you to profile your webserver on demand any time, even while it's running in production. You may want to use a different, firewalled port for it, though, if your webserver is meant to be exposed publicly.
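For completeness, a hedged sketch of how that wiring could look; the port numbers are placeholders, and the separate localhost-only listener is the "firewalled port" idea mentioned above:

package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/* handlers on http.DefaultServeMux
)

func main() {
	// Profiling endpoints on a separate, localhost-only port (placeholder :6060).
	go func() {
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()

	// The real webserver uses its own mux, so the pprof handlers stay off the public port.
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})
	log.Fatal(http.ListenAndServe(":8080", mux))
}

Note that /debug/pprof/profile samples the CPU for 30 seconds by default, so keep jmeter applying load for at least that long while go tool pprof collects the profile.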

Creating proxy between application queries and Internet

Is it possible (for example with C++, but it does not really matter) to create a bridge/proxy application to capture the data requested by another application? To be more detailed, I'm talking about an Adobe AIR-based game. (I want to create a report with stats based on the data acquired, but that is not actually part of this question.)
Rather than a simple "boolean" answer, please provide a link to an example or documentation. Thanks
It is certainly possible and, depending on your target operating system, may require a fair amount of effort, which raises the question: is there a reason you cannot use Fiddler or some packet-sniffing software for your target OS?
You can write a proxy by hand; in Python it can be quite easy. All you have to do is set localhost as the proxy, then forward the request to the real destination and pass the response back to the calling socket.
I started writing something like this some time ago. The idea was to write a simple replacement for dansguardian.
I've uploaded it on GitHub so you can give it a look if it helps.
I do not remember it well (I started writing it last year), but with some modifications it may fit your requirements.
Conceptually, this is your configuration:
app_client -> [app_channel] -> proxy -> [server_channel] -> app_server
Your proxy starts a server socket, the app_client connects to it. This is our app_channel. Now your proxy creates a connection to the app_server. This is your server_channel.
Now start two threads: one reads from the app_channel and writes to the server_channel, the other reads from the server_channel and writes to the app_channel.
This will create a transparent connection to the app_server via your proxy. You can extract the data as you wish. If the data is encrypted though, there's very little you can actually do by way of analysis.
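The answer above mentions Python; as a sketch in Go following exactly that app_channel/server_channel layout (the listen address and the game-server address are placeholder assumptions), the two-direction relay could look like this:

package main

import (
	"io"
	"log"
	"net"
)

func main() {
	// The app_client is pointed at this local address (placeholder).
	ln, err := net.Listen("tcp", "127.0.0.1:9000")
	if err != nil {
		log.Fatal(err)
	}
	for {
		app, err := ln.Accept() // app_channel
		if err != nil {
			log.Fatal(err)
		}
		go func(app net.Conn) {
			defer app.Close()
			// server_channel: connect to the real app_server (placeholder address).
			srv, err := net.Dial("tcp", "game.example.com:8443")
			if err != nil {
				log.Println("dial:", err)
				return
			}
			defer srv.Close()
			// Two copy loops, one per direction; wrap either side with a
			// logging reader/writer to extract the data you want to report on.
			go io.Copy(srv, app) // app_client -> app_server
			io.Copy(app, srv)    // app_server -> app_client
		}(app)
	}
}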

Performance of accessing a Mono server application via remoting

This is my setting: I have written a .NET application for local client machines, which implements a feature that could also be used on a webpage. To keep this example simple, assume that the client installs a software into which he can enter some data and gets some data back.
The idea is to create a webpage that holds a form into which the user enters the same data and gets the same results back as above. Due to the company's available web servers, the first idea was to create a Mono web service, but this was dismissed for reasons unknown. The "service" is not to be run as a web service, but should be called by a PHP script. This is currently realized by calling the Mono application via shell_exec from PHP.
So now I am stuck with a Mono port of my application, which works fine but takes way too long to execute. I have already stripped out all unnecessary DLLs, methods, etc., but calling the application via the command line - submitting the desired data via command-line parameters - takes approximately 700 ms. We expect about 10 hits per second, so this could only work by setting up a lot of servers for this task.
I assume the 700 ms are related to the cost of starting the application every time, because it does not make much difference in terms of time whether I handle the request once or five hundred times (I take the original input, vary it slightly and do 500 iterations with "new" data every time; starting from the second iteration, the processing time drops to approximately 1 ms per iteration).
My next idea was to set up the Mono application as a remoting server, so that it only has to be started once and can then handle incoming requests. I therefore wrote another Mono application that serves as the client. Calling the client, letting it pass the data to the server and retrieving the result now takes 344 ms. This is better, but still way slower than I would expect and want it to be.
I then implemented a new project from scratch based on this blog post and got stuck with the same performance issues.
The question is: am I missing something related to the Mono projects that could improve the speed of the client/server? Although the idea of creating a web service for this task was dismissed, would a web service perform better under these circumstances (as I would not need the client application to call the service), even though it is said that remoting is faster than web services?
I could have made that clearer, but implementing a webservice is currently not an option (and please don't ask why, I didn't write the requirements ;))
Meanwhile I have checked that it is indeed the startup of the client that takes most of the time in the remoting scenario.
I could imagine accessing the server via pipes from the command line, which would be perfectly suitable in my scenario. I guess this would be done using sockets?
You can try using AOT to reduce the startup time. On .NET you would use ngen for that purpose; on Mono, just run mono --aot on all assemblies used by your application.
AOT'ed code is slower than JIT'ed code, but has the advantage of reducing startup time.
You can even try to AOT framework assemblies such as mscorlib and System.
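For example (the assembly name and the framework path below are placeholders; the exact paths depend on your Mono installation):

mono --aot MyApp.exe
mono --aot /usr/lib/mono/4.5/mscorlib.dll
mono MyApp.exe   # at run time, Mono picks up the generated native images automatically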
I believe that remoting is not an ideal thing to use in this scenario. However, your idea of keeping Mono running on the server instead of starting it every time is indeed solid.
Did you consider using SOAP webservices over HTTP? This would also help you with your 'web page' scenario.
Even if it is a little too slow for you, in my experience a custom RESTful service implementation would be easier to work with than remoting.
