I've been working with a Go application lately that does network I/O over a bunch of protocols: HTTP (TCP), DNS (UDP), WHOIS, and a few others. Some of them make use of third-party APIs.
The application is open source, so I would like to make changes that let me specify the network interface for its sockets to bind to, allowing me to use different interfaces depending on a runtime flag. The only way around this without writing code would be to modify the system-wide routing table each time I want to use a different interface, which isn't a very appealing solution.
Before I go and modify every place where a Dialer is used (or try to create a wrapper that they can all use): is there a Go feature that would let me set the interface globally, once, so that the various Dialer invocations would "just work" and adhere to the interface I specified?
I did some searching and have only found ways to do this when each Dialer is created (by setting Dialer.LocalAddr and using DialContext), but given that I'm really a C programmer and not a Go programmer, I realize I may be totally missing a Go idiom for doing something like this.
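For reference, here is a minimal sketch of the per-Dialer approach mentioned above (setting LocalAddr and dialing through DialContext). The interface name "eth1", the choice of its first address, and the target host are assumptions for illustration only, not anything from an actual codebase:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"net"
)

// dialVia dials target over TCP using the first address of the named
// interface as the local (source) address. Note that on Linux this only
// fixes the source IP; the route is still chosen by the kernel's routing
// table, so truly pinning the interface may additionally require
// SO_BINDTODEVICE via Dialer.Control.
func dialVia(ifaceName, target string) (net.Conn, error) {
	iface, err := net.InterfaceByName(ifaceName)
	if err != nil {
		return nil, err
	}
	addrs, err := iface.Addrs()
	if err != nil {
		return nil, err
	}
	if len(addrs) == 0 {
		return nil, fmt.Errorf("no addresses on %s", ifaceName)
	}
	ipNet, ok := addrs[0].(*net.IPNet)
	if !ok {
		return nil, fmt.Errorf("unexpected address type %T", addrs[0])
	}
	d := &net.Dialer{LocalAddr: &net.TCPAddr{IP: ipNet.IP}}
	return d.DialContext(context.Background(), "tcp", target)
}

func main() {
	// "eth1" and the target host are placeholders.
	conn, err := dialVia("eth1", "example.com:80")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	fmt.Println("connected from", conn.LocalAddr())
}
```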
Is there a way to call Go functions from jsonnet?
Now that there is a Go port of jsonnet, and projects such as ksonnet are adding custom native functions, I am wondering whether there is a way to extend jsonnet with more native functions.
I have many packages written in Go (with unit tests, etc.), and it now seems I will need to rewrite some of them in jsonnet.
As discussed in the go-jsonnet issue Custom builtin functions #223, you can introduce your own Go functions, but pluggable support is not available: you cannot use such functions directly from the stock jsonnet binary.
You need to compile your own binary/library that creates a jsonnet VM and then registers your native functions on it via vm.NativeFunction.
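A minimal sketch of what that looks like with go-jsonnet, calling the registered function from jsonnet via std.native(); the function name "greet" and the snippet are made up for illustration:

```go
package main

import (
	"fmt"
	"log"

	jsonnet "github.com/google/go-jsonnet"
	"github.com/google/go-jsonnet/ast"
)

func main() {
	vm := jsonnet.MakeVM()

	// Register a native Go function on this VM instance.
	vm.NativeFunction(&jsonnet.NativeFunction{
		Name:   "greet",
		Params: ast.Identifiers{"name"},
		Func: func(args []interface{}) (interface{}, error) {
			return "hello, " + args[0].(string), nil
		},
	})

	// The jsonnet side reaches the function through std.native().
	out, err := vm.EvaluateAnonymousSnippet("example.jsonnet",
		`{ msg: std.native("greet")("world") }`)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out)
}
```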
I have a Linux daemon with an HTTP API that I wrote in Go. At startup it initializes its state, and from then on it answers every API request. Initialization is an expensive operation: it reads many configs, creates many objects, and so on.
My problem is that if the main process dies, I can't use the HTTP API. My code isn't perfect and it sometimes hangs or crashes, or a user disables the Linux service, but I still need some low-level functionality to keep working.
If I implement all of the web API's functions in the CLI as well, its startup will be very slow and heavy on the system. But splitting the implementation between the CLI and the web API brings an even bigger problem: inconsistency. For example, a create could be started via the web API at the same time the CLI deletes everything, so I would have to implement locking to prevent this (and I don't think that's the right place for such code).
I don't use a database server (and don't need one). Maybe I could store the data in files or use some shared memory?
My question is: how can I share the objects' data between the Go daemon and a CLI client?
Go has a built-in RPC system (net/rpc) for easy communication between Go processes. You could also take a look at 0MQ, or at using D-Bus.
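A rough sketch of the net/rpc approach over a Unix socket: the daemon keeps the expensively-initialized state in memory and serves RPCs, while the CLI just issues calls against it. The socket path, the Status method, and the reply type are made up for illustration:

```go
package main

import (
	"fmt"
	"log"
	"net"
	"net/rpc"
	"os"
)

// Daemon would hold the expensively-initialized in-memory state.
type Daemon struct{}

// StatusReply is an illustrative reply type.
type StatusReply struct{ Objects int }

// Status answers from the daemon's in-memory state; the string argument is
// only there to satisfy net/rpc's two-argument method signature.
func (d *Daemon) Status(filter string, reply *StatusReply) error {
	reply.Objects = 42
	return nil
}

func runDaemon(sock string) error {
	os.Remove(sock) // ignore error; the socket may not exist yet
	if err := rpc.Register(&Daemon{}); err != nil {
		return err
	}
	l, err := net.Listen("unix", sock)
	if err != nil {
		return err
	}
	rpc.Accept(l) // serves connections until the process exits
	return nil
}

func runCLI(sock string) error {
	client, err := rpc.Dial("unix", sock)
	if err != nil {
		return err
	}
	defer client.Close()
	var reply StatusReply
	if err := client.Call("Daemon.Status", "", &reply); err != nil {
		return err
	}
	fmt.Println("objects:", reply.Objects)
	return nil
}

func main() {
	const sock = "/tmp/mydaemon.sock" // placeholder path
	if len(os.Args) > 1 && os.Args[1] == "daemon" {
		log.Fatal(runDaemon(sock))
	}
	if err := runCLI(sock); err != nil {
		log.Fatal(err)
	}
}
```

If the daemon dies, the CLI's Dial simply fails, so the CLI can report that the service is down rather than silently operating on stale data.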
I'm working on a program that reads the ARP cache from the machine. I'm using Cocoa. There's a library called libdnet (libdnet.sourceforge.net) that has an ARP-reading function, but I don't know how to write the code to use that function. Please help.
You'll need to know C and apply that knowledge to call the library's functions. See this question for links to C-learning resources.
Objective-C is a superset of C, so you'll be able to integrate the C code to call those functions into your Objective-C methods just fine once you know both languages.
I have a C# module responsible for acquiring the list of network adapters that are "connected to the internet" on a Windows Vista machine. The module uses the Network List Manager API (or NLM API) to iterate over all network connections and returns all those for which the IsConnectedToInternet value is true.
I received some suggestions for the implementation of this module in this SO question.
To test this module I've decided to write a helper that returns the list of internet-connected interfaces based on different logic, so it would be a sort of "reality check" for the original module's logic. Note that for the test helper I am willing to use detection methods that might be considered bad practice for production code (e.g. relying on some internet resource like Google being available; if it shuts down or is blocked by our internal firewall, it's relatively easy to fix the test, as opposed to a deployed product base).
The alternative detection method I chose was to try to connect to "www.google.com:80" with a TcpClient. My problem: When I have more than one connected adapter (e.g. both wireless and LAN) the detection method fails for one of them with the error "A connect request was made on an already-connected socket".
My question is threefold:
How would you go about testing such a module in general? Do you support the idea of doing the same thing in a different way and comparing the results, or is that overkill and I should just rely on the system's API? My main problem here is that it's very hard to pre-configure the system so that I know the expected results in advance.
What alternative logic would you suggest? One thing that was suggested in the aforementioned question was looking at the routing table - what about considering each adapter that has a routing entry with a destination of 0.0.0.0 as "connected to the internet"? Other suggestions?
Do you understand why I get the "already-connected" error with the current test logic?
I can only answer your question about the unit test.
The code you're testing is, in your own words, "a C# module responsible for acquiring the list of network adapters that are 'connected to the internet' on a windows Vista machine. The module uses the 'Network List Manager API' (or NLM API) to iterate over all network connections and returns all those for which the IsConnectedToInternet value is true."
If I were writing this module, I would first define an interface for the NLM API, call it, say, NLMAPIService. Then, for the real code, create an Adapter that implements NLMAPIService and wraps the real NLM API.
For testing, create a class FakeNLMAPI that implements NLMAPIService and has all of its data in-memory somewhere, or in an XML file, or whatever. Your module calls methods only on the NLMAPIService, so you don't have to change any "real" code depending on whether you're testing or not.
Therefore, in your test setup method, you can instantiate FakeNLMAPI and pass it to your module, and in production, instantiate your NLM API Adapter.
I'm going to assume that you can instantiate and modify the object that represents a network connection. If not, you can follow the same pattern for faking the actual network connection object.
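The question is about C#, but the interface/adapter/fake shape described above is language-agnostic. Here is a rough sketch of the same pattern in Go (the language used elsewhere on this page); NLMAPIService and FakeNLMAPI follow the names in the answer, and everything else is made up for illustration:

```go
package main

import "fmt"

// NLMAPIService is the narrow interface the module depends on.
type NLMAPIService interface {
	ConnectedAdapters() []string
}

// realNLM would wrap the actual NLM API calls (omitted here).
type realNLM struct{}

func (realNLM) ConnectedAdapters() []string {
	// ...call the real API and keep adapters with IsConnectedToInternet...
	return nil
}

// FakeNLMAPI returns canned, in-memory data for tests.
type FakeNLMAPI struct{ Adapters []string }

func (f FakeNLMAPI) ConnectedAdapters() []string { return f.Adapters }

// Module holds whichever implementation was injected.
type Module struct{ api NLMAPIService }

func (m Module) Report() string {
	return fmt.Sprintf("connected: %v", m.api.ConnectedAdapters())
}

func main() {
	// Test wiring and production wiring differ only in the injected value.
	test := Module{api: FakeNLMAPI{Adapters: []string{"Wi-Fi", "Ethernet"}}}
	fmt.Println(test.Report())
}
```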
Dependency injection is a very handy pattern for dealing with issues like this. Instead of using the NLM API components directly in your code, define an interface and a class that implements it and serves as a proxy to the NLM API. Pass an instance of this class to your module in the constructor and have the module use it. In your unit tests, instead of the real proxy object, use a mock object that returns known information (it doesn't even have to reference the NLM API) to test the logic of your module. Granted, the proxy class will need some testing as well, but the logic in it is much simpler, probably just some data marshaling. You might be able to convince yourself of its correctness or, if not, do some manual testing on it to make sure it is working properly.
Unit tests shouldn't access external resources. To unit test your method, I would stub out the Network List Manager API.
You still need an acceptance-test layer. In that test environment you should replicate the various configurations you expect to support: set up your own web hosts, routers, and machine configs. Acceptance testing should be done at the user-experience level using a tool like FitNesse.