I have a Linux daemon with an HTTP API that I wrote in Go. At startup it initializes its state, and after that it answers every API request. Initialization is expensive: it reads many configs, builds many objects, and so on.
My problem is that if the main process dies I can't use the HTTP API ;). My code isn't perfect and sometimes it hangs or dies, or users disable the Linux service. But I still need some low-level functionality to keep working.
If I try to implement every function of the web API in the CLI, its startup will be very slow and heavy on the system. An even bigger problem is that splitting the implementation between the CLI and the web API leads to inconsistency: for example, the web API could start a create while the CLI simultaneously deletes everything. I would have to implement locking to prevent that, and writing that kind of code myself doesn't seem like a good idea.
I don't use a database server (and don't need one). Maybe I can store the data in files or use some kind of shared memory?
My question is: how can I share objects' data between the Go daemon and a CLI client?
Go has a built-in RPC package (net/rpc) for easy communication between Go processes. You could also take a look at 0mq or D-Bus.
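For the Go daemon/CLI case above, a minimal net/rpc sketch over a Unix socket might look like this (the service name, reply fields, and socket path are placeholders, not the asker's actual code):

// Daemon side: expose a snapshot of the already-initialized in-memory state
// over a Unix socket. StatusService and /run/mydaemon.sock are hypothetical.
package main

import (
    "log"
    "net"
    "net/rpc"
    "os"
)

type StatusArgs struct{}

type StatusReply struct {
    Objects int
    Healthy bool
}

type StatusService struct{}

// Status fills the reply from the daemon's in-memory state.
func (s *StatusService) Status(args StatusArgs, reply *StatusReply) error {
    reply.Objects = 42 // placeholder for real daemon data
    reply.Healthy = true
    return nil
}

func main() {
    const sock = "/run/mydaemon.sock"
    os.Remove(sock) // clean up a stale socket from a previous run
    rpc.Register(new(StatusService))
    l, err := net.Listen("unix", sock)
    if err != nil {
        log.Fatal(err)
    }
    rpc.Accept(l) // serve RPC connections until the process exits
}

The CLI would then dial the same socket and call into the running daemon, reusing the same StatusArgs/StatusReply types (and importing fmt, log, and net/rpc):

client, err := rpc.Dial("unix", "/run/mydaemon.sock")
if err != nil {
    log.Fatal(err)
}
var reply StatusReply
if err := client.Call("StatusService.Status", StatusArgs{}, &reply); err != nil {
    log.Fatal(err)
}
fmt.Printf("objects=%d healthy=%v\n", reply.Objects, reply.Healthy)

Because the daemon keeps the expensive initialized state in memory and remains the single writer, the CLI stays thin, which sidesteps the locking/consistency problem mentioned in the question.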
I've recently been trying to refactor some code that relies on the global batch requests feature for the Google APIs, which was recently deprecated. Currently we use the npm package google-batch, but since it dangerously edits the filesystem and uses the deprecated global endpoint, I'd like to move away from it before the endpoint is fully removed.
How can I create a batch request using (ideally) only the Node.js client? I want to use methods already present in the client as much as possible since it natively provides Promise and TypeScript support, which I intend to use.
I've looked into the Batchelor package suggested in this answer, but it requires you to manually write the HTTP request object instead of using the Node.js client.
This GitHub issue discusses the use of batch requests in the new Node.js client.
According to that thread, the new intended method of "batching" (outside of a poorly documented set of endpoints that still support it) is to use the HTTP/2 feature shipped with the client, and then simply make all of your requests at once.
The reason "batching" is in quotes is that I don't believe this explanation matches the usual definition of batching: the client isn't queuing requests for execution; it's just managing network traffic better when you execute them yourself.
I may be misunderstanding it, but as far as I can tell this HTTP/2 feature doesn't actually batch requests; you still have to issue them yourself, and it merely tidies up some TCP overhead by multiplexing them over a single connection. In short, I do not believe that batching itself is possible with the API client alone.
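To illustrate the distinction in a client-agnostic way (this is not the googleapis client, just the underlying concept): with HTTP/2 you still fire every request yourself, and the transport merely multiplexes them over one connection. A rough Go sketch, with made-up URLs:

// Concept only: the caller issues all requests concurrently; nothing is
// "batched" server-side. Go's http.Client negotiates HTTP/2 for HTTPS
// servers that support it, so the requests share one connection.
package main

import (
    "fmt"
    "net/http"
    "sync"
)

func main() {
    urls := []string{ // hypothetical endpoints
        "https://example.com/a",
        "https://example.com/b",
        "https://example.com/c",
    }
    var wg sync.WaitGroup
    for _, u := range urls {
        wg.Add(1)
        go func(u string) { // fire them all at once yourself
            defer wg.Done()
            resp, err := http.Get(u)
            if err != nil {
                fmt.Println(u, err)
                return
            }
            resp.Body.Close()
            // Prints the negotiated protocol, e.g. HTTP/2.0 when supported.
            fmt.Println(u, resp.Proto, resp.StatusCode)
        }(u)
    }
    wg.Wait()
}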
(FWIW, I would have preferred to comment with a link as I'm uncertain I explained this well, but reputation didn't let me)
I need to deploy an application onto some Windows machines for purposes of data collection from a group of people (i.e. the application will be used to gather responses to a series of survey questions). The process is interactive, alternating between displays of text and images with specific timing requirements. I have put together a prototype application using HTML and JavaScript that implements the survey. However, there are some unique constraints on the deployment environment that have me stuck:
While the machine is Internet-connected, the client requires that the survey application run entirely locally on the PC, so sending the survey results to a remote server is not permissible. And, of course, saving to a local file from a web browser is typically not permitted for security reasons.
Installation of applications onto the machines that will run the survey is not permitted.
The configuration of the machines is not known specifically a priori, but I can assume some recent version of Windows with IE8+.
The "no remote access" requirement was a late comer, and has thrown a wrench into the plan of just writing a simple Web application that could post results to an HTTP server. I'm now looking for the easiest way forward. Two main approaches come to mind:
Use a GUI framework that provides a control that can display HTML/JavaScript; running a full-blown application on the PC would allow me to save the results to the filesystem. I've never done this, but it seems like in this day and age it shouldn't be too difficult. This would allow me to reuse much of my existing prototype implementation, but I would need some way of transferring the results (which would be stored in a JavaScript data structure) outside of the Web control to where the rest of the application could access it.
Reimplement the entire application using some GUI framework (I've used PyQt successfully before, although not on Windows). This approach is obviously less desirable than #1 due to the lack of reuse. However, it may be necessary if #1 isn't feasible.
Any recommendations for the best way to go? Ideally, I'm looking for a solution that can be run in a "portable" manner from a USB thumbdrive or similar.
Have you looked at HTML Applications (HTA)? They work in IE5+ and can use Windows Scripting Host to write to local drives and UNC shares...
Maybe you can use a portable web server with a scripting language on the server side, such as Mongoose (http://code.google.com/p/mongoose/), which can run PHP, CGI, and similar scripts. Then simply write a script that saves the results to a file on the hard drive, and leave the rest of the application as it is.
Use a script to start the web server, and perhaps a portable web browser like K-Meleon (http://kmeleon.sourceforge.net/), which is highly configurable, to open the application. Or just point the system's browser at your localhost URL.
The only problem may be that the user has to allow the server through the firewall the first time it runs.
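If copying a single self-contained executable from the thumbdrive counts as "no installation", the same approach (a loopback-only local server that serves the existing HTML/JavaScript survey and writes results to disk) can also be sketched in Go. This is only an illustration of the idea; the ./survey directory, the /save endpoint, the results filename, and the port are assumptions:

// Minimal local-only server: serves the survey files and appends posted
// results to a file next to the executable. Binds to loopback only, so
// nothing ever leaves the machine.
package main

import (
    "io"
    "log"
    "net/http"
    "os"
)

func main() {
    // Serve the existing HTML/JavaScript survey prototype from ./survey.
    http.Handle("/", http.FileServer(http.Dir("./survey")))

    // The survey page POSTs its JSON results here.
    http.HandleFunc("/save", func(w http.ResponseWriter, r *http.Request) {
        f, err := os.OpenFile("results.jsonl", os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
        if err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        defer f.Close()
        if _, err := io.Copy(f, r.Body); err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        f.Write([]byte("\n")) // one JSON document per line
        w.WriteHeader(http.StatusNoContent)
    })

    log.Fatal(http.ListenAndServe("127.0.0.1:8080", nil))
}

The survey page would then POST its JavaScript results object to http://127.0.0.1:8080/save instead of a remote server.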
I'm looking for a way to create an app which has a realtime web interface as well as an API which can be called by a node.js client while sharing most of its code.
I'd like to be able to manage data, monitor and execute tasks inside of my app via browser, but also have an automation/scheduling program which connects to my web app and tells it to run various tasks and get results of each task.
Unfortunately it doesn't look like I can connect to Meteor from the server side, so I'm wondering if there's another approach. Is what I described even possible using Meteor?
I have done some testing using socket.io and I think I may be able to do it this way, but Meteor seems like it'd be really great for the realtime user interface.
Yes, you can use npm packages to do what you want. Just like standard Node.js programming.
There might be one error you run into when calling Meteor code from external code, but it is easy to solve.
In your case, I guess you could set up a TCP server that way and have it update a collection; clients would then pick up the changes through the reactive collection publishing mechanism.
I'm looking into building a service that runs in the background and allows clients to connect, send commands, and get data back. I'm planning on writing the service in Ruby (as a gem), but I wanted to know what the best method would be for allowing clients to connect to the API.
I figured a socket connection would make sense, like you'd connect with Redis or something, but I'm not sure where to start!
Any tips would be much appreciated :)
Yep, you're on the right path. A socket is just a bidirectional communication channel that allows two programs to exchange bytes. If both endpoints are on the same machine, UNIX sockets are the obvious choice; otherwise, you'll need a TCP socket to communicate over the network. The principle is the same in either case.
On top of the socket, you'll have to define your own protocol, or you could use an existing one (such as HTTP) if it applies to your situation.
A random sockets tutorial.
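To make the "define your own protocol" part concrete, here is a minimal line-based command protocol sketch. The idea is language-agnostic (shown in Go here; Ruby's TCPServer follows the same pattern), and the commands and port number are just placeholders:

// A tiny "command in, reply out" server: each client sends one command per
// line and gets one reply line back.
package main

import (
    "bufio"
    "fmt"
    "log"
    "net"
    "strings"
)

func handle(conn net.Conn) {
    defer conn.Close()
    scanner := bufio.NewScanner(conn)
    for scanner.Scan() {
        switch strings.TrimSpace(scanner.Text()) {
        case "PING":
            fmt.Fprintln(conn, "PONG")
        case "QUIT":
            return
        default:
            fmt.Fprintln(conn, "ERR unknown command")
        }
    }
}

func main() {
    l, err := net.Listen("tcp", "127.0.0.1:7400") // arbitrary local port
    if err != nil {
        log.Fatal(err)
    }
    for {
        conn, err := l.Accept()
        if err != nil {
            log.Fatal(err)
        }
        go handle(conn) // one goroutine (a thread in Ruby) per client
    }
}

A client connects, writes one command per line, and reads one reply line back; HTTP is essentially a standardized, tool-friendly version of the same request/response idea.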
Since you asked for any tips: my advice is that building a service container is hard work. Since you don't actually need to (there are plenty of excellent service containers already), you should probably use one of those.
I would recommend something behind HTTP, which gives you a whole lot of advantages around existing tooling, message framing, content negotiation, scaling your service, and deployment and upgrade models.
If you want to avoid external dependencies, using something pure-Ruby like WEBrick or Mongrel is a fine way to avoid having to wrap Apache or Nginx around your system.
This also allows you to separate out the concerns in your project: work on building the actual service layer first, handling commands and returning responses. Run that under any web server, and get it going.
Then when you have time, focus separately on how to build the service container to meet your needs: because you know that the underlying service layer works fine, you can focus on only solving the container problems.
If you really do want to build your own container, I strongly recommend you use something higher level than a socket. Tools like 0mq provide framing and other message layer features that you don't get from a socket, and make it much easier to focus on defining the interesting parts of your problem space - the commands - rather than low level details like parsing a wire format and protocol.
I'm running a Ruby/Rails app with Redis in the background on an EC2 server (Amazon Web Services). This is the Ubuntu build I found easiest to work with:
Linux version 2.6.32-341-ec2 (buildd@crested) (gcc version 4.4.3 (Ubuntu 4.4.3-4ubuntu5) ) #42-Ubuntu SMP Tue Dec 6 14:56:13 UTC 2011
In my main .rb file, which does most of the polling/searching, I have these rubygems required; you should definitely check them out:
require 'aws'
require 'redis'
require 'timeout'
require 'json'
Let me know what you are specifically trying to do if that doesn't help you enough. Good luck!
I've built a couple of daemons with EventMachine in the past. It is efficient and powerful, supports TCP, HTTP and everything else. People even write web servers on top of it.
My company is looking at implementing a new VPN solution, but requires that the connection be maintained programmatically by our software. The VPN solution consists of a background service that seems to manage the physical connection and a command-line/GUI utility that initiates the request to connect/disconnect. I am looking for a way to "spy" on the API calls between the front-end utility and the back-end service so that our software can make the same calls to the service. Are there any recommended software solutions or methods to do this?
Typically, communications between a front-end application and back-end service are done through some form of IPC (sockets, named pipes, etc.) or through custom messages sent through the Service Control Manager. You'll probably need to find out which method this solution uses, and work from there - though if it's encrypted communication over a socket, this could be difficult.
Like Harper Shelby said, it could be very difficult, but you may start with filemon, which can tell you when certain processes create or write to files, regmon, which can do the same for registry writes and reads, and wireshark to monitor the network traffic. This can get you some data, but even with the data, it may be too difficult to interpret in a manner that would allow you to make the same calls.
I don't understand why you want to replace the utility, instead of simply running the utility from your application.
Anyway, you can run "dumpbin /imports whatevertheutilitynameis.exe" to see the static list of API function names to which the utility is linked; this doesn't show the sequence in which they're called, nor the parameter values.
You can then use a system debugger (e.g. Winice or whatever its more modern equivalent might be) to set breakpoints on these APIs, so that you break into the debugger (and can then inspect parameter values) when the utility invokes them.
You might be able to glean some information using tools such as Spy++ to look at Windows messages. Debugging/tracing tools (WinDbg, etc.) may let you see API calls as they happen. The Sysinternals tools can show you system usage information in some detail.
Although I would recommend against this approach for the most part: is it possible to contact the solution provider and get documentation? One reason is fragility; if a vendor is not expecting users to rely on that aspect of the interface, they are more likely to change it without notice.