I'm developing a specialised browser, and as part of that I would like to support the IPFS protocol. Hence this simple question:
How can I get a file through the IPFS protocol using Go? The browser I'm building specialises in streaming media, so I need an io.ReadSeeker.
What I've tried so far:
I read the documentation. All I could find was the command line (which is useless to me, as I want a tighter integration through a Go API) and the shell.Get method, which is undocumented and seems to require an "outdir" rather than returning an io.ReadSeeker/ReadSeekCloser.
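For illustration, this is roughly the kind of call I'm after; a minimal sketch that just fetches the file over a local HTTP gateway and buffers it into an io.ReadSeeker (the gateway address and CID are placeholders, and buffering everything in memory is exactly what I'd like to avoid for large media):

```go
// Sketch only: fetch a file via a local IPFS HTTP gateway and expose it as an
// io.ReadSeeker by buffering the whole response. The gateway address and CID
// are placeholders, and full buffering is not suitable for large media files.
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func fetchAsReadSeeker(gateway, cid string) (io.ReadSeeker, error) {
	resp, err := http.Get(gateway + "/ipfs/" + cid)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	data, err := io.ReadAll(resp.Body) // buffers the whole file in memory
	if err != nil {
		return nil, err
	}
	return bytes.NewReader(data), nil
}

func main() {
	rs, err := fetchAsReadSeeker("http://127.0.0.1:8080", "QmExampleCID")
	if err != nil {
		fmt.Println(err)
		return
	}
	_ = rs // hand the io.ReadSeeker to the media pipeline here
}
```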
I've recently been trying to refactor some code that takes advantage of the global batch requests feature for the Google APIs, which was recently deprecated. Currently we use the npm package google-batch, but since it dangerously edits the filesystem and uses the deprecated global endpoint, I'd like to move away from it before the endpoint is fully removed.
How can I create a batch request using (ideally) only the Node.js client? I want to use methods already present in the client as much as possible since it natively provides Promise and TypeScript support, which I intend to use.
I've looked into the Batchelor package suggested in this answer, but it requires you to manually write the HTTP request object instead of using the Node.js client.
This GitHub issue discusses the use of batch requests in the new Node client.
According to that thread, the new intended method of "batching" (outside of a poorly documented set of endpoints that still support it) is to use the HTTP/2 feature being shipped with the client and then simply make all of your requests at once.
"Batching" is in quotes because I don't believe this matches my definition of batching: the client isn't queuing up requests to be executed; it's just managing network traffic better when you execute them yourself.
I may be misunderstanding it, but this HTTP/2 feature doesn't actually batch requests; it still requires you to queue things yourself and merely tidies up some TCP overhead. In short, I do not believe that batching itself is possible with the API client alone.
(FWIW, I would have preferred to comment with a link as I'm uncertain I explained this well, but reputation didn't let me)
I'm new to Parse and I've just set up my server and dashboard on my local machine.
For my use case, I don't just need the plain API from Parse; I need to write a server (with Node.js + Express) to handle user requests.
I've just seen how to integrate an Express application with Parse, so my application, instead of talking to Parse directly, will talk to my server, which will serve:
The standard Parse API (/classes, etc.)
All my other routes, which do not necessarily depend on the Parse API
Is this correct?
Reading online, I've seen that Parse Cloud Code is needed to extend Parse functionality with additional "routing" (if I've understood correctly).
So, in my application I will have:
The standard API (as described above)
All the other routes (which do not necessarily depend on Parse)
Other routes (which come from Cloud Code) that use the Parse API
So, is Parse Cloud Code just a "simple" way to write additional routing? (I've seen that jobs exist too, but I haven't studied them yet.)
I'm asking because I'm a little confused about what is really needed; I'd just like more information on when to use it.
Thanks
EDIT
Here is an example (which comes partly from the Parse docs).
I have a Video class with a director name field.
In my application (iOS, Android, etc.) I set up a view that needs to show all the Videos by a particular director.
I have three options:
Get all Videos (/classes/videos) and then filter them directly in the app
Write a Node.js + Express endpoint (http://blabla.com/videos/XXX, where XXX is the director), fetch the result with the Parse JS API and send it back to the app
Write a Cloud function (which, if I've understood correctly, responds at /functions/) that does the same as the Express endpoint
This is just a small example, but is this the intended usage of Parse Cloud Code? (Or at least one of its uses :))
I have a Linux daemon with an HTTP API which I wrote in Go. At startup it initialises its variables, and from then on it answers every API request. Initialisation is an expensive operation: it reads many configs, creates many objects, etc.
My problem is that if the main process dies I can't use the HTTP API ;). My code isn't perfect and sometimes it hangs or dies, or users disable the Linux service. But I still need some low-level functionality to keep working.
If I try to implement all the functions of the web API in the CLI, its startup will be very slow and heavy on the system. And I have a bigger problem if the implementation is split between the CLI and the web API: inconsistency. For example, I could start a create inside the web API and, at the same time, a delete-all inside the CLI, so I would have to implement locking to prevent this. (I don't think writing the code that way is a good idea.)
I don't use a database server (and don't need one). Maybe I can store the data in files or use some shared memory?
My question is: how can I share the objects' data between the Go daemon and the CLI client?
Go has a built-in RPC package (net/rpc) for easy communication between Go processes. You could also take a look at 0mq, or at using D-Bus.
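For example, here is a minimal sketch (the service name, fields and socket path are placeholders, not taken from your setup) of the daemon exposing some of its state over net/rpc on a unix socket, which the CLI can then dial:

```go
// Daemon side: expose in-memory state over net/rpc on a unix socket.
// StatusService, StatusArgs/StatusReply and the socket path are illustrative.
package main

import (
	"log"
	"net"
	"net/rpc"
	"time"
)

type StatusArgs struct{ Verbose bool }

type StatusReply struct{ Uptime time.Duration }

type StatusService struct{ started time.Time }

// Status follows the net/rpc method shape: (args T1, reply *T2) error.
func (s *StatusService) Status(args StatusArgs, reply *StatusReply) error {
	reply.Uptime = time.Since(s.started)
	return nil
}

func main() {
	if err := rpc.Register(&StatusService{started: time.Now()}); err != nil {
		log.Fatal(err)
	}
	l, err := net.Listen("unix", "/tmp/mydaemon.sock")
	if err != nil {
		log.Fatal(err)
	}
	rpc.Accept(l) // blocks serving connections; run it in a goroutine next to your HTTP API
}
```

The CLI binary would then call rpc.Dial("unix", "/tmp/mydaemon.sock") followed by client.Call("StatusService.Status", StatusArgs{}, &reply), so both programs operate on the single copy of the data that the daemon owns, which avoids the consistency problem you describe.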
I'm writing a simple Go application that needs to decode some DNS packets. I noticed that the net package appears to contain the perfect implementation in the form of net/dnsmsg.go, which has the right structs, pack/unpack functions, etc.
However, the type is unexported (lower-case dnsMsg), so it appears I have no way of using it from within my app.
I'm quite new to Go, so I don't know whether my only option is to reimplement net/dnsmsg.go myself, or whether there's a better way around this.
My problem was solved by using a third-party DNS library, specifically miekg/dns (https://github.com/miekg/dns).
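For reference, a minimal sketch of what packing and unpacking look like with that library (example.com is just a placeholder name):

```go
// Sketch using github.com/miekg/dns: build a query, pack it to wire format,
// then unpack wire bytes back into a dns.Msg. example.com is a placeholder.
package main

import (
	"fmt"
	"log"

	"github.com/miekg/dns"
)

func main() {
	// Build and pack a query.
	query := new(dns.Msg)
	query.SetQuestion(dns.Fqdn("example.com"), dns.TypeA)
	wire, err := query.Pack()
	if err != nil {
		log.Fatal(err)
	}

	// Decode raw wire bytes (e.g. read from a UDP socket) back into a message.
	parsed := new(dns.Msg)
	if err := parsed.Unpack(wire); err != nil {
		log.Fatal(err)
	}
	fmt.Println(parsed.Question[0].Name) // "example.com."
}
```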
Another option would be to use Google's gopacket package, which provides packet decoding for Go. In particular, the layers sub-package provides decoders for many protocols, including what is necessary to decode DNS packets.
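A rough sketch of that route, assuming you already have the raw DNS payload (e.g. the UDP payload of a captured packet) in a byte slice:

```go
// Sketch using github.com/google/gopacket: decode a raw DNS payload into a
// layers.DNS struct. How the payload is obtained (socket, pcap, ...) is up to you.
package main

import (
	"fmt"

	"github.com/google/gopacket"
	"github.com/google/gopacket/layers"
)

func decodeDNS(payload []byte) (*layers.DNS, error) {
	pkt := gopacket.NewPacket(payload, layers.LayerTypeDNS, gopacket.Default)
	if layer := pkt.Layer(layers.LayerTypeDNS); layer != nil {
		return layer.(*layers.DNS), nil
	}
	return nil, fmt.Errorf("payload did not decode as DNS")
}

func main() {
	var payload []byte // in a real program this comes from the network
	if msg, err := decodeDNS(payload); err == nil {
		for _, q := range msg.Questions {
			fmt.Println(string(q.Name))
		}
	}
}
```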
I am trying to track the bandwidth usage of individual requests in Ruby, to see how my network usage is split between different API calls.
Nothing I can find in net/http or the Ruby socket classes (TCPSocket et al.) seems to offer a decent way to do this without a lot of monkey patching.
I have found a number of useful tools on Linux for doing this, but none of them gives me the granularity to look inside the HTTP requests at the headers (so I could figure out which URL we are requesting). The tools I am using are vnStat and ipfm, which are great for system bandwidth or host/network monitoring.
Ideally I would like to do something within the Ruby code to track the data sent/received. I think if I could just take the raw header length and add it to the body length, for both sending and receiving, that would be Good Enough™.
Can you use New Relic for Ruby? It has great visualizations/graphs for network usage/API calls.
It seems like it would be pretty easy to write a middleware to track this using Faraday.