I'm using streadway/amqp to tie RabbitMQ in to our alert system. I need a method that can return a list of all the currently declared queues (exchanges would be nice too!) so that I can go through and get all the message counts.
I'm digging through the api documentation here...
http://godoc.org/github.com/streadway/amqp#Queue
...but I don't seem to be finding what I'm looking for. We're currently using a bash call to 'rabbitmqctl list_queues', but that's a kludgy way to get this information: it requires a custom sudo setting and fires off hundreds of log entries a day to the secure log.
edit: by "method" I mean 'a way to get this piece of information' as opposed to an actual call, though a call would be great; I don't believe one exists.
Answered my own question. There isn't a way! The amqp spec doesn't have a standard way of finding this out which seems like a glaring oversight to me. However, since my backend is rabbitmq with the management plugin, I can make a call to that to get this information.
from https://stackoverflow.com/a/21286370/5076297 (in python, I'll just have to translate this and probably also figure out the call to get vhosts):
import requests

def rest_queue_list(user='guest', password='guest', host='localhost', port=15672, virtual_host=None):
    url = 'http://%s:%s/api/queues/%s' % (host, port, virtual_host or '')
    response = requests.get(url, auth=(user, password))
    queues = [q['name'] for q in response.json()]
    return queues
edit: In golang (this was a headache to figure out as I haven't done anything with structures in years)
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	type Queue struct {
		Name  string `json:"name"`
		VHost string `json:"vhost"`
	}
	manager := "http://127.0.0.1:15672/api/queues/"
	client := &http.Client{}
	req, _ := http.NewRequest("GET", manager, nil)
	req.SetBasicAuth("guest", "guest")
	resp, _ := client.Do(req)
	defer resp.Body.Close()
	value := make([]Queue, 0)
	json.NewDecoder(resp.Body).Decode(&value)
	fmt.Println(value)
}
Output looks like this (I have two queues named hello and test)
[{hello /} {test /}]
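Since the whole point was getting message counts, note that the same /api/queues response already includes them. A minimal sketch extending the struct above (assuming the management plugin's usual JSON field names; messages is the queue's total message count):

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// Queue mirrors a few fields of the management API's queue objects.
type Queue struct {
	Name     string `json:"name"`
	VHost    string `json:"vhost"`
	Messages int    `json:"messages"` // total messages in the queue
}

func main() {
	req, err := http.NewRequest("GET", "http://127.0.0.1:15672/api/queues/", nil)
	if err != nil {
		panic(err)
	}
	req.SetBasicAuth("guest", "guest")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var queues []Queue
	if err := json.NewDecoder(resp.Body).Decode(&queues); err != nil {
		panic(err)
	}
	for _, q := range queues {
		fmt.Printf("%s %s: %d messages\n", q.VHost, q.Name, q.Messages)
	}
}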
The official Go documentation on the datastore package (the client library for the GCP Datastore service) has the following code snippet for demonstration:
package main

import (
	"context"
	"fmt"

	"cloud.google.com/go/datastore"
)

type Entity struct {
	Value string
}

func main() {
	ctx := context.Background()

	// Create a datastore client. In a typical application, you would create
	// a single client which is reused for every datastore operation.
	dsClient, err := datastore.NewClient(ctx, "my-project")
	if err != nil {
		// Handle error.
	}

	k := datastore.NameKey("Entity", "stringID", nil)
	e := new(Entity)
	if err := dsClient.Get(ctx, k, e); err != nil {
		// Handle error.
	}

	old := e.Value
	e.Value = "Hello World!"
	if _, err := dsClient.Put(ctx, k, e); err != nil {
		// Handle error.
	}

	fmt.Printf("Updated value from %q to %q\n", old, e.Value)
}
As one can see, it states that the datastore.Client should ideally be instantiated only once in an application. Now, given that the datastore.NewClient function requires a context.Context object, does that mean the client should be instantiated once per HTTP request, or can it safely be instantiated once globally with a context.Background() object?
Each operation requires a context.Context object again (e.g. dsClient.Get(ctx, k, e)), so is that the point where the HTTP request's context should be used?
I'm new to Go and can't really find any online resources which explain something like this very well, with real-world examples and actual best-practice patterns.
You may use any context.Context for the datastore client creation; context.Background() is completely fine. Client creation may be lengthy: it may require connecting to a remote server, authenticating, fetching configuration, etc. If your use case has limited time, you may pass a context with a timeout to abort the operation. Also, if creation takes longer than the time you have, you may use a context with cancel and abort the mission at your will. These are just options which you may or may not use. But the "tools" are given to you via context.Context.
Later, when you use the datastore.Client while serving (HTTP) client requests, using the request's context is reasonable: if a request gets cancelled, then so will its context, and so will the datastore operation you issued. Rightfully so, because if the client cannot see the result, there's no point completing the query. By terminating the query early you may avoid using certain resources (e.g. datastore reads), and you may lower the server's load (by aborting jobs whose results will never be sent back to the client).
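To make this concrete, here is a minimal sketch of both halves, reusing the Entity type and the "my-project" placeholder from the snippet above: one global client created with context.Background(), and the request's own context for each operation. The /entity route and :8080 port are made up for the example; the pattern is the part that matters.

package main

import (
	"context"
	"fmt"
	"log"
	"net/http"

	"cloud.google.com/go/datastore"
)

type Entity struct {
	Value string
}

// One client for the whole process; context.Background() is fine here.
var dsClient *datastore.Client

func main() {
	var err error
	dsClient, err = datastore.NewClient(context.Background(), "my-project")
	if err != nil {
		log.Fatal(err)
	}

	http.HandleFunc("/entity", func(w http.ResponseWriter, r *http.Request) {
		// Per-operation context: tying the datastore call to the request
		// means a cancelled request also aborts the query.
		k := datastore.NameKey("Entity", "stringID", nil)
		var e Entity
		if err := dsClient.Get(r.Context(), k, &e); err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		fmt.Fprintln(w, e.Value)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}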
I am trying to implement a very basic PUB/SUB pattern using ZeroMQ. I would like to have a server (always active) broadcasting messages (publisher) to all clients, one that does not care about connected clients.
If a client connects to this server as a subscriber, it should receive the messages.
However, I cannot get the messages through using PUB/SUB.
In Python it would be:
# publisher (server.py)
import zmq

ctx = zmq.Context()
publisher = ctx.socket(zmq.PUB)
publisher.bind('tcp://127.0.0.1:9091')

while True:
    publisher.send_string("test")
and
# subscriber (client.py)
import zmq

ctx = zmq.Context()
subscriber = ctx.socket(zmq.SUB)
subscriber.connect('tcp://127.0.0.1:9091')

while True:
    msg = subscriber.recv_string()
    print msg
Or in golang:
package main

import (
	"github.com/pebbe/zmq4"
	"log"
	"time"
)

func Listen(subscriber *zmq4.Socket) {
	for {
		s, err := subscriber.Recv(0)
		if err != nil {
			log.Println(err)
			continue
		}
		log.Println("rec", s)
	}
}

func main() {
	publisher, _ := zmq4.NewSocket(zmq4.PUB)
	defer publisher.Close()
	publisher.Bind("tcp://*:9090")

	subscriber, _ := zmq4.NewSocket(zmq4.SUB)
	defer subscriber.Close()
	subscriber.Connect("tcp://127.0.0.1:9090")

	go Listen(subscriber)

	for _ = range time.Tick(time.Second) {
		publisher.Send("test", 0)
		log.Println("send", "test")
	}
}
Did I misunderstand this pattern, or do I need to send a particular signal from the client to the server when connecting? I am interested in the Go version and only use the Python version for testing.
Did I misunderstand this pattern? Yes, fortunately, you did.
ZeroMQ archetypes were defined so as to represent a certain behaviour. As said, a PUSH-archetype AccessPoint pushes every message "through" all the so-far-set-up communication channels, a PULL-er AccessPoint pulls anything that has arrived down the line(s) to "its hands", a PUB-lisher AccessPoint publishes, a SUB-scriber AccessPoint subscribes, so as to receive just the messages that match its topic-filter(s), but not any others.
As seems clear, such an Archetype "specification" helps build the ZeroMQ smart messaging / signalling infrastructure for our ease of use in distributed-systems architectures.
# subscriber (client.py)
import zmq

ctx = zmq.Context()
subscriber = ctx.socket( zmq.SUB )
subscriber.connect( 'tcp://127.0.0.1:9091' )
subscriber.setsockopt( zmq.LINGER,    0 )   # ALWAYS:
subscriber.setsockopt( zmq.SUBSCRIBE, "" )  # OTHERWISE NOTHING DELIVERED

while True:
    msg = subscriber.recv_string()          # MAY USE .poll() + zmq.NOBLOCK
    print msg
subscriber, _ := zmq4.NewSocket( zmq4.SUB )
subscriber.Connect( "tcp://127.0.0.1:9090" )
subscriber.SetSubscribe( filter ) // SET: <topic-filter>
subscriber.SetLinger( 0 ) // SAFETY FIRST: PREVENT DEADLOCK
defer subscriber.Close() // NOW MAY SAFELY SET:
...
msg, _ := subscriber.Recv( 0 )
As defined, any rightly instantiated SUB-side AccessPoint object has literally zero chance of knowing which messages will be the right ones, that ought to be "delivered", and which are not.
Without this initial piece of knowledge, the ZeroMQ designers had a principal choice: either be Archetype-policy consistent and let the PUB-side AccessNode distribute all the .send()-acquired messages only to those SUB-side AccessNode(s) that have explicitly requested to receive any such messages, right via the zmq.SUBSCRIBE-mechanics, or deliver everything sent from the PUB also to all the so-far-undecided SUB-s.
The former was a consistent and professional design step from the ZeroMQ authors.
The latter would actually mean violating ZeroMQ's own RFC-specification.
The latter choice would be something like this: if one has just moved into a new apartment, one would hardly expect to find all newspapers and magazines delivered to one's new mailbox from the next morning on, would one? But if one subscribes to the Boston Globe, the very next morning the fresh issue will be at the doorstep, and it will keep arriving there until one cancels the subscription, or the newspaper goes bankrupt, or a lack of paper rolls prevents the printing shop from delivering in due time and fashion, or a traffic jam in the Big Dig tunnel causes trouble for all, or just for the local delivery, on some particular day.
All this is natural and compatible with the Archetype-policy.
Intermezzo: Golang already has bindings to many different API versions
Technology purists will object here that early API releases (till some v3.2+) actually did technically transport all message-payloads from a PUB to all SUB-s, as this simplified the PUB-side workload envelope, but it increased the transport-class(es) data-flow and the SUB-side resources / deferred topic-filter processing. Yet all this was hidden from user code, right behind the API horizon of abstraction. So, except for a need to properly scale resources, this was transparent to the user. More recent API versions reversed the role of the topic-filter processor and let this now happen on the PUB-side. Nevertheless, in both cases, the ZeroMQ RFC specification policy is implemented in such a manner that the SUB-side will never deliver (through the .recv()-interface) a single message that does not match a valid, explicit SUB-side subscription.
In all cases where a SUB-side has not yet explicitly set any zmq.SUBSCRIBE-instructed topic-filter, it cannot and will not deliver anything (which is both natural and fully consistent with the ZeroMQ RFC Archetype-policy defined for the SUB-type AccessPoint).
The Best Next Step:
Always, at the very least, read the ZeroMQ API documentation, where all details are professionally specified; that way, one can at least get a first glimpse of the breadth of the smart messaging / signalling framework.
This will not help anyone start on a green field and fully build one's own complex mental concept and in-depth understanding of how all the things work internally, which is obviously not any API documentation's ambition, is it? Yet it will help anyone refresh or recall all the configurable details, once one has mastered the ZeroMQ internal architecture, as detailed in the source referred to in the next paragraph.
Plus, for anyone who is indeed interested in distributed systems or just in ZeroMQ per se, it is worth one's time and effort to read Pieter HINTJENS' book "Code Connected, Volume 1" (freely available in pdf), plus any of his other books on his rich experience in software engineering, because his many insights into modern computing may and will inspire (a lot).
edit:
MWE in Go
package main

import (
	"github.com/pebbe/zmq4"
	"log"
	"time"
)

func Listen(subscriber *zmq4.Socket) {
	for {
		s, err := subscriber.Recv(0)
		if err != nil {
			log.Println(err)
			continue
		}
		log.Println("rec", s)
	}
}

func main() {
	publisher, _ := zmq4.NewSocket(zmq4.PUB)
	publisher.SetLinger(0)
	defer publisher.Close()
	publisher.Bind("tcp://127.0.0.1:9092")

	subscriber, _ := zmq4.NewSocket(zmq4.SUB)
	subscriber.SetLinger(0)
	defer subscriber.Close()
	subscriber.Connect("tcp://127.0.0.1:9092")
	subscriber.SetSubscribe("")

	go Listen(subscriber)

	for _ = range time.Tick(time.Second) {
		publisher.Send("test", 0)
		log.Println("send", "test")
	}
}
I'm trying to develop a simple job-queue server with some workers that query it, but I encountered a problem with my net/http server. I'm surely doing something wrong, but after ~3 minutes my server starts displaying:
http: Accept error: accept tcp [::]:4200: accept4: too many open files; retrying in 1s
For information, it receives 10 requests per second in my test case.
Here are two files to reproduce this error:
// server.go
package main

import (
	"net/http"
)

func main() {
	http.HandleFunc("/get", func(rw http.ResponseWriter, r *http.Request) {
		http.Error(rw, "Try again", http.StatusInternalServerError)
	})
	http.ListenAndServe(":4200", nil)
}
// worker.go
package main

import (
	"net/http"
	"time"
)

func main() {
	for {
		res, _ := http.Get("http://localhost:4200/get")
		defer res.Body.Close()
		if res.StatusCode == http.StatusInternalServerError {
			time.Sleep(100 * time.Millisecond)
			continue
		}
		return
	}
}
I've already done some searching about this error and found some interesting answers, but none of them fixed my issue.
The first answer I saw was to correctly close the Body of the http.Get response; as you can see, I did that.
The second answer was to change the file-descriptor ulimit of my system, but as I won't control where my app will run, I'd prefer not to use this solution (for information, it's set to 1024 on my system).
Can someone explain to me why this problem happens and how I can fix it in my code?
Thanks a lot for your time.
EDIT 2: In a comment, Martin says that I'm not closing the Body. I tried closing it (without defer, then) and it fixed the issue. Thanks Martin! I thought continue would execute my defer; I was wrong.
I found a post explaining the root problem in a lot more detail.
Nathan Smith even explains how to control timeouts on the TCP level, if needed.
Below is a summary of everything I could find on this particular problem, as well as best practices to avoid it in the future.
Problem
When a response is received, regardless of whether the response body is required or not, the connection is kept alive until the response-body stream is closed. So, as mentioned in this thread, always close the response body, even if you do not need to use/read its content:
func Ping(url string) bool {
	// simple GET request on given URL
	res, err := http.Get(url)
	if err != nil {
		// if unable to GET given URL, then ping must fail
		return false
	}

	// always close the response-body, even if content is not required
	defer res.Body.Close()

	// is the page status okay?
	return res.StatusCode == http.StatusOK
}
Best Practice
As mentioned by Nathan Smith, never use http.DefaultClient in production systems; this includes calls like http.Get, as it uses http.DefaultClient at its base.
Another reason to avoid http.DefaultClient is that it is a singleton (a package-level variable), meaning the garbage collector will not try to clean it up, which may leave idle streams/sockets alive.
Instead create your own instance of http.Client and remember to always specify a sane Timeout:
func Ping(url string) bool {
	// create a new instance of http client struct, with a timeout of 2sec
	client := http.Client{Timeout: time.Second * 2}

	// simple GET request on given URL
	res, err := client.Get(url)
	if err != nil {
		// if unable to GET given URL, then ping must fail
		return false
	}

	// always close the response-body, even if content is not required
	defer res.Body.Close()

	// is the page status okay?
	return res.StatusCode == http.StatusOK
}
Safety Net
The safety net is for that newbie on the team who does not know the shortfalls of http.DefaultClient usage, or for that very useful, but not so active, open-source library that is still riddled with http.DefaultClient calls.
Since http.DefaultClient is a singleton, we can easily change its Timeout setting, just to ensure that legacy code does not cause idle connections to remain open.
I find it best to set this on the package main file in the init function:
package main

import (
	"net/http"
	"time"
)

func init() {
	/*
		Safety net for the 'too many open files' issue on legacy code.
		Set a sane timeout duration for the http.DefaultClient, to ensure idle connections are terminated.
		Reference: https://stackoverflow.com/questions/37454236/net-http-server-too-many-open-files-error
	*/
	http.DefaultClient.Timeout = time.Minute * 10
}
As Martin said in a comment, I wasn't really closing the Body after the Get request. I used defer res.Body.Close(), but it's never executed while I stay inside the for loop, since deferred calls only run when the function returns. So continue doesn't trigger defer.
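For completeness, a sketch of the corrected worker loop, closing the body explicitly on each iteration instead of deferring it:

// worker.go (fixed): deferred calls only run when the function returns,
// so the body must be closed explicitly before continuing the loop.
package main

import (
	"net/http"
	"time"
)

func main() {
	for {
		res, err := http.Get("http://localhost:4200/get")
		if err != nil {
			time.Sleep(100 * time.Millisecond)
			continue
		}
		if res.StatusCode == http.StatusInternalServerError {
			res.Body.Close() // close before looping, or the socket leaks
			time.Sleep(100 * time.Millisecond)
			continue
		}
		res.Body.Close()
		return
	}
}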
Please note that in some cases the following setting in /etc/sysctl.conf
net.ipv4.tcp_tw_recycle = 1
could cause this error, because TCP connections remain open.
A temporary solution: just increase the number of open files:
ulimit -Sn 10000
I have a zip file stored on Google Drive (it is shared publicly). I want to know how to download it in Golang. This current code just creates a blank file named "file.zip":
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	url := "https://docs.google.com/uc?export=download&id=0B2Q7X-dUtUBebElySVh1ZS1iaTQ"
	fileName := "file.zip"
	fmt.Println("Downloading file...")

	output, err := os.Create(fileName)
	if err != nil {
		fmt.Println("Error while creating", fileName, "-", err)
		return
	}
	defer output.Close()

	response, err := http.Get(url)
	if err != nil {
		fmt.Println("Error while downloading", url, "-", err)
		return
	}
	defer response.Body.Close()

	n, err := io.Copy(output, response.Body)
	if err != nil {
		fmt.Println("Error while copying", "-", err)
		return
	}
	fmt.Println(n, "bytes downloaded")
}
This appears to be a bug, either in Google Drive or in Go, I'm not sure which!
The problem is that the first URL you gave redirects to a second URL which looks something like this
https://doc-00-c8-docs.googleusercontent.com/docs/securesc/ha0ro937gcuc7l7deffksulhg5h7mbp1/8i67l6m6cdojptjuh883mu0qqmtptds1/1376330400000/06448503420061938118/*/0B2Q7X-dUtUBebElySVh1ZS1iaTQ?h=16653014193614665626&e=download
Note the * in the URL, which is legal according to this Stack Overflow question. However, it does have a special meaning as a delimiter.
Go fetches the URL with the * encoded as %2A like this
https://doc-00-c8-docs.googleusercontent.com/docs/securesc/ha0ro937gcuc7l7deffksulhg5h7mbp1/8i67l6m6cdojptjuh883mu0qqmtptds1/1376330400000/06448503420061938118/%2A/0B2Q7X-dUtUBebElySVh1ZS1iaTQ?h=16653014193614665626&e=download
Google replies to this with "403 Forbidden".
Google doesn't seem to be resolving the %2A into a *.
According to this Wikipedia article on reserved characters (of which * is one) in a URI scheme: if it is necessary to use such a character for some other purpose, then the character must be percent-encoded.
I'm not enough of an expert on this to say who is right, but since Google wrote both parts of the problem, it is definitely their fault somewhere!
Here is the program I was using for testing
I found the solution.
Use: https://googledrive.com/host/ID
Instead of: https://docs.google.com/uc?export=download&id=ID
I'm still investigating why this is happening; in the meantime you can use this workaround:
http://play.golang.org/p/SzGBAiZdGJ
CheckRedirect is called when a redirect happens, and you can set an Opaque path there to avoid having the URL url-encoded.
Francesc
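In case the playground link goes stale, here is a minimal sketch of that workaround (untested against today's Drive endpoints). Setting URL.Opaque inside CheckRedirect makes Go send the redirected path verbatim instead of re-escaping it, so the * survives:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	client := &http.Client{
		// CheckRedirect runs before each redirect is followed. Copying the
		// parsed Path into Opaque stops Go from percent-encoding it again
		// (the "*" stays a "*" instead of becoming "%2A").
		CheckRedirect: func(req *http.Request, via []*http.Request) error {
			req.URL.Opaque = req.URL.Path
			return nil
		},
	}

	resp, err := client.Get("https://docs.google.com/uc?export=download&id=0B2Q7X-dUtUBebElySVh1ZS1iaTQ")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()

	output, err := os.Create("file.zip")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer output.Close()

	n, _ := io.Copy(output, resp.Body)
	fmt.Println(n, "bytes downloaded")
}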
This simple HTTP server contains a call to time.Sleep() that makes
each request take five seconds. When I try quickly loading multiple
tabs in a browser, it is obvious that each request
is queued and handled sequentially. How can I make it handle concurrent requests?
package main

import (
	"fmt"
	"net/http"
	"time"
)

func serve(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintln(w, "Hello, world.")
	time.Sleep(5 * time.Second)
}

func main() {
	http.HandleFunc("/", serve)
	http.ListenAndServe(":1234", nil)
}
Actually, I just found the answer to this after writing the question, and it is very subtle. I am posting it anyway, because I couldn't find the answer on Google. Can you see what I am doing wrong?
Your program already handles the requests concurrently. You can test it with ab, a benchmark tool which is shipped with Apache 2:
ab -c 500 -n 500 http://localhost:1234/
On my system, the benchmark takes a total of 5043 ms to serve all 500 concurrent requests. It's just your browser that limits the number of connections per website.
Benchmarking Go programs isn't that easy, by the way, because you need to make sure that your benchmark tool isn't the bottleneck and that it can also handle that many concurrent connections. Therefore, it's a good idea to use a couple of dedicated computers to generate the load.
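If ab isn't available, the same check can be sketched in Go: fire a batch of requests in parallel and time them. With concurrent handling, the total stays near 5 s rather than N×5 s (assuming the server above is listening on :1234):

package main

import (
	"fmt"
	"net/http"
	"sync"
	"time"
)

func main() {
	const n = 10
	start := time.Now()

	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			resp, err := http.Get("http://localhost:1234/")
			if err != nil {
				fmt.Println(err)
				return
			}
			resp.Body.Close()
		}()
	}
	wg.Wait()

	// Concurrent handling: elapsed time stays close to a single request's 5s.
	fmt.Println("elapsed:", time.Since(start))
}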
From the net/http package's server.go, the goroutine is spawned in the Serve function when a connection is accepted. Below is the snippet:
// Serve accepts incoming connections on the Listener l, creating a
// new service goroutine for each. The service goroutines read requests and
// then call srv.Handler to reply to them.
func (srv *Server) Serve(l net.Listener) error {
	for {
		rw, e := l.Accept()
		if e != nil {
			// ...
		}
		c, err := srv.newConn(rw)
		if err != nil {
			continue
		}
		c.setState(c.rwc, StateNew) // before Serve can return
		go c.serve()
	}
}
If you use XHR requests, make sure that the xhr instance is a local variable.
For example, xhr = new XMLHttpRequest() creates a global variable. When you do parallel requests with the same xhr variable, you receive only one result. So you must declare xhr locally, like this: var xhr = new XMLHttpRequest().