I am using the Eclipse Paho Go library to create a new MQTT client for a particular client ID:
func CreateMQTTClient(clientID string) (client MQTT.Client) {
    username := viper.GetString("messaging.rabbitmq.username")
    password := viper.GetString("messaging.rabbitmq.password")
    host := viper.GetString("messaging.rabbitmq.host")
    mqqtPort := viper.GetString("messaging.rabbitmq.mqqtPort")
    rabbitMqMQQTURL := "tcp://" + host + ":" + mqqtPort

    opts := MQTT.NewClientOptions().AddBroker(rabbitMqMQQTURL)
    opts.SetClientID(clientID)
    opts.Username = username
    opts.Password = password
    opts.SetCleanSession(false)

    cli := MQTT.NewClient(opts)
    if !cli.IsConnected() {
        log.Println("I came here for cli:", clientID)
        if token := cli.Connect(); token.Wait() && token.Error() != nil {
            log.Print(token.Error())
        }
    }
    return cli
}
I am not sure how to get this client back using the clientID. If I call CreateMQTTClient again, all existing subscriptions are lost.
There is, unfortunately, no way to query an MQTT server to find out what subscriptions it has active for your client ID. When you connect with the same client ID as a previous session, the server assumes you have the same state as the last time you were connected. However, there is no way to pre-register a MessageHandler for a topic in the Go client; the only way to add and remove them is with Subscribe/Unsubscribe.
Assuming the server is working correctly, if you connect as above reusing a client ID, the server will continue to send you messages according to your previous subscriptions, but the Go client doesn't know what to do with them and will invoke the default message handler. The best way to resolve this currently is to call Subscribe() in the OnConnect handler, assuming the topics you want to subscribe to are predetermined rather than dynamic.
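A minimal sketch of that approach; the topic name, QoS level, and handler below are placeholders, not anything taken from your code:

import (
    "log"

    MQTT "github.com/eclipse/paho.mqtt.golang"
)

// configureResubscribe re-registers the message handler every time the
// client (re)connects, so messages the broker delivers for the persistent
// session don't fall through to the default handler.
func configureResubscribe(opts *MQTT.ClientOptions) {
    opts.SetOnConnectHandler(func(c MQTT.Client) {
        if token := c.Subscribe("example/topic", 1, func(_ MQTT.Client, msg MQTT.Message) {
            log.Println("received:", string(msg.Payload()))
        }); token.Wait() && token.Error() != nil {
            log.Print(token.Error())
        }
    })
}

Call this on the options before MQTT.NewClient(opts) in CreateMQTTClient, and the subscriptions are re-established on every connect and reconnect.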
I have a server-side streaming RPC hosted on Google Cloud Run.
With the following proto definition:
syntax = "proto3";

package test.v1;

service MyService {
    // Subscribe to a stream of events.
    rpc Subscribe (SubscribeRequest) returns (stream SubscribeResponse) {}
}

message SubscribeRequest {
}

message SubscribeResponse {
}
Using BloomRPC/grpcurl, when I stop the method I get a stream.Context().Done() event which I can use to gracefully stop certain tasks. Here is an example of the Subscribe method:
func (s *myService) Subscribe(req *pb.SubscribeRequest, stream pb.Instruments_SubscribeServer) error {
    // Create a channel for this client.
    ch := make(chan *pb.SubscribeResponse)

    // Add the channel object 'ch' to a global list of channels where a 'broadcaster'
    // sends messages to all connected clients.
    // TODO: pass to broadcaster client list.

    for {
        select {
        case <-stream.Context().Done():
            close(ch)
            fmt.Println("Removed client from global list of channels")
            return nil
        case res := <-ch:
            _ = stream.Send(res)
        }
    }
}
On the client side, when I test the service locally (i.e. running a local gRPC server in Go) using BloomRPC/grpcurl, I get a message on the stream.Context().Done() channel whenever I stop the BloomRPC/grpcurl connection. This is the expected behaviour.
However, running the exact same code on Cloud Run in the same way (via BloomRPC/grpcurl), I don't get a stream.Context().Done() message. Is there any reason why this would be different on Google Cloud Run? Looking at the Cloud Run logs, a call to the Subscribe method essentially 'hangs' until the request reaches its timeout.
I needed to enable HTTP/2 Connections in Cloud Run for this to work.
I am very new to Go and have found myself working with sockets in my first project. This is a redundant question, but I have failed to understand how
to send a websocket update to a specific client in Go (using Gorilla).
The broad problem that I am trying to solve is building a typeahead using websockets and a search engine like ES/Lucene. I have maintained a bunch of indexes on my search engine and have a Go wrapper around it. When I started working with websockets in Go, I found almost all the examples showing a broadcasting mechanism. When I tried to dig into this and modify the example given in Gorilla's GitHub repo based on the examples given in this thread and in this answer, I don't seem to understand connections and how they fit into client.go.
Ideally, the way I would like to see this working is -
A socket connection between the client and server is established
Upon the client sending input via the socket, the server fetches it and throws it into a channel (Go channel)
The indexing wrapper checks for this channel, and once there is something to fetch, the index is retrieved and written back to the socket
How can the server uniquely identify the Client?
I have used the examples given in Gorilla's GitHub repo.
From my codebase, hub.go has the following:
type Hub struct {
    // Registered clients.
    clients map[*Client]bool

    // Inbound messages from the clients.
    broadcast chan []byte

    // Register requests from the clients.
    register chan *Client

    // Unregister requests from clients.
    unregister chan *Client

    connections map[string]*connection
}
func newHub() *Hub {
    return &Hub{
        broadcast:  make(chan []byte),
        register:   make(chan *Client),
        unregister: make(chan *Client),
        clients:    make(map[*Client]bool),
        connection: make(map[*Client]bool), // is this alright?
    }
}
func (h *Hub) run() {
    for {
        select {
        case client := <-h.register:
            h.clients[client] = true
        case client := <-h.unregister:
            if _, ok := h.clients[client]; ok {
                delete(h.clients, client)
                close(client.send)
            }
        case message := <-h.broadcast:
            for client := range h.connections {
                select {
                case client.send <- message:
                default:
                    close(client.send)
                    delete(h.connections, client)
                }
            }
        }
    }
}
and I am unsure what I should be adding to client.go:
type Client struct {
    // unique ID for each client
    // id string

    // Hub object
    hub *Hub

    // The websocket connection.
    conn *websocket.Conn

    // Buffered channel of outbound messages.
    send chan []byte

    // connection --> (what should the connection property be?)
    connection string
}
Please note - I will be adding an Id field within the Client struct. How can I proceed from here?
The chat example shows how to implement broadcast; it is not a good starting point for an application if broadcast is not required.
To send a message to a specific websocket connection, simply write to the connection using NextWriter or WriteMessage. These methods do not support concurrent writers, so you may need to use a mutex or goroutine to ensure a single writer.
The simple approach for finding a specific *websocket.Conn is to pass the *websocket.Conn to the code that needs it. If the application needs to associate other state with a connection, then define a type to hold that state and pass a pointer to it around:
type Client struct {
    conn *websocket.Conn
    mu   sync.Mutex
    ...
}
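For instance, a minimal sketch of a mutex-guarded write method on such a type (the method name is just illustrative):

// write serializes writes to the underlying connection so that only one
// goroutine writes at a time.
func (c *Client) write(messageType int, payload []byte) error {
    c.mu.Lock()
    defer c.mu.Unlock()
    return c.conn.WriteMessage(messageType, payload)
}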
The Hub can be modified to send messages to a specific connection, but it's a roundabout path if broadcast is not needed. Here's how to do it:
Add an ID field to the client:
ID idType // replace idType with int, string, or whatever you want to use
Change the hub field from clients map[*Client]bool to clients map[idType]*Client.
Define a message type containing the message data and the ID of the target client:
type message struct {
    ID   idType
    data []byte
}
Replace the hub broadcast field with:
send chan message
Change the hub for loop to:
for {
select {
case client := <-h.register:
h.clients[client.ID] = client
case client := <-h.unregister:
if _, ok := h.clients[client.ID]; ok {
delete(h.clients, client.ID)
close(client.send)
}
case message := <-h.send:
if client, ok := h.clients[message.ID]; ok {
select {
case client.send <- message.data:
default:
close(client.send)
delete(h.connections, client)
}
}
}
Send messages to a specific client by creating a message with the appropriate ID:
hub.send <- message{ID: targetID, data: data}
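Putting the pieces together, the resulting hub might look roughly like this (idType stands in for whatever identifier type you choose; this is a sketch of the steps above, not the original chat example):

type message struct {
    ID   idType
    data []byte
}

type Hub struct {
    // Registered clients, keyed by their unique ID.
    clients map[idType]*Client

    // Register and unregister requests from the clients.
    register   chan *Client
    unregister chan *Client

    // Messages addressed to a single client.
    send chan message
}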
I'm experiencing some unexpected errors while trying to subscribe to the Podio Push service. I use the Go concurrency pattern defined here, and here is the bayeux client library used for subscription.
Basically, the flow tries to retrieve the item first and then subscribe to the push object provided with the item object. There is a channel where I store each task (taskLoad: ~each item_id with the credentials it needs for retrieval):
item := new(podio.Item)
item, err = podio.GetItem(itemId)
if err != nil {
    log.Errorf("PODIO", "Could not get item %d -> %s", itemId, err)
    return
}
and, later, inside another func:
messages := make(chan *bayeux.Message)
server := GetBayeux()
defer server.Close()

if err = push.Subscribe(server, messages); err != nil {
    // log err, with item details
    log.Errorf("PODIO", "%s", err, push)
    // re-enqueue the task in the taskLoad channel
    go enqueueTask(true, messages, sigrepeat, timer)
    // release the sigwait channel on subscription error
    <-sigwait
    return
}
Here, the GetBayeux func is just a singleton which wraps the client:
func GetBayeux() *bayeux.Client {
    bayeuxOnce.Do(func() {
        Bayeux = bayeux.NewClient("https://push.podio.com/faye", nil)
    })
    return Bayeux
}
There are about ~15,000 items to listen to and I should subscribe to each of them, but unfortunately I sometimes get one of these errors while processing subscriptions:
401:k9jx3v4qq276k9q84gokpqirhjrfbyd:Unknown client [{"channel":"/item/9164xxxxx","signature":"46bd8ab3ef2a31297d8f4f5ddxxxx","timestamp":"2018-01-02T14:34:02Z","expires_in":21600}]
OR
HTTP Status 400 [{"channel":"/item/9158xxxxx","signature":"31bf8b4697ca2fda69bd7fd532d08xxxxxx","timestamp":"2018-01-02T14:37:02Z","expires_in":21600}]
OR
[WRN] Bayeux connect failed: HTTP Status 400
OR
Bayeux connect failed: Post https://push.podio.com/faye: http2: server sent GOAWAY and closed the connection; LastStreamID=1999, ErrCode=NO_ERROR, debug=""
So now I'd like to know why I get these errors and, most of all, how I can fix them to ensure I listen to all items in scope.
If anyone knows, is there any limitation on concurrent access to the Podio push service?
Thanks
UPDATE 2019-01-07
It was the singleton that messed up the process. As it was used in a goroutine context, some subscriptions were not allowed because the server had been closed by another goroutine. The fix was to expose the Unsubscribe method and use it instead of the Close method, which disconnects the client from the server.
defer server.Close()
became
defer push.Unsubscribe(server)
I'm using the gin framework to build an API server. In general, I'm building 2 projects: project 'API' and project 'SOCKET'. Project 'API' is the main REST API that will be used in Android, developed using the gin framework (Go). Project 'SOCKET' is the socket server for clients that use a socket connection, built with Node.js (Socket.IO).
The process begins like this:
User A: as the requester; A connects to "API"
User B: as the responder; B connects to "SOCKET"
User A calls the requestData API from Android; the request is handled by the "API" project. Project "API" records the request and publishes it on Redis
as new_request using pub/sub.
Here is the code, for example:
client := redis.NewClient(&redis.Options{
    Addr:     "localhost:6379",
    Password: "", // no password set
    DB:       0,  // use default DB
})

pong, err := client.Ping().Result()
fmt.Println(pong, err)
if err != nil {
    fmt.Println("err", err)
}

pubsub, err := client.Subscribe("responseclient")
if err != nil {
    panic(err)
}
defer pubsub.Close()

err = client.Publish("new_request", "Example New Request").Err()
if err != nil {
    panic(err)
}

msg, err := pubsub.ReceiveMessage()
if err != nil {
    panic(err)
}
fmt.Println(msg.Channel, msg.Payload)
In project "SOCKET" there is a subscriber that listens to every publish that occurs, and publishes a new message to the responseclient channel.
This is the example code:
ioApp.on('connection', function(socket) {
    redisSub.on('new_request', function(channel, message) {
        console.log(channel + ':' + message);
        redisPub.publish("responseclient", JSON.stringify(res));
    });
})
This works smoothly if User B is connected to Socket.IO. But if User B is offline, or not connected to Socket.IO, this will wait for a long time, until we kill it manually or until User B comes online.
What I am asking is:
Can we create something like a callback on Redis pub/sub? If the subscriber doesn't accept the message, due to being offline or something else, we close the connection. Is this possible?
In Node.js I know I can use a timeout function that closes the subscription or emits an event if no message is received within a certain time. How do I do this in Go? I need to inform User A whether User B is active or offline, so he can wait for another time to create the request.
If none of this is possible, what is your suggestion for how to do this?
I hope my question is understandable and can be answered well.
*Some code may be missing variables.
**I'm using this library for Go Redis: go-redis
1) There are no callbacks in Redis.
2) The usual way to implement a timeout in Go is to use channels and select - where one channel is where you do the blocking and another channel receives a message on timeout. Examples of that can be found here and here in the docs.
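A minimal sketch of that pattern, using time.After as the timeout channel (the function name and the messages channel are just illustrative):

// waitForReply blocks until a message arrives on the channel or the timeout
// fires, and reports whether a reply was received in time.
func waitForReply(messages <-chan string, timeout time.Duration) (string, bool) {
    select {
    case msg := <-messages:
        // A reply arrived before the deadline.
        return msg, true
    case <-time.After(timeout):
        // Nothing arrived in time; treat the receiver as offline.
        return "", false
    }
}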
Now for (3), you have some options on methods. The first is to use a list, pushing from one side (publishing) and popping from the other (subscribing). For the receiver you would use BRPOP or BLPOP - a blocking pop from the right or left respectively. You can combine the two to have persistent messaging.
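A rough sketch of that list-based approach with go-redis (v8-style API with a context; the key name, timeout, and function names are placeholders):

// enqueueRequest pushes a request onto a per-user list.
func enqueueRequest(ctx context.Context, rdb *redis.Client, key, payload string) error {
    return rdb.LPush(ctx, key, payload).Err()
}

// popRequest blocks until a request arrives or the timeout expires.
func popRequest(ctx context.Context, rdb *redis.Client, key string, timeout time.Duration) (string, error) {
    res, err := rdb.BRPop(ctx, timeout, key).Result()
    if err != nil {
        // redis.Nil means the timeout expired with nothing to pop.
        return "", err
    }
    // res[0] is the key name, res[1] is the popped value.
    return res[1], nil
}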
Now part of PUBSUB also depends on what you are publishing to. If you are publishing to a channel that would have a subscriber if and only if there is a user connected to receive it (and thus one and only one subscriber to that channel), you can check the response from your publish command. It will tell you how many clients it was published to. If the channel is only subscribed to by an online receiver you would get a '1' back, and a '0' if the user was offline.
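With go-redis that count is the return value of Publish, so a check along these lines (the channel name is illustrative) could tell the API whether User B is currently listening:

// isReceiverOnline publishes the payload and reports whether any subscriber
// received it (Redis returns the receiver count from PUBLISH).
func isReceiverOnline(ctx context.Context, rdb *redis.Client, channel, payload string) (bool, error) {
    receivers, err := rdb.Publish(ctx, channel, payload).Result()
    if err != nil {
        return false, err
    }
    return receivers > 0, nil
}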
A third example is to store the messages in a sorted set, with the timestamp as the score. This would allow the receiver to connect and get the messages since the last time it was connected - but that assumes some persistence of that timestamp somewhere, usually on the client. You would also need some cleanup activity on the sorted sets.
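For example, with go-redis (the key name and the lastSeen bookkeeping are assumptions, not anything from the question):

// storeMessage records a message with its timestamp as the score.
func storeMessage(ctx context.Context, rdb *redis.Client, key, payload string) error {
    return rdb.ZAdd(ctx, key, &redis.Z{
        Score:  float64(time.Now().Unix()),
        Member: payload,
    }).Err()
}

// messagesSince fetches everything stored after lastSeen, a Unix timestamp
// the client is assumed to persist itself.
func messagesSince(ctx context.Context, rdb *redis.Client, key string, lastSeen int64) ([]string, error) {
    return rdb.ZRangeByScore(ctx, key, &redis.ZRangeBy{
        Min: strconv.FormatInt(lastSeen, 10),
        Max: "+inf",
    }).Result()
}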
Some other things to consider in this scenario are whether you eventually use replication, in which case you have to explicitly account for failovers - though really, in the scenario you describe, you'd want to account for disconnects and reconnects. There are specific examples of this in my post on reliable PUBSUB.
package main

import (
    "context"
    "fmt"
    "time"

    "github.com/go-redis/redis/v8"
)

var ctx = context.Background()

func main() {
    rdb := redis.NewClient(&redis.Options{
        Addr:     "localhost:6379",
        Password: "", // no password set
        DB:       0,  // use default DB
    })

    subscribe := rdb.Subscribe(ctx, "hello")
    subscriptions := subscribe.ChannelWithSubscriptions(ctx, 1)

    go func() {
        var sentCount = 0
        for {
            rdb.Publish(ctx, "hello", time.Now().UnixNano())
            sentCount++
            if sentCount > 300 {
                break
            }
        }
    }()

    for {
        select {
        case sub := <-subscriptions:
            fmt.Println(sub)
        }
    }
}
I use code.google.com/p/go.net/websocket in my server, so the client can get notifications from the server.
However, it seems that after the client connects to the server, if there is no data transfer between client and server, the server will return an EOF error at websocket.JSON.Receive(); it looks like a timeout mechanism.
I have searched on Google, and it seems the websocket protocol has a ping-pong heartbeat to maintain the connection. I want to ask whether code.google.com/p/go.net/websocket supports this ping protocol or not.
What should I do if I want to keep the connection between client and server alive?
Here's a working drop-in solution for the gorilla/websocket package.
func keepAlive(c *websocket.Conn, timeout time.Duration) {
    lastResponse := time.Now()
    c.SetPongHandler(func(msg string) error {
        lastResponse = time.Now()
        return nil
    })

    go func() {
        for {
            err := c.WriteMessage(websocket.PingMessage, []byte("keepalive"))
            if err != nil {
                return
            }
            time.Sleep(timeout / 2)
            if time.Since(lastResponse) > timeout {
                c.Close()
                return
            }
        }
    }()
}
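For example, assuming a standard gorilla upgrader inside an HTTP handler, the helper might be wired up like this (the 30-second timeout is just an illustration):

conn, err := upgrader.Upgrade(w, r, nil)
if err != nil {
    log.Println("upgrade failed:", err)
    return
}
keepAlive(conn, 30*time.Second)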
As recently as 2013, the go.net websocket library did not support (automatic) keep-alive messages. You have two options:
Implement an "application level" keep-alive by periodically having your application send a message down the pipe (either direction should work) that is ignored by the other side; a sketch of this is shown after this list.
Move to a different websocket library that does support keep-alives (like this one). Edit: it looks like that library has been superseded by Gorilla websockets.
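A minimal sketch of option 1 with the go.net (now golang.org/x/net) websocket package; the interval and the "ping" payload are arbitrary, and the peer is assumed to discard messages it doesn't recognize:

// appKeepAlive periodically sends a throwaway text message so that neither
// side's idle timeout fires.
func appKeepAlive(ws *websocket.Conn, interval time.Duration) {
    go func() {
        for {
            if err := websocket.Message.Send(ws, "ping"); err != nil {
                return
            }
            time.Sleep(interval)
        }
    }()
}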