Is there any way to broadcast to every channel in Redis? - go

I am trying to broadcast a message to every channel in Redis, but each user has their own channel named after their user_id.
The approach I think I can use is to get all active channels and then publish to them one by one, because as far as I know Redis can't publish to multiple different channels in one command.
The problem is that with the go-redis library I am using, when a user subscribes and I check Redis with the command PUBSUB CHANNELS, no channels are listed. I read in the documentation that the Subscribe function does not activate the channel immediately. So how can I get the subscribed channels?
Is there any solution to this?
I am using golang-redis https://godoc.org/github.com/go-redis/redis

Subscribe each connection to the per-user channel and to a channel for broadcasts. To send to all users, publish to the broadcast channel. The subscriber code will look something like this with the go-redis client:
sub := client.Subscribe(userChannel, broadcastChannel)
defer sub.Close()

for {
    m, err := sub.ReceiveMessage()
    if err != nil {
        // handle the error (e.g. log it and return)
        return
    }
    // ... do something with m
}
where userChannel and broadcastChannel are the names of Redis channels. Use code like this to broadcast:
cmd := client.Publish(broadcastChannel, message)
if cmd.Err() != nil {
    // handle error
}
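To address the PUBSUB CHANNELS concern: the subscription only shows up on the server once the SUBSCRIBE command has actually been processed, and with go-redis you can wait for that confirmation by calling Receive once before entering the message loop. Below is a minimal end-to-end sketch under that assumption; the channel names and the address are illustrative, not from your code.

package main

import (
    "fmt"
    "log"

    "github.com/go-redis/redis"
)

func main() {
    client := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

    userChannel := "user:42"        // per-user channel (illustrative name)
    broadcastChannel := "broadcast" // shared channel for messages to everyone

    sub := client.Subscribe(userChannel, broadcastChannel)
    defer sub.Close()

    // Wait for the subscription confirmation so the channels are registered
    // on the server before anything is published (and before you check
    // PUBSUB CHANNELS from another client).
    if _, err := sub.Receive(); err != nil {
        log.Fatal(err)
    }

    go func() {
        // Broadcasting: a single publish to the broadcast channel reaches all users.
        if err := client.Publish(broadcastChannel, "hello everyone").Err(); err != nil {
            log.Println("publish:", err)
        }
    }()

    for {
        m, err := sub.ReceiveMessage()
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(m.Channel, m.Payload)
    }
}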

Related

Golang Redis close vs unsubscribe from channel

pubsub := rdb.Subscribe(ctx, "mychannel1")

// Close the subscription when we are done.
defer pubsub.Close()

// vs unsubscribe from a channel
defer pubsub.Unsubscribe(ctx, "mychannel1")

ch := pubsub.Channel()
for msg := range ch {
    fmt.Println(msg.Channel, msg.Payload)
}
If I don't want the Redis pub/sub channel anymore, which is the recommended way to unsubscribe a receiver/subscription from a channel, and why? Do I also need to delete the Redis pub/sub channel at the end?
Close and Unsubscribe are two different behaviors.
You should call Close when the service stops, just like closing a network connection.
Unsubscribe means you no longer want to receive from that particular channel even though the service keeps running.
You don't need to delete the channel at the end; it is cleaned up automatically once it has no subscribers.
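A small sketch of the difference, using the same go-redis v8-style API as the snippet above (the channel names are illustrative, and rdb is assumed to be the *redis.Client already shown): the subscription object stays open and keeps receiving from the remaining channel after Unsubscribe, while Close tears the whole thing down.

ctx := context.Background()

pubsub := rdb.Subscribe(ctx, "mychannel1", "mychannel2")
// Close when the service shuts down: the PubSub and its connection are released.
defer pubsub.Close()

// Later, stop listening to just one channel; "mychannel2" is still subscribed
// and messages for it keep arriving on pubsub.Channel().
if err := pubsub.Unsubscribe(ctx, "mychannel1"); err != nil {
    log.Println("unsubscribe:", err)
}

for msg := range pubsub.Channel() {
    fmt.Println(msg.Channel, msg.Payload)
}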

Difference between NewChannel vs Request in ssh sftp server

I'm looking at the Go SFTP server example code
https://github.com/pkg/sftp/blob/master/examples/go-sftp-server/main.go
There is a section of code which is unclear to me:
_, chans, reqs, err := ssh.NewServerConn(nConn, config)
if err != nil {
    log.Fatal("failed to handshake", err)
}
fmt.Fprintf(debugStream, "SSH server established\n")

// The incoming Request channel must be serviced.
go ssh.DiscardRequests(reqs)

// Service the incoming Channel channel.
for newChannel := range chans {
    ...
}
First: with ssh.NewServerConn, if NewChannel (chans) represents a new request to open a channel, what is Request (reqs)? What is the difference between chans and reqs here?
Second: why is there a need for ssh.DiscardRequests(reqs)?
Looking at the documentation for ssh.NewServerConn it appears that it returns the following:
*ServerConn
<-chan NewChannel
<-chan *Request
error
The second returned value, NewChannel, "represents an incoming request to a channel".
The third returned value, Request, "is a request sent outside of the normal stream of data".
This doesn't really answer your questions, but it does provide helpful clues as to where to look.
So to answer your questions:
chans receives requests to open new channels on the connection. Using the value received from chans, you can either accept the channel and communicate over it or reject it. This can be thought of as handling multiple sessions, like multiple people logging into a remote machine via ssh.
reqs holds global requests (as defined in RFC 4254) sent to either the server or client that are not tied to any specific channel; the RFC gives "start TCP/IP forwarding for a specific port" as an example of such a request.
You can see how the ssh package uses the incomingRequests channel internally here.
The documentation for ssh.NewServerConn explicitly states
The Request and NewChannel channels must be serviced, or the connection will hang.
In the event that this server does receive a global request, it needs to be handled appropriately, in particular if the request is asking for a reply.
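If you wanted to service the global requests yourself instead of discarding them, a minimal sketch could look like the following; this mirrors what ssh.DiscardRequests does, rejecting anything that expects a reply.

go func() {
    for req := range reqs {
        // We don't support any global requests (e.g. "tcpip-forward"),
        // so reject those that expect a reply and drop the rest.
        if req.WantReply {
            req.Reply(false, nil)
        }
    }
}()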
Apart from will7200's answer, I just want to add a couple of things which I found while reading around this.
SSH has global requests (SSH_MESSAGE_GLOBAL_REQUEST) and channel requests (SSH_MESSAGE_CHANNEL_REQUEST); starting TCP/IP forwarding for a specific port is an example of a global request.
A channel is a specific terminal session (or similar stream), which is how we see it when data is sent between the ssh server and client.
So reqs here carries the global requests, while channel requests are delivered on their individual channels.
GLOBAL requests are requests that are not specific to a CHANNEL, like TCPKeepAlive (as mentioned in ssh_config) or starting TCP/IP forwarding for a specific port.
DiscardRequests essentially discards those requests, sending a failure reply to any that ask for one.
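To make the chans/reqs split concrete: per-channel requests arrive on their own channel once you accept the NewChannel. A minimal sketch along the lines of the pkg/sftp example (only the "subsystem" request for sftp is accepted; error handling is abbreviated):

for newChannel := range chans {
    // Only "session" channels carry the sftp subsystem.
    if newChannel.ChannelType() != "session" {
        newChannel.Reject(ssh.UnknownChannelType, "unknown channel type")
        continue
    }

    channel, requests, err := newChannel.Accept()
    if err != nil {
        log.Fatal("could not accept channel", err)
    }

    // Per-channel requests, as opposed to the global reqs above.
    go func(in <-chan *ssh.Request) {
        for req := range in {
            ok := req.Type == "subsystem" && len(req.Payload) > 4 && string(req.Payload[4:]) == "sftp"
            req.Reply(ok, nil)
        }
    }(requests)

    _ = channel // hand the channel off to the sftp server here
}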

Relay data between two different tcp clients in golang

I'm writing a TCP server which simultaneously accepts multiple connections from mobile devices and some WiFi (IoT) devices. The connections need to be maintained once established, with a 30-second timeout if no heartbeat is received. So it is something like the following:
// clientsMap map[string] conn
func someFunction() {
    conn, err := s.listener.Accept()
    // I store the conn in clientsMap
    // so I can access it, for brevity not
    // shown here, then:
    go serve(conn)
}

func serve(conn net.Conn) {
    timeoutDuration := 30 * time.Second
    conn.SetReadDeadline(time.Now().Add(timeoutDuration))
    for {
        msgBuffer := make([]byte, 2048)
        msgBufferLen, err := conn.Read(msgBuffer)
        // do something with the stuff
    }
}
So there is one goroutine for each client. And each client, once connected to the server, is pending on the read. The server then processes the stuff read.
The problem is that I sometimes need to read things off one client, and then pass data to another (Between a mobile device and a WiFi device). I have stored the connections in clientsMap. So I can always access that. But since each client is handled by one goroutine, shall I be passing the data from one client to another by using a channel? But if the goroutine is blocked waiting for a pending read, how do I make it also wait for data from a channel? Or shall I just obtain the connection for the other party from the clientsMap and write to it?
The documentation for net.Conn clearly states:
Multiple goroutines may invoke methods on a Conn simultaneously.
So yes, it is okay to simply Write to the connections. You should take care to issue a single Write call per message you want to send. If you call Write more than once you risk interleaving messages from different mobile devices. This implies calling Write directly and not via some other API (in other words don't wrap the connection). For instance, the following would not be safe:
json.NewEncoder(conn).Encode(myValue) // use json.Marshal(myValue) instead
io.Copy(conn, src) // use io.ReadAll(src) instead
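A minimal sketch of what that looks like in practice: marshal the whole message first, then hand it to the destination connection with a single Write. The helper name and the newline framing are illustrative, not part of your code; it assumes the encoding/json and net imports.

// relayJSON sends one JSON message to another client's connection with a single
// Write call, so concurrent relays from different goroutines don't interleave.
func relayJSON(dst net.Conn, v interface{}) error {
    b, err := json.Marshal(v)
    if err != nil {
        return err
    }
    b = append(b, '\n') // frame the message, e.g. newline-delimited JSON
    _, err = dst.Write(b)
    return err
}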

Google pubsub golang subscriber stops receiving new published message(s) after being idle for a few hours

I created a TOPIC in google pubsub, and created a SUBSCRIPTION inside the TOPIC, with the following settings
Then I wrote a puller in Go, using its Receive method to pull and acknowledge published messages:
package main

import (
    ...
)

func main() {
    ctx := context.Background()

    client, err := pubsub.NewClient(ctx, config.C.Project)
    if err != nil {
        // do things with err
    }

    sub := client.Subscription(config.C.PubsubSubscription)
    err = sub.Receive(ctx, func(ctx context.Context, msg *pubsub.Message) {
        msg.Ack()
    })
    if err == context.Canceled {
        logger.Error(fmt.Sprintf("Cancelled: %s", err.Error()))
    } else if err != nil {
        logger.Error(fmt.Sprintf("Error: %s", err.Error()))
    }
}
Nothing fancy, and it's working well, but then after a while (roughly 3 hours of being idle) it stops receiving new published messages. No error(s), nothing. Am I missing something?
In general, there can be several reasons why a subscriber may stop receiving messages:
- If a subscriber does not ack or nack messages, the flow control limits can be reached, meaning no more messages can be delivered. This does not seem to be the case in your particular instance given that you immediately ack messages (a sketch of these flow control settings follows this answer).
- If another subscriber starts up for the same subscription, it could be receiving the messages. In this scenario, one would expect the subscriber to receive a subset of the messages rather than no messages at all.
- Publishers just stop publishing messages and therefore there are no messages to receive. If you restart the subscriber and it starts receiving messages again, this probably isn't the case. You can also verify that a backlog is being built up by looking at the Stackdriver metric subscription/backlog_bytes.
If your problem does not fall into one of those categories, it would be best to reach out to Google Cloud support with your project name, topic name, and subscription name so that they can narrow down the issue to either your user code, the client library itself, or the service.
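For reference, the flow control limits mentioned above are configured on the subscription object in the Go client before calling Receive; a minimal sketch with illustrative values:

sub := client.Subscription(config.C.PubsubSubscription)

// Flow control: at most this many unacked messages / bytes outstanding at once.
// If handlers never ack (or nack), delivery stalls once these limits are hit.
sub.ReceiveSettings.MaxOutstandingMessages = 100
sub.ReceiveSettings.MaxOutstandingBytes = 10 * 1024 * 1024 // 10 MiB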
I was experiencing something similar and I was pretty sure there was not another subscriber pulling those messages.
Try this: go to the topic and create a new bogus subscription (name it whatever you want, because you'll just delete it later). Right after I did that, both the fake subscription (which I subscribed to with the Python sample code client) and the real one were receiving messages again. Strange solution, but maybe it kicked the topic awake again.
Hopefully someone from Google could give us some insight into what's happening here, but I'm definitely not paying them enough to get direct support.
A few changes will help you investigate the issue better:
- Check error from Receive
- Use separate context for Receive
ctx := context.Background()

err := sub.Receive(ctx, func(ctx context.Context, msg *pubsub.Message) {
    msg.Ack()
})
if err != nil {
    log.Fatal(err)
}
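If by a separate context you want one you can cancel for a graceful shutdown, a small sketch (assuming the same sub as above, plus the os, os/signal and syscall imports) could look like this:

// A cancellable context dedicated to Receive; cancelling it makes Receive
// return after in-flight handlers finish.
receiveCtx, cancel := context.WithCancel(context.Background())
defer cancel()

go func() {
    // e.g. stop receiving on SIGINT/SIGTERM
    sig := make(chan os.Signal, 1)
    signal.Notify(sig, os.Interrupt, syscall.SIGTERM)
    <-sig
    cancel()
}()

err := sub.Receive(receiveCtx, func(ctx context.Context, msg *pubsub.Message) {
    msg.Ack()
})
if err != nil && err != context.Canceled {
    log.Fatal(err)
}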
Did your code work before? I have been having problems with Pub/Sub since today. Methods like get_topic() and create_topic() in the Python Pub/Sub library stopped working, but I don't have any problems with publishing and pulling messages. Yesterday everything was working fine, but today it is not...

Reconnectable websocket which drops messages while reconnecting

I'm implementing a websocket client in golang.
I have to send several messages over one websocket session.
To deal with network problems, I need to reconnect to the websocket server whenever the connection is accidentally closed.
Currently I'm thinking of an implementation like this:
for {
    select {
    case t := <-message:
        err := connection.WriteMessage(websocket.TextMessage, []byte(t.String()))
        if err != nil {
            // If the session is disconnected,
            // try to reconnect here.
            connection.reconnect()
        }
    case t := <-errSignal:
        panic(t)
    }
}
In the above example, messages stack up while reconnecting.
This is not preferable for my purpose.
How can I drop websocket messages while reconnecting?
messages stack up while reconnecting.
This is not preferable for my purpose.
How can I drop websocket messages while reconnecting?
I take it message is a buffered channel. It's not clear to me exactly what behavior you're asking for with regard to dropping websocket messages while reconnecting, or why you want to do that, but you have some options for tracking messages related to the reconnect and handling them however you want.
First off, buffered channels act like queues: first in, first out (FIFO). You can always pop an element off the channel with a receive operation. You don't need to assign the result to a variable or use it. So say you just wanted to remove the first two messages from the queue around the reconnect and do nothing with them (not sure why), you can:
if err != nil {
    // If the session is disconnected,
    // try to reconnect here.
    connection.reconnect()
    // drop the next two messages
    <-message
    <-message
}
But that's going to remove the messages from the front of the queue. If the queue wasn't empty when you started the reconnect, it won't specifically remove the messages added during the reconnect.
If you want to relate the number of messages removed to the number of messages added during the reconnect, you can use the channel length:
before := len(message)
connection.reconnect()
after := len(message)
for x := 0; x < after-before; x++ {
    <-message
}
Again, that will remove from the front of the queue, and I don't know why you would want to do that unless you're just trying to empty the channel.
And if the channel is non-empty at the start of the reconnect and you really want to drop the messages that got added during the reconnect, you can use the time package. Channels can be defined for any Go type. So create a struct with fields for the message and a timestamp, redefine your buffered message channel to the struct type, and set the timestamp before sending the message. Save a "before" timestamp from before the reconnect and an "after" afterward. Then before processing a received message you can check whether it's in an after/before window and delete it (not write it) if so. You could make a new data structure to save several before/after windows, methods on the type to check whether a given time falls within any. Again, I don't know why you would want to do this.
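A rough sketch of that timestamp idea, shown as fragments for the sender and receiver sides; the names (timedMessage, queuedAt, payload) are illustrative and not from your code:

// timedMessage wraps whatever you currently send on the message channel,
// adding the time it was queued.
type timedMessage struct {
    queuedAt time.Time
    payload  string // stand-in for your actual message type
}

message := make(chan timedMessage, 64)

// sender side:
// message <- timedMessage{queuedAt: time.Now(), payload: "..."}

// around the reconnect, record the window:
before := time.Now()
connection.reconnect()
after := time.Now()

// receiver side: drop anything queued during the reconnect window.
for m := range message {
    if m.queuedAt.After(before) && m.queuedAt.Before(after) {
        continue // queued while reconnecting; drop it
    }
    // process m.payload
}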
Perhaps a better solution would be to just limit the buffer size of the channel instead, and then no new messages could be added to the channel when the channel is full. Would that meet your needs? If you have a reason to drop messages maybe you can explain more about your goals and design -- especially to explain which messages you want to drop.
It might also clarify your question if you include more of the relevant code, such as the declaration of the message channel.
Edit: Asker added info in comment to this answer, and comment to question.
The choice between a buffered channel and an unbuffered channel is, in part, about whether you want senders to block when receivers are not available. If this is your only receiver, during the reconnect it will be unavailable. Therefore if it works for your design for senders to block, you can use an unbuffered channel instead of timestamps and no messages will be added to the channel during the reconnect. However, senders blocked at the channel send will be waiting for the receiver with their old message, and only after that send succeeds will they send a new message with current data. If that doesn't work for you, a buffered channel is probably the better option.
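If you go with the bounded-buffer suggestion and want senders to drop rather than block when the buffer fills (for example while the receiver is busy reconnecting), a non-blocking send is one way to sketch it; the channel size and message type are illustrative:

message := make(chan string, 16) // small buffer; fills up during a reconnect

// sender side: drop the message instead of blocking when the buffer is full.
select {
case message <- "some update":
    // queued normally
default:
    // buffer full (e.g. receiver is mid-reconnect): drop this message
}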
