I am facing a very weird problem here. I started a Kafka consumer using the sarama-cluster library in Go to consume messages from a Kafka topic, but the messages are not being received by the consumer.
However, a very weird thing happens: if I start another consumer in parallel, the messages are suddenly delivered to both consumers.
I cannot think of a logical explanation for this. Any pointers would be appreciated.
Note: this problem started after the Kafka and ZooKeeper servers were restarted non-gracefully.
Here is the Go code of the consumer that does not receive messages:
if err := consumer.Start(); err != nil {
	return err
}
updChan, err := consumer.Consume()
if err != nil {
	return err
}
go func() {
	for {
		select {
		case msg, ok := <-updChan:
			if !ok {
				return
			}
			var message liveupdater.KafkaMessage
			err := json.Unmarshal(msg.Msg, &message)
			if err != nil {
				fmt.Println(err)
			}
			err = handleMessage(message)
			if err != nil {
				logrus.Println("encountered error: " + err.Error())
			}
			consumer.MarkProcessed(msg, string(message.Type))
		}
	}
}()
Following is the Go code where the consumer does receive messages (the only difference from the previous code is another consumer running in parallel on the same topic):
consumeMessages(config)
if err := consumer.Start(); err != nil {
	return err
}
updChan, err := consumer.Consume()
if err != nil {
	return err
}
go func() {
	for {
		select {
		case msg, ok := <-updChan:
			if !ok {
				return
			}
			var message liveupdater.KafkaMessage
			err := json.Unmarshal(msg.Msg, &message)
			if err != nil {
				fmt.Println(err)
			}
			err = handleMessage(message)
			if err != nil {
				logrus.Println("encountered error: " + err.Error())
			}
			consumer.MarkProcessed(msg, string(message.Type))
		}
	}
}()
func consumeMessages(config *rakshak_config.Config) {
	kafkaConfig := kafka.Config{Brokers: strings.Split(config.Kafka.Brokers, ",")}
	logrus.Printf("brokers %s", config.Kafka.Brokers)
	hermesConsumer, err := hermes.NewConsumer(hermes.Kafka, []string{config.Kafka.Topic}, kafkaConfig)
	if err != nil {
		logrus.Printf("could not get consumer through hermes %s", err)
	}
	err = hermesConsumer.Start()
	if err != nil {
		logrus.Printf("could not start consumer through hermes %s", err)
	}
	conChan, err := hermesConsumer.Consume()
	if err != nil {
		logrus.Printf("not able to start consumer channel %s", err)
	}
	go func() {
		for {
			select {
			case msg, ok := <-conChan:
				if !ok {
					logrus.Println("consumer channel closed")
					return
				}
				logrus.Printf("kafka msg string: %s", string(msg.Msg))
				hermesConsumer.MarkProcessed(msg, "")
			}
		}
	}()
}
Thanks in advance.
Related
I have this function:
func write() {
	defer func() {
		serverConn.Close()
	}()
	for message := range msgChan {
		w, err := serverConn.NextWriter(websocket.TextMessage)
		if err != nil {
			return
		}
		bmessage, err := json.Marshal(message)
		if err != nil {
			return
		}
		_, err = w.Write(bmessage)
		if err != nil {
			fmt.Println(err)
		}
		if err := w.Close(); err != nil {
			return
		}
	}
}
And I got:
panic: concurrent write to websocket connection
I'm wondering how this is possible: only this function writes to the ws, and it runs as a single instance. And a second question: what is the point of using NextWriter instead of just conn.WriteMessage()? Is it possible that, with a large number of messages, NextWriter calls accumulate and try to write at the same time?
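For reference, gorilla/websocket allows at most one concurrent writer per connection, and conn.WriteMessage() is just a helper that calls NextWriter, writes the message, and closes the writer, so the two are equivalent here. The panic therefore means some other goroutine (a ping handler or close path, for example) also writes to the same connection. A minimal sketch of routing every write through one mutex, reusing serverConn and the types from the snippet above (the helper name is illustrative):
var writeMu sync.Mutex // guards all writes to serverConn

// sendJSON marshals v and writes it as a single text frame.
// Every goroutine that writes to serverConn must go through here.
func sendJSON(v interface{}) error {
	bmessage, err := json.Marshal(v)
	if err != nil {
		return err
	}
	writeMu.Lock()
	defer writeMu.Unlock()
	return serverConn.WriteMessage(websocket.TextMessage, bmessage)
}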
I'm trying to create a TCP server that times out whenever the client does not respond within one second.
I tried:
func main() {
	listener, err := net.Listen("tcp", "localhost:8000")
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := listener.Accept()
		if err != nil {
			log.Print(err)
			continue
		}
		conn.SetDeadline(time.Now().Add(timeout))
		go handleConn(conn)
	}
}
where the timeout is a single second, but the server disconnects immediately, without even waiting for a reply.
What you want can be achieved by setting socket options on your listener. Tweak the values to your needs.
Note that this is TCP's own keep-alive mechanism and does not depend on incoming/outgoing application data.
func enableTCPKeepAlive(listener *net.TCPListener) error {
	rawConn, err := listener.SyscallConn()
	if err != nil {
		return err
	}
	cfg := config.TLSServerConfig()
	var sockErr error
	err = rawConn.Control(func(fdPtr uintptr) {
		// Got the socket file descriptor; set the keep-alive parameters.
		fd := int(fdPtr)
		// Idle time before the first probe is sent.
		sockErr = syscall.SetsockoptInt(fd, syscall.IPPROTO_TCP, syscall.TCP_KEEPIDLE, cfg.TCPKAIdleTime)
		if sockErr != nil {
			return
		}
		// Number of probes.
		sockErr = syscall.SetsockoptInt(fd, syscall.IPPROTO_TCP, syscall.TCP_KEEPCNT, cfg.TCPKANumProbes)
		if sockErr != nil {
			return
		}
		// Wait time after an unsuccessful probe.
		sockErr = syscall.SetsockoptInt(fd, syscall.IPPROTO_TCP, syscall.TCP_KEEPINTVL, cfg.TCPKAInterval)
		if sockErr != nil {
			return
		}
		// The Go syscall package doesn't define the constant 0x12 (18) for
		// TCP_USER_TIMEOUT; the value is taken from the Linux kernel header
		// include/uapi/linux/tcp.h.
		sockErr = syscall.SetsockoptInt(fd, syscall.IPPROTO_TCP, 0x12, cfg.TCPKAUserTimeout)
	})
	if err != nil {
		return err
	}
	return sockErr
}
There are more options available than the ones I have mentioned above.
Call this function on your listener before the for loop.
func main() {
	listener, err := net.Listen("tcp", "localhost:8000")
	if err != nil {
		log.Fatal(err)
	}
	err = enableTCPKeepAlive(listener.(*net.TCPListener))
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := listener.Accept()
		if err != nil {
			log.Print(err)
			continue
		}
		conn.SetDeadline(time.Now().Add(timeout))
		go handleConn(conn)
	}
}
The problem is almost always in code that is not posted here. The function obviously works like a charm:
package main

import (
	"crypto/rand"
	"log"
	"net"
	"time"
)

func main() {
	listener, err := net.Listen("tcp", "localhost:8000")
	if err != nil {
		log.Fatal(err)
	}
	go func() {
		for {
			conn, err := listener.Accept()
			if err != nil {
				log.Print(err)
				return
			}
			go func(c net.Conn) {
				defer c.Close()
				if err := c.SetDeadline(time.Now().Add(2 * time.Second)); err != nil {
					log.Print(err)
					return
				}
				buf := make([]byte, 1<<19) // 512 KiB
				for {
					_, err := c.Read(buf)
					if err != nil {
						log.Print(err)
						break
					}
				}
			}(conn)
		}
	}()
	payload := make([]byte, 1<<20)
	_, err = rand.Read(payload) // generate a random payload
	if err != nil {
		log.Print(err)
	}
	conn, err := net.Dial("tcp", listener.Addr().String())
	if err != nil {
		log.Fatal(err)
	}
	log.Println("Connected to server.")
	time.Sleep(5 * time.Second)
	_, err = conn.Write(payload)
	if err != nil {
		log.Print(err)
	}
	listener.Close()
}
I have this API that scans for drivers' locations and sends them via WebSocket every second. The issue is that the loop cannot be escaped when the client disconnects; it seems to stay alive forever. I am using Gin with the nhooyr websocket library.
var GetDriverLocations = func(c *gin.Context) {
	wsoptions := websocket.AcceptOptions{InsecureSkipVerify: true}
	wsconn, err := websocket.Accept(c.Writer, c.Request, &wsoptions)
	if err != nil {
		return
	}
	defer wsconn.Close(websocket.StatusInternalError, "the sky is falling")
	driverLocation := &models.DriverLocation{}
	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
		case <-c.Request.Context().Done():
			fmt.Println("done") // this never gets printed
			return
		}
		coords, err := driverLocation.GetDrivers()
		if err != nil {
			break
		}
		err = wsjson.Write(c.Request.Context(), wsconn, &coords)
		if websocket.CloseStatus(err) == websocket.StatusNormalClosure {
			break
		}
		if err != nil {
			break
		}
	}
	fmt.Println("conn ended") // this never gets printed
}
I also tried this loop, but it has the same issue:
for range ticker.C {
	coords, err := driverLocation.GetDrivers()
	if err != nil {
		break
	}
	err = wsjson.Write(c.Request.Context(), wsconn, &coords)
	if websocket.CloseStatus(err) == websocket.StatusNormalClosure {
		break
	}
	if err != nil {
		break
	}
}
Because the network connection is hijacked from the net/http server by the nhooyr websocket library, the context c.Request.Context() is not canceled until the handler returns.
Call CloseRead to get a context that's canceled when the connection is closed. Use that context in the loop.
var GetDriverLocations = func(c *gin.Context) {
	wsoptions := websocket.AcceptOptions{InsecureSkipVerify: true}
	wsconn, err := websocket.Accept(c.Writer, c.Request, &wsoptions)
	if err != nil {
		return
	}
	defer wsconn.Close(websocket.StatusInternalError, "")
	ctx := wsconn.CloseRead(c.Request.Context())
	driverLocation := &models.DriverLocation{}
	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
		case <-ctx.Done():
			return
		}
		coords, err := driverLocation.GetDrivers()
		if err != nil {
			break
		}
		err = wsjson.Write(ctx, wsconn, &coords)
		if err != nil {
			break
		}
	}
}
I have code (I use https://github.com/fiorix/go-smpp):
// -----------------------------------------------
// handleConnection new clients.
// -----------------------------------------------
func (_srv *ServerSmpp) handleConnection(_cfg *ConfigSmpp, c *conn) {
	defer c.Close()
	if err := _srv.auth(_cfg, c); err != nil {
		if err != io.EOF {
			log.Printf("smpp_server: server auth failed: %s\n", err)
		}
		return
	}
	notify := make(chan error)
	go func() {
		for {
			pb, err := c.Read()
			if err != nil {
				notify <- err
				return
			}
			err = _srv.Handler(_srv.RemoteProvider, c, pb)
			if err != nil {
				fmt.Printf("%s\n", err)
				notify <- err
				return
			}
		}
	}()
	for {
		select {
		case err := <-notify:
			if io.EOF == err {
				fmt.Printf("Smpp server (read): %s\n", err)
				return
			}
		case <-time.After(time.Second * 10):
			fmt.Printf("Client disconnected by timeout.\n")
			return
		}
	}
}
The code that invokes handleConnection:
func (_srv *ServerSmpp) Serve(_cfg *ConfigSmpp) {
	for {
		client, err := _srv.NetListener.Accept()
		if err != nil {
			break
		}
		c := newConn(client)
		go _srv.handleConnection(_cfg, c)
	}
}
When this code runs, the server disconnects every client after the 10-second timeout, but how can I disconnect only a client that has sent nothing for 10 seconds?
Your client object seems to be a net.Conn;
choose a way to call client.SetReadDeadline() with the appropriate time.Time value before blocking on client.Read():
c.client.SetReadDeadline(time.Now().Add(10 * time.Second))
pb, err := c.Read()
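Folded into the read goroutine from the question, a minimal sketch (assuming c.client is the underlying net.Conn inside your conn wrapper) renews the deadline before every blocking read, so a client that sends nothing for 10 seconds gets disconnected:
go func() {
	for {
		// Renew the read deadline on every iteration; if the client sends
		// nothing for 10 seconds, the next Read fails with a timeout error.
		if err := c.client.SetReadDeadline(time.Now().Add(10 * time.Second)); err != nil {
			notify <- err
			return
		}
		pb, err := c.Read()
		if err != nil {
			notify <- err // a timeout surfaces as a net.Error with Timeout() == true
			return
		}
		if err := _srv.Handler(_srv.RemoteProvider, c, pb); err != nil {
			notify <- err
			return
		}
	}
}()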
These days I have been working on sending messages via WebSocket, using the Beego framework,
but I get the error http: multiple response.WriteHeader calls.
Where is the problem?
Any tips would be great!
func (this *WsController) Get() {
	fmt.Println("connected")
	handler(this.Ctx.ResponseWriter, this.Ctx.Request, this)
	conn, err := upgrader.Upgrade(this.Ctx.ResponseWriter, this.Ctx.Request, nil)
	if _, ok := err.(websocket.HandshakeError); ok {
		http.Error(this.Ctx.ResponseWriter, "Not a websocket handshake", 400)
		return
	} else if err != nil {
		return
	}
	fmt.Println("connected")
	connection := consumer.New(beego.AppConfig.String("LoggregatorAddress"), &tls.Config{InsecureSkipVerify: true}, nil)
	fmt.Println("===== Tailing messages")
	msgChan, err := connection.Tail(this.Ctx.Input.Param(":appGuid"), this.Ctx.Input.Param(":token"))
	if err != nil {
		fmt.Printf("===== Error tailing: %v\n", err)
	} else {
		for msg := range msgChan {
			// if closeRealTimeLogFlag {
			// 	consumer.Close()
			// 	break
			// }
			if err = conn.WriteMessage(websocket.TextMessage, msg.Message); err != nil {
				fmt.Println(err)
			}
			fmt.Printf("%v \n", msg)
		}
	}
}
Because you write a status code more than once: handler(this.Ctx.ResponseWriter, this.Ctx.Request, this) runs before upgrader.Upgrade, and if it writes anything to the ResponseWriter, the upgrade's own 101 response becomes a second WriteHeader call. Make sure nothing writes to the ResponseWriter before the upgrade.
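A minimal sketch of the fix, keeping the rest of the handler from the question unchanged (assuming handler(...) is what performs the early write): upgrade first, and let nothing touch the ResponseWriter before that.
func (this *WsController) Get() {
	// Nothing may write to this.Ctx.ResponseWriter before Upgrade; the
	// upgrade itself writes the 101 Switching Protocols status line.
	conn, err := upgrader.Upgrade(this.Ctx.ResponseWriter, this.Ctx.Request, nil)
	if _, ok := err.(websocket.HandshakeError); ok {
		http.Error(this.Ctx.ResponseWriter, "Not a websocket handshake", 400)
		return
	} else if err != nil {
		return
	}
	defer conn.Close()
	// ... create the consumer, Tail the log stream, and WriteMessage as before ...
}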