Note: I'm more interested in understanding general Go concepts/patterns, rather than solving this contrived example.
The Go (golang) WebSocket package provides a trivial echo server example, which condenses down to something like this:
func EchoServer(ws *websocket.Conn) { io.Copy(ws, ws) }

func main() {
	http.Handle("/echo", websocket.Handler(EchoServer))
	http.ListenAndServe(":12345", nil)
}
The server handles simultaneous connections, and I'm trying to upgrade it to a basic chat server by echoing the input to all connected clients.
How would I go about providing the EchoServer handler access to each of the open connections?
A quick, almost-functional example to give you an idea:
var c = make(chan *websocket.Conn, 5) // 5 is an arbitrary buffer size
var c2 = make(chan []byte, 5)

func EchoServer(ws *websocket.Conn) {
	buff := make([]byte, 256)
	c <- ws
	for size, e := ws.Read(buff); e == nil; size, e = ws.Read(buff) {
		msg := make([]byte, size)
		copy(msg, buff[:size]) // copy so the next Read doesn't clobber data still queued on c2
		c2 <- msg
	}
	ws.Close()
}
func main() {
	go func() {
		storage := make(map[*websocket.Conn]bool) // the connection registry; a slice would also work
		for {
			select {
			case newC := <-c:
				storage[newC] = true
			case msg := <-c2:
				for v := range storage {
					if _, e := v.Write(msg); e != nil { // assuming the client disconnected on write errors
						delete(storage, v)
					}
				}
			}
		}
	}()
	http.Handle("/echo", websocket.Handler(EchoServer))
	http.ListenAndServe(":12345", nil)
}
This starts a goroutine that listens on two channels: one for new connections to register and one for messages to broadcast to all active connections. The storage here is a map keyed by connection; a slice (vector) would work just as well.
Edit:
Alternatively, you could just store all connections in a global map and write to each from EchoServer. But maps aren't designed to be accessed concurrently.
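If you go that route, the map needs a lock around every access. Here is a minimal sketch of that alternative, assuming the same websocket package as above (conns and connsMu are names introduced here for illustration, and the broadcast loop replaces the plain echo):

var (
	connsMu sync.Mutex
	conns   = make(map[*websocket.Conn]bool)
)

func EchoServer(ws *websocket.Conn) {
	connsMu.Lock()
	conns[ws] = true
	connsMu.Unlock()

	buff := make([]byte, 256)
	for {
		size, err := ws.Read(buff)
		if err != nil {
			break
		}
		// Broadcast while holding the lock so nothing else mutates the map mid-iteration.
		connsMu.Lock()
		for c := range conns {
			if _, err := c.Write(buff[:size]); err != nil {
				delete(conns, c)
			}
		}
		connsMu.Unlock()
	}

	connsMu.Lock()
	delete(conns, ws)
	connsMu.Unlock()
	ws.Close()
}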
Related
To give you context:
The variable elementInput is dynamic; I do not know its exact length ahead of time.
It can be 10 elements, 5, etc.
The *Element channel type is a struct.
My example works, but my problem is that this implementation is still synchronous, because I wait for the channel to return so that I can append it to my result.
Can you please help me call GetElements() concurrently while preserving the order defined in elementInput (based on index)?
elementInput := []string{FB_FRIENDS, BEAUTY_USERS, FITNESS_USERS, COMEDY_USERS}
wg.Add(len(elementInput))
for _, v := range elementInput {
	// create channel
	channel := make(chan *Element)
	// concurrent call
	go GetElements(ctx, page, channel)
	// preserve the order
	var elementRes = *<-channel
	if len(elementRes.List) > 0 {
		el = append(el, elementRes)
	}
}
wg.Wait()
Your implementation is not concurrent: after every goroutine call you wait for its result, and that is what makes it serial.
Below is a sample implementation similar to your flow:
the Concurrency method calls the response function concurrently for each input;
afterwards we loop and collect the response from every call, in order;
the main goroutine sleeps for 2 seconds so the program doesn't exit early.
package main

import (
	"fmt"
	"log"
	"time"
)

func main() {
	Concurrency()
	time.Sleep(2 * time.Second)
}

func response(greeter string, channel chan *string) {
	reply := fmt.Sprintf("hello %s", greeter)
	channel <- &reply
}

func Concurrency() {
	events := []string{"ALICE", "BOB"}
	channels := make([]chan *string, 0)

	// start concurrently
	for _, event := range events {
		channel := make(chan *string)
		go response(event, channel)
		channels = append(channels, channel)
	}

	// collect responses in the same order the channels were created
	response := make([]string, len(channels))
	for i := 0; i < len(channels); i++ {
		response[i] = *<-channels[i]
	}

	// print responses
	log.Printf("channel response %v", response)
}
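Applied to the code in the question, the same pattern might look like the sketch below. It assumes (as in the question) that GetElements(ctx, page, channel) sends exactly one *Element on the channel; the WaitGroup is no longer needed, because receiving from each channel already waits for that call to finish.

elementInput := []string{FB_FRIENDS, BEAUTY_USERS, FITNESS_USERS, COMEDY_USERS}

// Start all calls first, remembering each channel by index.
channels := make([]chan *Element, len(elementInput))
for i := range elementInput {
	channels[i] = make(chan *Element, 1)
	go GetElements(ctx, page, channels[i])
}

// Collect afterwards, in input order, so the order of elementInput is preserved.
var el []Element
for i := range channels {
	elementRes := *<-channels[i]
	if len(elementRes.List) > 0 {
		el = append(el, elementRes)
	}
}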
I am trying to design an HTTP client in Go that is capable of making concurrent API calls to web services and writing some data to a text file.
func getTotalCalls() int {
	reader := bufio.NewReader(os.Stdin)
	...
	return callInt
}
getTotalCalls decides how many calls I want to make; the input comes from the terminal.
func writeToFile(s string, namePrefix string) {
	fileStore := fmt.Sprintf("./data/%s_calls.log", namePrefix)
	...
	defer f.Close()
	if _, err := f.WriteString(s); err != nil {
		log.Println(err)
	}
}
writeToFile writes data to the file; it is called synchronously for each item read from the buffered channel.
func makeRequest(url string, ch chan<- string, id int) {
	var jsonStr = []byte(`{"from": "Saru", "message": "Saru to Discovery. Over!"}`)
	req, err := http.NewRequest("POST", url, bytes.NewBuffer(jsonStr))
	req.Header.Set("Content-Type", "application/json")

	client := &http.Client{}
	start := time.Now()
	resp, err := client.Do(req)
	if err != nil {
		panic(err)
	}
	secs := time.Since(start).Seconds()

	defer resp.Body.Close()
	body, _ := ioutil.ReadAll(resp.Body)
	ch <- fmt.Sprintf("%d, %.2f, %d, %s, %s\n", id, secs, len(body), url, body)
}
This is the function that makes the API call in a goroutine.
And finally, here is the main function, which sends data from the goroutines to a buffered channel; later I range over the buffered channel of strings and write the data to the file.
func main() {
	urlPrefix := os.Getenv("STARCOMM_GO")
	url := urlPrefix + "discovery"
	totalCalls := getTotalCalls()
	queue := make(chan string, totalCalls)

	for i := 1; i <= totalCalls; i++ {
		go makeRequest(url, queue, i)
	}

	for item := range queue {
		fmt.Println(item)
		writeToFile(item, fmt.Sprint(totalCalls))
	}
}
The problem is that at the end of the calls the buffered channel somehow blocks and the program waits forever for all the calls to end. Does someone have a better way to design such a use case? My final goal is to check how much time each call takes for different numbers of concurrent POST requests, to benchmark the API endpoint for 5, 10, 50, 100, 500, 1000 ... concurrent calls.
Something has to close(queue). Otherwise range queue will block. If you want to range queue, you have to ensure that this channel is closed once the final client is done.
However... it's not even clear that you need to range over queue, since you know exactly how many results you'll get - it's totalCalls. You just need to loop that many times, receiving from queue.
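For example, a minimal sketch of that change, keeping everything else from the question as posted:

func main() {
	urlPrefix := os.Getenv("STARCOMM_GO")
	url := urlPrefix + "discovery"
	totalCalls := getTotalCalls()
	queue := make(chan string, totalCalls)

	for i := 1; i <= totalCalls; i++ {
		go makeRequest(url, queue, i)
	}

	// Receive exactly totalCalls results; no close or range needed.
	for i := 0; i < totalCalls; i++ {
		item := <-queue
		fmt.Println(item)
		writeToFile(item, fmt.Sprint(totalCalls))
	}
}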
I believe your use case is similar to the Worker Pools example on gobyexample, so you may want to check that one out. Here's the code from that example:
// In this example we'll look at how to implement
// a _worker pool_ using goroutines and channels.
package main

import (
	"fmt"
	"time"
)

// Here's the worker, of which we'll run several
// concurrent instances. These workers will receive
// work on the `jobs` channel and send the corresponding
// results on `results`. We'll sleep a second per job to
// simulate an expensive task.
func worker(id int, jobs <-chan int, results chan<- int) {
	for j := range jobs {
		fmt.Println("worker", id, "started job", j)
		time.Sleep(time.Second)
		fmt.Println("worker", id, "finished job", j)
		results <- j * 2
	}
}
func main() {
	// In order to use our pool of workers we need to send
	// them work and collect their results. We make 2
	// channels for this.
	const numJobs = 5
	jobs := make(chan int, numJobs)
	results := make(chan int, numJobs)

	// This starts up 3 workers, initially blocked
	// because there are no jobs yet.
	for w := 1; w <= 3; w++ {
		go worker(w, jobs, results)
	}

	// Here we send 5 `jobs` and then `close` that
	// channel to indicate that's all the work we have.
	for j := 1; j <= numJobs; j++ {
		jobs <- j
	}
	close(jobs)

	// Finally we collect all the results of the work.
	// This also ensures that the worker goroutines have
	// finished. An alternative way to wait for multiple
	// goroutines is to use a [WaitGroup](waitgroups).
	for a := 1; a <= numJobs; a++ {
		<-results
	}
}
Your "worker" makes HTTP requests, otherwise it's pretty much the same pattern. Note the for loop at the end which reads from the channel a known number of times.
If you need to limit a number of simultaneous requests, you can use a semaphore implemented with a buffered channel.
func makeRequest(url string, id int) string {
	var jsonStr = []byte(`{"from": "Saru", "message": "Saru to Discovery. Over!"}`)
	req, err := http.NewRequest("POST", url, bytes.NewBuffer(jsonStr))
	req.Header.Set("Content-Type", "application/json")

	client := &http.Client{}
	start := time.Now()
	resp, err := client.Do(req)
	if err != nil {
		panic(err)
	}
	secs := time.Since(start).Seconds()

	defer resp.Body.Close()
	body, _ := ioutil.ReadAll(resp.Body)
	return fmt.Sprintf("%d, %.2f, %d, %s, %s\n", id, secs, len(body), url, body)
}
func main() {
	urlPrefix := os.Getenv("STARCOMM_GO")
	url := urlPrefix + "discovery"
	totalCalls := getTotalCalls()
	concurrencyLimit := 50 // 5, 10, 50, 100, 500, 1000.

	// Declare semaphore as a buffered channel with capacity limited by concurrency level.
	semaphore := make(chan struct{}, concurrencyLimit)

	for i := 1; i <= totalCalls; i++ {
		// Take a slot in the semaphore before proceeding.
		// Once all slots are taken this send blocks until a slot is freed.
		semaphore <- struct{}{}

		go func(i int) { // pass i explicitly so each goroutine gets its own copy
			// Release the slot on job finish.
			defer func() { <-semaphore }()

			item := makeRequest(url, i)
			fmt.Println(item)
			// Beware that writeToFile will be called concurrently and may need some synchronization.
			writeToFile(item, fmt.Sprint(totalCalls))
		}(i)
	}

	// Wait for the jobs to finish by filling the semaphore to full capacity.
	for i := 0; i < cap(semaphore); i++ {
		semaphore <- struct{}{}
	}
	close(semaphore)
}
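One simple way to handle that synchronization: since writeToFile opens and appends to the same file, serialize the calls with a package-level mutex. This is only a sketch; writeMu and writeToFileLocked are names introduced here, and the question's writeToFile stays unchanged.

var writeMu sync.Mutex // introduced for illustration; not in the original code

// writeToFileLocked serializes concurrent writers around the question's writeToFile.
func writeToFileLocked(s string, namePrefix string) {
	writeMu.Lock()
	defer writeMu.Unlock()
	writeToFile(s, namePrefix)
}

The goroutines would then call writeToFileLocked instead of writeToFile.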
I am working on a personal project that will run on a Raspberry Pi with some sensors attached to it.
The function that read from the sensors and the function that handle the socket connection are executed in different goroutines, so, in order to send data on the socket when they are read from the sensors, I create a chan []byte in the main function and pass it to the goroutines.
My problem shows up here: if I do multiple writes in a row, only the first piece of data arrives at the client; the others don't. But if I put a little time.Sleep in the sender function, all the data arrives correctly.
Anyway, here is a simplified version of the little program:
package main

import (
	"net"
	"os"
	"sync"
	"time"
)

const socketName string = "./test_socket"

// create the socket and launch the accept-client routine
func launchServerUDS(ch chan []byte) {
	if err := os.RemoveAll(socketName); err != nil {
		return
	}
	l, err := net.Listen("unix", socketName)
	if err != nil {
		return
	}
	go acceptConnectionRoutine(l, ch)
}
// accept incoming connections on the socket and
// 1) launch the routine to handle commands from the client
// 2) launch the routine to send data when the server reads from the sensors
func acceptConnectionRoutine(l net.Listener, ch chan []byte) {
	defer l.Close()
	for {
		conn, err := l.Accept()
		if err != nil {
			return
		}
		go commandsHandlerRoutine(conn, ch)
		go autoSendRoutine(conn, ch)
	}
}

// routine that sends data to the client
func autoSendRoutine(c net.Conn, ch chan []byte) {
	for {
		data := <-ch
		if string(data) == "exit" {
			return
		}
		c.Write(data)
	}
}
// handle the client connection and call functions to execute commands
func commandsHandlerRoutine(c net.Conn, ch chan []byte) {
	for {
		buf := make([]byte, 1024)
		n, err := c.Read(buf)
		if err != nil {
			ch <- []byte("exit")
			break
		}
		// now, for the sake of simplicity, only echo commands back to the client
		_, err = c.Write(buf[:n])
		if err != nil {
			ch <- []byte("exit")
			break
		}
	}
}
// write to the channel for the auto-send routine so the data is written on the socket
func sendDataToClient(data []byte, ch chan []byte) {
	select {
	case ch <- data:
		// if I put a little sleep here, no problems
		// if I remove the sleep, only data1 is sent to the client
		// time.Sleep(1 * time.Millisecond)
	default:
	}
}

func dummyReadDataRoutine(ch chan []byte) {
	for {
		// read data from the sensors every 5 seconds
		time.Sleep(5 * time.Second)
		// read the first data and send it
		sendDataToClient([]byte("dummy data1\n"), ch)
		// read the second data and send it
		sendDataToClient([]byte("dummy data2\n"), ch)
		// read the third data and send it
		sendDataToClient([]byte("dummy data3\n"), ch)
	}
}
func main() {
	ch := make(chan []byte)
	wg := sync.WaitGroup{}
	wg.Add(2)
	go dummyReadDataRoutine(ch)
	go launchServerUDS(ch)
	wg.Wait()
}
I don't think it's correct to use a sleep to synchronize the writes. How do I fix this while keeping the functions running in different goroutines?
The primary problem was in the function:
func sendDataToClient(data []byte, ch chan []byte) {
	select {
	case ch <- data:
		// if I put a little sleep here, no problems
		// if I remove the sleep, only data1 is sent to the client
		// time.Sleep(1 * time.Millisecond)
	default:
	}
}
If the channel ch isn't ready at the moment the function is called, the default case will be taken and the data will never be sent. In this case you should eliminate the function and send to the channel directly.
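Concretely, that means dropping sendDataToClient and doing a plain blocking send from the reading loop. A sketch using the names from the question; note that each send now waits until autoSendRoutine is ready to receive, so nothing is silently dropped:

func dummyReadDataRoutine(ch chan []byte) {
	for {
		// read data from the sensors every 5 seconds
		time.Sleep(5 * time.Second)
		// a plain send blocks until the other side receives it,
		// so no message is lost the way it was with select/default
		ch <- []byte("dummy data1\n")
		ch <- []byte("dummy data2\n")
		ch <- []byte("dummy data3\n")
	}
}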
Buffering the channel is orthogonal to the problem at hand, and should be done for reasons similar to why you would use buffered IO, i.e. to provide a "buffer" for writes that can't immediately make progress. If the code weren't able to progress without a buffer, adding one only delays possible deadlocks.
You also don't need the exit sentinel value here, as you could range over the channel and close it when you're done. This however still ignores write errors, but again that requires some re-design.
for data := range ch {
	c.Write(data)
}
You should also be careful passing slices over channels, as it's all too easy to lose track of which logical process has ownership and is going to modify the backing array. I can't say from the information given if passing the read+write data over channels improves the architecture, but this is not a pattern you will find in most go networking code.
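For instance, if the sensor reads ever go through a reusable buffer, copy before sending so the receiver owns its own data. This is only a sketch; readSensor is a hypothetical helper that fills buf and returns the byte count:

buf := make([]byte, 64)
for {
	n := readSensor(buf) // hypothetical sensor read into a reusable buffer
	msg := make([]byte, n)
	copy(msg, buf[:n]) // the receiver gets its own copy; buf stays owned by this loop
	ch <- msg
}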
JimB gave a good explanation, so I think his answer is the better one.
I have included my partial solution in this answer.
I thought my code was clear and simplified, but as JimB said it can be made simpler and clearer. I'm leaving my old code posted so people can see the difference and avoid making the same mess I did.
As chmike said, my issue wasn't related to the socket as I thought, but only to the channel: writing on an unbuffered channel was one of the problems. After changing the unbuffered channel to a buffered one, the issue was resolved. Anyway, this code is still not "good code" and can be improved by following the principles JimB laid out in his answer.
So here is the new code:
package main

import (
	"net"
	"os"
	"sync"
	"time"
)

const socketName string = "./test_socket"

// create the socket and accept client connections
func launchServerUDS(ch chan []byte, wg *sync.WaitGroup) {
	defer wg.Done()
	if err := os.RemoveAll(socketName); err != nil {
		return
	}
	l, err := net.Listen("unix", socketName)
	if err != nil {
		return
	}
	defer l.Close()
	for {
		conn, err := l.Accept()
		if err != nil {
			return
		}
		// these goroutines are launched when a client connects:
		// a routine that listens for and echoes commands
		go commandsHandlerRoutine(conn, ch)
		// and a routine to send data read from the sensors to the client
		go autoSendRoutine(conn, ch)
	}
}
// routine that sends data to the client
func autoSendRoutine(c net.Conn, ch chan []byte) {
	for {
		data := <-ch
		if string(data) == "exit" {
			return
		}
		c.Write(data)
	}
}

// handle commands received from the client
func commandsHandlerRoutine(c net.Conn, ch chan []byte) {
	for {
		buf := make([]byte, 1024)
		n, err := c.Read(buf)
		if err != nil {
			// if I can't read, send an exit command to autoSendRoutine and exit
			ch <- []byte("exit")
			break
		}
		// now, for the sake of simplicity, only echo commands back to the client
		_, err = c.Write(buf[:n])
		if err != nil {
			// if I can't write back, send an exit command to autoSendRoutine and exit
			ch <- []byte("exit")
			break
		}
	}
}
// this goroutine reads from the sensors and writes to the channel, so data is sent
// to the client if a client is connected
func dummyReadDataRoutine(ch chan []byte, wg *sync.WaitGroup) {
	x := 0
	for x < 100 {
		// read data from the sensors every second
		time.Sleep(1 * time.Second)
		// read the first data and send it
		ch <- []byte("data1\n")
		// read the second data and send it
		ch <- []byte("data2\n")
		// read the third data and send it
		ch <- []byte("data3\n")
		x++
	}
	wg.Done()
}
func main() {
	// create a BUFFERED CHANNEL
	ch := make(chan []byte, 1)
	wg := sync.WaitGroup{}
	wg.Add(2)
	// launch the goroutines that handle the socket connections
	// and read data from the sensors
	go dummyReadDataRoutine(ch, &wg)
	go launchServerUDS(ch, &wg)
	wg.Wait()
}
I have been learning Golang by moving all my penetration-testing tools to it; since I like to write my own tools, this is a perfect way to learn a new language. In this particular case I think something is wrong with the way I am using channels. I know for a fact that it is not finishing the port mapping, because the other tools I wrote in Ruby find all the open ports while my Go tool does not. Can someone please help me understand what I'm doing wrong? Are channels the right way to go about doing this?
package main

import (
	"fmt"
	"log"
	"net"
	"strconv"
	"time"
)

func portScan(TargetToScan string, PortStart int, PortEnd int, openPorts []int) []int {
	activeThreads := 0
	doneChannel := make(chan bool)

	for port := PortStart; port <= PortEnd; port++ {
		go grabBanner(TargetToScan, port, doneChannel)
		activeThreads++
	}

	// Wait for all threads to finish
	for activeThreads > 0 {
		<-doneChannel
		activeThreads--
	}
	return openPorts
}
func grabBanner(ip string, port int, doneChannel chan bool) {
	connection, err := net.DialTimeout(
		"tcp",
		ip+":"+strconv.Itoa(port),
		time.Second*10)
	if err != nil {
		doneChannel <- true
		return
	}

	// append open port to slice
	openPorts = append(openPorts, port)
	fmt.Printf("+ Port %d: Open\n", port)

	// See if server offers anything to read
	buffer := make([]byte, 4096)
	connection.SetReadDeadline(time.Now().Add(time.Second * 5))
	// Set timeout
	numBytesRead, err := connection.Read(buffer)
	if err != nil {
		doneChannel <- true
		return
	}
	log.Printf("+ Banner of port %d\n%s\n", port,
		buffer[0:numBytesRead])

	// here we add to map port and banner
	targetPorts[port] = string(buffer[0:numBytesRead])

	doneChannel <- true
	return
}
Note: it seems to find the first bunch of ports but not the higher-numbered ones, e.g. 8080, though it usually does get 80 and 443...
So I suspect something is timing out, or something odd is going on.
There are lots of bad hacks in this code, mostly because I'm learning and searching a lot for how to do things, so feel free to give tips and even changes/pull requests. Thanks.
Your code has a few problems. In grabBanner you appear to be referencing openPorts but it is not defined anywhere. You're probably referencing a global variable and this append operation is not going to be thread safe. In addition to your thread safety issues you also are likely exhausting file descriptor limits. Perhaps you should limit the amount of concurrent work by doing something like this:
package main

import (
	"fmt"
	"net"
	"strconv"
	"sync"
	"time"
)

func main() {
	fmt.Println(portScan("127.0.0.1", 1, 65535))
}

// startBannerGrabbers spins up a handful of async workers
func startBannerGrabbers(num int, target string, portsIn <-chan int) <-chan int {
	portsOut := make(chan int)
	var wg sync.WaitGroup
	wg.Add(num)

	for i := 0; i < num; i++ {
		go func() {
			for p := range portsIn {
				if grabBanner(target, p) {
					portsOut <- p
				}
			}
			wg.Done()
		}()
	}

	go func() {
		wg.Wait()
		close(portsOut)
	}()

	return portsOut
}
func portScan(targetToScan string, portStart int, portEnd int) []int {
	ports := make(chan int)

	go func() {
		for port := portStart; port <= portEnd; port++ {
			ports <- port
		}
		close(ports)
	}()

	resultChan := startBannerGrabbers(16, targetToScan, ports)

	var openPorts []int
	for port := range resultChan {
		openPorts = append(openPorts, port)
	}

	return openPorts
}

var targetPorts = make(map[int]string)
func grabBanner(ip string, port int) bool {
	connection, err := net.DialTimeout(
		"tcp",
		ip+":"+strconv.Itoa(port),
		time.Second*20)
	if err != nil {
		return false
	}
	defer connection.Close() // you should close this!

	buffer := make([]byte, 4096)
	connection.SetReadDeadline(time.Now().Add(time.Second * 5))
	numBytesRead, err := connection.Read(buffer)
	if err != nil {
		return true
	}

	// here we add to map port and banner
	// ******* MAPS ARE NOT SAFE FOR CONCURRENT WRITERS *******
	// ******************* DO NOT DO THIS *******************
	targetPorts[port] = string(buffer[0:numBytesRead])

	return true
}
Your usage of var open bool and constantly setting it, then returning it is both unnecessary and non-idiomatic. In addition, checking if someBoolVar != false is a non-idiomatic and verbose way of writing if someBoolVar.
Additionally, maps are not safe for concurrent access, but your grabBanner function writes to a map from many goroutines concurrently. Please stop mutating global state inside of functions. Return values instead.
Here's an updated explanation of what's going on. First we make a channel that we will push port numbers onto for our workers to process. Then we start a go-routine that will write ports in the range onto that channel as fast as it can. Once we've written every port available onto that channel we close the channel so that our readers will be able to exit.
Then we call a method that starts a configurable number of bannerGrabber workers. We pass the ip address and the channel to read candidate port numbers off of. This function spawns num goroutines, each ranging over the portsIn channel that was passed, calls the grab banner function and then pushes the port onto the outbound channel if it was successful. Finally, we start one more go routine that waits on the sync.WaitGroup to finish so we can close the outgoing (result) channel once all of the workers are done.
Back in the portScan function, we receive the outbound channel as the return value from the startBannerGrabbers function. We then range over the result channel that was returned to us, append all the open ports to the list, and return the result.
I also changed some stylistic things, such as down-casing your function argument names.
At the risk of sounding like a broken record, I am going to emphasize the following again: stop mutating global state. Instead of setting targetPorts you should accumulate these values in a concurrency-safe manner and return them to the caller for use. Your use of globals here appears to be an ill-thought-out mixture of convenience and not having considered how to solve the problem without globals.
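One way to do that with the worker structure above is to have each worker send back a small result value instead of touching a shared map. This is a sketch only; the scanResult type and the two-value grabBanner signature are variations introduced here, not part of the answer above.

// Hypothetical variation: results flow back over a channel instead of into a global map.
type scanResult struct {
	Port   int
	Banner string
}

// grabBanner now reports the banner to its caller instead of writing shared state.
func grabBanner(ip string, port int) (string, bool) {
	conn, err := net.DialTimeout("tcp", ip+":"+strconv.Itoa(port), time.Second*20)
	if err != nil {
		return "", false
	}
	defer conn.Close()

	buffer := make([]byte, 4096)
	conn.SetReadDeadline(time.Now().Add(time.Second * 5))
	n, err := conn.Read(buffer)
	if err != nil {
		return "", true // port is open, but nothing was read
	}
	return string(buffer[:n]), true
}

func startBannerGrabbers(num int, target string, portsIn <-chan int) <-chan scanResult {
	out := make(chan scanResult)
	var wg sync.WaitGroup
	wg.Add(num)
	for i := 0; i < num; i++ {
		go func() {
			defer wg.Done()
			for p := range portsIn {
				if banner, open := grabBanner(target, p); open {
					out <- scanResult{Port: p, Banner: banner}
				}
			}
		}()
	}
	go func() {
		wg.Wait()
		close(out)
	}()
	return out
}

portScan would then collect both pieces locally, e.g. openPorts = append(openPorts, r.Port) and banners[r.Port] = r.Banner, where banners is a local map touched by only one goroutine.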
I'm looking for a solution to multiplex some channel output in go.
I have a source of data which is a read from an io.Reader that I send to a single channel. On the other side I have a websocket request handler that reads from the channel. Now it happens that two clients create a websocket connection, both reading from the same channel but each of them only getting a part of the messages.
Code example (simplified):
func (b *Bootloader) ReadLog() (<-chan []byte, error) {
	if b.logCh != nil {
		logrus.Warn("ReadLog called while channel already exists!")
		return b.logCh, nil // This is where we get problems
	}
	b.logCh = make(chan []byte, 0)

	go func() {
		buf := make([]byte, 1024)
		for {
			n, err := b.p.Read(buf)
			if err == nil {
				msg := make([]byte, n)
				copy(msg, buf[:n])
				b.logCh <- msg
			} else {
				break
			}
		}
		close(b.logCh)
		b.logCh = nil
	}()

	return b.logCh, nil
}
Now when ReadLog() is called twice, the second call just returns the channel created in the first call, which leads to the problem explained above.
The question is: How to do proper multiplexing?
Is it better/easier/more idiomatic to take care of the multiplexing on the sending or the receiving side?
Should I hide the channel from the receiver and work with callbacks?
I'm a little stuck at the moment. Any hints are welcome.
Multiplexing is pretty straightforward: make a slice of the channels you want to multiplex to, then start a goroutine that reads from the original channel and copies each message to every channel in the slice:
// Really this should be in Bootloader but this is just an example
var (
	consumers []chan []byte
	startOnce sync.Once
)

func (b *Bootloader) multiplex() {
	// We'll use a sync.Once to make sure we don't start a bunch of these.
	startOnce.Do(func() {
		go func() {
			// Every time a message comes over the channel...
			for v := range b.logCh {
				// Loop over the consumers...
				for _, cons := range consumers {
					// Send each one the message
					cons <- v
				}
			}
		}()
	})
}
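The sketch above leaves open how consumers is populated and how the fan-out loop stays safe while clients come and go. A hypothetical registration helper might look like this (consumersMu and addConsumer are names introduced here; the loop in multiplex would need to hold the same lock or fan out over a snapshot of the slice):

var consumersMu sync.Mutex

// addConsumer is a hypothetical helper each websocket handler would call to subscribe.
func addConsumer() chan []byte {
	ch := make(chan []byte, 16) // small buffer so one slow client doesn't stall the fan-out immediately
	consumersMu.Lock()
	consumers = append(consumers, ch)
	consumersMu.Unlock()
	return ch
}

// Inside multiplex, take a snapshot under the lock before sending:
//	consumersMu.Lock()
//	snapshot := append([]chan []byte(nil), consumers...)
//	consumersMu.Unlock()
//	for _, cons := range snapshot {
//		cons <- v
//	}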