I am trying to automate my recon tool using Go. So far I can run two basic tools in Kali (Nikto and whois). Now I want them to execute in parallel rather than waiting for one function to finish. After reading a bit, I learned this can be achieved with goroutines. But my code doesn't seem to work:
package main

import (
    "fmt"
    "log"
    "os"
    "os/exec"
)

var url string

func nikto() {
    cmd := exec.Command("nikto", "-h", url)
    cmd.Stdout = os.Stdout
    err := cmd.Run()
    if err != nil {
        log.Fatal(err)
    }
}

func whois() {
    cmd := exec.Command("whois", "google.co")
    cmd.Stdout = os.Stdout
    err := cmd.Run()
    if err != nil {
        log.Fatal(err)
    }
}

func main() {
    fmt.Printf("Please input URL")
    fmt.Scanln(&url)
    nikto()
    go whois()
}
I do understand that here go whois() would only run until main() returns, but I still can't get them both to execute in parallel.
If I understood your question correctly, you want to execute both nikto() and whois() in parallel and wait until both have returned. To wait until a set of goroutines has finished, sync.WaitGroup is a good way to achieve this. For you, it could look something like this:
func main() {
    fmt.Printf("Please input URL")
    fmt.Scanln(&url)

    var wg sync.WaitGroup
    wg.Add(2)

    go func() {
        defer wg.Done()
        nikto()
    }()

    go func() {
        defer wg.Done()
        whois()
    }()

    wg.Wait()
}
Here wg.Add(2) tells the WaitGroup that we will wait for 2 goroutines.
Then your two functions are called inside small wrapper functions, which also call wg.Done() once each function is done. This tells the WaitGroup that the function finished. The defer keyword tells Go to execute that call once the surrounding function returns. Also notice the go keyword before the two calls to the wrapper functions, which causes the execution to happen in 2 separate goroutines.
Finally, once both goroutines have been started, the call to wg.Wait() inside the main function blocks until wg.Done() has been called twice, which will happen once both nikto() and whois() have finished.
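For completeness, here is a rough sketch of how the whole program from the question could look with this change applied (the sync import is the only addition; nikto() and whois() are unchanged from the question):

package main

import (
    "fmt"
    "log"
    "os"
    "os/exec"
    "sync"
)

var url string

func nikto() {
    cmd := exec.Command("nikto", "-h", url)
    cmd.Stdout = os.Stdout
    if err := cmd.Run(); err != nil {
        log.Fatal(err)
    }
}

func whois() {
    cmd := exec.Command("whois", "google.co")
    cmd.Stdout = os.Stdout
    if err := cmd.Run(); err != nil {
        log.Fatal(err)
    }
}

func main() {
    fmt.Printf("Please input URL")
    fmt.Scanln(&url)

    var wg sync.WaitGroup
    wg.Add(2)

    go func() {
        defer wg.Done()
        nikto()
    }()
    go func() {
        defer wg.Done()
        whois()
    }()

    wg.Wait()
}

One caveat worth noting: nikto() and whois() still call log.Fatal on error, which exits the whole program from inside a goroutine and skips the deferred wg.Done(); returning the error instead would let the other command finish.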
Related
I got this code from someone on github and I am trying to play around with it to understand concurrency.
package main

import (
    "bufio"
    "fmt"
    "os"
    "sync"
    "time"
)

var wg sync.WaitGroup

func sad(url string) string {
    fmt.Printf("gonna sleep a bit\n")
    time.Sleep(2 * time.Second)
    return url + " added stuff"
}

func main() {
    sc := bufio.NewScanner(os.Stdin)
    urls := make(chan string)
    results := make(chan string)

    for i := 0; i < 20; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for url := range urls {
                n := sad(url)
                results <- n
            }
        }()
    }

    for sc.Scan() {
        url := sc.Text()
        urls <- url
    }

    for result := range results {
        fmt.Printf("%s arrived\n", result)
    }

    wg.Wait()
    close(urls)
    close(results)
}
I have a few questions:
Why does this code give me a deadlock?
Why does the for loop that starts the goroutines come before the code that reads input from the user? Do the goroutines wait until something is passed into the urls channel and only then start doing work? I don't get this because it's not sequential; why would reading the user's input first, then putting every input into the urls channel, and only then starting the goroutines be considered wrong?
Inside the for loop there is another loop iterating over the urls channel. Does each goroutine deal with exactly one line of input, or does one goroutine handle multiple lines at once? How does any of this work?
Am I gathering the output correctly here?
Mostly you're doing things correctly, but have things a little out of order. The for sc.Scan() loop will continue until the Scanner is done, and the for result := range results loop will never run, so no goroutine ('main' in this case) will be able to receive from results. When running your example, I started the for result := range results loop before for sc.Scan(), and also in its own goroutine; otherwise for sc.Scan() would never be reached.
go func() {
    for result := range results {
        fmt.Printf("%s arrived\n", result)
    }
}()

for sc.Scan() {
    url := sc.Text()
    urls <- url
}
Also, because you run wg.Wait() before close(urls), the main goroutine is left blocked waiting for the 20 sad() goroutines to finish. But they can't finish until close(urls) is called. So just close that channel before waiting for the WaitGroup.
close(urls)
wg.Wait()
close(results)
The for-loop creates 20 goroutines, all waiting for input from the urls channel. When someone writes into this channel, one of the goroutines will pick it up and work on it. This is a typical worker-pool implementation.
Then the scanner reads input line by line and sends it to the urls channel, where one of the goroutines will pick it up and write the response to the results channel. At this point, there are no other goroutines reading from the results channel, so this will block.
As the scanner reads URLs, the other goroutines will pick them up and block as well. So if the scanner reads more than 20 URLs, it will deadlock because all goroutines will be waiting for a reader.
If there are fewer than 20 URLs, the scanner for-loop will end and the results will be read. However, that will eventually deadlock as well, because the for-loop over results only terminates when the channel is closed, and there is no one there to close it.
To fix this, first, close the urls channel right after you finish reading. That will release all the for-loops in the goroutines. Then you should put the for-loop reading from the results channel into a goroutine, so you can call wg.Wait while results are being processed. After wg.Wait, you can close the results channel.
This does not guarantee that all items in the results channel will be read. The program may terminate before all messages are processed, so use a third channel which you close at the end of the goroutine that reads from the results channel. That is:
done := make(chan struct{})
go func() {
    defer close(done)
    for result := range results {
        fmt.Printf("%s arrived\n", result)
    }
}()

wg.Wait()
close(results)
<-done
I am not super happy with the previous answers, so here is a solution based on the documented behavior in the Go tour, the Go doc, and the specifications.
package main

import (
    "bufio"
    "fmt"
    "strings"
    "sync"
    "time"
)

var wg sync.WaitGroup

func sad(url string) string {
    fmt.Printf("gonna sleep a bit\n")
    time.Sleep(2 * time.Millisecond)
    return url + " added stuff"
}

func main() {
    // sc := bufio.NewScanner(os.Stdin)
    sc := bufio.NewScanner(strings.NewReader(strings.Repeat("blah blah\n", 15)))
    urls := make(chan string)
    results := make(chan string)

    for i := 0; i < 20; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for url := range urls {
                n := sad(url)
                results <- n
            }
        }()
    }

    // results is written to by many goroutines;
    // we must wait for them to finish before closing results,
    // but we don't want to block here, so put that into a goroutine.
    go func() {
        wg.Wait()
        close(results)
    }()

    go func() {
        for sc.Scan() {
            url := sc.Text()
            urls <- url
        }
        close(urls) // done sending on the channel, so close it right away.
    }()

    for result := range results {
        fmt.Printf("%s arrived\n", result)
    } // the program will finish when it gets out of this loop.
    // It will get out of this loop because you have made sure the results channel is closed.
}
I am running the command exec.Command("cf api https://something.com/") and the response takes some time. But when this command is executed there is no wait; it runs and the code moves on immediately. I need to wait for some seconds or until the output has been received. How can I achieve this?
func TestCMDExex(t *testing.T) {
    expectedText := "Success"
    cmd := exec.Command("cf api https://something.com/")
    cmd.Dir = "/root//"
    out, err := cmd.Output()
    if err != nil {
        t.Fail()
    }
    assert.Contains(t, string(out), expectedText)
}
First: the correct way to create the cmd is:
cmd := exec.Command("cf", "api", "https://something.com/")
Every argument to the child program must be a separate string. This way you can also pass arguments that contain spaces in them. For instance, executing the program with:
cmd := exec.Command("cf", "api https://something.com/")
will pass one command line argument to cf, which is "api https://something.com/", whereas passing two strings will pass two arguments "api" and "https://something.com/".
In your original code, you are trying to execute a program whose name is "cf api https://something.com/".
Then you can run it and get the output:
out, err := cmd.Output()
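Putting this together, the test from the question could look roughly like this (a sketch; it keeps the assert helper and the paths from the question):

func TestCMDExex(t *testing.T) {
    expectedText := "Success"

    // Each argument is passed as its own string.
    cmd := exec.Command("cf", "api", "https://something.com/")
    cmd.Dir = "/root//"

    // Output starts the command and waits for it to finish,
    // so the test only proceeds once the output is available.
    out, err := cmd.Output()
    if err != nil {
        t.Fatal(err)
    }
    assert.Contains(t, string(out), expectedText)
}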
This can be solved with a goroutine, a channel and the select statement. The sample code below also does error handling:
func main() {
    type output struct {
        out []byte
        err error
    }

    ch := make(chan output)

    go func() {
        // cmd := exec.Command("sleep", "1")
        // cmd := exec.Command("sleep", "5")
        cmd := exec.Command("false")
        out, err := cmd.CombinedOutput()
        ch <- output{out, err}
    }()

    select {
    case <-time.After(2 * time.Second):
        fmt.Println("timed out")
    case x := <-ch:
        fmt.Printf("program done; out: %q\n", string(x.out))
        if x.err != nil {
            fmt.Printf("program errored: %s\n", x.err)
        }
    }
}
By choosing one of the 3 options in exec.Command(), you can see the code behaving in the 3 possible ways: timed out, normal subprocess termination, errored subprocess termination.
As usual when using goroutines, care must be taken to ensure they terminate, to avoid resource leaks.
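One simple way to guarantee that here (a detail not in the original sample): make the channel buffered, so the goroutine's send succeeds even if the select has already taken the timeout branch and nobody will ever receive.

ch := make(chan output, 1) // capacity 1: the goroutine can send its result and exit even after a timeout

With the unbuffered channel above, a timed-out command would otherwise leave the goroutine blocked forever on ch <- output{out, err}.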
Note also that if the executed subprocess is interactive, or if it prints its progress to stdout and it is important to see that output while it is happening, then it is better to use cmd.Run(), remove the struct, and report only the error in the channel.
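A sketch of that variant, under the same assumptions as the sample above (only the error travels through the channel, while the subprocess writes directly to the terminal):

ch := make(chan error, 1)

go func() {
    cmd := exec.Command("false") // or whichever interactive/verbose command you run
    cmd.Stdout = os.Stdout       // stream the subprocess output as it happens
    cmd.Stderr = os.Stderr
    ch <- cmd.Run()
}()

select {
case <-time.After(2 * time.Second):
    fmt.Println("timed out")
case err := <-ch:
    if err != nil {
        fmt.Printf("program errored: %s\n", err)
    } else {
        fmt.Println("program done")
    }
}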
You can use a sync.WaitGroup. In the example below, worker is the function run in every goroutine; note that a WaitGroup must be passed to functions by pointer. On return, each worker notifies the WaitGroup that it's done via defer wg.Done(), and the Sleep just simulates an expensive task (remove it in your case). The WaitGroup in main is used to wait for all the goroutines launched there to finish: wg.Wait() blocks until the WaitGroup counter goes back to 0, i.e. all the workers have notified that they're done.
package main

import (
    "fmt"
    "sync"
    "time"
)

func worker(id int, wg *sync.WaitGroup) {
    defer wg.Done()
    fmt.Printf("Worker %d starting\n", id)
    time.Sleep(time.Second)
    fmt.Printf("Worker %d done\n", id)
}

func main() {
    var wg sync.WaitGroup
    for i := 1; i <= 5; i++ {
        wg.Add(1)
        go worker(i, &wg)
    }
    wg.Wait()
}
The sample code:
package main

import (
    "io"
    "os"
    "os/signal"
    "sync"
    "syscall"
)

func main() {
    sigintCh := make(chan os.Signal, 1)
    signal.Notify(sigintCh, syscall.SIGINT, syscall.SIGTERM)

    var wg sync.WaitGroup
    wg.Add(1)
    go func() {
        defer wg.Done()
        io.Copy(os.Stdout, os.Stdin)
    }()

    <-sigintCh
    os.Stdin.Close()
    wg.Wait()
}
If I run this sample and try to interrupt it with ^C, it keeps waiting for input and stops only after something is sent to stdin (e.g. just pressing Enter).
I expected that closing Stdin would act like sending EOF, but it doesn't work.
Closing os.Stdin will cause io.Copy to return with the error "file already closed" the next time it reads from it (after CTRL-C, try pressing Enter).
As explained in the File.Close docs:
Close closes the File, rendering it unusable for I/O.
You cannot force an EOF return from os.Stdin by closing it (or any other way). Instead, you would need to either wrap os.Stdin and implement your own Read method that conditionally returns EOF, or read a limited number of bytes in a loop.
You can see some more discussion and possible workarounds on this golang-nuts thread.
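For completeness, a minimal sketch of the wrapping approach mentioned above (stopReader and its stop channel are hypothetical names, not from the linked thread); like the context-based reader in the next answer, it only takes effect on the next Read call, not during one that is already blocked:

// stopReader wraps an io.Reader and starts reporting io.EOF once stop is closed.
type stopReader struct {
    r    io.Reader
    stop <-chan struct{}
}

func (s *stopReader) Read(p []byte) (int, error) {
    select {
    case <-s.stop:
        return 0, io.EOF
    default:
        return s.r.Read(p)
    }
}

In the program above, you would then copy from &stopReader{r: os.Stdin, stop: stopCh} and close stopCh on SIGINT instead of closing os.Stdin (stopCh being a chan struct{} you create alongside sigintCh).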
You can interrupt an io.Copy without closing the source side - by passing an io.Reader that has been wrapped with logic that honors a cancelable context.Context, as outlined here.
Modify your above goroutine like so:
ctx, cancel := context.WithCancel(context.Background())

go func() {
    defer wg.Done()
    r := NewReader(ctx, os.Stdin) // wrap io.Reader to make it context-aware
    _, err := io.Copy(os.Stdout, r)
    if err != nil {
        // context.Canceled error if interrupted
    }
}()

<-sigintCh
cancel() // canceling context will interrupt io.Copy operation
You can import NewReader from an external package like github.com/jbenet/go-context/io or inline a snippet from the blog link above:
type readerCtx struct {
    ctx context.Context
    r   io.Reader
}

func (r *readerCtx) Read(p []byte) (n int, err error) {
    if err := r.ctx.Err(); err != nil {
        return 0, err
    }
    return r.r.Read(p)
}

// NewReader gets a context-aware io.Reader.
func NewReader(ctx context.Context, r io.Reader) io.Reader {
    return &readerCtx{ctx: ctx, r: r}
}
I wrote a short script to write a file concurrently.
One goroutine is supposed to write strings to a file while the others are supposed to send the messages through a channel to it.
However, for some really strange reason the file is created but no message is added to it through the channel.
package main

import (
    "fmt"
    "os"
    "sync"
)

var wg sync.WaitGroup
var output = make(chan string)

func concurrent(n uint64) {
    output <- fmt.Sprint(n)
    defer wg.Done()
}

func printOutput() {
    f, err := os.OpenFile("output.txt", os.O_CREATE|os.O_RDWR|os.O_APPEND, 0666)
    if err != nil {
        panic(err)
    }
    defer f.Close()
    for msg := range output {
        f.WriteString(msg + "\n")
    }
}

func main() {
    wg.Add(2)
    go concurrent(1)
    go concurrent(2)
    wg.Wait()
    close(output)
    printOutput()
}
The printOutput() goroutine is executed completely; if I tried to write something after the for loop, it would actually get into the file. So this leads me to think that range output might be null.
You need to have something receiving from the output channel, as sending blocks until something takes what you put on it.
Not the only/best way to do it, but: I moved printOutput() above the other funcs and ran it as a goroutine, and that prevents the deadlock.
package main

import (
    "fmt"
    "os"
    "sync"
)

var wg sync.WaitGroup
var output = make(chan string)

func concurrent(n uint64) {
    output <- fmt.Sprint(n)
    defer wg.Done()
}

func printOutput() {
    f, err := os.OpenFile("output.txt", os.O_CREATE|os.O_RDWR|os.O_APPEND, 0666)
    if err != nil {
        panic(err)
    }
    defer f.Close()
    for msg := range output {
        f.WriteString(msg + "\n")
    }
}

func main() {
    go printOutput()
    wg.Add(2)
    go concurrent(1)
    go concurrent(2)
    wg.Wait()
    close(output)
}
One of the reasons you get no output is that unbuffered channels block on both send and receive.
Given your flow, the code snippet below will never reach wg.Done(), as the send on the channel is waiting for a receiving end to pull the data out. This is a typical deadlock example.
func concurrent(n uint64) {
    output <- fmt.Sprint(n) // go routine is blocked until data in channel is fetched.
    defer wg.Done()
}
Let's examine the main func:
func main() {
    wg.Add(2)
    go concurrent(1)
    go concurrent(2)
    wg.Wait() // the main thread will be waiting indefinitely here.
    close(output)
    printOutput()
}
My take on the problem:
package main

import (
    "fmt"
    "os"
    "sync"
)

var wg sync.WaitGroup
var output = make(chan string)
var donePrinting = make(chan struct{})

func concurrent(n uint) {
    defer wg.Done() // It only makes sense to defer
    // wg.Done() before you do something
    // (like sending a string to the output channel).
    output <- fmt.Sprint(n)
}

func printOutput() {
    f, err := os.OpenFile("output.txt", os.O_CREATE|os.O_RDWR|os.O_APPEND, 0666)
    if err != nil {
        panic(err)
    }
    defer f.Close()
    for msg := range output {
        f.WriteString(msg + "\n")
    }
    donePrinting <- struct{}{}
}

func main() {
    wg.Add(2)
    go printOutput()
    go concurrent(1)
    go concurrent(2)
    wg.Wait()
    close(output)
    <-donePrinting
}
Each concurrent goroutine decrements the wait-group when it finishes.
After the two concurrent goroutines finish, wg.Wait() unblocks, and the next instruction, close(output), is executed. You have to wait for the two goroutines to finish before closing the channel. If, instead, you try the following:
go printOutput()
go concurrent(1)
go concurrent(2)
close(output)
wg.Wait()
you could end up with the close(output) instruction executing before either of the concurrent goroutines runs. If the channel closes before the concurrent goroutines send on it, they will panic at runtime (while trying to send on a closed channel).
If, then, you don't wait for the printOutput() goroutine to finish, you could actually quit main() before printOutput() has had the chance to finish writing to its file.
Because I want to wait for the printOutput() goroutine to finish before I quit the program, I also created a separate channel just to signal that printOutput() is done.
The <-donePrinting instruction blocks until main receives something over the donePrinting channel.
Once main receives anything (even the empty structure that printOutput() sends), it will unblock and run to conclusion.
https://play.golang.org/p/nXJoYLI758m
I'm having trouble figuring out how to correctly use sync.Cond. From what I can tell, a race condition exists between locking the Locker and invoking the condition's Wait method. This example adds an artificial delay between the two lines in the main goroutine to simulate the race condition:
package main

import (
    "sync"
    "time"
)

func main() {
    m := sync.Mutex{}
    c := sync.NewCond(&m)
    go func() {
        time.Sleep(1 * time.Second)
        c.Broadcast()
    }()
    m.Lock()
    time.Sleep(2 * time.Second)
    c.Wait()
}
[Run on the Go Playground]
This causes a fatal deadlock error:
fatal error: all goroutines are asleep - deadlock!
goroutine 1 [semacquire]:
sync.runtime_Syncsemacquire(0x10330208, 0x1)
/usr/local/go/src/runtime/sema.go:241 +0x2e0
sync.(*Cond).Wait(0x10330200, 0x0)
/usr/local/go/src/sync/cond.go:63 +0xe0
main.main()
/tmp/sandbox301865429/main.go:17 +0x1a0
What am I doing wrong? How do I avoid this apparent race condition? Is there a better synchronization construct I should be using?
Edit: I realize I should have better explained the problem I'm trying to solve here. I have a long-running goroutine that downloads a large file and a number of other goroutines that need access to the HTTP headers when they are available. This problem is harder than it sounds.
I can't use channels since only one goroutine would then receive the value. And some of the other goroutines would be trying to retrieve the headers long after they are already available.
The downloader goroutine could simply store the HTTP headers in a variable and use a mutex to safeguard access to them. However, this doesn't provide a way for the other goroutines to "wait" for them to become available.
I had thought that both a sync.Mutex and sync.Cond together could accomplish this goal but it appears that this is not possible.
The OP answered their own question, but did not directly answer the original one, so I am going to post how to correctly use sync.Cond.
You do not really need sync.Cond if you have one goroutine for each write and read - a single sync.Mutex would suffice to communicate between them. sync.Cond can be useful in situations where multiple readers wait for a shared resource to become available.
var sharedRsc = make(map[string]interface{})

func main() {
    var wg sync.WaitGroup
    wg.Add(2)
    m := sync.Mutex{}
    c := sync.NewCond(&m)

    go func() {
        // this goroutine waits for changes to sharedRsc
        c.L.Lock()
        for len(sharedRsc) == 0 {
            c.Wait()
        }
        fmt.Println(sharedRsc["rsc1"])
        c.L.Unlock()
        wg.Done()
    }()

    go func() {
        // this goroutine waits for changes to sharedRsc
        c.L.Lock()
        for len(sharedRsc) == 0 {
            c.Wait()
        }
        fmt.Println(sharedRsc["rsc2"])
        c.L.Unlock()
        wg.Done()
    }()

    // this one writes changes to sharedRsc
    c.L.Lock()
    sharedRsc["rsc1"] = "foo"
    sharedRsc["rsc2"] = "bar"
    c.Broadcast()
    c.L.Unlock()
    wg.Wait()
}
Playground
Having said that, using channels is still the recommended way to pass data around, if the situation permits.
Note: sync.WaitGroup here is only used to wait for the goroutines to complete their executions.
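Since channels are recommended when possible, here is a minimal channel-based sketch of the same idea (my own illustration, not from the answer above, assuming the same fmt and sync imports): closing a signalling channel wakes every waiting goroutine, and the map is populated before the close, so the readers observe the values.

var sharedRsc = make(map[string]interface{})

func main() {
    var wg sync.WaitGroup
    wg.Add(2)
    ready := make(chan struct{}) // closed once sharedRsc is populated

    for _, key := range []string{"rsc1", "rsc2"} {
        go func(key string) {
            defer wg.Done()
            <-ready // wait for the writer
            fmt.Println(sharedRsc[key])
        }(key)
    }

    sharedRsc["rsc1"] = "foo"
    sharedRsc["rsc2"] = "bar"
    close(ready) // wake all waiting goroutines
    wg.Wait()
}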
You need to make sure that c.Broadcast is called after your call to c.Wait. The correct version of your program would be:
package main

import (
    "sync"
)

func main() {
    m := &sync.Mutex{}
    c := sync.NewCond(m)
    m.Lock()
    go func() {
        m.Lock() // Wait for c.Wait()
        c.Broadcast()
        m.Unlock()
    }()
    c.Wait() // Unlocks m, waits, then locks m again
    m.Unlock()
}
https://play.golang.org/p/O1r8v8yW6h
package main

import (
    "fmt"
    "sync"
    "time"
)

func main() {
    m := sync.Mutex{}
    m.Lock() // main goroutine is the owner of the lock
    c := sync.NewCond(&m)

    go func() {
        m.Lock() // obtain the lock
        defer m.Unlock()
        fmt.Println("3. goroutine is owner of lock")
        time.Sleep(2 * time.Second) // long computation - because you are the owner, you can change the state variable(s)
        c.Broadcast()               // state has been changed, publish it to waiting goroutines
        fmt.Println("4. goroutine will release lock soon (deferred Unlock)")
    }()

    fmt.Println("1. main goroutine is owner of lock")
    time.Sleep(1 * time.Second) // initialization
    fmt.Println("2. main goroutine is still locked")
    c.Wait() // Wait temporarily releases the mutex while waiting and gives other goroutines a chance to change the state.
    // Because you don't know whether the state is the one you are waiting for, this is usually called in a loop.
    m.Unlock()
    fmt.Println("Done")
}
http://play.golang.org/p/fBBwoL7_pm
Looks like you c.Wait() for a Broadcast that will never happen with your time intervals.
With
time.Sleep(3 * time.Second) // Broadcast after any Wait for it
c.Broadcast()
your snippet seems to work: http://play.golang.org/p/OE8aP4i6gY. Or am I missing something that you are trying to achieve?
I finally discovered a way to do this and it doesn't involve sync.Cond at all - just the mutex.
type Task struct {
    m       sync.Mutex
    headers http.Header
}

func NewTask() *Task {
    t := &Task{}
    t.m.Lock()
    go func() {
        defer t.m.Unlock()
        // ...do stuff...
    }()
    return t
}

func (t *Task) WaitFor() http.Header {
    t.m.Lock()
    defer t.m.Unlock()
    return t.headers
}
How does this work?
The mutex is locked at the beginning of the task, ensuring that anything calling WaitFor() will block. Once the headers are available and the mutex unlocked by the goroutine, each call to WaitFor() will execute one at a time. All future calls (even after the goroutine ends) will have no problem locking the mutex, since it will always be left unlocked.
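A hypothetical usage sketch of this pattern (none of the names beyond Task and WaitFor come from the answer): the download goroutine inside NewTask() would set t.headers before its deferred Unlock runs, and any number of callers can then block on WaitFor().

t := NewTask()
for i := 0; i < 3; i++ {
    go func(id int) {
        h := t.WaitFor() // blocks until the goroutine in NewTask unlocks the mutex
        fmt.Println("consumer", id, "got Content-Type:", h.Get("Content-Type"))
    }(i)
}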
Yes, you can use one channel to pass the Header to multiple goroutines.
headerChan := make(chan http.Header)

go func() { // This routine can be started many times
    header := <-headerChan // Wait for header
    // Do things with the header
}()

// Feed the header to all waiting go routines
for more := true; more; {
    select {
    case headerChan <- r.Header:
    default:
        more = false
    }
}
This can be done with channels pretty easily and the code will be clean. Below is the example. Hope this helps!
package main

import (
    "fmt"
    "sync"
    "time"
)

func main() {
    done := make(chan struct{})
    var wg sync.WaitGroup

    // fork the required number of goroutines
    for i := 0; i < 5; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            <-done
            fmt.Println("read the http headers from here")
        }()
    }

    time.Sleep(1 * time.Second) // download your large file here
    fmt.Println("Unblocking goroutines...")
    close(done) // this will unblock all the goroutines
    wg.Wait()
}
In the excellent book "Concurrency in Go" they provide the following easy solution while leveraging the fact that a channel that is closed will release all waiting clients.
package main

import (
    "fmt"
    "time"
)

func main() {
    httpHeaders := []string{}
    headerChan := make(chan interface{})

    consumerFunc := func(id int, stream <-chan interface{}, funcHeaders *[]string) {
        <-stream // unblocks when headerChan is closed
        fmt.Println("Consumer", id, "got headers:", funcHeaders)
    }

    for i := 0; i < 3; i++ {
        go consumerFunc(i, headerChan, &httpHeaders)
    }

    fmt.Println("Getting headers...")
    time.Sleep(2 * time.Second)
    httpHeaders = append(httpHeaders, "test1")
    fmt.Println("Publishing headers...")
    close(headerChan)
    time.Sleep(5 * time.Second)
}
https://play.golang.org/p/cE3SiKWNRIt