Golang: go command inside script?

I have a script written in Go that I do not quite understand. Why did the author write go server.Start() in this script instead of simply server.Start()?
package main

import (
    "github.com/miekg/dns"
    "testing"
    "time"
)

const TEST_ADDR = "127.0.0.1:9953"

func TestDNSResponse(t *testing.T) {
    server := NewDNSServer(&Config{
        dnsAddr: TEST_ADDR,
    })
    go server.Start()

    // Allow some time for server to start
    time.Sleep(150 * time.Millisecond)

    m := new(dns.Msg)
    m.Id = dns.Id()
    m.Question = []dns.Question{
        dns.Question{"docker.", dns.TypeA, dns.ClassINET},
    }

    c := new(dns.Client)
    _, _, err := c.Exchange(m, TEST_ADDR)
    if err != nil {
        t.Error("Error response from the server", err)
    }

    server.Stop()
    c = new(dns.Client)
    _, _, err = c.Exchange(m, TEST_ADDR)
    if err == nil {
        t.Error("Server still running but should be shut down.")
    }
}

If you invoke a function prefixed with the go keyword, it is called as a goroutine. A goroutine is a function that is capable of running concurrently with other functions.
Normally, when we invoke a function, all of its statements execute in order and control only returns to the line following the call once the function has finished. With a goroutine, control returns to the next line immediately; we don't wait for the function to complete.
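To make the difference concrete, here is a minimal sketch (the work function is a stand-in, not part of the question's code): a plain call blocks until the function returns, while a go call returns immediately, which is also why the test above sleeps briefly after go server.Start().

package main

import (
    "fmt"
    "time"
)

func work() {
    time.Sleep(100 * time.Millisecond)
    fmt.Println("work finished")
}

func main() {
    work()    // blocks: "work finished" prints before the next line runs
    go work() // returns immediately; work runs concurrently with main

    // Without this pause, main could exit before the goroutine prints.
    time.Sleep(200 * time.Millisecond)
}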

Related

How to wait for the command to finish/ till some seconds in golang?

I am running exec.Command("cf api https://something.com/") and the response sometimes takes a while. But when this command is executed, there is no wait: execution just continues immediately. I need to wait for some seconds, or until output has been received. How do I achieve this?
func TestCMDExex(t *testing.T) {
    expectedText := "Success"
    cmd := exec.Command("cf api https://something.com/")
    cmd.Dir = "/root//"
    out, err := cmd.Output()
    if err != nil {
        t.Fail()
    }
    assert.Contains(t, string(out), expectedText)
}
First: the correct way to create the cmd is:
cmd := exec.Command("cf", "api", "https://something.com/")
Every argument to the child program must be a separate string. This way you can also pass arguments that contain spaces in them. For instance, executing the program with:
cmd := exec.Command("cf", "api https://something.com/")
will pass one command line argument to cf, which is "api https://something.com/", whereas passing two strings will pass two arguments "api" and "https://something.com/".
In your original code, you are trying to execute a program whose name is "cf api https://something.com/".
Then you can run it and get the output:
out, err := cmd.Output()
This can be solved with a goroutine, a channel and the select statement. The sample code below also does error handling:
package main

import (
    "fmt"
    "os/exec"
    "time"
)

func main() {
    type output struct {
        out []byte
        err error
    }
    ch := make(chan output)
    go func() {
        // cmd := exec.Command("sleep", "1")
        // cmd := exec.Command("sleep", "5")
        cmd := exec.Command("false")
        out, err := cmd.CombinedOutput()
        ch <- output{out, err}
    }()
    select {
    case <-time.After(2 * time.Second):
        fmt.Println("timed out")
    case x := <-ch:
        fmt.Printf("program done; out: %q\n", string(x.out))
        if x.err != nil {
            fmt.Printf("program errored: %s\n", x.err)
        }
    }
}
By choosing one of the three exec.Command() options, you can see the code behave in the three possible ways: timing out, normal subprocess termination, and errored subprocess termination.
As usual when using goroutines, care must be taken to ensure they terminate, to avoid resource leaks.
Note also that if the executed subprocess is interactive, or if it prints its progress to stdout and it is important to see the output as it happens, then it is better to use cmd.Run(), remove the struct, and report only the error in the channel.
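As an alternative to rolling your own select, if you are on Go 1.7 or later you can let the standard library enforce the timeout with exec.CommandContext, which kills the process once the context deadline passes. A minimal sketch, reusing the cf arguments from the question:

package main

import (
    "context"
    "fmt"
    "os/exec"
    "time"
)

func main() {
    ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
    defer cancel()

    // If the command is still running when the context expires, it is killed
    // and the returned error reflects that.
    out, err := exec.CommandContext(ctx, "cf", "api", "https://something.com/").CombinedOutput()
    if err != nil {
        fmt.Println("command failed or timed out:", err)
    }
    fmt.Print(string(out))
}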
You can use a sync.WaitGroup. The worker function below is run in every goroutine; note that a WaitGroup must be passed to functions by pointer. On return, the worker notifies the WaitGroup that it is done, and the Sleep simulates an expensive task (remove it in your case). The WaitGroup in main is used to wait for all the goroutines launched there to finish: Wait blocks until the WaitGroup counter goes back to 0, i.e. all the workers have notified that they are done.
package main

import (
    "fmt"
    "sync"
    "time"
)

func worker(id int, wg *sync.WaitGroup) {
    defer wg.Done()
    fmt.Printf("Worker %d starting\n", id)
    time.Sleep(time.Second)
    fmt.Printf("Worker %d done\n", id)
}

func main() {
    var wg sync.WaitGroup
    for i := 1; i <= 5; i++ {
        wg.Add(1)
        go worker(i, &wg)
    }
    wg.Wait()
}
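A minimal sketch of the same pattern applied to the original question, assuming the cf command from the OP's code: the command runs in a goroutine and wg.Wait() blocks until it has finished.

package main

import (
    "fmt"
    "os/exec"
    "sync"
)

func main() {
    var wg sync.WaitGroup
    wg.Add(1)
    go func() {
        defer wg.Done()
        out, err := exec.Command("cf", "api", "https://something.com/").Output()
        if err != nil {
            fmt.Println("command failed:", err)
            return
        }
        fmt.Print(string(out))
    }()
    wg.Wait() // blocks until the command has completed
}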

exec command every one minute and serve the output via http

I want to run a command in the Bash shell every minute and serve the output over HTTP at http://localhost:8080/feed
package main

import (
    "fmt"
    "os/exec"
)

func main() {
    cmd := `<a piped command>`
    out, err := exec.Command("bash", "-c", cmd).Output()
    if err != nil {
        fmt.Sprintf("Failed to execute command: %s", cmd)
    }
    fmt.Println(string(out))
}
UPDATE:
package main

import (
    "fmt"
    "log"
    "net/http"
    "os/exec"
)

func handler(w http.ResponseWriter, r *http.Request) {
    cmd := `<a piped command>`
    out, err := exec.Command("bash", "-c", cmd).Output()
    if err != nil {
        fmt.Sprintf("Failed to execute command: %s", cmd)
    }
    fmt.Fprintf(w, string(out))
}

func main() {
    http.HandleFunc("/feed", handler)
    log.Fatal(http.ListenAndServe(":8080", nil))
}
With the above code, the command is run every time http://localhost:8080/feed is accessed. How do I make it cache(?) the output of the command for one minute, and then run the command again only after the cache time is expired?
If the output is not too big, you can keep it in an in-memory variable.
I took the approach of making readers wait for the result while the script is executing (using a mutex):
package main

import (
    "fmt"
    "net/http"
    "os/exec"
    "sync"
    "time"
)

var LoopDelay = 60 * time.Second

type Output struct {
    sync.Mutex
    content string
}

func main() {
    var output *Output = new(Output)
    go updateResult(output)

    http.HandleFunc("/feed", initHandle(output))
    err := http.ListenAndServe(":8080", nil)
    if err != nil {
        fmt.Println("ERROR", err)
    }
}

func initHandle(output *Output) func(http.ResponseWriter, *http.Request) {
    return func(respw http.ResponseWriter, req *http.Request) {
        output.Lock()
        defer output.Unlock()
        _, err := respw.Write([]byte(output.content))
        if err != nil {
            fmt.Println("ERROR: Unable to write response: ", err)
        }
    }
}

func updateResult(output *Output) {
    var execFn = func() { /* Extracted so that 'defer' executes at the end of each loop iteration */
        output.Lock()
        defer output.Unlock()
        command := exec.Command("bash", "-c", "date | nl ")
        output1, err := command.CombinedOutput()
        if err != nil {
            output.content = err.Error()
        } else {
            output.content = string(output1)
        }
    }
    for {
        execFn()
        time.Sleep(LoopDelay)
    }
}
Executing
date; curl http://localhost:8080/feed
gives output (on multiple calls):
1 dimanche 13 octobre 2019, 09:41:40 (UTC+0200)
http-server-cmd-output> date; curl http://localhost:8080/feed
dimanche 13 octobre 2019, 09:42:05 (UTC+0200)
1 dimanche 13 octobre 2019, 09:41:40 (UTC+0200)
A few things to consider:
- 'date | nl' is used as an example of a piped command
- If the output is too big, write it to a file instead
- It is very likely a good idea to hold the mutex only while updating the content (there is no need to block readers during script execution); you can try to improve that, as sketched below
- The goroutine could take a channel so it can exit on a signal (e.g. when the program ends)
EDIT: variable moved to main function
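To illustrate the point about narrowing the lock, here is a sketch of the updateResult loop only (keeping the Output type and LoopDelay from above): the command runs without the mutex held, and the lock is taken just for the assignment.

func updateResult(output *Output) {
    for {
        // Run the command without holding the lock, so HTTP handlers are not
        // blocked while the script executes.
        command := exec.Command("bash", "-c", "date | nl ")
        out, err := command.CombinedOutput()

        // Lock only for the brief content update.
        output.Lock()
        if err != nil {
            output.content = err.Error()
        } else {
            output.content = string(out)
        }
        output.Unlock()

        time.Sleep(LoopDelay)
    }
}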
How do I make it cache(?) the output of the command for one minute,
and then run the command again only after the cache time is expired?
In the solution below, two goroutines are declared.
The first goroutine loops until the context is done, executing the command at a regular interval and sending a copy of the result to the second goroutine.
The second goroutine either receives a fresh value from the first goroutine or distributes its current value to other goroutines.
The HTTP handler, the third kind of goroutine, only queries the value from the getter and does something with it.
The reason to use three kinds of routines instead of two in this example is to prevent blocking the HTTP routines while the command is executing. With the additional routine, HTTP requests only wait for the synchronization to occur.
The type state is a small struct used to transport the values within channels.
Race conditions are prevented by two facts: the state is passed by value, and cmd.Output() allocates a new buffer each time it runs.
To report the original command in the HTTP handler, the OP should build a custom error type that carries that information along with the recorded error; within the HTTP handler, the OP can type-assert the error to the custom type and read the details from there.
package main

import (
    "context"
    "fmt"
    "log"
    "net/http"
    "os/exec"
    "time"
)

type state struct {
    out []byte
    err error
}

func main() {
    ctx, cancel := context.WithCancel(context.Background())
    defer cancel()

    set := make(chan state, 1)
    go func() {
        ticker := time.NewTicker(2 * time.Second)
        cmd := []string{"date"}
        s := state{}
        s.out, s.err = exec.Command(cmd[0], cmd[1:]...).Output()
        set <- s
        for {
            select {
            case <-ctx.Done():
                return
            case <-ticker.C:
                s.out, s.err = exec.Command(cmd[0], cmd[1:]...).Output()
                set <- s
            }
        }
    }()

    get := make(chan state)
    go func() {
        s := state{}
        for {
            select {
            case <-ctx.Done():
                return
            case s = <-set: // get the fresh value, if any
            case get <- s: // distribute your copy
            }
        }
    }()

    http.HandleFunc("/feed", func(w http.ResponseWriter, r *http.Request) {
        state := <-get
        if state.err != nil {
            fmt.Printf("Failed to execute command: %v", state.err)
        }
        fmt.Fprintf(w, "%s", state.out)
    })
    log.Fatal(http.ListenAndServe(":8080", nil))
}
You could:
- write the last command output (the string(out)) to a log file: see "Go/Golang write log to file"
- serve that log file (and only that file) from a Go HTTP server: see "How to serve a file to a specific URL path with the FileServer function in go/golang"
That way, any time you go to your server, you will get the last command output.
How do I make it cache(?) the output of the command for one minute, and then run the command again only after the cache time is expired?
By keeping the execution of the command separate from the command output being served.
That is why you must keep the two asynchronous, typically through a goroutine and a time.NewTicker.
See "Is there a way to do repetitive tasks at intervals?"
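For reference, a minimal shape of that repetitive task with time.NewTicker (a sketch only; writing the output to the served log file is left out):

package main

import (
    "fmt"
    "os/exec"
    "time"
)

func main() {
    ticker := time.NewTicker(time.Minute)
    defer ticker.Stop()

    for {
        out, err := exec.Command("bash", "-c", "date").Output()
        if err != nil {
            fmt.Println("Failed to execute command:", err)
        } else {
            fmt.Print(string(out)) // or append it to the log file being served
        }
        <-ticker.C // wait for the next tick before running again
    }
}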

Capture Output and Errors of Goroutine Using Channels

I have a for loop that calls a function runCommand(), which runs a remote command on a switch and prints the output. The function is called in a goroutine on each iteration, and I am using a sync.WaitGroup to synchronize the goroutines. Now I need a way to capture the output and any errors from my runCommand() function in a channel. I have read many articles and watched a lot of videos on using channels with goroutines, but this is the first concurrent application I have ever written and I can't seem to wrap my head around the idea.
Basically, my program takes in a list of hostnames from the command line then asynchronously connects to each host, runs a configuration command on it, and prints the output. It is ok for my program to continue configuring the remaining hosts if one has an error.
How would I idiomatically send the output or error(s) of each call to runCommand() to a channel then receive the output or error(s) for printing?
Here is my code:
package main

import (
    "fmt"
    "golang.org/x/crypto/ssh"
    "os"
    "time"
    "sync"
)

func main() {
    hosts := os.Args[1:]
    clientConf := configureClient("user", "password")
    var wg sync.WaitGroup
    for _, host := range hosts {
        wg.Add(1)
        go runCommand(host, &clientConf, &wg)
    }
    wg.Wait()
    fmt.Println("Configuration complete!")
}

// Run a remote command
func runCommand(host string, config *ssh.ClientConfig, wg *sync.WaitGroup) {
    defer wg.Done()
    // Connect to the client
    client, err := ssh.Dial("tcp", host+":22", config)
    if err != nil {
        fmt.Println(err)
        return
    }
    defer client.Close()
    // Create a session
    session, err := client.NewSession()
    if err != nil {
        fmt.Println(err)
        return
    }
    defer session.Close()
    // Get the session output
    output, err := session.Output("show lldp ne")
    if err != nil {
        fmt.Println(err)
        return
    }
    fmt.Print(string(output))
    fmt.Printf("Connection to %s closed.\n", host)
}

// Set up client configuration
func configureClient(user, password string) ssh.ClientConfig {
    var sshConf ssh.Config
    sshConf.SetDefaults()
    // Append supported ciphers
    sshConf.Ciphers = append(sshConf.Ciphers, "aes128-cbc", "aes256-cbc", "3des-cbc", "des-cbc", "aes192-cbc")
    // Create client config
    clientConf := &ssh.ClientConfig{
        Config:          sshConf,
        User:            user,
        Auth:            []ssh.AuthMethod{ssh.Password(password)},
        HostKeyCallback: ssh.InsecureIgnoreHostKey(),
        Timeout:         time.Second * 5,
    }
    return *clientConf
}
EDIT: I got rid of the WaitGroup, as suggested, and now I need to keep track of which output belongs to which host by printing the hostname before its output, and printing a Connection to <host> closed. message when the goroutine completes. For example:
$ go run main.go host1[,host2[,...]]
Connecting to <host1>
[Output]
...
[Error]
Connection to <host1> closed.
Connecting to <host2>
...
Connection to <host2> closed.
Configuration complete!
I know the above won't necessarily process host1 and host2 in order, but I need to print the correct host value for the connecting and closing messages before and after the output/error(s), respectively. I tried deferring the closing message in the runCommand() function, but the message is printed before the output/error(s). Printing the closing message in the for loop after each goroutine call doesn't work as expected either.
Updated code:
package main

import (
    "fmt"
    "golang.org/x/crypto/ssh"
    "os"
    "time"
)

type CmdResult struct {
    Host   string
    Output string
    Err    error
}

func main() {
    start := time.Now()
    hosts := os.Args[1:]
    clientConf := configureClient("user", "password")
    results := make(chan CmdResult)
    for _, host := range hosts {
        go runCommand(host, &clientConf, results)
    }
    for i := 0; i < len(hosts); i++ {
        output := <-results
        fmt.Println(output.Host)
        if output.Output != "" {
            fmt.Printf("%s\n", output.Output)
        }
        if output.Err != nil {
            fmt.Printf("Error: %v\n", output.Err)
        }
    }
    fmt.Printf("Configuration complete! [%s]\n", time.Since(start).String())
}

// Run a remote command
func runCommand(host string, config *ssh.ClientConfig, ch chan CmdResult) {
    // This is printing before the output/error(s).
    // Does the same when moved to the bottom of this function.
    defer fmt.Printf("Connection to %s closed.\n", host)
    // Connect to the client
    client, err := ssh.Dial("tcp", host+":22", config)
    if err != nil {
        ch <- CmdResult{host, "", err}
        return
    }
    defer client.Close()
    // Create a session
    session, err := client.NewSession()
    if err != nil {
        ch <- CmdResult{host, "", err}
        return
    }
    defer session.Close()
    // Get the session output
    output, err := session.Output("show lldp ne")
    if err != nil {
        ch <- CmdResult{host, "", err}
        return
    }
    ch <- CmdResult{host, string(output), nil}
}

// Set up client configuration
func configureClient(user, password string) ssh.ClientConfig {
    var sshConf ssh.Config
    sshConf.SetDefaults()
    // Append supported ciphers
    sshConf.Ciphers = append(sshConf.Ciphers, "aes128-cbc", "aes256-cbc", "3des-cbc", "des-cbc", "aes192-cbc")
    // Create client config
    clientConf := &ssh.ClientConfig{
        Config:          sshConf,
        User:            user,
        Auth:            []ssh.AuthMethod{ssh.Password(password)},
        HostKeyCallback: ssh.InsecureIgnoreHostKey(),
        Timeout:         time.Second * 5,
    }
    return *clientConf
}
If you use an unbuffered channel, you actually don't need the sync.WaitGroup, because you can call the receive operator on the channel once for every goroutine that will send on the channel. Each receive operation will block until a send statement is ready, resulting in the same behavior as a WaitGroup.
To make this happen, change runCommand to execute a send statement exactly once before the function exits, under all conditions.
First, create a type to send over the channel:
type CommandResult struct {
    Output string
    Err    error
}
And edit your main() {...} to execute a receive operation on the channel the same number of times as the number of goroutines that will send to the channel:
func main() {
    ch := make(chan CommandResult) // initialize an unbuffered channel
    // rest of your setup
    for _, host := range hosts {
        go runCommand(host, &clientConf, ch) // pass in the channel
    }
    for x := 0; x < len(hosts); x++ {
        fmt.Println(<-ch) // this will block until one is ready to send
    }
}
And edit your runCommand function to accept the channel, remove references to WaitGroup, and execute the send exactly once under all conditions:
func runCommand(host string, config *ssh.ClientConfig, ch chan CommandResult) {
    // do stuff that generates output, err; then when ready to exit the function:
    ch <- CommandResult{output, err}
}
EDIT: Question updated with stdout message order requirements
I'd like to get nicely formatted output that ignores the order of events
In this case, remove all print statements from runCommand; you're going to put all output into the element passed on the channel so it can be grouped together. Edit the CommandResult type to contain the additional fields you want to organize by, such as:
type CommandResult struct {
    Host   string
    Output string
    Err    error
}
If you don't need to sort your results, you can just move on to printing the data received, e.g.
for x := 0; x < len(hosts); x++ {
    r := <-ch
    fmt.Printf("Host: %s----\nOutput: %s\n", r.Host, r.Output)
    if r.Err != nil {
        fmt.Printf("Error: %s\n", r.Err)
    }
}
If you do need to sort your results, then in your main goroutine, add the elements received on the channel to a slice:
...
results := make([]CommandResult, 0, len(hosts))
for x := 0; x < len(hosts); x++ {
    results = append(results, <-ch) // this will block until one is ready to send
}
Then you can use the sort package in the Go standard library to sort your results for printing. For example, you could sort them alphabetically by host. Or you could put the results into a map with host string as the key instead of a slice to allow you to print in the order of the original host list.
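A small sketch of the sorting variant (the sample results slice is a stand-in for the one collected from the channel above):

package main

import (
    "fmt"
    "sort"
)

type CommandResult struct {
    Host   string
    Output string
    Err    error
}

func main() {
    // Stand-in for the slice collected from the channel above.
    results := []CommandResult{
        {Host: "host2", Output: "lldp neighbors ..."},
        {Host: "host1", Output: "lldp neighbors ..."},
    }

    // Sort alphabetically by host before printing.
    sort.Slice(results, func(i, j int) bool { return results[i].Host < results[j].Host })

    for _, r := range results {
        fmt.Printf("Host: %s\nOutput: %s\n", r.Host, r.Output)
        if r.Err != nil {
            fmt.Printf("Error: %v\n", r.Err)
        }
    }
}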

Wait result of multiple goroutines

I am looking for a way to execute two functions asynchronously in Go, each returning a different result and an error, wait for both to finish, and print both results. Also, if one of the functions returns an error, I do not want to wait for the other function; I just want to print the error.
For example, I have this functions:
func methodInt(error bool) (int, error) {
    <-time.NewTimer(time.Millisecond * 100).C
    if error {
        return 0, errors.New("Some error")
    } else {
        return 1, nil
    }
}

func methodString(error bool) (string, error) {
    <-time.NewTimer(time.Millisecond * 120).C
    if error {
        return "", errors.New("Some error")
    } else {
        return "Some result", nil
    }
}
Here https://play.golang.org/p/-8StYapmlg is how I implemented it, but I think it has too much code. It could be simplified by using interface{}, but I don't want to go that way. I want something simpler, as can be done in C# with async/await, for example. Perhaps there is a library that simplifies such an operation.
UPDATE: Thanks for your responses! It is awesome how fast I got help! I like the usage of WaitGroup. It obviously makes the code more robust to changes, so I can easily add another async method without depending on the exact number of methods at the end. However, there is still a lot of code compared to the same thing in C#. I know that in Go I don't need to explicitly mark methods as async, making them actually return tasks, but the method calls look much simpler; for example, consider this link (actually catching the exception is also needed).
By the way, I found that in my task I actually don't need to know the return types of the functions I want to run asynchronously, because the results will be marshaled to JSON anyway, and now I just call multiple services in the endpoint layer of go-kit.
You should create two channels, one for errors and one for results, then first read the errors; if there are no errors, read the results. This sample should work for your use case:
package main

import (
    "errors"
    "sync"
)

func test(i int) (int, error) {
    if i > 2 {
        return 0, errors.New("test error")
    }
    return i + 5, nil
}

func test2(i int) (int, error) {
    if i > 3 {
        return 0, errors.New("test2 error")
    }
    return i + 7, nil
}

func main() {
    results := make(chan int, 2)
    errors := make(chan error, 2)
    var wg sync.WaitGroup

    wg.Add(1)
    go func() {
        defer wg.Done()
        result, err := test(3)
        if err != nil {
            errors <- err
            return
        }
        results <- result
    }()

    wg.Add(1)
    go func() {
        defer wg.Done()
        result, err := test2(3)
        if err != nil {
            errors <- err
            return
        }
        results <- result
    }()

    // here we wait in another goroutine for all jobs to be done and close the channels
    go func() {
        wg.Wait()
        close(results)
        close(errors)
    }()

    for err := range errors {
        // an error happened; you could exit the caller function here
        println(err.Error())
        return
    }
    for res := range results {
        println("--------- ", res, " ------------")
    }
}
I think sync.WaitGroup can be used here. It can wait for a dynamic number of goroutines.
I have created a smaller, self-contained example of how you can have two go routines run asynchronously and wait for both to finish or quit the program if an error occurs (see below for an explanation):
package main

import (
    "errors"
    "fmt"
    "math/rand"
    "time"
)

func main() {
    rand.Seed(time.Now().UnixNano())

    // buffer the channel so the async go routines can exit right after sending
    // their error
    status := make(chan error, 2)

    go func(c chan<- error) {
        if rand.Intn(2) == 0 {
            c <- errors.New("func 1 error")
        } else {
            fmt.Println("func 1 done")
            c <- nil
        }
    }(status)

    go func(c chan<- error) {
        if rand.Intn(2) == 0 {
            c <- errors.New("func 2 error")
        } else {
            fmt.Println("func 2 done")
            c <- nil
        }
    }(status)

    for i := 0; i < 2; i++ {
        if err := <-status; err != nil {
            fmt.Println("error encountered:", err)
            break
        }
    }
}
What I do is create a channel that is used for synchronization of the two go routines. Writing to and reading from it blocks. The channel is used to pass the error value around, or nil if the function succeeds.
At the end I read one value per async go routine from the channel. This blocks until a value is received. If an error occurs, I exit the loop, thus quitting the program.
The functions either succeed or fail randomly.
I hope this gets you going on how to coordinate go routines, if not, let me know in the comments.
Note that if you run this in the Go Playground, the rand.Seed will do nothing, the playground always has the same "random" numbers, so the behavior will not change.
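Another option worth mentioning is the golang.org/x/sync/errgroup package, which wraps this pattern: Wait blocks until all goroutines launched with Go have returned and yields the first non-nil error. Note that it still waits for both functions to return; abandoning the slower one early would require context-aware functions (errgroup.WithContext). A minimal sketch, with stand-ins for the question's methodInt/methodString:

package main

import (
    "errors"
    "fmt"
    "time"

    "golang.org/x/sync/errgroup"
)

// Stand-ins for the question's methodInt and methodString.
func methodInt(fail bool) (int, error) {
    time.Sleep(100 * time.Millisecond)
    if fail {
        return 0, errors.New("some error")
    }
    return 1, nil
}

func methodString(fail bool) (string, error) {
    time.Sleep(120 * time.Millisecond)
    if fail {
        return "", errors.New("some error")
    }
    return "Some result", nil
}

func main() {
    var (
        g errgroup.Group
        i int
        s string
    )
    g.Go(func() error {
        var err error
        i, err = methodInt(false)
        return err
    })
    g.Go(func() error {
        var err error
        s, err = methodString(false)
        return err
    })
    // Wait blocks until both goroutines return and yields the first non-nil error.
    if err := g.Wait(); err != nil {
        fmt.Println("error:", err)
        return
    }
    fmt.Println(i, s)
}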

Passing parameters to function closure

I'm trying to understand the difference in Go between creating an anonymous function which takes a parameter, versus having that function act as a closure. Here is an example of the difference.
With parameter:
func main() {
    done := make(chan bool, 1)
    go func(c chan bool) {
        time.Sleep(50 * time.Millisecond)
        c <- true
    }(done)
    <-done
}
As closure:
func main() {
    done := make(chan bool, 1)
    go func() {
        time.Sleep(50 * time.Millisecond)
        done <- true
    }()
    <-done
}
My question is, when is the first form better than the second? Would you ever use a parameter for this kind of thing? The only time I can see the first form being useful is when returning a func(x, y) from another function.
The difference between using a closure and using a function parameter comes down to sharing the same variable versus getting a copy of the value. Consider the two examples below.
In the Closure example, all function calls use the value stored in i. This value will most likely already have reached 3 before any of the goroutines has had time to print its value.
In the Parameter example, each function call gets a copy of the value of i as it was when the call was made, giving us the result we more likely wanted:
Closure:
for i := 0; i < 3; i++ {
    go func() {
        fmt.Println(i)
    }()
}
Result:
3
3
3
Parameter:
for i := 0; i < 3; i++ {
    go func(v int) {
        fmt.Println(v)
    }(i)
}
Result:
0
1
2
Playground: http://play.golang.org/p/T5rHrIKrQv
When to use parameters
The first form is definitely preferred if you plan to change the value of the variable and you do not want that change to be observed inside the function.
This is the typical case when the anonymous function is inside a for loop and you intend to use the loop's variables, for example:
for i := 0; i < 10; i++ {
    go func(i int) {
        fmt.Println(i)
    }(i)
}
Without passing the variable i, you might see 10 printed ten times. By passing i, you will see the numbers 0 to 9 printed.
When not to use parameters
If you don't intend to change the value of the variable, it is cheaper not to pass it and thus not create another copy of it. This is especially true for large structs. However, if you later alter the code and modify the variable, you may easily forget to check its effect on the closure and get unexpected results.
Also there might be cases when you do want to observe changes made to "outer" variables, such as:
func GetRes(name string) (Res, error) {
    res, err := somepack.OpenRes(name)
    if err != nil {
        return nil, err
    }

    closeres := true
    defer func() {
        if closeres {
            res.Close()
        }
    }()

    // Do other stuff
    if err = otherStuff(); err != nil {
        return nil, err // res will be closed
    }

    // Everything went well, return res, but
    // res must not be closed, it will be the responsibility of the caller
    closeres = false
    return res, nil // res will not be closed
}
In this case GetRes() opens some resource. But before returning it, other things have to be done which might also fail. If they fail, res must be closed and not returned. If everything goes well, res must not be closed and is returned, and closing it becomes the caller's responsibility.
This is an example of the parameter form from net.Listen:
package main

import (
    "io"
    "log"
    "net"
)

func main() {
    // Listen on TCP port 2000 on all available unicast and
    // anycast IP addresses of the local system.
    l, err := net.Listen("tcp", ":2000")
    if err != nil {
        log.Fatal(err)
    }
    defer l.Close()
    for {
        // Wait for a connection.
        conn, err := l.Accept()
        if err != nil {
            log.Fatal(err)
        }
        // Handle the connection in a new goroutine.
        // The loop then returns to accepting, so that
        // multiple connections may be served concurrently.
        go func(c net.Conn) {
            // Echo all incoming data.
            io.Copy(c, c)
            // Shut down the connection.
            c.Close()
        }(conn)
    }
}
