On Windows I need a Go program that suspends and resumes a process (started with exec.Command).
The process runs for 5 seconds and is then suspended with DebugActiveProcess.
After another 5 seconds it is resumed with DebugActiveProcessStop.
It works flawlessly this way... the problem arises when I wait 60 minutes or more (sometimes even less): after that amount of time DebugActiveProcessStop fails with "Permission denied".
Here is the example code:
package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"path/filepath"
	"syscall"
	"time"
)

var (
	// DebugActiveProcess function (debugapi.h)
	// https://learn.microsoft.com/en-us/windows/win32/api/debugapi/nf-debugapi-debugactiveprocess
	modkernel32                = syscall.NewLazyDLL("kernel32.dll")
	procDebugActiveProcess     = modkernel32.NewProc("DebugActiveProcess")
	procDebugActiveProcessStop = modkernel32.NewProc("DebugActiveProcessStop")
)

func main() {
	dir := "C:\\path\\to\\folder"
	execName := "counter.bat" // using a batch file as an example of a long-running program
	cmd := exec.Command(filepath.Join(dir, execName))
	cmd.Dir = dir
	cmd.SysProcAttr = &syscall.SysProcAttr{
		CreationFlags: syscall.CREATE_NEW_PROCESS_GROUP,
	}
	cmdprinter(cmd)
	cmd.Start()

	// wait 5 seconds
	time.Sleep(5 * time.Second)
	fmt.Println("--- STOPPING ---")
	// https://github.com/go-delve/delve/blob/master/pkg/proc/native/zsyscall_windows.go#L177-L199
	r1, _, e1 := syscall.Syscall(procDebugActiveProcess.Addr(), 1, uintptr(cmd.Process.Pid), 0, 0)
	if r1 == 0 {
		fmt.Println("error stopping process tree:", e1.Error())
	}
	fmt.Println("STOPPING done")

	// wait 60 minutes
	for n := 0; n < 3600; n++ {
		fmt.Print(n, " ")
		time.Sleep(time.Second)
	}

	fmt.Println("\n--- STARTING ---")
	r1, r2, e1 := syscall.Syscall(procDebugActiveProcessStop.Addr(), 1, uintptr(cmd.Process.Pid), 0, 0)
	if r1 == 0 {
		fmt.Println("error starting process tree:", e1.Error(), "r2:", r2, "GetLastError:", syscall.GetLastError())
	}
	fmt.Println("STARTING done")

	cmd.Wait()
	fmt.Println("exiting")
}

func cmdprinter(cmd *exec.Cmd) {
	outP, _ := cmd.StdoutPipe()
	errP, _ := cmd.StderrPipe()
	go func() {
		scanner := bufio.NewScanner(outP)
		for scanner.Scan() {
			fmt.Println(scanner.Text())
		}
	}()
	go func() {
		scanner := bufio.NewScanner(errP)
		for scanner.Scan() {
			fmt.Println(scanner.Text())
		}
	}()
}
This is counter.bat:
echo off
set /a n = 0
:START
ping 127.0.0.1 -n 2 > nul
echo counter says %n%
set /a "n=%n%+1"
if %n%==30 (EXIT 0)
GOTO START
What might be the cause of this? How can I fix it?
Is there any other way to suspend and resume a process on Windows (like kill -STOP on Linux)? (I heard of the Windows API suspend function but was unable to make it work.)
This is because DebugActiveProcessStop must be called on the same thread as the one that made the original DebugActiveProcess call. A very old thread gave a hint in this direction.
Now this presents a problem in Go, because you do not have direct control over which thread a goroutine runs on... Go abstracts this "nitty gritty" away. The workaround that has held true for me so far is using runtime.LockOSThread().
Something like:
func exampleSuspendResume() {
	runtime.LockOSThread()
	defer runtime.UnlockOSThread()
	// call DebugActiveProcess
	// do more stuff WITHOUT calling `go func`
	// call DebugActiveProcessStop
}
Related
I'm using Go's syscall package Ptrace interface to trace a process. The problem is, if the tracee is long-running, the tracing seems to hang. I tried replicating the issue with a C implementation, but there everything seems to work fine.
Here's Go code to reproduce the issue:
package main

import (
	"fmt"
	"os"
	"os/exec"
	"syscall"
)

func main() {
	len := "9999999"
	cmd := exec.Command("openssl", "rand", "-hex", len)
	cmd.SysProcAttr = &syscall.SysProcAttr{Ptrace: true}
	cmd.Stdout = os.Stdout
	cmd.Stdin = os.Stdin
	cmd.Start()
	pid, _ := syscall.Wait4(-1, nil, syscall.WALL, nil)
	for {
		syscall.PtraceSyscall(pid, 0)
		_, err := syscall.Wait4(-1, nil, syscall.WALL, nil)
		if err != nil {
			fmt.Println(err)
			break
		}
	}
}
When running the above code, the process never completes and has to be interrupted. If the len variable is changed to something smaller, for example 9, the process completes without issues and the output is something like the following:
$ go run main.go
d2ff963e65e8e1926b
no child processes
Found it. The program hangs when the Go runtime moves the goroutine to a different thread. This can be verified in the example code by printing fmt.Println(syscall.Gettid()) inside the loop:
package main

import (
	"fmt"
	"os/exec"
	"syscall"
)

func main() {
	len := "9999999"
	cmd := exec.Command("openssl", "rand", "-hex", len)
	cmd.SysProcAttr = &syscall.SysProcAttr{Ptrace: true}
	cmd.Start()
	pid, _ := syscall.Wait4(-1, nil, syscall.WALL, nil)
	for {
		fmt.Println(syscall.Gettid())
		syscall.PtraceSyscall(pid, 0)
		_, err := syscall.Wait4(-1, nil, syscall.WALL, nil)
		if err != nil {
			fmt.Println(err)
			break
		}
	}
}
Solution: lock the execution of the goroutine to its current thread by using runtime.LockOSThread():
....
func main() {
	runtime.LockOSThread()
	len := "9999999"
	cmd := exec.Command("openssl", "rand", "-hex", len)
....
I have three commands to run, but I'd like to make sure the first two are running before running the third one.
Currently, it does run A and B, then C.
I run A and B in goroutines
I communicate their name through chan if there's no stderr
the main function pushes the names received through the chan into a slice
once the slice contains all names of module A and B it starts C
Some context
I'm in the process of learning goroutines and chans as a hobbyist. It's not clear to me how to reliably capture the output of exec.Command("foo", "bar").Run() while it's running. It's not clear either how to handle errors received from each process through a chan.
The reason why I need A and B to run before C is that A and B are graphql microservices; C needs them running in order to fetch their schemas through HTTP and do some graphql federation (f.k.a. graphql stitching).
Inconsistencies
With my current approach, I will only know that A and B are running if they print something, I guess.
I don't like that each subsequent stdout line has to hit an if statement just to know whether the process is running.
My error handling is not as clean as I'd like it to be.
Question
How could I more reliably ensure that A and B are running, even if they don't print anything, and that they did not throw errors?
package main

import (
	"bufio"
	"fmt"
	"log"
	"os/exec"
	"reflect"
	"sort"
	"strings"
	"sync"
)

var wg sync.WaitGroup
var modulesToRun = []string{"micro-post", "micro-hello"}

func main() {
	// Send multiple values to chan
	// https://stackoverflow.com/a/50857250/9077800
	c := make(chan func() (string, error))
	go runModule([]string{"go", "run", "micro-post"}, c)  // PROCESS A
	go runModule([]string{"go", "run", "micro-hello"}, c) // PROCESS B
	modulesRunning := []string{}
	for {
		msg, err := (<-c)()
		if err != nil {
			log.Fatalln(err)
		}
		if strings.HasPrefix(msg, "micro-") && err == nil {
			modulesRunning = append(modulesRunning, msg)
			if CompareUnorderedSlices(modulesToRun, modulesRunning) {
				go runModule([]string{"go", "run", "micro-federation"}, c) // PROCESS C
			}
		}
	}
}

func runModule(commandArgs []string, o chan func() (string, error)) {
	cmd := exec.Command(commandArgs[0], commandArgs[1], commandArgs[2]+"/main.go")
	// Less verbose solution to stream output with io?
	// var stdBuffer bytes.Buffer
	// mw := io.MultiWriter(os.Stdout, &stdBuffer)
	// cmd.Stdout = mw
	// cmd.Stderr = mw
	c := make(chan struct{})
	wg.Add(1)
	// Stream command output
	// https://stackoverflow.com/a/38870609/9077800
	go func(cmd *exec.Cmd, c chan struct{}) {
		defer wg.Done()
		stdout, err := cmd.StdoutPipe()
		if err != nil {
			close(o)
			panic(err)
		}
		stderr, err := cmd.StderrPipe()
		if err != nil {
			close(o)
			panic(err)
		}
		<-c
		outScanner := bufio.NewScanner(stdout)
		for outScanner.Scan() {
			m := outScanner.Text()
			fmt.Println(commandArgs[2]+":", m)
			o <- (func() (string, error) { return commandArgs[2], nil })
		}
		errScanner := bufio.NewScanner(stderr)
		for errScanner.Scan() {
			m := errScanner.Text()
			fmt.Println(commandArgs[2]+":", m)
			o <- (func() (string, error) { return "bad", nil })
		}
	}(cmd, c)
	c <- struct{}{}
	cmd.Start()
	wg.Wait()
	close(o)
}

// CompareUnorderedSlices orders slices before comparing them
func CompareUnorderedSlices(a, b []string) bool {
	if len(a) != len(b) {
		return false
	}
	sort.Strings(a)
	sort.Strings(b)
	return reflect.DeepEqual(a, b)
}
About process management
Starting a process is the action of calling the binary path with its arguments.
It will fail if the binary path is not found, or if some malformed argument syntax is provided.
As a consequence, you might start a process with success but still receive an exit error because its execution somehow fails.
Those details are important in deciding whether merely starting the process counts as success, or whether you need to dig further into its state and/or output.
In your code it appears you wait for the first line of output to be printed to consider the process started, without any consideration of the content being printed.
It resembles a kind of sleep to ensure the process has initialized.
Consider that starting the binary happens much faster than the execution of its bootstrap sequence.
About the code, your exit rules are unclear. What is keeping main from exiting?
In the current code it will exit before C is executed, once A and B have started (not analysing other cases).
Your implementation of job concurrency in main is not standard. It is missing the loop to collect results, quit, and close(chan).
The chan signature is awkward; I would rather use a struct{ Module string; Err error }.
The runModule function is buggy. It might close(o) while another goroutine attempts to write to it, and if Start fails you are not returning any error signal.
A solution might look like the following; consider it opinionated, and depending on the binary being run, other strategies can/should be implemented to detect errors over the standard FDs.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
	"sync"
	"time"
)

type cmd struct {
	Module string
	Cmd    string
	Args   []string
	Err    error
}

func main() {
	torun := []cmd{
		{
			Module: "A",
			Cmd:    "ping",
			Args:   []string{"8.8.8.8"},
		},
		{
			Module: "B",
			Cmd:    "ping",
			// Args: []string{"8.8.8.8.9"},
			Args: []string{"8.8.8.8"},
		},
	}

	var wg sync.WaitGroup // use a waitgroup to ensure all concurrent jobs are done
	wg.Add(len(torun))
	out := make(chan cmd) // a channel to output cmd status
	go func() {
		wg.Wait()  // wait for the group to finish
		close(out) // then close the signal channel
	}()
	// start the commands
	for _, c := range torun {
		// go runCmd(c, out, &wg)
		go runCmdAndWaitForSomeOutput(c, out, &wg)
	}
	// loop over the chan to collect errors;
	// it ends when wg.Wait unfreezes and closes out
	for c := range out {
		if c.Err != nil {
			log.Fatalf("%v %v has failed with %v", c.Cmd, c.Args, c.Err)
		}
	}
	// here all commands have started; you can proceed further to run the last command
	fmt.Println("all done")
	os.Exit(0)
}

func runCmd(o cmd, out chan cmd, wg *sync.WaitGroup) {
	defer wg.Done()
	cmd := exec.Command(o.Cmd, o.Args...)
	if err := cmd.Start(); err != nil {
		o.Err = err // save err
		out <- o    // signal completion error
		return      // return to unfreeze the waitgroup wg
	}
	go cmd.Wait() // don't wait for command completion;
	// consider it done once the program started with success.
	// out <- o // useless as main looks only for errors
}

func runCmdAndWaitForSomeOutput(o cmd, out chan cmd, wg *sync.WaitGroup) {
	defer wg.Done()
	cmd := exec.Command(o.Cmd, o.Args...)
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		o.Err = err // save err
		out <- o    // signal completion
		return      // return to unfreeze the waitgroup wg
	}
	stderr, err := cmd.StderrPipe()
	if err != nil {
		o.Err = err
		out <- o
		return
	}
	if err := cmd.Start(); err != nil {
		o.Err = err
		out <- o
		return
	}
	go cmd.Wait() // don't wait for command completion

	// build a concurrent fd's scanner
	outScan := make(chan error) // to signal errors detected on the fd
	var wg2 sync.WaitGroup
	wg2.Add(2) // the number of fds being watched
	go func() {
		defer wg2.Done()
		sc := bufio.NewScanner(stdout)
		for sc.Scan() {
			line := sc.Text()
			if strings.Contains(line, "icmp_seq") { // the OK marker
				return // quit asap to unfreeze wg2
			} else if strings.Contains(line, "not known") { // the nOK marker, if any...
				outScan <- fmt.Errorf("%v", line)
				return // quit to unfreeze wg2
			}
		}
	}()
	go func() {
		defer wg2.Done()
		sc := bufio.NewScanner(stderr)
		for sc.Scan() {
			line := sc.Text()
			if strings.Contains(line, "icmp_seq") { // the OK marker
				return // quit asap to unfreeze wg2
			} else if strings.Contains(line, "not known") { // the nOK marker, if any...
				outScan <- fmt.Errorf("%v", line) // signal error
				return                            // quit to unfreeze wg2
			}
		}
	}()
	go func() {
		wg2.Wait() // consider that if the program does not output anything,
		// or never prints ok/nok, this will block forever
		close(outScan) // close the chan so the next loop is finite
	}()
	// - simple timeout-less loop
	// for err := range outScan {
	// 	if err != nil {
	// 		o.Err = err // save the execution error
	// 		out <- o    // signal the cmd
	// 		return      // quit to unfreeze the wait group wg
	// 	}
	// }
	// - more complex version with timeout
	timeout := time.After(time.Second * 3)
	for {
		select {
		case err, ok := <-outScan:
			if !ok { // if !ok, outScan is closed and we should quit the loop
				return
			}
			if err != nil {
				o.Err = err // save the execution error
				out <- o    // signal the cmd
				return      // quit to unfreeze the wait group wg
			}
		case <-timeout:
			o.Err = fmt.Errorf("timed out...%v", timeout) // save the execution error
			out <- o                                      // signal the cmd
			return                                        // quit to unfreeze the wait group wg
		}
	}
	// exit and unfreeze the wait group wg
}
I have written the following code to run until someone exits the program manually.
What it does is:
----- check if the file exists every 1 second
----- if available, read the file and print its contents line by line
For this I first call a function from main, and then use a WaitGroup and call a function from there to do the aforementioned tasks.
Please check whether I have written the source code correctly, as I'm a newbie to Go.
Also, this only runs once and stops... I want it to keep alive and keep checking whether the file exists.
Please help me.
package main

import (
	"encoding/csv"
	"fmt"
	"io"
	"log"
	"os"
	"sync"
	"time"
)

func main() {
	mainfunction()
}

//------------------------------------------------------------------

func mainfunction() {
	var wg sync.WaitGroup
	wg.Add(1)
	go filecheck(&wg)
	wg.Wait()
	fmt.Printf("Program finished \n")
}

func filecheck(wg *sync.WaitGroup) {
	for range time.Tick(time.Second * 1) {
		fmt.Println("Foo")
		var wgi sync.WaitGroup
		wgi.Add(1)
		oldName := "test.csv"
		newName := "testi.csv"
		if _, err := os.Stat(oldName); os.IsNotExist(err) {
			fmt.Printf("Path does not exsist \n")
		} else {
			os.Rename(oldName, newName)
			if err != nil {
				log.Fatal(err)
			}
			looping(newName, &wgi)
		}
		fmt.Printf("Test complete \n")
		wgi.Wait()
		wg.Done()
		time.Sleep(time.Second * 5)
	}
}

func looping(newName string, wgi *sync.WaitGroup) {
	file, _ := os.Open(newName)
	r := csv.NewReader(file)
	for {
		record, err := r.Read()
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		var Date = record[0]
		var Agent = record[1]
		var Srcip = record[2]
		var Level = record[3]
		fmt.Printf("Data: %s Agent: %s Srcip: %s Level: %s\n", Date, Agent, Srcip, Level)
	}
	fmt.Printf("Test complete 2 \n")
	wgi.Done()
	fmt.Printf("for ended")
}
The short answer is that you have this in the loop:
wg.Done()
which makes the main goroutine proceed to exit as soon as the file is read once.
The longer answer is that you're not using wait groups correctly here, IMHO. For example, there's absolutely no point in passing a WaitGroup into looping.
It's not clear what your code is trying to accomplish - you certainly don't need any goroutines to perform the task you've specified - it can all be done with no concurrency and thus simpler code.
I'm getting in 'stdin' lines of URL's like:
$ echo -e 'https://golang.org\nhttps://godoc.org\nhttps://golang.org' | go run 1.go .
The task is to count the occurrences of the word "Go" on each web page. But I'm not allowed to start more than 5 goroutines and can use only the standard library.
Here is my code:
package main

import (
	"bufio"
	"fmt"
	"io/ioutil"
	"net/http"
	"os"
	"regexp"
	"time"
)

func worker(id int, jobs <-chan string, results chan<- int) {
	t0 := time.Now()
	for url := range jobs {
		resp, err := http.Get(url)
		if err != nil {
			fmt.Println("problem while opening url", url)
			results <- 0
			//continue
		}
		defer resp.Body.Close()
		html, err := ioutil.ReadAll(resp.Body)
		if err != nil {
			continue
		}
		regExp := regexp.MustCompile("Go")
		matches := regExp.FindAllStringIndex(string(html), -1)
		t1 := time.Now()
		fmt.Println("Count for", url, ":", len(matches), "Elapsed time:",
			t1.Sub(t0), "works id", id)
		results <- len(matches)
	}
}

func main() {
	scanner := bufio.NewScanner(os.Stdin)
	jobs := make(chan string, 100)
	results := make(chan int, 100)
	t0 := time.Now()
	for w := 0; w < 5; w++ {
		go worker(w, jobs, results)
	}
	var tasks int = 0
	res := 0
	for scanner.Scan() {
		jobs <- scanner.Text()
		tasks++
	}
	close(jobs)
	for a := 1; a <= tasks; a++ {
		res += <-results
	}
	close(results)
	t2 := time.Now()
	fmt.Println("Total:", res, "Elapsed total time:", t2.Sub(t0))
}
I thought it worked until I passed more than 5 URLs (one of them incorrect) to stdin. The output was:
goroutine 9 [running]:
panic ...
Obviously, extra goroutines have been started. How can I fix it? Maybe there is a more convenient way to limit the number of goroutines?
goroutine 9 [running]:
Some goroutines are started by the runtime and by web fetches.
Looking at your code, you only started 5 goroutines yourself.
If you really want to know how many goroutines you are running, use runtime.NumGoroutine.
The Streaming commands output progress question addresses the problem of printing the progress of a long-running command.
I tried to put the printing code in a goroutine, but the scanner claims to have already hit EOF immediately and the for block is never executed.
The bufio Scanner code that gets executed on the first call of the Scan() method is:
// We cannot generate a token with what we are holding.
// If we've already hit EOF or an I/O error, we are done.
if s.err != nil {
	// Shut it down.
	s.start = 0
	s.end = 0
	return false
}
And if I print s.err the output is EOF.
The code I'm trying to run is:
cmd := exec.Command("some", "command")
c := make(chan int, 1)
go func(cmd *exec.Cmd, c chan int) {
	stdout, _ := cmd.StdoutPipe()
	<-c
	scanner := bufio.NewScanner(stdout)
	for scanner.Scan() {
		m := scanner.Text()
		fmt.Println(m)
	}
}(cmd, c)
cmd.Start()
c <- 1
cmd.Wait()
The idea is to start the goroutine, get hold of cmd's stdout, wait for the cmd to be started, and start processing its output.
The result is that the long command gets executed and the program waits for its completion, but nothing is printed to the terminal.
Any idea why, by the time scanner.Scan() is invoked for the first time, stdout has already reached EOF?
There are some problems:
The pipe is being closed before all the data has been read.
Always check for errors.
Call cmd.Start() after c <- struct{}{}, and use an unbuffered channel: c := make(chan struct{}).
Two working sample codes:
1: Wait using a channel, then signal after EOF using defer func() { c <- struct{}{} }(), like this working sample code:
package main

import (
	"bufio"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("Streamer")
	c := make(chan struct{})
	go run(cmd, c)
	c <- struct{}{}
	cmd.Start()
	<-c
	if err := cmd.Wait(); err != nil {
		fmt.Println(err)
	}
	fmt.Println("done.")
}

func run(cmd *exec.Cmd, c chan struct{}) {
	defer func() { c <- struct{}{} }()
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	<-c
	scanner := bufio.NewScanner(stdout)
	for scanner.Scan() {
		m := scanner.Text()
		fmt.Println(m)
	}
	fmt.Println("EOF")
}
2: Also you may Wait using sync.WaitGroup, like this working sample code:
package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"sync"
)

var wg sync.WaitGroup

func main() {
	cmd := exec.Command("Streamer")
	c := make(chan struct{})
	wg.Add(1)
	go func(cmd *exec.Cmd, c chan struct{}) {
		defer wg.Done()
		stdout, err := cmd.StdoutPipe()
		if err != nil {
			panic(err)
		}
		<-c
		scanner := bufio.NewScanner(stdout)
		for scanner.Scan() {
			m := scanner.Text()
			fmt.Println(m)
		}
	}(cmd, c)
	c <- struct{}{}
	cmd.Start()
	wg.Wait()
	fmt.Println("done.")
}
And Streamer sample code (just for testing):
package main

import (
	"fmt"
	"time"
)

func main() {
	for i := 0; i < 10; i++ {
		time.Sleep(1 * time.Second)
		fmt.Println(i, ":", time.Now().UTC())
	}
}
And see func (c *Cmd) StdoutPipe() (io.ReadCloser, error) Docs:
StdoutPipe returns a pipe that will be connected to the command's
standard output when the command starts.
Wait will close the pipe after seeing the command exit, so most
callers need not close the pipe themselves; however, an implication is
that it is incorrect to call Wait before all reads from the pipe have
completed. For the same reason, it is incorrect to call Run when using
StdoutPipe. See the example for idiomatic usage.
From godocs:
StdoutPipe returns a pipe that will be connected to the command's
standard output when the command starts.
Wait will close the pipe after seeing the command exit, so most
callers need not close the pipe themselves; however, an implication is
that it is incorrect to call Wait before all reads from the pipe have
completed.
You are calling Wait() immediately after starting the command, so the pipe gets closed as soon as the command completes, before making sure you have read all the data from the pipe. Try moving Wait() into your goroutine, after the scan loop.
go func(cmd *exec.Cmd, c chan int) {
	stdout, _ := cmd.StdoutPipe()
	<-c
	scanner := bufio.NewScanner(stdout)
	for scanner.Scan() {
		m := scanner.Text()
		fmt.Println(m)
	}
	cmd.Wait()
	c <- 1
}(cmd, c)
cmd.Start()
c <- 1
// This is here so we don't exit the program early.
<-c
There's also a simpler way to do things, which is to just assign os.Stdout as the cmd's stdout, causing the command to write directly to os.Stdout:
cmd := exec.Command("some", "command")
cmd.Stdout = os.Stdout
cmd.Run()