Is there a good way to cancel a blocking read? - go

I've got a command I have to run via the OS, let's call it 'login', that is interactive: it requires me to read from stdin and pass the input to the command's stdin pipe in order for it to work correctly. The only problem is that the goroutine blocks on a read from stdin, and I haven't been able to find a way to cancel a Reader in Go so that it doesn't hang on that blocking call. For example, from the user's perspective, after the command looks as if it completed, you still get the opportunity to write to stdin once more (only then does the goroutine move past the blocking read and exit).
Ideally I would like to avoid having to parse output from the command's StdoutPipe, as that makes my code fragile and prone to breaking if the strings printed by the login command were to change.
loginCmd := exec.Command("login")
stdin, err := loginCmd.StdinPipe()
if err != nil {
	return err
}
out, err := loginCmd.StdoutPipe()
if err != nil {
	return err
}
if err := loginCmd.Start(); err != nil {
	return err
}
ctx, cancel := context.WithCancel(context.TODO())
var done sync.WaitGroup
done.Add(1)
ready := make(chan bool, 1)
defer cancel()
go func(ctx context.Context) {
	reader := bufio.NewReader(os.Stdin)
	for {
		select {
		case <-ctx.Done():
			done.Done()
			return
		default:
			// blocks on this line; if a close could unblock the read,
			// then it should exit normally via the ctx.Done() case
			line, err := reader.ReadString('\n')
			if err != nil {
				fmt.Println("Error: ", err.Error())
			}
			stdin.Write([]byte(line))
		}
	}
}(ctx)
var bytesRead = 4096
output := make([]byte, bytesRead)
reader := bufio.NewReader(out)
for err == nil {
	bytesRead, err = reader.Read(output)
	if err != nil && err != io.EOF {
		return err
	}
	fmt.Printf("%s", output[:bytesRead])
}
if err := loginCmd.Wait(); err != nil {
	return err
}
cancel()
done.Wait()
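
One common workaround (a sketch against the code above, not a way to actually cancel the Read) is to confine the blocking ReadString to its own goroutine that publishes lines on a channel; the goroutine forwarding to the command's stdin can then select between that channel and ctx.Done(), so it exits promptly on cancel even though the stdin reader itself may stay blocked on one final read or send:

lines := make(chan string)
go func() {
	// This goroutine is the only one that touches os.Stdin. It may stay
	// blocked on ReadString (or on the channel send) after cancellation,
	// but nothing else waits on it.
	reader := bufio.NewReader(os.Stdin)
	for {
		line, err := reader.ReadString('\n')
		if err != nil {
			close(lines)
			return
		}
		lines <- line
	}
}()
go func() {
	defer done.Done()
	for {
		select {
		case <-ctx.Done():
			return
		case line, ok := <-lines:
			if !ok {
				return
			}
			stdin.Write([]byte(line))
		}
	}
}()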

Related

correct websocket connection closure

I wrote a connection closure function. It sends a closing frame and expects the same in response.
func TryCloseNormally(wsConn *websocket.Conn) error {
	closeNormalClosure := websocket.FormatCloseMessage(websocket.CloseNormalClosure, "")
	defer wsConn.Close()
	if err := wsConn.WriteControl(websocket.CloseMessage, closeNormalClosure, time.Now().Add(time.Second)); err != nil {
		return err
	}
	if err := wsConn.SetReadDeadline(time.Now().Add(time.Second)); err != nil {
		return err
	}
	_, _, err := wsConn.ReadMessage()
	if websocket.IsCloseError(err, websocket.CloseNormalClosure) {
		return nil
	} else {
		return errors.New("Websocket doesn't send a close frame in response")
	}
}
I wrote a test for this function.
func TestTryCloseNormally(t *testing.T) {
	done := make(chan struct{})
	exit := make(chan struct{})
	ctx := context.Background()
	ln, err := net.Listen("tcp", "localhost:")
	require.Nil(t, err)
	handler := HandlerFunc(func(conn *websocket.Conn) {
		for {
			_, _, err := conn.ReadMessage()
			if err != nil {
				require.True(t, websocket.IsCloseError(err, websocket.CloseNormalClosure), err.Error())
				return
			}
		}
	})
	s, err := makeServer(ctx, handler)
	require.Nil(t, err)
	go func() {
		require.Nil(t, s.Run(ctx, exit, ln))
		close(done)
	}()
	wsConn, _, err := websocket.DefaultDialer.Dial(addr+strconv.Itoa(ln.Addr().(*net.TCPAddr).Port), nil) //nolint:bodyclose
	require.Nil(t, err)
	require.Nil(t, wsConn.WriteMessage(websocket.BinaryMessage, []byte{'o', 'k'}))
	require.Nil(t, TryCloseNormally(wsConn))
	close(exit)
	<-done
}
To my surprise, it works correctly: ReadMessage() reads the closing frame. But in the test handler I never write a close frame back.
Is this happening at the gorilla/websocket level?
Did I write the function correctly? Maybe reading the response frame also happens at the gorilla level.
The function is mostly correct.
Websocket endpoints echo close messages unless the endpoint has already sent a close message of its own. See the Closing Handshake section of the WebSocket RFC for more details.
In the normal close scenario, an application should expect to receive a close message after sending a close message.
To handle the case where the peer sent a data message before sending the close message, read and discard data messages until an error is returned.
func TryCloseNormally(wsConn *websocket.Conn) error {
	defer wsConn.Close()
	closeNormalClosure := websocket.FormatCloseMessage(websocket.CloseNormalClosure, "")
	if err := wsConn.WriteControl(websocket.CloseMessage, closeNormalClosure, time.Now().Add(time.Second)); err != nil {
		return err
	}
	if err := wsConn.SetReadDeadline(time.Now().Add(time.Second)); err != nil {
		return err
	}
	for {
		_, _, err := wsConn.ReadMessage()
		if websocket.IsCloseError(err, websocket.CloseNormalClosure) {
			return nil
		}
		if err != nil {
			return err
		}
	}
}
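
As for why the test works without the handler writing anything: as far as I can tell this is gorilla/websocket's documented default behavior (an addition here, not part of the answer above). When ReadMessage encounters a close frame it invokes the connection's close handler, and the default handler echoes a close frame back to the peer. You can observe or override that with SetCloseHandler; a sketch that roughly mirrors the default:

wsConn.SetCloseHandler(func(code int, text string) error {
	// Echo the close frame back, which is approximately what the default handler does.
	msg := websocket.FormatCloseMessage(code, "")
	return wsConn.WriteControl(websocket.CloseMessage, msg, time.Now().Add(time.Second))
})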

script command execution hung forever in go program

func Run() error {
	log.Info("In Run Command")
	cmd := exec.Command("bash", "/opt/AlterKafkaTopic.sh")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		return err
	}
	if err = cmd.Start(); err != nil {
		return err
	}
	f, err := os.Create(filepath.Join("/opt/log/", "execution.log"))
	if err != nil {
		return err
	}
	if _, err := io.Copy(f, stdout); err != nil {
		return err
	}
	if err := cmd.Wait(); err != nil {
		return err
	}
	return f.Close()
}
I am trying to execute a bash script from Go code. The script changes some Kafka topic properties, but the execution hangs at io.Copy(f, stdout) and never continues past it.
This program is running on a RHEL 7.2 server.
Could someone suggest where I am going wrong?
From the docs:
Wait will close the pipe after seeing the command exit.
In other words, io.Copy exits when Wait() is called, but Wait is never called because it's blocked by Copy. Either run Copy in a goroutine, or simply assign f to cmd.Stdout:
f, err := os.Create(filepath.Join("/opt/log/", "execution.log"))
// TODO: Handle error
defer f.Close()

cmd := exec.Command("bash", "/opt/AlterKafkaTopic.sh")
cmd.Stdout = f
err = cmd.Run()
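
Folding that snippet back into the question's Run function might look like the following sketch (error handling kept minimal; stderr is left alone, as in the original):

func Run() error {
	f, err := os.Create(filepath.Join("/opt/log/", "execution.log"))
	if err != nil {
		return err
	}
	defer f.Close()

	cmd := exec.Command("bash", "/opt/AlterKafkaTopic.sh")
	cmd.Stdout = f // the process writes straight to the file; no pipe, no io.Copy
	return cmd.Run()
}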

Go Stdin Stdout communication

I want to execute a Go program, which I specify in a YAML config file, and send it a struct as bytes. How could I do this?
I thought I could use stdin and stdout for this, but I can't figure it out.
YAML config:
subscribers:
  temp:
    topics:
      - pi/+/temp
    action: ./temp/tempBinary
This is my code:
client.Subscribe(NewTopic(a), func(c *Client, msg Message) {
	cmd := exec.Command(v.Action)
	// I actually want to send [msg] to it so it can be used there
	cmd.Stdin = bytes.NewReader(msg.Bytes())
	if err := cmd.Start(); err != nil {
		c.Logger.Infof("Error while executing action: %v", err)
	} else {
		c.Logger.Info("Executed command")
	}
	// I want to handle responses from the called binary
	var out bytes.Buffer
	cmd.Stdout = &out
	c.Logger.Infof("Response: %v", out)
})
I can't figure out how exactly I could do this.
There are good examples of what you need at https://golang.org/pkg/os/exec/#example_Cmd_StdinPipe, https://golang.org/pkg/os/exec/#example_Cmd_StdoutPipe and https://golang.org/pkg/io/ioutil/#example_ReadAll
A decent start would be something like:
stdin, err := cmd.StdinPipe()
if err != nil {
	log.Fatal(err)
}
stdout, err := cmd.StdoutPipe()
if err != nil {
	log.Fatal(err)
}
defer stdin.Close()
io.WriteString(stdin, msg.String())
b, err := ioutil.ReadAll(stdout)
if err != nil {
	log.Fatal(err)
}
c.Logger.Infof("Response %s", b)
But this solution doesn't even begin to handle edge cases such as pipes being closed early etc.
This video does a good job of talking through stuff like this:
https://www.youtube.com/watch?v=LHZ2CAZE6Gs&feature=youtu.be&list=PL6
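
A slightly fuller sketch of the same idea, filling in the Start/Wait calls that the snippet above leaves out (msg, c.Logger and v.Action are the names from the question; the Message methods are assumed to exist as used above):

client.Subscribe(NewTopic(a), func(c *Client, msg Message) {
	cmd := exec.Command(v.Action)
	stdin, err := cmd.StdinPipe()
	if err != nil {
		c.Logger.Infof("Error creating stdin pipe: %v", err)
		return
	}
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		c.Logger.Infof("Error creating stdout pipe: %v", err)
		return
	}
	if err := cmd.Start(); err != nil {
		c.Logger.Infof("Error while executing action: %v", err)
		return
	}
	// Send the message, then close stdin so the child sees EOF.
	io.WriteString(stdin, msg.String())
	stdin.Close()
	// Read everything the child prints before calling Wait.
	b, err := ioutil.ReadAll(stdout)
	if err != nil {
		c.Logger.Infof("Error reading response: %v", err)
		return
	}
	if err := cmd.Wait(); err != nil {
		c.Logger.Infof("Command failed: %v", err)
		return
	}
	c.Logger.Infof("Response: %s", b)
})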

writing twice to the same sub process golang

I have a simple scp function that is just a wrapper over the scp cli tool.
type credential struct {
	username string
	password string
	host     string
	port     string
}

func scpFile(filepath, destpath string, c *credential) error {
	cmd := exec.Command("scp", filepath, c.username+"@"+c.host+":"+destpath)
	if err := cmd.Run(); err != nil {
		return err
	}
	fmt.Println("done")
	return nil
}
This works just fine. Now I want to add the ability to supply a password over SSH if scp asks for one. This is what I came up with:
func scpFile(filepath, destpath string, c *credential) error {
	cmd := exec.Command("scp", filepath, c.username+"@"+c.host+":"+destpath)
	stdin, err := cmd.StdinPipe()
	if err != nil {
		return err
	}
	defer stdin.Close()
	if err := cmd.Start(); err != nil {
		return err
	}
	io.WriteString(stdin, c.password+"\n")
	cmd.Wait()
	fmt.Println("done")
	return nil
}
This does not work; the password prompt just hangs there. I tried adding a one-second sleep before writing to stdin, thinking maybe I was writing the password too fast, but it made no difference.
I was able to find a workaround: instead of trying to send the password to stdin, I create an SSH session and scp the file through that session (ssh and scp typically read passwords from the controlling terminal rather than from stdin, which is likely why writing to the pipe never reached the prompt). Here is the new scpFile function:
func scpFile(filePath, destinationPath string, session *ssh.Session) error {
	defer session.Close()
	f, err := os.Open(filePath)
	if err != nil {
		return err
	}
	defer f.Close()
	s, err := f.Stat()
	if err != nil {
		return err
	}
	go func() {
		w, _ := session.StdinPipe()
		defer w.Close()
		fmt.Fprintf(w, "C%#o %d %s\n", s.Mode().Perm(), s.Size(), path.Base(filePath))
		io.Copy(w, f)
		fmt.Fprint(w, "\x00")
	}()
	cmd := fmt.Sprintf("scp -t %s", destinationPath)
	if err := session.Run(cmd); err != nil {
		return err
	}
	return nil
}
This could probably be made better, but the main idea is there.
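
For anyone wiring this up, here is a rough sketch of producing the *ssh.Session that scpFile expects (the host, user, and password values are placeholders, password auth is just one option, and golang.org/x/crypto/ssh is assumed):

config := &ssh.ClientConfig{
	User: "user",
	Auth: []ssh.AuthMethod{
		ssh.Password("password"),
	},
	// Sketch only: verify host keys properly in real code.
	HostKeyCallback: ssh.InsecureIgnoreHostKey(),
}
client, err := ssh.Dial("tcp", "host:22", config)
if err != nil {
	return err
}
defer client.Close()
session, err := client.NewSession()
if err != nil {
	return err
}
// scpFile closes the session itself via defer.
if err := scpFile("/tmp/local.txt", "/tmp/remote.txt", session); err != nil {
	return err
}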

golang scp file using crypto/ssh

I'm trying to download a remote file over SSH.
The following approach works fine from a shell:
ssh hostname "tar cz /opt/local/folder" > folder.tar.gz
However, the same approach in Go produces a different artifact size. For example, the same folder produces a 179 B gz file with the pure shell command and a 178 B file with the Go script.
I assume that something is being missed by the io.Reader or that the session gets closed early. I kindly ask for your help.
Here is the example of my script:
func executeCmd(cmd, hostname string, config *ssh.ClientConfig, path string) error {
	conn, _ := ssh.Dial("tcp", hostname+":22", config)
	session, err := conn.NewSession()
	if err != nil {
		panic("Failed to create session: " + err.Error())
	}
	r, _ := session.StdoutPipe()
	scanner := bufio.NewScanner(r)
	go func() {
		defer session.Close()
		name := fmt.Sprintf("%s/backup_folder_%v.tar.gz", path, time.Now().Unix())
		file, err := os.OpenFile(name, os.O_APPEND|os.O_WRONLY|os.O_CREATE, 0644)
		if err != nil {
			panic(err)
		}
		defer file.Close()
		for scanner.Scan() {
			fmt.Println(scanner.Bytes())
			if err := scanner.Err(); err != nil {
				fmt.Println(err)
			}
			if _, err = file.Write(scanner.Bytes()); err != nil {
				log.Fatal(err)
			}
		}
	}()
	if err := session.Run(cmd); err != nil {
		fmt.Println(err.Error())
		panic("Failed to run: " + err.Error())
	}
	return nil
}
Thanks!
bufio.Scanner is for newline-delimited text. According to the documentation, the scanner removes the newline characters, stripping any newline bytes (value 10) out of your binary file.
You don't need a goroutine to do the copy, because you can use session.Start to start the process asynchronously.
You probably don't need bufio either. You should be using io.Copy to copy the file, which has an internal buffer already, on top of any buffering done in the ssh client itself. If an additional buffer is needed for performance, wrap the session output in a bufio.Reader.
Finally, you return an error value, so use it rather than panicking on regular error conditions.
conn, err := ssh.Dial("tcp", hostname+":22", config)
if err != nil {
	return err
}
session, err := conn.NewSession()
if err != nil {
	return err
}
defer session.Close()

r, err := session.StdoutPipe()
if err != nil {
	return err
}

name := fmt.Sprintf("%s/backup_folder_%v.tar.gz", path, time.Now().Unix())
file, err := os.OpenFile(name, os.O_APPEND|os.O_WRONLY|os.O_CREATE, 0644)
if err != nil {
	return err
}
defer file.Close()

if err := session.Start(cmd); err != nil {
	return err
}

if _, err := io.Copy(file, r); err != nil {
	return err
}

if err := session.Wait(); err != nil {
	return err
}
return nil
You can try doing something like this:
r, _ := session.StdoutPipe()
reader := bufio.NewReader(r)
go func() {
	defer session.Close()
	// open file etc
	// 10 is the number of bytes you'd like to copy in one write operation
	p := make([]byte, 10)
	for {
		n, err := reader.Read(p)
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatal("err", err)
		}
		if _, err = file.Write(p[:n]); err != nil {
			log.Fatal(err)
		}
	}
}()
Make sure your goroutines are synchronized properly so the output is completely written to the file.
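
One way to get that synchronization (a sketch assuming the reader, file, session, and cmd variables from the snippets above; the sync package would need to be imported) is to wait on a WaitGroup after session.Run returns, so the function only finishes once the copy loop has drained the pipe:

var wg sync.WaitGroup
wg.Add(1)
go func() {
	defer wg.Done()
	defer file.Close()
	p := make([]byte, 32*1024)
	for {
		n, err := reader.Read(p)
		if n > 0 {
			if _, werr := file.Write(p[:n]); werr != nil {
				log.Println(werr)
				return
			}
		}
		if err == io.EOF {
			return
		}
		if err != nil {
			log.Println(err)
			return
		}
	}
}()
if err := session.Run(cmd); err != nil {
	return err
}
wg.Wait()
return nil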
