Reading only data added since last read from buffer in golang

I am writing a program that uses SSH shells to execute a series of commands, some of which are long-running. The function below successfully reads the output until an end condition is reached; however, each time I read the buffer I get all of the output since the command started running, which I don't really need. Is there a way to get only the data added to the buffer since the last read?
func RunLongSSHCommand(o *bytes.Buffer, stdin io.Writer, command string, end string) bool {
    _, err := fmt.Fprintf(stdin, "%s\n", command)
    if err != nil {
        log.Fatal(err)
    }
    o.Reset()
    log.Info("buffer:", o.String())
    log.Println("Command: ", command)
    for {
        done := false
        scanner := bufio.NewScanner(o)
        for scanner.Scan() {
            fmt.Println(scanner.Text())
            if strings.Contains(scanner.Text(), end) {
                fmt.Println(scanner.Text())
                done = true
                break
            }
        }
        if done {
            break
        }
        //o.Reset()
        //log.Info("buffer:", o.String())
        time.Sleep(3 * time.Second)
    }
    return true
}
Code that creates buffer:
sess, client := ssh_ftp.SshConnect(dev3)
defer client.Close()
defer sess.Close()

// StdinPipe for commands
stdin, err := sess.StdinPipe()
if err != nil {
    log.Fatal(err)
}

// Start remote shell
err = sess.Shell()
if err != nil {
    log.Fatal(err)
}

// Uncomment to store output in variable
var b bytes.Buffer
sess.Stdout = &b
sess.Stderr = &b
New problem: the scan of the stdout pipe only breaks on a manual end condition; otherwise it hangs. Is there a way to avoid needing a manual end condition?
func RunLongSSHCommand(stdout io.Reader, stdin io.Writer, command string, end string) bool {
    _, err := fmt.Fprintf(stdin, "%s\n", command)
    if err != nil {
        log.Fatal(err)
    }
    log.Println("Command: ", command)
    scanner := bufio.NewScanner(stdout)
    print("scanning output")
    for scanner.Scan() {
        fmt.Println(scanner.Text())
        if strings.Contains(scanner.Text(), end) {
            fmt.Println("end")
            //done = true
            break
        }
    }
    print("done")
    return true
}
Function call:
RunLongSSHCommand(stdout, stdin, command, "Removing Workstation")
Pipe declaration:
stdout, err := sess.StdoutPipe()
if err != nil {
    log.Fatal(err)
}
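Switching from the shared bytes.Buffer to StdoutPipe is, incidentally, what answers the original question: a pipe is drained as you read it, so each Scan only ever sees data that arrived since the previous read. As for the hang, an interactive shell never closes its output stream between commands, so the scanner has no EOF to stop on without some end marker. A common workaround is to scan in a goroutine and treat a quiet period as completion. The sketch below is mine, not from the thread; the idle parameter replaces the end marker and its value is arbitrary:

func RunLongSSHCommand(stdout io.Reader, stdin io.Writer, command string, idle time.Duration) bool {
    if _, err := fmt.Fprintf(stdin, "%s\n", command); err != nil {
        log.Fatal(err)
    }
    lines := make(chan string)
    go func() {
        scanner := bufio.NewScanner(stdout)
        for scanner.Scan() {
            lines <- scanner.Text() // stays blocked here once the consumer stops receiving
        }
        close(lines)
    }()
    for {
        select {
        case line, ok := <-lines:
            if !ok {
                return true // session closed
            }
            fmt.Println(line)
        case <-time.After(idle):
            return true // no output for `idle`; assume the command finished
        }
    }
}

The cost is a wrong guess if a command legitimately goes quiet for longer than idle, and an abandoned goroutine per call; in a real program you would want one long-lived reader goroutine per session rather than one per call, since an abandoned scanner may still hold buffered output.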

Related

Concurrency but only do each task once

func main() {
    //switch statement here that runs grabusernames()
}

func grabusernames() {
    f, err := os.OpenFile("longlist.txt", os.O_RDONLY, os.ModePerm)
    if err != nil {
        log.Fatalf("open file error: %v", err)
        return
    }
    defer f.Close()

    rd := bufio.NewReader(f)
    for {
        line, err := rd.ReadString('\n')
        line2 := strings.TrimSpace(line)
        if err != nil {
            if err == io.EOF {
                break
            }
            log.Fatalf("read file line error: %v", err)
            return
        }
        tellonym(line2)
    }
}
func tellonym(line2 string) {
    threads := 10
    swg := sizedwaitgroup.New(threads)
    for i := 0; i < 1000; i++ {
        swg.Add()
        go func(i int) {
            defer swg.Done()
            var client http.Client
            resp, err := client.Get("https://tellonym.me/" + line2)
            if err != nil {
                log.Fatal(err)
            }
            defer resp.Body.Close()
            //fmt.Println("Response code: ", resp.StatusCode)
            if resp.StatusCode == 404 {
                fmt.Println("Username" + line2 + "not taken")
            } else if resp.StatusCode == 200 {
                fmt.Println("username " + line2 + " taken")
            } else {
                fmt.Println("Something else, response code: ", resp.StatusCode)
            }
        }(i)
    }
    swg.Wait()
}
The issue with the code above is that it checks the same username 1,000 times.
I'd like it to check each username in longlist.txt once, but concurrently (it's a long list and I'd like it to be fast).
Current output:
Username causenot taken
Username causenot taken
Username causenot taken
Username causenot taken
Desired output:
Username causenot taken
Username billybob taken
Username something taken
Username stacker taken
You have to call tellonym(line2) in a goroutine; the for loop inside tellonym is checking the same username 1,000 times.
func main() {
    //switch statement here that runs grabusernames()
}

func grabusernames() {
    f, err := os.OpenFile("longlist.txt", os.O_RDONLY, os.ModePerm)
    if err != nil {
        log.Fatalf("open file error: %v", err)
        return
    }
    defer f.Close()

    rd := bufio.NewReader(f)
    for {
        line, err := rd.ReadString('\n')
        line2 := strings.TrimSpace(line)
        if err != nil {
            if err == io.EOF {
                break
            }
            log.Fatalf("read file line error: %v", err)
            return
        }
        go tellonym(line2) // use goroutines here
    }
}
Also take care of these details:
If you're reading from an io.Reader, consider it reading from a stream. It's a single input source, which you can't 'read in parallel' because of its nature: under the hood, you're getting a byte, waiting for the next one, getting one more, and so on. Tokenizing it into words comes later, in the buffer.
Second, I hope you're not trying to use goroutines as a 'silver bullet' in a 'let's add goroutines and everything will just speed up' manner. Go giving you such an easy way to use concurrency doesn't mean you should use it everywhere.
And finally, if you really need to split a huge file into words in parallel and you think the splitting part will be the bottleneck (I don't know your case, but I really doubt it), then you have to invent your own algorithm and use the os package to Seek()/Read() parts of the file, each processed by its own goroutine, and somehow track which parts were already processed.
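To illustrate that last idea, here is a minimal sketch of my own (not from the answer). It assumes fixed-size byte chunks are acceptable; a real word-splitter would also have to extend each chunk to the next newline so words aren't cut in half:

func processChunks(path string, chunks int, process func([]byte)) error {
    f, err := os.Open(path)
    if err != nil {
        return err
    }
    defer f.Close()

    info, err := f.Stat()
    if err != nil {
        return err
    }
    chunkSize := info.Size()/int64(chunks) + 1

    var wg sync.WaitGroup
    for i := 0; i < chunks; i++ {
        wg.Add(1)
        go func(off int64) {
            defer wg.Done()
            buf := make([]byte, chunkSize)
            n, err := f.ReadAt(buf, off) // ReadAt is documented as safe for parallel use
            if err != nil && err != io.EOF {
                return
            }
            process(buf[:n])
        }(int64(i) * chunkSize)
    }
    wg.Wait()
    return nil
}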
Try this:
func grabusernames() {
    f, err := os.OpenFile("longlist.txt", os.O_RDONLY, os.ModePerm)
    if err != nil {
        log.Fatalf("open file error: %v", err)
        return
    }
    defer f.Close()

    rd := bufio.NewReader(f)
    ch := make(chan struct{}, 10) // semaphore: at most 10 requests in flight
    var sem sync.WaitGroup
    for {
        line, err := rd.ReadString('\n')
        line2 := strings.TrimSpace(line)
        if err != nil {
            if err == io.EOF {
                break
            }
            log.Fatalf("read file line error: %v", err)
            return
        }
        ch <- struct{}{} // blocks once 10 goroutines are already running
        sem.Add(1)
        go tellonym(line2, ch, &sem)
    }
    sem.Wait() // wait for the remaining goroutines to finish
}
func tellonym(line2 string, ch chan struct{}, sem *sync.WaitGroup) {
    defer func() {
        sem.Done()
        <-ch // release the semaphore slot
    }()
    var client http.Client
    resp, err := client.Get("https://tellonym.me/" + line2)
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()
    //fmt.Println("Response code: ", resp.StatusCode)
    if resp.StatusCode == 404 {
        fmt.Println("Username" + line2 + "not taken")
    } else if resp.StatusCode == 200 {
        fmt.Println("username " + line2 + " taken")
    } else {
        fmt.Println("Something else, response code: ", resp.StatusCode)
    }
}
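For comparison (not part of the original answers), newer versions of golang.org/x/sync/errgroup can express the same bounded-concurrency pattern with SetLimit playing the role of the buffered channel. In this sketch of mine, check is a hypothetical helper that performs the HTTP request for one username:

func grabusernames() error {
    f, err := os.Open("longlist.txt")
    if err != nil {
        return err
    }
    defer f.Close()

    g := new(errgroup.Group)
    g.SetLimit(10) // at most 10 checks in flight, like the ch <- struct{}{} semaphore
    scanner := bufio.NewScanner(f)
    for scanner.Scan() {
        username := strings.TrimSpace(scanner.Text())
        g.Go(func() error {
            return check(username) // hypothetical: does the GET and prints the result
        })
    }
    if err := g.Wait(); err != nil {
        return err
    }
    return scanner.Err()
}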

Redact sensitive data through a custom io.Writer in Go

I am executing some exec.Commands that output sensitive data. I want to filter this data out. Since you can set the stdout writer on the Command struct, my idea is to write a custom io.Writer that consumes the output and filters it by a given word.
type passwordFilter struct {
    keyWord string
}

func (pf passwordFilter) Write(p []byte) (n int, err error) {
    // this is where I have no idea what to do
    // I think I should somehow use a scanner and then filter
    // out = strings.Replace(out, pf.keyWord, "*******", -1)
    // something like this
    // but I have to deal with byte array here
}

func main() {
    pf := passwordFilter{keyWord: "password123"}
    cmd := exec.Command(someBinaryFile)
    cmd.Stdout = pf
    cmd.Stderr = &stderr
    if err := cmd.Run(); err != nil {
        log.Fatal(err)
    }
}
I'm not sure if I'm headed the right way here, but I'm sure I can somehow reuse the existing io.Writers or scanners here.
Use Cmd.StdoutPipe to get a reader on the program output. Use a scanner on that reader.
cmd := exec.Command(someBinaryFile)
r, err := cmd.StdoutPipe()
if err != nil {
    log.Fatal(err)
}
if err := cmd.Start(); err != nil {
    log.Fatal(err)
}
s := bufio.NewScanner(r)
for s.Scan() {
    out := s.Text()
    out = strings.Replace(out, pf.keyWord, "*******", -1)
    // write out to destination
}
if s.Err() != nil {
    log.Fatal(s.Err())
}
if err := cmd.Wait(); err != nil {
    log.Fatal(err)
}
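If you'd rather keep the original plan of a type you can assign to cmd.Stdout directly, a line-buffered writer works too. This sketch is mine, not from the answer (the dst field and the Flush method are additions); it assumes the keyword never contains a newline, since buffering whole lines is what keeps a password from being split across two Write calls:

type passwordFilter struct {
    keyWord string
    dst     io.Writer    // where redacted output goes, e.g. os.Stdout
    buf     bytes.Buffer // holds any partial line between Write calls
}

func (pf *passwordFilter) Write(p []byte) (int, error) {
    pf.buf.Write(p)
    for {
        line, err := pf.buf.ReadString('\n')
        if err != nil {
            // no complete line yet; put the partial line back and wait for more
            pf.buf.WriteString(line)
            return len(p), nil
        }
        line = strings.Replace(line, pf.keyWord, "*******", -1)
        if _, err := pf.dst.Write([]byte(line)); err != nil {
            return len(p), err
        }
    }
}

// Flush writes out a final line that had no trailing newline; call it after cmd.Run.
func (pf *passwordFilter) Flush() error {
    rest := strings.Replace(pf.buf.String(), pf.keyWord, "*******", -1)
    pf.buf.Reset()
    _, err := pf.dst.Write([]byte(rest))
    return err
}

You would then set cmd.Stdout = &passwordFilter{keyWord: "password123", dst: os.Stdout} (note the pointer, since Write mutates the buffer) and call Flush once the command has finished.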

Is there a good way to cancel a blocking read?

I've got a command I have to run via the OS, let's call it 'login', that is interactive and therefore requires me to read from stdin and pass it to the command's stdin pipe in order for it to work correctly. The only problem is that the goroutine blocks on a read from stdin, and I haven't been able to find a way to cancel a Reader in Go so that it doesn't hang on the blocking call. For example, from the perspective of the user, after the command looks as if it has completed, you still have the opportunity to write to stdin once more (then the goroutine moves past the blocking read and exits).
Ideally I would like to avoid having to parse output from the command's StdoutPipe, as that makes my code frail and prone to error if the strings of the login command were to change.
loginCmd := exec.Command("login")
stdin , err := loginCmd.StdinPipe()
if err != nil {
return err
}
out, err := loginCmd.StdoutPipe()
if err != nil {
return err
}
if err := loginCmd.Start(); err != nil {
return err
}
ctx, cancel := context.WithCancel(context.TODO())
var done sync.WaitGroup
done.Add(1)
ready := make(chan bool, 1)
defer cancel()
go func(ctx context.Context) {
reader := bufio.NewReader(os.Stdin)
for {
select {
case <- ctx.Done():
done.Done()
return
default:
//blocks on this line, if a close can unblock the read, then it should exit normally via the ctx.Done() case
line, err :=reader.ReadString('\n')
if err != nil {
fmt.Println("Error: ", err.Error())
}
stdin.Write([]byte(line))
}
}
}(ctx)
var bytesRead = 4096
output := make([]byte, bytesRead)
reader := bufio.NewReader(out)
for err == nil {
bytesRead, err = reader.Read(output)
if err != nil && err != io.EOF {
return err
}
fmt.Printf("%s", output[:bytesRead])
}
if err := loginCmd.Wait(); err != nil {
return err
}
cancel()
done.Wait()
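There is no portable way to interrupt a blocking read on os.Stdin. The workaround most often suggested (the sketch below is mine, not code from the question) is to move the blocking read into a goroutine that publishes lines on a channel, and select on that channel alongside ctx.Done() in the consumer. The reading goroutine itself is never cancelled; it simply stays parked in Scan and is abandoned:

lines := make(chan string)
go func() {
    scanner := bufio.NewScanner(os.Stdin)
    for scanner.Scan() {
        lines <- scanner.Text()
    }
    close(lines)
}()

for {
    select {
    case <-ctx.Done():
        return nil // stop consuming; the reader goroutine stays parked in Scan
    case line, ok := <-lines:
        if !ok {
            return nil // stdin was closed
        }
        if _, err := io.WriteString(stdin, line+"\n"); err != nil {
            return err
        }
    }
}

This removes the need for the done WaitGroup and the racy default branch above, at the cost of one goroutine that may live until the process exits.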

Writing twice to the same subprocess in golang

I have a simple scp function that is just a wrapper over the scp cli tool.
type credential struct {
    username string
    password string
    host     string
    port     string
}

func scpFile(filepath, destpath string, c *credential) error {
    cmd := exec.Command("scp", filepath, c.username+"@"+c.host+":"+destpath)
    if err := cmd.Run(); err != nil {
        return err
    }
    fmt.Println("done")
    return nil
}
This works just fine. Now I want to add the capability of supplying a password if scp asks for one. This is what I came up with:
func scpFile(filepath, destpath string, c *credential) error {
    cmd := exec.Command("scp", filepath, c.username+"@"+c.host+":"+destpath)
    stdin, err := cmd.StdinPipe()
    if err != nil {
        return err
    }
    defer stdin.Close()
    if err := cmd.Start(); err != nil {
        return err
    }
    io.WriteString(stdin, c.password+"\n")
    cmd.Wait()
    fmt.Println("done")
    return nil
}
This does not work; the password prompt just hangs there. I tried adding a one-second sleep before writing to stdin, thinking maybe I was writing the password too fast, but it made no difference.
So I was able to find a workaround: instead of trying to send the password to stdin (scp prompts for it on the controlling terminal, not on stdin, so the pipe never reaches it), I create an SSH session and scp the file through that session. Here is the new scpFile function:
func scpFile(filePath, destinationPath string, session *ssh.Session) error {
    defer session.Close()
    f, err := os.Open(filePath)
    if err != nil {
        return err
    }
    defer f.Close()
    s, err := f.Stat()
    if err != nil {
        return err
    }
    go func() {
        w, _ := session.StdinPipe()
        defer w.Close()
        // "C" header: permissions, size and file name, per the scp protocol
        fmt.Fprintf(w, "C%#o %d %s\n", s.Mode().Perm(), s.Size(), path.Base(filePath))
        io.Copy(w, f)
        fmt.Fprint(w, "\x00") // the transfer ends with a zero byte
    }()
    cmd := fmt.Sprintf("scp -t %s", destinationPath)
    if err := session.Run(cmd); err != nil {
        return err
    }
    return nil
}
This could probably be made better, but the main idea is there.
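For context, the session here comes from golang.org/x/crypto/ssh. A minimal usage sketch of mine (the dialing details and addresses are assumed; each transfer needs a fresh session, since scpFile closes it):

config := &ssh.ClientConfig{ /* user, auth, host key callback */ }
client, err := ssh.Dial("tcp", "host:22", config)
if err != nil {
    log.Fatal(err)
}
defer client.Close()

session, err := client.NewSession()
if err != nil {
    log.Fatal(err)
}
if err := scpFile("local.txt", "/tmp/remote.txt", session); err != nil {
    log.Fatal(err)
}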

Golang - Copy exec output to a buffer

I'm writing a function that exec's a program and returns stdout and stderr. It also has the option to display the output on the console. I'm clearly not waiting on something, because if I run the function twice in a row the outputs are different. Here's a sample program; replace the dir var with a directory containing a lot of files, to fill up the buffers:
func main() {
    dir := "SOMEDIRECTORYWITHALOTOFFILES"
    out, err := run("ls -l "+dir, true)
    if err != nil {
        log.Fatalf("run returned %s", err)
    }
    log.Printf("Out: %s", out)
    out2, err := run("ls -l "+dir, false)
    if err != nil {
        log.Fatalf("run returned %s", err)
    }
    log.Printf("Out2: %s", out2)
    if out != out2 {
        log.Fatalf("Out mismatch")
    }
}

func run(cmd string, displayOutput bool) (string, error) {
    var command *exec.Cmd
    command = exec.Command("/bin/sh", "-c", cmd)
    var output bytes.Buffer
    stdout, err := command.StdoutPipe()
    if err != nil {
        return "", fmt.Errorf("Unable to setup stdout for command: %v", err)
    }
    go func() {
        if displayOutput == true {
            w := io.MultiWriter(os.Stdout, &output)
            io.Copy(w, stdout)
        } else {
            output.ReadFrom(stdout)
        }
    }()
    stderr, err := command.StderrPipe()
    if err != nil {
        return "", fmt.Errorf("Unable to setup stderr for command: %v", err)
    }
    go func() {
        if displayOutput == true {
            w := io.MultiWriter(os.Stderr, &output)
            io.Copy(w, stderr)
        } else {
            output.ReadFrom(stderr)
        }
    }()
    err = command.Run()
    if err != nil {
        return "", err
    }
    return output.String(), nil
}
Here is a simplified and working revision of your example. Note that the test command was swapped out so that I could test on Windows, and that your error checks have been omitted only for brevity.
The key change is that a sync.WaitGroup is preventing the run function from printing the output and returning until the goroutine has indicated that it's finished.
func main() {
    dir := "c:\\windows\\system32"
    command1 := exec.Command("cmd", "/C", "dir", "/s", dir)
    command2 := exec.Command("cmd", "/C", "dir", "/s", dir)
    out1, _ := run(command1)
    out2, _ := run(command2)
    log.Printf("Length [%d] vs [%d]\n", len(out1), len(out2))
}

func run(cmd *exec.Cmd) (string, error) {
    var output bytes.Buffer
    var waitGroup sync.WaitGroup
    stdout, _ := cmd.StdoutPipe()
    writer := io.MultiWriter(os.Stdout, &output)
    waitGroup.Add(1)
    go func() {
        defer waitGroup.Done()
        io.Copy(writer, stdout)
    }()
    cmd.Run()
    waitGroup.Wait()
    return output.String(), nil
}
I see some problems:
- You should be waiting for the goroutines to finish (e.g., using sync.WaitGroup).
- You're accessing output concurrently in two goroutines, which is not safe.
- You could collect stdout and stderr in two separate buffers and return them separately, if that works for what you're trying to do (see the sketch after this list).
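A sketch of that last suggestion (mine, not from the answer). When Stdout and Stderr are not *os.File values, os/exec copies the output in its own internal goroutines, and Run does not return until those copies finish, so no pipes or extra synchronization are needed:

func run(cmd *exec.Cmd, displayOutput bool) (string, string, error) {
    var stdout, stderr bytes.Buffer
    if displayOutput {
        // echo to the console while also capturing
        cmd.Stdout = io.MultiWriter(os.Stdout, &stdout)
        cmd.Stderr = io.MultiWriter(os.Stderr, &stderr)
    } else {
        cmd.Stdout = &stdout
        cmd.Stderr = &stderr
    }
    err := cmd.Run() // Run waits for exec's internal copying goroutines
    return stdout.String(), stderr.String(), err
}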
