Redact sensitive data through a custom io.Writer in Go

I am executing some exec.Commands that output sensitive data, and I want to filter that data out. Since you can set the stdout writer on the Command struct, my idea is to write a custom io.Writer that consumes the output and redacts a given word.
type passwordFilter struct {
    keyWord string
}

func (pf passwordFilter) Write(p []byte) (n int, err error) {
    // this is where I have no idea what to do
    // I think I should somehow use a scanner and then filter
    // out = strings.Replace(out, pf.keyWord, "*******", -1)
    // something like this
    // but I have to deal with byte array here
}

func main() {
    pf := passwordFilter{keyWord: "password123"}
    cmd := exec.Command(someBinaryFile)
    cmd.Stdout = pf
    cmd.Stderr = &stderr
    if err := cmd.Run(); err != nil {
        log.Fatal(err)
    }
}
I'm not sure if I'm headed the right way here, but I'm sure I can somehow reuse the existing io.Writers or scanners.

Use Cmd.StdoutPipe to get a reader on the program output. Use a scanner on that reader.
cmd := exec.Command(someBinaryFile)
r, err := cmd.StdoutPipe()
if err != nil {
    log.Fatal(err)
}
if err := cmd.Start(); err != nil {
    log.Fatal(err)
}
s := bufio.NewScanner(r)
for s.Scan() {
    out := s.Text()
    out = strings.Replace(out, pf.keyWord, "*******", -1)
    // write out to destination
}
if s.Err() != nil {
    log.Fatal(s.Err())
}
if err := cmd.Wait(); err != nil {
    log.Fatal(err)
}
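If you do want to keep the custom io.Writer shape from the question, a minimal sketch could look like the following. It assumes the output is line-oriented and the keyword never spans a newline; lineFilter, dst, and the echo command are illustrative names for this sketch, not taken from the question.

package main

import (
    "bytes"
    "io"
    "log"
    "os"
    "os/exec"
    "strings"
)

// lineFilter buffers written bytes until a full line is available,
// redacts the keyword in that line, and forwards it to dst.
type lineFilter struct {
    keyWord string
    dst     io.Writer
    buf     bytes.Buffer
}

func (f *lineFilter) Write(p []byte) (int, error) {
    f.buf.Write(p) // Write on bytes.Buffer never returns an error
    for {
        line, err := f.buf.ReadString('\n')
        if err != nil {
            // No complete line yet: put the partial line back and wait for more data.
            f.buf.WriteString(line)
            break
        }
        line = strings.Replace(line, f.keyWord, "*******", -1)
        if _, err := f.dst.Write([]byte(line)); err != nil {
            return len(p), err
        }
    }
    return len(p), nil
}

func main() {
    pf := &lineFilter{keyWord: "password123", dst: os.Stdout}
    cmd := exec.Command("echo", "the password is password123")
    cmd.Stdout = pf
    cmd.Stderr = os.Stderr
    if err := cmd.Run(); err != nil {
        log.Fatal(err)
    }
}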

Related

Get the amount of bytes streamed from an io.Reader Read

Here is a snippet of my code that does a GET request, and streams the response into cmd.Stdin.
resp, err = httpClient.Get(url)
if err != nil {
    err = errors.Wrap(err, "HTTP request failed")
    return
}
reader = bufio.NewReader(resp.Body)
args = append(args, "-") // Keep listening on stdin for file data
cmd := exec.Command("exiftool", args...)
stdout, err := cmd.StdoutPipe()
if err != nil {
    return
}
cmd.Stdin = reader
err = cmd.Start()
if err != nil {
    return
}
I want to know how much data has been streamed by the time the command finishes executing.
So I need to capture what is being read while it's streaming, or at least the size of what is read.
Wrap the reader. Count in the wrapper.
type wrapper struct {
    io.Reader
    n int
}

func (w *wrapper) Read(p []byte) (int, error) {
    n, err := w.Reader.Read(p)
    w.n += n
    return n, err
}
Plug it into your application like this:
args = append(args, "-")
cmd := exec.Command("exiftool", args...)
stdout, err := cmd.StdoutPipe()
if err != nil {
return
}
reader := &wrapper{Reader: resp.Body}
cmd.Stdin = reader
err = cmd.Run()
if err != nil {
return
}
fmt.Println(reader.n) // prints number of bytes read.
Because the exec package uses a buffer when copying from the response to stdin, a bufio.Reader is unlikely to provide a benefit. In case it does, use one of these options:
reader := &wrapper{Reader: bufio.NewReader(resp.Body)} // Option 1
cmd.Stdin = bufio.NewReader(reader) // Option 2
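For a self-contained illustration of the counting wrapper, here is a minimal sketch; the use of cat and strings.NewReader as the data source is just for the example and is not part of the question.

package main

import (
    "fmt"
    "io"
    "log"
    "os/exec"
    "strings"
)

// countingReader wraps an io.Reader and records how many bytes were read through it.
type countingReader struct {
    io.Reader
    n int
}

func (c *countingReader) Read(p []byte) (int, error) {
    n, err := c.Reader.Read(p)
    c.n += n
    return n, err
}

func main() {
    src := &countingReader{Reader: strings.NewReader("some streamed data\n")}
    cmd := exec.Command("cat") // stand-in for the real exiftool invocation
    cmd.Stdin = src
    out, err := cmd.Output()
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("read %d bytes, command echoed %q\n", src.n, out)
}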

Channels in Golang with TCP/IP socket not working

I just started writing a Golang client for a server that I've made in C with TCP/IP sockets, and then I figured out that my channel wasn't working.
Any ideas why?
func reader(r io.Reader, channel chan<- []byte) {
    buf := make([]byte, 2048)
    for {
        n, err := r.Read(buf[:])
        if err != nil {
            return
        }
        channel <- buf[0:n]
    }
}
func client(e *gowd.Element) {
    f, err := os.Create("/tmp/dat2")
    if err != nil {
        log.Fatal(err)
    }
    read := make(chan []byte)
    c, err := net.Dial("tcp", "127.0.0.1:4242")
    if err != nil {
        log.Fatal(err)
    }
    go reader(c, read)
    for {
        buf := <-read
        n := strings.Index(string(buf), "\n")
        if n == -1 {
            continue
        }
        msg := string(buf[0:n])
        if msg == "WELCOME" {
            fmt.Fprint(c, "GRAPHIC\n")
        }
        f.WriteString(msg + "\n")
    }
}
Testing my server with netcat results in the following output: http://pasted.co/a37b2954
But with my Go client I only get: http://pasted.co/f13d56b4
I'm new to channels in Golang, so maybe I'm wrong (I probably am).
The channel usage itself looks alright; the problem is that each receive at buf := <-read overwrites the previously received value while you wait for a newline, and because the slices sent on the channel all share the reader's underlying buffer, data can be dropped or overwritten before you use it.
You can also use a bufio.Reader to read a string up to the newline.
Your code snippet is partial, so it's not feasible to execute; try the following and let me know:
func reader(r io.Reader, channel chan<- string) {
    bufReader := bufio.NewReader(r)
    for {
        msg, err := bufReader.ReadString('\n')
        if err != nil { // connection error, connection reset, etc.
            break
        }
        channel <- msg
    }
}
func client(e *gowd.Element) {
    f, err := os.Create("/tmp/dat2")
    if err != nil {
        log.Fatal(err)
    }
    read := make(chan string)
    c, err := net.Dial("tcp", "127.0.0.1:4242")
    if err != nil {
        log.Fatal(err)
    }
    go reader(c, read)
    for {
        msg := <-read
        // ReadString keeps the trailing '\n', so trim it before comparing.
        msg = strings.TrimRight(msg, "\r\n")
        if msg == "WELCOME" {
            fmt.Fprint(c, "GRAPHIC\n")
        }
        f.WriteString(msg + "\n")
    }
    //...
}
EDIT:
Here is an example of a generic TCP client that reads data. I have also removed the scanner from the code snippet above and used a buffered reader instead.
func main() {
    conn, err := net.Dial("tcp", "127.0.0.1:4242")
    if err != nil {
        log.Fatal(err)
    }
    reader := bufio.NewReader(conn)
    for {
        msg, err := reader.ReadString('\n')
        if err != nil {
            break
        }
        fmt.Println(msg)
    }
}
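If you do want to keep sending []byte over the channel instead of strings, a sketch of my own (not part of the original answer) would copy each chunk before sending it, so the slices on the channel do not all alias the same 2048-byte buffer:

func reader(r io.Reader, channel chan<- []byte) {
    buf := make([]byte, 2048)
    for {
        n, err := r.Read(buf)
        if n > 0 {
            // Copy the chunk so the next Read doesn't overwrite data
            // the receiver hasn't consumed yet.
            chunk := make([]byte, n)
            copy(chunk, buf[:n])
            channel <- chunk
        }
        if err != nil {
            close(channel)
            return
        }
    }
}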

Pipe to exec'ed process

My Go application outputs some amount of text data and I need to pipe it to an external command (e.g. less). I haven't found any way to pipe this data to a syscall.Exec'ed process.
As a workaround I write that text data to a temporary file and then use that file as an argument to less:
package main

import (
    "io/ioutil"
    "log"
    "os"
    "os/exec"
    "syscall"
)

func main() {
    content := []byte("temporary file's content")
    tmpfile, err := ioutil.TempFile("", "example")
    if err != nil {
        log.Fatal(err)
    }
    defer os.Remove(tmpfile.Name()) // Never going to happen!
    if _, err := tmpfile.Write(content); err != nil {
        log.Fatal(err)
    }
    if err := tmpfile.Close(); err != nil {
        log.Fatal(err)
    }
    binary, err := exec.LookPath("less")
    if err != nil {
        log.Fatal(err)
    }
    args := []string{"less", tmpfile.Name()}
    if err := syscall.Exec(binary, args, os.Environ()); err != nil {
        log.Fatal(err)
    }
}
It works, but it leaves a temporary file on the file system, because syscall.Exec replaces the current Go process with another one (less), so the deferred os.Remove never runs. Such behaviour is not desirable.
Is there any way to pipe some data to an external process without leaving any artefacts?
You should be using os/exec to build an exec.Cmd to execute; then you can supply any io.Reader you want as the stdin for the command.
From the example in the documentation:
cmd := exec.Command("tr", "a-z", "A-Z")
cmd.Stdin = strings.NewReader("some input")
var out bytes.Buffer
cmd.Stdout = &out
err := cmd.Run()
if err != nil {
log.Fatal(err)
}
fmt.Printf("in all caps: %q\n", out.String())
If you want to write directly to the command's stdin, call cmd.StdinPipe to get an io.WriteCloser you can write to.
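For example, a minimal sketch of feeding less through StdinPipe might look like this; the sample text is just an illustration, and it assumes the program is run from a terminal so less can page its output:

package main

import (
    "io"
    "log"
    "os"
    "os/exec"
)

func main() {
    cmd := exec.Command("less")
    cmd.Stdout = os.Stdout // let less draw on the terminal
    cmd.Stderr = os.Stderr
    stdin, err := cmd.StdinPipe()
    if err != nil {
        log.Fatal(err)
    }
    if err := cmd.Start(); err != nil {
        log.Fatal(err)
    }
    // Write the data, then close stdin so less sees EOF.
    if _, err := io.WriteString(stdin, "some amount of text data\n"); err != nil {
        log.Fatal(err)
    }
    stdin.Close()
    if err := cmd.Wait(); err != nil {
        log.Fatal(err)
    }
}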
If you really need to exec the process in place of your current one, you can simply remove the file before exec'ing, and provide that file descriptor as stdin for the program.
content := []byte("temporary file's content")
tmpfile, err := ioutil.TempFile("", "example")
if err != nil {
    log.Fatal(err)
}
os.Remove(tmpfile.Name())
if _, err := tmpfile.Write(content); err != nil {
    log.Fatal(err)
}
tmpfile.Seek(0, 0)
err = syscall.Dup2(int(tmpfile.Fd()), syscall.Stdin)
if err != nil {
    log.Fatal(err)
}
binary, err := exec.LookPath("less")
if err != nil {
    log.Fatal(err)
}
args := []string{"less"}
if err := syscall.Exec(binary, args, os.Environ()); err != nil {
    log.Fatal(err)
}

golang scp file using crypto/ssh

I'm trying to download a remote file over ssh.
The following approach works fine in a shell:
ssh hostname "tar cz /opt/local/folder" > folder.tar.gz
However, the same approach in Go gives a slightly different output artifact size. For example, for the same folder the pure shell command produces a 179 B gz artifact, while the Go script produces 178 B.
I assume that something is being missed from the io.Reader, or that the session gets closed too early. I'd appreciate your help.
Here is an example of my script:
func executeCmd(cmd, hostname string, config *ssh.ClientConfig, path string) error {
    conn, _ := ssh.Dial("tcp", hostname+":22", config)
    session, err := conn.NewSession()
    if err != nil {
        panic("Failed to create session: " + err.Error())
    }
    r, _ := session.StdoutPipe()
    scanner := bufio.NewScanner(r)
    go func() {
        defer session.Close()
        name := fmt.Sprintf("%s/backup_folder_%v.tar.gz", path, time.Now().Unix())
        file, err := os.OpenFile(name, os.O_APPEND|os.O_WRONLY|os.O_CREATE, 0644)
        if err != nil {
            panic(err)
        }
        defer file.Close()
        for scanner.Scan() {
            fmt.Println(scanner.Bytes())
            if err := scanner.Err(); err != nil {
                fmt.Println(err)
            }
            if _, err = file.Write(scanner.Bytes()); err != nil {
                log.Fatal(err)
            }
        }
    }()
    if err := session.Run(cmd); err != nil {
        fmt.Println(err.Error())
        panic("Failed to run: " + err.Error())
    }
    return nil
}
Thanks!
bufio.Scanner is for newline-delimited text. According to the documentation, the scanner removes the newline characters, stripping every 10 (0x0a) byte out of your binary file.
You don't need a goroutine to do the copy, because you can use session.Start to start the process asynchronously.
You probably don't need bufio either. You should be using io.Copy to copy the file, which already has an internal buffer on top of any buffering done in the ssh client itself. If an additional buffer is needed for performance, wrap the session output in a bufio.Reader.
Finally, you return an error value, so use it rather than panicking on ordinary error conditions.
conn, err := ssh.Dial("tcp", hostname+":22", config)
if err != nil {
    return err
}
session, err := conn.NewSession()
if err != nil {
    return err
}
defer session.Close()
r, err := session.StdoutPipe()
if err != nil {
    return err
}
name := fmt.Sprintf("%s/backup_folder_%v.tar.gz", path, time.Now().Unix())
file, err := os.OpenFile(name, os.O_APPEND|os.O_WRONLY|os.O_CREATE, 0644)
if err != nil {
    return err
}
defer file.Close()
if err := session.Start(cmd); err != nil {
    return err
}
if _, err := io.Copy(file, r); err != nil {
    return err
}
if err := session.Wait(); err != nil {
    return err
}
return nil
You can try doing something like this:
r, _ := session.StdoutPipe()
reader := bufio.NewReader(r)
go func() {
    defer session.Close()
    // open file etc
    // 10 is the number of bytes you'd like to copy in one write operation
    p := make([]byte, 10)
    for {
        n, err := reader.Read(p)
        if err == io.EOF {
            break
        }
        if err != nil {
            log.Fatal("err", err)
        }
        if _, err = file.Write(p[:n]); err != nil {
            log.Fatal(err)
        }
    }
}()
Make sure your goroutines are synchronized properly so the output is completely written to the file.
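One way to do that synchronization, sketched under the assumption that file, session, and cmd are already set up as in the snippets above, is to signal completion over a channel and wait on it before returning:

r, err := session.StdoutPipe()
if err != nil {
    return err
}
if err := session.Start(cmd); err != nil {
    return err
}
done := make(chan error, 1)
go func() {
    // The copy loop (or io.Copy) from above, writing to file.
    _, err := io.Copy(file, r)
    done <- err
}()
// Wait for the copying goroutine to finish before waiting on the session.
if err := <-done; err != nil {
    return err
}
return session.Wait()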

How to query Redis db from golang using redigo library

I am trying to figure out the best way to query a Redis db for multiple keys in one command.
I have seen MGET, which can be used from redis-cli, but how do you do that using the redigo library from Go code? Imagine I have an array of keys and I want to fetch the values for all of those keys from the Redis db in one query.
Thanks in advance!
Assuming that c is a Redigo connection and keys is a []string of your keys:
var args []interface{}
for _, k := range keys {
    args = append(args, k)
}
values, err := redis.Strings(c.Do("MGET", args...))
if err != nil {
    // handle error
}
for _, v := range values {
    fmt.Println(v)
}
The Go FAQ explains why you need to copy the keys. The spec describes how to pass a slice to a variadic param.
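To make that point concrete, here is a short sketch of what fails to compile versus what works, with keys being the []string from above:

// Does not compile: cannot use keys (type []string) as type []interface{}
// values, err := redis.Strings(c.Do("MGET", keys...))

// Works: copy the keys into a []interface{} first, then expand with ...
args := make([]interface{}, len(keys))
for i, k := range keys {
    args[i] = k
}
values, err := redis.Strings(c.Do("MGET", args...))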
http://play.golang.org/p/FJazj_PuCq
func main() {
    // connect to localhost, make sure to have redis-server running on the default port
    conn, err := redis.Dial("tcp", ":6379")
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()
    // add some keys
    if _, err = conn.Do("SET", "k1", "a"); err != nil {
        log.Fatal(err)
    }
    if _, err = conn.Do("SET", "k2", "b"); err != nil {
        log.Fatal(err)
    }
    // for fun, let's leave k3 non-existing
    // get many keys in a single MGET, ask redigo for a []string result
    strs, err := redis.Strings(conn.Do("MGET", "k1", "k2", "k3"))
    if err != nil {
        log.Fatal(err)
    }
    // prints [a b ]
    fmt.Println(strs)
    // now what if we want some integers instead?
    if _, err = conn.Do("SET", "k4", "1"); err != nil {
        log.Fatal(err)
    }
    if _, err = conn.Do("SET", "k5", "2"); err != nil {
        log.Fatal(err)
    }
    // get the keys, but ask redigo to give us a []interface{}
    // (it doesn't have a redis.Ints helper).
    vals, err := redis.Values(conn.Do("MGET", "k4", "k5", "k6"))
    if err != nil {
        log.Fatal(err)
    }
    // scan the []interface{} slice into a []int slice
    var ints []int
    if err = redis.ScanSlice(vals, &ints); err != nil {
        log.Fatal(err)
    }
    // prints [1 2 0]
    fmt.Println(ints)
}
UPDATE March 10th 2015: redigo now has a redis.Ints helper.
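With that helper, the integer case above can be shortened to something like this (a sketch reusing the k4 and k5 keys from the example):

// redis.Ints converts the MGET reply directly into a []int.
ints, err := redis.Ints(conn.Do("MGET", "k4", "k5"))
if err != nil {
    log.Fatal(err)
}
// prints [1 2]
fmt.Println(ints)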
