I am trying to generate a PNG of the goroutine profile with this demo, but it reports: error parsing profile: unrecognized profile format. failed to fetch any source profiles
package main

import (
	"log"
	"os"
	"os/exec"
	"runtime/pprof"
	"time"
)

func main() {
	// start demo goroutine
	go func() {
		for {
			time.Sleep(time.Second)
		}
	}()

	// get executable binary path
	exePath, err := os.Executable()
	if err != nil {
		log.Println(err.Error())
		return
	}
	log.Println("exePath", exePath)

	// generate goroutine profile
	profilePath := exePath + ".profile"
	f, err := os.Create(profilePath)
	if err != nil {
		log.Println(err.Error())
		return
	}
	defer f.Close()
	if err := pprof.Lookup("goroutine").WriteTo(f, 2); err != nil {
		log.Println(err.Error())
		return
	}

	// generate PNG
	pngPath := exePath + ".png"
	result, err := exec.Command("go", "tool", "pprof", "-png", "-output", pngPath, exePath, profilePath).CombinedOutput()
	if err != nil {
		log.Println("make error:", err.Error())
	}
	log.Println("make result:", string(result))
}
You are using a debug value of 2 in pprof.Lookup("goroutine").WriteTo(f, 2). This produces output which the pprof tool can't parse; it is meant to be directly human readable.
From the pprof.(*Profile).WriteTo docs:
The debug parameter enables additional output. Passing debug=0 writes the gzip-compressed protocol buffer described in https://github.com/google/pprof/tree/master/proto#overview. Passing debug=1 writes the legacy text format with comments translating addresses to function names and line numbers, so that a programmer can read the profile without tools.
The predefined profiles may assign meaning to other debug values; for example, when printing the "goroutine" profile, debug=2 means to print the goroutine stacks in the same form that a Go program uses when dying due to an unrecovered panic.
Changing this line to pprof.Lookup("goroutine").WriteTo(f, 0) resolves the issue.
Also, not relevant to the issue, but you might want to close the file with f.Close() before using it in the pprof command. WriteTo should flush the data to the file, but it can't hurt.
if err := pprof.Lookup("goroutine").WriteTo(f, 0); err != nil {
	log.Println(err.Error())
	return
}
if err := f.Close(); err != nil {
	log.Println(err.Error())
	return
}
// generate PNG
...
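If you also want a human-readable dump next to the tool-readable profile, you can write the same profile twice with different debug values. A minimal sketch, assuming the same imports as the program above (os, runtime/pprof); the ".pb.gz" and ".txt" suffixes are just names chosen for illustration:
// writeGoroutineProfiles writes the goroutine profile twice: once as the
// gzip-compressed protobuf that `go tool pprof` expects (debug=0), and once
// as plain text for reading in an editor (debug=2).
func writeGoroutineProfiles(base string) error {
	p := pprof.Lookup("goroutine")

	// Tool-readable profile.
	pb, err := os.Create(base + ".pb.gz")
	if err != nil {
		return err
	}
	if err := p.WriteTo(pb, 0); err != nil {
		pb.Close()
		return err
	}
	if err := pb.Close(); err != nil {
		return err
	}

	// Human-readable goroutine dump.
	txt, err := os.Create(base + ".txt")
	if err != nil {
		return err
	}
	if err := p.WriteTo(txt, 2); err != nil {
		txt.Close()
		return err
	}
	return txt.Close()
}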
Related
I am new to Go, so please review the code and suggest any changes required.
The problem statement is as follows:
We have a file whose contents are binary and encrypted. The only way to read the contents is a custom utility (named decode_it). The command just accepts a filename, like below:
decode_it filename.d
Now what I have to do is live-monitor the output of the decode_it utility in Go. I have written code which works, but it is not able to process the latest tailed output (it waits for some time before reading the last chunk, until more data comes in). s.Scan() is the call that does not return the latest changes in the utility's output. I have another terminal open side by side, so I know whether a line has been appended. The Go Scan() call only returns once another chunk has been appended at the end.
Please help. Suggest any changes required, and if possible suggest an alternative approach.
Output of the utility (the lines are huge and arrive within seconds):
1589261318 493023 8=DECODE|9=59|10=053|34=1991|35=0|49=TEST|52=20200512-05:28:38|56=TEST|57=ADMIN|
1589261368 538427 8=DECODE|9=59|10=054|34=1992|35=0|49=TEST|52=20200512-05:29:28|56=TEST|57=ADMIN|
1589261418 579765 8=DECODE|9=59|10=046|34=1993|35=0|49=TEST|52=20200512-05:30:18|56=TEST|57=ADMIN|
1589261468 627052 8=DECODE|9=59|10=047|34=1994|35=0|49=TEST|52=20200512-05:31:08|56=TEST|57=ADMIN|
1589261518 680570 8=DECODE|9=59|10=053|34=1995|35=0|49=TEST|52=20200512-05:31:58|56=TEST|57=ADMIN|
1589261568 722516 8=DECODE|9=59|10=054|34=1996|35=0|49=TEST|52=20200512-05:32:48|56=TEST|57=ADMIN|
1589261618 766070 8=DECODE|9=59|10=055|34=1997|35=0|49=TEST|52=20200512-05:33:38|56=TEST|57=ADMIN|
1589261668 807964 8=DECODE|9=59|10=056|34=1998|35=0|49=TEST|52=20200512-05:34:28|56=TEST|57=ADMIN|
1589261718 853464 8=DECODE|9=59|10=057|34=1999|35=0|49=TEST|52=20200512-05:35:18|56=TEST|57=ADMIN|
1589261768 898758 8=DECODE|9=59|10=031|34=2000|35=0|49=TEST|52=20200512-05:36:08|56=TEST|57=ADMIN|
1589261818 948236 8=DECODE|9=59|10=037|34=2001|35=0|49=TEST|52=20200512-05:36:58|56=TEST|57=ADMIN|
1589261868 995181 8=DECODE|9=59|10=038|34=2002|35=0|49=TEST|52=20200512-05:37:48|56=TEST|57=ADMIN|
1589261918 36727 8=DECODE|9=59|10=039|34=2003|35=0|49=TEST|52=20200512-05:38:38|56=TEST|57=ADMIN|
1589261968 91253 8=DECODE|9=59|10=040|34=2004|35=0|49=TEST|52=20200512-05:39:28|56=TEST|57=ADMIN|
1589262018 129336 8=DECODE|9=59|10=032|34=2005|35=0|49=TEST|52=20200512-05:40:18|56=TEST|57=ADMIN|
1589262068 173247 8=DECODE|9=59|10=033|34=2006|35=0|49=TEST|52=20200512-05:41:08|56=TEST|57=ADMIN|
1589262118 214993 8=DECODE|9=59|10=039|34=2007|35=0|49=TEST|52=20200512-05:41:58|56=TEST|57=ADMIN|
1589262168 256754 8=DECODE|9=59|10=040|34=2008|35=0|49=TEST|52=20200512-05:42:48|56=TEST|57=ADMIN|
1589262218 299908 8=DECODE|9=59|10=041|34=2009|35=0|49=TEST|52=20200512-05:43:38|56=TEST|57=ADMIN|
1589262268 345560 8=DECODE|9=59|10=033|34=2010|35=0|49=TEST|52=20200512-05:44:28|56=TEST|57=ADMIN|
1589262318 392894 8=DECODE|9=59|10=034|34=2011|35=0|49=TEST|52=20200512-05:45:18|56=TEST|57=ADMIN|
1589262368 439936 8=DECODE|9=59|10=035|34=2012|35=0|49=TEST|52=20200512-05:46:08|56=TEST|57=ADMIN|
1589262418 484959 8=DECODE|9=59|10=041|34=2013|35=0|49=TEST|52=20200512-05:46:58|56=TEST|57=ADMIN|
1589262468 531136 8=DECODE|9=59|10=042|34=2014|35=0|49=TEST|52=20200512-05:47:48|56=TEST|57=ADMIN|
1589262518 577190 8=DECODE|9=59|10=043|34=2015|35=0|49=TEST|52=20200512-05:48:38|56=TEST|57=ADMIN|
1589262568 621673 8=DECODE|9=59|10=044|34=2016|35=0|49=TEST|52=20200512-05:49:28|56=TEST|57=ADMIN|
1589262618 661569 8=DECODE|9=59|10=036|34=2017|35=0|49=TEST|52=20200512-05:50:18|56=TEST|57=ADMIN|
1589262668 704912 8=DECODE|9=59|10=037|34=2018|35=0|49=TEST|52=20200512-05:51:08|56=TEST|57=ADMIN|
1589262718 751844 8=DECODE|9=59|10=043|34=2019|35=0|49=TEST|52=20200512-05:51:58|56=TEST|57=ADMIN|
1589262768 792980 8=DECODE|9=59|10=035|34=2020|35=0|49=TEST|52=20200512-05:52:48|56=TEST|57=ADMIN|
1589262818 840365 8=DECODE|9=59|10=036|34=2021|35=0|49=TEST|52=20200512-05:53:38|56=TEST|57=ADMIN|
1589262868 879185 8=DECODE|9=59|10=037|34=2022|35=0|49=TEST|52=20200512-05:54:28|56=TEST|57=ADMIN|
1589262918 925163 8=DECODE|9=59|10=038|34=2023|35=0|49=TEST|52=20200512-05:55:18|56=TEST|57=ADMIN|
1589262968 961584 8=DECODE|9=59|10=039|34=2024|35=0|49=TEST|52=20200512-05:56:08|56=TEST|57=ADMIN|
1589263018 10120 8=DECODE|9=59|10=045|34=2025|35=0|49=TEST|52=20200512-05:56:58|56=TEST|57=ADMIN|
1589263068 53127 8=DECODE|9=59|10=046|34=2026|35=0|49=TEST|52=20200512-05:57:48|56=TEST|57=ADMIN|
1589263118 92960 8=DECODE|9=59|10=047|34=2027|35=0|49=TEST|52=20200512-05:58:38|56=TEST|57=ADMIN|
1589263168 134768 8=DECODE|9=59|10=048|34=2028|35=0|49=TEST|52=20200512-05:59:28|56=TEST|57=ADMIN|
1589263218 180362 8=DECODE|9=59|10=035|34=2029|35=0|49=TEST|52=20200512-06:00:18|56=TEST|57=ADMIN|
1589263268 220070 8=DECODE|9=59|10=027|34=2030|35=0|49=TEST|52=20200512-06:01:08|56=TEST|57=ADMIN|
1589263318 269426 8=DECODE|9=59|10=033|34=2031|35=0|49=TEST|52=20200512-06:01:58|56=TEST|57=ADMIN|
1589263368 309432 8=DECODE|9=59|10=034|34=2032|35=0|49=TEST|52=20200512-06:02:48|56=TEST|57=ADMIN|
1589263418 356561 8=DECODE|9=59|10=035|34=2033|35=0|49=TEST|52=20200512-06:03:38|56=TEST|57=ADMIN|
Code -
package main

import (
	"bufio"
	"bytes"
	"fmt"
	"io"
	"log"
	"os/exec"
)

// dropCRLR drops a terminal \r from the data.
func dropCRLR(data []byte) []byte {
	if len(data) > 0 && data[len(data)-1] == '\r' {
		return data[0 : len(data)-1]
	}
	return data
}

func newLineSplitFunc(data []byte, atEOF bool) (advance int, token []byte, err error) {
	if atEOF && len(data) == 0 {
		return 0, nil, nil
	}
	if i := bytes.IndexByte(data, '\n'); i >= 0 {
		// We have a full newline-terminated line.
		return i + 1, dropCRLR(data[0:i]), nil
	}
	// If we're at EOF, we have a final, non-terminated line. Return it.
	if atEOF {
		return len(data), dropCRLR(data), nil
	}
	// Request more data.
	// fmt.Println("Returning 0,nil,nil")
	return 0, nil, nil
}

func main() {
	cmd := exec.Command("decode_it", "filename.d", "4", "1")

	var out io.Reader
	{
		stdout, err := cmd.StdoutPipe()
		if err != nil {
			log.Fatal(err)
		}
		stderr, err := cmd.StderrPipe()
		if err != nil {
			log.Fatal(err)
		}
		out = io.MultiReader(stdout, stderr)
	}

	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}

	// Make a new channel which will be used to ensure we get all output
	done := make(chan struct{})
	go func() {
		// defer cmd.Process.Kill()
		s := bufio.NewScanner(out)
		s.Split(newLineSplitFunc)
		for s.Scan() {
			fmt.Println("---- " + s.Text())
		}
		if s.Err() != nil {
			fmt.Printf("error: %s\n", s.Err())
		}
	}()

	// Wait for all output to be processed
	<-done

	// Wait for the command to finish
	if err := cmd.Wait(); err != nil {
		fmt.Println("Error: " + string(err.Error()))
	}

	// if out closes, cmd closed.
	log.Println("all done")
}
Also, Scan() takes a long time and ends up in a loop that I am not able to break out of. Please help with that too.
Try something like this one; I fixed some issues and made it simpler:
package main

import (
	"bufio"
	"fmt"
	"io"
	"log"
	"os/exec"
)

func main() {
	var err error

	// change to your command
	cmd := exec.Command("sh", "test.sh")

	var out io.Reader
	{
		var stdout, stderr io.ReadCloser
		stdout, err = cmd.StdoutPipe()
		if err != nil {
			log.Fatal(err)
		}
		stderr, err = cmd.StderrPipe()
		if err != nil {
			log.Fatal(err)
		}
		out = io.MultiReader(stdout, stderr)
	}

	if err = cmd.Start(); err != nil {
		log.Fatal(err)
	}

	scanner := bufio.NewScanner(out)
	for scanner.Scan() {
		fmt.Println("---- " + scanner.Text())
	}
	if err = scanner.Err(); err != nil {
		fmt.Printf("error: %v\n", err)
	}

	log.Println("all done")
}
test.sh that I used for testing:
#!/bin/bash
while [[ 1 = 1 ]]; do
	echo 1
	sleep 1
done
:)
I tried resolving the above issue using stdbuf -
cmd := exec.Command("stdbuf", "-o0", "-e0", "decode_it", FILEPATH, "4", "1")
Reference link - STDIO Buffering
When a program writes to a terminal, its stdio is typically line-buffered; when stdout is redirected to something else, such as a pipe, it is fully buffered. A command started with Go's exec.Command writes into a pipe, so the child process ends up in fully buffered mode; stdbuf -o0 -e0 forces it to run unbuffered, so output appears as soon as it is written.
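For reference, here is a minimal sketch of the stdbuf approach combined with the plain scanner from the earlier answer; decode_it, filename.d and the trailing arguments are placeholders taken from the question:
package main

import (
	"bufio"
	"fmt"
	"io"
	"log"
	"os/exec"
)

func main() {
	// stdbuf -o0 -e0 disables the child's stdout/stderr buffering, so each
	// line reaches the pipe as soon as decode_it prints it.
	cmd := exec.Command("stdbuf", "-o0", "-e0", "decode_it", "filename.d", "4", "1")

	stdout, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	stderr, err := cmd.StderrPipe()
	if err != nil {
		log.Fatal(err)
	}
	out := io.MultiReader(stdout, stderr)

	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}

	scanner := bufio.NewScanner(out)
	for scanner.Scan() {
		fmt.Println("---- " + scanner.Text())
	}
	if err := scanner.Err(); err != nil {
		log.Println("scan error:", err)
	}

	if err := cmd.Wait(); err != nil {
		log.Println("command error:", err)
	}
}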
I build the Go app locally, then scp the binary to the server, where I have to stop the process and restart it manually. Is there any way to auto-restart the process when the binary is updated?
While this is generally better implemented off-process, using something like daemontools or similar, there are cases where you want or need to do it inside your program.
Doing it inside your program can be tricky, depending on characteristics of the program such as the connections or files it may have open.
Having said that, here is an implementation which should work in most cases:
package main

import (
	"log"
	"os"
	"syscall"
	"time"

	"github.com/fsnotify/fsnotify"
	"github.com/kardianos/osext"
)

func setupWatcher() (chan struct{}, error) {
	file, err := osext.Executable()
	if err != nil {
		return nil, err
	}
	log.Printf("watching %q\n", file)

	w, err := fsnotify.NewWatcher()
	if err != nil {
		return nil, err
	}

	done := make(chan struct{})
	go func() {
		for {
			select {
			case e := <-w.Events:
				log.Printf("watcher received: %+v", e)
				err := syscall.Exec(file, os.Args, os.Environ())
				if err != nil {
					log.Fatal(err)
				}
			case err := <-w.Errors:
				log.Printf("watcher error: %+v", err)
			case <-done:
				log.Print("watcher shutting down")
				return
			}
		}
	}()

	err = w.Add(file)
	if err != nil {
		return nil, err
	}
	return done, nil
}

func main() {
	log.Print("program starting")

	watcher, err := setupWatcher()
	if err != nil {
		// do something sensible
		log.Fatal(err)
	}

	// continue with app startup
	time.Sleep(100 * time.Minute) // just for testing

	// eventually you may need to end the watcher
	close(watcher) // this way you can
}
Then you do
% go build main.go
% ./main
2016/12/29 14:15:06 program starting
2016/12/29 14:15:06 watching "/home/plalloni/tmp/v/main"
And here is the output it produced after running several successive "go build main.go" commands in another terminal (each of which "updates" the running binary):
2016/12/29 14:15:32 watcher received: "/home/plalloni/tmp/v/main": CHMOD
2016/12/29 14:15:32 program starting
2016/12/29 14:15:32 watching "/home/plalloni/tmp/v/main"
2016/12/29 14:15:38 watcher received: "/home/plalloni/tmp/v/main": CHMOD
2016/12/29 14:15:38 program starting
2016/12/29 14:15:38 watching "/home/plalloni/tmp/v/main"
Hope it helps.
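One caveat, depending on how the binary is replaced: if your deployment swaps the file by rename (for example scp to a temporary name and then mv), a watch on the file itself can stay attached to the old inode and stop delivering events. Below is a sketch of a variant that watches the containing directory and filters events for the binary's own path; it assumes the same imports as above plus path/filepath, and uses os.Executable instead of osext:
func setupDirWatcher() (chan struct{}, error) {
	file, err := os.Executable()
	if err != nil {
		return nil, err
	}
	w, err := fsnotify.NewWatcher()
	if err != nil {
		return nil, err
	}
	done := make(chan struct{})
	go func() {
		for {
			select {
			case e := <-w.Events:
				// Only react to create/write events for our own binary.
				if e.Name == file && e.Op&(fsnotify.Create|fsnotify.Write) != 0 {
					log.Printf("binary changed: %+v", e)
					if err := syscall.Exec(file, os.Args, os.Environ()); err != nil {
						log.Fatal(err)
					}
				}
			case err := <-w.Errors:
				log.Printf("watcher error: %+v", err)
			case <-done:
				return
			}
		}
	}()
	// Watch the directory rather than the file itself.
	if err := w.Add(filepath.Dir(file)); err != nil {
		return nil, err
	}
	return done, nil
}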
You can use https://github.com/slayer/autorestart
package main

import (
	"net/http"

	"github.com/slayer/autorestart"
)

func main() {
	autorestart.StartWatcher()
	http.ListenAndServe(":8080", nil) // for example
}
Does it need to be sophisticated? You could have entr running and trigger an updater script when the binary changes.
http://entrproject.org/
e.g.
echo 'binary_path' | entr script.sh &
I have a resolution for this case.
See also:
https://github.com/narita-takeru/cmdrevive
Example:
cmdrevive ./htmls/ ".html$" (application) (arguments)
So it is applicable to this case:
cmdrevive "/(app path)" "(app filename)" (app full path) (arguments)
If (app filename) changes in the (app path) directory, then (app full path) is restarted with (arguments).
How about this one?
I am using os.Pipe() in my program, but for some reason it gives a bad file descriptor error each time I try to write or read data from it.
Is there some thing I am doing wrong?
Below is the code
package main

import (
	"fmt"
	"os"
)

func main() {
	writer, reader, err := os.Pipe()
	if err != nil {
		fmt.Println(err)
	}

	_, err = writer.Write([]byte("hello"))
	if err != nil {
		fmt.Println(err)
	}

	var data []byte
	_, err = reader.Read(data)
	if err != nil {
		fmt.Println(err)
	}
	fmt.Println(string(data))
}
output :
write |0: Invalid argument
read |1: Invalid argument
You are using an os.Pipe, which returns a pair of FIFO connected files from the os. This is different than an io.Pipe which is implemented in Go.
The invalid argument errors are because you are reading from and writing to the wrong files. The signature of os.Pipe is
func Pipe() (r *File, w *File, err error)
which shows that the return values are in the order "reader, writer, error".
and io.Pipe:
func Pipe() (*PipeReader, *PipeWriter)
which also returns in the order "reader, writer".
When you check the error from the os.Pipe function, you are only printing the value. If there was an error, the files are invalid. You need to return or exit on that error.
Pipes are also blocking (though an os.Pipe has a small, hard coded buffer), so you need to read and write asynchronously. If you swapped this for an io.Pipe it would deadlock immediately. Dispatch the Read method inside a goroutine and wait for it to complete.
Finally, you are reading into a nil slice, which will read nothing. You need to allocate space to read into, and you need to record the number of bytes read to know how much of the buffer is used.
A more correct version of your example would look like:
reader, writer, err := os.Pipe()
if err != nil {
	log.Fatal(err)
}

var wg sync.WaitGroup
wg.Add(1)
go func() {
	defer wg.Done()
	data := make([]byte, 1024)
	n, err := reader.Read(data)
	if n > 0 {
		fmt.Println(string(data[:n]))
	}
	if err != nil && err != io.EOF {
		fmt.Println(err)
	}
}()

_, err = writer.Write([]byte("hello"))
if err != nil {
	fmt.Println(err)
}

wg.Wait()
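For comparison, here is a sketch of the same exchange using io.Pipe, which is implemented in Go and has no buffer at all, so the Write blocks until the goroutine has read (this assumes the fmt, io and sync imports):
pr, pw := io.Pipe()

var wg sync.WaitGroup
wg.Add(1)
go func() {
	defer wg.Done()
	data := make([]byte, 1024)
	n, err := pr.Read(data)
	if n > 0 {
		fmt.Println(string(data[:n]))
	}
	if err != nil && err != io.EOF {
		fmt.Println(err)
	}
}()

// Write blocks until the goroutine above has read the data.
if _, err := pw.Write([]byte("hello")); err != nil {
	fmt.Println(err)
}
pw.Close()
wg.Wait()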
I've modified the official documentation example for the zlib package to use an opened file rather than a set of hardcoded bytes (code below).
The code reads in the contents of a source text file and compresses it with the zlib package. I then try to read back the compressed file and print its decompressed contents into stdout.
The code doesn't error, but it also doesn't do what I expect it to do, which is to display the decompressed file contents on stdout.
Also: is there another way of displaying this information, rather than using io.Copy?
package main

import (
	"compress/zlib"
	"io"
	"log"
	"os"
)

func main() {
	var err error

	// This defends against an error preventing `defer` from being called
	// As log.Fatal otherwise calls `os.Exit`
	defer func() {
		if err != nil {
			log.Fatalln("\nDeferred log: \n", err)
		}
	}()

	src, err := os.Open("source.txt")
	if err != nil {
		return
	}
	defer src.Close()

	dest, err := os.Create("new.txt")
	if err != nil {
		return
	}
	defer dest.Close()

	zdest := zlib.NewWriter(dest)
	defer zdest.Close()
	if _, err := io.Copy(zdest, src); err != nil {
		return
	}

	n, err := os.Open("new.txt")
	if err != nil {
		return
	}
	r, err := zlib.NewReader(n)
	if err != nil {
		return
	}
	defer r.Close()
	io.Copy(os.Stdout, r)

	err = os.Remove("new.txt")
	if err != nil {
		return
	}
}
Your deferred func doesn't do anything useful, because the errors you care about are assigned to shadowed err variables (for example in the if _, err := io.Copy(...) statement), so the err checked in the defer never sees them. If you want the defer pattern to work, move the work into a separate function that returns an error, and call log.Fatal on the returned error after that function returns.
As for why you're not seeing any output, it's because you're deferring all the Close calls. The zlib.Writer isn't flushed until after the function exits, and neither is the destination file. Call Close() explicitly where you need it.
zdest := zlib.NewWriter(dest)
if _, err := io.Copy(zdest, src); err != nil {
	log.Fatal(err)
}
zdest.Close()
dest.Close()
I think you messed up the code logic with all this defer stuff and your "trick" err checking.
Files are only guaranteed to be written out once they are flushed or closed. You copy into new.txt but never close it before opening it again to read it.
Deferring the close of a file is neat inside a function that has multiple exits: it makes sure the file is closed once the function is left. But your main requires new.txt to be closed after the copy and before re-opening it, so don't defer the close here.
BTW: your defense against log.Fatal terminating the program without calling your defers is, well, at least strange. The files are all put into a proper state by the OS; there is absolutely no need to complicate the code like this.
Check the error from the second Copy:
2015/12/22 19:00:33
Deferred log:
unexpected EOF
exit status 1
The thing is, you need to close zdest immediately after you've done writing. Close it after the first Copy and it works.
I would have suggested using io.MultiWriter.
That way you read from src only once. There is not much gain for small files, but it is faster for bigger files.
w := io.MultiWriter(dest, os.Stdout)
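As I read that suggestion, the MultiWriter should combine the zlib writer and os.Stdout, so that a single Copy from src both fills the compressed file and prints the original text; note that the sketch below passes zdest rather than the raw dest file, since wrapping dest would send compressed bytes to stdout. This is just my reading of the idea, replacing the middle of the question's main:
zdest := zlib.NewWriter(dest)

// One read of src feeds both the compressed file and stdout.
w := io.MultiWriter(zdest, os.Stdout)
if _, err := io.Copy(w, src); err != nil {
	log.Fatal(err)
}

// Close explicitly so the compressed stream is fully flushed into dest.
if err := zdest.Close(); err != nil {
	log.Fatal(err)
}
if err := dest.Close(); err != nil {
	log.Fatal(err)
}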
I need to find a way to read a line from an io.ReadCloser object OR find a way to split a byte array on an "end line" symbol. However, I don't know the end line symbol and I can't find it.
My application execs a PHP script and needs to get the live output from the script and do "something" with it as it arrives.
Here's a small piece of my code:
cmd := exec.Command(prog, args)
/* cmd := exec.Command("ls") */
out, err := cmd.StdoutPipe()
if err != nil {
	fmt.Println(err)
}
err = cmd.Start()
if err != nil {
	fmt.Println(err)
}
After this I monitor the out buffer in a goroutine. I've tried two ways.
1) nr, er := out.Read(buf), where buf is a byte array. The problem here is that I need to break the array apart at each new line myself.
2) My second option is to create a new bufio.Reader:
r := bufio.NewReader(out)
line, _, e := r.ReadLine()
This runs fine if I exec a command like ls (I get the output line by line), but if I exec a PHP script it immediately gets an End Of File error and exits (I'm guessing that's because of the delayed output from PHP).
EDIT: My problem was that I was creating the bufio.Reader inside the goroutine; if I create it right after the StdoutPipe() call, as minikomi suggested, it works fine.
You can create a reader using bufio, and then read until the next line break character (note the single quotes, which denote a character rather than a string):
stdout, err := cmd.StdoutPipe()
if err != nil {
	log.Fatal("Pipe Error:", err)
}
rd := bufio.NewReader(stdout)
if err := cmd.Start(); err != nil {
	log.Fatal("Buffer Error:", err)
}
for {
	str, err := rd.ReadString('\n')
	if err != nil {
		log.Fatal("Read Error:", err)
		return
	}
	fmt.Println(str)
}
If you're trying to read from the reader in a goroutine while nothing stops main from returning, the program will exit before the output is read.
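A minimal sketch of that point, reusing cmd and stdout from the snippet above: run the read loop in a goroutine, but make main wait for both the reader and the command before returning (this assumes the bufio, fmt and sync imports):
var wg sync.WaitGroup
wg.Add(1)
go func() {
	defer wg.Done()
	rd := bufio.NewReader(stdout)
	for {
		str, err := rd.ReadString('\n')
		if len(str) > 0 {
			fmt.Print(str)
		}
		if err != nil { // io.EOF once the pipe is closed
			return
		}
	}
}()

wg.Wait()  // wait until all output has been read
cmd.Wait() // then reap the child process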
Another option is bufio.NewScanner:
package main

import (
	"bufio"
	"os/exec"
)

func main() {
	cmd := exec.Command("go", "env")
	out, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	buf := bufio.NewScanner(out)
	cmd.Start()
	defer cmd.Wait()
	for buf.Scan() {
		println(buf.Text())
	}
}
https://golang.org/pkg/bufio#NewScanner
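One caveat with bufio.Scanner that is relevant to the long lines shown earlier on this page: by default it refuses tokens longer than bufio.MaxScanTokenSize (64 KiB) and stops with bufio.ErrTooLong. If the command can emit very long lines, give the scanner a bigger buffer before scanning; a small sketch reusing out from the example above:
buf := bufio.NewScanner(out)
// Allow lines up to 1 MiB instead of the default 64 KiB.
buf.Buffer(make([]byte, 0, 64*1024), 1024*1024)
for buf.Scan() {
	println(buf.Text())
}
if err := buf.Err(); err != nil {
	panic(err)
}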