I generate a trace like this:
func main() {
    f, err := os.Create("trace.out")
    if err != nil {
        panic(err)
    }
    defer f.Close()

    err = trace.Start(f)
    if err != nil {
        panic(err)
    }
    defer trace.Stop()

    // this is my app:
    http.HandleFunc("/", someFunc)
    log.Fatal(http.ListenAndServe(":5000", nil))
}
Then I run in the CLI:
$ go run main.go
I refresh the browser, trace.out is generated (1.8 MB), and then:
$ go tool trace trace.out
2018/09/09 13:25:18 Parsing trace...
failed to parse trace: no EvFrequency event
What am I missing here? Thanks.
Trace data can only be viewed after the trace has been stopped (i.e. after trace.Stop() has been called). In the code you supplied, http.ListenAndServe(...) will block forever (unless it runs into an error).
Are you trying to view the trace before the trace has been stopped?
One solution might be to wait for an interrupt signal and return from main when it is received, which causes the deferred trace.Stop() to run and the trace to be written.
func main() {
    f, err := os.Create("trace.out")
    if err != nil {
        panic(err)
    }
    defer f.Close()

    err = trace.Start(f)
    if err != nil {
        panic(err)
    }
    defer trace.Stop()

    http.HandleFunc("/", someFunc)

    // Run the server in the background so main can block waiting for a signal.
    go func() {
        log.Fatal(http.ListenAndServe(":5000", nil))
    }()

    // Block until Ctrl-C; returning from main runs the deferred calls above.
    signalChan := make(chan os.Signal, 1)
    signal.Notify(signalChan, os.Interrupt)
    <-signalChan
}
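With this version, pressing Ctrl-C lets main return normally, so the deferred calls run in last-in-first-out order: trace.Stop() flushes the remaining trace data first, then f.Close() closes the file, and go tool trace can parse the resulting trace.out.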
Related
I'm trying to do a benchmark in Go using rpc and exec.Command; here are parts of my code.
I have a master that sends RPCs to a worker to do some jobs.
func main() {
    var wg sync.WaitGroup
    var clients []*rpc.Client

    client, err := rpc.DialHTTP("tcp", "addr"+":1234")
    if err != nil {
        log.Fatal("dialing:", err)
    }
    reply := &Reply{}
    args := &Args{}
    clients = append(clients, client)
    fmt.Println(clients)

    err = clients[0].Call("Worker.Init", args, reply)
    if err != nil {
        log.Fatal("init error:", err)
    }
    // call for server to init channel
    // err = client.Call("Worker.Init", args, reply)

    args.A = 1
    wg.Add(200)
    fmt.Println(time.Now().UnixNano())
    for i := 0; i < 200; i++ {
        go func() {
            defer wg.Done()
            err = client.Call("Worker.DoJob", args, reply)
            if err != nil {
                log.Fatal("dojob error:", err)
            }
            fmt.Println("Done")
        }()
    }
    wg.Wait()
    fmt.Println(time.Now().UnixNano())
}
and the worker's code:
func (w *Worker) DoJob(args *Args, reply *Reply) error {
    // find a channel to do it
    w.c <- 1
    runtime.LockOSThread()
    fmt.Println("exec")
    // cmd := exec.Command("docker", "run", "--rm", "ubuntu:16.04", "/bin/bash", "-c", "date +%s%N")
    cmd := exec.Command("echo", "hello")
    err := cmd.Run()
    fmt.Println("exec done")
    if err != nil {
        reply.Err = err
        fmt.Println(err)
    }
    fmt.Println("done")
    <-w.c
    return nil
}
I use a channel of size 12 to simulate a machine that has only 12 threads. After I found that it would get stuck at cmd.Run(), I changed the command from running a Docker container to simply echoing "hello", but it still gets stuck between fmt.Println("exec") and fmt.Println("exec done").
I don't know why this is happening. Am I sending out too many RPCs, so that a lot of them get dropped?
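For reference, the buffered-channel pattern the question describes (a channel used as a counting semaphore to cap concurrency at 12 across 200 jobs) usually looks something like this minimal, self-contained sketch; doWork is a placeholder, not code from the question:

package main

import (
    "fmt"
    "sync"
)

// sem acts as a counting semaphore: at most 12 jobs may hold a slot at once.
var sem = make(chan struct{}, 12)

func doWork(id int) {
    sem <- struct{}{}        // acquire a slot (blocks while 12 are in use)
    defer func() { <-sem }() // always release the slot, even on early return

    fmt.Println("working on job", id)
}

func main() {
    var wg sync.WaitGroup
    for i := 0; i < 200; i++ {
        wg.Add(1)
        go func(i int) {
            defer wg.Done()
            doWork(i)
        }(i)
    }
    wg.Wait()
}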
I am trying to debug code by printing/logging from within a gRPC/Go handler, but the output does not print to the terminal.
fmt.Println() works in my main() function, but not in the server callbacks. How can I print from within these gRPC handlers? I'm open to other methods of debugging if there's a different approach, too.
Here's the related code from my main() function. Logging and printing works here:
func main() {
    ...
    lis, err := net.Listen("tcp", fmt.Sprintf("0.0.0.0:%s", os.Getenv("APP_PORT")))
    if err != nil {
        logger.Error(fmt.Sprintf("Failed start on port: %s", os.Getenv("APP_PORT")), err)
        log.Fatalf("Failed to start: %v", err)
    }

    opts := []grpc.ServerOption{}
    s := grpc.NewServer(opts...)
    pb.RegisterAuthServiceServer(s, &server{})
    // Register reflection service on gRPC server.
    reflection.Register(s)

    go func() {
        fmt.Println("Starting Server...")
        if err := s.Serve(lis); err != nil {
            log.Fatalf("failed to serve: %v", err)
        }
    }()

    // Wait for Control C to exit
    ch := make(chan os.Signal, 1)
    signal.Notify(ch, os.Interrupt)
    // Run until a signal is received
    <-ch

    fmt.Println("")
    fmt.Println("Stopping the server")
    s.Stop()
    fmt.Println("Closing the listener")
    lis.Close()
}
Then here is one of my handlers, where logging or printing does not output to the terminal where I started the server.
func (*server) AuthUser(ctx context.Context, req *pb.AuthUserRequest) (*pb.AccessToken, error) {
    fmt.Println("AuthUser***")
    ...
I have implemented a CLI in Go that displays the status of Kubernetes cells. The command is cellery ps.
func ps() error {
    cmd := exec.Command("kubectl", "get", "cells")

    stdoutReader, _ := cmd.StdoutPipe()
    stdoutScanner := bufio.NewScanner(stdoutReader)
    go func() {
        for stdoutScanner.Scan() {
            fmt.Println(stdoutScanner.Text())
        }
    }()

    stderrReader, _ := cmd.StderrPipe()
    stderrScanner := bufio.NewScanner(stderrReader)
    go func() {
        for stderrScanner.Scan() {
            fmt.Println(stderrScanner.Text())
            if stderrScanner.Text() == "No resources found." {
                os.Exit(0)
            }
        }
    }()

    err := cmd.Start()
    if err != nil {
        fmt.Printf("Error in executing cell ps: %v \n", err)
        os.Exit(1)
    }
    err = cmd.Wait()
    if err != nil {
        fmt.Printf("\x1b[31;1m Cell ps finished with error: \x1b[0m %v \n", err)
        os.Exit(1)
    }
    return nil
}
However, cells need time to get into the ready state after they are deployed. Therefore I need to provide a wait flag that keeps updating the CLI output.
The command would be cellery ps -w. However, the Kubernetes API has not implemented this yet, so I will have to come up with a command myself.
Basically, what you want is to listen for the event of a cell becoming ready.
You can register for events in the cluster and act upon them. A good example can be found here.
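As a rough illustration of registering for cluster events with client-go (this is not the example linked above), a watch loop might look like the sketch below. It watches Pods in the default namespace as a stand-in, since cells are a custom resource; the kubeconfig path is a placeholder, and the signatures are those of recent client-go releases:

package main

import (
    "context"
    "fmt"
    "log"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Build a client from a kubeconfig file (path is a placeholder).
    config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    if err != nil {
        log.Fatal(err)
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        log.Fatal(err)
    }

    // Start a watch and react to each event as it arrives.
    w, err := clientset.CoreV1().Pods("default").Watch(context.TODO(), metav1.ListOptions{})
    if err != nil {
        log.Fatal(err)
    }
    defer w.Stop()

    for event := range w.ResultChan() {
        // event.Type is Added/Modified/Deleted; inspect event.Object for readiness.
        fmt.Println("event:", event.Type)
    }
}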
Everyone, I am new to Go. I want to read the data from a log file generated by my application. Because of its rotation mechanism I ran into a problem: my target log file is chats.log, which gets renamed to chats.log.2018xxx while a new chats.log is created, so my goroutine that reads the log file stops working.
So I need to detect the change, shut down the previous goroutine, and then start a new one.
I looked for packages that could help me, and I found fsnotify:
func ExampleNewWatcher(fn string, createnoti chan string, wg sync.WaitGroup) {
    wg.Add(1)
    defer wg.Done()

    watcher, err := fsnotify.NewWatcher()
    if err != nil {
        log.Fatal(err)
    }
    defer watcher.Close()

    done := make(chan bool)
    go func() {
        for {
            select {
            case event := <-watcher.Events:
                if event.Op == fsnotify.Create && event.Name == fn {
                    createnoti <- "has been created"
                }
            case err := <-watcher.Errors:
                log.Println("error:", err)
            }
        }
    }()

    err = watcher.Add("./")
    if err != nil {
        log.Fatal(err)
    }
    <-done
}
I use fsnotify to detect the change, make sure the event is for my log file, and then send a message on a channel.
This is my worker goroutine:
func tailer(fn string, isfollow bool, outchan chan string, done <-chan interface{}, wg sync.WaitGroup) error {
    wg.Add(1)
    defer wg.Done()

    _, err := os.Stat(fn)
    if err != nil {
        panic(err)
    }

    t, err := tail.TailFile(fn, tail.Config{Follow: isfollow})
    if err != nil {
        panic(err)
    }
    defer t.Stop()

    for line := range t.Lines {
        select {
        case outchan <- line.Text:
        case <-done:
            return nil
        }
    }
    return nil
}
I use the tail package to read the log file, and I added a done channel to it to shut down the loop (I don't know whether I put it in the right place).
Every line of log content is sent to a channel so it can be consumed.
So here is the question: how should I put this together?
PS: Actually, I could use an existing tool for this job, like Apache Flume, but all of those tools bring extra dependencies.
Thanks a lot!
Here is a complete example that reloads and rereads the file as it changes or gets deleted and recreated:
package main

import (
    "io/ioutil"
    "log"

    "github.com/fsnotify/fsnotify"
)

const filename = "myfile.txt"

func ReadFile(filename string) string {
    data, err := ioutil.ReadFile(filename)
    if err != nil {
        log.Println(err)
    }
    return string(data)
}

func main() {
    watcher, err := fsnotify.NewWatcher()
    if err != nil {
        log.Fatal(err)
    }
    defer watcher.Close()

    err = watcher.Add("./")
    if err != nil {
        log.Fatal(err)
    }

    for {
        select {
        case event := <-watcher.Events:
            if event.Op == fsnotify.Create && event.Name == filename {
                log.Println(ReadFile(filename))
            }
        case err := <-watcher.Errors:
            log.Println("error:", err)
        }
    }
}
Note this doesn't require goroutines, channels or a WaitGroup. Better to keep things simple and reserve those for when they're actually needed.
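One way to exercise it (my suggestion, not part of the original answer): run the program in one terminal and, in another, remove and recreate the file, e.g. rm myfile.txt followed by echo hello > myfile.txt. Each Create event for myfile.txt triggers a fresh read and the new contents are logged, which mirrors the rename-and-recreate rotation described in the question.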
I want to test the wait4 function, but I'm not really familiar with child processes and so on. I need to keep a child process running and, during that time, send it a signal and see the reaction. Can you give me a small example of using wait4 in Go?
wait4 is deprecated on Linux; the proper way is to use exec.Command and call .Wait().
An example with signals:
func bgProcess(app string) (chan error, *os.Process, error) {
    cmd := exec.Command(app)
    ch := make(chan error, 1)

    if err := cmd.Start(); err != nil {
        return nil, nil, err
    }

    // Wait for the process in the background and deliver its exit status on the channel.
    go func() {
        ch <- cmd.Wait()
    }()
    return ch, cmd.Process, nil
}

func main() {
    ch, proc, err := bgProcess("/usr/bin/cat")
    if err != nil {
        log.Fatal(err)
    }
    log.Println("Signal(os.Kill):", proc.Signal(os.Kill))
    log.Println("cat returned:", <-ch)
}
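Here cat blocks reading from stdin, the kill signal terminates it, and cmd.Wait() reports that over the channel; the second log line should print something like "cat returned: signal: killed".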