Go command to get updated output using kubectl

I have implemented a CLI in Go that displays the status of Kubernetes cells. The command is cellery ps:
func ps() error {
    cmd := exec.Command("kubectl", "get", "cells")
    stdoutReader, _ := cmd.StdoutPipe()
    stdoutScanner := bufio.NewScanner(stdoutReader)
    go func() {
        for stdoutScanner.Scan() {
            fmt.Println(stdoutScanner.Text())
        }
    }()
    stderrReader, _ := cmd.StderrPipe()
    stderrScanner := bufio.NewScanner(stderrReader)
    go func() {
        for stderrScanner.Scan() {
            fmt.Println(stderrScanner.Text())
            if stderrScanner.Text() == "No resources found." {
                os.Exit(0)
            }
        }
    }()
    err := cmd.Start()
    if err != nil {
        fmt.Printf("Error in executing cell ps: %v \n", err)
        os.Exit(1)
    }
    err = cmd.Wait()
    if err != nil {
        fmt.Printf("\x1b[31;1m Cell ps finished with error: \x1b[0m %v \n", err)
        os.Exit(1)
    }
    return nil
}
However, cells need time to reach the ready state after they are deployed, so I need to add a wait flag that keeps updating the CLI output. The command would be cellery ps -w. However, the Kubernetes API has not implemented this yet, so I will have to come up with my own command.

Basically, what you want is to listen for the event of a cell becoming ready.
You can register for events in the cluster and act upon them. A good example can be found here.
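A minimal sketch of that idea using client-go's dynamic client (assuming a recent client-go release) is below. The group, version, and status field path used for the cell resource are assumptions, not taken from the question; substitute the ones your CRD actually registers. Only the watch mechanics are the point here.
package main

import (
    "context"
    "fmt"
    "path/filepath"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
    "k8s.io/apimachinery/pkg/runtime/schema"
    "k8s.io/client-go/dynamic"
    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/client-go/util/homedir"
)

// watchCells prints every event for the "cells" custom resource
// until one of them reports a ready status.
func watchCells() error {
    kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
    config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    if err != nil {
        return err
    }
    client, err := dynamic.NewForConfig(config)
    if err != nil {
        return err
    }
    // Placeholder group/version -- replace with the ones your cell CRD uses.
    gvr := schema.GroupVersionResource{Group: "mesh.cellery.io", Version: "v1alpha1", Resource: "cells"}
    w, err := client.Resource(gvr).Namespace(metav1.NamespaceAll).Watch(context.TODO(), metav1.ListOptions{})
    if err != nil {
        return err
    }
    defer w.Stop()
    for event := range w.ResultChan() {
        u, ok := event.Object.(*unstructured.Unstructured)
        if !ok {
            continue
        }
        // The status field path is an assumption; adjust to your CRD's schema.
        status, _, _ := unstructured.NestedString(u.Object, "status", "status")
        fmt.Printf("%s\t%s\t%s\n", event.Type, u.GetName(), status)
        if status == "Ready" {
            return nil
        }
    }
    return nil
}

func main() {
    if err := watchCells(); err != nil {
        fmt.Println(err)
    }
}
Alternatively, since the CLI already shells out to kubectl, the same exec approach with kubectl get cells -w should stream row updates as cells change state.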

Related

How to use a command pipeline in the GoLand PowerShell terminal?

I have written two simple applications in Go. The first app, fetch.go, prints the HTML content of the page at each URL passed as an argument. The second app, findlinks1.go, finds all href links in the HTML tree. Here are the snippets:
func main() {
    for _, url := range os.Args[1:] {
        resp, err := http.Get(url)
        if err != nil {
            fmt.Fprintf(os.Stderr, "fetch: %v\n", err)
            os.Exit(1)
        }
        b, err := ioutil.ReadAll(resp.Body)
        resp.Body.Close()
        if err != nil {
            fmt.Fprintf(os.Stderr, "fetch: reading %s: %v\n", url, err)
            os.Exit(1)
        }
        fmt.Printf("%s", b)
    }
}
func main() {
    doc, err := html.Parse(os.Stdin)
    if err != nil {
        fmt.Fprintf(os.Stderr, "findlinks1: %v\n", err)
    }
    for _, link := range visit(nil, doc) {
        fmt.Println(link)
    }
    // it doesn't matter what the visit function does
}
I want to redirect the output of the first program to the input of the second program in the PowerShell terminal of the GoLand development environment, but I can't.
I have tried running these commands in the terminal:
./fetch.go https://golang.org | ./findlinks1.go
I got an error:
InvalidOperation: Cannot run a document in the middle of a pipeline
go run fetch.go https://golang.org | go run findlinks1.go
I didn't get an error, but nothing happened after running this.
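One thing worth trying (a sketch, not verified in this environment) is to compile both programs first and pipe the resulting executables; PowerShell refuses to run a .go source file as a pipeline stage, which is what the InvalidOperation error points at:
go build -o fetch.exe fetch.go
go build -o findlinks1.exe findlinks1.go
./fetch.exe https://golang.org | ./findlinks1.exe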

How to kill an exec.Command started in a different function in Go

I'm making a web-based screen recorder that uses exec.Command to run FFmpeg. I created a startRecording function, but I am still confused about how to stop the command process from the stopRecording function, because the command is started inside startRecording. How can I stop, from stopRecording, a process that is already running in startRecording?
Here is my code:
// Handler to create room/start record
func RoomCreate(c *fiber.Ctx) error {
    fileName := "out.mp4"
    fmt.Println(fileName)
    if len(os.Args) > 1 {
        fileName = os.Args[1]
    }
    errCh := make(chan error, 2)
    ctx, cancelFn := context.WithCancel(context.Background())
    // Call to function startRecording
    go func() { errCh <- startRecording(ctx, fileName) }()
    go func() {
        errCh <- nil
    }()
    err := <-errCh
    cancelFn()
    if err != nil && err != context.Canceled {
        log.Fatalf("Execution failed: %v", err)
    }
    return c.Redirect(fmt.Sprintf("/room/%s", guuid.New().String()))
}

// Function to run command FFMPEG
func startRecording(ctx context.Context, fileName string) error {
    ctx, cancelFn := context.WithCancel(ctx)
    defer cancelFn()
    // Build ffmpeg
    ffmpeg := exec.Command("ffmpeg",
        "-f", "gdigrab",
        "-framerate", "30",
        "-i", "desktop",
        "-f", "mp4",
        fileName,
    )
    // Stdin for sending data
    stdin, err := ffmpeg.StdinPipe()
    if err != nil {
        return err
    }
    //var buf bytes.Buffer
    defer stdin.Close()
    // Run it in the background
    errCh := make(chan error, 1)
    go func() {
        fmt.Printf("Executing: %v\n", strings.Join(ffmpeg.Args, " "))
        if err := ffmpeg.Run(); err != nil {
            return
        }
        //fmt.Printf("FFMPEG output:\n%v\n", string(out))
        errCh <- err
    }()
    // Just start sending a bunch of frames
    for {
        // Check if we're done, otherwise go again
        select {
        case <-ctx.Done():
            return ctx.Err()
        case err := <-errCh:
            return err
        default:
        }
    }
}

// Here the function to stop recording
func stopRecording(ctx context.Context) error {
    // Code to stop recording goes here
}
Thanks in advance.
As requested in the comments:
The basic idea is to use shared storage for your active commands. It doesn't necessarily have to be global, but it needs a big enough scope that both functions can access it.
var commands = map[string]*exec.Cmd{}

func startRecording(fileName string) error {
    ffmpeg := exec.Command("ffmpeg",
        "-f", "gdigrab",
        "-framerate", "30",
        "-i", "desktop",
        "-f", "mp4",
        fileName,
    )
    commands[fileName] = ffmpeg
    ...
}

func stopRecording(fileName string) error {
    cmd, ok := commands[fileName]
    if !ok {
        return errors.New("command not found")
    }
    defer func() {
        delete(commands, fileName)
    }()
    return cmd.Process.Kill()
}
You probably want to use sync.Mutex or sync.RWMutex to avoid concurrent map writes.
So your commands store could look like:
type Commands struct {
    sync.RWMutex
    items map[string]*exec.Cmd
}

// use Commands.Lock() for writing, Commands.RLock() for reading
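A minimal sketch of what those locked accessors might look like (the package name and method names here are illustrative, not from the original answer):
package recording

import (
    "errors"
    "os/exec"
    "sync"
)

// Commands is a concurrency-safe registry of running commands.
type Commands struct {
    sync.RWMutex
    items map[string]*exec.Cmd
}

func NewCommands() *Commands {
    return &Commands{items: map[string]*exec.Cmd{}}
}

// Set registers a started command under a name (e.g. the output file name).
func (c *Commands) Set(name string, cmd *exec.Cmd) {
    c.Lock()
    defer c.Unlock()
    c.items[name] = cmd
}

// Stop kills the named command and removes it from the registry.
func (c *Commands) Stop(name string) error {
    c.Lock()
    defer c.Unlock()
    cmd, ok := c.items[name]
    if !ok {
        return errors.New("command not found")
    }
    delete(c.items, name)
    return cmd.Process.Kill()
}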

Failed to parse trace: no EvFrequency event

I generate a trace like this:
func main() {
    f, err := os.Create("trace.out")
    if err != nil {
        panic(err)
    }
    defer f.Close()
    err = trace.Start(f)
    if err != nil {
        panic(err)
    }
    defer trace.Stop()
    // this is my app:
    http.HandleFunc("/", someFunc)
    log.Fatal(http.ListenAndServe(":5000", nil))
}
Then I run in the CLI:
$ go run main.go
Refresh the browser; trace.out is generated (1.8 MB). Then:
$ go tool trace trace.out
2018/09/09 13:25:18 Parsing trace...
failed to parse trace: no EvFrequency event
What am I missing here? Thanks.
Trace data can only be viewed after you have stopped the trace (i.e. after trace.Stop() has been called). In the code you supplied, http.ListenAndServe(...) will block forever (unless it runs into an error).
Are you trying to view the trace before the trace has been stopped?
One solution might be to wait for an interrupt signal and then exit the function when it is received, which would cause the tracing to be stopped and written.
func main() {
    f, err := os.Create("trace.out")
    if err != nil {
        panic(err)
    }
    defer f.Close()
    err = trace.Start(f)
    if err != nil {
        panic(err)
    }
    defer trace.Stop()
    http.HandleFunc("/", someFunc)
    go func() {
        log.Fatal(http.ListenAndServe(":5000", nil))
    }()
    signalChan := make(chan os.Signal, 1)
    signal.Notify(signalChan, os.Interrupt)
    <-signalChan
}

How to repeatedly shut down and re-establish a goroutine?

Everyone, I am new to Go. I want to read the data from a log file generated by my application. Because of its roll-over mechanism I ran into a problem: my target log file is chats.log, which gets renamed to chats.log.2018xxx while a new chats.log is created, so the goroutine that reads the log file stops working.
So I need to detect the change, shut down the previous goroutine, and then start a new one.
I looked for modules that could help me, and I found this:
func ExampleNewWatcher(fn string, createnoti chan string, wg sync.WaitGroup) {
    wg.Add(1)
    defer wg.Done()
    watcher, err := fsnotify.NewWatcher()
    if err != nil {
        log.Fatal(err)
    }
    defer watcher.Close()
    done := make(chan bool)
    go func() {
        for {
            select {
            case event := <-watcher.Events:
                if event.Op == fsnotify.Create && event.Name == fn {
                    createnoti <- "has been created"
                }
            case err := <-watcher.Errors:
                log.Println("error:", err)
            }
        }
    }()
    err = watcher.Add("./")
    if err != nil {
        log.Fatal(err)
    }
    <-done
}
I use fsnotify to detect the change, make sure the event is for my log file, and then send a message to a channel.
This is my worker goroutine:
func tailer(fn string, isfollow bool, outchan chan string, done <-chan interface{}, wg sync.WaitGroup) error {
    wg.Add(1)
    defer wg.Done()
    _, err := os.Stat(fn)
    if err != nil {
        panic(err)
    }
    t, err := tail.TailFile(fn, tail.Config{Follow: isfollow})
    if err != nil {
        panic(err)
    }
    defer t.Stop()
    for line := range t.Lines {
        select {
        case outchan <- line.Text:
        case <-done:
            return nil
        }
    }
    return nil
}
I use the tail module to read the log file, and I added a done channel to it to shut down the loop (I don't know whether I wired it up the right way).
I send every log line to a channel for consumption.
So here is the question: how should I put this together?
PS: I could use an existing tool for this job, like apache-flume, but all of those tools bring extra dependencies.
Thank you a lot!
Here is a complete example that reloads and rereads the file as it changes or gets deleted and recreated:
package main

import (
    "github.com/fsnotify/fsnotify"
    "io/ioutil"
    "log"
)

const filename = "myfile.txt"

func ReadFile(filename string) string {
    data, err := ioutil.ReadFile(filename)
    if err != nil {
        log.Println(err)
    }
    return string(data)
}

func main() {
    watcher, err := fsnotify.NewWatcher()
    if err != nil {
        log.Fatal(err)
    }
    defer watcher.Close()
    err = watcher.Add("./")
    if err != nil {
        log.Fatal(err)
    }
    for {
        select {
        case event := <-watcher.Events:
            if event.Op == fsnotify.Create && event.Name == filename {
                log.Println(ReadFile(filename))
            }
        case err := <-watcher.Errors:
            log.Println("error:", err)
        }
    }
}
Note this doesn't require goroutines, channels, or a WaitGroup. Better to keep things simple and reserve those for when they're actually needed.
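That said, if you do want the streaming tailer from the question, here is a sketch of how it could be wired up (it assumes the github.com/hpcloud/tail package used above). Note that tail.Config has a ReOpen option that follows a recreated file for you, much like tail -F, which may remove the need to restart the goroutine at all:
package main

import (
    "fmt"

    "github.com/hpcloud/tail"
)

// followLog streams lines from fn to out until done is closed.
// ReOpen makes the tailer re-attach when the file is rotated and recreated.
func followLog(fn string, out chan<- string, done <-chan struct{}) error {
    t, err := tail.TailFile(fn, tail.Config{Follow: true, ReOpen: true})
    if err != nil {
        return err
    }
    defer t.Stop()
    for {
        select {
        case line, ok := <-t.Lines:
            if !ok {
                return nil
            }
            out <- line.Text
        case <-done:
            return nil
        }
    }
}

func main() {
    out := make(chan string)
    done := make(chan struct{})
    go func() {
        defer close(out)
        if err := followLog("chats.log", out, done); err != nil {
            fmt.Println("tail error:", err)
        }
    }()
    for line := range out {
        fmt.Println(line)
    }
}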

Chaining commands with wait functionality

I have the following code, which works. I need to execute commands in a chain, where each command must finish before the next one is executed.
I do it with Wait and an ugly if/else, and if I need to chain more commands it becomes even uglier... Is there a better way to write this in Go?
cmd, buf := exec.CommandContext("npm", dir+"/"+path, "install")
// Wait
if err := cmd.Wait(); err != nil {
    log.Printf("executing npm install returned error: %v", err)
} else {
    log.Println(buf.String())
    gulpCmd, gulpBuf := exec.CommandContext("gulp", pdir+"/"+n.path)
    // Wait
    if err := gulpCmd.Wait(); err != nil {
        log.Printf("error: %v", err)
    } else {
        log.Println(gulpBuf.String())
        pruneCmd, pruneBuf := exec.CommandContext("npm", pdir+"/"+n.path, "prune", "--production")
        // Wait
        if err := pruneCmd.Wait(); err != nil {
            log.Printf("error: %v", err)
        } else {
            log.Println(pruneBuf.String())
        }
    }
}
Update:
If I run this simple program it works and I get the message
added 563 packages in 19.218s
This is the code:
cmd := exec.Command("npm", "install")
cmd.Dir = filepath.Join(pdir, n.path)
cmdOutput := &bytes.Buffer{}
cmd.Stdout = cmdOutput
err := cmd.Run()
if err != nil {
    os.Stderr.WriteString(err.Error())
}
fmt.Print(string(cmdOutput.Bytes()))
But if I try it like the following, I get an error and it is not able to execute even the first command (npm install). Any idea?
cmdParams := [][]string{
    {"npm", filepath.Join(dir, path), "install"},
    {"gulp", filepath.Join(pdir, n.path)},
    {"npm", filepath.Join(pdir, n.path), "prune", "--production"},
}
for _, cmdParam := range cmdParams {
    out, err := exec.Command(cmdParam[0], cmdParam[1:]...).Output()
    if err != nil {
        log.Printf("error running %s: %v\n", cmdParam[0], err)
        return
    }
    log.Println(string(out))
}
The error I get is: error running npm: exit status 1
Update 2:
The commands should be run one after another: only when the first finishes should gulp run, and so on. I also need to capture the output from the commands:
1. npm install
2. gulp
3. npm prune
List your commands in a slice, and use a simple loop to execute them all sequentially. Use filepath.Join() to build folder paths.
Also, I'm not sure what package you're using to run the commands; using os/exec we can simplify the execution in the loop body further. For example, Command.Output() runs the command and returns its standard output:
cmdParams := [][]string{
    {filepath.Join(dir, path), "npm", "install"},
    {filepath.Join(pdir, n.path), "gulp"},
    {filepath.Join(pdir, n.path), "npm", "prune", "--production"},
}
for _, cp := range cmdParams {
    log.Printf("Starting %s in folder %s...", cp[1:], cp[0])
    cmd := exec.Command(cp[1], cp[2:]...)
    cmd.Dir = cp[0]
    // Wait to finish, get output:
    out, err := cmd.Output()
    if err != nil {
        log.Printf("Error running %s: %v\n", cp[1:], err)
        return
    }
    log.Printf("Finished %s, output: %s", cp[1:], out)
}
To avoid ugly if-else you can write code like this:
err := someFunction1()
if err != nil {
    return err
}
err = someFunction2()
if err != nil {
    return err
}
err = someFunction3()
if err != nil {
    return err
}
but you will have (IMHO) ugly multiple return statements.
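One way to flatten those repeated checks (a sketch; the helper name is illustrative and the someFunctionN steps are just the placeholders from above) is to collect the steps as functions and stop at the first failure:
// runSteps executes the given steps in order and returns the first error.
func runSteps(steps ...func() error) error {
    for _, step := range steps {
        if err := step(); err != nil {
            return err
        }
    }
    return nil
}

// usage:
// if err := runSteps(someFunction1, someFunction2, someFunction3); err != nil {
//     return err
// }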
