I am using shell commands from Go to read an Apache log file and extract some data. Previously I wrote the output directly to a file and that worked, but now I need to capture the output in the program itself and convert it to float64. I tried converting it to a string and then to float64, but it is not working.
func Mem_usage_data(j int) (Mem_predict float64, err error) {
	awkPart := fmt.Sprintf("awk '{print $%d/1024}'", j)
	out1, err := exec.Command("bash", "-c", "tail -n 1 /var/log/apache2/access.log | "+awkPart).Output()
	fmt.Println("memory usage is", out1)
	s1 := string(out1)
	v1, err1 := strconv.ParseFloat(s1, 64)
	if err1 != nil {
		fmt.Println(err1)
	}
	if err != nil {
		fmt.Println(err)
	}
	return v1, err
}
When I print out1 I get something like [48 46 49 50 48 49 49 55 10]. Can you please help me get the actual output into out1 and convert it to float64?
The conversion fails most likely because there is whitespace or a newline character in s1. Trim it before doing the conversion, using strings.TrimSpace():
v1, err1 := strconv.ParseFloat(strings.TrimSpace(s1), 64)
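Putting it together, a corrected version of the whole function could look like this (a sketch that keeps the original name and awk pipeline from the question; the column index passed in main is just an example):

package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
)

// Mem_usage_data runs tail+awk on the Apache access log and returns column j divided by 1024.
func Mem_usage_data(j int) (float64, error) {
	awkPart := fmt.Sprintf("awk '{print $%d/1024}'", j)
	out, err := exec.Command("bash", "-c", "tail -n 1 /var/log/apache2/access.log | "+awkPart).Output()
	if err != nil {
		return 0, err
	}
	// Output() returns the raw bytes, including the trailing newline printed by awk,
	// so trim the whitespace before parsing.
	return strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
}

func main() {
	v, err := Mem_usage_data(10) // column index is just an example
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("memory usage is", v)
}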
I'm trying to find files that end in hello.go
statement:= "find "CurrentDirectory" -print | grep -i hello.go"
result, error := exec.Command("bash", "-c", statement).Output()
This gives me a list containing 2 file paths and I try to turn them into arrays that I can individually address using:
Directory_array := strings.Split(string(result),"\n")
fmt.Println(len(Directory_array) )
The length of the array shows as "3" but the array is empty except for the 0th position.
Independently the code lines work but not together. How can I get the array to fill with individual paths?
In your case, the command output ends with a newline, so strings.Split produces a trailing empty element, which is why the length is 3. Use strings.FieldsFunc instead of strings.Split; it drops the empty fields. For example:
statement:= "find . -print | grep -i hello"
result, err := exec.Command("bash", "-c", statement).Output()
if err != nil {
panic(err)
}
ds := strings.FieldsFunc(string(result), func(r rune) bool {
return r == '\n'
})
for i, d := range ds {
fmt.Println(i, d)
}
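To see the difference on a small made-up output (the paths here are placeholders):

out := "dir/a/hello.go\ndir/b/hello.go\n" // command output ends with a newline

parts := strings.Split(out, "\n")
fmt.Println(len(parts), parts) // 3 [dir/a/hello.go dir/b/hello.go ] -- trailing empty string

fields := strings.FieldsFunc(out, func(r rune) bool { return r == '\n' })
fmt.Println(len(fields), fields) // 2 [dir/a/hello.go dir/b/hello.go]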
Further reading:
strings.FieldsFunc - go.dev
strings.FieldsFunc vs strings.Split - Medium
You could use the stdout pipe and read line by line using a Scanner.
package main

import (
	"bufio"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("ls", "-l")
	out, _ := cmd.StdoutPipe()
	s := bufio.NewScanner(out)
	cmd.Start()
	defer cmd.Wait()
	for s.Scan() {
		fmt.Println("new line:")
		fmt.Println(s.Text())
	}
}
If you want to store the lines in a slice, you can create the slice beforehand and append to it:
lines := make([]string, 0, 10)
for s.Scan() {
	lines = append(lines, s.Text())
}
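For completeness, a variant that also checks the errors the snippets above ignore (a sketch; ls -l is just a stand-in command):

cmd := exec.Command("ls", "-l")
out, err := cmd.StdoutPipe()
if err != nil {
	log.Fatal(err)
}
if err := cmd.Start(); err != nil {
	log.Fatal(err)
}

lines := make([]string, 0, 10)
s := bufio.NewScanner(out)
for s.Scan() {
	lines = append(lines, s.Text())
}
if err := s.Err(); err != nil {
	log.Fatal(err)
}
if err := cmd.Wait(); err != nil {
	log.Fatal(err)
}
fmt.Println(lines)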
Check the error returned by the command instead of discarding it; Output() already runs the command for you, so no separate Run() call is needed, but the error will tell you why the shell command fails. Here is how to fix it:
statement:= "find "CurrentDirectory" -print | grep -i hello.go"
cmd := exec.Command("bash", "-c", statement)
err := cmd.Run()
if err != nil {
...
}
result, error := cmd.Output()
P.S. Why not use filepath.Walk() instead of shelling out?
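For reference, a minimal sketch of that approach using filepath.WalkDir (Go 1.16+), matching file names that end in hello.go case-insensitively like the grep -i above:

package main

import (
	"fmt"
	"io/fs"
	"log"
	"path/filepath"
	"strings"
)

func main() {
	var paths []string
	err := filepath.WalkDir(".", func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}
		if !d.IsDir() && strings.HasSuffix(strings.ToLower(d.Name()), "hello.go") {
			paths = append(paths, path)
		}
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(paths)
}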
I'm using the io package to work with an executable defined in my PATH.
The executable is called "Stockfish" (a chess engine) and is usable from the command line.
To make the engine search for the best move, you send "go depth n"; the higher the depth, the longer the search takes.
Run from my terminal it searches for about 5 seconds at depth 20, and the output looks like this:
go depth 20
info string NNUE evaluation using nn-3475407dc199.nnue enabled
info depth 1 seldepth 1 multipv 1 score cp -161 nodes 26 nps 3714 tbhits 0 time 7 pv e7e6
info depth 2 seldepth 2 multipv 1 score cp -161 nodes 51 nps 6375 tbhits 0 time 8 pv e7e6 f1d3
info depth 3 seldepth 3 multipv 1 score cp -161 nodes 79 nps 7900 tbhits 0 time 10 pv e7e6 f1d3 g8f6
info depth 4 seldepth 4 multipv 1 score cp -161 nodes 113 nps 9416 tbhits 0 time 12 pv e7e6 f1d3 g8f6 b1c3
[...]
bestmove e7e6 ponder h2h4
Now, using io.WriteString it finishes after milliseconds without any (visible) calculation:
(That's also the output of the code below)
Stockfish 14 by the Stockfish developers (see AUTHORS file)
info string NNUE evaluation using nn-3475407dc199.nnue enabled
bestmove b6b5
Here's the code I use:
func useStockfish(commands []string) string {
	cmd := exec.Command("stockfish")
	stdin, err := cmd.StdinPipe()
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range commands {
		writeString(c, stdin)
	}
	err = stdin.Close()
	if err != nil {
		log.Fatal(err)
	}
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatal(err)
	}
	return string(out)
}

func writeString(cmd string, stdin io.WriteCloser) {
	_, err := io.WriteString(stdin, cmd)
	if err != nil {
		log.Fatal(err)
	}
}
And this is an example of how I use it. The first command sets the position, the second one calculates the next best move at a depth of 20. The result is shown above.
func FetchComputerMove(game *internal.Game) {
	useStockfish([]string{"position examplepos\n", "go depth 20"})
}
To leverage engines like Stockfish you need to start the process and keep it running.
You are executing it, passing 2 commands via a stdin pipe, then closing the pipe. Closing the pipe signals end of input, which the engine treats like a quit, so it stops almost immediately instead of completing the deep search.
To run it - and keep it running - you need something like:
func startEngine(enginePath string) (stdin io.WriteCloser, stdout io.ReadCloser, err error) {
	cmd := exec.Command(enginePath)
	stdin, err = cmd.StdinPipe()
	if err != nil {
		return
	}
	stdout, err = cmd.StdoutPipe()
	if err != nil {
		return
	}
	err = cmd.Start() // start the command, but don't wait for it to complete
	return
}
The returned pipes allow you to send commands & see the output live:
stdin, stdout, err := startEngine("/usr/local/bin/stockfish")
if err != nil {
	log.Fatal(err)
}

sendCmd := func(cmd string) error {
	_, err := stdin.Write([]byte(cmd + "\n"))
	return err
}

sendCmd("position examplepos")
sendCmd("go depth 20")
then to crudely read the asynchronous response:
b := make([]byte, 10240)
for {
	n, err := stdout.Read(b)
	if err != nil {
		log.Fatalf("read error: %v", err)
	}
	log.Println(string(b[:n]))
}
once a line like bestmove d2d4 ponder g8f6 appears, you know the current analysis command has completed.
You can then either close the engine (by closing the stdin pipe) if that's all you need, or keep it open for further command submissions.
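A slightly less crude way to read the output is a bufio.Scanner, which consumes the stream line by line and lets you stop once the bestmove line appears; a sketch building on the startEngine helper above (the engine path and the position command are just examples):

stdin, stdout, err := startEngine("/usr/local/bin/stockfish")
if err != nil {
	log.Fatal(err)
}

send := func(cmd string) {
	if _, err := stdin.Write([]byte(cmd + "\n")); err != nil {
		log.Fatal(err)
	}
}
send("position startpos")
send("go depth 20")

// Read the engine output line by line until the analysis finishes.
scanner := bufio.NewScanner(stdout)
for scanner.Scan() {
	line := scanner.Text()
	fmt.Println(line)
	if strings.HasPrefix(line, "bestmove") {
		break
	}
}
if err := scanner.Err(); err != nil {
	log.Fatal(err)
}

// Close stdin when you are done so the engine can exit.
stdin.Close()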
I've been working on serializing a radix tree (used for indexing) to a file in Go. The radix tree nodes store 64-bit roaring bitmaps (see https://github.com/RoaringBitmap/roaring). The following code is what I am using, followed by the output I get when trying to load it back into memory:
serializedTree := i.index.ToMap()
encodeFile, err := os.Create(fmt.Sprintf("./serialized/%s/%s", appindex.name, i.field))
if err != nil {
	panic(err)
}
e := gob.NewEncoder(encodeFile)
err = e.Encode(serializedTree)
encodeFile.Close()

// Turn it back for testing
decodeFile, err := os.Open(fmt.Sprintf("./serialized/%s/%s", appindex.name, i.field))
defer decodeFile.Close()
d := gob.NewDecoder(decodeFile)
decoded := make(map[string]interface{})
err = d.Decode(&decoded)
fmt.Println("before decode", serializedTree)
fmt.Println("after decode", decoded)
if err != nil {
	fmt.Println("!!! Error serializing", err)
	panic(err)
}
Output:
before decode map[dan:{1822509180252590512} dan1:{6238704462486574203} goodman:{1822509180252590512,6238704462486574203}]
after decode map[]
!!! Error serializing EOF
panic: EOF
goroutine 1 [running]:
main.(*appIndexes).SerializeIndex(0xc000098240)
(I understand the decode is empty because the gob package doesn't modify on EOF error)
I've noticed that when trying with bytes directly, only 15 bytes are being stored on disk (which is way too few). Trying with the encoding/json package with json.Marshal() and json.Unmarshal(), I see 33 bytes stored, but they load back empty (the roaring bitmaps are gone):
post encode map[dan:map[] dan1:map[] goodman:map[]]
I feel like this has something to do with the fact that I am trying to serialize a map[string]interface{} rather than something like a map[string]int, but I am still fairly green with golang.
See https://repl.it/#danthegoodman/SelfishMoralCharactermapping#main.go for an example and my testing.
I believe I fixed it by converting the map[string]interface{} into a map[string]*roaring64.Bitmap before writing to disk, decoding it back into a map[string]*roaring64.Bitmap, and then converting it back to a map[string]interface{}:
m2 := make(map[string]*roaring64.Bitmap)

// Convert m1 to m2
for key, value := range m1 {
	m2[key] = value.(*roaring64.Bitmap)
}
fmt.Println("m1", m1)
fmt.Println("m2", m2)

encodeFile, err := os.Create("./test")
if err != nil {
	panic(err)
}
e := gob.NewEncoder(encodeFile)
err = e.Encode(m2)
encodeFile.Close()

// Turn it back for testing
decodeFile, err := os.Open("./test")
defer decodeFile.Close()
d := gob.NewDecoder(decodeFile)
decoded := make(map[string]*roaring64.Bitmap)
err = d.Decode(&decoded)
fmt.Println("before decode", m2)
fmt.Println("after decode", decoded)
if err != nil {
	fmt.Println("!!! Error serializing", err)
	panic(err)
}

m3 := make(map[string]interface{})

// Convert m2 to m3
for key, value := range m2 {
	m3[key] = value
}

afterDecTree := radix.NewFromMap(m3)
See https://repl.it/#danthegoodman/VictoriousUtterMention#main.go for a working example
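For what it's worth, the reason the map[string]interface{} version fails is that gob refuses to encode interface values unless the concrete type has been registered. Assuming the bitmaps themselves gob-encode fine (as the working example above suggests), registering the type lets you keep the interface-typed map; a rough sketch (the import path follows the roaring library's v1 layout and may need adjusting):

package main

import (
	"bytes"
	"encoding/gob"
	"fmt"

	"github.com/RoaringBitmap/roaring/roaring64"
)

func main() {
	// Tell gob which concrete type hides behind the interface values.
	gob.Register(&roaring64.Bitmap{})

	b := roaring64.New()
	b.Add(42)
	m := map[string]interface{}{"dan": b}

	var buf bytes.Buffer
	if err := gob.NewEncoder(&buf).Encode(m); err != nil {
		panic(err)
	}

	decoded := make(map[string]interface{})
	if err := gob.NewDecoder(&buf).Decode(&decoded); err != nil {
		panic(err)
	}
	fmt.Println("after decode", decoded)
}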
I'm new to Go!
I'm doing a simple test that reads the output from ffmpeg and writes it to a file.
I know I could do this differently (simply convert the file), but this is the beginning of a project where I later want to manipulate the bytes I read, change them, and then send them to the output. The input will be UDP and the output will be UDP too; that is, I will take the ffmpeg output, treat the bytes however I like, and then feed those bytes as input into another ffmpeg process whose output is UDP as well.
With this simple test the resulting file does not play in VLC. I believe I'm writing the bytes to the output file correctly, but the output file is always about 1 MB smaller than the input file.
I would like some help figuring out the best way to write this test so I can move on from here. I don't know if it's exactly wrong, but I have the impression that it is.
The input file is a 4K h264 video, and I believe the output should be identical, because in this simple test I am simply writing to the file whatever the command outputs.
Here is the code for analysis:
package main

import (
	"os"
	"os/exec"
)

func verificaErro(e error) {
	if e != nil {
		panic(e)
	}
}

func main() {
	dir, _ := os.Getwd()
	cmdName := "ffmpeg"
	args := []string{
		"-hide_banner",
		"-re",
		"-i",
		dir + "\\teste-4k.mp4",
		"-preset",
		"superfast",
		"-c:v",
		"h264",
		"-crf",
		"0",
		"-c",
		"copy",
		"-f", "rawvideo", "-",
	}
	cmd := exec.Command(cmdName, args...)
	stdout, err := cmd.StdoutPipe()
	verificaErro(err)
	err2 := cmd.Start()
	verificaErro(err2)

	fileOutput := dir + "/out.raw"
	var _, err3 = os.Stat(fileOutput)
	if os.IsNotExist(err3) {
		var file, err = os.Create(fileOutput)
		verificaErro(err)
		defer file.Close()
	}

	f, err4 := os.OpenFile(dir+"/out.raw", os.O_RDWR|os.O_APPEND, 0666)
	verificaErro(err4)

	bytes := make([]byte, 1024)
	for {
		_, err5 := stdout.Read(bytes)
		if err5 != nil {
			continue
		}
		if len(bytes) > 0 {
			_, err6 := f.Write(bytes)
			verificaErro(err6)
		} else {
			break
		}
	}
	f.Close()
}
You must check the return values of stdout.Read. Note that the number of bytes read (nr) may be smaller than the buffer size, so you need to re-slice the buffer to get only the valid content. Modify the reading loop as follows:
chunk := make([]byte, 40*1024)
for {
	nr, err5 := stdout.Read(chunk)
	fmt.Printf("Read %d bytes\n", nr)

	// Do something with the data, e.g. write it to the file.
	if nr > 0 {
		validData := chunk[:nr]
		nw, err6 := f.Write(validData)
		fmt.Printf("Write %d bytes\n", nw)
		verificaErro(err6)
	}

	if err5 != nil {
		// Reached end of file (stream), exit the loop.
		if err5 == io.EOF {
			break
		}
		fmt.Printf("Error = %v\n", err5)
		continue
	}
}

if err := cmd.Wait(); err != nil {
	fmt.Printf("Wait command error: %v\n", err)
}
Another solution is to use io.Copy to copy the whole output into a Go buffer. The code snippet will look like this:
var buf bytes.Buffer
n, err := io.Copy(&buf, stdout)
verificaErro(err)
fmt.Printf("Copied %d bytes\n", n)

err = cmd.Wait()
fmt.Printf("Wait error %v\n", err)

// Do something with the data.
data := buf.Bytes()
f, err4 := os.OpenFile(dir+"/out.raw", os.O_RDWR|os.O_APPEND, 0666)
verificaErro(err4)
defer f.Close()
nw, err := f.Write(data)
verificaErro(err)
f.Sync()
fmt.Printf("Write size %d bytes\n", nw)
I managed to pipeline multiple HGETALL commands, but I can't manage to convert the results to strings.
My sample code is this:
// Initialize Redis (Redigo) client on port 6379
// and default address 127.0.0.1/localhost
client, err := redis.Dial("tcp", ":6379")
if err != nil {
	panic(err)
}
defer client.Close()

// Initialize Pipeline
client.Send("MULTI")

// Send writes the command to the connection's output buffer
client.Send("HGETALL", "post:1") // where "post:1" contains " title 'hi' "
client.Send("HGETALL", "post:2") // where "post:2" contains " title 'hello' "

// Execute the Pipeline
pipe_prox, err := client.Do("EXEC")
if err != nil {
	panic(err)
}
log.Println(pipe_prox)
This works fine as long as you're comfortable with non-string results. What I'm getting is this:
[[[116 105 116 108 101] [104 105]] [[116 105 116 108 101] [104 101 108 108 111]]]
But what I need is:
"title" "hi" "title" "hello"
I've tried the following and other combinations as well:
result, _ := redis.Strings(pipe_prox, err)
log.Println(result)
But all I get is: []
I should note that it works with multiple HGET key value commands, but that's not what I need.
What am I doing wrong? How should I do it to convert the "numerical map" to strings?
Thanks for any help
Each HGETALL returns its own series of values, which need to be converted to strings, and the pipeline returns a series of those. Use the generic redis.Values to break down this outer structure first; then you can parse the inner slices.
// Execute the Pipeline
pipe_prox, err := redis.Values(client.Do("EXEC"))
if err != nil {
	panic(err)
}

for _, v := range pipe_prox {
	s, err := redis.Strings(v, nil)
	if err != nil {
		fmt.Println("Not a bulk strings response", err)
	}
	fmt.Println(s)
}
prints:
[title hi]
[title hello]
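Since each HGETALL reply is a list of field/value pairs, you can also turn each one into a map with redigo's redis.StringMap helper; a small sketch using the same pipe_prox as above:

for _, v := range pipe_prox {
	m, err := redis.StringMap(v, nil)
	if err != nil {
		fmt.Println("not a field/value reply:", err)
		continue
	}
	fmt.Println(m) // e.g. map[title:hi]
}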
You can also do it like this (redis.Values already returns a []interface{}, so no type assertion is needed):
pipe_prox, err := redis.Values(client.Do("EXEC"))
if err != nil {
	panic(err)
}
for _, v := range pipe_prox {
	fmt.Println(v)
}