Golang: file.Seek and file.WriteAt not working as expected

I am trying to make a program which writes at a provided offset in a file, so that I can, for example, start writing from the 20th offset.
Here is the sample code I was using as a reference:
package main

import (
    "fmt"
    "io/ioutil"
    "os"
)

const (
    filename   = "sample.txt"
    start_data = "12345"
)

func printContents() {
    data, err := ioutil.ReadFile(filename)
    if err != nil {
        panic(err)
    }
    fmt.Println("CONTENTS:", string(data))
}

func main() {
    err := ioutil.WriteFile(filename, []byte(start_data), 0644)
    if err != nil {
        panic(err)
    }
    printContents()

    f, err := os.OpenFile(filename, os.O_RDWR, 0644)
    if err != nil {
        panic(err)
    }
    defer f.Close()

    if _, err := f.Seek(20, 0); err != nil {
        panic(err)
    }
    if _, err := f.WriteAt([]byte("A"), 15); err != nil {
        panic(err)
    }
    printContents()
}
But I always get the same file content, which appears to begin from the start, like:
12345A
I tried changing the Seek values to (0,0), (20,0), and (10,1), which results in the same output.
I also tried changing the WriteAt offset to other values like 10 and 20, but this also gave the same result.
I want to be able to write at any specified position in the file; please tell me what is wrong with this code.

It works as expected.
After running your code, your "sample.txt" file content is (16 bytes):
[49 50 51 52 53 0 0 0 0 0 0 0 0 0 0 65]
The gap between the original 5 bytes and the "A" written at offset 15 is filled with zero bytes, which fmt.Println(string(data)) does not display, so the output looks like 12345A.
try:
package main

import (
    "fmt"
    "io/ioutil"
)

const (
    filename   = "sample.txt"
    start_data = "12345"
)

func printContents() {
    data, err := ioutil.ReadFile(filename)
    if err != nil {
        panic(err)
    }
    fmt.Println(data)
}

func main() {
    printContents()
}
You need to write enough bytes first, then use the WriteAt offset.
For example, edit:
start_data = "1234567890123456789012345678901234567890"
Then test your code:
package main

import (
    "fmt"
    "io/ioutil"
    "os"
)

const (
    filename   = "sample.txt"
    start_data = "1234567890123456789012345678901234567890"
)

func printContents() {
    data, err := ioutil.ReadFile(filename)
    if err != nil {
        panic(err)
    }
    fmt.Println(string(data))
}

func main() {
    err := ioutil.WriteFile(filename, []byte(start_data), 0644)
    if err != nil {
        panic(err)
    }
    printContents()

    f, err := os.OpenFile(filename, os.O_RDWR, 0644)
    if err != nil {
        panic(err)
    }
    defer f.Close()

    if _, err := f.Seek(20, 0); err != nil {
        panic(err)
    }
    if _, err := f.WriteAt([]byte("A"), 15); err != nil {
        panic(err)
    }
    printContents()
}
output:
1234567890123456789012345678901234567890
123456789012345A789012345678901234567890
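As an aside (my addition, not part of the answer above): WriteAt past the current end of the file is allowed, and the gap between the old end and the write offset is filled with zero bytes, which fmt.Println(string(data)) does not show. Printing the contents with %q makes that padding visible:

// Sketch: print the file contents so the zero bytes introduced by
// writing past the end of the file become visible.
data, err := ioutil.ReadFile(filename)
if err != nil {
    panic(err)
}
fmt.Printf("%q\n", data) // e.g. "12345\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00A"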

Related

Why is the file empty after writing to it with bufio.Writer?

file, err := os.OpenFile("filename.db", os.O_CREATE|os.O_APPEND, 0666)
if err != nil {
    log.Fatal(err)
}
defer file.Close()

res := 0
writer := bufio.NewWriter(file)
for _, data := range manager {
    bin, err := json.Marshal(data)
    if err != nil {
        log.Println(err)
        return
    }
    res++
    if debug {
        log.Println(res)
    }
    fmt.Printf("%s\n", bin)
    _, err = writer.Write(bin)
    if err != nil {
        log.Println(err)
    }
    _, _ = writer.WriteRune('\n')
}
The file filename.db is created (if it didn't exist), but it is empty.
Why could this happen? Why is the file empty?
I tried this both on my home PC and on a Linux server, and in both cases the file is empty.
As per the suggestion in the comments, using writer.Flush results in the foo and bar values being written to filename.db.
package main

import (
    "bufio"
    "encoding/json"
    "fmt"
    "log"
    "os"
)

type Valuable struct {
    Value string `json:"value"`
}

var debug = true
var manager []Valuable

func main() {
    manager = append(manager, Valuable{"foo"}, Valuable{"bar"})

    file, err := os.OpenFile("filename.db", os.O_CREATE|os.O_APPEND, 0666)
    if err != nil {
        log.Fatal(err)
    }
    defer file.Close()

    res := 0
    writer := bufio.NewWriter(file)
    defer writer.Flush()

    for _, data := range manager {
        bin, err := json.Marshal(data)
        if err != nil {
            log.Println(err)
            return
        }
        res++
        if debug {
            log.Println(res)
        }
        fmt.Printf("%s\n", bin)
        _, err = writer.Write(bin)
        if err != nil {
            log.Println(err)
        }
        _, _ = writer.WriteRune('\n')
    }
}
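Two small asides of mine, not part of the answer above: because deferred calls run last in, first out, defer writer.Flush() correctly runs before defer file.Close(); and since OpenFile is not asked for write access here, you would normally also pass os.O_WRONLY and check the Flush error instead of discarding it. A sketch of that variant:

// Sketch (my variant, not the original answer's code): open explicitly for
// writing and surface any error from the final Flush.
file, err := os.OpenFile("filename.db", os.O_CREATE|os.O_APPEND|os.O_WRONLY, 0666)
if err != nil {
    log.Fatal(err)
}
defer file.Close()

writer := bufio.NewWriter(file)
defer func() {
    if err := writer.Flush(); err != nil {
        log.Println("flush:", err)
    }
}()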

reading golang websocket returns random bytes

My program:
package main

import (
    "fmt"
    "io"
    "log"
    "net"

    "github.com/gobwas/ws"
)

func HandleConn(conn net.Conn) {
    for {
        header, err := ws.ReadHeader(conn)
        if err != nil {
            log.Fatal(err)
        }
        buf := make([]byte, header.Length)
        _, err = io.ReadFull(conn, buf)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(buf)
        fmt.Println(string(buf))
    }
}

func main() {
    ln, err := net.Listen("tcp", "localhost:8080")
    if err != nil {
        log.Fatal(err)
    }
    for {
        conn, err := ln.Accept()
        if err != nil {
            log.Fatal(err)
        }
        _, err = ws.Upgrade(conn)
        if err != nil {
            log.Fatal(err)
        }
        go HandleConn(conn)
    }
}
In the browser console I do:
let socket = new WebSocket("ws://127.0.0.1:8080")
socket.send("Hello world")
I see random bytes in my terminal. Each call to socket.send("Hello world") returns different bytes, but the length of the byte array is always equal to the length of the string. Where does Go get these random bytes? How can I fix this? My program is based on an example from the docs.
If you are not going to use wsutil, you need to unmask the payload yourself:
buff := make([]byte, header.Length)
_, err = io.ReadFull(conn, buff)
if err != nil {
    // handle error
}
if header.Masked {
    ws.Cipher(buff, header.Mask, 0)
}
fmt.Println(string(buff))
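Alternatively, the wsutil package that ships with gobwas/ws handles the header and the unmasking for you. A minimal sketch of the handler (my rewrite, not from the original answer), using github.com/gobwas/ws/wsutil:

// Sketch: wsutil.ReadClientData reads a complete client frame and returns
// the already-unmasked payload together with its opcode.
func HandleConn(conn net.Conn) {
    defer conn.Close()
    for {
        msg, op, err := wsutil.ReadClientData(conn)
        if err != nil {
            log.Println(err)
            return
        }
        if op == ws.OpText {
            fmt.Println(string(msg))
        }
    }
}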

golang os.Close() function works, but os.Remove() function does not

I am trying to create a file, open it, do some processing on it, and close it. Finally, I want to delete the file.
All of these operations execute successfully except the deletion.
My code is:
package main

import (
    "fmt"
    "log"
    "os"
)

func main() {
    fmt.Println("Hello")
    metaFileName := "./metadata.txt"

    _, err2 := os.Create(metaFileName)
    if err2 != nil {
        log.Fatal(err2)
    }

    openMetaFile, err := os.Open(metaFileName)
    if err != nil {
        log.Fatal(err)
    }

    err = openMetaFile.Close()
    if err != nil {
        log.Fatal(err)
    }

    err = os.Remove(metaFileName)
    if err != nil {
        log.Fatal(err)
    }

    fmt.Println("Success")
}
The output is:
Hello
2020/08/24 00:00:00 remove ./metadata.txt: The process cannot access the file because it is being used by another process.
I am clueless about this.
The problem is the first file you opened: os.Create returns an open *os.File, and it is never closed.
package main

import (
    "fmt"
    "log"
    "os"
)

const metaFileName = "./metadata.txt"

func main() {
    var (
        err                   error
        tmpFile, openMetaFile *os.File
    )
    fmt.Println("Hello")

    if tmpFile, err = os.Create(metaFileName); err != nil {
        log.Fatal(err)
    }
    if err = tmpFile.Close(); err != nil {
        log.Fatal(err)
    }

    if openMetaFile, err = os.Open(metaFileName); err != nil {
        log.Fatal(err)
    }
    if err = openMetaFile.Close(); err != nil {
        log.Fatal(err)
    }

    if err = os.Remove(metaFileName); err != nil {
        log.Fatal(err)
    }

    fmt.Println("Success")
}
As you can see, I've kept the file returned by os.Create in the tmpFile variable so that the first file you opened can be closed. The result is the following:
Hello
Success
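The same fix can also be written more compactly (a sketch of mine, not the answer's code): os.Create already returns an open, readable and writable *os.File, so there is no need to Open the file a second time; just close the handle you already have before removing the file.

// Sketch: reuse the *os.File returned by os.Create instead of re-opening.
f, err := os.Create(metaFileName)
if err != nil {
    log.Fatal(err)
}
// ... do the processing on f ...
if err := f.Close(); err != nil {
    log.Fatal(err)
}
if err := os.Remove(metaFileName); err != nil {
    log.Fatal(err)
}
fmt.Println("Success")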

Saving a continuous stream of images from ffmpeg image2pipe

I am trying to save a sequence of continuous images from ffmpeg image2pipe in Go. The problem with the code below is that it only saves the first image in the stream, due to the blocking nature of io.Copy, since it waits for the reader or the writer to close.
package main

import (
    "fmt"
    "io"
    "log"
    "os"
    "os/exec"
    "strconv"
    "time"
)

// Trying to get png from stdout pipe
func main() {
    fmt.Println("Running the camera stream")
    ffmpegCmd := exec.Command("ffmpeg", "-loglevel", "quiet", "-y", "-rtsp_transport", "tcp",
        "-i", "rtsp://admin:123456#192.168.1.41:554/h264Preview_01_main",
        "-r", "1", "-f", "image2pipe", "pipe:1")

    ffmpegOut, err := ffmpegCmd.StdoutPipe()
    if err != nil {
        return
    }
    err = ffmpegCmd.Start()
    if err != nil {
        log.Fatal(err)
    }

    count := 0
    for {
        count++
        t := time.Now()
        fmt.Println("writing image" + strconv.Itoa(count))
        filepath := "image-" + strconv.Itoa(count) + "-" + t.Format("20060102150405.png")
        out, err := os.Create(filepath)
        if err != nil {
            log.Fatal(err)
        }
        defer out.Close()

        _, err = io.Copy(out, ffmpegOut)
        if err != nil {
            log.Fatalf("unable to copy to file: %s", err.Error())
        }
    }

    if err := ffmpegCmd.Wait(); err != nil {
        log.Fatal("Error while waiting:", err)
    }
}
I implemented my own save-and-copy function based on the io.Copy code (https://golang.org/src/io/io.go):
func copyAndSave(w io.Writer, r io.Reader) error {
    buf := make([]byte, 1024, 1024)
    for {
        n, err := r.Read(buf[:])
        if n == 0 {
        }
        if n > 0 {
            d := buf[:n]
            _, err := w.Write(d)
            if err != nil {
                return err
            }
        }
        if err != nil {
            return err
        }
    }
    return nil
}
Then I updated the for loop in my main function to the block below, but I am still only getting the first image in the sequence, because r.Read(buf[:]) is a blocking call.
for {
    count++
    t := time.Now()
    fmt.Println("writing image" + strconv.Itoa(count))
    filepath := "image-" + strconv.Itoa(count) + "-" + t.Format("20060102150405.png")
    out, err := os.Create(filepath)
    if err != nil {
        log.Fatal(err)
    }
    defer out.Close()

    err = copyAndSave(out, ffmpegOut)
    if err != nil {
        if err == io.EOF {
            break
        }
        log.Fatalf("unable to copy to file: %s", err.Error())
        break
    }
}
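One way to get separate files out of a single continuous pipe (a sketch of mine, not an authoritative answer) is to split the stream on image boundaries instead of copying until EOF. This assumes ffmpeg is told to emit raw PNGs, for example by adding "-c:v", "png" to the command (an assumption on my part); each PNG then starts with a fixed 8-byte signature, so a simple splitter can cut the stream there. It uses bufio, bytes, and io; a production version would parse the chunk lengths up to IEND instead of scanning for the signature.

// Sketch: split a back-to-back stream of PNGs on the PNG signature and hand
// each complete image to a callback.
func splitPNGs(r io.Reader, handle func(n int, img []byte)) error {
    sig := []byte("\x89PNG\r\n\x1a\n")
    br := bufio.NewReader(r)
    var cur []byte
    count := 0
    for {
        b, err := br.ReadByte()
        if err != nil {
            if len(cur) > 0 {
                count++
                handle(count, cur)
            }
            if err == io.EOF {
                return nil
            }
            return err
        }
        cur = append(cur, b)
        // A new signature marks the start of the next image: emit the bytes
        // collected so far (minus the new signature) and start over from it.
        if len(cur) > len(sig) && bytes.HasSuffix(cur, sig) {
            count++
            handle(count, append([]byte(nil), cur[:len(cur)-len(sig)]...))
            cur = append(cur[:0], sig...)
        }
    }
}

The for loop in main would then reduce to a single call such as splitPNGs(ffmpegOut, func(n int, img []byte) { ... }) that writes each img slice to its own file, for example with ioutil.WriteFile.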

io.Copy() erases the Reader content

package main

import (
    "fmt"
    "io"
    "io/ioutil"
    "os"
)

func main() {
    file, err := os.Open("HelloWorld")
    if nil != err {
        fmt.Println(err)
    }
    defer file.Close()

    fileTo, err := os.Create("fileTo")
    if nil != err {
        fmt.Println(err)
    }
    defer file.Close()

    _, err = io.Copy(fileTo, file)
    if nil != err {
        fmt.Println(err)
    }

    fileByteOne, err := ioutil.ReadAll(file)
    if nil != err {
        fmt.Println(err)
    }
    fmt.Println(fileByteOne)
}
io.Copy() will erase the file content; the output is:
[]
Copy(dst Writer, src Reader) copies from src to dst, but it erases the src content. Is there any way to avoid the erasing?
io.Copy(fileTo, file) will erase the file content
It won't. But it will move the read position to EOF, meaning the next ioutil.ReadAll() will start at ... EOF.
You could close and re-open 'file' before your ioutil.ReadAll().
By the way, you have two defer file.Close() instances: the second one should be defer fileTo.Close().
Or, simpler, reset the read position with a Seek(), as suggested by PeterSO's answer.
_, err = file.Seek(0, io.SeekStart)
It is also illustrated in Go by Example: Reading Files:
There is no built-in rewind, but Seek(0, 0) accomplishes this.
(os.SEEK_SET is defined in the os constants as 0)
const SEEK_SET int = 0 // seek relative to the origin of the file
It is now (2020) deprecated and replaced by io.SeekStart.
See also "Golang, a proper way to rewind file pointer".
Reset from the end of the file to the start of the file with a seek. For example:
package main

import (
    "fmt"
    "io"
    "io/ioutil"
    "os"
)

func main() {
    file, err := os.Open("HelloWorld")
    if err != nil {
        fmt.Println(err)
    }
    defer file.Close()

    fileTo, err := os.Create("fileTo")
    if err != nil {
        fmt.Println(err)
    }
    defer fileTo.Close()

    _, err = io.Copy(fileTo, file)
    if err != nil {
        fmt.Println(err)
    }

    _, err = file.Seek(0, os.SEEK_SET) // start of file
    if err != nil {
        fmt.Println(err)
    }

    fileByteOne, err := ioutil.ReadAll(file)
    if err != nil {
        fmt.Println(err)
    }
    fmt.Println(fileByteOne)
}
Output:
[72 101 108 108 111 44 32 87 111 114 108 100 33 10]
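Another option (my sketch, not part of either answer) is to avoid the rewind entirely with io.TeeReader, which writes everything it reads from file into fileTo as a side effect, so the file is only read once:

// Sketch: the TeeReader copies into fileTo while ReadAll captures the bytes,
// so no second pass over "file" (and no Seek) is needed.
fileByteOne, err := ioutil.ReadAll(io.TeeReader(file, fileTo))
if err != nil {
    fmt.Println(err)
}
fmt.Println(fileByteOne)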
