Transferring file using TCP in Golang - go

I'm trying to make a music app that sends files over the TCP protocol, using Go and a microservice architecture. Right now I'm creating a player service that should:
Get the user token and extract the claims from it
Check whether the user exists, using the claims and the user_service microservice
Get the song from Redis
Check whether the song exists, using music_service
Read the file in chunks and send it to the client over TCP
The Redis data looks like this:
{
    "user_id": [{
        "song_id": "<song_id>"
    }]
}
But I've run into a small problem. My music files are stored in FLAC format, and when I receive one on the client, my player doesn't play it. I don't really know what the problem could be. So here's my code:
SERVER
service_setup.go
// this function is called from the main function
func setService() {
    ln, err := net.Listen("tcp", config.TCPAddress)
    if err != nil {
        panic("couldn't start tcp server")
    }
    defer ln.Close()

    for {
        conn, err := ln.Accept()
        if err != nil {
            logger.ErrorLog(fmt.Sprintf("Error: couldn't accept connection. Details: %v", err))
            return
        }
        service.DownloadSong(conn)
    }
}
downloader_service.go
func DownloadSong(conn net.Conn) {
    token, err := bufio.NewReader(conn).ReadString('\n')
    if err != nil {
        logger.ErrorLog(fmt.Sprintf("Error: couldn't get token. Details: %v", err))
        conn.Close()
        return
    }

    claims, err := jwt_funcs.DecodeJwt(token)
    if err != nil {
        conn.Close()
        return
    }

    songs, err := redis_repo.Get(claims.Id)
    if err != nil {
        conn.Close()
        return
    }

    for _, song := range songs {
        download(song, conn)
    }
}
func download(song models.SongsModel, conn net.Conn) {
    filePath, err := filepath.Abs(fmt.Sprintf("./songs/%s.flac", song.SongId))
    if err != nil {
        logger.ErrorLog(fmt.Sprintf("Error: couldn't create filepath. Details: %v", err))
        conn.Close()
        return
    }

    file, err := os.Open(filePath)
    if err != nil {
        logger.ErrorLog(fmt.Sprintf("Error: couldn't open file. Details: %v", err))
        conn.Close()
        return
    }
    defer file.Close()

    read(file, conn)
}
func read(file *os.File, conn net.Conn) {
    reader := bufio.NewReader(file)
    buf := make([]byte, 15)

    defer conn.Close()

    for {
        _, err := reader.Read(buf)
        if err != nil && err == io.EOF {
            logger.InfoLog(fmt.Sprintf("Details: %v", err))
            fmt.Println()
            return
        }
        conn.Write(buf)
    }
}
CLIENT
main.go
func main() {
    conn, _ := net.Dial("tcp", "127.0.0.1:6060")

    var glMessage []byte

    text := "eyJhbGciOiJFUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZCI6IjYzYzlhNmE1OWI3ZmQyNTQ2ZjA4ZWEyYSIsInVzZXJuYW1lIjoiMTIiLCJleHAiOjE2NzQyMTE5ODl9.aarSDhrFF1df3i2pIRyjNxTfSHKObqLU3kHJiPreredIhLNCzs7z7jMgRHQIcLaIvCOECN7bX0OaSvKdW7VKsQ\n"
    fmt.Fprint(conn, text)

    reader := bufio.NewReader(conn)
    b := make([]byte, 15)
    c := 0

    for i, _ := reader.Read(b); int(i) != 0; i, _ = reader.Read(b) {
        c += i
        glMessage = append(glMessage, b...)
    }

    os.WriteFile("./test.flac", glMessage, 0644)
}
If you know what the problem could be, please tell me. I'd really appreciate it!

It looks like you're sending the music file over the network in 15-byte chunks, and, more importantly, both sides ignore how many bytes each Read actually returned: the server calls conn.Write(buf) and the client appends b... in full, so whenever a Read fills only part of the buffer, stale bytes from the previous chunk get written into the stream and the resulting FLAC file is corrupted. Write only the bytes you just read, e.g. conn.Write(buf[:n]).
You can also increase the chunk size, for example to 8192 bytes. To do this, replace buf := make([]byte, 15) with buf := make([]byte, 8192).
Also, it's better to write the received data directly to a file rather than accumulating it all in memory. You can do this by creating a file with os.Create and writing each chunk to it:
file, err := os.Create("./test.flac")
if err != nil {
    fmt.Println("Error: couldn't create file")
    return
}
defer file.Close()

for {
    i, err := reader.Read(buf)
    if i > 0 {
        file.Write(buf[:i]) // write only the bytes actually read
    }
    if err != nil {
        if err != io.EOF {
            fmt.Println("Error: read failed:", err)
        }
        break
    }
}
I believe that this can solve the issue.
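If you don't care about counting bytes on the client, you can also let io.Copy drive the whole receive loop. A minimal sketch of such a client, assuming the same 127.0.0.1:6060 address and newline-terminated token handshake as in your code ("<your token>" is a placeholder):
package main

import (
    "fmt"
    "io"
    "net"
    "os"
)

func main() {
    conn, err := net.Dial("tcp", "127.0.0.1:6060")
    if err != nil {
        fmt.Println("Error: couldn't connect:", err)
        return
    }
    defer conn.Close()

    // send the token line first, as the server expects
    fmt.Fprint(conn, "<your token>\n")

    file, err := os.Create("./test.flac")
    if err != nil {
        fmt.Println("Error: couldn't create file:", err)
        return
    }
    defer file.Close()

    // io.Copy writes only the bytes actually read on each iteration
    // and stops cleanly when the server closes the connection (EOF)
    if _, err := io.Copy(file, conn); err != nil {
        fmt.Println("Error: copy failed:", err)
    }
}
io.Copy uses its own internal buffer, so the 15-byte chunk problem disappears entirely on this side.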

Related

How to use TLS Conn.Read to read a large size of data

I'm using tls.Conn, i.e. conn.Read from Go's crypto/tls, to read a large amount of data. However, once all the data has been read, the program never stops. Why? My simplified code is as follows:
// server
func main() {
    log.SetFlags(log.Lshortfile)

    cer, err := tls.LoadX509KeyPair("server.pem", "server.key")
    if err != nil {
        log.Println(err)
        return
    }

    config := &tls.Config{Certificates: []tls.Certificate{cer}}
    ln, err := tls.Listen("tcp", ":2000", config)
    if err != nil {
        log.Println(err)
        return
    }
    defer ln.Close()

    for {
        conn, err := ln.Accept()
        if err != nil {
            log.Println(err)
            continue
        }
        go handleConnection(conn)
    }
}
func handleConnection(conn net.Conn) {
    defer conn.Close()

    buf := make([]byte, 0)
    tmp := make([]byte, 1000)
    totalSize := 0

    for {
        n, err := conn.Read(tmp) // after all the data is read, the server gets stuck here
        totalSize += n
        if err != nil {
            if err != io.EOF {
                log.Printf("prover: conn: read: %s", err)
            }
            break
        }
        buf = append(buf, tmp[:n]...)
    }

    n, err := conn.Write([]byte("finished"))
    if err != nil {
        log.Println(n, err)
        return
    }
}
The logic of the client:
// client
func main() {
    log.SetFlags(log.Lshortfile)

    conf := &tls.Config{
        InsecureSkipVerify: true,
    }
    conn, err := tls.Dial("tcp", "127.0.0.1:2000", conf)
    if err != nil {
        log.Println(err)
        return
    }
    defer conn.Close()

    writeData := make([]byte, 4096)
    n, err := conn.Write(writeData)
    if err != nil {
        log.Println(n, err)
        return
    }
    fmt.Println("finish writing")

    buf := make([]byte, 4096)
    n, err = conn.Read(buf)
    if err != nil {
        log.Println(n, err)
    }
    fmt.Print("finish reading")
}
How can I read a large amount of data on the server and, once it has been completely read, have the server send a response to the client? I tried the same logic with net.Conn (TCPConn) and it works. Why?
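A plausible explanation, offered as a sketch rather than a definitive diagnosis: the server loops until conn.Read returns io.EOF, but the client never signals end-of-data, so that Read blocks forever. Both *net.TCPConn and *tls.Conn (since Go 1.8) have a CloseWrite method; on a TLS connection it sends a close_notify alert, after which the server's Read returns io.EOF while the server can still write its response. On the client, after writing:
// half-close the connection: tell the server no more data is coming
if err := conn.CloseWrite(); err != nil {
    log.Println(err)
    return
}

// the server's read loop now sees io.EOF, sends "finished",
// and this read picks up the response
buf := make([]byte, 4096)
n, err = conn.Read(buf)
if err != nil && err != io.EOF {
    log.Println(n, err)
}
With a raw *net.TCPConn, a FIN from the peer gives the server the same EOF, which may be why the plain TCP version appeared to work.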

Zip a Directory and not Have the Result Saved in File System

I am able to zip a file using logic similar to the zip writer seen here.
This results in an array of bytes ([]byte) being created within the bytes.Buffer object that is returned. I would just like to know if there is any way I can upload this 'zipped' array of bytes to an API endpoint that expects a 'multipart/form-data' request body (without having to save it locally).
Supplementary information:
I have code that utilizes this when compressing a folder. I am able to successfully execute an HTTP POST request with the zip file to the endpoint with this logic.
However, this unfortunately saves zipped files in a user's local file system. I would like to try to avoid this :)
You can create a multipart writer and write the zipped []byte data into a form field, with whatever field name and file name you like, as below.
func addZipFileToReq(zipped []byte) (*http.Request, error) {
    body := bytes.NewBuffer(nil)
    writer := multipart.NewWriter(body)

    part, err := writer.CreateFormFile(`fileField`, `filename`)
    if err != nil {
        return nil, err
    }
    _, err = part.Write(zipped)
    if err != nil {
        return nil, err
    }
    err = writer.Close()
    if err != nil {
        return nil, err
    }

    r, err := http.NewRequest(http.MethodPost, "https://example.com", body)
    if err != nil {
        return nil, err
    }
    r.Header.Set("Content-Type", writer.FormDataContentType())
    return r, nil
}
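A hypothetical usage sketch, where zipBuf stands in for the bytes.Buffer your zip-writer logic returns:
// zipBuf is the bytes.Buffer produced by your zip logic
req, err := addZipFileToReq(zipBuf.Bytes())
if err != nil {
    log.Fatal(err)
}
resp, err := http.DefaultClient.Do(req)
if err != nil {
    log.Fatal(err)
}
defer resp.Body.Close()
fmt.Println(resp.Status)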
If you want to stream-upload the zip, you should be able to do so with io.Pipe. The following is an incomplete and untested example to demonstrate the general idea. To make it work you'll need to modify it and potentially fix whatever bugs you encounter.
func UploadReader(r io.Reader) error {
    req, err := http.NewRequest("POST", "<UPLOAD_URL>", r)
    if err != nil {
        return err
    }

    // TODO set necessary headers (content type, auth, etc)

    res, err := http.DefaultClient.Do(req)
    if err != nil {
        return err
    } else if res.StatusCode != 200 {
        return errors.New("not ok")
    }
    return nil
}
func ZipDir(dir string, w io.Writer) error {
    zw := zip.NewWriter(w)
    defer zw.Close()

    return filepath.Walk(dir, func(path string, fi os.FileInfo, err error) error {
        if err != nil {
            return err
        }
        if !fi.Mode().IsRegular() {
            return nil
        }

        header, err := zip.FileInfoHeader(fi)
        if err != nil {
            return err
        }
        header.Name = path
        header.Method = zip.Deflate

        w, err := zw.CreateHeader(header)
        if err != nil {
            return err
        }

        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()

        if _, err := io.Copy(w, f); err != nil {
            return err
        }
        return nil
    })
}
func UploadDir(dir string) error {
    r, w := io.Pipe()

    // buffered so both goroutines can report an error without blocking
    ch := make(chan error, 2)

    wg := sync.WaitGroup{}
    wg.Add(1)
    go func() {
        defer wg.Done()
        defer w.Close()
        if err := ZipDir(dir, w); err != nil {
            ch <- err
        }
    }()

    wg.Add(1)
    go func() {
        defer wg.Done()
        defer r.Close()
        if err := UploadReader(r); err != nil {
            ch <- err
        }
    }()

    go func() {
        wg.Wait()
        close(ch)
    }()

    return <-ch
}
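Hypothetical usage, streaming a directory straight to the endpoint without ever touching the local file system:
if err := UploadDir("./myfolder"); err != nil {
    log.Fatal(err)
}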

Golang io.copy not copying entire data

I have a small piece of code that reads a 100 MB file from Google Cloud Storage and then returns the output.
The code works fine for a 1 MB file but fails for the 100 MB file.
Below is the code that is not working:
rc, err := client.Bucket("mabucket").Object(gcpurl).NewReader(ctx)
if err != nil {
    fmt.Println(err)
    return
}
defer rc.Close()

w.Header().Set("Content-Length", strconv.Itoa(int(rc.Size())))
w.Header().Set("Cache-Control", "max-age=2592000")
w.Header().Set("Content-Type", rc.ContentType())
spew.Dump(rc.ContentType())

if rc.ContentType() == "audio/wav" || rc.ContentType() == "audio/wave" {
    w.Header().Set("Accept-Ranges", "bytes")
    tilrange := rc.Size() - 1
    newRangeString := "bytes 0-" + strconv.Itoa(int(tilrange)) + "/" + strconv.Itoa(int(rc.Size()))
    w.Header().Set("Content-Range", newRangeString)
    w.WriteHeader(206)
}
//spew.Dump(rc.Attrs)
io.Copy(w, rc)
I have written other code that downloads the same file and creates a local 100 MB file; this time I am using ioutil.ReadAll. What can be the problem with io.Copy when receiving large data from GCP?
func main() {
    ctx := context.Background()
    client, _ := storage.NewClient(ctx)

    data, err := downloadFile(client, "mabucket", "606ff2b71a916907409a953f/606ff2ed1a916907409a9540/60a38a967b291f7b44488824/123/audio/210415164000M29713363.wav")
    //210415164000M29713363.wav
    if err != nil {
        log.Fatalf("Cannot read object: %v", err)
    }
    fmt.Printf("Object contents: %d\n", len(data))

    f, err := os.Create("a.wav")
    check(err)
    defer f.Close()

    n2, err := f.Write(data)
    check(err)
    fmt.Printf("wrote %d bytes\n", n2)
}
// downloadFile downloads an object.
func downloadFile(client *storage.Client, bucket, object string) ([]byte, error) {
    // [START download_file]
    ctx := context.Background()
    ctx, cancel := context.WithTimeout(ctx, time.Second*50)
    defer cancel()

    rc, err := client.Bucket(bucket).Object(object).NewReader(ctx)
    if err != nil {
        return nil, err
    }
    defer rc.Close()

    data, err := ioutil.ReadAll(rc)
    if err != nil {
        return nil, err
    }
    return data, nil
    // [END download_file]
}
It turned out that io.Copy was copying the data, but the OS aborted the connection partway through; io.Copy had been returning an error that the code ignored: (*net.OpError)(0xc0003302d0)(write tcp [::1]:80->[::1]:63014: wsasend: An established connection was aborted by the software in your host machine)
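The general lesson: always check io.Copy's return values. A minimal sketch of the streaming line above with error handling added (the logging style is just an assumption):
// io.Copy reports how many bytes were written and any error;
// a client disconnect surfaces here as a *net.OpError
n, err := io.Copy(w, rc)
if err != nil {
    log.Printf("copy aborted after %d bytes: %v", n, err)
    return
}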

Golang broken pipe

I have a code segment that reads from a TCP connection, and after the first few connections the sender reports a broken pipe, although no error occurs in my Go code. The server sending the messages is, at its core, the Coda Hale metrics library, more specifically the PickledGraphite class.
Here is the Go code that is reading:
func handleConn(conn net.Conn, id int) {
    fmt.Println("handleConn")
    defer conn.Close()

    buf := make([]byte, 0, 10240)
    tmp := make([]byte, 256)

    fmt.Printf("%v Reading...\n", id)
    for {
        n, err := conn.Read(tmp)
        fmt.Printf("%v Read %v\n", id, n)
        if err != nil {
            fmt.Printf("%v Got err: %v\n", id, err)
            if err != io.EOF {
                fmt.Printf("%v read error: %v\n", id, err)
            }
            buf = append(buf, tmp[:n]...)
            break
        }
        buf = append(buf, tmp[:n]...)
    }
    fmt.Printf("%v Done Reading\n", id)
    // Do stuff with buf
}
func main() {
    ln, err := net.Listen("tcp", ":5555")
    if err != nil {
        fmt.Println(err)
        os.Exit(-1)
    }

    id := 1
    for {
        fmt.Println("getting connection\n")
        conn, err := ln.Accept()
        if err != nil {
            fmt.Println(err)
            break
        }
        conn.SetReadDeadline(time.Now().Add(20 * time.Second))
        fmt.Println("Got connection")
        go handleConn(conn, id)
        id = id + 1
        fmt.Println("sent handleConn\n")
    }
}
My code is still running; I can still execute nc commands and see my code receive the data, so I am not sure how I am losing the connection.
If I remove the conn.SetReadDeadline() line, then my code no longer receives an EOF after the first message.
Thanks in advance
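One observation that may be relevant (a possible lead, not a confirmed diagnosis): SetReadDeadline sets an absolute point in time, not an idle timeout. A single deadline set at accept time expires 20 seconds later no matter how actively the peer is writing; after that, every Read fails with a timeout, the handler closes the connection, and a sender that keeps writing sees a broken pipe. A sketch of the read loop that renews the deadline before each Read instead:
for {
    // renew the deadline on every iteration so it behaves as an
    // idle timeout rather than a hard 20-second cap on the connection
    if err := conn.SetReadDeadline(time.Now().Add(20 * time.Second)); err != nil {
        fmt.Printf("%v deadline error: %v\n", id, err)
        break
    }
    n, err := conn.Read(tmp)
    if n > 0 {
        buf = append(buf, tmp[:n]...)
    }
    if err != nil {
        if err != io.EOF {
            fmt.Printf("%v read error: %v\n", id, err)
        }
        break
    }
}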

golang scp file using crypto/ssh

I'm trying to download a remote file over ssh
The following approach works fine on shell
ssh hostname "tar cz /opt/local/folder" > folder.tar.gz
However, the same approach in Go produces a difference in the output artifact size. For example, the same folder produces a 179 B gz artifact with pure shell and a 178 B one with the Go script.
I assume that something is being missed from the io.Reader, or that the session is closed too early. I kindly ask you guys to help.
Here is the example of my script:
func executeCmd(cmd, hostname string, config *ssh.ClientConfig, path string) error {
    conn, _ := ssh.Dial("tcp", hostname+":22", config)
    session, err := conn.NewSession()
    if err != nil {
        panic("Failed to create session: " + err.Error())
    }

    r, _ := session.StdoutPipe()
    scanner := bufio.NewScanner(r)

    go func() {
        defer session.Close()

        name := fmt.Sprintf("%s/backup_folder_%v.tar.gz", path, time.Now().Unix())
        file, err := os.OpenFile(name, os.O_APPEND|os.O_WRONLY|os.O_CREATE, 0644)
        if err != nil {
            panic(err)
        }
        defer file.Close()

        for scanner.Scan() {
            fmt.Println(scanner.Bytes())
            if err := scanner.Err(); err != nil {
                fmt.Println(err)
            }
            if _, err = file.Write(scanner.Bytes()); err != nil {
                log.Fatal(err)
            }
        }
    }()

    if err := session.Run(cmd); err != nil {
        fmt.Println(err.Error())
        panic("Failed to run: " + err.Error())
    }
    return nil
}
Thanks!
bufio.Scanner is for newline-delimited text. According to the documentation, the scanner removes the newline characters, stripping every newline byte (0x0A) out of your binary file.
You don't need a goroutine to do the copy, because you can use session.Start to start the process asynchronously.
You probably don't need bufio either. You should be using io.Copy to copy the file; it already has an internal buffer, on top of any buffering done in the ssh client itself. If an additional buffer is needed for performance, wrap the session output in a bufio.Reader.
Finally, you return an error value, so use it rather than panicking on regular error conditions.
conn, err := ssh.Dial("tcp", hostname+":22", config)
if err != nil {
    return err
}

session, err := conn.NewSession()
if err != nil {
    return err
}
defer session.Close()

r, err := session.StdoutPipe()
if err != nil {
    return err
}

name := fmt.Sprintf("%s/backup_folder_%v.tar.gz", path, time.Now().Unix())
file, err := os.OpenFile(name, os.O_APPEND|os.O_WRONLY|os.O_CREATE, 0644)
if err != nil {
    return err
}
defer file.Close()

if err := session.Start(cmd); err != nil {
    return err
}

if _, err := io.Copy(file, r); err != nil {
    return err
}

if err := session.Wait(); err != nil {
    return err
}
return nil
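For completeness, a hypothetical way to build the *ssh.ClientConfig this code expects, assuming key-based auth (adapt the user name and key path to your setup):
key, err := os.ReadFile("/home/user/.ssh/id_rsa")
if err != nil {
    log.Fatal(err)
}
signer, err := ssh.ParsePrivateKey(key)
if err != nil {
    log.Fatal(err)
}
config := &ssh.ClientConfig{
    User: "user",
    Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
    // for real use, verify the host key instead of ignoring it
    HostKeyCallback: ssh.InsecureIgnoreHostKey(),
}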
You can try doing something like this:
r, _ := session.StdoutPipe()
reader := bufio.NewReader(r)

go func() {
    defer session.Close()
    // open file etc

    // 10 is the number of bytes you'd like to copy in one write operation
    p := make([]byte, 10)
    for {
        n, err := reader.Read(p)
        if n > 0 {
            // write before checking EOF so a final short read isn't dropped
            if _, err := file.Write(p[:n]); err != nil {
                log.Fatal(err)
            }
        }
        if err == io.EOF {
            break
        }
        if err != nil {
            log.Fatal("err", err)
        }
    }
}()
Make sure your goroutines are synchronized properly so the output is completely written to the file.
