Encoding a file to send to Google AutoML - go

I am writing a Go script to send an image to the prediction engine of the Google AutoML API.
It accepts most files using the code below, but for certain .jpg or .jpeg files it returns error 500 saying the file is invalid. Mostly it works, but I can't figure out the exceptions; they are perfectly valid JPEGs.
I am encoding the payload using EncodeToString.
Among other things, I have tried decoding the image and saving it to a PNG; nothing seems to work. It just doesn't like some images.
I wonder if I have an error in my method? Any help would be really appreciated. Thanks.
PS: the file saves to the filesystem and uploads to S3 just fine. It's only the encoding to a string for Google that fails.
imgFile, err := os.Open(filename)
if err != nil {
    fmt.Println(err)
}
defer imgFile.Close()

img, fname, err := image.Decode(imgFile)
if err != nil {
    fmt.Println(fname, err)
}

// Re-encode the decoded image as JPEG.
buf := new(bytes.Buffer)
err = jpeg.Encode(buf, img, nil)

// Encode as base64.
imgBase64Str := base64.StdEncoding.EncodeToString(buf.Bytes())

payload := fmt.Sprintf(`{"payload": {"image": {"imageBytes": "%v"}}}`, imgBase64Str)

// Send as bytes.
pay := bytes.NewBuffer([]byte(payload))
req, err := http.NewRequest(http.MethodPost, URL.String(), pay)

I believe I fixed it.
I looked in the Google docs again, and for Speech-to-Text (which is a different API) it says to encode with base64 -w 0.
So, looking at the Go docs, it seems RawStdEncoding is the right one to use to replicate that behaviour, not StdEncoding.
No image failures yet. Hope this helps someone else one day.
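For reference, a minimal sketch of that approach (an assumption-laden sketch, not the poster's exact fix): it sends the file's original bytes rather than a re-encoded JPEG, uses RawStdEncoding, and builds the JSON with encoding/json instead of hand-formatting the string.
raw, err := os.ReadFile(filename) // send the original bytes; no decode/re-encode needed
if err != nil {
    return err
}
body, err := json.Marshal(map[string]interface{}{
    "payload": map[string]interface{}{
        "image": map[string]string{
            "imageBytes": base64.RawStdEncoding.EncodeToString(raw),
        },
    },
})
if err != nil {
    return err
}
req, err := http.NewRequest(http.MethodPost, URL.String(), bytes.NewReader(body))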

Related

Is there a way to serve part of a video, providing start and end seconds?

I am trying to serve video from storage through my backend. I use Go + GIN. It works, but I need to implement video requests with start and end parameters. For example, I have a video with a 10-minute duration and I want to request a fragment from minute 2 to minute 3. Is this possible, or are there examples somewhere?
This is what I have now:
accessKeyID := ""
secretAccessKey := ""
useSSL := false
ctx := context.Background()
endpoint := "127.0.0.1:9000"
bucketName := "mybucket"

// Initialize minio client object.
minioClient, err := minio.New(endpoint, &minio.Options{
    Creds:  credentials.NewStaticV4(accessKeyID, secretAccessKey, ""),
    Secure: useSSL,
})
if err != nil {
    log.Fatalln(err)
}

// Get the file from the bucket.
object, err := minioClient.GetObject(ctx, bucketName, "1.mp4", minio.GetObjectOptions{})
if err != nil {
    fmt.Println(err)
    return
}
objInfo, err := object.Stat()
if err != nil {
    return
}

// Read the whole object into memory. (For large files it would be better to
// pass `object` to DataFromReader directly instead of buffering it.)
buffer, err := io.ReadAll(object)
if err != nil {
    fmt.Println(err)
    return
}

c.Writer.Header().Set("Content-Length", fmt.Sprintf("%d", objInfo.Size))
c.Writer.Header().Set("Content-Type", "video/mp4")
c.Writer.Header().Set("Connection", "keep-alive")
c.Writer.Header().Set("Content-Range", fmt.Sprintf("bytes 0-%d/%d", objInfo.Size-1, objInfo.Size))
c.DataFromReader(200, objInfo.Size, "video/mp4", bytes.NewReader(buffer), nil)
This will require your program to at least demux the media stream to get timing information out of it (if you're using a container that supports that), or to actually decode the video stream if it doesn't; in general, you can't know how many bytes you need to seek into a video file to reach a specific location¹.
As the output again needs to be a valid media container so that whoever requested it can deal with it, there is also going to be remuxing into an output container.
So pick yourself a library that can do that and read its documentation. FFmpeg / libav is the classical choice there, but I have positively no idea whether someone has already written Go bindings for it. If not, doing that would be worthwhile.
¹ There are cases where you can, which would probably apply to MPEG transport streams with a fixed mux bitrate. But unless you're streaming video for actual TV towers or TV satellites that need a constant-rate data stream, you will likely not be dealing with these.
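If shelling out to ffmpeg is acceptable, a minimal sketch of the cut-and-remux step could look like the following. This is an assumption-laden sketch rather than a complete handler: it assumes ffmpeg is installed on the server, that the object has already been fetched to a local temporary file, and that copying the streams without re-encoding is acceptable for your container.
// Cut the 2:00-3:00 fragment and stream it straight into the GIN response.
// -ss/-to select the time range, -c copy remuxes without re-encoding,
// and the movflags make the MP4 writable to a non-seekable pipe.
cmd := exec.Command("ffmpeg",
    "-ss", "120",
    "-to", "180",
    "-i", "/tmp/1.mp4",
    "-c", "copy",
    "-movflags", "frag_keyframe+empty_moov",
    "-f", "mp4",
    "pipe:1",
)
cmd.Stdout = c.Writer // write directly into the response body
cmd.Stderr = os.Stderr
c.Header("Content-Type", "video/mp4")
if err := cmd.Run(); err != nil {
    log.Println("ffmpeg:", err)
}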

What is privateKey in Oracle Object Storage?

c, clerr := objectstorage.NewObjectStorageClientWithConfigurationProvider(common.NewRawConfigurationProvider(
    "ocid1.tenancy.oc1..aaaaaaa5jo3pz1alm1o45rzx1ucaab4njxbwaqqbc7ld3l6biayjaert5la",
    "ocid1.user.oc1..aaaaaaaauax5bo2gg3az46h53467u57ue86rk9h2wax8w7zzamxgwvsi34ja",
    "ap-seoul-1",
    "98:bc:6b:13:c1:64:ds:8b:9c:15:11:d2:8d:e5:92:db",
))
I'm trying to use Oracle Object Storage. I checked the official manual, but there is something I don't understand. As above, I need the privateKey and privateKeyPassphrase arguments, but I don't know where to get them. Is there a detailed explanation or example?
What I want is to upload a file to storage.
Which page in the Oracle console do I go to for the keys I need? Please give me some advice.
config, err := common.ConfigurationProviderFromFile("./config", "")
if err != nil {
    t.Error(err.Error())
}
c, err := objectstorage.NewObjectStorageClientWithConfigurationProvider(config)
if err != nil {
    t.Error(err.Error())
}
https://cloud.oracle.com/identity/domains/my-profile/api-keys
I generated a key on this page, put it in my project, and with the above code I was able to get started without any problems.
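If you would rather keep the raw provider from the question instead of a config file, the private key is simply the contents of the PEM file you download when adding the API key on that page. A minimal sketch (the PEM path and the truncated OCIDs/fingerprint here are placeholders):
pemBytes, err := os.ReadFile("oci_api_key.pem") // the key file downloaded from the console
if err != nil {
    log.Fatalln(err)
}
provider := common.NewRawConfigurationProvider(
    "ocid1.tenancy.oc1..aaaa...", // tenancy OCID
    "ocid1.user.oc1..aaaa...",    // user OCID
    "ap-seoul-1",                 // region
    "98:bc:6b:13:...",            // fingerprint shown next to the API key
    string(pemBytes),             // the private key itself
    nil,                          // passphrase, if the key has one
)
c, err := objectstorage.NewObjectStorageClientWithConfigurationProvider(provider)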

Trying to detect if multiple files are present in a multipart/form-data request, and rejecting multiple attachments

I am building an API in Go that needs to store a file sent in a multipart/form-data request. It also needs to return an error if more than one file is attached, and the files do not have key values attached. I'm running into an issue where the Part from the multipart Reader changes as I iterate: I can either successfully upload the first file but not return the error, or I return the error when needed, but then a valid request iterates past the part and uploads nothing.
I have written a couple for loops trying this, and some without.
i := 0
var data io.Reader
for part, err := reader.NextPart(); err != io.EOF; part, err = reader.NextPart() {
    i++
    data = part
}
if i > 1 {
    return nil, errors.New("too many files")
}
req := storeRequest{
    Data:     data,
    FileName: r.URL.Path,
}
return req, nil
Any suggestions on how I could handle this? Thanks in advance.
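One possible approach (a sketch, not a tested answer; the storeRequest fields follow the snippet above, except that it uses the part's own file name): buffer each file part before moving on, since NextPart invalidates the previous part, and bail out as soon as a second file shows up.
var data []byte
var fileName string
fileCount := 0
for {
    part, err := reader.NextPart()
    if err == io.EOF {
        break
    }
    if err != nil {
        return nil, err
    }
    if part.FileName() == "" {
        continue // an ordinary form field, not a file
    }
    fileCount++
    if fileCount > 1 {
        return nil, errors.New("too many files")
    }
    // Read the part now; it is no longer readable after the next NextPart call.
    data, err = io.ReadAll(part)
    if err != nil {
        return nil, err
    }
    fileName = part.FileName()
}
if fileCount == 0 {
    return nil, errors.New("no file attached")
}
req := storeRequest{
    Data:     bytes.NewReader(data),
    FileName: fileName,
}
return req, nil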

How to handle chunked file upload

I'm creating a simple application that allows users to upload big files using simple-uploader, since this plugin sends files in chunks instead of one big file. The problem is that when I save the file, only the first chunk gets saved. Is there a way in Go to wait for all the chunks to arrive at the server and then save the file afterward?
Here's a snippet of the code I'm doing:
dFile, err := c.FormFile("file")
if err != nil {
    return SendError(c, err)
}
filename := dFile.Filename

f, err := dFile.Open()
if err != nil {
    return SendError(c, err)
}
defer f.Close()

// save file in s3
duration := sss.UploadFile(f, "temp/"+filename)
// ... send response
By the way, for this project I'm using the fiber framework.
While working on this I encountered tus-js-client, which does the same thing as simple-uploader, and a Go implementation called tusd, which reassembles the chunks so you don't have to worry about it anymore.
Here's a discussion where I posted my solution: https://stackoverflow.com/a/65785097/549529.
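For reference, the basic server setup is roughly the following (a sketch based on tusd's documented example, assuming the v1 pkg/filestore and pkg/handler import paths; hooking it into fiber would need an adapter or a separate listener):
package main

import (
    "log"
    "net/http"

    "github.com/tus/tusd/pkg/filestore"
    tusd "github.com/tus/tusd/pkg/handler"
)

func main() {
    // Chunks are written here and reassembled into a single upload by tusd.
    store := filestore.FileStore{Path: "./uploads"}
    composer := tusd.NewStoreComposer()
    store.UseIn(composer)

    handler, err := tusd.NewHandler(tusd.Config{
        BasePath:      "/files/",
        StoreComposer: composer,
    })
    if err != nil {
        log.Fatalf("unable to create handler: %v", err)
    }

    http.Handle("/files/", http.StripPrefix("/files/", handler))
    log.Fatal(http.ListenAndServe(":8080", nil))
}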

How to set DPI using Golang?

The Go image package is handy to some extent, but it lacks support for setting the DPI of an image. I checked the file header of the generated file, FF D8 FF DB, which looks like raw JPEG. AFAIK, raw JPEG doesn't carry DPI the way JFIF does. So here's my question: how do I set the DPI of the generated image? Or how do I convert raw JPEG to JFIF, in which I know I can edit a specific byte of the file to set the DPI? Previously I embedded an AdvancedBatchConverter executable in my app and used exec.Command(fmt.Sprintf("%s/AdvancedBatchConverter/abc.exe", cwd), outputFile, "/jfif", fmt.Sprintf("/convert=%s", jfifFileName))
to do the trick, but it disgusted me every time I looked at the code.
I believe you're looking for the EXIF values XResolution and YResolution.
My understanding is that the native JPEG encoder doesn't have any options for EXIF data.
https://github.com/dsoprea/go-exif will let you modify the EXIF data.
Additionally, I believe that if you first write the JPEG to a bytes.Buffer or similar and then append the EXIF, you can do the entire thing in memory without flushing to disk first.
I hope that helps.
github.com/dsoprea/go-exif/v2 can read and write EXIF data, together with the package github.com/dsoprea/go-jpeg-image-structure.
Here is a code example for writing DPI (XResolution, YResolution) to an image.
import (
    "bytes"
    "fmt"
    "io/ioutil"

    exif2 "github.com/dsoprea/go-exif/v2"
    exifcommon "github.com/dsoprea/go-exif/v2/common"
    jpegstructure "github.com/dsoprea/go-jpeg-image-structure"
    log "github.com/dsoprea/go-logging"
)

func SetExifData(filepath string) error {
    jmp := jpegstructure.NewJpegMediaParser()
    intfc, err := jmp.ParseFile(filepath)
    log.PanicIf(err)

    sl := intfc.(*jpegstructure.SegmentList)

    // Make sure we don't start out with EXIF data.
    wasDropped, err := sl.DropExif()
    log.PanicIf(err)
    if !wasDropped {
        fmt.Printf("Expected the EXIF segment to be dropped, but it wasn't.")
    }

    im := exif2.NewIfdMapping()
    err = exif2.LoadStandardIfds(im)
    log.PanicIf(err)

    ti := exif2.NewTagIndex()
    rootIb := exif2.NewIfdBuilder(im, ti, exifcommon.IfdPathStandard, exifcommon.EncodeDefaultByteOrder)

    // 96 DPI, written as rational values (96/1).
    err = rootIb.AddStandardWithName("XResolution", []exifcommon.Rational{{Numerator: uint32(96), Denominator: uint32(1)}})
    log.PanicIf(err)
    err = rootIb.AddStandardWithName("YResolution", []exifcommon.Rational{{Numerator: uint32(96), Denominator: uint32(1)}})
    log.PanicIf(err)

    err = sl.SetExif(rootIb)
    log.PanicIf(err)

    b := new(bytes.Buffer)
    err = sl.Write(b)
    log.PanicIf(err)

    if err := ioutil.WriteFile(filepath, b.Bytes(), 0644); err != nil {
        fmt.Printf("write file err: %v", err)
    }
    return nil
}
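Calling it rewrites the file in place with the 96 DPI resolution tags; the path below is just an example.
if err := SetExifData("./generated.jpg"); err != nil {
    fmt.Printf("set exif err: %v\n", err)
}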
