I will be working on a project involving GIF images, and I have tried some basic operations on them in Go (such as retrieving frames or creating a GIF from a set of images). For now, let's take a simple example in which I only try to decode a GIF and then encode it again. I tried to use the "image/gif" package, but I am unable to get it to do what I want.
Here is the code:
package main

import (
    "image/gif"
    "os"
)

func main() {
    inputFile, err := os.Open("travolta.gif")
    defer inputFile.Close()
    if err != nil {
        panic(err)
    }

    g, err := gif.DecodeAll(inputFile)
    if err != nil {
        panic(err)
    }

    outputFile, err := os.OpenFile("travolta2.gif", os.O_WRONLY|os.O_CREATE, 0777)
    defer outputFile.Close()
    if err != nil {
        panic(err)
    }

    err = gif.EncodeAll(outputFile, g)
    if err != nil {
        panic(err)
    }
}
When I run the code it does not panic, and another GIF is indeed created. Unfortunately, it is corrupted. Moreover, the GIF's size changes from 3.4 MB to 4.4 MB. Is this not the right way to read and save a GIF? What mistake am I making?
EDIT:
By corrupted I mean that when I try to open it an error occurs - screenshot here: http://www.filedropper.com/obrazekpociety.
GIF:
http://vader.joemonster.org/upload/rwu/1539970c1d48acceLw7m.gif
Go version 1.7.4
The problem may be with how the file is being passed to DecodeAll: you are handing it the raw *os.File. Try wrapping it in a bufio.Reader first.
I have adapted your code so that it creates a bufio.Reader from the file and then hands that to DecodeAll:
package main

import (
    "bufio"
    "image/gif"
    "os"
)

func main() {
    inputFile, err := os.Open("travolta.gif")
    defer inputFile.Close()
    if err != nil {
        panic(err)
    }

    r := bufio.NewReader(inputFile)
    g, err := gif.DecodeAll(r)
    if err != nil {
        panic(err)
    }

    // ... remaining code
}
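For completeness, here is a sketch (not part of the original answer) of what could go where the "// ... remaining code" comment is: it mirrors the question's encode step, but writes through a bufio.Writer and flushes it before the program exits.

    outputFile, err := os.Create("travolta2.gif")
    if err != nil {
        panic(err)
    }
    defer outputFile.Close()

    // os.Create truncates an existing file, unlike O_WRONLY|O_CREATE alone,
    // so leftover bytes from a previous run cannot corrupt the output.
    w := bufio.NewWriter(outputFile)
    if err := gif.EncodeAll(w, g); err != nil {
        panic(err)
    }
    // Flush the buffered writer before the file is closed.
    if err := w.Flush(); err != nil {
        panic(err)
    }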
Related
I just tried to download a webp image from a URL, but I found something odd when I try to process the stored image.
If I download the image from the browser, it can be decoded using the x/image/webp package, but if I fetch it with http.Get(), create a new file, and io.Copy() the response body into it, decoding fails with:
"missing RIFF chunk header"
I assume that I need to write some RIFF chunk header when I store the file from Go code.
func main() {
    response, e := http.Get(URL)
    if e != nil {
        log.Fatal(e)
    }
    defer response.Body.Close()

    // open a file for writing
    file, err := os.Create("tv.webp")
    if err != nil {
        log.Fatal(err)
    }
    defer file.Close()

    // Use io.Copy to just dump the response body to the file. This supports huge files.
    _, err = io.Copy(file, response.Body)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("Success!")

    imgData, err := os.Open("tv.webp")
    if err != nil {
        fmt.Println(err)
        return
    }
    log.Printf("%+v", imgData)

    image, err := webp.Decode(imgData)
    if err != nil {
        fmt.Println(err)
        return
    }
    fmt.Println(image.Bounds())
}
Here is the URL: IMG URL
The downloaded file is not a webp image; it's a png.
package main

import (
    "fmt"
    "image"
    "io"
    "log"
    "net/http"
    "os"

    _ "image/png"
)

func main() {
    response, e := http.Get("https://www.sony.com/is/image/gwtprod/0abe7672ff4c6cb4a0a4d4cc143fd05b?fmt=png-alpha")
    if e != nil {
        log.Fatal(e)
    }
    defer response.Body.Close()

    file, err := os.Create("dump")
    if err != nil {
        log.Fatal(err)
    }
    defer file.Close()

    _, err = io.Copy(file, response.Body)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("Success!")

    imageFile, err := os.Open("dump")
    if err != nil {
        panic(err)
    }

    m, name, err := image.Decode(imageFile)
    if err != nil {
        panic(err)
    }
    fmt.Println("image type is ", name, m.Bounds())
}
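If you want to verify what kind of image was actually downloaded before choosing a decoder, a minimal sketch like the following (reusing the "dump" file name from the code above) sniffs the content type with http.DetectContentType:

package main

import (
    "fmt"
    "io"
    "log"
    "net/http"
    "os"
)

func main() {
    f, err := os.Open("dump") // the file written by the code above
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()

    // DetectContentType only looks at (at most) the first 512 bytes.
    buf := make([]byte, 512)
    n, err := f.Read(buf)
    if err != nil && err != io.EOF {
        log.Fatal(err)
    }
    // Prints e.g. "image/png" or "image/webp".
    fmt.Println(http.DetectContentType(buf[:n]))
}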
I am trying to get the content of a publicly available file using ioutil.ReadFile(), but it doesn't find the file: panic: open http://www.pdf995.com/samples/pdf.pdf: No such file or directory
Here's my code:
// Reading and writing files are basic tasks needed for
// many Go programs. First we'll look at some examples of
// reading files.
package main

import (
    "fmt"
    "io/ioutil"
)

// Reading files requires checking most calls for errors.
// This helper will streamline our error checks below.
func check(e error) {
    if e != nil {
        panic(e)
    }
}

func main() {
    fileInUrl, err := ioutil.ReadFile("http://www.pdf995.com/samples/pdf.pdf")
    if err != nil {
        panic(err)
    }
    fmt.Printf("HERE --- fileInUrl: %+v", fileInUrl)
}
Here's a go playground example
ioutil.ReadFile() does not support HTTP URLs; it only reads local files.
If you look at the source code (https://golang.org/src/io/ioutil/ioutil.go?s=1503:1549#L42), you can see it simply opens the path with os.Open.
You can download the file over HTTP and save it to disk like this instead:
package main

import (
    "io"
    "net/http"
    "os"
)

func main() {
    fileUrl := "http://www.pdf995.com/samples/pdf.pdf"
    if err := DownloadFile("example.pdf", fileUrl); err != nil {
        panic(err)
    }
}

func DownloadFile(filepath string, url string) error {
    // Get the data
    resp, err := http.Get(url)
    if err != nil {
        return err
    }
    defer resp.Body.Close()

    // Create the file
    out, err := os.Create(filepath)
    if err != nil {
        return err
    }
    defer out.Close()

    // Write the body to file
    _, err = io.Copy(out, resp.Body)
    return err
}
Note that this won't run on the Go Playground, because outbound network access is not available there (you get the error "dial tcp: Protocol not available"), so you have to run it on your own machine.
Why would f.Write() not return any error if I remove the file before I write?
package main

import (
    "fmt"
    "os"
)

func main() {
    f, err := os.Create("foo")
    if err != nil {
        panic(err)
    }
    if err := os.Remove("foo"); err != nil {
        panic(err)
    }
    if _, err := f.Write([]byte("hello")); err != nil {
        panic(err) // would expect panic here
    }
    fmt.Println("no panic?")
}
http://play.golang.org/p/0QllIB6L9O
Apparently this is expected.
When you delete a file you really remove a link to the file (to the inode). If someone already has that file open, they get to keep the file descriptor they have. The file remains on disk, taking up space, and can be written to and read from if you have access to it.
Source: https://unix.stackexchange.com/questions/146929/how-can-a-log-program-continue-to-log-to-a-deleted-file
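A minimal sketch of that behaviour (Unix-specific: on Windows the Remove call would fail while the file is still open) shows that the unlinked file can even be read back through the open descriptor:

package main

import (
    "fmt"
    "io"
    "os"
)

func main() {
    f, err := os.Create("foo")
    if err != nil {
        panic(err)
    }
    defer f.Close()

    // Unlink the name; the inode stays alive while f is open.
    if err := os.Remove("foo"); err != nil {
        panic(err)
    }
    if _, err := f.Write([]byte("hello")); err != nil {
        panic(err)
    }

    // Rewind and read back through the same descriptor.
    if _, err := f.Seek(0, io.SeekStart); err != nil {
        panic(err)
    }
    data, err := io.ReadAll(f)
    if err != nil {
        panic(err)
    }
    fmt.Printf("read back after unlink: %q\n", data)
}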
I've written the following code to tar a file. The code works, but strangely, if I untar the archive the file permissions are gone, so I can't read the file unless I chmod it afterwards:
package main

import (
    "archive/tar"
    "io/ioutil"
    "log"
    "os"
)

func main() {
    c, err := os.Create("/path/to/tar/file/test.tar")
    if err != nil {
        log.Fatalln(err)
    }
    tw := tar.NewWriter(c)

    f, err := os.Open("sample.txt")
    if err != nil {
        log.Fatalln(err)
    }
    fi, err := f.Stat()
    if err != nil {
        log.Fatalln(err)
    }

    hdr := &tar.Header{
        Name: f.Name(),
        Size: fi.Size(),
    }
    if err := tw.WriteHeader(hdr); err != nil {
        log.Fatalln(err)
    }

    r, err := ioutil.ReadFile("sample.txt")
    if err != nil {
        log.Fatalln(err)
    }
    if _, err := tw.Write(r); err != nil {
        log.Fatalln(err)
    }
    if err := tw.Close(); err != nil {
        log.Fatalln(err)
    }
}
Any idea what I'm doing wrong?
You're not preserving the original permissions of the file. You're manually creating a header, and specifying only the name and size. Instead, use tar.FileInfoHeader to build the header.
package main

import (
    "archive/tar"
    "io"
    "log"
    "os"
)

func main() {
    c, err := os.Create("/path/to/tar/file/test.tar")
    if err != nil {
        log.Fatalln(err)
    }
    tw := tar.NewWriter(c)

    f, err := os.Open("sample.txt")
    if err != nil {
        log.Fatalln(err)
    }
    fi, err := f.Stat()
    if err != nil {
        log.Fatalln(err)
    }

    // create header from FileInfo
    hdr, err := tar.FileInfoHeader(fi, "")
    if err != nil {
        log.Fatalln(err)
    }
    if err := tw.WriteHeader(hdr); err != nil {
        log.Fatalln(err)
    }

    // instead of reading the whole file into memory, prefer io.Copy
    n, err := io.Copy(tw, f)
    if err != nil {
        log.Fatalln(err)
    }
    log.Printf("Wrote %d bytes\n", n)

    // close the tar writer so the trailing blocks are flushed
    if err := tw.Close(); err != nil {
        log.Fatalln(err)
    }
}
Also note that I used io.Copy to copy data from the file (an io.Reader) to the tar writer (an io.Writer). This will work much better for larger files.
Also - pay special attention to this note from the docs:
Because os.FileInfo's Name method returns only the base name of the file it describes, it may be necessary to modify the Name field of the returned header to provide the full path name of the file.
In this simple example, you're just using sample.txt so you shouldn't run into trouble. If you wanted to preserve a directory structure in your tar, you may have to modify the Name field in the header.
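For illustration, here is a sketch (not from the original answer; the directory name "testdata" and archive name "tree.tar" are made up for this example) of taring a whole directory while keeping its layout: the header's Name is set to the path relative to the archived root rather than the base name.

package main

import (
    "archive/tar"
    "io"
    "log"
    "os"
    "path/filepath"
)

func main() {
    out, err := os.Create("tree.tar")
    if err != nil {
        log.Fatalln(err)
    }
    defer out.Close()

    tw := tar.NewWriter(out)
    defer tw.Close()

    root := "testdata" // hypothetical directory to archive
    err = filepath.Walk(root, func(path string, fi os.FileInfo, err error) error {
        if err != nil {
            return err
        }
        rel, err := filepath.Rel(root, path)
        if err != nil {
            return err
        }
        if rel == "." {
            return nil // skip the root directory entry itself
        }
        // FileInfoHeader preserves mode, timestamps and size from FileInfo.
        hdr, err := tar.FileInfoHeader(fi, "")
        if err != nil {
            return err
        }
        // Use the path relative to root so the directory layout survives.
        hdr.Name = filepath.ToSlash(rel)
        if fi.IsDir() {
            hdr.Name += "/"
        }
        if err := tw.WriteHeader(hdr); err != nil {
            return err
        }
        if !fi.Mode().IsRegular() {
            return nil // directories and the like have no body to write
        }
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()
        _, err = io.Copy(tw, f)
        return err
    })
    if err != nil {
        log.Fatalln(err)
    }
}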
Is it possible to extract a tar.xz package in Go? My understanding is that it's possible to use the archive/tar package and feed it the output of an xz Go library.
I recently created an XZ decompression package, so it is now possible to extract a tar.xz using only Go code.
The following code extracts the file myfile.tar.xz to the current directory:
package main

import (
    "archive/tar"
    "fmt"
    "io"
    "log"
    "os"

    "github.com/xi2/xz"
)

func main() {
    // Open a file
    f, err := os.Open("myfile.tar.xz")
    if err != nil {
        log.Fatal(err)
    }
    // Create an xz Reader
    r, err := xz.NewReader(f, 0)
    if err != nil {
        log.Fatal(err)
    }
    // Create a tar Reader
    tr := tar.NewReader(r)
    // Iterate through the files in the archive.
    for {
        hdr, err := tr.Next()
        if err == io.EOF {
            // end of tar archive
            break
        }
        if err != nil {
            log.Fatal(err)
        }
        switch hdr.Typeflag {
        case tar.TypeDir:
            // create a directory
            fmt.Println("creating: " + hdr.Name)
            err = os.MkdirAll(hdr.Name, 0777)
            if err != nil {
                log.Fatal(err)
            }
        case tar.TypeReg, tar.TypeRegA:
            // write a file
            fmt.Println("extracting: " + hdr.Name)
            w, err := os.Create(hdr.Name)
            if err != nil {
                log.Fatal(err)
            }
            _, err = io.Copy(w, tr)
            if err != nil {
                log.Fatal(err)
            }
            w.Close()
        }
    }
    f.Close()
}
http://golang.org/pkg/archive/tar/#example_
Also, you can simply shell out to tar:

import "os/exec"

cmd := exec.Command("tar", "-xf", "/your/archive.tar.xz")
err := cmd.Run()
There is no LZMA (Lempel-Ziv-Markov) encoder or decoder in the Go standard library. If you are allowed to assume that the platform your code runs on provides the xz utility, you could use stub functions like these:
import (
    "io"
    "os/exec"
)

// UnxzReader decompresses the xz-compressed data stream r and returns
// a reader for the decompressed data.
func UnxzReader(r io.Reader) (io.ReadCloser, error) {
    unxz := exec.Command("xz", "-d")
    unxz.Stdin = r
    out, err := unxz.StdoutPipe()
    if err != nil {
        return nil, err
    }
    err = unxz.Start()
    if err != nil {
        return nil, err
    }
    // We are not interested in the exit status, but we should really collect
    // that zombie process.
    go unxz.Wait()
    return out, nil
}
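A hypothetical usage example of the stub above (it assumes the xz binary is on PATH and additionally imports archive/tar, fmt, log and os): plug the returned reader straight into archive/tar to list the entries.

// List the entries of myfile.tar.xz using UnxzReader.
f, err := os.Open("myfile.tar.xz")
if err != nil {
    log.Fatal(err)
}
defer f.Close()

xzr, err := UnxzReader(f)
if err != nil {
    log.Fatal(err)
}
defer xzr.Close()

tr := tar.NewReader(xzr)
for {
    hdr, err := tr.Next()
    if err == io.EOF {
        break // end of tar archive
    }
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(hdr.Name)
}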