Get cookie from URL - go

Is there a way to get a cookie from a URL using Go? I have tried a few of the examples that come up when I google it, but they never seem to return the cookie.

This seems to do it:
package cookies // note: avoid naming your own package "http"; it is easily confused with the net/http import

import "net/http"

// getCookie issues a GET request to ref and returns the response cookie
// with the given name, or http.ErrNoCookie if the response does not set it.
func getCookie(ref, name string) (*http.Cookie, error) {
    res, err := http.Get(ref)
    if err != nil {
        return nil, err
    }
    defer res.Body.Close()
    for _, cook := range res.Cookies() {
        if cook.Name == name {
            return cook, nil
        }
    }
    return nil, http.ErrNoCookie
}
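For example, a minimal caller (the URL and the cookie name "session" are placeholders, not from the question):

cook, err := getCookie("https://example.com/login", "session")
if err != nil {
    log.Fatal(err) // requires the "log" import
}
log.Printf("%s=%s", cook.Name, cook.Value)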


Is there a good way to do cache saves in a goroutine?

Let's say I have a handler that makes a request and gets the latest data on the selected stock:
func (ss *stockService) GetStockInfo(ctx *gin.Context) {
    code := ctx.Param("symbol")
    ss.logger.Info("code", code)
    url := fmt.Sprintf("URL/%v", code)
    ss.logger.Info(url)
    req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
    if err != nil {
        errs.HTTPErrorResponse(ctx, &ss.logger, errs.New(errs.Internal, err))
        return
    }
    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        errs.HTTPErrorResponse(ctx, &ss.logger, errs.New(errs.Internal, err))
        return
    }
    defer resp.Body.Close()
    var chart ChartResponse
    err = json.NewDecoder(resp.Body).Decode(&chart)
    if err != nil {
        errs.HTTPErrorResponse(ctx, &ss.logger, errs.New(errs.Internal, err))
        return
    }
    ctx.JSON(http.StatusOK, chart)
}
I want to add caching here. Since I don't have much experience yet, I'm interested in how to interact with the cache properly.
I think that if, for some reason, the cache cannot be used, the handler can simply fall back to making the API request. What I'm wondering is whether it would be right to save to the cache in a separate goroutine and return the response immediately:
func (ss *stockService) GetStockInfo(ctx *gin.Context) {
    code := ctx.Param("symbol")
    stockInfo, err := ss.cache.Get(code)
    if err == nil {
        // FIND
        ...
        ctx.JSON(http.StatusOK, chart)
    } else {
        ss.logger.Info("code", code)
        url := fmt.Sprintf("URL/%v", code)
        ss.logger.Info(url)
        req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
        ...
        err = json.NewDecoder(resp.Body).Decode(&chart)
        // IS IT A GOOD WAY ?
        go ss.cache.Save(code, chart, expireAt)
        ctx.JSON(http.StatusOK, chart)
    }
}
I use Redis as the cache.
I would be glad if someone could point out what is wrong with this approach.
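For reference, a minimal sketch of the fire-and-forget save described above, assuming ss.cache.Save returns an error (the question doesn't show its signature) and expireAt is defined as in the snippet. The goroutine copies its inputs and does not touch the gin context, which is not safe to use after the handler returns:

// inside the else branch, after decoding the API response into chart:
go func(code string, chart ChartResponse) {
    if err := ss.cache.Save(code, chart, expireAt); err != nil {
        ss.logger.Info("cache save failed: " + err.Error()) // a failed save only costs a later cache miss
    }
}(code, chart)
ctx.JSON(http.StatusOK, chart)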

Read response content from gorilla toolkit Client.get

I'm using the Gorilla Toolkit for Go to request a web resource (GET) and I want to process the response body, but I don't know how to access it. Here is my main.go:
package main

import (
    "log"

    "github.com/gorilla/http"
)

func main() {
    url := "http://ubuntu.com"
    status, h, r, err := http.DefaultClient.Get(url, nil)
    if err != nil {
        log.Fatal(err)
    }
    if r != nil {
        defer r.Close()
    }
    log.Printf("Status: %v", status)
    log.Printf("Headers: %v", h)
    var p []byte
    _, err = r.Read(p)
    if err != nil {
        log.Fatal(err)
    }
    log.Printf("MSG: %v", p)
}
Gorilla's response body is of type io.ReadCloser and I can't wrap my head around how to read from it. Any help is appreciated.
Use ioutil.ReadAll to read the entire response body as a []byte:
status, h, r, err := http.DefaultClient.Get(url, nil)
if err != nil {
    log.Fatal(err)
}
var p []byte
if r != nil {
    p, err = ioutil.ReadAll(r)
    r.Close()
    if err != nil {
        log.Fatal(err)
    }
}
I suggest that you use the net/http client instead of the Gorilla client. There are more examples of how to use the net/http client, and it is better maintained.
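For comparison, a minimal sketch of the same request using only the standard library:

package main

import (
    "io/ioutil"
    "log"
    "net/http"
)

func main() {
    res, err := http.Get("http://ubuntu.com")
    if err != nil {
        log.Fatal(err)
    }
    defer res.Body.Close()
    body, err := ioutil.ReadAll(res.Body) // read the whole body into a []byte
    if err != nil {
        log.Fatal(err)
    }
    log.Printf("Status: %v", res.StatusCode)
    log.Printf("Headers: %v", res.Header)
    log.Printf("MSG: %s", body)
}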

Golang server "write tcp ... use of closed network connection"

I am a beginner at Go. I wrote a small server for testing and deployed it on the Heroku platform. I have a /logout request, which almost works, but sometimes I see something like this:
PANIC: write tcp 172.17.110.94:36641->10.11.189.195:9951: use of closed network connection
I don't know why this happens, or why it sometimes works perfectly.
My steps:
First I send a POST request to /token-auth with a body, then generate a token and send it as the response.
Second, I do a GET request to /logout with that token, and store the token in Redis.
Here is the full code of my redil_cli.go:
package store

import (
    "github.com/garyburd/redigo/redis"
)

type RedisCli struct {
    conn redis.Conn
}

var instanceRedisCli *RedisCli = nil

func Connect() (conn *RedisCli) {
    if instanceRedisCli == nil {
        instanceRedisCli = new(RedisCli)
        var err error
        // this works!!!
        instanceRedisCli.conn, err = redis.Dial("tcp", "lab.redistogo.com:9951")
        if err != nil {
            panic(err)
        }
        if _, err := instanceRedisCli.conn.Do("AUTH", "password"); err != nil {
            //instanceRedisCli.conn.Close()
            panic(err)
        }
    }
    return instanceRedisCli
}

func (redisCli *RedisCli) SetValue(key, value string, expiration ...interface{}) error {
    _, err := redisCli.conn.Do("SET", key, value)
    if err == nil && expiration != nil {
        redisCli.conn.Do("EXPIRE", key, expiration[0])
    }
    return err
}

func (redisCli *RedisCli) GetValue(key string) (interface{}, error) {
    data, err := redisCli.conn.Do("GET", key)
    if err != nil {
        panic(err)
    }
    return data, err
}
After that, my function that checks the Authorization header panics when it calls the GetValue method shown above.
Can anyone point out what I am doing wrong?
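As an aside (not from the question): a single redigo connection is not safe for concurrent use across request handlers, so a common alternative to the package-level singleton connection is a redis.Pool from which each call borrows a connection. A minimal sketch, with the address and password as placeholders:

package store

import (
    "github.com/garyburd/redigo/redis"
)

// pool hands out connections on demand; each caller borrows one and
// returns it by calling Close.
var pool = &redis.Pool{
    MaxIdle: 3,
    Dial: func() (redis.Conn, error) {
        // DialPassword authenticates during Dial, replacing the manual AUTH call.
        return redis.Dial("tcp", "lab.redistogo.com:9951", redis.DialPassword("password"))
    },
}

func SetValue(key, value string) error {
    conn := pool.Get()
    defer conn.Close() // return the connection to the pool
    _, err := conn.Do("SET", key, value)
    return err
}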

golang http handler context

I'm trying to understand variable scopes in Go with the following code.
In this example, requesting a page over HTTP should echo part of the request URI combined with a value stored in BoltDB.
The problem is that the database driver doesn't seem to run correctly in the HTTP handler context: it doesn't print anything to stdout, nor write anything to the HTTP response.
I was expecting it to print:
He's loving <uri query content> but prefers pizza (data from the bolt.db driver)
How can I fix this code?
package main

import (
    "fmt"
    "log"
    "net/http"

    "github.com/boltdb/bolt"
)

var db bolt.DB

func handler(w http.ResponseWriter, r *http.Request) {
    dberr := db.Update(func(tx *bolt.Tx) error {
        log.Println("here")
        b := tx.Bucket([]byte("MyBucket"))
        loving := b.Get([]byte("loving"))
        log.Printf("He's loving %s but prefers %s", r.URL.Path[1:], string(loving))
        fmt.Fprintf(w, "He's loving %s but prefers %s", r.URL.Path[1:], string(loving))
        return nil
    })
    if dberr != nil {
        fmt.Errorf("db update: %s", dberr)
    }
    log.Printf("Finished handling")
}

func main() {
    db, err := bolt.Open("my.db", 0600, nil)
    if err != nil {
        log.Fatal(err)
    } else {
        log.Println("database opened")
    }
    dberr := db.Update(func(tx *bolt.Tx) error {
        b, err := tx.CreateBucketIfNotExists([]byte("MyBucket"))
        if err != nil {
            return fmt.Errorf("create bucket: %s", err)
        }
        err2 := b.Put([]byte("loving"), []byte("pizza"))
        if err2 != nil {
            return fmt.Errorf("put loving: %s", err2)
        }
        loving := b.Get([]byte("loving"))
        log.Printf("He's loving %s", string(loving))
        return nil
    })
    if dberr != nil {
        fmt.Errorf("db update: %s", err)
    }
    defer db.Close()
    http.HandleFunc("/", handler)
    http.ListenAndServe(":8080", nil)
}
I think I see your bug. This one is usually a little difficult to track down because it comes down to the : in front of the equals sign. It is basically a scoping issue: you declared db as a global while at the same time creating a db variable scoped to your main function.
You used db, err := ... to assign the values instead of just =. := both declares and infers the type. Since it is also doing a declaration, the db you're using in the main function is not the db you declared at global scope, while the handler is still trying to use the global one, which is never set. The code below is the same as you initially had, with a few comments outlining the working changes. Hope this helps!
package main

import (
    "fmt"
    "log"
    "net/http"

    "github.com/boltdb/bolt"
)

var db *bolt.DB // this is a pointer and stays nil until it is set by the main function

func handler(w http.ResponseWriter, r *http.Request) {
    dberr := db.Update(func(tx *bolt.Tx) error {
        log.Println("here")
        b := tx.Bucket([]byte("MyBucket"))
        loving := b.Get([]byte("loving"))
        log.Printf("He's loving %s but prefers %s", r.URL.Path[1:], string(loving))
        fmt.Fprintf(w, "He's loving %s but prefers %s", r.URL.Path[1:], string(loving))
        return nil
    })
    if dberr != nil {
        fmt.Errorf("db update: %s", dberr)
    }
    log.Printf("Finished handling")
}

func main() {
    var err error // declared separately so the next line can assign to the package-level db
    db, err = bolt.Open("my.db", 0600, nil) // notice this is no longer `db, err := ...` but `db, err = ...`
    if err != nil {
        log.Fatal(err)
    } else {
        log.Println("database opened")
    }
    dberr := db.Update(func(tx *bolt.Tx) error {
        b, err := tx.CreateBucketIfNotExists([]byte("MyBucket"))
        if err != nil {
            return fmt.Errorf("create bucket: %s", err)
        }
        err2 := b.Put([]byte("loving"), []byte("pizza"))
        if err2 != nil {
            return fmt.Errorf("put loving: %s", err2)
        }
        loving := b.Get([]byte("loving"))
        log.Printf("He's loving %s", string(loving))
        return nil
    })
    if dberr != nil {
        fmt.Errorf("db update: %s", err)
    }
    defer db.Close()
    http.HandleFunc("/", handler)
    http.ListenAndServe(":3000", nil)
}
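As a standalone illustration of the := vs = difference (a minimal sketch, independent of Bolt, not part of the original answer):

package main

import "fmt"

var name string // package-level variable

func set() {
    name := "local" // := declares a NEW local variable that shadows the package-level `name`
    _ = name
}

func setGlobal() {
    name = "global" // plain = assigns to the package-level `name`
}

func main() {
    set()
    fmt.Printf("after set():       %q\n", name) // prints "" because only the local copy was written
    setGlobal()
    fmt.Printf("after setGlobal(): %q\n", name) // prints "global"
}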

Requesting multiple URLs in Go

I have the following Go program: https://play.golang.org/p/-TUtJ7DIhi
package main

import (
    "encoding/json"
    "fmt"
    "io/ioutil"
    "net/http"
    "strconv"
)

func main() {
    body, err := get("https://hacker-news.firebaseio.com/v0/topstories.json")
    if err != nil {
        panic(err)
    }
    var ids [500]int
    if err = json.Unmarshal(body, &ids); err != nil {
        panic(err)
    }
    var contents []byte
    for _, value := range ids[0:10] {
        body, err := get("https://hacker-news.firebaseio.com/v0/item/" + strconv.Itoa(value) + ".json")
        if err != nil {
            fmt.Println(err)
        } else {
            contents = append(contents, body...)
        }
    }
    fmt.Println(contents)
}

func get(url string) ([]byte, error) {
    res, err := http.Get(url)
    if err != nil {
        return nil, err
    }
    body, err := ioutil.ReadAll(res.Body)
    res.Body.Close()
    return body, err
}
When run, it throws EOF JSON errors on the requests inside the loop, but when I hit the URLs individually the responses do not appear to be malformed.
What am I missing?
It looks like there's something wrong with their server, and it's closing connections without sending a Connection: close header. The client therefore tries to reuse the connection per the HTTP/1.1 specification.
You can work around this by creating your own request and setting Close = true, or by using a custom Transport with DisableKeepAlives = true (a sketch of the Transport variant follows the snippet below):
req, err := http.NewRequest("GET", url, nil)
if err != nil {
    return nil, err
}
req.Close = true

res, err := http.DefaultClient.Do(req)
if err != nil {
    return nil, err
}
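And a minimal sketch of the second alternative, a client whose Transport disables keep-alives (the Timeout value is only illustrative):

// A client that opens a fresh connection for every request instead of
// reusing pooled connections.
client := &http.Client{
    Transport: &http.Transport{DisableKeepAlives: true},
    Timeout:   10 * time.Second, // requires the "time" import
}
res, err := client.Get(url)
if err != nil {
    return nil, err
}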

Resources