Redis instance missing cache too often using the Go client redigo

I'm developing an API for blogs and online publishing websites that powers a recommendation engine for their content.

Since my API returns the same JSON for the same URL request, I decided to use Redis as a cache for high-traffic sites, passing the URL as the key and the JSON as the value. I'm writing this API in Go and have been using redigo to talk to our Redis instance. The way I architected the system is to take the URL of the query sent by the client (the blog) and look it up in Redis. If the response for that URL is not cached, I do a 301 redirect to another API that applies the logic to generate the JSON response for that particular URL and also sets the Redis cache.

However, while testing whether Redis is working properly, I realised that it misses the cache far more often than I would like. It is definitely caching the JSON response mapped to the URL, as confirmed by a simple GET in redis-cli, but after 3-4 hits I can see Redis missing the cache again. I'm still very new to Go and to caching, so I'm not sure whether I'm missing something in my implementation. I would also like to know under what circumstances a Redis instance can miss cached keys. It can't be a timeout, because the Redis docs say "By default recent versions of Redis don't close the connection with the client if the client is idle for many seconds: the connection will remain open forever." So I'm not sure what exactly is happening with my setup. The relevant part of my code is below:
package main

import (
    "flag"
    "fmt"
    "log"
    "net/http"
    "time"

    "github.com/garyburd/redigo/redis"
)

var (
    port          int
    folder        string
    pool          *redis.Pool
    redisServer   = flag.String("redisServer", "redisip:22121", "")
    redisPassword = flag.String("redisPassword", "", "")
)

func init() {
    flag.IntVar(&port, "port", 80, "HTTP Server Port")
    flag.StringVar(&folder, "folder", "www", "Serve this folder")
}
func newPool(server, password string) *redis.Pool {
    return &redis.Pool{
        MaxIdle:     3,
        MaxActive:   25000,
        IdleTimeout: 30 * time.Second,
        Dial: func() (redis.Conn, error) {
            c, err := redis.Dial("tcp", server)
            if err != nil {
                return nil, err
            }
            return c, err
        },
        TestOnBorrow: func(c redis.Conn, t time.Time) error {
            _, err := c.Do("PING")
            return err
        },
    }
}
func main() {
    flag.Parse()
    pool = newPool(*redisServer, *redisPassword)
    httpAddr := fmt.Sprintf(":%v", port)
    log.Printf("Listening to %v", httpAddr)
    http.HandleFunc("/api", api)
    http.Handle("/static/", http.StripPrefix("/static/", http.FileServer(http.Dir(folder))))
    log.Fatal(http.ListenAndServe(httpAddr, nil))
}
func api(w http.ResponseWriter, r *http.Request) {
    link := r.URL.Query().Get("url")
    fmt.Println(link)
    heading := r.URL.Query().Get("heading")
    conn := pool.Get()
    reply, err := redis.String(conn.Do("GET", link))
    defer conn.Close()
    if err != nil {
        fmt.Println("Error for link %v:%v", heading, err)
        http.Redirect(w, r, "json-producing-api", 301)
    }
    fmt.Fprint(w, reply)
}
I must also mention that in the code above, my "Redis instance" is actually twemproxy (the proxy built by Twitter), which sits in front of three separate Redis servers running on three different ports. Everything seemed to work normally yesterday, and I ran a successful load test with 5k concurrent requests. However, when I checked the logs today, some queries were being missed by Redis and redirected to my json-producing API, and I could see redigo: nil errors. I'm totally confused as to what exactly is going wrong. Any help will be greatly appreciated.
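For what it's worth, here is a minimal sketch (assuming the same pool and handler shape as above, not a claimed fix) of how redigo distinguishes a genuine cache miss from a transport error: redis.String returns redis.ErrNil when the key simply isn't there, which is a different situation from a connection or proxy failure, and the handler should return after redirecting.

func apiSketch(w http.ResponseWriter, r *http.Request) {
    link := r.URL.Query().Get("url")
    conn := pool.Get()
    defer conn.Close() // always return the connection to the pool

    reply, err := redis.String(conn.Do("GET", link))
    switch {
    case err == redis.ErrNil:
        // The key is not in Redis: a true cache miss.
        http.Redirect(w, r, "json-producing-api", http.StatusMovedPermanently)
        return
    case err != nil:
        // Network, proxy, or server error: not the same thing as a miss.
        log.Printf("redis error for %q: %v", link, err)
        http.Error(w, "cache unavailable", http.StatusInternalServerError)
        return
    }
    fmt.Fprint(w, reply)
}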
EDIT: As per the discussion below, here is the code I use to set data in Redis:
// SetIntoRedis returns true if the value was set successfully, false in case of an error.
func SetIntoRedis(key string, value string) bool {
    conn := pool.Get()
    _, err := conn.Do("SET", key, value)
    if err != nil {
        log.Printf("Error Setting %v : %v", key, err)
        return false
    }
    return true
}
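For context, this is roughly how the json-producing API calls it; a hedged sketch only, where generateJSON is a hypothetical stand-in for the logic that builds the response.

// Hypothetical caller in the json-producing API: build the payload,
// cache it under the request URL, then write it to the client.
func produceAndCache(w http.ResponseWriter, r *http.Request) {
    link := r.URL.Query().Get("url")
    payload := generateJSON(link) // hypothetical helper that generates the JSON
    if ok := SetIntoRedis(link, payload); !ok {
        log.Printf("could not cache response for %v", link)
    }
    fmt.Fprint(w, payload)
}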
The configuration of my twemproxy instance:
leaf:
  listen: 0.0.0.0:22121
  hash: fnv1a_64
  distribution: ketama
  redis: true
  auto_eject_hosts: true
  server_retry_timeout: 3000
  server_failure_limit: 3
  servers:
   - 127.0.0.1:6379:1
   - 127.0.0.1:6380:1
   - 127.0.0.1:6381:1
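Since twemproxy shards keys across the three backends (and auto_eject_hosts: true lets it temporarily eject a failing backend, which remaps keys on the hash ring), a hedged diagnostic I have been using is to GET the same key directly from each backend and see where it actually lives. The sketch below assumes the backend addresses from the config above.

// Diagnostic sketch: query each backend Redis directly to see which
// shard currently holds a given key. Addresses come from the twemproxy
// config above; adjust them if your setup differs.
func findKey(key string) {
    backends := []string{"127.0.0.1:6379", "127.0.0.1:6380", "127.0.0.1:6381"}
    for _, addr := range backends {
        c, err := redis.Dial("tcp", addr)
        if err != nil {
            log.Printf("%s: dial error: %v", addr, err)
            continue
        }
        val, err := redis.String(c.Do("GET", key))
        c.Close()
        switch {
        case err == redis.ErrNil:
            log.Printf("%s: key %q not present", addr, key)
        case err != nil:
            log.Printf("%s: error: %v", addr, err)
        default:
            log.Printf("%s: key %q found (%d bytes)", addr, key, len(val))
        }
    }
}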

Related

How is the API implemented in gRPC?

I used the official documentation (https://grpc.io/docs/languages/go/basics/), but after implementing it, some questions arose.
When I create a TCP server I have to specify a host and port (in my case mcrsrv-book:7561).
But what if I want to implement another API over gRPC? Do I need to start another server on a new port (e.g. mcrsrv-book:7562)?
How are routing and APIs implemented in gRPC?
My server code is:
type routeGuideServer struct {
    pb.UnimplementedRouteGuideServer
    savedFeatures []*pb.Response // read-only after initialized
}

// GetFeature returns the feature at the given point.
func (s *routeGuideServer) GetFeature(ctx context.Context, request *pb.Request) (*pb.Response, error) {
    context := localContext.LocalContext{}
    book := bookRepository.FindOrFailBook(context, int(request.BookId))
    return &pb.Response{
        Name:        book.Name,
        BookId:      int32(book.BookId),
        AuthorId:    int32(book.AuthorId),
        Category:    book.Category,
        Description: "Description",
    }, nil
}

func newServer() *routeGuideServer {
    s := &routeGuideServer{}
    return s
}

func SomeAction() {
    lis, err := net.Listen("tcp", fmt.Sprintf("mcrsrv-book:7561"))
    if err != nil {
        log.Fatalf("failed to listen: %v", err)
    }
    var opts []grpc.ServerOption
    grpcServer := grpc.NewServer(opts...)
    pb.RegisterRouteGuideServer(grpcServer, newServer())
    grpcServer.Serve(lis)
}
I think there should be an option other than opening a separate port for each gRPC service.
If you want to use the same address for a different service, you can simply register the other service as well before starting the gRPC server:
grpcServer := grpc.NewServer(opts...)
pb.RegisterRouteGuideServer(grpcServer, newServer())
// register other services here with the same grpcServer
grpcServer.Serve(lis)
This Stack Overflow thread should help as an example of what you want to achieve; the question there includes sample code that I believe aligns with what you are asking:
Access multiple gRPC services over the same connection
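To make that concrete, here is a minimal sketch that reuses the question's pb, net, grpc, and log imports; the commented-out inventory service is a hypothetical second API, standing in for whatever other .proto-generated service you want to expose on the same port.

// Sketch: two gRPC services sharing one listener and one grpc.Server.
// RegisterRouteGuideServer comes from the question; the inventory
// service below is hypothetical.
func serveBoth() {
    lis, err := net.Listen("tcp", "mcrsrv-book:7561")
    if err != nil {
        log.Fatalf("failed to listen: %v", err)
    }
    grpcServer := grpc.NewServer()
    pb.RegisterRouteGuideServer(grpcServer, newServer())
    // Hypothetical second service generated from another .proto file:
    // inventorypb.RegisterInventoryServer(grpcServer, newInventoryServer())
    if err := grpcServer.Serve(lis); err != nil {
        log.Fatalf("serve: %v", err)
    }
}

Routing then happens inside the single gRPC server by fully-qualified service and method name, so no extra port is needed.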

In Go, what is the proper way to use context with pgx within http handlers?

Update 1: it seems that using a context tied to the HTTP request may lead to the 'context canceled' error, while using context.Background() as the parent seems to work fine.
// This works, no 'context canceled' errors
ctx, cancel := context.WithTimeout(context.Background(), 100*time.Second)
// However, this creates 'context canceled' errors under mild load
// ctx, cancel := context.WithTimeout(r.Context(), 100*time.Second)
defer cancel()
app.Insert(ctx, record)
(I've updated the code sample below to give a self-contained example for reproduction.)
In Go, I have an HTTP handler like the following. On the first HTTP request to this endpoint I get a context canceled error, yet the data is actually inserted into the database. On subsequent requests to the endpoint no such error occurs and the data is also inserted successfully.
Question: am I setting up and passing the context correctly between the HTTP handler and the pgx QueryRow method? (If not, is there a better way?)
If you copy this code into main.go, run go run main.go, open localhost:4444/create, and hold Ctrl-R to produce a mild load, you should see some context canceled errors.
package main

import (
    "context"
    "fmt"
    "log"
    "math/rand"
    "net/http"
    "time"

    "github.com/jackc/pgx/v4/pgxpool"
)

type application struct {
    DB *pgxpool.Pool
}

type Task struct {
    ID     string
    Name   string
    Status string
}

// HTTP GET /create
func (app *application) create(w http.ResponseWriter, r *http.Request) {
    fmt.Println(r.URL.Path, time.Now())
    task := &Task{Name: fmt.Sprintf("Task #%d", rand.Int()%1000), Status: "pending"}

    // -------- problem code here ----
    // This line works and does not generate any 'context canceled' errors
    //ctx, cancel := context.WithTimeout(context.Background(), 100*time.Second)

    // However, this line generates 'context canceled' errors under mild load
    ctx, cancel := context.WithTimeout(r.Context(), 100*time.Second)
    // -------- end -------

    defer cancel()
    err := app.insertTask(ctx, task)
    if err != nil {
        fmt.Println("insert error:", err)
        return
    }
    fmt.Fprintf(w, "%+v", task)
}

func (app *application) insertTask(ctx context.Context, t *Task) error {
    stmt := `INSERT INTO task (name, status) VALUES ($1, $2) RETURNING ID`
    row := app.DB.QueryRow(ctx, stmt, t.Name, t.Status)
    err := row.Scan(&t.ID)
    if err != nil {
        return err
    }
    return nil
}

func main() {
    rand.Seed(time.Now().UnixNano())
    db, err := pgxpool.Connect(context.Background(), "postgres://test:test123@localhost:5432/test")
    if err != nil {
        log.Fatal(err)
    }
    log.Println("db conn pool created")
    stmt := `CREATE TABLE IF NOT EXISTS public.task (
        id uuid NOT NULL DEFAULT gen_random_uuid(),
        name text NULL,
        status text NULL,
        PRIMARY KEY (id)
    );`
    _, err = db.Exec(context.Background(), stmt)
    if err != nil {
        log.Fatal(err)
    }
    log.Println("task table created")
    defer db.Close()
    app := &application{
        DB: db,
    }
    mux := http.NewServeMux()
    mux.HandleFunc("/create", app.create)
    log.Println("http server up at localhost:4444")
    err = http.ListenAndServe(":4444", mux)
    if err != nil {
        log.Fatal(err)
    }
}
TL;DR: using r.Context() works fine in production; testing with a browser is the problem.
An HTTP request gets its own context, which is cancelled when the request is finished. That is a feature, not a bug. Developers are expected to use it and gracefully shut down execution when the request is interrupted by the client or by a timeout. For example, a cancelled request can mean the client never sees the response (the transaction result), and the developer can decide to roll back that transaction.
In production, request cancellation does not happen very often for normally designed/built APIs. Typically the flow is controlled by the server, and the server returns the result before the request is cancelled.
Multiple client requests do not affect each other, because each gets an independent goroutine and context. Again, we are talking about the happy path for normally designed/built applications. Your sample app looks good and should work fine.
The problem is how we test the app. Instead of creating multiple independent requests, we use a browser and refresh a single browser session. I did not check what exactly is going on, but I assume the browser terminates the existing request in order to run a new one when you press Ctrl-R. The server sees that termination and communicates it to your code as context cancellation.
Try to test your code using curl or some other script/utility that creates independent requests. I am sure you will not see cancellations in that case.
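If the goal is for the INSERT to survive a client disconnect, one hedged option (besides deriving from context.Background() as in the question's Update 1) is to keep a timeout but drop the request's cancelation; on Go 1.21+ the standard library's context.WithoutCancel does exactly that while preserving the request context's values. A minimal sketch, reusing the question's types:

// Sketch: let the INSERT outlive a canceled request, but still bound it
// with a timeout. context.WithoutCancel requires Go 1.21+; on older
// versions, derive from context.Background() instead.
func (app *application) createDetached(w http.ResponseWriter, r *http.Request) {
    task := &Task{Name: "example", Status: "pending"}
    dbCtx, cancel := context.WithTimeout(context.WithoutCancel(r.Context()), 10*time.Second)
    defer cancel()
    if err := app.insertTask(dbCtx, task); err != nil {
        fmt.Println("insert error:", err)
        return
    }
    fmt.Fprintf(w, "%+v", task)
}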

Memcached Ping() doesn't return an error on an invalid server

I use memcached for caching, and the client I use is https://github.com/bradfitz/gomemcache. When I initiate a new client with a dummy/invalid server address and then ping it, it returns no error.
package main

import (
    "fmt"

    m "github.com/bradfitz/gomemcache/memcache"
)

func main() {
    o := m.New("dummy_adress")
    fmt.Println(o.Ping()) // returns no error
}
I think it is supposed to return an error since the server is invalid. What am I missing?
It looks like the New() call ignores the return value for SetServers:
func New(server ...string) *Client {
    ss := new(ServerList)
    ss.SetServers(server...)
    return NewFromSelector(ss)
}
The SetServers() function will only set the server list to valid servers (in your case: no servers), and the Ping() function will only ping servers that are set; since no servers are set, it doesn't really do anything.
This is arguably a feature: if you have 4 servers and one is down, that's not really an issue, and even with just 1 server memcached is generally optional.
You can duplicate the New() logic with an error check:
ss := new(memcache.ServerList)
err := ss.SetServers("example.localhost:11211")
if err != nil {
    panic(err)
}

c := memcache.NewFromSelector(ss)
err = c.Ping()
if err != nil {
    panic(err)
}
Which gives:
panic: dial tcp 127.0.0.1:11211: connect: connection refused

Connections stuck at CLOSE_WAIT in golang server

I am using gorilla/mux to create a Go server that supports a simple health GET endpoint.
The endpoint responds with a status of ok whenever the server is up.
I see a lot of connections (over 400) in the CLOSE_WAIT state on one system.
This does not happen on other systems running the same code.
Output of netstat (9003 is my server port):
tcp 164 0 ::1:9003 ::1:60702 CLOSE_WAIT -
tcp 164 0 ::1:9003 ::1:44472 CLOSE_WAIT -
tcp 164 0 ::1:9003 ::1:31504 CLOSE_WAIT -
This seems to imply that I have a connection I need to close.
Most of the questions I read online suggest that such connections come from a client not calling resp.Body.Close() after a GET.
As per https://blog.cloudflare.com/the-complete-guide-to-golang-net-http-timeouts/, I could add read/write timeouts on the server side (a sketch of that follows the code below), but I would like to understand the root cause of the CLOSE_WAITs before adding those improvements.
Am I missing a close anywhere on the server side?
My code is below:
import "github.com/gorilla/mux"
...
func (server *Srvr) healthHandler(w http.ResponseWriter, r *http.Request) {
resp := map[string]string{"status": "ok"}
respJSON, err := json.Marshal(resp)
if err != nil {
w.WriteHeader(http.StatusInternalServerError)
fmt.Fprintf(w, "Error creating JSON response %s", err)
return
}
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusOK)
w.Write(respJSON)
}
// Load initializes the servers
func Load(port string) *Srvr {
srvrPort := ":" + port
log.Infof("Will listen on port %s", srvrPort)
serverMux := mux.NewRouter()
srvr := &Srvr{Port: port, Srv: &http.Server{Addr: srvrPort, Handler: serverMux}}
serverMux.HandleFunc("/api/v1.0/health", srvr.healthHandler).Methods("GET")
return srvr
}
// Run starts the server
func (server *Srvr) Run() {
log.Info("Starting the server")
// Starting a server this way to allow for shutdown.
// https://stackoverflow.com/questions/39320025/how-to-stop-http-listenandserve
err := server.Srv.ListenAndServe()
if err != http.ErrServerClosed {
log.Fatalf("ListenAndServe(): %s", err)
}
}
// Main resides outside the server package
func main() {
srvr := server.Load("9003")
// Now that all setup is done successfully, lets start the server
go srvr.Run()
// An unrelated forever loop executes below for different business logic
for {
glog.Info("Evaluation iteration begins now")
...
time.Sleep(time.Duration(evalFreq) * time.Minute)
}
}
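For reference, a hedged sketch of the server-side timeouts mentioned above (per the Cloudflare article); the durations are illustrative assumptions, not recommendations. A server that never times out idle or stalled connections can accumulate sockets in CLOSE_WAIT after clients go away.

// Sketch: the same Load function with server-side timeouts set.
// It reuses the question's Srvr type and healthHandler; the values
// below are illustrative only.
func LoadWithTimeouts(port string) *Srvr {
    srvrPort := ":" + port
    serverMux := mux.NewRouter()
    srvr := &Srvr{
        Port: port,
        Srv: &http.Server{
            Addr:         srvrPort,
            Handler:      serverMux,
            ReadTimeout:  10 * time.Second,
            WriteTimeout: 10 * time.Second,
            IdleTimeout:  60 * time.Second,
        },
    }
    serverMux.HandleFunc("/api/v1.0/health", srvr.healthHandler).Methods("GET")
    return srvr
}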

How do you send websocket data after a page is rendered in Golang?

I am new to Go and am trying to send data to a page using WebSockets. I have a handler, and I want to be able to serve a file and then, after it is rendered, send it a message. This is the code I have now:
package main

import (
    "fmt"
    "log"
    "net/http"

    "github.com/gorilla/websocket"
)

var upgrader = websocket.Upgrader{
    ReadBufferSize:  1024,
    WriteBufferSize: 1024,
}

func serveRoot(w http.ResponseWriter, r *http.Request) {
    http.ServeFile(w, r, "views/index.html")
    _, err := upgrader.Upgrade(w, r, nil)
    if err != nil {
        log.Println(err)
        return
    }
}

func main() {
    http.HandleFunc("/", serveRoot)
    fmt.Println("Started")
    if err := http.ListenAndServe(":9090", nil); err != nil {
        log.Fatal("ListenAndServe:", err)
    }
}
The problem is that, using the gorilla library, I have no idea how to send data, and I get the following output when I load the page:
2018/01/23 08:35:24 http: multiple response.WriteHeader calls
2018/01/23 08:35:24 websocket: the client is not using the websocket protocol: 'upgrade' token not found in 'Connection' header
2018/01/23 08:35:24 http: multiple response.WriteHeader calls
2018/01/23 08:35:24 websocket: 'Origin' header value not allowed
Intention: Send some data after the page is rendered, then (later) hook it up to stdin/stderr
Disclaimer: I am just learning to code, so it would be a great help if you could take that into consideration and not be too vague.
So, as some of the comments mentioned, you can't upgrade a connection that has already been served HTML. The simple way to do this is to have one endpoint for your websockets and one endpoint for your HTML.
So in your example, you might do:
http.HandleFunc("/", serveHtml)
http.HandleFunc("/somethingElse", serveWebsocket)
Where serveHtml has your http.ServeFile call, and serveWebsocket does the upgrading and whatnot.
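A hedged sketch of that split, reusing the question's upgrader: the websocket handler upgrades the connection and sends one example message once the client connects; the payload and route names are assumptions, and the page's JavaScript would open ws://localhost:9090/somethingElse to receive it.

// Sketch: separate HTML and websocket handlers, as the answer suggests.
func serveHtml(w http.ResponseWriter, r *http.Request) {
    http.ServeFile(w, r, "views/index.html")
}

func serveWebsocket(w http.ResponseWriter, r *http.Request) {
    conn, err := upgrader.Upgrade(w, r, nil)
    if err != nil {
        log.Println("upgrade:", err)
        return
    }
    defer conn.Close()
    // Example message sent after the socket is established.
    if err := conn.WriteMessage(websocket.TextMessage, []byte("page rendered")); err != nil {
        log.Println("write:", err)
    }
}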
