How to track number of connections with ReverseProxy balancing - go

I'm trying to write a simple load balancer in Go. I'm using a plain http.Server and httputil.NewSingleHostReverseProxy like so:
func main() {
	...
	server := http.Server{
		Addr:      ":8080",
		ConnState: connStateHook,
		Handler:   http.HandlerFunc(loadBalance),
	}
}
HandlerFunc is as follows:
func loadBalance(w http.ResponseWriter, r *http.Request) {
	// let's imagine I have a hardcoded proxy to simplify things
	serverURL, _ := url.Parse("http://127.0.0.1:5000")
	proxy := httputil.NewSingleHostReverseProxy(serverURL)
	proxy.ServeHTTP(w, r)
}
So whenever a request is sent to localhost:8080, it is proxied to 127.0.0.1:5000.
My goal is to track how many active requests are being handled by that proxy endpoint. I know there is the http.Server.ConnState hook, which lets you track the state of each client connection and fire "callbacks".
I've made a simple function to store the number of connections in global variable:
var connectionsAmount int

// Note: ConnState callbacks fire on multiple goroutines, so a real
// implementation should update this counter with sync/atomic or a mutex.
func connStateHook(connection net.Conn, state http.ConnState) {
	if state == http.StateActive {
		connectionsAmount = connectionsAmount + 1
	} else if state == http.StateIdle {
		connectionsAmount = connectionsAmount - 1
	} else if state == http.StateClosed {
		connectionsAmount = connectionsAmount - 1
	}
}
It works well enough until I add N additional endpoints, because then I can't pair a request with a specific proxy server.
And I want to track this number, let's say, in Endpoint struct:
type Endpoint struct {
	URL               *url.URL
	Proxy             *httputil.ReverseProxy
	ActiveConnections int
}
How can I implement such tracking?

Related

In a Golang RoundTripper can I read a response Header?

I'm using Envoy as an mTLS proxy on load-balanced servers.
The Go client makes an HTTPS call to the load balancer, which responds with a redirect to another server. I need to modify the port number in the response so it uses the Envoy port.
Is it possible to read Response.Header.Get("Location") and create a new response in order to change the port?
The docs say
// RoundTrip should not attempt to interpret the response.
It would look like
type EnvoyRoundTripper struct {
	PortMapper http.RoundTripper
}

func (ert EnvoyRoundTripper) RoundTrip(req *http.Request) (res *http.Response, e error) {
	res, e = ert.PortMapper.RoundTrip(req)
	if e == nil && res.StatusCode == http.StatusTemporaryRedirect {
		redirect := res.Header.Get("Location")
		// res = Create a new Response with a modified redirect
	}
	return
}

Detecting if a goroutine is already running

I am writing a simple TCP proxy server that is started, stopped, and reconfigured by an HTTP handler. The proxy part works fine.
The problem I am having is starting it again each time the HTTP handler runs: it works on the first pass, but the second pass panics because the listener is already running.
My problem seems to be how to detect that the goroutine is already running, so I can stop it and start it again with new parameters; I can't figure out how to get a persistent handle on it across executions of the web handler. It ends up crashing because the listener is already listening.
In another language I would make it a singleton, but what is the best way to do this in Go?
My Server type has NewServer(), Server.Stop() using a channel, etc., and they work if I keep them within one block of code.
i.e. this works
ps = utils.NewProxyServer(listening, target[0])
time.Sleep(...)
ps.Stop()
How can I persist the handle on it between passes through the handler?
var ps *utils.Server

func MyHandler(w http.ResponseWriter, r *http.Request) {
	switch r.Method {
	case "GET":
		...
		// how can I know that ps exists so I can close it?
		// and create a new one? Elegantly...?
		ps.Stop() // this panics
		ps = utils.NewServer(listening, target[0])
		log.Print(ps.Status)
		http.Redirect(w, r, redirurl, http.StatusSeeOther)
	default:
		http.Redirect(w, r, "/network", http.StatusSeeOther)
	}
}
I'd recommend having a goroutine manage ps, and using a channel to communicate with it:
var ps *utils.Server

// Buffer size of 1 to prevent blocking, assuming it's not triggered frequently
var psControl = make(chan psConfig, 1)

type psConfig struct {
	// Correct the types as needed
	listening string
	target    string
}

// This probably shouldn't be in an init(); put it wherever is appropriate
func init() {
	go func() {
		for cfg := range psControl {
			if ps != nil {
				ps.Stop()
			}
			ps = utils.NewServer(cfg.listening, cfg.target)
			log.Print(ps.Status)
		}
	}()
}
func MyHandler(w http.ResponseWriter, r *http.Request) {
	switch r.Method {
	case "GET":
		...
		psControl <- psConfig{listening, target[0]}
		http.Redirect(w, r, redirurl, http.StatusSeeOther)
	default:
		http.Redirect(w, r, "/network", http.StatusSeeOther)
	}
}

How to implement efficient IP whitelist in gin-gonic/gin middleware

I have an application which I need to restrict to a handful of IPs. I can write middleware that returns early if the request IP is not in the allowed list, but I would like this process to be as efficient as possible, i.e. I would like to drop the connection as early as possible. What is the earliest stage at which I can drop the connection, preferably while still sending an HTTP response?
I do not have control over the host firewall or a border firewall to filter traffic, and even if I did, a firewall could not send an HTTP response.
I would also appreciate a description of the life cycle of an HTTP request in gin.
Add a middleware as Lansana described. It's important that you declare it as early in the chain as possible.
r := gin.New()
whitelist := make(map[string]bool)
whitelist["127.0.0.1"] = true
r.Use(middleware.IPWhiteList(whitelist))
I'd write the middleware like this: if the client is not in the whitelist, return an appropriate error. In the following snippet I'm returning a JSON error.
package middleware

import (
	"net/http"

	"github.com/gin-gonic/gin"
)

func IPWhiteList(whitelist map[string]bool) gin.HandlerFunc {
	return func(c *gin.Context) {
		if !whitelist[c.ClientIP()] {
			c.AbortWithStatusJSON(http.StatusForbidden, gin.H{
				"status":  http.StatusForbidden,
				"message": "Permission denied",
			})
			return
		}
	}
}
Have you read the Gin documentation?
You can add middleware based on the examples in this section. Use router.Use() to chain middleware. If you want it to be as efficient as possible, make it the first middleware in the chain, and use a data structure that allows O(1) read access to the IP's that are allowed.
For instance:
var myWhiteList = map[string]bool{
	"1.2.3.4": true,
	"4.3.2.1": true,
}

func main() {
	// Creates a router without any middleware by default
	r := gin.New()
	// Add whitelist middleware
	r.Use(middleware.IPWhitelist(myWhiteList))
	// Listen and serve on 0.0.0.0:8080
	r.Run(":8080")
}
The implementation of middleware.IPWhitelist can look something like this (note that this variant is written against plain net/http; to plug it into gin's r.Use you would adapt it to a gin.HandlerFunc, as in the previous answer):
func IPWhitelist(whitelist map[string]bool) func(http.Handler) http.Handler {
	f := func(h http.Handler) http.Handler {
		fn := func(w http.ResponseWriter, r *http.Request) {
			// Get the IP of this request
			ip := doSomething()
			// If the IP isn't in the whitelist, forbid the request.
			if !whitelist[ip] {
				w.Header().Set("Content-Type", "text/plain")
				w.WriteHeader(http.StatusForbidden)
				w.Write([]byte("."))
				return
			}
			h.ServeHTTP(w, r)
		}
		return http.HandlerFunc(fn)
	}
	return f
}

How to broadcast in gRPC from server to client?

I'm creating a small chat application in gRPC right now, and I've run into the issue that when a user connects to the gRPC server as a client, I'd like to broadcast that event to all other connected clients.
I'm thinking of using some sort of observer, but I'm confused as to how the server knows who is connected, and how I would broadcast the event to all clients and not just one or two.
I know using streams is part of the answer, but because each client creates its own stream with the server, I'm unsure how it can subscribe to other server-client streams.
Another option would be to use a long-polling approach, i.e. something like the code below (in Python, since that is what I'm most familiar with, but the Go version should be very similar). This was not tested, and is only meant to give you an idea of how to do long-polling in gRPC:
.PROTO defs
-------------------------------------------------
service Updater {
	rpc GetUpdates(GetUpdatesRequest) returns (GetUpdatesResponse);
}

message GetUpdatesRequest {
	int64 last_received_update = 1;
}

message GetUpdatesResponse {
	repeated Update updates = 1;
	int64 update_index = 2;
}

message Update {
	// your update structure
}
SERVER
-----------------------------------------------------------
class UpdaterServer(UpdaterServicer):
    def __init__(self):
        self.condition = threading.Condition()
        self.updates = []

    def post_update(self, update):
        """
        Used whenever the clients should be updated about something. It will
        trigger their long-poll calls to return.
        """
        with self.condition:
            # TODO: You should probably remove old updates after some time
            self.updates.append(update)
            self.condition.notify_all()

    def GetUpdates(self, req, context):
        with self.condition:
            while self.updates[req.last_received_update + 1:] == []:
                self.condition.wait()
            new_updates = self.updates[req.last_received_update + 1:]
        response = GetUpdatesResponse()
        for update in new_updates:
            response.updates.add().CopyFrom(update)
        response.update_index = req.last_received_update + len(new_updates)
        return response
SEPARATE THREAD IN THE CLIENT
----------------------------------------------
request = GetUpdatesRequest()
request.last_received_update = -1

while True:
    stub = UpdaterStub(channel)
    try:
        response = stub.GetUpdates(request, timeout=60 * 10)
        handle_updates(response.updates)
        request.last_received_update = response.update_index
    except grpc.FutureTimeoutError:
        pass
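A rough Go translation of the long-polling store above, using sync.Cond in place of Python's threading.Condition; updates are simplified to strings, and the gRPC plumbing is omitted, so treat it as a sketch of the blocking logic only:

```go
package main

import (
	"fmt"
	"sync"
)

// UpdateStore is a hypothetical Go counterpart of UpdaterServer:
// clients long-poll GetUpdates with the index of the last update
// they saw and block until something newer exists.
type UpdateStore struct {
	mu      sync.Mutex
	cond    *sync.Cond
	updates []string
}

func NewUpdateStore() *UpdateStore {
	s := &UpdateStore{}
	s.cond = sync.NewCond(&s.mu)
	return s
}

// PostUpdate appends an update and wakes every blocked long-poll call.
func (s *UpdateStore) PostUpdate(u string) {
	s.mu.Lock()
	s.updates = append(s.updates, u)
	s.mu.Unlock()
	s.cond.Broadcast()
}

// GetUpdates blocks until there are updates newer than lastReceived,
// then returns them along with the new index.
func (s *UpdateStore) GetUpdates(lastReceived int) ([]string, int) {
	s.mu.Lock()
	defer s.mu.Unlock()
	for len(s.updates) <= lastReceived+1 {
		s.cond.Wait()
	}
	news := s.updates[lastReceived+1:]
	return news, lastReceived + len(news)
}

func main() {
	s := NewUpdateStore()
	done := make(chan struct{})
	go func() {
		updates, idx := s.GetUpdates(-1) // blocks until the first post
		fmt.Println(updates, idx)
		close(done)
	}()
	s.PostUpdate("user joined")
	<-done
}
```

In a real server the GetUpdates RPC handler would also honor the deadline on the stream context rather than waiting forever.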
Yup, I don't see any other way than keeping a global data structure containing all the connected streams and looping through them, telling each one about the event that just occurred.
Another approach is to spawn a gRPC server on the client side too. At the app level you do a handshake from client to server to exchange the client's gRPC server IP and port. The server then creates a client for that address and stores it in a list.
Now you can push messages to the clients from the list with plain unary RPC calls. No bidirectional stream needed.
Pros:
Possible to separate the clients "Push"-API from the server API.
Unary RPC push calls.
Cons:
Additional "server". Don't know if that is possible in every scenario.
You need a global map structure, with a new chan created for each connection. What I came up with is an intermediate channel that owns the global map.
An example for server streaming:
func (s *server) Subscribe(req *pb.SubscribeRequest, srv pb.SubscribeServer) error {
	// get a trace ID, generate a random string, or whatever identifies this goroutine
	ID := "randomString"
	// create a chan to receive response messages
	conn := make(chan *pb.SubscribeResponse)
	// an intermediate channel which has the ownership of the map
	s.broadcast <- &broadcastPayload{
		ID:    ID,                      // a unique identifier
		Conn:  conn,                    // the chan corresponding to the ID
		Event: EventEnum.AddConnection, // add, remove, or send a message
	}
	for {
		select {
		case <-srv.Context().Done():
			s.broadcast <- &broadcastPayload{
				ID:    ID,
				Event: EventEnum.RemoveConnection,
			}
			return nil
		case response := <-conn:
			if st, ok := status.FromError(srv.Send(response)); ok {
				switch st.Code() {
				case codes.OK:
					// noop
				case codes.Unavailable, codes.Canceled, codes.DeadlineExceeded:
					return nil
				default:
					return nil
				}
			}
		}
	}
}
}
For the broadcast goroutine:
// this goroutine has the ownership of the map[string]chan *pb.SubscribeResponse
go func() {
	for v := range s.broadcast {
		// do something based on the event
		switch v.Event {
		case EventEnum.AddConnection:
			// add the ID and conn to the map
			...
		case EventEnum.RemoveConnection:
			// delete the map key and close the conn channel here
			...
		case EventEnum.ReceiveResponse:
			// receive a message from business logic and send it to the
			// appropriate conn in the map as you like
			...
		}
	}
}()
I put some more details in a simple chat server/client implemented with gRPC in Go, as a sample.
All clients are stored in a map[string]chan *chat.StreamResponse:
type server struct {
	Host, Password string

	Broadcast chan *chat.StreamResponse

	ClientNames   map[string]string
	ClientStreams map[string]chan *chat.StreamResponse

	namesMtx, streamsMtx sync.RWMutex
}
And broadcast messages to all clients
func (s *server) broadcast(_ context.Context) {
	for res := range s.Broadcast {
		s.streamsMtx.RLock()
		for _, stream := range s.ClientStreams {
			select {
			case stream <- res:
				// noop
			default:
				ServerLogf(time.Now(), "client stream full, dropping message")
			}
		}
		s.streamsMtx.RUnlock()
	}
}
// send messages to an individual client
func (s *server) sendBroadcasts(srv chat.Chat_StreamServer, tkn string) {
	stream := s.openStream(tkn)
	defer s.closeStream(tkn)
	for {
		select {
		case <-srv.Context().Done():
			return
		case res := <-stream:
			if s, ok := status.FromError(srv.Send(res)); ok {
				switch s.Code() {
				case codes.OK:
					// noop
				case codes.Unavailable, codes.Canceled, codes.DeadlineExceeded:
					DebugLogf("client (%s) terminated connection", tkn)
					return
				default:
					ClientLogf(time.Now(), "failed to send to client (%s): %v", tkn, s.Err())
					return
				}
			}
		}
	}
}
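The map-plus-RWMutex pattern in that sample can be boiled down to a small hub that runs without any gRPC machinery; the names here are illustrative, not from the sample:

```go
package main

import (
	"fmt"
	"sync"
)

// hub fans messages out to all subscriber channels; a stand-in for
// the map[string]chan *chat.StreamResponse in the answer above.
type hub struct {
	mu      sync.RWMutex
	streams map[string]chan string
}

func newHub() *hub { return &hub{streams: make(map[string]chan string)} }

// subscribe registers a buffered channel so one slow client
// cannot block a broadcast to everyone else.
func (h *hub) subscribe(id string) chan string {
	ch := make(chan string, 8)
	h.mu.Lock()
	h.streams[id] = ch
	h.mu.Unlock()
	return ch
}

// broadcast sends msg to every subscriber, dropping it for full channels
// (the same select/default trick as the chat sample).
func (h *hub) broadcast(msg string) {
	h.mu.RLock()
	defer h.mu.RUnlock()
	for id, ch := range h.streams {
		select {
		case ch <- msg:
		default:
			fmt.Println("dropping message for slow client", id)
		}
	}
}

func main() {
	h := newHub()
	a, b := h.subscribe("alice"), h.subscribe("bob")
	h.broadcast("bob joined")
	fmt.Println(<-a, "/", <-b)
}
```

Each per-client server-streaming handler would then read from its subscribed channel and forward to srv.Send, unsubscribing when the stream context is done.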

Syncing websocket loops with channels in Golang

I'm facing a dilemma here trying to keep certain websockets in sync for a given user. Here's the basic setup:
type msg struct {
	Key   string
	Value string
}

type connStruct struct {
	//...
	ConnRoutineChans []*chan string
	LoggedIn         bool
	Login            string
	//...
	Sockets []*websocket.Conn
}

var (
	//...
	/* LIST OF CONNECTED USERS AND THEIR IP ADDRESSES */
	guestMap sync.Map
)
func main() {
	post("Started...")
	rand.Seed(time.Now().UTC().UnixNano())
	http.HandleFunc("/wss", wsHandler)
	panic(http.ListenAndServeTLS("...", "...", "...", nil))
}

func wsHandler(w http.ResponseWriter, r *http.Request) {
	if r.Header.Get("Origin")+":8080" != "https://...:8080" {
		http.Error(w, "Origin not allowed", 403)
		fmt.Println("Client origin not allowed! (https://" + r.Host + ")")
		fmt.Println("r.Header Origin: " + r.Header.Get("Origin"))
		return
	}
	conn, err := websocket.Upgrade(w, r, w.Header(), 1024, 1024)
	if err != nil {
		http.Error(w, "Could not open websocket connection", http.StatusBadRequest)
		fmt.Println("Could not open websocket connection with client!")
		return
	}
	// ADD CONNECTION TO guestMap IF CONNECTION IS nil
	var authString string = /* gets device identity */
	var authChan chan string = make(chan string)
	authValue, authOK := guestMap.Load(authString)
	if !authOK {
		// NO SESSION, CREATE A NEW ONE
		newSession := getSession()
		//defer newSession.Close()
		guestMap.Store(authString, connStruct{
			LoggedIn:         false,
			ConnRoutineChans: []*chan string{&authChan},
			Login:            "",
			Sockets:          []*websocket.Conn{conn},
			/* .... */
		})
	} else {
		// SESSION STARTED, ADD NEW SOCKET TO Sockets
		var tempConn connStruct = authValue.(connStruct)
		tempConn.Sockets = append(tempConn.Sockets, conn)
		tempConn.ConnRoutineChans = append(tempConn.ConnRoutineChans, &authChan)
		guestMap.Store(authString, tempConn)
	}
	go echo(conn, authString, &authChan)
}
func echo(conn *websocket.Conn, authString string, authChan *chan string) {
	var message msg
	// TEST CHANNEL
	authValue, _ := guestMap.Load(authString)
	go sendToChans(authValue.(connStruct).ConnRoutineChans, "sup dude?")
	fmt.Println("got past send...")
	for {
		select {
		case val := <-*authChan:
			// use value of channel
			fmt.Println("AuthChan for user #"+strconv.Itoa(myConnNumb)+" spat out: ", val)
		default:
			// if channels are empty, this is executed
		}
		readError := conn.ReadJSON(&message)
		fmt.Println("got past readJson...")
		if readError != nil || message.Key == "" {
			// DISCONNECT USER
			//.....
			return
		}
		_key, _value := chief(message.Key, message.Value, conn, browserAndOS, authString)
		if writeError := conn.WriteJSON(_key + "|" + _value); writeError != nil {
			//...
			return
		}
		fmt.Println("got past writeJson...")
	}
}

func sendToChans(chans []*chan string, message string) {
	for i := 0; i < len(chans); i++ {
		*chans[i] <- message
	}
}
I know, a big block of code eh? And I commented out most of it...
Anyway, if you've ever used a websocket most of it should be quite familiar:
1) func wsHandler() fires every time a user connects. It makes an entry in guestMap (one for each unique device that connects), which holds a connStruct, which in turn holds a list of channels: ConnRoutineChans []*chan string. This all gets passed to:
2) echo(), a goroutine that runs for each websocket connection. Here I'm just testing sending a message to the other running goroutines, but my for loop doesn't seem to be firing continuously; it only fires when the websocket receives a message from the open tab/window it's connected to. (If anyone can clarify this mechanic, I'd love to know why it's not looping constantly.)
3) For each window or tab the user has open on a given device there is a websocket and a channel, stored in arrays. I want to send a message to all the channels in the array (essentially the goroutines for the other open tabs/windows on that device) and receive the message in those goroutines to change some variables set in the constantly running goroutine.
What I have right now works only for the very first connection on a device, and (of course) it sends "sup dude?" to itself, since it's the only channel in the array at the time. If I then open a new tab (or even many), the message doesn't get sent to anyone at all. Strange? Then when I close all the tabs (my commented-out logic removes the device entry from guestMap) and start a new device session, still only the first connection gets its own message.
I already have a method for sending a message to all the other websockets on a device, but sending to a goroutine seems to be a little trickier than I thought.
To answer my own question:
First, I switched from a sync.Map to a normal map. Second, so that nobody reads and writes it at the same time, I made a channel that all read/write operations on the map go through. I try to keep the data access and manipulation quick so the channel doesn't get crowded too easily. Here's a small example of that:
package main

import (
	"fmt"
)

var (
	guestMap           = make(map[string]*guestStruct)
	guestMapActionChan = make(chan actionStruct)
)

type actionStruct struct {
	Action     func([]interface{}) []interface{}
	Params     []interface{}
	ReturnChan chan []interface{}
}

type guestStruct struct {
	Name string
	Numb int
}

func main() {
	// make chan listener
	go guestMapActionChanListener(guestMapActionChan)
	// some guest logs in...
	newGuest := guestStruct{Name: "Larry Josher", Numb: 1337}
	// add to the map
	addRetChan := make(chan []interface{})
	guestMapActionChan <- actionStruct{
		Action:     guestMapAdd,
		Params:     []interface{}{&newGuest},
		ReturnChan: addRetChan,
	}
	addReturned := <-addRetChan
	fmt.Println(addReturned)
	fmt.Println("Also, numb was changed by listener to:", newGuest.Numb)
	// Same kind of thing for removing, except (of course) there's
	// a lot more logic in a real-life application.
}

func guestMapActionChanListener(c chan actionStruct) {
	for {
		value := <-c
		returned := value.Action(value.Params)
		value.ReturnChan <- returned
		close(value.ReturnChan)
	}
}

func guestMapAdd(params []interface{}) []interface{} {
	// .. do some parameter verification checks
	theStruct := params[0].(*guestStruct)
	name := theStruct.Name
	theStruct.Numb = 75
	guestMap[name] = theStruct
	return []interface{}{"Added '" + name + "' to the guestMap"}
}
For communication between connections, I just have each socket loop hold onto its guestStruct, with more guestMapActionChan functions that take care of distributing data to other guests' guestStructs.
I'm not going to mark this as the correct answer unless I get better suggestions on how to do something like this the right way. But for now it works and should guarantee no races when reading/writing the map.
Edit: the correct approach should really have been to just use a sync.Mutex, as I do in the (mostly) finished project GopherGameServer.
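That sync.Mutex version might look like the sketch below (illustrative only, not code from GopherGameServer): guard the map directly instead of funneling every access through a channel.

```go
package main

import (
	"fmt"
	"sync"
)

type guestStruct struct {
	Name string
	Numb int
}

var (
	guestMapMtx sync.Mutex
	guestMap    = make(map[string]*guestStruct)
)

// addGuest and getGuest take the lock around every map access,
// which replaces the actionStruct/channel machinery entirely.
func addGuest(g *guestStruct) {
	guestMapMtx.Lock()
	defer guestMapMtx.Unlock()
	guestMap[g.Name] = g
}

func getGuest(name string) (*guestStruct, bool) {
	guestMapMtx.Lock()
	defer guestMapMtx.Unlock()
	g, ok := guestMap[name]
	return g, ok
}

func main() {
	addGuest(&guestStruct{Name: "Larry Josher", Numb: 1337})
	if g, ok := getGuest("Larry Josher"); ok {
		fmt.Println(g.Name, g.Numb)
	}
}
```

If reads dominate, sync.RWMutex (as in the gRPC chat sample earlier on this page) is a drop-in refinement.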
