I'm writing an HTTP server in Go, but I find that http.HandleFunc blocks when multiple requests come in from the web browser. How can I make the server handle multiple requests at the same time? Thanks.
my code is:
func DoQuery(w http.ResponseWriter, r *http.Request) {
	r.ParseForm()
	fmt.Printf("%d path %s\n", time.Now().Unix(), r.URL.Path)
	time.Sleep(10 * time.Second)
	fmt.Fprintf(w, "hello...")
	// why does this function block when there are multiple requests?
}
func main() {
	fmt.Printf("server start working...\n")
	http.HandleFunc("/query", DoQuery)
	s := &http.Server{
		Addr:         ":9090",
		ReadTimeout:  30 * time.Second,
		WriteTimeout: 30 * time.Second,
		//MaxHeaderBytes: 1 << 20,
	}
	log.Fatal(s.ListenAndServe())
	fmt.Printf("server stop...")
}
I ran your code and everything worked as expected. I did two requests at the same time (curl localhost:9090/query) and they both finished 10 seconds later, together. Maybe the problem is elsewhere? Here's the command I used: time curl -s localhost:9090/query | echo $(curl -s localhost:9090/query) – tjameson
Thanks.
That's strange. When I request the same URL twice from Chrome, the two requests are not handled at the same time, but when I test with curl they are. When I send two requests to different URLs, they are also handled at the same time.
[root@localhost httpserver]# ./httpServer
server start working...
1374301593 path /query?form=chrome
1374301612 path /query?from=cur2
1374301614 path /query?from=cur1
1374301618 path /query?form=chrome
1374301640 path /query?form=chrome2
1374301643 path /query?form=chrome1
*1374301715 path /query?form=chrome
1374301725 path /query?form=chrome*
**1374301761 path /query?form=chrome1
1374301763 path /query?form=chrome2**
Yes, the standard HTTP server will start a new goroutine for each request. You should be able to do thousands of requests in parallel depending on the operating system settings.
Your browser might be limiting how many requests it will send to one server; be sure you are testing with a client that doesn't have that limitation/"optimization".
The Go docs reliably explain that the HTTP server creates a new goroutine for each request: http://golang.org/pkg/net/http/#Server.Serve
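If you want to rule the browser out entirely, a small Go client that fires two requests concurrently makes the behaviour easy to observe. This is only a sketch and assumes the server above is listening on :9090; both requests should finish after roughly 10 seconds in total, not 20:

package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"sync"
	"time"
)

func main() {
	var wg sync.WaitGroup
	start := time.Now()
	for i := 1; i <= 2; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			// each request runs in its own goroutine, so they are issued concurrently
			resp, err := http.Get(fmt.Sprintf("http://localhost:9090/query?from=client%d", n))
			if err != nil {
				fmt.Println("request failed:", err)
				return
			}
			defer resp.Body.Close()
			body, _ := ioutil.ReadAll(resp.Body)
			fmt.Printf("request %d got %q after %v\n", n, body, time.Since(start))
		}(i)
	}
	wg.Wait()
}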
I am trying to build a proxy server, following a Medium post, but I am not able to log the RequestURI.
func handleTunneling(w http.ResponseWriter, r *http.Request) {
	fmt.Println("SCHEME:", r.URL.Scheme, "HOST:", r.Host, "PATH", r.URL.Path)
	dest_conn, err := net.DialTimeout("tcp", r.Host, 10*time.Second)
	// ...
}
The result for https://example.com/custom_page is:
SCHEME: HOST: example.com:443 PATH
But via DialTimeout the request to the original URI still goes through and gets its response. Any suggestions?
Thanks
From what I understand of the Medium article you mention, handleTunneling only ever sees the CONNECT request that opens the tunnel. For an HTTPS proxy the client sends something like CONNECT example.com:443, so only the host and port are visible to the proxy; the scheme and path travel encrypted inside the tunnel and never reach it. The actual proxying happens in the go transfer() calls:
go transfer(dest_conn, client_conn)
go transfer(client_conn, dest_conn)
which sit in the last block of the handleTunneling function. So it makes sense that when you do
fmt.Println("SCHEME:", r.URL.Scheme, "HOST:", r.Host, "PATH", r.URL.Path)
on the first line of the function, the scheme and path come out empty.
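For reference, the transfer helper that does the copying is typically just an io.Copy in each direction; a minimal sketch (assuming it matches the helper in the Medium post) looks like this:

import "io"

// transfer copies bytes in one direction until either side closes.
// Two of these running concurrently, one per direction, form the tunnel.
func transfer(destination io.WriteCloser, source io.ReadCloser) {
	defer destination.Close()
	defer source.Close()
	io.Copy(destination, source)
}

If you really need the full URL being requested, you would have to terminate TLS at the proxy (a MITM-style proxy with its own certificates) instead of blindly tunneling bytes.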
I tried to get the HTML source of Reddit with Go:
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"time"
)

func main() {
	client := http.Client{
		Timeout: 5 * time.Second,
	}
	resp, err := client.Get("https://www.reddit.com/")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	bytes, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("HTML:\n\n", string(bytes))
	var input string
	fmt.Scanln(&input)
}
The first attempt was fine, but the second time it ran into an error:
<p>we're sorry, but you appear to be a bot and we've seen too many requests
from you lately. we enforce a hard speed limit on requests that appear to come
from bots to prevent abuse.</p>
<p>if you are not a bot but are spoofing one via your browser's user agent
string: please change your user agent string to avoid seeing this message
again.</p>
<p>please wait 6 second(s) and try again.</p>
<p>as a reminder to developers, we recommend that clients make no
more than <a href="http://github.com/reddit/reddit/wiki/API">one
request every two seconds</a> to avoid seeing this message.</p>
I tried adding a delay, but it still doesn't work.
Sorry about my bad English.
Reddit doesn't want automatic scanners/grabbers on their site and has a bot-protection mechanism.
Here's a recommendation from them:
one request every two seconds
Just add a delay between requests.
The Timeout serves a different purpose: it is an upper limit on how long a single request may run. What you need is a sleep between subsequent requests:
time.Sleep(6 * time.Second)
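Putting both pieces together, a sketch of a politer client might look like this; the User-Agent string and the second URL are just placeholders of mine, not something Reddit prescribes:

package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"time"
)

func fetch(client *http.Client, url string) (string, error) {
	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		return "", err
	}
	// Identify yourself with a descriptive User-Agent instead of Go's default one.
	req.Header.Set("User-Agent", "my-reddit-reader/0.1 (contact: you@example.com)")
	resp, err := client.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := ioutil.ReadAll(resp.Body)
	return string(body), err
}

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	urls := []string{"https://www.reddit.com/", "https://www.reddit.com/r/golang/"}
	for i, u := range urls {
		if i > 0 {
			time.Sleep(2 * time.Second) // stay under "one request every two seconds"
		}
		html, err := fetch(client, u)
		if err != nil {
			fmt.Println("fetch failed:", err)
			continue
		}
		fmt.Println(u, "->", len(html), "bytes")
	}
}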
A long time ago I built a website with Go, and now I want to track online users.
I want to do this without Redis, working with a SessionID.
What is the best way to do this?
I wrote a global handler :
type Tracker struct {
	http.Handler
}

func NewManager(handler http.Handler) *Tracker {
	return &Tracker{Handler: handler}
}

func (h *Tracker) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	log.Println(r.RemoteAddr)
	h.Handler.ServeHTTP(w, r)
}
...
srv := &http.Server{
	Handler:      NewManager(e),
	Addr:         "127.0.0.1" + port,
	WriteTimeout: 15 * time.Second,
	ReadTimeout:  15 * time.Second,
}
log.Fatal(srv.ListenAndServe())
I think one thing I could do is: add a SessionID on the client, save it in a map on the server, and count online users from that.
Is that a good and correct way?
A global handler, middleware (if you're using a router pkg look at this) or just calling a stats function on popular pages would be enough. Be careful to exclude bots, rss hits, or other traffic you don't care about.
Assuming you have one process and want to track users seen in the last 5 minutes or so, yes, a server-side map would be fine. You can issue tokens (depends on the user allowing cookies, and costs bandwidth on each request) or just hash the IP (works pretty well, with the potential for slight undercounting). You then need to expire entries after some interval and use a mutex to protect the map. On restart you lose the count, and if you run two processes you can't do this at all; that's the downside of in-memory storage, and you would need a separate caching process to persist the data. So this is not suitable for large sites, but you could easily move to a more persistent store later.
var PurgeInterval = 5 * time.Minute
var identifiers = make(map[string]time.Time)
var mu sync.RWMutex

...

// Hash ip + ua for anonymity in our store
hasher := sha256.New()
hasher.Write([]byte(ip))
hasher.Write([]byte(ua))
id := base64.URLEncoding.EncodeToString(hasher.Sum(nil))

// Insert the entry with the current time
mu.Lock()
identifiers[id] = time.Now()
mu.Unlock()

...

// Clear the cache at intervals
purgeTime := time.Now().Add(-PurgeInterval)
mu.Lock()
for k, v := range identifiers {
	if v.Before(purgeTime) {
		delete(identifiers, k)
	}
}
mu.Unlock()
Something like that.
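If it helps, here is one way the purge and a live count could be wired up; startPurger and OnlineCount are my own names, not part of the snippet above:

// startPurger clears out stale entries on a background ticker.
func startPurger() {
	go func() {
		for range time.Tick(PurgeInterval) {
			cutoff := time.Now().Add(-PurgeInterval)
			mu.Lock()
			for k, v := range identifiers {
				if v.Before(cutoff) {
					delete(identifiers, k)
				}
			}
			mu.Unlock()
		}
	}()
}

// OnlineCount reports how many identifiers were seen within the purge window.
func OnlineCount() int {
	mu.RLock()
	defer mu.RUnlock()
	return len(identifiers)
}

Call startPurger() once at startup, and OnlineCount() wherever you want to display the number of users online.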
I'm trying to develop a simple job queue server with some workers that query it, but I've run into a problem with my net/http server. I'm surely doing something wrong, because after ~3 minutes my server starts displaying:
http: Accept error: accept tcp [::]:4200: accept4: too many open files; retrying in 1s
For information, it receives 10 requests per second in my test case.
Here are two files to reproduce the error:
// server.go
package main

import (
	"net/http"
)

func main() {
	http.HandleFunc("/get", func(rw http.ResponseWriter, r *http.Request) {
		http.Error(rw, "Try again", http.StatusInternalServerError)
	})
	http.ListenAndServe(":4200", nil)
}
// worker.go
package main

import (
	"net/http"
	"time"
)

func main() {
	for {
		res, _ := http.Get("http://localhost:4200/get")
		defer res.Body.Close()
		if res.StatusCode == http.StatusInternalServerError {
			time.Sleep(100 * time.Millisecond)
			continue
		}
		return
	}
}
I've already done some searching about this error and found some interesting answers, but none of them fixed my issue.
The first answer I saw was to correctly close the Body of the http.Get response; as you can see, I did that.
The second was to raise the file-descriptor ulimit of my system, but since I won't control where my app will run, I'd prefer not to rely on that (for information, it's set to 1024 on my system).
Can someone explain why this problem happens and how I can fix it in my code?
Thanks a lot for your time.
EDIT :
EDIT 2: In the comments, Martin pointed out that I'm not actually closing the Body. I closed it explicitly (without defer) and that fixed the issue. Thanks Martin! I thought continue would execute my defer; I was wrong.
I found a post explaining the root problem in a lot more detail.
Nathan Smith even explains how to control timeouts on the TCP level, if needed.
Below is a summary of everything I could find on this particular problem, as well as the best practices to avoid this problem in future.
Problem
When a response is received, regardless of whether the response body is needed, the connection is kept alive until the response body stream is closed. So, as mentioned in this thread, always close the response body, even if you do not need to read its content:
func Ping(url string) bool {
	// simple GET request on the given URL
	res, err := http.Get(url)
	if err != nil {
		// if unable to GET the given URL, then ping must fail
		return false
	}
	// always close the response body, even if the content is not required
	defer res.Body.Close()
	// is the page status okay?
	return res.StatusCode == http.StatusOK
}
Best Practice
As mentioned by Nathan Smith, never use http.DefaultClient in production systems; this includes calls like http.Get, which uses http.DefaultClient under the hood.
Another reason to avoid http.DefaultClient is that it is a singleton (a package-level variable), so the garbage collector will never clean it up, and any idle streams/sockets tied to it can stay alive.
Instead create your own instance of http.Client and remember to always specify a sane Timeout:
func Ping(url string) bool {
	// create a new instance of http.Client with a timeout of 2 seconds
	client := http.Client{Timeout: time.Second * 2}
	// simple GET request on the given URL
	res, err := client.Get(url)
	if err != nil {
		// if unable to GET the given URL, then ping must fail
		return false
	}
	// always close the response body, even if the content is not required
	defer res.Body.Close()
	// is the page status okay?
	return res.StatusCode == http.StatusOK
}
Safety Net
The safety net is for that newbie on the team, who does not know the shortfalls of http.DefaultClient usage. Or even that very useful, but not so active, open-source library that is still riddled with http.DefaultClient calls.
Since http.DefaultClient is a Singleton we can easily change the Timeout setting, just to ensure that legacy code does not cause idle connections to remain open.
I find it best to set this in the init function of the main package:
package main

import (
	"net/http"
	"time"
)

func init() {
	/*
		Safety net for the 'too many open files' issue on legacy code.
		Set a sane timeout duration for http.DefaultClient, to ensure idle connections are terminated.
		Reference: https://stackoverflow.com/questions/37454236/net-http-server-too-many-open-files-error
	*/
	http.DefaultClient.Timeout = time.Minute * 10
}
As Martin said in a comment, I wasn't really closing the Body after the Get request. I used defer res.Body.Close(), but it never runs because I stay inside the for loop, and continue doesn't trigger the deferred call.
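For completeness, a sketch of the corrected worker loop, closing the body explicitly before looping again:

// worker.go (fixed)
package main

import (
	"net/http"
	"time"
)

func main() {
	for {
		res, err := http.Get("http://localhost:4200/get")
		if err != nil {
			time.Sleep(100 * time.Millisecond)
			continue
		}
		if res.StatusCode == http.StatusInternalServerError {
			res.Body.Close() // close before continuing; a defer would only run when main returns
			time.Sleep(100 * time.Millisecond)
			continue
		}
		res.Body.Close()
		return
	}
}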
Please note that in some cases the setting
net.ipv4.tcp_tw_recycle = 1
in /etc/sysctl.conf could cause this error, because TCP connections remain open.
A temporary workaround is simply to increase the number of open files:
ulimit -Sn 10000
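If you would rather raise the limit from inside the process than from the shell, something like this works on Linux; it is only a sketch, and the exact Rlimit fields can differ between platforms:

package main

import (
	"fmt"
	"syscall"
)

func main() {
	var rl syscall.Rlimit
	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rl); err != nil {
		fmt.Println("getrlimit:", err)
		return
	}
	rl.Cur = rl.Max // raise the soft limit up to the hard limit
	if err := syscall.Setrlimit(syscall.RLIMIT_NOFILE, &rl); err != nil {
		fmt.Println("setrlimit:", err)
		return
	}
	fmt.Printf("open-file limit now %d (max %d)\n", rl.Cur, rl.Max)
}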
This simple HTTP server contains a call to time.Sleep() that makes
each request take five seconds. When I try quickly loading multiple
tabs in a browser, it is obvious that each request
is queued and handled sequentially. How can I make it handle concurrent requests?
package main

import (
	"fmt"
	"net/http"
	"time"
)

func serve(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintln(w, "Hello, world.")
	time.Sleep(5 * time.Second)
}

func main() {
	http.HandleFunc("/", serve)
	http.ListenAndServe(":1234", nil)
}
Actually, I just found the answer to this after writing the question, and it is very subtle. I am posting it anyway, because I couldn't find the answer on Google. Can you see what I am doing wrong?
Your program already handles the requests concurrently. You can test it with ab, a benchmark tool which is shipped with Apache 2:
ab -c 500 -n 500 http://localhost:1234/
On my system, the benchmark takes a total of 5043ms to serve all 500 concurrent requests. It's just your browser which limits the number of connections per website.
Benchmarking Go programs isn't that easy by the way, because you need to make sure that your benchmark tool isn't the bottleneck and that it is also able to handle that many concurrent connections. Therefore, it's a good idea to use a couple of dedicated computers to generate load.
In server.go in the net/http package, the goroutine is spawned in the Serve function when a connection is accepted. Below is the snippet:
// Serve accepts incoming connections on the Listener l, creating a
// new service goroutine for each. The service goroutines read requests and
// then call srv.Handler to reply to them.
func (srv *Server) Serve(l net.Listener) error {
	for {
		rw, e := l.Accept()
		if e != nil {
			// ...... (error and retry handling elided)
		}
		c, err := srv.newConn(rw)
		if err != nil {
			continue
		}
		c.setState(c.rwc, StateNew) // before Serve can return
		go c.serve()
	}
}
If you use an XHR request, make sure the xhr instance is a local variable.
For example, xhr = new XMLHttpRequest() creates a global variable; when you make parallel requests with the same xhr variable, you receive only one result. So declare it locally, like this: var xhr = new XMLHttpRequest().