I am using G-WAN (v4.3.14) and facing a strange issue. I am trying to pass some long text in the query string, but I have found that G-WAN does not accept query parameters beyond a total request size of 537 characters.
It responds with a 400 Bad Request.
An example string is:
http://xxx.xxx.xxx.xxx:yyyy/?t.cpp&c=DbE9kdOJGMm9yr7aypGlQBY1a9rZuiaMDAAnTJSbOBRJZo45YHbpAO5VENLa6IcmlSadZnTucpKBKb0E0G15pFHCgB4oNxqQ3m1K0CX8K15RQkawb8MThuoIHKp02vk9WwJFU5NkBJtwu80onudOkwWPUiGxKKcJiSwJJNcgDY1LQIJ1GnvgRGgomthoxppsZ1cl7zxIf5CjWggzsbUnADDTq5W4pBXveVnugOBHryqdTylhI4tudeae2jUnswezxtQM1qKG3ezGkM2dN68R7YxpCEfZ2N1nXggUkYdGn6em7veq5G5LpTVrdexn0fSozGbeNfHXS2OLjWGhffcEdGeu1dFKnFxNac6IETbIiVvTjv55wcZI7WBiTA0r60KJkUZYNn59W6XhnAwTk0zCYN2Rq8LraOjHzjXHjcyL9Sk6jw4D9K0wWLsiZHDfTOlnPr9jYp2SesyHlUJsCHPiHOR4fCBVwQMwh5YOddcpl2Kbr6CjSjWabaac
The code in my C++ file is:
# include "gwan.h"
# include <iostream>
using namespace std;
int main (int argc, char * argv[])
{
if(argc)
{
cout<<argv[0];
xbuf_cat(get_reply(argv), argv[0]);
}
else
{
xbuf_cat(get_reply(argv), "pass something to me to see it on your screen.");
}
return 200;
}
Can someone help me make G-WAN accept a query parameter of 1,000 characters or more?
The error with G-WAN v4.5+ is "414: Request URI too large".
Many production HTTP servers disable PUT/POST Entities to avoid abuse.
G-WAN initially used a limit slightly larger than 4 KiB, but since most requests do not need that much room, we have made the limit configurable by developers.
The example below (see entity_size.c for a working example) shows how to modify the G-WAN (server-global) PUT/POST Entity size limit from a servlet, but this can also be done in the init() or main() calls of a connection handler, and from the gwan/init.c script available in v4.10+:
u32 *max_entity_size = (u32*)get_env(argv, MAX_ENTITY_SIZE);
*max_entity_size = 200 * 1024; // new size in bytes (200 KiB)
You can change the limit at any time (even while a given user is connected) by using IP filtering in a connection handler.
Your servlets decide what to do with the entity anyway, so you can discard it, store it on disk, or process it in real time; see the entity.c example.
Beyond this, there are a few things to keep in mind:
to avoid DoS attacks where anybody can send huge entities (in the GBs) to your server, you might enlarge the request size limit for authorized users only;
when dealing with requests without a PUT/POST Entity, you may also dynamically enlarge the read buffer by allocating more memory to the READ_XBUF with xbuf_growto().
Now you know how to accept requests of any length. Make sure you do it only when needed.
You may want to check other related values like:
KALIVE_TMO // time-out in ms for HTTP keep-alives
REQUEST_TMO // time-out in ms waiting for request
MIN_SEND_SPEED // send rate in bytes/sec (if < close)
MIN_READ_SPEED // read rate in bytes/sec (if < close)
All of them can be set up from the gwan/init.c script - before any request can hit the server. This can also be done from G-WAN handlers and servlets, as shown in the examples cited above.
Related
I am currently learning to use Go as a server-side language. I'm learning how to handle forms, and I wanted to see how I could prevent a malicious client from sending a very large file (in the case of a form with multipart/form-data) and causing the server to run out of memory. For now, this is my code, which I found in a question here on Stack Overflow:
part, _ := ioutil.ReadAll(io.LimitReader(r.Body, 8388608))
r.Body = ioutil.NopCloser(io.MultiReader(bytes.NewReader(part), r.Body))
In my code, r is a *http.Request. I thought that code would work, but when I send a file of any size (according to my code, the maximum should be 8 MB) my handler still receives the entire file, so I doubt that my code actually works. So my questions are: does my code really work incorrectly? Is there a concept I am missing that explains this behaviour? How can I limit the size of an HTTP request correctly?
Update
I tried to run the code shown in the answers, that is, this code:
part, _ := ioutil.ReadAll(io.LimitReader(r.Body, 8388608))
r.Body = ioutil.NopCloser(bytes.NewReader(part))
But when I run that code and send a file larger than 8 MB, I get this message from my web browser:
The connection was reset
The connection to the server was reset while the page was loading.
How can I solve that? How can I read at most 8 MB without getting that error?
I would ask the question: "How is your service intended/expected to behave if it receives a request greater than the maximum size?"
Perhaps you could simply check the ContentLength of the request and immediately return a 400 Bad Request if it exceeds your maximum?
func MyHandler(rw http.ResponseWriter, rq *http.Request) {
    if rq.ContentLength > 8388608 {
        rw.WriteHeader(http.StatusBadRequest)
        rw.Write([]byte("request content limit exceeded"))
        return
    }
    // ... normal processing
}
This has the advantage of not reading anything and deciding not to proceed at the earliest possible opportunity (short of some throttling on the ingress itself), minimising CPU and memory load on your process.
It also simplifies your normal processing, which then does not have to cater for a partial request, or abort and possibly clean up if the request content limit is reached before all content has been processed.
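One caveat, not from the original answer: ContentLength is -1 when the client does not declare a length (for example with chunked transfer encoding), so the check above cannot catch every oversized request on its own. A minimal sketch of combining it with the standard library's http.MaxBytesReader, which caps how many bytes the handler will ever read from the body (the 8388608 limit is the one used in the question; the handler body is purely illustrative):

package main

import (
    "io"
    "net/http"
)

const maxBody = 8388608 // 8 MiB, the limit from the question

func MyHandler(rw http.ResponseWriter, rq *http.Request) {
    // Reject early when the declared length already exceeds the limit.
    if rq.ContentLength > maxBody {
        http.Error(rw, "request content limit exceeded", http.StatusBadRequest)
        return
    }
    // Also cap the actual read, in case ContentLength is -1 (chunked).
    rq.Body = http.MaxBytesReader(rw, rq.Body, maxBody)
    if _, err := io.Copy(io.Discard, rq.Body); err != nil {
        // MaxBytesReader returns an error once the limit is exceeded.
        http.Error(rw, "request content limit exceeded", http.StatusRequestEntityTooLarge)
        return
    }
    rw.Write([]byte("ok"))
}

func main() {
    http.HandleFunc("/", MyHandler)
    http.ListenAndServe(":8080", nil)
}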
Your code reads:
r.Body = ioutil.NopCloser(io.MultiReader(bytes.NewReader(part), r.Body))
This means that you are assigning a new io.MultiReader to your body that:
reads at most 8388608 bytes from a byte slice in memory
and then reads the rest of the body after those 8388608 bytes
To ensure that you only read 8388608 bytes at most, replace that line with:
r.Body = ioutil.NopCloser(bytes.NewReader(part))
I'm looking to load-test my app in Go. I haven't found this functionality in existing tools; I tried all of them. Here is what I'm trying to do:
Create 100 identical HTTP requests (as goroutines)
From each goroutine, connect to the HTTP server and send the body of the request (which can be up to a few MB), except the last byte
Synchronize between all goroutines - pretty much wait until all threads are at the point where there is only 1 byte left to send
Based on input from Terminal (for example, when I hit Enter), send the remaining byte, so I can test how the server handles this type of load - 100 large requests at the same time
I looked at the docs of the standard HTTP library, and I don't think it's possible with standard tools. I'm looking at rewriting some parts of the HTTP library to support this, or maybe even using plain old OS sockets to get this type of functionality, but it would take a lot of time just to implement that.
I'm wondering if I'm missing something here, some kind of HTTP library feature that allows me to do that easily? I appreciate any suggestion that might work without a full rewrite.
To my understanding there is no way to send part of an HTTP request and then the rest at the end, but I believe I can help with the concurrency part.
Two variables here: threads (mind the Python terminology) = the number of simultaneous goroutines, and number = the number of times to run the request.
package main

import (
    "fmt"

    "github.com/remeh/sizedwaitgroup"
)

func main() {
    fmt.Println("Input # of times to run")
    var number int
    fmt.Scan(&number)

    fmt.Println("Input # of threads")
    var threads int
    fmt.Scan(&threads)

    swg := sizedwaitgroup.New(threads)
    for i := 0; i < number; i++ {
        swg.Add()
        go func(i int) {
            defer swg.Done() // ensure to put your request after this line
            // Do request
        }(i)
    }
    swg.Wait()
}
This code uses the github.com/remeh/sizedwaitgroup library
Bear in mind, if one of the first requests is completed, it will start another without waiting for others to finish.
Here it is in practice:
https://codeshare.io/3A3dj4
https://pastebin.com/DP1sn1m4
Edit:
If you go further and manage to send all but the last byte of the HTTP request, you'll want to use channels to communicate when to send that last byte. I'm not too good at them, but this guide is great:
https://go.dev/blog/pipelines
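As a rough, hedged sketch of that channel pattern only (the standard net/http client does not expose a way to hold back the last byte of a request, so the actual send steps are left as comments): each goroutine does its work, then blocks on a shared channel that main closes when you hit Enter, so they all perform their final step at (almost) the same moment.

package main

import (
    "bufio"
    "fmt"
    "os"
    "sync"
)

func main() {
    const workers = 100
    release := make(chan struct{}) // closed to release all goroutines at once
    var ready, done sync.WaitGroup

    ready.Add(workers)
    done.Add(workers)
    for i := 0; i < workers; i++ {
        go func(i int) {
            defer done.Done()
            // ... connect and send everything except the last byte here ...
            ready.Done() // this goroutine is now one byte away from finishing
            <-release    // block until main closes the channel
            // ... send the last byte here ...
            fmt.Println("worker", i, "sent its final byte")
        }(i)
    }

    ready.Wait() // every goroutine is now parked on the release channel
    fmt.Println("Press Enter to send the last byte from every goroutine")
    bufio.NewReader(os.Stdin).ReadString('\n')
    close(release) // releases all goroutines simultaneously
    done.Wait()
}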
I want to create a simple gRPC endpoint to which the user can upload his/her picture. The protocol buffer declaration is the following:
message UploadImageRequest {
    AuthToken auth = 1;
    // An enum with either JPG or PNG
    FileType image_format = 2;
    // Image file as bytes
    bytes image = 3;
}
Is this approach to uploading pictures (and receiving pictures) still OK regardless of the warning in the gRPC documentation?
And if not, is the better (standard) approach to upload pictures using a standard form and store the image file location instead?
For large binary transfers, the standard approach is chunking. Chunking can serve two purposes:
reduce the maximum amount of memory required to process each message
provide a boundary for recovering partial uploads.
For your use case, #2 probably isn't very necessary.
In gRPC, a client-streaming call allows for fairly natural chunking since it has flow control, pipelining, and makes it easy to maintain context in the client and server code. If you care about recovery of partial uploads, then bidirectional streaming works well since the server can respond with acknowledgements of progress that the client can use to resume.
Chunking using individual RPCs is also possible, but has more complications. When load balancing, the backend may be required to coordinate with other backends for each chunk. If you upload the chunks serially, then the latency of the network can slow upload speed as you spend most of the time waiting for responses from the server. You then either have to upload in parallel (but how many in parallel?) or increase the chunk size. But increasing the chunk size increases the memory required to process each chunk and increases the granularity for recovering failed uploads. Parallel upload also requires the server to handle out-of-order uploads.
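A minimal sketch of the client-streaming approach in gRPC-Go, assuming stubs generated from a service like rpc UploadImage(stream UploadImageRequest) returns (Ack); the pb package path, the ImageService name, and the 64 KiB chunk size are assumptions, not part of the original answer:

import (
    "context"
    "io"

    pb "example.com/yourapp/proto" // hypothetical package generated by protoc
)

// uploadImage streams an image to the server in small chunks over one RPC.
func uploadImage(ctx context.Context, client pb.ImageServiceClient, r io.Reader) error {
    stream, err := client.UploadImage(ctx)
    if err != nil {
        return err
    }
    buf := make([]byte, 64*1024) // 64 KiB chunks keep per-message memory small
    for {
        n, readErr := r.Read(buf)
        if n > 0 {
            if err := stream.Send(&pb.UploadImageRequest{Image: buf[:n]}); err != nil {
                return err
            }
        }
        if readErr == io.EOF {
            break
        }
        if readErr != nil {
            return readErr
        }
    }
    // CloseAndRecv closes the sending side and waits for the server's Ack.
    _, err = stream.CloseAndRecv()
    return err
}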
The solution provided in the question will not work for large files; it will only work for smaller images.
The better and standard approach is to use chunking. gRPC supports streaming built in, so it is fairly easy to send data in chunks:
syntax = "proto3";

message UploadImageRequest {
    bytes image = 1;
}
rpc UploadImage(stream UploadImageRequest) returns (Ack);
In the above way we can use streaming for chunking.
For chunking, every language provides its own way to split a file based on chunk size.
Things to take care of:
You need to handle the chunking logic; streaming only helps with sending the chunks naturally.
If you also want to send metadata, there are three approaches.
1: use the structure below
message UploadImageRequest {
    AuthToken auth = 1;
    FileType image_format = 2;
    bytes image = 3;
}
rpc UploadImage(stream UploadImageRequest) returns (Ack);
Here bytes still carries the chunks; send AuthToken and FileType with the first chunk only, and omit that metadata from all other requests.
2: you can also use oneof, which is much easier.
message UploadImageRequest {
    oneof test_oneof {
        Metadata meta = 2;
        bytes image = 1;
    }
}

message Metadata {
    AuthToken auth = 1;
    FileType image_format = 2;
}
rpc UploadImage(stream UploadImageRequest) returns (Ack);
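To make the oneof approach concrete, a hedged sketch of the client side in gRPC-Go, reusing the hypothetical pb package from the earlier sketch; the TestOneof wrapper names are an assumption about what protoc would generate from the oneof above:

// The first message carries only metadata; every following message carries image bytes.
func uploadWithMetadata(stream pb.ImageService_UploadImageClient, meta *pb.Metadata, chunks [][]byte) error {
    if err := stream.Send(&pb.UploadImageRequest{
        TestOneof: &pb.UploadImageRequest_Meta{Meta: meta},
    }); err != nil {
        return err
    }
    for _, chunk := range chunks {
        if err := stream.Send(&pb.UploadImageRequest{
            TestOneof: &pb.UploadImageRequest_Image{Image: chunk},
        }); err != nil {
            return err
        }
    }
    _, err := stream.CloseAndRecv()
    return err
}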
3: just use the structure below; send metadata in the first chunk and data in all other chunks. You need to handle that in code.
syntax = "proto3";

message UploadImageRequest {
    bytes message = 1;
}
rpc UploadImage(stream UploadImageRequest) returns (Ack);
Lastly, for auth you can use headers (gRPC metadata) instead of sending it in the message.
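A minimal sketch of that header-based auth using gRPC-Go's metadata package; the "authorization" key and how the token is produced or validated are placeholders:

import (
    "context"

    "google.golang.org/grpc/metadata"
)

// Client side: attach the auth token to the call as a request header.
func withAuth(ctx context.Context, token string) context.Context {
    return metadata.AppendToOutgoingContext(ctx, "authorization", token)
}

// Server side: read the header back inside the (streaming) handler.
func tokenFromContext(ctx context.Context) (string, bool) {
    md, ok := metadata.FromIncomingContext(ctx)
    if !ok || len(md.Get("authorization")) == 0 {
        return "", false
    }
    return md.Get("authorization")[0], true
}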
Based on the following code, I built a version of an echo server, but with a threaded delay. This was built because I've noticed that upon initial connection, my first send is sent back to the client, but the client does not receive it until a second send. My real-world use case is that I need to send messages to the server, do a lot of processing, and then send the result back... say 10-30 seconds later (could be hours in some cases).
http://www.wangafu.net/~nickm/libevent-book/Ref8_listener.html
So here is my code. For brevity's sake, I have only included the libevent-related code, not the threading code or other stuff. When debugging, a new connection is set up, the string buffer is filled properly, and the writes appear to go through successfully.
http://pastebin.com/g02S2RTi
But I only receive the echo from the send-before-last. I send numbers from the client to validate this, and when I send a 1 from the client, I receive nothing back from the server via echo... even though the server is definitely writing to the buffer using evbuffer_add (I have also tried bufferevent_write_buffer).
When I then send a 2 from the client, I receive the 1 from the previous send. It's like my writes are being cached... I have turned off Nagle.
So, my question is: Does libevent cache sends using the following method?
evbuffer_add( outputBuffer, buffer, length );
Is there a way to flush this cache? Is there some other method to mark the cache as finished or complete? Can I force a send? It never sends on its own... I have even put in delays. Replacing evbuffer_add with send works perfectly every time.
Most likely you are affected by the Nagle algorithm - basically it buffers outgoing data before sending it to the network. Take a look at this article: TCP/IP options for high-performance data transmission.
Here is an example how to disable buffering:
int flag = 1;
int result = setsockopt(sock,            /* socket affected */
                        IPPROTO_TCP,     /* set option at TCP level */
                        TCP_NODELAY,     /* name of option */
                        (char *) &flag,  /* the cast is historical cruft */
                        sizeof(int));    /* length of option value */
I am looking to send a large message (> 1 MB) through the Windows Sockets send API. Is there an efficient way to do this? I do not want to loop and send the data in chunks. I have read somewhere that you can increase the socket buffer size and that this could help. Could anyone please elaborate on this? Any help is appreciated.
You should, and in fact must loop to send the data in chunks.
As explained in Beej's networking guide:
"send() returns the number of bytes actually sent out—this might be less than the number you told it to send! See, sometimes you tell it to send a whole gob of data and it just can't handle it. It'll fire off as much of the data as it can, and trust you to send the rest later."
This implies that even if you set the packet size to 1MB, the send() function may not send all of it, and you are forced to loop until the total number of bytes sent by your calls to send() total the number of bytes you are trying to send. In fact, the greater the size of the packet, the more likely it is that send() will not send it all.
Aside from all that, you don't want to send 1MB packets because if they get lost, you will have to transmit the entire 1MB packet again, whereas if you lost a 1K packet, retransmitting it is not a big deal.
In summary, you will have to loop your send() calls, and the receiver will even have to loop their recv() calls too. You will likely need to prepend a small header to each packet to tell the receiver how many bytes are being sent so the receiver can loop the appropriate number of times.
I suggest you take a look at Beej's network guide for more detailed info about send() and recv() and how to deal with this problem. It can be found at http://beej.us/guide/bgnet/output/print/bgnet_USLetter.pdf
Why don't you want to send it in chunks?
That's the way to do it in 99% of the cases.
What makes you think that sending in chunks is inefficient? The OS is likely to chunk large "send" calls anyway, and may coalesce small ones.
Likewise on the receiving side the client should be looping anyway as there's no guarantee of getting all the data in one go.
The Windows sockets subsystem is not obliged to send the whole buffer you provide anyway. You can't force it, since some network-level protocols have an upper limit for the packet size.
As a practical matter, you can actually allocate a large buffer and send in one call using Winsock. If you are not messing with socket buffer sizes, the buffer will generally be copied into kernel mode for sending anyway.
There is a theoretical possibility that it will return without sending everything, however, so you really should loop for correctness. The chunks you send should, however, be large (64k or the ballpark) to avoid repeated kernel transitions.
If you want to do a loop after all, you can use this C++ code:
#include <winsock2.h>  // SOCKET, send(), WSAGetLastError()
#include <cstring>     // memcpy()
#include <string>
#include <algorithm>   // std::min

#define DEFAULT_BUFLEN 1452

int SendStr(const SOCKET &ConnectSocket, const std::string &str, int strlen)
{
    char sndbuf[DEFAULT_BUFLEN];
    int sndbuflen = DEFAULT_BUFLEN;
    int iResult;
    int count = 0;
    int len;

    while (count < strlen) {
        len = (std::min)(strlen - count, sndbuflen);
        // void *memcpy(void *destination, const void *source, size_t num);
        memcpy(sndbuf, str.data() + count, len);
        // Send a buffer
        iResult = send(ConnectSocket, sndbuf, len, 0);
        // iResult: bytes actually sent
        if (iResult == SOCKET_ERROR) {
            throw WSAGetLastError();
        }
        else {
            if (iResult > 0) {
                count += iResult;
            }
            else {
                break;
            }
        }
    }
    return count;
}