Chunked transfer encoding HTTP request in Go?

I see that someone modified the net/http package to send a chunked request.
That was 4 years ago. Can't this be done directly with the official net/http package, without modification?

The net/http package automatically uses chunked encoding for request bodies when the content length is not known and the application did not explicitly set the transfer encoding to "identity". This feature dates back to the Go 1 release.
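For example, here is a minimal sketch (the upload URL is a placeholder): giving the request an io.Pipe as its body leaves the content length unknown, so the client sends the body with Transfer-Encoding: chunked.

package main

import (
	"io"
	"log"
	"net/http"
)

func main() {
	// An io.Pipe has no known length, so net/http leaves ContentLength unset
	// and transmits the body with Transfer-Encoding: chunked.
	pr, pw := io.Pipe()
	go func() {
		pw.Write([]byte("first part of the payload"))
		pw.Write([]byte("second part of the payload"))
		pw.Close()
	}()

	// http://example.com/upload is a placeholder URL.
	req, err := http.NewRequest("POST", "http://example.com/upload", pr)
	if err != nil {
		log.Fatal(err)
	}

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
}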

Related

How to setup OkHttp to request gzipped resources and NOT un-gzip them

I'm looking at new OkHttpClient.Builder() to see whether there's a setting that would allow me to do 'raw' GETs of resources and then get the compressed bytes for that resource in the response. As far as I can see OkHttp transparently ungzips payloads. I can't see a way of initializing OkHttpClient to not do that. I've tried googling for "gzip OkHttpClient.Builder" and I get a bunch of unrelated inexact matches. I'm missing something obvious. Obviously :-(
Set this request header:
Accept-Encoding: gzip
Or replace gzip with identity for no compression. In either case OkHttp won't interfere if you provide your own Accept-Encoding header.
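For illustration, if you send that header yourself the exchange looks roughly like this (host, path, and length are placeholders); OkHttp hands you the body still compressed because Content-Encoding: gzip was negotiated by you, not by it:

GET /resource HTTP/1.1
Host: example.com
Accept-Encoding: gzip

HTTP/1.1 200 OK
Content-Encoding: gzip
Content-Length: 1234

<gzip-compressed bytes>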

how to parse https raw bytes in golang

I am new to Go and want to modify the shadowsocks-go code to parse raw HTTPS response bytes, to check whether Google, Facebook or Twitter has blocked our service or not. Shadowsocks-go uses SOCKS5 to read HTTPS data. I checked the Go http package but still don't know how to parse raw HTTPS bytes. From Google, the examples only show how to use the Go http package.
Update
Actually, I want to make a reverse proxy where every proxy node contains a shadowsocks server. For every request I want to know whether the destination blocked it or not; if it did, I need to remove that node and add a new one. This requires parsing the raw HTTPS bytes to check the response status.
You can make secure requests via Go's net/http library. Just use the https:// scheme in the URL.
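A minimal sketch in Go, assuming all you need is the response status for a destination (the URL is just an example):

package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// net/http performs the TLS handshake automatically for https:// URLs.
	resp, err := http.Get("https://www.google.com/")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// The status line is usually enough to tell whether the destination
	// served the request or blocked it.
	fmt.Println(resp.Status)
}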

How does chunked downloading work in http/2 (or better what is the equivalent?)

In chunked downloading, there are extensions on each chunk that can be leveraged when coming to a browser. The last chunk can also contain optional headers defining things like content length: if we streamed a big file through, we can provide that information at the very end in the form of an HTTP header.
How does this work in HTTP/2? Are there even extensions or headers in the last piece? I see there is a DATA payload, but there are no extensions or optional headers AFAICT. I only see padding.
Maybe a better question is: do browsers even
leverage the optional headers in the last chunk?
leverage extensions in each chunk?
Perhaps programs may care, but if it is a program, I believe that in HTTP/2 the server just defines the API better and perhaps uses a push mechanism after the response and data have been sent?
How would one send optional headers in this new HTTP/2 world, though, if I were a server defining an API for clients?
I was trying to use Wireshark to capture a download trace, but Chrome seems to use QUIC, and I can't seem to decrypt the SSL with Wireshark for this use case when I use Firefox and drive.google.com to download a file (it stays encrypted, while in the same trace I actually saw some HTTP/2 traffic in TLS for some other service working just fine). Using the "(Pre)-Master-Secret log filename" seems to only work half the time and I am not quite sure why. I end up having to restart everything and re-run my cases.
Also, in the Server Hello, h2 was the protocol selected, but then no HTTP/2 packets appear when I filter to ip.addr=(server hello google ip) and tcp.port=443.
thanks,
Dean
In chunked downloading, there are extensions on each chunk that can be leveraged when coming to a browser. The last chunk can also contain optional headers defining things like content length: if we streamed a big file through, we can provide that information at the very end in the form of an HTTP header.
In theory (i.e. in the standard) you have the extensions and the possibility to add non-essential(!) headers at the end. In practice these features are not used. I'm not aware of any chunk extensions that have been defined, which means that browsers simply ignore them. And the example trailer defining a content-length makes no sense, because with chunked encoding any Content-Length header should be ignored. There might be some third-party libraries which make use of trailers, but since support for trailers would need to be declared up front by the client (using a TE: trailers header), browsers don't use them.
If I understand HTTP/2 correctly, chunk extensions are simply gone (nothing lost; they were never used). Trailers are still possible, i.e. you could add headers after all data are sent; see RFC 7540, section 8.1, HTTP Request/Response Exchange.
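As a sketch of what that looks like from Go's net/http (the X-Checksum trailer name is made up): the handler announces the trailer before writing the body and fills it in afterwards; over HTTP/1.1 it goes out after the last chunk, and over HTTP/2 it goes out in a trailing HEADERS frame.

package main

import (
	"fmt"
	"hash/crc32"
	"log"
	"net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
	// Announce the trailer before any body bytes are written.
	w.Header().Set("Trailer", "X-Checksum")

	body := []byte("a body streamed without a length known up front")
	w.Write(body)

	// Setting the header after the body has been written turns it into a trailer.
	w.Header().Set("X-Checksum", fmt.Sprintf("%08x", crc32.ChecksumIEEE(body)))
}

func main() {
	http.HandleFunc("/", handler)
	// Plain HTTP here means HTTP/1.1 chunked encoding; serve over TLS to get HTTP/2.
	log.Fatal(http.ListenAndServe(":8080", nil))
}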

Read chunked HTTP in .Net

I'm trying to write a VB.net client that reads HTTP chunked data. The chunks contain additional information that I need to use.
I've already figured out that I can't use the HTTPWebResponse, since it hides the optional tags.
So, the way I understand it, I need to use a TCPClient, send the HTTP request through it, and then parse the response.
My question at this point is how do I create and send the HTTP request, especially as HTTPWebRequest is not serializable.
Any help, including an indication of a better way to do this, would be appreciated.
If you're going to use TCPClient, then you're going to have to do the request by hand. Fortunately, HTTP is reasonably easy to do. Just write the headers you need to send, each line delimited by \r\n (CRLF).
You'll probably want/need to read up on the HTTP spec.
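As an illustration of what you'd write to the TCPClient's stream (host and path are placeholders), a minimal request is just CRLF-terminated header lines followed by a blank line:

GET /path/to/resource HTTP/1.1
Host: www.example.com
Connection: close

The chunked response then comes back as a hex chunk size (optionally followed by ;extension), the chunk bytes, and finally a zero-size chunk followed by any trailer headers, all of which you parse yourself; that is what lets you get at the optional data that HTTPWebResponse hides.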

Access log replay for load testing? JMeter pitfalls and competitors

Context
We wish to "replay" web server access logs to generate load tests. JMeter came to mind, as I'd recently read blog posts about using JMeter in the cloud (e.g. firing up a number of Amazon EC2 instances to generate the load).
For years I had heard of JMeter's ability to replay access logs, but in reviewing this feature I found the following.
Access Log Sampler
DOES:
recreate sessions, i.e. handle the jsessionId token (though it tries to approximate sessions by IP address);
DOES NOT:
handle POST data (even if you could configure Apache/Tomcat to write out POST data to the access log, the JMeter Access Log Sampler only handles the 'common' log format).
POST data would go a long way toward recreating actual load.
Additionally, the documentation describes the Access Log Sampler as "alpha code" even though it's 8 years old. It doesn't seem actively maintained. (That's longer than Gmail's beta.)
httperf
Another blog post pointed me to the httperf tool. I've started to read up on it:
blog: http://www.igvita.com/2008/09/30/load-testing-with-log-replay/
httperf: http://code.google.com/p/httperf/
Summary
What's the best way to generate load testing 'scripts' from real user data?
What has worked best for you?
Pros and cons of various tools?
JMeter + HTTP Raw Request + Raw Data Source works well for me.
I will describe how we solve this problem using our own load-testing tool called Yandex Tank.
It can handle a simple access.log, but only 'GET' requests. When there's a need to make other types of requests, we use other ammo formats (an ammo file contains all the requests that we are going to send to our server). Example:
342
POST / HTTP/1.1^M
Host: xxx.xxx.xxx.xxx:8080^M
Connection: keep-alive^M
Keep-Alive: 300^M
Content-Type: multipart/form-data; boundary=AGHTUNG^M
Content-Length: 1400^M
Connection: Close^M
^M
--AGHTUNG^M
Content-Disposition: form-data; name="fp"; filename="fp_tank"^M
Content-Type: application/octet-stream^M
Content-Transfer-Encoding: binary^M
...
--AGHTUNG--^M
The number ('342') on the first line is the size of the following request, which is in its raw format. You could write a simple script in your favourite language that generates such ammo files from your access.log and then use them for load testing (see the sketch below).
Such an ammo format makes it really flexible. For example, this code generates ammo from FCGI logs (POST bodies are encoded in Base64). On the other hand, you will need to handle sessions manually.
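As a sketch of such a generator in Go (assuming the standard common log format, with example.com standing in for the host under test):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	// Read an access.log in common log format on stdin and emit Yandex Tank
	// style ammo: the request size in bytes on one line, then the raw request.
	scanner := bufio.NewScanner(os.Stdin)
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		if len(fields) < 7 {
			continue // not a common-log-format line
		}
		method := strings.TrimPrefix(fields[5], `"`) // field 5 holds `"GET`
		path := fields[6]
		if method != "GET" {
			continue // POST bodies are not in the access log, so skip them
		}
		// example.com is a placeholder for the host under test.
		req := fmt.Sprintf("%s %s HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n", method, path)
		fmt.Printf("%d\n%s", len(req), req)
	}
}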
You can easily replay access logs with POST data using ZebraTester. It has many plugins similar to JMeter, and also the ability to add inline scripts with which you can easily extract POST payloads, URLs, timestamps, etc. from the access logs. You can run load tests directly from the tool locally, or copy the recorded script to the SaaS portal to run massive million-virtual-user load tests.
