Any way to compress HTML output with Martini? - go

I'm working in a quasi-embedded environment, so speed is everything. I have found that the app is speedier if I compress my .html files. Is there a flag or some other way in Martini to do this on the fly?

You can use the gzip middleware:
https://github.com/codegangsta/martini-contrib/tree/master/gzip
import (
    "github.com/codegangsta/martini"
    "github.com/codegangsta/martini-contrib/gzip"
)

func main() {
    m := martini.Classic()
    // gzip every request
    m.Use(gzip.All())
    m.Run()
}
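Since the question is specifically about compressing HTML output, here is a minimal sketch of the same setup with a route that returns an HTML page instead of plain text. The route path and markup are made up for illustration; the handler relies on Martini injecting http.ResponseWriter so the Content-Type can be set before the string is written:

package main

import (
    "net/http"

    "github.com/codegangsta/martini"
    "github.com/codegangsta/martini-contrib/gzip"
)

func main() {
    m := martini.Classic()
    // Register the gzip middleware before the routes so responses are
    // compressed whenever the client sends Accept-Encoding: gzip.
    m.Use(gzip.All())
    // Hypothetical route returning an HTML page.
    m.Get("/page", func(w http.ResponseWriter) string {
        w.Header().Set("Content-Type", "text/html; charset=utf-8")
        return "<html><body><h1>Hello, World!</h1></body></html>"
    })
    m.Run()
}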

This answer is just to show that @fabrizioM's answer actually works:
Step 1: Create the server
package main

import (
    "github.com/codegangsta/martini"
    "github.com/codegangsta/martini-contrib/gzip"
)

func main() {
    m := martini.Classic()
    // gzip every request
    m.Use(gzip.All())
    m.Get("/hello", func() string {
        return "Hello, World!"
    })
    m.Run()
}
Step 2: Run the server
go run main.go
Step 3: Try the server
This is the step where you must remember to include the Accept-Encoding: gzip header (or equivalent).
Without compression:
curl --dump-header - http://localhost:3000/hello
HTTP/1.1 200 OK
Date: Wed, 09 Jul 2014 17:19:35 GMT
Content-Length: 13
Content-Type: text/plain; charset=utf-8
Hello, World!
With compression:
curl --dump-header - http://localhost:3000/hello -H 'Accept-Encoding: gzip'
HTTP/1.1 200 OK
Content-Encoding: gzip
Content-Type: text/plain; charset=utf-8
Vary: Accept-Encoding
Date: Wed, 09 Jul 2014 17:21:02 GMT
Content-Length: 37
(gzip-compressed binary body)
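If you would rather verify the behaviour from Go than from curl, here is a minimal client sketch (it assumes the server above is running on :3000). Setting Accept-Encoding by hand disables Go's transparent decompression, so the Content-Encoding header stays visible on the response:

package main

import (
    "fmt"
    "net/http"
)

func main() {
    req, err := http.NewRequest("GET", "http://localhost:3000/hello", nil)
    if err != nil {
        panic(err)
    }
    // Ask for gzip explicitly, like the curl -H 'Accept-Encoding: gzip' call above.
    req.Header.Set("Accept-Encoding", "gzip")

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    fmt.Println("Content-Encoding:", resp.Header.Get("Content-Encoding")) // expect "gzip"
}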

Related

GO resty equivalent to the below CURL command

So, I have a curl command that works just fine. But when I try to implement it in Resty I get an error: 405 (Method Not Allowed). That error code shouldn't be taken too literally; it's just an indication that I am doing something wrong.
The curl command that works like a champ does this:
christianb@christianb-mac hashicorp % curl -vn --location --request PUT 'http://localhost:8081/artifactory/example-repo-local/crash.zip' \
--header 'Content-Type: application/zip' \
--data-binary '@./samples/crash.zip'
* Trying 127.0.0.1:8081...
* Connected to localhost (127.0.0.1) port 8081 (#0)
* Server auth using Basic with user 'admin'
> PUT /artifactory/example-repo-local/crash.zip HTTP/1.1
> Host: localhost:8081
> Authorization: Basic xxx=
> User-Agent: curl/7.78.0
> Accept: */*
> Content-Type: application/zip
> Content-Length: 0
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 201
< X-JFrog-Version: Artifactory/7.24.3 72403900
< X-Artifactory-Id: 65b0c15e32af425b:-53411fa9:17be4f9b6e8:-8000
< X-Artifactory-Node-Id: cb4b887aed9e
< Location: http://localhost:8081/artifactory/example-repo-local/crash.zip
< X-Checksum-Sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
< Content-Type: application/vnd.org.jfrog.artifactory.storage.ItemCreated+json;charset=ISO-8859-1
< Transfer-Encoding: chunked
< Date: Wed, 15 Sep 2021 21:22:56 GMT
<
christianb@christianb-mac hashicorp %
And the resty call looks like this:
PUT /artifactory/example-local-repo/crash.zip HTTP/1.1
Host: 127.0.0.1
User-Agent: jfrog/terraform-provider-artifactory:2.3.1
Content-Length: 4007
Accept: */*
Authorization: Basic cccxxx=
Content-Type: multipart/form-data; boundary=a8cddfc21bc1ecdf09e0c82e2e0ea2ac7627c4cbfeae3e51ed9e68987a99
Accept-Encoding: gzip
--a8cddfc21bc1ecdf09e0c82e2e0ea2ac7627c4cbfeae3e51ed9e68987a99
Content-Disposition: form-data; name="crash.zip"; filename="../../samples/crash.zip"
Content-Type: application/zip
...
HTTP/1.1 405
X-JFrog-Version: Artifactory/7.24.3 72403900
X-Artifactory-Id: 65b0c15e32af425b:-53411fa9:17be4f9b6e8:-8000
X-Artifactory-Node-Id: cb4b887aed9e
Allow: OPTIONS, GET, HEAD, POST
Content-Type: application/json;charset=ISO-8859-1
Content-Length: 65
Date: Wed, 15 Sep 2021 21:41:07 GMT
{
"errors" : [ {
"status" : 405,
"message" : ""
} ]
}
So, clearly resty is treating this as a multi-part upload, which is wrong.
Does anyone know the equivalent resty call to this curl command?
I've tried:
uri := "/artifactory/" + remotePath
reader, err := os.Open(localPath)
if err != nil {
    return err
}
_, err = client.R().SetBody(reader).
    SetHeader("Content-Type", contentType).Put(uri)
and
_, err = client.R().SetFileReader(filepath.Base(localPath), localPath, reader).
    SetHeader("Content-Type", contentType).Put(uri)
and
uri := "/artifactory/" + remotePath
_, err := client.R().SetFile(filepath.Base(localPath), localPath).
    SetHeader("Content-Type", contentType).Put(uri)
None work.
SetFileReader and SetFile are both intended to be used for multipart uploads.
SetBody is the right function to use, but I believe you need to read in the file contents first and then pass the bytes to SetBody:
uri := "/artifactory/" + remotePath
data, err := os.ReadFile(localPath)
if err != nil {
    return err
}
_, err = client.R().SetBody(data).
    SetHeader("Content-Type", contentType).Put(uri)
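For context, here is a sketch of how that call might be wired into a complete client. The base URL, credentials and file paths are placeholders, uploadArtifact is a hypothetical helper, and it assumes resty v2 (SetBaseURL was added in v2.7; older releases call it SetHostURL):

package main

import (
    "log"
    "os"
    "path"

    "github.com/go-resty/resty/v2"
)

// uploadArtifact sends the raw file bytes in the request body so resty
// does not fall back to a multipart/form-data upload.
func uploadArtifact(client *resty.Client, localPath, remotePath, contentType string) error {
    data, err := os.ReadFile(localPath)
    if err != nil {
        return err
    }
    _, err = client.R().
        SetBody(data).
        SetHeader("Content-Type", contentType).
        Put(path.Join("/artifactory", remotePath))
    return err
}

func main() {
    // Placeholder base URL and credentials.
    client := resty.New().
        SetBaseURL("http://localhost:8081").
        SetBasicAuth("admin", "password")

    if err := uploadArtifact(client, "./samples/crash.zip", "example-repo-local/crash.zip", "application/zip"); err != nil {
        log.Fatal(err)
    }
}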

Get Json body and Response Status from Bash Script POST

Currently I am using:
#!/bin/bash
PROCESS=$(curl --location --request POST 'https://jsonplaceholder.typicode.com/posts' \
--header 'Content-Type: application/json' \
--data-raw '{"title": "foo","body": "bar","userId": "1"}')
echo "$PROCESS"
And getting:
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 111 100 67 100 44 208 137 --:--:-- --:--:-- --:--:-- 344
{
"title": "foo",
"body": "bar",
"userId": "1",
"id": 101
}
But I also want the response status, e.g. 201, or the full headers like this:
HTTP/2 200
date: Mon, 30 Nov 2020 14:00:56 GMT
content-type: application/json; charset=utf-8
set-cookie: __cfduid=dfda1e85d5738eb18115dc0a07311a4dd1606744856; expires=Wed, 30-Dec-20 14:00:56 GMT; path=/; domain=.typicode.com; HttpOnly; SameSite=Lax
x-powered-by: Express
x-ratelimit-limit: 1000
x-ratelimit-remaining: 999
x-ratelimit-reset: 1606702897
vary: Origin, Accept-Encoding
access-control-allow-credentials: true
cache-control: max-age=43200
pragma: no-cache
expires: -1
x-content-type-options: nosniff
etag: W/"6b80-Ybsq/K6GwwqrYkAsFxqDXGC7DoM"
via: 1.1 vegur
cf-cache-status: HIT
age: 13185
cf-request-id: 06bb0df15c0000edfbfb9b8000000001
expect-ct: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
report-to: {"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report?s=ABBCY6aKAHfezboFKgcq%2FlsWKQZDAORup49fKMArhm%2BYl3Kb99pMLrZpLtbXsfz%2BQ6RxnutmzE0mCX5AcIVGRjmq%2FIrIja5MeNFFnmpO7WBT1725PWdN1J0KFhcqNxvNP8He2TBjfd3N"}],"group":"cf-nel","max_age":604800}
nel: {"report_to":"cf-nel","max_age":604800}
server: cloudflare
cf-ray: 5fa518fbcbdfedfb-CDG
I want to do the POST and then echo out the body and response code in a nice way.
The response code is sent in the HTTP headers.
You may redirect the headers to STDERR, e.g. as described here: Report HTTP Response Headers to stderr?
So you may do this:
out=$(curl -s -D /dev/stderr http://boardreader.com 2>/tmp/headers)
# parse /tmp/headers
If you don't want to mess with a temp file, you may try more complex solutions like
Capture stdout and stderr into different variables
You can only issue either a POST or a headers-only request in one call, so you will need to make two separate calls and read both into the same variable:
PROCESS=$(curl -I 'https://jsonplaceholder.typicode.com/posts' && curl -X POST 'https://jsonplaceholder.typicode.com/posts' --header 'Content-Type: application/json' --data '{"title": "foo","body": "bar","userId": "1"}')
To me it makes sense to check the headers first and, if that command is successful, get the JSON response, with both being read into the PROCESS variable. You can of course change the order if you wish.

Ruby http, net/http, httpclient: can't parse www.victoriassecret.com

I am using the httpclient gem. It works fine on Windows, but I just moved to AWS EC2, tried it on https://victoriassecret.com, and it gets this response:
= Response
HTTP/1.1 920 Unknown
Content-Type: text/html
Date: Wed, 21 Oct 2015 21:42:51 GMT
Connection: Keep-Alive
Content-Length: 23
<h1>File not found</h1>#<HTTP::Message:0x000000023f5168
@http_body=
#<HTTP::Message::Body:0x000000023f50a0
@body="<h1>File not found</h1>",
@chunk_size=nil,
@positions=nil,
@size=0>,
@http_header=
#<HTTP::Message::Headers:0x000000023f5140
@body_charset=nil,
@body_date=nil,
@body_encoding=#<Encoding:ASCII-8BIT>,
@body_size=0,
@body_type=nil,
@chunked=false,
@dumped=false,
@header_item=
[["Content-Type", "text/html"],
["Date", "Wed, 21 Oct 2015 21:42:51 GMT"],
["Connection", "Keep-Alive"],
["Content-Length", "23"]],
@http_version="1.1",
@is_request=false,
@reason_phrase="Unknown",
@request_absolute_uri=nil,
@request_method="GET",
@request_query=nil,
@request_uri=
#<URI::HTTPS:0x000000023f58c0 URL:https://www.victoriassecret.com/pink/new-and-now>,
@status_code=920>,
@peer_cert=
#<OpenSSL::X509::Certificate: subject=#<OpenSSL::X509::Name:0x000000024ebe00>, issuer=#<OpenSSL::X509::Name:0x000000024ebec8>, serial=#<OpenSSL::BN:0x000000024de110>, not_before=2015-05-27 00:00:00 UTC, not_after=2017-05-26 23:59:59 UTC>,
@previous=nil>
It fails only with this website; httpclient get https://google.com, for example, works fine. On Windows I get a normal response from httpclient get https://www.victoriassecret.com, but when using the standard Net::HTTP library I get the same 920 response on Windows too.
This isn't EC2-related. It's most likely related to the User-Agent header sent by the various HTTP library implementations.
For example, they clearly don't like wget:
curl -A "Wget/1.13.4 (linux-gnu)" -v https://www.victoriassecret.com
* Rebuilt URL to: https://www.victoriassecret.com/
* Trying 98.158.54.100...
* Connected to www.victoriassecret.com (98.158.54.100) port 443 (#0)
* TLS 1.2 # truncated
> GET / HTTP/1.1
> Host: www.victoriassecret.com
> User-Agent: Wget/1.13.4 (linux-gnu)
> Accept: */*
>
< HTTP/1.1 910 Unknown
< Content-Type: text/html
< Date: Thu, 22 Oct 2015 01:16:31 GMT
< Connection: Keep-Alive
< Content-Length: 23
<
* Connection #0 to host www.victoriassecret.com left intact
<h1>File not found</h1>%

GZIP encoding in Jersey 2 / Grizzly

I can't activate gzip-encoding in my Jersey service. This is what I've tried:
Started out with the jersey-quickstart-grizzly2 archetype from the Getting Started Guide.
Added rc.register(org.glassfish.grizzly.http.GZipContentEncoding.class);
(have also tried rc.register(org.glassfish.jersey.message.GZipEncoder.class);)
Started with mvn exec:java
Tested with curl --compressed -v -o - http://localhost:8080/myapp/myresource
The result is the following:
> GET /myapp/myresource HTTP/1.1
> User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 zlib/1.2.3.4 ...
> Host: localhost:8080
> Accept: */*
> Accept-Encoding: deflate, gzip
>
< HTTP/1.1 200 OK
< Content-Type: text/plain
< Date: Sun, 03 Nov 2013 08:07:10 GMT
< Content-Length: 7
<
* Connection #0 to host localhost left intact
* Closing connection #0
Got it!
That is, despite Accept-Encoding: deflate, gzip in the request, there is no Content-Encoding: gzip in the response.
What am I missing here??
You have to register the org.glassfish.jersey.server.filter.EncodingFilter as well. This example enables deflate and gzip compression:
import org.glassfish.jersey.message.DeflateEncoder;
import org.glassfish.jersey.message.GZipEncoder;
import org.glassfish.jersey.server.ResourceConfig;
import org.glassfish.jersey.server.filter.EncodingFilter;
...
private void enableCompression(ResourceConfig rc) {
    rc.registerClasses(
        EncodingFilter.class,
        GZipEncoder.class,
        DeflateEncoder.class);
}
This solution is Jersey-specific and works not only with Grizzly but with the JDK HTTP server as well.
You can also enable compression on the Grizzly listener itself. Try code like this:
HttpServer httpServer = GrizzlyHttpServerFactory.createHttpServer(
        BASE_URI, rc, false);
CompressionConfig compressionConfig =
        httpServer.getListener("grizzly").getCompressionConfig();
compressionConfig.setCompressionMode(CompressionConfig.CompressionMode.ON); // the mode
compressionConfig.setCompressionMinSize(1); // the minimum number of bytes to compress
compressionConfig.setCompressableMimeTypes("text/plain", "text/html"); // the MIME types to compress
httpServer.start();

Apache2 is changing my content type for a Ruby cgi script

I have a Ruby CGI script which writes its output like this:
cgi.out("Cache-Control" => "no-cache, must-revalidate",
        "type" => "text/html",
        "charset" => "UTF-8") {
  template.result(binding)
}
Unfortunately, when I view the headers from cURL, I see the following:
< HTTP/1.1 200 OK
< Date: Sun, 23 Aug 2009 09:48:03 GMT
< Server: Apache/2.2.11 (Ubuntu) DAV/2 SVN/1.5.4 PHP/5.2.6-3ubuntu4.1 with Suhosin-Patch mod_ssl/2.2.11 OpenSSL/0.9.8g
< 5541-Content-Type: text/html; charset=UTF-8
< Cache-Control: no-cache, must-revalidate
< Content-Length: 2495
< Cache-Control: max-age=86400
< Expires: Mon, 24 Aug 2009 09:48:03 GMT
< Content-Type: application/x-ruby
It's renaming my Content-Type and adding a second Cache-Control header. Clearly I have something misconfigured.
Turns out I had a debugging 'print' statement which was executing before the cgi.out() line. This caused a bit of text to be prefixed to the headers.
