I wrote a bash script that fetches output from a website using curl and does a bunch of string manipulation on the HTML output. The problem is when I run it against a site that returns its output gzipped. Going to the site in a browser works fine.
When I run curl by hand, I get gzipped output:
$ curl "http://example.com"
Here's the header from that particular site:
HTTP/1.1 200 OK
Server: nginx
Content-Type: text/html; charset=utf-8
X-Powered-By: PHP/5.2.17
Last-Modified: Sat, 03 Dec 2011 00:07:57 GMT
ETag: "6c38e1154f32dbd9ba211db8ad189b27"
Expires: Sun, 19 Nov 1978 05:00:00 GMT
Cache-Control: must-revalidate
Content-Encoding: gzip
Content-Length: 7796
Date: Sat, 03 Dec 2011 00:46:22 GMT
X-Varnish: 1509870407 1509810501
Age: 504
Via: 1.1 varnish
Connection: keep-alive
X-Cache-Svr: p2137050.pubip.peer1.net
X-Cache: HIT
X-Cache-Hits: 425
I know the returned data is gzipped, because this returns HTML, as expected:
$ curl "http://example.com" | gunzip
I don't want to pipe the output through gunzip, because the script works as-is on other sites, and piping through gunzip would break it on the sites that don't compress their responses.
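In principle I could sniff the gzip magic bytes (1f 8b) at the start of the response and decompress only when they are present. A rough sketch (the URL and temp paths are placeholders for whatever the script really uses):

#!/bin/bash
# sketch: decompress only when the body starts with the gzip magic bytes 1f 8b
curl -s "http://example.com" -o /tmp/page.raw
if [ "$(head -c 2 /tmp/page.raw | od -An -tx1 | tr -d ' \n')" = "1f8b" ]; then
    gunzip -c /tmp/page.raw > /tmp/page.html   # body was gzipped
else
    cp /tmp/page.raw /tmp/page.html            # body was plain
fi

But that feels like a hack, so I kept looking for a proper curl option.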
What I've tried
changing the user-agent (I tried the same string my browser sends, "Mozilla/4.0", etc)
man curl
google search
searching stackoverflow
Everything came up empty
Any ideas?
curl will automatically decompress the response if you set the --compressed flag:
curl --compressed "http://example.com"
--compressed
(HTTP) Request a compressed response using one of the algorithms libcurl supports, and save the uncompressed document. If this option is used and the server sends an unsupported encoding, curl will report an error.
gzip is most likely supported, but you can check this by running curl -V and looking for libz somewhere in the "Features" line:
$ curl -V
...
Protocols: ...
Features: GSS-Negotiate IDN IPv6 Largefile NTLM SSL libz
Note that it's really the website in question that is at fault here. If curl did not pass an Accept-Encoding: gzip request header, the server should not have sent a compressed response.
In the relevant bug report, Raw compressed output when not using --compressed but server returns gzip data #2836, one of the developers says:
The server shouldn't send content-encoding: gzip without the client having signaled that it is acceptable.
Besides, when you don't use --compressed with curl, you tell the command line tool you'd rather store the exact stream (compressed or not). I don't see a curl bug here...
So if the server could be sending gzipped content, use --compressed to let curl decompress it automatically.
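If you want to see what --compressed actually changes on the wire, watch the request headers with -v; with a libz-enabled build the added header looks something like this (the exact algorithm list depends on how your curl was built):

$ curl -sv --compressed -o /dev/null "http://example.com" 2>&1 | grep -i '> accept-encoding'
> Accept-Encoding: deflate, gzip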
Related
I think this is an Apache bug. I found this:
https://grokbase.com/t/apache/bugs/108ht7t086/do-not-reply-bug-49767-new-no-last-chunk-0-after-post-request-on-cgi-script
https://bz.apache.org/bugzilla/show_bug.cgi?id=49766
https://bz.apache.org/bugzilla/show_bug.cgi?id=49767
I sent an HTTP POST request to this CGI bash script:
#!/bin/bash
echo -e -n "Content-type: text/plain\n\n"
echo hello world!
What could be more innocent? I was shocked to see that Apache answers with a chunked response! And that's not all: the response is even faulty! The last chunk of zero length is missing; there is only one chunk.
The returned headers are these:
HTTP/1.1 200 OK
Date: Sun, 13 Oct 2019 12:19:23 GMT
Server: Apache
Upgrade: h2,h2c
Connection: Upgrade, Keep-Alive
Keep-Alive: timeout=5, max=100
Transfer-Encoding: chunked
Content-Type: text/plain
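The truncation is visible on the wire if you keep curl from decoding the transfer encoding with --raw (the script URL here is made up). A well-formed chunked body ends with a zero-length chunk, i.e. a 0 on its own line followed by an empty line; in this case the stream simply stops after the first chunk:

$ curl --raw -s "http://localhost/cgi-bin/hello.sh"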
Has this bug still not been corrected? According to the bug reports above, the "solution" (workaround) is to read all the posted input before outputting anything. When I did this:
#!/bin/bash
echo -e -n "Content-type: text/plain\n\n"
input=$(cat)
echo hello world!
the final chunk of zero length is present ;). Is this somehow intended behaviour, or is it considered a bug? Why hasn't it been fixed? Do I have to read ALL of the input to avoid the error? Does anyone here have more info about this issue?
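Until this is fixed server-side, the workaround from the bug reports above can be applied even when the body is not needed: drain stdin before writing anything. A minimal sketch based on the script above:

#!/bin/bash
# drain the request body first, even though this script never uses it;
# this sidesteps the missing zero-length chunk in Apache's response
cat > /dev/null
echo -e -n "Content-type: text/plain\n\n"
echo hello world!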
I am trying to get a batch script that will do the following tasks:
Telnet to any website on port 80 (telnet google.com 80)
Send an HTTP GET request (GET / HTTP/1.1 with Host: google.com)
Redirect the output to a txt file.
I want the output in .txt files as shown below.
Filename: HTTP.txt
HTTP/1.1 302 Found
Cache-Control: private
Content-Type: text/html; charset=UTF-8
Location: http://www.google.co.in/?gfe_rd=cr&ei=ClUXV8WSEujI8Aep2L7oAQ
Content-Length: 261
Date: Wed, 20 Apr 2016 10:08:10 GMT
I want a batch file that automatically enters the HTTP GET request and sets the HTTP Host header as well.
For HTTP requests you can try winhttpjs.bat:
call winhttpjs.bat "http://google.com" -method GET -reportfile reportfile.txt
Though the format of the report is not exactly what you want, you can rework it to fit your needs.
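If curl happens to be available on the machine, the exact header dump the question asks for is a one-liner: -D (--dump-header) writes the response headers to a file, and -o NUL discards the body on Windows:

curl -s -D HTTP.txt -o NUL "http://google.com/"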
I've got the following link, which is downloading a CSV file when put through a web browser.
http://pro.allocine.fr/film/export_classement.html?typeaffichage=2&lsttype=1001&lsttypeperiode=3002&typedonnees=visites&cfilm=&datefiltre=
However, when using wget under Cygwin with the command below, wget retrieves a file without an extension that is not a CSV file. The file is empty, that is, it has no data at all.
wget 'http://pro.allocine.fr/film/export_classement.html?typeaffichage=2&lsttype=1001&lsttypeperiode=3002&typedonnees=visites&cfilm=&datefiltre='
Since I hate being stuck, I tried the following as well: I put the URL in a text file and used wget with the -i option.
inside fic.txt
'http://pro.allocine.fr/film/export_classement.html?typeaffichage=2&lsttype=1001&lsttypeperiode=3002&typedonnees=visites&cfilm=&datefiltre='
I used Wget in the following way:
wget -i fic.txt
I got the following errors:
Scheme missing
No URLs found in fic.txt
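For what it's worth, the Scheme missing error comes from the quoting: quotes are shell syntax, so inside fic.txt they become part of the URL and wget sees a line that starts with ' rather than http://. With -i the file should contain the bare URL:

http://pro.allocine.fr/film/export_classement.html?typeaffichage=2&lsttype=1001&lsttypeperiode=3002&typedonnees=visites&cfilm=&datefiltre=

wget -i fic.txt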
I can suggest some other options that will make your underlying problem clearer: the response is supposed to be HTML, but there is no content (Content-Length: 0).
More concretely, this
wget -S -O export_classement.html 'http://pro.allocine.fr/film/export_classement.html?typeaffichage=2&lsttype=1001&lsttypeperiode=3002&typedonnees=visites&cfilm=&datefiltre='
produces this
Resolving pro.allocine.fr... 62.39.143.50
Connecting to pro.allocine.fr|62.39.143.50|:80... connected.
HTTP request sent, awaiting response...
HTTP/1.1 200 OK
Server: nginx
Date: Fri, 28 Mar 2014 09:54:44 GMT
Content-Type: text/html; Charset=iso-8859-1
Connection: close
X-ServerName: WEBNX2
akamainocache: no-store
Content-Length: 0
Cache-control: private
X-KompressorName: kompressor7
Length: 0 [text/html]
2014-03-28 05:54:52 (0.00 B/s) - ‘export_classement.html’ saved [0/0]
Additionally, the server is tailoring its output based on how the browser identifies itself. wget has an option to send an arbitrary user agent in the request headers. Here's an example of what happens when you make wget identify itself as Chrome; any other browser's user-agent string can be substituted the same way.
wget -S --user-agent="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/33.0.1750.152 Safari/537.36" 'http://pro.allocine.fr/film/export_classement.html?typeaffichage=2&lsttype=1001&lsttypeperiode=3002&typedonnees=visites&cfilm=&datefiltre='
Now the output changes to export.csv, with type "application/octet-stream" instead of "text/html":
HTTP request sent, awaiting response...
HTTP/1.1 200 OK
Server: nginx
Date: Fri, 28 Mar 2014 10:34:09 GMT
Content-Type: application/octet-stream; Charset=iso-8859-1
Transfer-Encoding: chunked
Connection: close
X-ServerName: WEBNX2
Edge-Control: no-store
Last-Modified: Fri, 28 Mar 2014 10:34:17 GMT
Content-Disposition: attachment; filename=export.csv
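Since the server now sends Content-Disposition: attachment; filename=export.csv, adding wget's --content-disposition flag should save the download under that name instead of the long query-string filename:

wget -S --user-agent="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/33.0.1750.152 Safari/537.36" --content-disposition 'http://pro.allocine.fr/film/export_classement.html?typeaffichage=2&lsttype=1001&lsttypeperiode=3002&typedonnees=visites&cfilm=&datefiltre='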
I can download a file by URL in a browser, but when I try it from bash I get an HTML page instead of the file.
How to download file with url redirection (301 Moved Permanently) using curl, wget or something else?
UPD
Headers from the URL request:
curl -I http://www.somesite.com/data/file/file.rar
HTTP/1.1 301 Moved Permanently
Date: Sat, 07 Dec 2013 10:15:28 GMT
Server: Apache/2.2.22 (Ubuntu)
X-Powered-By: PHP/5.3.10-1ubuntu3
Location: http://www.somesite.com/files/html/archive.html
Vary: Accept-Encoding
Content-Type: text/html
X-Pad: avoid browser bug
Use -L, --location to follow redirects:
$ curl -L http://httpbin.org/redirect/1
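To save the file under its remote name instead of writing it to stdout, combine -L with -O (uppercase):

$ curl -L -O "http://www.somesite.com/data/file/file.rar"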
I've just been tinkering with Sinatra, trying to get a bit of a RESTful web service going.
The error I'm getting at the moment is very specific though.
Take this example post method
post '/postMan/:someParam' do
  # Edited here. This code can be anything; 411 is still the response
puts params[:someParam]
end
Seems simple enough. Take a param, make an object out of it, then store it in whatever way the object's save method defines.
Here's what I use to post the data with curl:
$ curl -I -X POST http://127.0.0.1/postman/123456
The only problem is, I'm getting 411 back and have no idea why.
To the best of my knowledge, 411 is Length Required. Here is the trace:
HTTP/1.1 411 Length Required
Content-Type: text/html; charset=ISO-8859-1
Server: WEBrick/1.3.1 (Ruby/1.9.2/2011-07-09)
Date: Fri, 02 Mar 2012 22:27:09 GMT
Content-Length: 303
Connection: close
I cannot change the curl request in any way. So does anyone have a way to make Sinatra ignore the content length, or some fix that doesn't involve changing the curl request?
For the record, it doesn't matter whether I use the parameters in the POST method or not. I could have some crazy code inside it; it would still throw the same error.
As others said above, WEBrick wrongly requires POST requests to have a Content-Length header. I just pass an empty body, because it's less typing than passing in the header:
curl -X POST -d '' http://webrickwhyyounotakeemptyposts.com/
Are you sure you're on port 80 for your app?
When I run:
ruby -r sinatra -e "post('/postMan/:someParam'){puts params[:someParam]}"
and curl it:
curl -I -X POST http://127.0.0.1:4567/postMan/123456
HTTP/1.1 200 OK
X-Frame-Options: sameorigin
X-XSS-Protection: 1; mode=block
Content-Type: text/html;charset=utf-8
Content-Length: 0
Connection: keep-alive
Server: thin 1.3.1 codename Triple Espresso
it's OK. I had to change the URL to postMan, though; your example threw a 404 because you had postman.
The output was also as expected:
== Sinatra/1.3.2 has taken the stage on 4567 for development with backup from Thin
>> Thin web server (v1.3.1 codename Triple Espresso)
>> Maximum connections set to 1024
>> Listening on 0.0.0.0:4567, CTRL+C to stop
123456
Ah, try it without -I. It's probably sending a HEAD request and, as such, not sending what you expect. Use -v if you want to see the headers.
curl -v -X POST http://127.0.0.1/postman/123456
curl -v -X POST -d "key=val" http://127.0.0.1/postman/123456
WEBrick erroneously requires POST requests to include the Content-Length header.
curl -H 'Content-Length: 0' -X POST http://example.com
Per the standard, however, POST requests don't require a body and therefore don't require a Content-Length header.
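As a quick sanity check (assuming the one-liner app from above is still running on Sinatra's default port 4567), the status line should now come back 200 instead of 411:

$ curl -v -H 'Content-Length: 0' -X POST http://127.0.0.1:4567/postMan/123456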