How can I output the full content of a specific web page with PowerShell wget? - windows

When I issue the command powershell.exe wget http://IP_ADDR:8080/config/version/ I get the response:
StatusCode : 200
StatusDescription : OK
Content : {"Content of the web page does not fully show through this output… How can I use wget to just show the full content of the webpage without cutting it out?
RawContent : HTTP/1.1 200 OK
Content-Length: 307
Content-Type: application/json
Date: Wed, 26 Apr 2017 04:20:04 GMT
Server: CherryPy/3.2.2
{"Content of the web page does not fully show...
Forms : {}
Headers : {[Content-Length, 307], [Content-Type, application/json], [Date, Wed, 26 Apr 2017 04:20:04 GMT],
[Server, CherryPy/3.2.2]}
Images : {}
InputFields : {}
Links : {}
ParsedHtml : mshtml.HTMLDocumentClass
RawContentLength : 307
As above, the command shows only part of the content, and not the full output of what is returned when I go to the address normally.
My other alternative is to just use curl, but I would like a native solution rather than a third-party tool.
My question is: how can I use the wget command to show only the content, and show all of it?

Per the comments, you need to access either the .Content or .RawContent property (RawContent contains the HTTP header fields, whereas Content does not; note that the headers are also available in the .Headers property):
powershell.exe (wget http://IP_ADDR:8080/config/version/).Content
or
powershell.exe (wget http://IP_ADDR:8080/config/version/).RawContent
To explain what is occurring: PowerShell returns objects rather than plain text, which, put simply, means they are like mini databases with properties that can be returned, filtered, and so on. What you see when you make your call is the default view, a subset of the object's properties, not all of the properties available.
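If you want to see everything the object carries (a quick aside, not part of the original answer), you can list all of its properties from within a PowerShell session; the URL below is the same placeholder used above:
# show every property and its value
wget http://IP_ADDR:8080/config/version/ | Format-List *
# or list the members (properties and methods) the object exposes
wget http://IP_ADDR:8080/config/version/ | Get-Member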
To learn more about wget, look up Invoke-WebRequest, which is the full cmdlet name (wget is an alias of it).
As a further aside, if your web call is returning JSON or XML you might want to consider using Invoke-RestMethod instead, as that will take the JSON or XML and convert it automatically into a PowerShell object (which you could then further manipulate within PowerShell).
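As a rough sketch against the same placeholder endpoint, run from within a PowerShell session (the version property below is hypothetical; use whatever keys your JSON actually contains):
# prints the JSON converted to a PowerShell object
Invoke-RestMethod http://IP_ADDR:8080/config/version/
# access an individual field of that object ("version" is a made-up field name)
(Invoke-RestMethod http://IP_ADDR:8080/config/version/).version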

Related

How to get custom header in bash

I'm adding a custom header in Asp.Net app:
context.Response.Headers.Add("X-Date", DateTime.Now.ToString());
context.Response.Redirect(redirectUrl, false);
When I'm using Fiddler I can see the "X-Date" header in the response.
I need to receive it by using bash.
I tried curl -i https://my.site.com and also wget -O - -o /dev/null --save-headers https://my.site.com with no success.
In both cases I see just the regular headers like: Content-Type, Server, Date, etc...
How can I receive the "X-Date" header?
Thanks,
Lev
Protocol headers are different from file headers (just as an HTTP header and a TCP header are different). When you create a protocol header, you will need a server to resolve it and use the associated environment variables. Example ...
#!/bin/bash
# Apache - CGI
echo "text/plain"
echo ""
echo "$CONTENT_TYPE"
echo "$HTTP_ACCEPT"
echo "$SERVER_PROTOCOL"
When calling this script via the web, the response in my browser was...
text/html
text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
HTTP/1.1
What you are looking for are environment variables called $HTTP_ACCEPT, $CONTENT_TYPE, and maybe $SERVER_PROTOCOL too.
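As an aside, if the custom header is actually present in the raw response, a rough way to pull it out in plain bash is to dump only the headers and filter (a sketch, not a guaranteed fix; it checks only the first response, without following redirects):
# -D - writes the response headers to stdout, -o /dev/null discards the body
curl -s -D - -o /dev/null https://my.site.com | grep -i '^x-date:'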

Executable downloaded from Cloudfront in IE11 (Windows 7) downloads without a file extension

The long and short of it is that an .exe downloaded from Cloudfront (using signed URLs) in IE11/Win7 downloads without an extension (exe_file.exe -> exe_file).
I don't think that it's the same issue as described here (among many, many other places) as the file is not renamed exe_file_exe, the extension is just dropped.
The file is being served from Cloudfront from S3 - and was uploaded via aws-cli
$ aws s3 cp exe_file.exe s3://cdn/exe_file.exe --content-type "application/x-msdownload"
As far as I'm aware, the content-type argument isn't absolutely necessary, as CF/S3/something, at some point, tries to do some intelligent MIME assigning (plus, before, when I was uploading without that arg, inspecting the download headers would show the correct MIME type).
Headers received when downloading the file
HTTP/1.1 200 OK
Content-Type: application/x-msdownload
Content-Length: 69538768
Connection: keep-alive
Date: Tue, 27 Dec 2016 17:36:51 GMT
Last-Modified: Thu, 22 Dec 2016 22:31:59 GMT
ETag: "c8fc68a920e198dca95e5549f8657bfb"
Accept-Ranges: bytes
Server: AmazonS3
Age: 335
X-Cache: Hit from cloudfront
This only happens in IE11 on Windows 7 - it works fine on IE11/Windows 10 (I say only but I have not tried on, for example, IE8 - you couldn't pay me enough money to put myself through that). And it does not happen with other downloads - dmg_file.dmg and linux_file.zip are both downloaded with the extension. Other browsers are also not impacted - they all download the file as-is in S3.
I have tried with and without AVs present - it does not make a difference.
You need to set the content-disposition correctly:
Forcing SaveAs using the HTTP header
In order to force the browser to show SaveAs dialog when clicking a hyperlink you have to include the following header in HTTP response of the file to be downloaded:
Content-Disposition: attachment; filename="<file name.ext>"; filename*=utf-8''<file name.ext>
Note: Those user agents that do not support the RFC 5987 encoding ignore filename* when it occurs after filename.
Where <file name.ext> is the filename you want to appear in the SaveAs dialog (like finances.xls or mortgage.pdf) - without the < and > symbols.
You have to keep the following in mind:
The filename should be in US-ASCII charset and shouldn't contain special characters: < > \ " / : | ? * space.
The filename should not have any directory path information specified.
The filename should be enclosed in double quotes but most browsers will support file names without double quotes.
Ancient browsers also required the following (not needed nowadays, but for a foolproof solution it might be worth doing):
Content-Type header should be before Content-Disposition.
Content-Type header should refer to an unknown MIME type (at least until the older browsers go away).
So, you should use cp with options:
--content-type (string) Specify an explicit content type for this operation. This value overrides any guessed mime types.
--content-disposition (string) Specifies presentational information for the object.
--metadata-directive REPLACE Specifies whether the metadata is copied from the source object or replaced with metadata provided when copying S3 objects.
Note that if you are using any of the following parameters: --content-type, --content-language, --content-encoding, --content-disposition, --cache-control, or --expires, you will need to specify --metadata-directive REPLACE for non-multipart copies if you want the copied objects to have the specified metadata values.
try:
aws s3 cp exe_file.exe s3://cdn/exe_file.exe --content-type "application/x-msdownload" --content-disposition "attachment; filename=\"exe_file.exe\"; filename*=utf-8''exe_file.exe" --metadata-directive REPLACE
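After re-uploading, you can sanity-check that the new metadata actually landed on the object (the bucket and key names here simply follow the example above):
# prints the stored ContentType, ContentDisposition and other metadata for the object
aws s3api head-object --bucket cdn --key exe_file.exe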
In addition to the accepted answer, I supplied my own response-content-disposition parameter to the Cloudfront Signer:
In Python, it looked like this:
import urllib

from botocore.signers import CloudFrontSigner

def generate_presigned_url(filename, headers={}):
    # CF_KEY_ID, rsa_signer and CF_DOMAIN are configured elsewhere
    cf_signer = CloudFrontSigner(CF_KEY_ID, rsa_signer)
    headers = '&'.join(["%s=%s" % (key, urllib.quote_plus(value)) for key, value in headers.iteritems()])
    return cf_signer.generate_presigned_url(
        'https://' + CF_DOMAIN + '/' + filename + ("" if len(headers) == 0 else "?%s" % (headers)),
        # ... other params
    )
called using
cloudfront.generate_presigned_url(file_name, {
    'response-content-disposition': 'attachment; filename="exe_file.exe"; filename*=utf-8\'\'exe_file.exe'
})

JMeter Not Sending File with HTTP Request

I'm new to JMeter and trying to PUT a file to our API using an HTTP Request. When I PUT the file via curl using the -F flag, it works no problem.
Here's my curl request:
curl -X PUT -u uname:pword https://fakehostname.com/psr-1/controllers/vertx/upload/file/big/ADJTIME3 -F "upload1=@ADJTIME" -vis
and here's the relevant part of the response from the server:
> User-Agent: curl/7.37.1 Host: myfakehost.com Accept: */*
> Content-Length: 4190 Expect: 100-continue Content-Type:
> multipart/form-data; boundary=------------------------d76566a6ebb651d3
When I do the same PUT via JMeter, the Content-Length is 0, which makes me think that JMeter isn't reading the file for some reason. I know the path is correct because I browsed to the file from JMeter. Little help?
In File Upload, make your file path RELATIVE to the .jmx file, or place the file next to the .jmx and specify the file name only.
Thanks to everyone who offered solutions and suggestions. It turns out that the API I was trying to load test was the issue. I can PUT a file via curl no problem, but there's something about the JMeter PUT that the API does not like. I finally tried doing a PUT to an unrelated API and was successful.

Ruby JSON.parse error

I am building a script that uses a cURL command against an API. I send the cURL command formatted as an application/json request and get the result, which I parse into a Ruby hash.
This works great when I use cURL POST commands, getting the correctly formatted JSON responses. However, when using cURL GET commands I am returned a JSON document that has headers:
puts r:
HTTP/1.1 200 OK
X-Compute-Request-Id: req-7e625990-068b-47d1-8c42-9d3dd3b27050
Content-Type: application/json
Content-Length: 1209
Date: Wed, 16 Jan 2013 20:47:41 GMT
{ <JSON DATA> }
When I try to do a JSON.parse(r), I get an unexpected token error at 'HTTP/1.1'.
My method for this:
def list_flavors
  r = %x(curl -s -k -D - -H \"X-Auth-Token: #{$token}\" -X 'GET' http://10.30.1.49:8774/v2/27e60c130c7748f48b0e3e9175702c30/flavors -H 'Content-type: application/json')
  response = JSON.parse(r)
  response
end
Is there a way to use regular expressions to pull the body out of the JSON doc and then parse?
Or am I going about this the wrong way when getting the response from cURL?
You'll need to find a way to cut out that header before passing the string into JSON.parse. JSON.parse expects valid JSON only.
Rather than curling and using the output wholesale as a string, I'd suggest you use the very versatile Ruby Net::HTTP and/or OpenURI libraries, which will allow you to easily access just your response's body without the header.
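If you do stay with curl for now, one crude way to do the cutting from the shell side is to strip everything up to the first blank line before the string ever reaches Ruby. This is only a sketch; it assumes a single header block (no redirects or 100 Continue responses) and that the token is in a shell variable:
# delete lines 1 through the first blank (or CR-only) line, leaving just the JSON body
curl -s -k -D - -H "X-Auth-Token: $token" http://10.30.1.49:8774/v2/27e60c130c7748f48b0e3e9175702c30/flavors | sed '1,/^\r\{0,1\}$/d'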

How to properly handle a gzipped page when using curl?

I wrote a bash script that gets output from a website using curl and does a bunch of string manipulation on the html output. The problem is when I run it against a site that is returning its output gzipped. Going to the site in a browser works fine.
When I run curl by hand, I get gzipped output:
$ curl "http://example.com"
Here's the header from that particular site:
HTTP/1.1 200 OK
Server: nginx
Content-Type: text/html; charset=utf-8
X-Powered-By: PHP/5.2.17
Last-Modified: Sat, 03 Dec 2011 00:07:57 GMT
ETag: "6c38e1154f32dbd9ba211db8ad189b27"
Expires: Sun, 19 Nov 1978 05:00:00 GMT
Cache-Control: must-revalidate
Content-Encoding: gzip
Content-Length: 7796
Date: Sat, 03 Dec 2011 00:46:22 GMT
X-Varnish: 1509870407 1509810501
Age: 504
Via: 1.1 varnish
Connection: keep-alive
X-Cache-Svr: p2137050.pubip.peer1.net
X-Cache: HIT
X-Cache-Hits: 425
I know the returned data is gzipped, because this returns html, as expected:
$ curl "http://example.com" | gunzip
I don't want to pipe the output through gunzip, because the script works as-is on other sites, and piping through gunzip would break that functionality.
What I've tried
changing the user-agent (I tried the same string my browser sends, "Mozilla/4.0", etc)
man curl
google search
searching stackoverflow
Everything came up empty
Any ideas?
curl will automatically decompress the response if you set the --compressed flag:
curl --compressed "http://example.com"
--compressed
(HTTP) Request a compressed response using one of the algorithms libcurl supports, and save the uncompressed document. If this option is used and the server sends an unsupported encoding, curl will report an error.
gzip is most likely supported, but you can check this by running curl -V and looking for libz somewhere in the "Features" line:
$ curl -V
...
Protocols: ...
Features: GSS-Negotiate IDN IPv6 Largefile NTLM SSL libz
Note that it's really the website in question that is at fault here. If curl did not pass an Accept-Encoding: gzip request header, the server should not have sent a compressed response.
In the relevant bug report, Raw compressed output when not using --compressed but server returns gzip data #2836, one of the developers says:
The server shouldn't send content-encoding: gzip without the client having signaled that it is acceptable.
Besides, when you don't use --compressed with curl, you tell the command line tool you rather store the exact stream (compressed or not). I don't see a curl bug here...
So if the server could be sending gzipped content, use --compressed to let curl decompress it automatically.
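If you want to confirm what curl is actually advertising on a given run, you can watch the request headers it sends; with --compressed there should be an Accept-Encoding line (the exact value depends on how libcurl was built):
# -v prints request headers to stderr prefixed with ">", so merge stderr and filter
curl --compressed -sv -o /dev/null "http://example.com" 2>&1 | grep -i '^> accept-encoding'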
