I'm running some tests of my webapp in Firefox Quantum (60.0.2).
I fill out a form and submit it. This sends a POST request with an application/x-www-form-urlencoded message-body to my app.
When I use Tools / Web Developer / Network Tools to inspect the request, the Params tab shows the decoded values that were present in the message-body of the request.
What I want, in this context, is to load the raw urlencoded content (e.g. name=John+Doe&email=john%40example.com, not the decoded values) into my paste buffer.
Copy POST Data gives me a decoded copy of the information.
Copy as cURL gives me a curl command with all of the headers, but the --data argument is an empty string.
What's the right way to get the raw message body?
Copy All As HAR loads into the paste buffer an HTTP Archive, which is a JSON representation of the "HTTP transactions" of your session. entries[].request.postData.text seems to be what you want.
In my experiments you also get entries[].request.postData.params[], but according to the HAR 1.2 spec, the text and params fields are mutually exclusive.
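If you save the HAR to a file, pulling the raw bodies back out takes only a few lines; here is a minimal Python sketch (it assumes the archive was saved as session.har):

import json

# Minimal sketch: print the raw urlencoded body of every request in a saved HAR.
with open("session.har", encoding="utf-8") as f:
    har = json.load(f)

for entry in har["log"]["entries"]:
    post = entry["request"].get("postData") or {}
    if "text" in post:
        print(post["text"])  # e.g. name=John+Doe&email=john%40example.com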
So, good luck?
I have deployed an AWS Lambda function, written in Python, and an AWS API Gateway structure that routes POST requests to an API endpoint through to my function. I want to upload a PDF document to my function and have it store the document in an S3 bucket. The problem I have is that the payload of any POST request to my API is being UTF-8 encoded. I don't want that, but I can't figure out the magic mojo to disable encoding of the request payload.
I am testing using curl, with the following command line:
curl -XPOST https://xxxxxxxxxx.execute-api.us-west-1.amazonaws.com/test -H 'content-type: application/pdf' --data-binary @document.pdf
UPDATE: I just found the following article describing how API Gateway and Lambda support uploading binary data:
https://aws.amazon.com/blogs/compute/handling-binary-data-using-amazon-api-gateway-http-apis/
This article suggests that all of the complexities I discuss in the original formation of my question (still provided below) should not be necessary. All I should need to do to upload binary content to my Lambda function is ensure that my request includes an appropriate Content-Type header. I was already doing that, but I massaged my curl command a bit (modified above) to define my request exactly the way it is done in that article. I still get UTF-8 encoded data and NOT base64-encoded data. I tried uploading a JPEG file rather than a PDF so I was doing exactly what was done in the article. Still no love. I don't get it. The article demonstrates exactly what I'm doing, but I don't get the result it suggests I should. Ggggrrrr.
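For reference, the handler side of what I'm doing boils down to something like this (a simplified sketch; the bucket and key names are placeholders):

import base64
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # With the proxy integration, API Gateway sets isBase64Encoded=True when it
    # treated the payload as binary and base64-encoded it for Lambda.
    if event.get("isBase64Encoded"):
        data = base64.b64decode(event["body"])   # the path I want to hit
    else:
        data = event["body"].encode("utf-8")     # what I actually get: mangled text
    # Placeholder bucket/key names:
    s3.put_object(Bucket="my-upload-bucket", Key="document.pdf", Body=data)
    return {"statusCode": 200}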
ORIGINAL POST:
I am using Terraform to define my deployment. I want to cause the PDF to not be encoded/mangled at all. This is my first time using API Gateway, and I'm obviously missing some bit of config. The one thing I'm doing specifically right now to say that I want incoming payloads to be treated as binary is via the binary_media_types argument to my API definition in Terraform:
resource "aws_api_gateway_rest_api" "proxy" {
  ...
  binary_media_types = [
    "application/pdf",
    "application/octet-stream",
    "*/*",
  ]
}
This sets the Binary Media Types configuration associated with the API I've defined. I've confirmed via the AWS Console that this setting is having the desired effect...I can see these types in the console. I should need just the first item in the list, but I've added the others while I try to figure out the problem here. By adding the wildcard item, I believe it shouldn't matter what the incoming Content-Type is...all payloads should be treated as binary.
The other bit of config that I know about that might be important is the integration's contentHandling property. The key bit of AWS docs that seems to explain all this is the content-type conversion table in the API Gateway developer guide ("Content type conversions in API Gateway"). The case that applies to me is a request whose Content-Type matches a configured binary media type, with contentHandling left unspecified, which the table says is passed through to the integration as binary. That says to me that I shouldn't need to do anything else. I've still tried setting the contentHandling argument on the integration record of my Terraform config, like this:
resource "aws_api_gateway_integration" "proxy" {
  ...
  passthrough_behavior = "WHEN_NO_MATCH"
  content_handling     = "CONVERT_TO_BINARY"
}
I first tried specifying only the content_handling value. I've also tried setting that value to "CONVERT_TO_TEXT", hoping to then get base64-encoded data. Neither has any effect. I've tried adding the passthrough_behavior value as shown, and I've also tried replacing "WHEN_NO_MATCH" with "WHEN_NO_TEMPLATES". Nothing I do changes the behavior. I haven't been able to figure out where these settings show up in the AWS console; if I knew they were necessary, I'd explore this further, but I don't think I need to set these.
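Since I can't find these settings in the console, one way to confirm what actually deployed is to read the integration back through the API; a quick boto3 sketch (the IDs are placeholders):

import boto3

apigw = boto3.client("apigateway")

# Placeholder IDs: find them via get_rest_apis / get_resources,
# or in the Terraform state.
integration = apigw.get_integration(
    restApiId="xxxxxxxxxx",
    resourceId="yyyyyy",
    httpMethod="POST",
)
print(integration.get("passthroughBehavior"))  # e.g. WHEN_NO_MATCH
print(integration.get("contentHandling"))      # absent if unspecified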
What am I missing? How can I POST a PDF document to my AWS Lambda function through API Gateway and have the payload of the request not be converted in any way? TIA!
NOTE: I am aware of this Q/A: PDF Uploaded via AWS API Gateway getting corrupted. The answer there doesn't apply to me, as I need to avoid having to form-encode the upload. The client code that will eventually be doing the upload is set in stone and sends a POST request with a payload that is just the bytes of the PDF.
I use Windows Azure Blob Storage to keep files.
To download files I create URLs with a Shared Access Signature.
It works fine, but there is one problem.
Some files (blobs) had the Content-Type header set during upload, and others didn't.
If a file has no Content-Type, then on a request to Azure the response will have the header Content-Type: application/octet-stream. This is exactly what I need, because in that case a browser will show a download dialog for the user.
But for files where this header was set on upload, it is returned, and sometimes that causes a problem. For example, Content-Type: image/jpeg makes a browser display the image instead of downloading it (no download dialog).
So, my question is:
when downloading with a presigned URL from Windows Azure, is there a way to force a specific response header?
I want it to behave as if no Content-Type were saved for the file, even if one is.
So, after some time browsing I finally found the documentation about it.
Here are the references:
https://nxt.engineering/en/blog/sas_token/
https://learn.microsoft.com/en-us/rest/api/storageservices/service-sas-examples
https://learn.microsoft.com/en-us/rest/api/storageservices/create-service-sas
In my case I needed to bump the API version (I had been using the 2012 API version).
Also one useful note: it is very sensitive to the date format. The expiration time must be in a format like "2021-11-16T04:25:00Z".
I added two new query parameters,
rscd=file;%20attachment&rsct=binary&
and both of them must also be included, in their correct places, in the string-to-sign.
Yes, you can override the Content-Disposition response header in your SAS token, and the blob will always be downloaded regardless of its content type.
You can override this header to a value like attachment; filename=yourdesiredfilename and the blob will always be downloaded as yourdesiredfilename.
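With the azure-storage-blob Python SDK, this override is just a keyword argument when generating the SAS; a minimal sketch (the account, container, blob, and key values are placeholders):

from datetime import datetime, timedelta
from azure.storage.blob import BlobSasPermissions, generate_blob_sas

# Placeholders: substitute your real account, container, blob, and key.
account_name = "myaccount"
container_name = "files"
blob_name = "report.pdf"

sas = generate_blob_sas(
    account_name=account_name,
    container_name=container_name,
    blob_name=blob_name,
    account_key="<account-key>",
    permission=BlobSasPermissions(read=True),
    expiry=datetime.utcnow() + timedelta(hours=1),
    # These become the rscd/rsct query parameters and are part of the signature:
    content_disposition=f"attachment; filename={blob_name}",
    content_type="application/octet-stream",
)
url = (f"https://{account_name}.blob.core.windows.net/"
       f"{container_name}/{blob_name}?{sas}")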
I have a scenario where I want to read data from a CSV file and use it in a POST request. The email data contains the '@' symbol.
So, when I try to hit the API using JMeter, '@' is getting replaced by '%40'. I tried the solutions below but they didn't work out:
Uncheck 'URL Encode' checkbox
Used __urldecode function -> ${__urldecode(abc@xyz.com)}
Result: the request still shows abc%40xyz.com.
I don't think JMeter converts anything; it should send POST request parameters as they are, and what you see in the View Results Tree listener is just the textual representation. You can use a sniffer tool like Wireshark to see exactly what JMeter sends.
If you switch to the HTTP tab you should see that the username is sent with the '@' symbol.
If you have trouble building your HTTP requests manually, consider just recording the request(s) using either the HTTP(S) Test Script Recorder or the JMeter Chrome Extension; both should produce syntactically correct HTTP Request samplers.
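To see that %40 is just the wire representation rather than corruption, here is a quick illustration in plain Python (the address is made up):

from urllib.parse import urlencode, parse_qs

# '@' must be percent-encoded in an application/x-www-form-urlencoded body.
body = urlencode({"email": "abc@xyz.com"})
print(body)             # email=abc%40xyz.com  <- what goes on the wire
print(parse_qs(body))   # {'email': ['abc@xyz.com']} <- what the server decodes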
This is the opposite of the issue that all my searches kept coming up with answers to, where people wanted plain text, but got compressed.
I'm writing a bash script that uses curl to fetch the mailing list archive files from a Mailman mailing list (using the standard Mailman web interface on the server end).
The file (for this month) is http://lists.example.com/private.cgi/listname-domain.com/2013-September.txt.gz (sanitized URL).
When I save this with my browser I get, in fact, a gzipped text file, which when ungzipped contains what I expect.
When I fetch it with curl (after previously sending the login password and getting a cookie set, and saving that cookie file to use in the request), though, what comes out on stdout (or is saved to a -o file) is the UNCOMPRESSED text.
How can I get curl to just save the data into a file like my browser does? (Note that I am not using the --compressed flag in my curl call; this isn't a question of the server compressing data for transmission, it's a question of downloading a file that's compressed on the server disk, and I want to keep it compressed.)
(Obviously I can hack around this by re-compressing it in my bash script. That wastes CPU, though, and is a problem waiting to happen in the future. Or I can leave it uncompressed and hack the name, storing it as just September.txt; that wastes disk space instead, and would likewise break if the behavior changed in the future. The problem seems to me to be that curl is getting confused between compressed transmission and actually-compressed data.)
Is it possible the server is decompressing the file based on headers sent (or not sent) by curl? Try sending the following header with curl, for example (assuming your saved cookie file is cookies.txt):
curl --header 'Accept-Encoding: gzip,deflate' -b cookies.txt -o 2013-September.txt.gz http://lists.example.com/private.cgi/listname-domain.com/2013-September.txt.gz
You can download the *.txt.gz directly, without any decompression, with wget instead of curl:
wget http://lists.example.com/private.cgi/listname-domain.com/2013-September.txt.gz
If curl is essential, then check out the details here
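If you ever move the script off curl, the same "don't touch the bytes" idea in Python's requests looks roughly like this (an illustrative sketch; it assumes the login cookie has already been handled, e.g. via a requests.Session):

import requests

url = ("http://lists.example.com/private.cgi/"
       "listname-domain.com/2013-September.txt.gz")

# stream=True plus decode_content=False keeps the bytes exactly as sent on
# the wire, so a gzip file on the server stays gzipped on disk.
with requests.get(url, headers={"Accept-Encoding": "gzip, deflate"},
                  stream=True) as resp:
    resp.raise_for_status()
    with open("2013-September.txt.gz", "wb") as out:
        for chunk in resp.raw.stream(8192, decode_content=False):
            out.write(chunk)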
I'm doing an AJAX download that is being redirected. I'd like to know the final target URL the request was redirected to. I'm using jQuery, but also have access to the underlying XMLHttpRequest. Does anyone know a way to get the final URL?
It seems like I'll need to have the final target insert its URL into a known location in the headers or response body, then have the script look for it there. I was hoping to have something that would work regardless of the target though.
Additional note: I'm asking how my code, running in production on the user's system, can get the full URL. I'm not asking how I can see the full URL while debugging.
The easiest way to do this is to use Fiddler or Wireshark to examine the HTTP traffic. Use Fiddler at the client if your interface uses a browser, otherwise use Wireshark to capture the traffic on the wire.
One word: Firebug. It is a Firefox plugin. Never do any kind of AJAX development without it.
Activate Firebug and select Net, then perform your AJAX request. This will show the URL that is called, the entire request (header and body) and the entire response (once again, header and body). It also allows you to step through your JavaScript and debug it - breakpoints, watches, etc.
I'll second the Firebug suggestion. You'll see the URL as the "Location" header in the HTTP response.
It sounds like you also want to get this URL in JS? If so, you can get it off the XHR response object in the callback; newer browsers expose it directly as xhr.responseURL (and you can inspect the object using FB!). :)