How does Web API return a content type of html although I have only defined a JsonFormatter? - asp.net-web-api

In my Application_Start:
var jsonFormatter = new JsonMediaTypeFormatter();
config.Services.Replace(typeof(IContentNegotiator), new JsonContentNegotiator(jsonFormatter));
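For reference, the JsonContentNegotiator is not shown above; a typical minimal implementation (assumed here) always hands back the single JSON formatter, regardless of the Accept header:
public class JsonContentNegotiator : IContentNegotiator
{
    private readonly JsonMediaTypeFormatter _jsonFormatter;

    public JsonContentNegotiator(JsonMediaTypeFormatter formatter)
    {
        _jsonFormatter = formatter;
    }

    public ContentNegotiationResult Negotiate(Type type, HttpRequestMessage request, IEnumerable<MediaTypeFormatter> formatters)
    {
        // Always select the JSON formatter with an application/json media type,
        // effectively switching content negotiation off
        return new ContentNegotiationResult(_jsonFormatter, new MediaTypeHeaderValue("application/json"));
    }
}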
My default url:
[HttpGet]
[Route("~/")]
public HttpResponseMessage Index()
{
    var stream = File.OpenRead(Path.Combine(AppDomain.CurrentDomain.BaseDirectory, @"Views\Home\Index.html"));
    var content = new StreamContent(stream);
    return new HttpResponseMessage() { Content = content };
}
The content comes back as "text/html", but I have not set response.Content.Headers.ContentType anywhere, and still the HTML file is returned correctly. There is nothing like an HTML content negotiator registered, so I assumed the action would either return the HTML file as JSON or produce an error, yet everything worked fine.
Why is that?

After googling for a while I found this explanation:
http://www.w3.org/Protocols/rfc2616/rfc2616-sec7.html#sec7.2.1
Any HTTP/1.1 message containing an entity-body SHOULD include a
Content-Type header field defining the media type of that body. If and
only if the media type is not given by a Content-Type field, the
recipient MAY attempt to guess the media type via inspection of its
content and/or the name extension(s) of the URI used to identify the
resource. If the media type remains unknown, the recipient SHOULD
treat it as type "application/octet-stream".
So my "browser" is inspecting the URI of the resource which ends with .HTML and therefore its treated as content-type "text/html" :-)

Related

Is there a way to map local in Proxyman based on parameters in the body of a request rather than in the URL?

I have a url:
https://cn.company.com/appv2/search
and I want a different Map Local response depending on a parameter that arrives in the request body (i.e. it is NOT attached to the URL like https://cn.company.com/appv2/search?cursor=abc; instead it is in the body of the request, { cursor: abc }).
Any idea whether this can be done in Proxyman?
I basically want to be able to stub pagination through the proxy without waiting on a server implementation. So I'd send no cursor on the first request, the server would return a cursor, I'd use that cursor on the next request and get a different response, so that I can test out the full pagination flow.
Yes, it can be solved with the Scripting feature of the Proxyman app.
Use Scripting to get the value from the request body.
If it matches, use Scripting to mimic Map Local (the Mock API also supports this).
Here is the sample code and how to do it:
Firstly, call your request and make sure you can see the HTTPS Response
Right-Click on the request -> Tools -> Scripting
Select the Mock API checkbox if you'd like a Mock API
Use this code
/// This func is called if the Response Checkbox is Enabled. You can modify the Response Data here before it goes to the client
/// e.g. Add/Update/Remove: headers, statusCode, comment, color and body (json, plain-text, base64 encoded string)
///
async function onResponse(context, url, request, response) {
    // get the cursor value from the request body
    var cursorValue = request.body["cursor"];

    // Use an if to provide a Map Local file
    if (cursorValue === "abc") {
        // Set Content-Type to JSON
        response.headers["Content-Type"] = "application/json";
        // Set a Map Local file
        response.bodyFilePath = "~/Desktop/my_response_A.json";
    } else if (cursorValue === "def") {
        // Set Content-Type to JSON
        response.headers["Content-Type"] = "application/json";
        // Set a Map Local file
        response.bodyFilePath = "~/Desktop/my_response_B.json";
    }

    // Done
    return response;
}
Reference
Map Local with Scripting: https://docs.proxyman.io/scripting/snippet-code#map-a-local-file-to-responses-body-like-map-local-tool-proxyman-2.25.0+

Web API content negotiated formatters with accept header and url parameter

I have implemented content negotiation so that a specific serializer will be used based on the accept header:
XmlFormatter fmtXml = new XmlFormatter();
fmtXml.SupportedMediaTypes.Add(new System.Net.Http.Headers.MediaTypeHeaderValue("text/xml"));
JsonFormatter fmtJson = new JsonFormatter();
fmtJson.SupportedMediaTypes.Add(new System.Net.Http.Headers.MediaTypeHeaderValue("application/json"));
config.Formatters.Insert(0, fmtJson);
config.Formatters.Insert(0, fmtXml);
I need to allow a client to specify the desired format using a url parameter, which would take precedence over the accept header.
To do this, I've started subclassing the DefaultContentNegotiator (although I don't know that it's the best idea):
public class CustomContentNegotiator : DefaultContentNegotiator
{
    public override ContentNegotiationResult Negotiate(Type type, HttpRequestMessage request, IEnumerable<MediaTypeFormatter> formatters)
    {
        string sMimeType = HttpUtility.ParseQueryString(request.RequestUri.Query).Get("_format");
        if (!string.IsNullOrEmpty(sMimeType))
        {
            ...
        }
        else
        {
            return base.Negotiate(type, request, formatters);
        }
    }
}
Then I replace the default content negotiator with mine:
GlobalConfiguration.Configuration.Services.Replace(typeof(IContentNegotiator), new CustomContentNegotiator());
The idea with the custom content negotiator is that if a content format has been specified as a url parameter, I would locate the formatter that matches, otherwise I would just fallback to the behavior of the DefaultContentNegotiator.
I'm just not sure how to match correctly on the supported media types, or if there is a better, simpler solution to this...
I determined that using a custom content negotiator was a red herring. Instead I was able to use a MediaTypeMapping which matches against a specific url parameter instead of the accept request header:
fmtJson.MediaTypeMappings.Add(new System.Net.Http.Formatting.QueryStringMapping("_format", "json", "application/json"));
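For reference, a minimal sketch of wiring both formats to the _format query parameter with the built-in QueryStringMapping (the formatter classes and media types here are assumptions, adjust them to your own):
var fmtJson = new JsonMediaTypeFormatter();
fmtJson.MediaTypeMappings.Add(new System.Net.Http.Formatting.QueryStringMapping("_format", "json", "application/json"));

var fmtXml = new XmlMediaTypeFormatter();
fmtXml.MediaTypeMappings.Add(new System.Net.Http.Formatting.QueryStringMapping("_format", "xml", "text/xml"));

// Requests such as /api/values?_format=xml now pick the matching formatter,
// while requests without _format fall back to normal Accept-header negotiation
config.Formatters.Insert(0, fmtJson);
config.Formatters.Insert(1, fmtXml);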

FileSystemResource is returned with content type json

I have the following Spring MVC method that returns a file:
@RequestMapping(value = "/files/{fileName}", method = RequestMethod.GET)
public FileSystemResource getFiles(@PathVariable String fileName) {
    String path = "/home/marios/Desktop/";
    return new FileSystemResource(path + fileName);
}
I expect a ResourceHttpMessageConverter to create the appropriate response with an octet-stream type according to its documentation:
If JAF is not available, application/octet-stream is used.
However, although I correctly get the file without a problem, the response has Content-Type: application/json;charset=UTF-8.
Can you tell me why this happens?
(I use Spring version 4.1.4. I have not explicitly set any message converters, and I know that Spring loads by default, among others, the ResourceHttpMessageConverter and also the MappingJackson2HttpMessageConverter, because I have Jackson 2 on my classpath for other MVC methods that return JSON.
Also, if I use HttpEntity<FileSystemResource> and set the content type manually, or specify it with produces = MediaType.APPLICATION_OCTET_STREAM, it works fine.
Note also that in my request I do not specify any accepted content types, and I prefer not to rely on my clients to do that.)
I ended up debugging the whole thing, and I found that AbstractJackson2HttpMessageConverter has a canWrite implementation that returns true for the FileSystemResource, because it only checks whether the class is serializable and whether the requested media type is supported; since I do not specify any media type it is null, which in that case counts as supported.
As a result, it ends up adding the JSON content types to the list of producible media types. ResourceHttpMessageConverter.canWrite naturally returns true as well, but the ResourceHttpMessageConverter does not contribute any producible media types.
When the time comes to write the actual response, the write method of ResourceHttpMessageConverter runs first because ResourceHttpMessageConverter sits earlier in the list of available converters (if MappingJackson2HttpMessageConverter were first, it would try to write, since its canWrite returns true, and throw an exception). Because a producible content type has already been set, it never falls back to ResourceHttpMessageConverter.getDefaultContentType, which would have set the correct content type.
If I removed the JSON converter everything would work fine, but then none of my JSON methods would work. Therefore specifying the content type explicitly is the only way to get rid of the returned JSON content type.
For anyone still looking for a piece of code:
You should wrap your FileSystemResource in a ResponseEntity<>.
Then determine your image's content type and add it to the ResponseEntity as a header.
Here is an example:
#GetMapping("/image")
public #ResponseBody ResponseEntity<FileSystemResource> getImage() throws IOException {
File file = /* load your image file from anywhere */;
if (!file.exists()) {
//TODO: throw 404
}
FileSystemResource resource = new FileSystemResource(file);
HttpHeaders headers = new HttpHeaders();
headers.setContentType(/* determine your image's media type or just set it as a constant using MediaType.<value> */);
headers.setContentLength(resource.contentLength());
return new ResponseEntity<>(resource, headers, HttpStatus.OK);
}

Compression response filter fails on breeze.js Metadata call

I have an HTTP module where I'm adding the response filter below for compression. This works for all API calls except one: the call to Metadata. If I remove the [BreezeController] decoration it works fine. I think it has to do with the action filter attribute that converts the string return type into an HttpResponse return type with string content.
The error I'm getting is " Exception message: The stream state of the underlying compression routine is inconsistent."
I've done some testing where a method that's defined to return an HttpResponse works fine. So I think it's the scenario where the method is defined to return string, and then the action filter changes it to HttpResponse at runtime.
Any ideas how I can get this to work?
Here's the response filter being added in BeginRequest:
HttpApplication app = (HttpApplication)sender;

// Check the header to see if it can accept compressed output
string encodings = app.Request.Headers.Get("Accept-Encoding");
if (encodings == null)
    return;

Stream s = app.Response.Filter;
encodings = encodings.ToLower();

if (encodings.Contains("gzip"))
{
    app.Response.Filter = new GZipStream(s, CompressionMode.Compress);
    app.Response.AppendHeader("Content-Encoding", "gzip");
}
I don't know the specifics of what you're doing, but I know that the [BreezeController] attribute strips out filters and adds back just the ones that Breeze wants.
One approach might be to define a separate controller (ModelMetadataController) that only serves the metadata. This controller doesn't have the [BreezeController] attribute; it's a plain old Web API controller.
Then you create a "Breeze controller" (ModelController) with all of the usual methods except the Metadata method.
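On the server side, the split might look roughly like this. This is only a sketch: the controller names, MyDbContext, the Customer entity and the use of Breeze's EFContextProvider are assumptions for illustration, and the metadata-only controller returns an HttpResponseMessage directly (which, per your testing, compresses fine):
// Plain Web API controller with no [BreezeController] attribute; serves only metadata
public class ModelMetadataController : ApiController
{
    private readonly EFContextProvider<MyDbContext> _contextProvider = new EFContextProvider<MyDbContext>();

    [HttpGet]
    public HttpResponseMessage Metadata()
    {
        // Hand the metadata string back as JSON content on an HttpResponseMessage
        return new HttpResponseMessage(HttpStatusCode.OK)
        {
            Content = new StringContent(_contextProvider.Metadata(), Encoding.UTF8, "application/json")
        };
    }
}

// The "real" Breeze controller keeps the usual query/save methods but has no Metadata method
[BreezeController]
public class ModelController : ApiController
{
    private readonly EFContextProvider<MyDbContext> _contextProvider = new EFContextProvider<MyDbContext>();

    [HttpGet]
    public IQueryable<Customer> Customers()
    {
        return _contextProvider.Context.Customers;
    }

    [HttpPost]
    public SaveResult SaveChanges(JObject saveBundle)
    {
        return _contextProvider.SaveChanges(saveBundle);
    }
}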
You call the metadata controller from the client during app launch via MetadataStore.fetchMetadata just to get metadata.
Once you have populated a metadataStore in this fashion, you use it in your EntityManager which sends query and save requests to the "real" Web API data controller.
The client code might look something like this:
var ds = new breeze.DataService({
    serviceName: 'breeze/Model' // the breeze query & save controller
});

var ms = new MetadataStore({
    namingConvention: breeze.NamingConvention.camelCase // assuming that's what you want
});

ms.addDataService(ds); // associate the metadata-to-come with the "real" dataService

var manager = new breeze.EntityManager({
    dataService: ds,
    metadataStore: ms
});

// the fun bit: fetch the metadata from a different controller
var promise = ms.fetchMetadata('breeze/ModelMetadata'); // the metadata-only controller!
return promise; // wait on it appropriately

nsIProtocolHandler and nsIURI: Relative URLs in self-created protocol

I have a simple implementation of a custom protocol. The documentation says the newURI method takes three arguments (spec, charset & baseURI) and that "if the protocol has no concept of relative URIs, the third parameter is ignored".
So I open a page like tada://domain/samplepage, which contains XML starting with this:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE Product SYSTEM "product.dtd">
But I don't see any request for product.dtd reaching my protocol (newURI is not even called). Am I missing something in my implementation?
BTW: the page itself opens correctly, but there's no request to the DTD file.
const
    Cc = Components.classes,
    Ci = Components.interfaces,
    Cr = Components.results,
    Cu = Components.utils,
    nsIProtocolHandler = Ci.nsIProtocolHandler;

Cu.import("resource://gre/modules/XPCOMUtils.jsm");

function TadaProtocol() {
}

TadaProtocol.prototype = {
    scheme: "tada",
    protocolFlags: nsIProtocolHandler.URI_DANGEROUS_TO_LOAD,

    newURI: function(aSpec, aOriginCharset, aBaseURI) {
        let uri = Cc["@mozilla.org/network/simple-uri;1"].createInstance(Ci.nsIURI);
        uri.spec = (aBaseURI === null)
            ? aSpec
            : aBaseURI.resolve(aSpec);
        return uri;
    },

    newChannel: function(aURI) {
        let
            ioService = Cc["@mozilla.org/network/io-service;1"].getService(Ci.nsIIOService),
            uri = ioService.newURI("chrome://my-extension/content/about/product.xml", null, null);
        return ioService.newChannelFromURI(uri);
    },

    classDescription: "Sample Protocol Handler",
    contractID: "@mozilla.org/network/protocol;1?name=tada",
    classID: Components.ID('{1BC90DA3-5450-4FAF-B6FF-F110BB73A5EB}'),
    QueryInterface: XPCOMUtils.generateQI([Ci.nsIProtocolHandler])
}

let NSGetFactory = XPCOMUtils.generateNSGetFactory([TadaProtocol]);
The channel you return from newChannel has the chrome:// URI you passed to newChannelFromURI as its URI. So that's the URI the page has as its URI, and as its base URI. So the DTD load happens from "chrome://my-extension/content/about/product.dtd" directly.
What you probably want to do is to set aURI as the originalURI on the channel you return from newChannel.
As Boris mentioned in his answer, your protocol implementation doesn't set nsIChannel.originalURI property so that URLs will be resolved relative to the chrome: URL and not relative to your tada: URL. There is a second issue with your code however: in Firefox loading external DTDs only works with chrome: URLs, this check is hardcoded. There is a limited number of supported DTDs that are mapped to local files (various HTML doctypes) but that's it - Gecko doesn't support random URLs in <!DOCTYPE>. You can see the current logic in the source code. The relevant bug is bug 22942 which isn't going to be fixed.
Boris and Wladimir, thank you!
After some time I have a solution. The problem was that the DTD file could not be loaded through my custom-created protocol. The idea was to use the Proxy API to override the schemeIs() method on the URI object returned by the protocol handler's newURI method.
So now I have this snippet of code in my newURI method:
let standardUrl = Cc["@mozilla.org/network/standard-url;1"].createInstance(Ci.nsIStandardURL);
standardUrl.init(standardUrl.URLTYPE_STANDARD, -1, spec, charset, baseURI);
standardUrl.QueryInterface(Ci.nsIURL);
return Proxy.create(proxyHandlerMaker(standardUrl));
proxyHandlerMaker just implements the Proxy API handler and overrides the needed schemeIs() method. This solved the problem, and now all requests reach newChannel, where we can handle them.
Important notes:
The request for the DTD reaches the newURI() method but does not reach newChannel(). This is the default behavior. It happens because schemeIs("chrome") is called on the object returned by newURI(); that method has to return true for DTD requests if you want the request to reach newChannel().
The newChannel() method is invoked with an {nsIURI} object which is not the same object that was returned by newURI.
If you want your protocol to handle both protocol:page and protocol://domain/page URLs, you should use both {nsIURI} and {nsIStandardURL} objects.
You can pass the created {nsIStandardURL} object (standardUrl in the snippet above) as the 2nd argument to Proxy.create(). This makes your baseURI (the 3rd argument of newURI) pass the "baseURI instanceof nsIStandardURL" check. The schemeIs() method of this proxied object will also return true for DTD-file requests, but unfortunately those requests never reach the newChannel() method. This could have been a nice solution to the DTD problem, but I could not solve that issue.
