Should the HTTP/2 `:authority` header include the port number?

Following on from "Is Port Number Required in HTTP 'Host' Header Parameter?", does the same logic apply to HTTP/2?
That is, if a browser makes a request to https://server.com:1234/, should the :authority header be server.com or server.com:1234?

It should. :authority is defined by RFC 7540, Section 8.1.2.3 (https://www.rfc-editor.org/rfc/rfc7540#section-8.1.2.3) as a:
pseudo-header field includes the authority portion of the target URI ([RFC3986], Section 3.2). The authority MUST NOT include the deprecated "userinfo" subcomponent for "http" or "https" schemed URIs.
RFC 3986 in turn describes authority as:
authority = [ userinfo "@" ] host [ ":" port ]
It then clarifies in Section 3.2.3, "Port":
A scheme may define a default port. For example, the "http" scheme
defines a default port of "80", corresponding to its reserved TCP port
number. [...] URI producers and normalizers should omit the port
component and its ":" delimiter if port is empty or if its value would
be the same as that of the scheme's default.
So yes, it should include the port, if the port isn't the default for the scheme.
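To make the normalization rule concrete, here is a minimal illustrative sketch in Ruby (my own example, not any HTTP library's API) of deriving the :authority value from scheme, host, and port:

# Illustrative only: build an :authority value following RFC 3986
# normalization; omit the port when it equals the scheme's default.
DEFAULT_PORTS = { "http" => 80, "https" => 443 }

def authority_for(scheme, host, port)
  return host if port.nil? || port == DEFAULT_PORTS[scheme]
  "#{host}:#{port}"
end

authority_for("https", "server.com", 443)  # => "server.com"
authority_for("https", "server.com", 1234) # => "server.com:1234"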


Kong Gateway: add headers

I want to add some headers to my incoming REST API requests with Kong Gateway.
In the Kong admin UI, I set this parameter:
config.add.headers: myheader: $(consumer_id)
myheader is the name of the new header, and I want to put the value of the consumer_id variable in that header.
But after this configuration, Kong gives me an empty value.
I also tried ${consumer_id} and other variants,
but none of these variables worked.
So the question is: what is the variable syntax for referencing consumer_id, custom_id, etc. in Kong Gateway?
Finally, I found the correct header in a network dump and replaced my custom header with it.
The right header for my case was X-Consumer-Custom-ID.
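For background (the header names are documented Kong behavior; the sketch below is my own): Kong's authentication plugins add consumer headers such as X-Consumer-ID, X-Consumer-Username, and X-Consumer-Custom-ID to the request before proxying it upstream, so the upstream service can simply read them. A minimal hypothetical Rack upstream, for illustration:

# config.ru (hypothetical upstream service behind Kong): read the
# consumer headers that Kong's auth plugins add to proxied requests.
run lambda { |env|
  custom_id = env["HTTP_X_CONSUMER_CUSTOM_ID"] # set by Kong; nil if absent
  [200, { "Content-Type" => "text/plain" }, ["custom_id: #{custom_id.inspect}"]]
}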

Why can't I use a wildcard * on Access-Control-Allow-Headers when Allow-Credentials is set to true?

I have a website where the React frontend resides in (let's say) app.mydomain.com and the api on api.mydomain.com.
The user submits a login request to the API and, upon logging in successfully, receives a nice cookie to be used later. The front-end talks only and directly to the API so the domain on the cookie is simply set to api.mydomain.com.
I use Axios to perform the requests with the withCredentials flag set to true, in order to receive the cookie.
The headers on the server to allow CORS are as follows:
Access-Control-Allow-Origin: http://app.mydomain.com
Access-Control-Allow-Methods: GET,POST,DELETE,PUT,OPTIONS
Access-Control-Allow-Headers: *
Access-Control-Allow-Credentials: true
In this situation, Firefox blocks the request with a CORS error (the original question included a screenshot of Firefox's response here).
But as soon as the Access-Control-Allow-Headers value is set more specifically, say to
Access-Control-Allow-Headers: Origin, X-Requested-With, Content-Type, Accept
everything works.
Mozilla's documentation says that "wildcarding" is not allowed for the Origin value, but says nothing about the Headers one; the same goes for this page, where nothing is mentioned either.
Why is Firefox behaving like this and why is it not mentioned anywhere that I can find?
In a preflight response that includes Access-Control-Allow-Credentials: true, an asterisk used as the value of header Access-Control-Allow-Headers is interpreted literally, not as a wildcard (i.e. "all headers allowed").
It's true that the main MDN page about CORS doesn't explicitly state this rule. However, the more specific MDN page about the Access-Control-Allow-Headers header does so explicitly:
The value "*" only counts as a special wildcard value for requests without credentials (requests without HTTP cookies or HTTP authentication information). In requests with credentials, it is treated as the literal header name "*" without special semantics. Note that the Authorization header can't be wildcarded and always needs to be listed explicitly.
Edit (2023): As sideshowbarker pointed out in his comments, the MDN Web Docs page about CORS has since been updated and now states the following:
When responding to a credentialed request: [...]
The server must not specify the "*" wildcard for the Access-Control-Allow-Headers response-header value, but must instead specify an explicit list of header names; for example, Access-Control-Allow-Headers: X-PINGOTHER, Content-Type
Here are clarifying quotes from the more authoritative Fetch standard:
For Access-Control-Expose-Headers, Access-Control-Allow-Methods, and Access-Control-Allow-Headers response headers, the value * counts as a wildcard _for requests without credentials_.
Access-Control-Expose-Headers, Access-Control-Allow-Methods, and Access-Control-Allow-Headers response headers can only use * as value _when request’s credentials mode is not "include"_.
(my emphasis)
The relevant normative requirement for browsers that the spec states is in the main fetch algorithm at https://fetch.spec.whatwg.org/#main-fetch, in step 13, substep 2:
If request's credentials mode is not "include" and headerNames contains *, then set response’s CORS-exposed header-name list to all unique header names in response’s header list.
In other words, the browser behavior that spec statement requires for * is:
- if the credentials mode is not "include", the browser is required to examine the actual list of response headers in the response, and allow all of them
- if the credentials mode is "include", the browser is required to allow only a response header with the literal name "*"
At the end of that step, there’s actually a note saying pretty much the same thing:
One of the headerNames can still be * at this point, but will only match a header whose name is *.
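To tie this back to the original setup: a credentialed preflight response needs an exact origin and an explicit header list rather than *. Here is a minimal sketch as a Rack app (the framework choice and the exact allow-list are my assumptions, not from the original posts):

# Sketch of a CORS-aware Rack endpoint for credentialed requests:
# no wildcards; the origin and header list are spelled out explicitly.
CORS_HEADERS = {
  "Access-Control-Allow-Origin"      => "http://app.mydomain.com",
  "Access-Control-Allow-Methods"     => "GET,POST,DELETE,PUT,OPTIONS",
  "Access-Control-Allow-Headers"     => "Origin, X-Requested-With, Content-Type, Accept",
  "Access-Control-Allow-Credentials" => "true"
}

run lambda { |env|
  if env["REQUEST_METHOD"] == "OPTIONS" # the preflight request
    [204, CORS_HEADERS.dup, []]
  else
    [200, CORS_HEADERS.merge("Content-Type" => "application/json"), ['{"ok":true}']]
  end
}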

What is the "accept" part for?

When connecting to a website using Net::HTTP you can parse the URL and output each of the request headers by using #each_header. I understand what the encoding and the user agent and such mean, but not what the "accept"=>["*/*"] part is. Is this the accepted payload? Or is it something else?
require 'net/http'
uri = URI('http://www.bible-history.com/subcat.php?id=2')
# => #<URI::HTTP http://www.bible-history.com/subcat.php?id=2>
http_request = Net::HTTP::Get.new(uri)
http_request.each_header { |name, value| puts "#{name}: #{value}" }
http_request.to_hash
# => {"accept-encoding"=>["gzip;q=1.0,deflate;q=0.6,identity;q=0.3"], "accept"=>["*/*"], "user-agent"=>["Ruby"], "host"=>["www.bible-history.com"]}
From https://www.w3.org/Protocols/HTTP/HTRQ_Headers.html#z3
This field contains a semicolon-separated list of representation schemes ( Content-Type metainformation values) which will be accepted in the response to this request.
Basically, it specifies what kinds of content you can read back. If you write an API client, you may only be interested in application/json, for example (and you couldn't care less about text/html).
In this case, your header would look like this:
Accept: application/json
And the app will know not to send any HTML your way.
Using the Accept header, the client can specify the MIME types it is willing to accept for the requested URL. If the requested resource is available in multiple representations (e.g. an image as PNG, JPG or SVG), the user agent can specify that it wants the PNG version only. It is up to the server to honor this request.
In your example, the request header specifies that you are willing to accept any content type.
The header was defined in RFC 2616 and is now specified in RFC 9110.
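As a short follow-up to the snippet above, this is how you could override the default Accept header with Net::HTTP to request JSON only (same URL as in the question):

require 'net/http'

uri = URI('http://www.bible-history.com/subcat.php?id=2')
http_request = Net::HTTP::Get.new(uri)
http_request['Accept'] = 'application/json' # replaces the default "*/*"
http_request.each_header { |name, value| puts "#{name}: #{value}" }
# accept-encoding: gzip;q=1.0,deflate;q=0.6,identity;q=0.3
# accept: application/json
# user-agent: Ruby
# host: www.bible-history.com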

Sanitizing URL and parameters

Currently, my software has the following workflow:
1. The user performs a search through a REST API and selects an item.
2. The server performs the same search again to validate the user's selection.
In order to implement step 2, the user has to send the URL params that they used for their search as a string (e.g. age=10&gender=M).
The server will then call http_get(url + "?" + params_str_submitted_by_user).
Can a malicious user make the server connect to an unintended server by manipulating params_str_submitted_by_user?
What is the worst case scenario if even newlines are left in and the user can arbitrarily manipulate the HTTP headers?
As you are appending params_str_submitted_by_user to the base URL after the ? delimiter, you are safe from the type of attack where the context of the domain is changed to a username or password:
Say the URL was http://example.com and params_str_submitted_by_user was @evil.com, and you did not have the / or ? characters in your URL string concatenation.
This would make your URL http://example.com@evil.com, which actually means username example.com at domain evil.com.
However, the userinfo component cannot contain the ? (nor the /) character, so you should be safe, as you always concatenate a ? before the user input. In your case the URL becomes:
http://example.com?@evil.com
or
http://example.com/?@evil.com
if you include the slash in your base URL (better practice). These are safe, as all they do is pass @evil.com to your website as a query string value, because @evil.com will no longer be interpreted as a domain by the parser.
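You can see this with a URL parser; for instance with Ruby's standard URI library (Ruby picked only because other snippets in this compilation use it):

require 'uri'

# Without a "/" or "?" separator, "@" turns the original host into userinfo:
u = URI.parse("http://example.com@evil.com/")
u.userinfo # => "example.com"
u.host     # => "evil.com"

# With "?" always concatenated in, "@evil.com" is just query data:
v = URI.parse("http://example.com/?@evil.com")
v.host  # => "example.com"
v.query # => "@evil.com"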
What is the worst case scenario if even newlines are left in and the user can arbitrarily manipulate the HTTP headers?
This depends on how good your http_get function is at sanitizing values. If http_get does not strip newlines internally, it could be possible for an attacker to control the headers sent by your application.
E.g. if http_get internally created the following request:
GET <url> HTTP/1.1
Host: <url.domain>
so under legitimate use it would work like the following:
http_get("https://example.com/foo/bar")
generates
GET /foo/bar HTTP/1.1
Host: example.com
an attacker could set params_str_submitted_by_user to
<space>HTTP/1.1\r\nHost: example.org\r\nCookie: foo=bar\r\n\r\n
this would cause your code to call
http_get("https://example.com/" + "?" + "<space>HTTP/1.1\r\nHost: example.org\r\nCookie: foo=bar\r\n\r\n")
which would cause the request to be
GET /? HTTP/1.1
Host: example.org
Cookie: foo=bar

HTTP/1.1
Host: example.com
Depending on how http_get parses the domain, this might not actually cause the request to go to example.org instead of example.com; it only manipulates the headers (unless example.org is another site on the same IP address as your site). However, the attacker has managed to manipulate the headers and add their own cookie value. What the attacker gains from this depends on your particular setup; there is not necessarily any general advantage. It would be more of a logic-flaw exploit if they could trick your code into behaving in an unexpected way by causing it to make requests under the attacker's control.
What should you do?
To guard against the unexpected and unknown, either use a version of http_get that handles header injection properly; many modern languages and libraries now deal with this situation internally.
Or, if http_get is your own implementation, make sure it sanitizes or rejects URLs that contain characters that are invalid in a URL, such as carriage returns or line feeds. See this question for a list of valid characters.
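As one concrete illustration of rejecting invalid characters (a sketch, not a complete sanitizer; the whitelist below is the RFC 3986 query character set):

# Reject user-supplied query strings containing characters invalid in a
# URL query, notably CR/LF, which are what enable header injection.
QUERY_SAFE = /\A[A-Za-z0-9\-._~!$&'()*+,;=:@\/?%]*\z/

def safe_query!(params_str)
  unless params_str.match?(QUERY_SAFE)
    raise ArgumentError, "query string contains characters invalid in a URL"
  end
  params_str
end

url = "https://example.com/" + "?" + safe_query!("age=10&gender=M") # fine
safe_query!(" HTTP/1.1\r\nHost: example.org") # raises ArgumentError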

Fiddler: Creating an AutoResponder rule to map all calls from one host to another host

I want to create one AutoResponder rule that will map all calls from one host to another host, but preserve the URLs. Examples:
http://hostname1/foo.html -> http://hostname2/foo.html
and
http://hostname1/js/script.js -> http://hostname2/js/script.js
in one rule.
For now, I've accomplished this by creating an AutoResponder rule for every URL my project calls, but I'm sure there must be a way to write one rule using the right wildcards. I looked at http://www.fiddler2.com/Fiddler2/help/AutoResponder.asp, but I couldn't see how to do it. The wildcards all seem to apply to the matching and not the action.
Full context: I'm developing on a beta platform and Visual Studio is borked in such a way that it is sending all the requests to http://localhost:24575 when my project is actually running on http://localhost:56832.
This is how I configured Fiddler2:
I want to redirect all requests from http://server-name/vendor-portal-html/ to http://localhost/vendor-portal-html/
My configuration is as follows:
REGEX:.*/vendor-portal-html/(.*) to http://127.0.0.1/vendor-portal-html/$1
Thanks to EricLaw for the comment above.
To map from one host to another, don't use AutoResponder. Instead, click Tools > Hosts.
Alternatively, you can click Rules > Customize Rules, scroll to OnBeforeRequest and write a bit of code:
if (oSession.HostnameIs("localhost") && (oSession.port == 24575)) oSession.port = 56832;
Because this was way harder to find than it should have been, here is how to use Fiddler to redirect all requests for one host to another host:
Use the AutoResponder tab to set a rule such that any request matching your old host will redirect to your new host with the path and query string appended.
Match with regex options ix to make it case-insensitive and ignore whitespace. Leave off the n option as it requires explicitly named capture groups.
Capture the path and query string of the request and append it to the redirect response using the variable $1, where the path+query is the first capture group. You can use capture groups $1-$n if your regex has more.
Fiddler will then issue an HTTP 307 redirect response.
Request: regex:^(?ix)http://old.host.com/(.*)$ #Match HTTP host
Response: *redir:http://new.host.com/$1
Request
GET http://old.host.com/path/to/file.html HTTP/1.1
Host: old.host.com
Accept: */*
Accept-Encoding: gzip, deflate
Connection: keep-alive
Response
HTTP/1.1 307 AutoRedir
Content-Length: 0
Location: http://new.host.com/path/to/file.html
Cache-Control: max-age=0, must-revalidate
Mapping requests with the Fiddler AutoResponder using regular expressions is possible.
This can be done with regexp rules; however, this doesn't seem to be documented anywhere.
If you add a rule and use regular expressions with capture groups in parentheses, the matches can be used in the target mapping via the placeholders $1 ... $n;
each number corresponds to a capture group in the rule.
Example of a rule: regex:http://server1/(.*) -> http://server2/$1
This will result in the following mapping: http://server1/foo.html -> http://server2/foo.html
