I'm trying to add "client_ip" to a response header. I can see the IP address being printed in the Kong API gateway logs, but I cannot forward it to a response header.
Sample log output:
,"method":"GET"},"client_ip":"49.36.22.209","tries":[{"balancer
I tried the following configuration, but the IP address is still not added to the response header.
- name: response-transformer
  route: routeName
  config:
    add:
      headers:
        - X-Real-IP:${{client_ip}}
Can anyone help me enable this header in the Kong API gateway configuration?
Thanks.
You could use the "serverless-functions" plugin.
In your case you would use the post-function, running on the service response in the header_filter phase.
With this plugin and a post-function you can write custom logic in Lua and modify the response.
With a pre-function you could modify the request.
Kong has a PDK you can use globally.
Whether you are behind a load balancer or not, you would use
kong.client.get_ip() or kong.client.get_forwarded_ip()
Example Code
local client = kong.client
local response = kong.response

local function set_client_ip_header()
  -- use get_forwarded_ip() instead when Kong sits behind a load balancer or another proxy
  local client_ip = client.get_ip() -- or client.get_forwarded_ip()
  response.set_header("X-Real-Ip", client_ip)
end

return set_client_ip_header -- return the function so the plugin can cache (memoize) it
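For completeness, here is a minimal sketch of how this could be wired up in declarative config, assuming a Kong version whose serverless-functions plugin accepts per-phase function lists (config.header_filter) and reusing the route name from your snippet:

plugins:
- name: post-function
  route: routeName
  config:
    header_filter:
      - |
        -- runs in the header_filter phase, before response headers are sent downstream
        local client_ip = kong.client.get_forwarded_ip() or kong.client.get_ip()
        kong.response.set_header("X-Real-IP", client_ip)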
Related
I have a server in Azure running two web apps, one on port 443 (IIS), another on 1024 (Apache). Both are HTTPS. I have an Azure Application Gateway (WAF v2) in place. I would like to allow requests for subdomain1.domain.com to go through on 443 (which is set up and working) and requests for subdomain2.domain.com to be rewritten to port 1024 internally.
I have tried various combinations of conditions and actions, but cannot get anything to happen at all, good, bad, or indifferent!
My current Condition is as follows
Type of variable to check: HTTP Header
Header type: Response Header
Header name: Common Header
Common header: Location
Case-sensitive: No
Operator: =
Pattern to match: (https?):\/\/.*subdomain2.domain.com(.*)$
My current action is:
Re-write type: Response Header
Action type: Set
Header name: Common header
Common header: Location
Header value: https://backendservername.domain.com:1024{http_resp_Location_2}
I can't find a combination that does anything at all, nor any examples that show port updates. I've tried using request headers and the host value, but unfortunately that conflicts with the host rewrite in the HTTP Settings that was necessary to get end-to-end SSL working.
Thanks in advance.
Matt.
I am new to Apollo. I set the uri for the remote GraphQL server as instructed in the Apollo documentation: Apollo Angular Doc: Setup and options
const uri = 'http://ip_address:port_number/.../graphql';
But I got the following error when trying to send a post request to the remote server:
POST http://localhost:4200/ 404 (Not Found)
scheduleTask # zone-evergreen.js:2845
scheduleTask # zone-evergreen.js:385
...
Looks like the uri I set in graphql.module.ts has no effect. Any suggestions?
Thanks!
I did some trial and error and found that the problem went away when I changed the single quotes to double quotes when specifying the uri:
const uri = "http://ip_address:port_number/.../graphql";
Hope it will help other people running into the same issue.
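For reference, here is a minimal graphql.module.ts sketch of the standard apollo-angular setup where that uri is actually used; import paths vary between apollo-angular versions, this assumes apollo-angular 2.x with @apollo/client:

import { NgModule } from '@angular/core';
import { APOLLO_OPTIONS } from 'apollo-angular';
import { HttpLink } from 'apollo-angular/http';
import { InMemoryCache } from '@apollo/client/core';

// double quotes, as in the fix above
const uri = "http://ip_address:port_number/.../graphql";

export function createApollo(httpLink: HttpLink) {
  // the link must be built from `uri`; otherwise requests fall back to the app's own origin (localhost:4200)
  return {
    link: httpLink.create({ uri }),
    cache: new InMemoryCache(),
  };
}

@NgModule({
  providers: [
    { provide: APOLLO_OPTIONS, useFactory: createApollo, deps: [HttpLink] },
  ],
})
export class GraphQLModule {}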
I want to implement a custom nginx cache control method from my scripts, by using a custom header: "Do-Cache".
I used in http block of nginx:
map $sent_http_do_cache $nocache {
    public  0;
    default 1;
}
And in the server block of nginx:
fastcgi_cache_bypass $nocache;
fastcgi_no_cache $nocache;
So, for Do-Cache: public, nginx should cache the response. Otherwise not.
But this configuration is not working. Debugging in the logs shows that the values of $sent_http_do_cache and $nocache are correct until they are used in the server block of nginx. As soon as they are used in the server block (fastcgi_cache_bypass $nocache, or even a simple set $a $nocache), the $nocache variable gets the value "1", and $sent_http_do_cache becomes "-".
Is there any other way of managing the nginx cache based on a custom header in the response?
Caching decisions based on a response header cannot be done this way: fastcgi_cache_bypass is evaluated before the request is sent upstream, so at that point there is no response yet and the $sent_http_* variables are empty. Basing the decision on the response would imply that Nginx must proxy every request back to the backend and check its response first, defeating the purpose of the proxy cache.
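If the decision can instead be driven from the request side (for example, a hypothetical Do-Cache request header sent by your scripts along with the request), the same map approach works, because $http_* variables are already available when fastcgi_cache_bypass is evaluated:

map $http_do_cache $nocache {
    public  0;
    default 1;
}

# in the server block
fastcgi_cache_bypass $nocache;
fastcgi_no_cache     $nocache;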
I'm working on a Laravel project (an API) and I have a problem with a custom param in the request header.
I need to send a token in the request header, so I just add a param api_token in my request.
When I am on my local machine configured with Apache2, I can get my header param in Laravel with $request->header('api_token'), but when I try on my server configured with nginx, I always get null.
For me, there is a problem between nginx and the request headers. What can I do?
Any ideas? Maybe it's not nginx...
That's because by default Nginx does not allow headers with underscores. You can simply rename your header parameter to api-token:
$request->header('api-token');
Or you can change your Nginx configuration to allow headers with underscores. Somewhere inside your server block, add the underscores_in_headers directive like this:
server {
    ...
    underscores_in_headers on;
    ...
}
Also don't forget to reload your Nginx configuration. Read more about this underscores_in_headers directive here.
Hope this solves your issue.
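As a quick sanity check, you can send the header from the command line and confirm which variant actually reaches Laravel (the token value and endpoint below are placeholders):

# dash works with the default nginx configuration
curl -H "api-token: my-secret-token" https://your-server/api/endpoint

# underscore only reaches PHP after enabling underscores_in_headers
curl -H "api_token: my-secret-token" https://your-server/api/endpoint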
Just getting started with nifi.
Have an HTTP processor of type "ListenHTTP" listening on port 9090.
Need to allow HTTP POST on the http://localhost:9090/ endpoint.
I was unable to locate any "acceptable HTTP verbs" setting within the Web UI for this processor, so my guess is it needs to be specified in some sort of config file.
My question is: what file would that be, and what is the actual syntax to specify this?
I have seen some XML templates online, but I'm not sure where to put one.
By default, if no alternative base name is specified, the ListenHTTP processor's endpoint is available at:
http://{hostname}:{port}/contentListener
Accordingly, your request should, for default settings, be:
curl --data "param1=value1&param2=value2" localhost:9090/contentListener
The full documentation on the processor is available at ListenHTTP or, if that link breaks, via the NiFi Documentation page.
Alternatively, if you are looking to restrict your endpoint to specific verbs, consider the combination of HandleHttpRequest and HandleHttpResponse.
I believe by default it will accept all verbs. I configured ListenHTTP on port 9090 with an empty "Base Path" property, and was able to use curl to POST data to it successfully. What kind of issues are you having?
The ListenHTTP processor...
... starts an HTTP Server and listens on a given base path to transform
incoming requests into FlowFiles. The default URI of the Service will
be http://{hostname}:{port}/contentListener. Only HEAD and POST
requests are supported. GET, PUT, and DELETE will result in an error
and the HTTP response status code 405.
-- NiFi Documentation, ListenHTTP 1.6.0
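A quick way to confirm that behavior from the command line, assuming the default port and base path:

# POST is accepted (returns 200 by default)
curl -i --data "hello" http://localhost:9090/contentListener

# GET is rejected with 405 Method Not Allowed
curl -i http://localhost:9090/contentListener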
I have been able to POST to NiFi using PowerShell with Invoke-WebRequest, but needed to specify -ContentType:"application/x-www-form-urlencoded". This content type is set implicitly by the curl command in the answer from @apiri.
The example below works, and as an added bonus shows how you might include a header that will set an attribute on the resulting flowfile. Remember that you need to set the optional HTTP Headers to receive as Attributes (Regex) property in the processor configuration.
[PS] $HttpPost = @{
    Uri         = "http://{hostname}:{port}/contentListener"
    Method      = "POST"
    ContentType = "application/x-www-form-urlencoded"
    Headers     = @{ MyAttribute = "SomeValue" }
}
[PS] $Body = Get-Content <some_file> -Raw
[PS] Invoke-WebRequest @HttpPost -Body:$Body
VERBOSE: POST http://{hostname}:{port}/contentListener with -1-byte payload
VERBOSE: received 0-byte response of content type text/plain
StatusCode : 200
StatusDescription : OK