Envoy lua filter - http call cluster invalid. Must be configured

I am trying to create an Envoy filter for all of my microservices deployed on a GKE cluster which is running Istio.
In the filter I want to read two header values and send a request to an external web service to get the list of all groups the user has authority over.
When I send the request I get an "http call cluster invalid. Must be configured" error.
As I understand it, the host has to be registered as a cluster so that Envoy knows where to send the request, and I have added an entry in the "clusters" section of my filter.yaml file.
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: authorization-filter
  namespace: default
spec:
  filters:
  - listenerMatch:
      listenerType: SIDECAR_INBOUND
      listenerProtocol: HTTP
    filterName: envoy.lua
    filterType: HTTP
    filterConfig:
      inlineCode: |
        function envoy_on_request(request_handle)
          request_handle:logWarn("Inside filter")
          local token = request_handle:headers():get("Authorization")
          local partition = request_handle:headers():get("partition-id")
          if token == nil or partition == nil then
            request_handle:logErr("Token or Partition is empty")
            request_handle:respond({[":status"] = "400", ["Content-Type"] = "application/json"}, "{\"ErrorMessage\":\"Bad Request: Authorization or Partition-Id missing in headers\"}")
          end
          request_handle:logWarn("Sending request")
          local headers, body = request_handle:httpCall(
            "lua_cluster",
            {
              [":method"] = "GET",
              [":authority"] = "example.com",
              [":path"] = "/api/authorization/groups",
              ["Authorization"] = token,
              ["partition-id"] = partition
            },
            nil,
            5000)
          request_handle:headers():add("authorization-response-headers", headers)
        end
  clusters:
  - name: lua_cluster
    connect_timeout: 0.5s
    type: strict_dns
    lb_policy: round_robin
    hosts:
    - socket_address:
        address: "example.com"
        port_value: 8888
What am I missing here?
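For reference, the same upstream registered through the newer configPatches schema (the approach used in the related question below) would look roughly like this; an untested sketch reusing the host and port above:

- applyTo: CLUSTER
  match:
    context: SIDECAR_OUTBOUND
  patch:
    operation: ADD
    value:
      name: lua_cluster
      connect_timeout: 0.5s
      type: STRICT_DNS
      lb_policy: ROUND_ROBIN
      load_assignment:
        cluster_name: lua_cluster
        endpoints:
        - lb_endpoints:
          - endpoint:
              address:
                socket_address:
                  address: example.com
                  port_value: 8888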

Related

Istio EnvoyFilter Lua HttpCall doesn't work with HTTPS?

I need to decrypt the body of a request via an external API.
But when I try to do it with an EnvoyFilter using Lua, it doesn't work.
The same code I'm posting here works without HTTPS, but with HTTPS it returns 503.
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: eva-decrypt-filter
  namespace: istio-system
spec:
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: ANY
      listener:
        filterChain:
          filter:
            name: "envoy.filters.network.http_connection_manager"
    patch:
      operation: INSERT_BEFORE
      value:
        name: envoy.lua
        typed_config:
          "@type": "type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua"
          inlineCode: |
            function envoy_on_request(request_handle)
              local buffered = request_handle:body()
              local bodyString = tostring(buffered:getBytes(0, buffered:length()))
              print("bodyString ->")
              print(bodyString)
              if string.match(bodyString, "valcirtest") then
                print("starting http_Call")
                local responseHeaders, responseBody = request_handle:httpCall(
                  "thirdparty",
                  {
                    [":method"] = "POST",
                    [":path"] = "/decrypt",
                    [":authority"] = "keycloack-dev-admin.eva.bot",
                    [":scheme"] = "https",
                    ["content-type"] = "application/json",
                    ["content-length"] = bodyString:len(),
                  },
                  bodyString,
                  3000)
                print("request finished")
                print("responseHeaders -> ")
                print(responseHeaders)
                print(responseHeaders[":status"])
                print("responseBody -> ")
                print(responseBody)
                local content_length = request_handle:body():setBytes(responseBody)
                request_handle:headers():replace("content-length", content_length)
              else
                print("no match")
              end
            end
  - applyTo: CLUSTER
    match:
      context: SIDECAR_OUTBOUND
    patch:
      operation: ADD
      value: # cluster specification
        name: thirdparty
        connect_timeout: 1.0s
        type: STRICT_DNS
        dns_lookup_family: V4_ONLY
        lb_policy: ROUND_ROBIN
        load_assignment:
          cluster_name: thirdparty
          endpoints:
          - lb_endpoints:
            - endpoint:
                address:
                  socket_address:
                    protocol: TCP
                    address: keycloack-dev-admin.eva.bot
                    port_value: 443
The response error is:
503
responseBody ->
upstream connect error or disconnect/reset before headers. reset reason: connection termination
I'm using Istio v1.11.4.
HTTPS should be configured on your "thirdparty" cluster by adding the following to your cluster config:
transport_socket:
  name: envoy.transport_sockets.tls
To add to @koffi-kodjo's answer, you also need to specify the typed_config property. The transport_socket node should be placed at the same level as the name: thirdparty node.
transport_socket:
  name: envoy.transport_sockets.tls
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
ref:
https://github.com/envoyproxy/envoy/issues/11582#issuecomment-646427632
https://www.envoyproxy.io/docs/envoy/latest/api-v3/extensions/transport_sockets/tls/v3/tls.proto.html#extensions-transport-sockets-tls-v3-upstreamtlscontext
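Putting both answers together, the CLUSTER patch from the question becomes the following (same values as above; the sni line is an optional assumption for name-based TLS endpoints):

- applyTo: CLUSTER
  match:
    context: SIDECAR_OUTBOUND
  patch:
    operation: ADD
    value:
      name: thirdparty
      connect_timeout: 1.0s
      type: STRICT_DNS
      dns_lookup_family: V4_ONLY
      lb_policy: ROUND_ROBIN
      # Without a TLS transport socket Envoy speaks plaintext to port 443,
      # which produces the "connection termination" 503 above.
      transport_socket:
        name: envoy.transport_sockets.tls
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
          sni: keycloack-dev-admin.eva.bot # assumption: endpoint expects SNI
      load_assignment:
        cluster_name: thirdparty
        endpoints:
        - lb_endpoints:
          - endpoint:
              address:
                socket_address:
                  protocol: TCP
                  address: keycloack-dev-admin.eva.bot
                  port_value: 443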

How do I get the hostname in the Kubernetes pod

I have 3 Ingresses pointing to the same service. In my Kubernetes pod, how can I find the hostname and which subdomain the request is coming from? My backend code is a Golang server.
When a request reaches a pod, I want to know which subdomain (x, y or z) it came from. Currently the Golang code reports the hostname as the pod IP address.
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/browser-xss-filter: 'true'
    ingress.kubernetes.io/force-hsts: 'true'
    ingress.kubernetes.io/hsts-include-subdomains: 'true'
    ingress.kubernetes.io/hsts-max-age: '315360000'
  name: test
  namespace: test
spec:
  rules:
  - host: http://x.test.com
    http:
      paths:
      - backend:
          serviceName: test-service
          servicePort: 8080
        path: /
  - host: http://y.test.com
    http:
      paths:
      - backend:
          serviceName: test-service
          servicePort: 8080
        path: /
  - host: http://z.test.com
    http:
      paths:
      - backend:
          serviceName: test-service
          servicePort: 8080
        path: /
func Subdomain(r *http.Request) string {
	// Note: for server-side requests r.URL.Host is usually empty;
	// the requested host lives in r.Host.
	host := strings.TrimSpace(r.URL.Host)
	// Figure out if a subdomain exists in the host given.
	hostParts := strings.Split(host, ".")
	if len(hostParts) > 2 {
		// The subdomain exists; it is the first element.
		return hostParts[0]
	}
	return ""
}
Try using the X-Forwarded-Host header added by the ingress controller.
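For example, a minimal sketch (assuming the ingress controller sets X-Forwarded-Host; falls back to the Host header otherwise):

import (
	"net/http"
	"strings"
)

func Subdomain(r *http.Request) string {
	// X-Forwarded-Host carries the host the client originally requested,
	// e.g. "x.test.com"; r.Host may already be rewritten by the proxy.
	host := r.Header.Get("X-Forwarded-Host")
	if host == "" {
		host = r.Host
	}
	if parts := strings.Split(host, "."); len(parts) > 2 {
		return parts[0] // "x", "y" or "z"
	}
	return ""
}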

Kong lambda plugin - Correlation id being dropped

So I'm using Kong as an API gateway. I have a simple Lambda function on AWS that returns its request headers. I'm able to see a response from the AWS Lambda function, however there is no correlation ID. I have set the correlation-id plugin as global. I would like some clarification on why the lambda plugin is not adding the correlation-id to the headers of the original response.
exports.handler = async (event) => {
// TODO implement
const response = {
statusCode: 200,
body: JSON.stringify(event.request_headers),
};
return response;
};
My kong.yaml
_format_version: "1.1"
routes:
- name: lambda1
  paths:
  - /lambda1
  path_handling: v0
  preserve_host: false
  protocols:
  - http
  - https
  regex_priority: 0
  strip_path: true
  https_redirect_status_code: 426
  plugins:
  - name: aws-lambda
    config:
      aws_key: XXXXXXXXXXXXXXXXXXXXXXX
      aws_region: us-east-1
      aws_secret: XXXXXXXXXXXXXXXXXXXXXXXXx
      awsgateway_compatible: false
      forward_request_body: true
      forward_request_headers: true
      forward_request_method: false
      forward_request_uri: false
      function_name: kong-lambda-plugin
      host: null
      invocation_type: RequestResponse
      is_proxy_integration: false
      keepalive: 60000
      log_type: Tail
      port: 443
      proxy_scheme: null
      proxy_url: null
      qualifier: null
      skip_large_bodies: true
      timeout: 60000
      unhandled_status: null
    enabled: true
    protocols:
    - grpc
    - grpcs
    - http
    - https
plugins:
- name: correlation-id
  config:
    echo_downstream: false
    generator: uuid#counter
    header_name: correlation-id
  enabled: true
  protocols:
  - grpc
  - grpcs
  - http
  - https
So what worked for me was https://github.com/Kong/priority-updater
I created the .rock file generated by the script above.
Since I'm using a dockerized version of Kong, I copied the file over to my Kong docker host.
Installed the plugin on my Kong docker host using the luarocks command.
Added the plugin to my 'kong.conf' file.
Added it to the lambda route and it worked beautifully.
You are right, the problem comes from the plugin priority:
CorrelationIdHandler.PRIORITY = 1
AWSLambdaHandler.PRIORITY = 750
The AWS Lambda plugin breaks the chain of plugins because in the handler phase it does
return kong.response.exit(status, content, headers)
so the other plugins cannot run.
You can create a custom correlation-id plugin and change the PRIORITY there; there is a tool that handles that for you: https://github.com/Kong/priority-updater
You can also create a route that adds the correlation ID and then calls another route that does the lambda call.
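One possible shape for such a custom plugin's handler.lua (a sketch: the plugin name my-correlation-id and the priority value 1000 are assumptions, and a matching schema.lua is still required):

-- kong/plugins/my-correlation-id/handler.lua (hypothetical plugin name)
-- Reuse the stock correlation-id handler, but raise its priority above
-- aws-lambda's 750 so it runs before the chain is short-circuited.
local CorrelationIdHandler = require "kong.plugins.correlation-id.handler"

CorrelationIdHandler.PRIORITY = 1000 -- assumption: any value above 750 works

return CorrelationIdHandler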

GET URL Name in Kubernetes

I use a Spring Boot project deployed in Kubernetes, and I would like to get the URL of the pod.
I referred to How to Get Current Pod in Kubernetes Java Application, but it did not work for me.
There are different environments (DEV, QA, etc.) and I want to get the URL dynamically. Is there any way?
My service yaml:
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: test-service
  namespace: default
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/metric: concurrency
        # Disable scale to zero with a minScale of 1.
        autoscaling.knative.dev/minScale: "1"
        # Limit scaling to 100 pods.
        autoscaling.knative.dev/maxScale: "100"
    spec:
      containers:
      - image: testImage
        ports:
        - containerPort: 8080
Is it possible to add

valueFrom:
  fieldRef:
    fieldPath: status.podIP

from the URL https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/#the-downward-api?
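For reference, that downward API snippet would sit on the container spec like this (a sketch; note that status.podIP exposes the pod's own IP, not the caller's hostname):

spec:
  containers:
  - image: testImage
    ports:
    - containerPort: 8080
    env:
    - name: POD_IP # the pod's own IP, which is the value the asker already sees
      valueFrom:
        fieldRef:
          fieldPath: status.podIP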
Your case is about Knative ingress, not Kubernetes.
From the inbound network connectivity part of the runtime contract in the Knative documentation:
In addition, the following base set of HTTP/1.1 headers MUST be set on
the request:
Host - As specified by RFC 7230 Section 5.4
Also, the following proxy-specific request headers MUST be set:
Forwarded - As specified by RFC 7239.
Look at the headers inside your request.
An example for servlet doGet method:
Enumeration<String> headerNames = request.getHeaderNames();
while (headerNames.hasMoreElements()) {
    String paramName = headerNames.nextElement();
    out.print("<tr><td>" + paramName + "</td>\n");
    String paramValue = request.getHeader(paramName);
    out.println("<td> " + paramValue + "</td></tr>\n");
}

Multiple paths in http module - metricbeats

I am using the http module of Metricbeat to monitor JMX. I am using the http module instead of the jolokia module because the latter lacks wildcard support at this point. The example configuration in the docs is as follows.
- module: http
  metricsets: ["json"]
  period: 10s
  hosts: ["localhost:80"]
  namespace: "json_namespace"
  path: "/jolokia/"
  body: '{"type" : "read", "mbean" : "kafka.consumer:type=*,client-id=*", "attribute" : "count"}'
  method: "POST"
This works fine and I am able to get data into Kibana. I see errors when I configure it as follows to call multiple paths.
- module: http
  metricsets: ["json"]
  enabled: true
  period: 10s
  hosts: ["localhost:80"]
  namespace: "metrics"
  method: POST
  paths:
  - path: "/jolokia/"
    body: '{"type" : "read", "mbean" : "kafka.consumer:type=*,client-id=*", "attribute" : "bytes-consumed-rate"}'
  - path: "/jolokia/"
    body: '{"type" : "read", "mbean" : "kafka.consumer:type=*,client-id=*", "attribute" : "commit-latency-avg"}'
This does not seem to be the right config, and I see that the http events have failures:
2018/02/26 19:53:18.315740 metrics.go:39: INFO Non-zero metrics in the last 30s: beat.info.uptime.ms=30000 beat.memstats.gc_next=4767600 beat.memstats.memory_alloc=4016168 beat.memstats.memory_total=47474256 libbeat.config.module.running=3 libbeat.output.read.bytes=4186 libbeat.output.write.bytes=16907 libbeat.pipeline.clients=7 libbeat.pipeline.events.active=0 libbeat.pipeline.events.published=18 libbeat.pipeline.events.total=18 libbeat.pipeline.queue.acked=18 metricbeat.http.json.events=3 metricbeat.http.json.failures=3
Documentation on how to set up the http module: Example configuration
I had to query multiple URLs of my REST API, and I achieved that by having multiple "http" modules with different host URLs. The following is an example:
- module: http
  metricsets: ["json"]
  period: 3600s
  hosts: ["http://localhost/Projects/method1/"]
  namespace: "testmethods"
  method: "GET"
  enabled: true
- module: http
  metricsets: ["json"]
  period: 3600s
  hosts: ["http://localhost/Projects/method2/"]
  namespace: "testmethods"
  method: "GET"
  enabled: true
This let me achieve multiple paths for the same module.
Multiple paths are not supported by the http module's json metricset.
What you found in the config example is for the http module's server metricset. That metricset does not query URLs; instead, it opens an HTTP server on the specified port and can receive input on multiple paths, which are used to separate data into different namespaces.
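For comparison, a server metricset configuration looks roughly like this (a sketch; host, port, paths and namespaces are placeholders):

- module: http
  metricsets: ["server"]
  # Opens an HTTP server instead of polling URLs; senders POST metrics to it.
  host: "localhost"
  port: "8080"
  paths:
  - path: "/foo"
    namespace: "foo"
  - path: "/bar"
    namespace: "bar"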
