How to limit requests based on source IP on Istio

I am using Istio version 1.12.
I tried to add a rate limit configuration, but it can't limit requests based on source IP. Does Istio support this?
I am using the official sample:
https://istio.io/v1.12/docs/tasks/policy-enforcement/rate-limit/#global-rate-limit

You can use the remote_address key.
ConfigMap changes for the ratelimit service:
- key: remote_address
  rate_limit:
    requests_per_unit: 10
    unit: second
Restart the ratelimit service.
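For context, a minimal sketch of the full config file that the ConfigMap mounts into the ratelimit service might look like this; the domain name is an assumption and has to match the domain configured in the EnvoyFilter that points at the rate limit service:
# Sketch of the ratelimit service's config.yaml, assuming the "ratelimit"
# domain from the official sample; adjust to your own domain.
domain: ratelimit
descriptors:
  - key: remote_address
    rate_limit:
      requests_per_unit: 10
      unit: second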
EnvoyFilter changes:
- actions:
    - remote_address: {}
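In the official sample, those actions live in the second EnvoyFilter, the one that patches the virtual host; the relevant part of that patch would then look roughly like this sketch:
# Excerpt of the VIRTUAL_HOST patch from the sample, with the
# path-based action swapped for remote_address.
patch:
  operation: MERGE
  value:
    rate_limits:
      - actions:
          - remote_address: {}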
See also: https://github.com/neumanndaniel/kubernetes/tree/master/envoy-ratelimit

Related

Kong rate limit plugin to multiple services

I'm using the rate-limiting plugin by Kong in declarative mode, with a config file like:
_format_version: "3.0"
_transform: true
services:
  - name: service1
    url: https://example.com/api/endpoint1
    routes:
      - name: route1
        paths:
          - /path1
  - name: service2
    url: https://example.com/api/endpoint2
    routes:
      - name: route2
        paths:
          - /path2
plugins:
  - name: rate-limiting
    service: service1
    config:
      second: 5
      policy: local
      limit_by: service
  - name: rate-limiting
    service: service2
    config:
      second: 5
      policy: local
      limit_by: service
For the rate-limiting plugin I'm also using the attribute limit_by: service, because I want to rate limit all requests arriving at that service, not just group them by IP.
I have more or less 12 endpoints configured like the above, and for every endpoint I want to apply, so far, the same rate limit; every time I need to add a block like this:
- name: rate-limiting
  service: serviceN
  config:
    second: 5
    policy: local
    limit_by: service
Is it possible to specify a rate limit for every service globally while still grouping by service name?
From what is written in the docs, it seems that we need to add the limit_by property every time; otherwise requests are grouped by IP.
Is there any way to do this without repeating the plugin block every time?
Thanks
You can use the following to configure the plugin globally:
plugins:
  - name: rate-limiting
    config:
      second: 5
      policy: local
      limit_by: service
With limit_by set to service, the rate-limit buckets are scoped by service ID, so there is no need to specify service names or IDs explicitly.
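Putting it together, the declarative file from the question could then drop the per-service plugin blocks and keep a single global plugin, roughly like this sketch (service names and URLs are taken from the question):
_format_version: "3.0"
_transform: true
services:
  - name: service1
    url: https://example.com/api/endpoint1
    routes:
      - name: route1
        paths:
          - /path1
  - name: service2
    url: https://example.com/api/endpoint2
    routes:
      - name: route2
        paths:
          - /path2
# One global plugin instead of one block per service; with
# limit_by: service each service still gets its own counter.
plugins:
  - name: rate-limiting
    config:
      second: 5
      policy: local
      limit_by: service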

Service Discovery with Envoy

How does service discovery work with Envoy?
Let's say I have configured an upstream cluster like this:
clusters:
  - name: "service_a_cluster"
    connect_timeout: "0.25s"
    type: "strict_dns"
    lb_policy: "ROUND_ROBIN"
    hosts:
      - socket_address:
          address: "service_a"
          port_value: 8786
How is my Envoy instance (ClusterManager?) going to resolve service_a?
To whom is it going to send DNS queries?
Envoy has internal mechanisms for doing resolution, and these are all available through configuration. It looks like you're using the Envoy v2 APIs, so the relevant high-level config is in the cluster object here.
If you read that, you'll notice the hosts field references the type field. This type field tells Envoy how to handle discovery/resolution. The full details of that mechanism are here.
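For illustration, here is a sketch of the same cluster in the newer load_assignment form (the hosts shorthand from the v2 API shown above is deprecated). With type STRICT_DNS, Envoy resolves the hostname asynchronously against the system DNS configuration unless you set dns_resolvers on the cluster:
clusters:
  - name: service_a_cluster
    connect_timeout: 0.25s
    type: STRICT_DNS            # resolve service_a via DNS and use every returned IP
    lb_policy: ROUND_ROBIN
    load_assignment:            # replaces the deprecated "hosts" field
      cluster_name: service_a_cluster
      endpoints:
        - lb_endpoints:
            - endpoint:
                address:
                  socket_address:
                    address: service_a
                    port_value: 8786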

Elastic Cloud on Kubernetes change config of the server

I'm running an ECK cluster with Rancher 2. There are 3 nodes: 2 for Elasticsearch, 1 for Kibana.
I want to change the Elasticsearch server configuration through the operator, for example to disable SSL communication.
What is the right way to do it? Mounting a config file from the host? Please give me some ideas.
Quoting the documentation:
You can explicitly disable TLS for Kibana, APM Server, Enterprise Search and the HTTP layer of Elasticsearch.
spec:
  http:
    tls:
      selfSignedCertificate:
        disabled: true
That is generally useful when you want to run ECK with Istio and let Istio manage TLS.
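As a minimal sketch, assuming an Elasticsearch resource managed by the operator (the name and version below are placeholders), the block goes directly under spec; the same http.tls block also works on the Kibana resource:
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart            # placeholder name
spec:
  version: 7.17.0             # placeholder version
  http:
    tls:
      selfSignedCertificate:
        disabled: true        # disables TLS on the HTTP layer only
  nodeSets:
    - name: default
      count: 3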
However, you cannot disable TLS for the transport communication (between the Elasticsearch nodes). For security reasons that is always enabled.
PS: For a highly available cluster, you'd want at least 3 Elasticsearch nodes. Having 2 doesn't help you: if one of them goes down, the other one will degrade as well, since Elasticsearch is built around a majority-based consensus protocol.

Nginx HTTPS->HTTP gets 403 "SSL required" error with Spring

The problem
I'm getting a 403 "SSL required" from Spring when trying to route through my ELB to the Kubernetes Nginx ingress controller.
Setup
My set up is as follows:
I've got an ELB (AWS) with ACM for my Kubernetes cluster (created by kops), which routes all requests to the
Nginx Ingress Controller, which in turn routes all requests according to the rules dictated in the
Ingress, which passes the traffic on to the
Service, which exposes port 80 and routes to port 8080 in the
Pods selected with the label "app=foobar" (described in a Deployment).
The Pods are running a Spring Boot Web App v2.1.3.
So basically:
https://foo.bar.com(:443) -> ingress -> http://foo.bar.svc.cluster.local:80
This works like a charm for everything, except Spring Boot.
For some reason, I keep getting 403 "SSL required" from Spring.
One note to keep in mind here: my Spring application does not have anything to do with SSL, and I don't want it to. For this example's purposes, these should be regular REST API requests, with SSL termination happening outside the container.
What I tried so far
Port-forwarding to the service itself and requesting - it works fine.
Disabling CSRF in WebSecurityConfigurerAdapter
Putting ingress annotation nginx.ingress.kubernetes.io/force-ssl-redirect=true - it gives out TOO_MANY_REDIRECTS error when I try it (instead of the 403)
Putting ingress annotation nginx.ingress.kubernetes.io/ssl-redirect=true - doesn't do anything
Putting ingress annotation nginx.ingress.kubernetes.io/enable-cors: "true" - doesn't do anything
Also nginx.ingress.kubernetes.io/ssl-passthrough: "true"
Also nginx.ingress.kubernetes.io/secure-backends: "true"
Also kubernetes.io/tls-acme: "true"
I tried a whole bunch of other stuff that I can't really remember right now
How it all looks in my cluster
The Nginx ingress controller annotations look like this (I'm using the official nginx ingress controller helm chart, with very few modifications other than this):
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "aws_acm_certificate_arn"
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
Ingress looks like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: foobar
  namespace: api
spec:
  rules:
    - host: foo.bar.com
      http:
        paths:
          - backend:
              serviceName: foobar
              servicePort: http
            path: /
Service looks like this:
apiVersion: v1
kind: Service
metadata:
  name: foobar
  namespace: api
spec:
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    app: foobar
What I think the problem is
My hunch is that it's something with the X-Forwarded headers and Spring doing its magic behind the scenes, trying to be smart and deciding that I need SSL based on some headers without me explicitly asking for it. But I haven't figured it out yet.
I searched far and wide for a solution, but I couldn't find anything to ease my pain... hope you'll be able to help!
Edit
I found out that my current setup (without k8s and nginx) works fine: the ELB passes X-Forwarded-Port: 443 and X-Forwarded-Proto: https. But on my k8s cluster with nginx, I put in a listener client that dumps all the headers, and they come through as X-Forwarded-Port: 80 and X-Forwarded-Proto: http.
Thanks to all the people who helped out; I actually found the answer.
Within the code there were validations that all requests must come from a secure source, and the Nginx Ingress Controller changed these headers (X-Forwarded-Proto and X-Forwarded-Port) because SSL was terminated at the ELB and the traffic was handed to the ingress controller as plain HTTP.
To fix that I did the following:
Added use-proxy-protocol: true to the config map, which passed the correct headers but produced errors about broken connections (I don't remember the exact error right now; I'll edit this answer later if anyone asks for it).
To fix these errors I added the following to the nginx ingress controller annotations configuration:
service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
This made sure that all traffic uses the proxy protocol, and I also had to change the backend-protocol from HTTP to TCP.
Doing this ensured that all requests routed through the ELB preserve their original X-Forwarded headers and are passed on to the Nginx Ingress Controller, which in turn passes them on to my applications that require these headers.
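For reference, here is a sketch of where those two pieces can live when using the ingress-nginx helm chart; the values keys below are assumptions based on that chart's layout, so adjust them to your chart version:
controller:
  config:                     # rendered into the controller's ConfigMap
    use-proxy-protocol: "true"
  service:                    # annotations applied to the LoadBalancer Service (the ELB)
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "aws_acm_certificate_arn"
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
      service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"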

Is there a way to provide a custom value other than ClientIP for sessionAffinity in Kubernetes?

First of all, requests go to a proxy service that I've implemented, and the service forwards requests to pods randomly without using sessionAffinity. I want to send requests to the same pod based on a custom value that I set in the request parameters of a POST request. I've used sessionAffinity in my service YAML.
Here's the service YAML with sessionAffinity:
apiVersion: v1
kind: Service
metadata:
  name: abcd-service
  namespace: ab-services
spec:
  ports:
    - name: http
      protocol: TCP
      port: ****
      targetPort: ****
      nodePort: *****
  selector:
    app: abcd-pod
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 600
  type: NodePort
Now the problem is that when requests are sent by multiple clients from the same IP address, all requests are directed to a single pod rather than spread across the replicas, causing uneven load balancing. But I don't want requests to be forwarded randomly either. I want all requests, from the same client or different clients, to be forwarded based on the custom value I set in the POST request, not by client IP, considering that the client IP resolves to the source IP of each request.
As you can read here, it currently supports only ClientIP and None values.
sessionAffinity string Supports "ClientIP" and "None". Used to maintain session affinity. Enable client IP based session affinity.
Must be ClientIP or None. Defaults to None. More info:
https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies
Unfortunately, there are no other values allowed.
