I am trying to configure Alertmanager with both an email receiver and a Slack webhook receiver, but only one of them is working: Slack.

kind: ConfigMap
apiVersion: v1
metadata:
  name: alertmanager-config
  namespace:
data:
  alertmanager.yml: |
    global:
      resolve_timeout: 5m
      smtp_from: from-email#email.com
      smtp_smarthost: smtp.eample.com:25
      smtp_auth_username: key
      smtp_auth_password: user#123
    route:
      group_by: ['alertname', 'priority']
      group_wait: 30s
      group_interval: 30s
      repeat_interval: 1h
      receiver: "slack"
      routes:
        - receiver: "slack"
          match_re:
            severity: critical|warning
          continue: true
        - receiver: "email"
          match:
            status: critical|warning
    receivers:
      - name: "slack"
        slack_configs:
          - api_url:
            send_resolved: true
            channel: 'monitoring'
            http_config:
              basic_auth:
                username: user
                password: user#123
      - name: "email"
        email_configs:
          - to: email#123#gmail.com
            send_resolved: true
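One likely cause, sketched below: Alertmanager's match does exact string matching, so status: critical|warning only matches a label whose literal value is "critical|warning", and the email route keys on a status label while the Slack route keys on severity. If the intent is for both receivers to fire on the same alerts, the email route would need match_re on severity as well. This is only a sketch of a possible fix; the label names are assumptions taken from the config above:

    route:
      receiver: "slack"
      routes:
        - receiver: "slack"
          match_re:
            severity: critical|warning
          continue: true          # keep evaluating sibling routes so "email" can also fire
        - receiver: "email"
          match_re:               # match_re instead of match, because critical|warning is a regex
            severity: critical|warning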

Does Istio not support wss?

Although I have read many articles, few people seem to discuss this problem and I have not found any solutions. I would like to know whether wss is simply not supported when Istio is used as the entry point. From my tests it seems that wss really cannot run through Istio, while plain ws works fine. I tested the following Gateway and VirtualService combinations.
I hope someone can discuss this with me, even if it turns out that Istio doesn't support wss. Thanks!
1. Using tls.mode: PASSTHROUGH
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: test-ws-gw
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls:
        mode: PASSTHROUGH
      hosts:
        - "test.ws.com"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: test-ws-vs
  namespace: ws-test
spec:
  hosts:
    - test.ws.com
  gateways:
    - istio-system/test-ws-gw
  tls:
    - match:
        - port: 443
          sniHosts:
            - "test.ws.com"
      route:
        - destination:
            port:
              number: 9400
            host: ws-svc.ws-test.svc.cluster.local
2. Using tls.mode: SIMPLE
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: test-ws-gw
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls:
        mode: SIMPLE
        credentialName: ingress-cert-ws
      hosts:
        - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: test-ws-vs
spec:
  hosts:
    - "test.ws.com"
  gateways:
    - test-ws-gw
  http:
    - match:
        - uri:
            prefix: /
      route:
        - destination:
            port:
              number: 9400
            host: ws-svc.ws-test.svc.cluster.local
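One thing worth checking, sketched below: with SIMPLE the gateway terminates TLS, so a wss:// client connection becomes a plain WebSocket upgrade towards the backend, and Envoy forwards upgrades over ordinary http routes; with PASSTHROUGH the backend itself must serve TLS on port 9400. In the SIMPLE case the mesh also has to treat port 9400 as HTTP, which in older Istio versions is driven by the Service port name. A minimal sketch of such a Service, assuming the ws-svc name and port from the VirtualServices above (the port name and pod label are assumptions, not taken from your setup):

apiVersion: v1
kind: Service
metadata:
  name: ws-svc
  namespace: ws-test
spec:
  selector:
    app: ws-server            # assumed pod label
  ports:
    - name: http-ws           # "http-" prefix lets Istio select the HTTP protocol for 9400
      port: 9400
      targetPort: 9400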

ECK Filebeat Daemonset Forwarding To Remote Cluster

I wish to forward logs from remote EKS clusters to a centralised EKS cluster hosting ECK.
Versions in use:
EKS v1.20.7
Elasticsearch v7.7.0
Kibana v7.7.0
Filebeat v7.10.0
The setup uses an AWS NLB to forward requests to an Nginx ingress, with host-based routing.
When the DNS lookup for Elasticsearch is tested from Filebeat (filebeat test output), the request validates.
But the logs for Filebeat are telling a different story.
2021-10-05T10:39:00.202Z ERROR [publisher_pipeline_output]
pipeline/output.go:154 Failed to connect to backoff(elasticsearch(https://elasticsearch.dev.example.com:9200)):
Get "https://elasticsearch.dev.example.com:9200": Bad Request
The Filebeat agents can connect to the remote Elasticsearch via the NLB when using a curl request.
The config is below. NB: dev.example.com is the remote cluster hosting ECK.
app:
  name: "filebeat"
configmap:
  enabled: true
  filebeatConfig:
    filebeat.yml: |-
      filebeat.autodiscover:
        providers:
          - type: kubernetes
            node: ${NODE_NAME}
            hints.enabled: true
            templates:
              - config:
                  - type: container
                    paths:
                      - /var/lib/docker/containers/*/${data.kubernetes.container.id}-json.log
                    exclude_lines: ["^\\s+[\\-`('.|_]"]
                    processors:
                      - drop_event.when.not.or:
                          - contains.kubernetes.namespace: "apps-"
                          - equals.kubernetes.namespace: "cicd"
                      - decode_json_fields:
                          fields: ["message"]
                          target: ""
                          process_array: true
                          overwrite_keys: true
                      - add_fields:
                          fields:
                            kubernetes.cluster.name: dev-eks-cluster
                          target: ""
      processors:
        - add_cloud_metadata: ~
        - add_host_metadata: ~
      cloud:
        id: '${ELASTIC_CLOUD_ID}'
      cloud:
        auth: '${ELASTIC_CLOUD_AUTH}'
      output:
        elasticsearch:
          enabled: true
          hosts: "elasticsearch.dev.example.com"
          username: '${ELASTICSEARCH_USERNAME}'
          password: '${ELASTICSEARCH_PASSWORD}'
          protocol: https
          ssl:
            verification_mode: "none"
          headers:
            Host: "elasticsearch.dev.example.com"
          proxy_url: "https://example.elb.eu-west-2.amazonaws.com"
          proxy_disable: false
daemonset:
  enabled: true
  version: 7.10.0
  image:
    repository: "docker.elastic.co/beats/filebeat"
    tag: "7.10.0"
    pullPolicy: Always
  extraenvs:
    - name: ELASTICSEARCH_HOST
      value: "https://elasticsearch.dev.example.com"
    - name: ELASTICSEARCH_PORT
      value: "9200"
    - name: ELASTICSEARCH_USERNAME
      value: "elastic"
    - name: ELASTICSEARCH_PASSWORD
      value: "remote-cluster-elasticsearch-es-elastic-user-password"
  resources:
    limits:
      memory: 200Mi
    requests:
      cpu: 100m
      memory: 100Mi
clusterrolebinding:
  enabled: true
  namespace: monitoring
clusterrole:
  enabled: true
serviceaccount:
  enabled: true
  namespace: monitoring
deployment:
  enabled: false
  configmap:
    enabled: false
Any tips or suggestions on how to enable Filebeat forwarding would be much appreciated :-)
#1 Missing ports:
Even with the ports added as suggested, Filebeat errors with:
2021-10-06T08:34:41.355Z ERROR [publisher_pipeline_output] pipeline/output.go:154 Failed to connect to backoff(elasticsearch(https://elasticsearch.dev.example.com:9200)): Get "https://elasticsearch.dev.example.com:9200": Bad Request
...using an AWS NLB to forward requests to an Nginx ingress, with host-based routing
How about unsetting proxy_url and proxy_disable, and setting hosts: ["<nlb url>:<nlb listener port>"]?
The final working config:
app:
  name: "filebeat"
configmap:
  enabled: true
  filebeatConfig:
    filebeat.yml: |-
      filebeat.autodiscover:
        providers:
          - type: kubernetes
            node: ${NODE_NAME}
            hints.enabled: true
            templates:
              - config:
                  - type: container
                    paths:
                      - /var/lib/docker/containers/*/${data.kubernetes.container.id}-json.log
                    exclude_lines: ["^\\s+[\\-`('.|_]"]
                    processors:
                      - drop_event.when.not.or:
                          - contains.kubernetes.namespace: "apps-"
                          - equals.kubernetes.namespace: "cicd"
                      - decode_json_fields:
                          fields: ["message"]
                          target: ""
                          process_array: true
                          overwrite_keys: true
                      - add_fields:
                          fields:
                            kubernetes.cluster.name: qa-eks-cluster
                          target: ""
      processors:
        - add_cloud_metadata: ~
        - add_host_metadata: ~
      cloud:
        id: '${ELASTIC_CLOUD_ID}'
      cloud:
        auth: '${ELASTIC_CLOUD_AUTH}'
      output:
        elasticsearch:
          enabled: true
          hosts: ["elasticsearch.dev.example.com:9200"]
          username: '${ELASTICSEARCH_USERNAME}'
          password: '${ELASTICSEARCH_PASSWORD}'
          protocol: https
          ssl:
            verification_mode: "none"
daemonset:
  enabled: true
  version: 7.10.0
  image:
    repository: "docker.elastic.co/beats/filebeat"
    tag: "7.10.0"
    pullPolicy: Always
  extraenvs:
    - name: ELASTICSEARCH_HOST
      value: "https://elasticsearch.dev.example.com"
    - name: ELASTICSEARCH_PORT
      value: "9200"
    - name: ELASTICSEARCH_USERNAME
      value: "elastic"
    - name: ELASTICSEARCH_PASSWORD
      value: "remote-cluster-elasticsearch-es-elastic-user-password"
  resources:
    limits:
      memory: 200Mi
    requests:
      cpu: 100m
      memory: 100Mi
clusterrolebinding:
  enabled: true
  namespace: monitoring
clusterrole:
  enabled: true
serviceaccount:
  enabled: true
  namespace: monitoring
deployment:
  enabled: false
  configmap:
    enabled: false
In addition, the following changes were needed:
NLB:
Add a listener on port 9200 forwarding to the Nginx ingress controller for HTTPS
SG:
Open up port 9200 on the EKS worker nodes
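For reference, host-based routing from the Nginx ingress to the ECK-managed Elasticsearch could look roughly like the sketch below; the resource names, namespace, and annotations are assumptions, since the actual Ingress was not shown:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: elasticsearch
  namespace: elastic-system                  # assumed namespace
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"   # ECK serves HTTPS on 9200 by default
spec:
  rules:
    - host: elasticsearch.dev.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: elasticsearch-es-http  # assumed; ECK's default pattern is <cluster-name>-es-http
                port:
                  number: 9200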

Filebeat: How to export logs of specific pods

This is my filebeat config map.
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.inputs:
      - type: container
        paths:
          - /var/log/containers/*.log
        processors:
          - add_kubernetes_metadata:
              host: $${NODE_NAME}
              matchers:
                - logs_path:
                    logs_path: "/var/log/containers/"
    setup.ilm.enabled: false
    processors:
      - add_cloud_metadata:
      - add_host_metadata:
    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
This sends logs from every pod to AWS Elasticsearch. How can I restrict it to send logs only from specific pods, selected by name and/or label?
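A minimal sketch of one way to do this, keeping the container input above: add a drop_event processor that discards everything not matching the pods of interest (my-app and my-pod-abc are hypothetical values):

    processors:
      - drop_event:
          when:
            not:
              or:
                - equals:
                    kubernetes.labels.app: "my-app"      # hypothetical label
                - equals:
                    kubernetes.pod.name: "my-pod-abc"    # hypothetical pod name

The kubernetes.* fields are populated by the add_kubernetes_metadata processor already in the config. Autodiscover with per-pod conditions (filebeat.autodiscover templates, as in the ECK example earlier) is another common way to scope collection to labelled pods.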

Spring Boot (JHipster) pod cannot connect to Cloud SQL

I have tried to connect from a pod (JHipster) to a Google Cloud SQL instance, but I have not been successful.
My pod is stuck in CrashLoopBackOff because it cannot connect to Cloud SQL. Error:
org.postgresql.util.PSQLException: Connection to localhost:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
  at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:280)
  at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49)
  ...
ConfigServletWebServerApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'liquibase' defined in class path resource [cl/databin/invoicing/folio/config/LiquibaseConfiguration.class]: Invocation of init method failed; nested exception is liquibase.exception.DatabaseException: org.postgresql.util.PSQLException: Connection to localhost:5432 refused.
My folio-deployment.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: folio
  namespace: jhipster
spec:
  replicas: 2
  selector:
    matchLabels:
      app: folio
      version: "v1"
  template:
    metadata:
      labels:
        app: folio
        version: "v1"
    spec:
      containers:
        - name: folio-app
          image: skilledboy/folio:v1
          env:
            - name: SPRING_PROFILES_ACTIVE
              value: prod
            - name: JHIPSTER_SECURITY_AUTHENTICATION_JWT_BASE64_SECRET
              valueFrom:
                secretKeyRef:
                  name: jwt-secret
                  key: secret
            - name: SPRING_DATASOURCE_URL
              value: jdbc:postgresql://localhost:5432/folio
            - name: POSTGRES_DB_USER
              value: user
            - name: POSTGRES_DB_PASSWORD
              value: password1
            - name: SPRING_SLEUTH_PROPAGATION_KEYS
              value: "x-request-id,x-ot-span-context"
            - name: JAVA_OPTS
              value: " -Xmx256m -Xms256m"
          resources:
            requests:
              memory: "256Mi"
              cpu: "500m"
            limits:
              memory: "512Mi"
              cpu: "1"
          ports:
            - name: http
              containerPort: 8081
          readinessProbe:
            httpGet:
              path: /folio/management/health
              port: http
            initialDelaySeconds: 20
            periodSeconds: 15
            failureThreshold: 6
          livenessProbe:
            httpGet:
              path: /folio/management/health
              port: http
            initialDelaySeconds: 120
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.11
          command: ["/cloud_sql_proxy",
                    "-instances=invo-project-233618:us-central1:folios=tcp:5432",
                    "-credential_file=/secrets/cloudsql/credentials.json"]
          securityContext:
            runAsUser: 2   # non-root user
            allowPrivilegeEscalation: false
          volumeMounts:
            - name: cloudsql-oauth-credential
              mountPath: /secrets/cloudsql
              readOnly: true
            - name: ssl-certs
              mountPath: /etc/ssl/certs
      volumes:
        - name: cloudsql-oauth-credential
          secret:
            secretName: cloudsql-oauth-credential
        - name: ssl-certs
          hostPath:
            path: /etc/ssl/certs
And this is the datasource configuration in my application-prod.yml:
datasource:
  type: com.zaxxer.hikari.HikariDataSource
  url: jdbc:postgresql://127.0.0.1:5432/folio
  username: ${POSTGRES_DB_USER}
  password: ${POSTGRES_DB_PASSWORD}
What am I doing wrong? Can someone give me an idea of what might be the problem? Thanks.
Your problem is that you are telling the Cloud SQL proxy to run with -credential_file=/secrets/cloudsql/credentials.json, but you haven't actually provided a file at /secrets/cloudsql/ for it to use. (The volume in your config is mounted at /etc/ssl/certs.)
It's also worth pointing out that the -credential_file flag is for using a service account key, while the -token flag is used for an OAuth token (it's unclear which you are trying to use).
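A minimal sketch of what providing the key file to the proxy could look like, assuming the service-account key is stored under the key credentials.json in the cloudsql-oauth-credential Secret (names follow the manifest above; adjust to your setup):

      containers:
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.11
          command: ["/cloud_sql_proxy",
                    "-instances=invo-project-233618:us-central1:folios=tcp:5432",
                    "-credential_file=/secrets/cloudsql/credentials.json"]
          volumeMounts:
            - name: cloudsql-oauth-credential
              mountPath: /secrets/cloudsql          # must match the directory in -credential_file
              readOnly: true
      volumes:
        - name: cloudsql-oauth-credential
          secret:
            secretName: cloudsql-oauth-credential   # e.g. created from a credentials.json key file

The Secret itself could be created with something like kubectl create secret generic cloudsql-oauth-credential --from-file=credentials.json=<path-to-key>.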

Configuration of Istio-Envoy with Consul

I'm trying to build a service mesh with Istio. Currently I have a Docker Compose setup with two REST services and one Envoy sidecar for each. If you send an HTTP request to serviceA, it is forwarded to serviceB, which returns the result. This works fine. Because the control plane is not implemented yet, the "mesh" is realized through the Envoy configuration. The config files look as follows:
ServiceA:
static_resources:
  listeners:
    - address:
        socket_address:
          address: 0.0.0.0
          port_value: 80
      filter_chains:
        - filters:
            - name: envoy.http_connection_manager
              config:
                codec_type: auto
                stat_prefix: ingress_http
                route_config:
                  name: local_route
                  virtual_hosts:
                    - name: service
                      domains:
                        - "*"
                      routes:
                        - match:
                            prefix: "/abc"
                          route:
                            prefix_rewrite: "/v1/abc"
                            host_rewrite: serviceB
                            cluster: service_ner
                http_filters:
                  - name: envoy.router
                    config: {}
  clusters:
    - name: service_abc
      connect_timeout: 0.25s
      type: logical_dns
      dns_lookup_family: V4_ONLY
      lb_policy: round_robin
      hosts:
        - socket_address:
            address: serviceB
            port_value: 80
admin:
  access_log_path: "/dev/null"
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 8081
ServiceB:
static_resources:
  listeners:
    - address:
        socket_address:
          address: 0.0.0.0
          port_value: 80
      filter_chains:
        - filters:
            - name: envoy.http_connection_manager
              config:
                codec_type: auto
                stat_prefix: ingress_http
                route_config:
                  name: local_route
                  virtual_hosts:
                    - name: service
                      domains:
                        - "*"
                      routes:
                        - match:
                            prefix: "/"
                          route:
                            cluster: local_service
                http_filters:
                  - name: envoy.router
                    config: {}
  clusters:
    - name: local_service
      connect_timeout: 0.25s
      type: strict_dns
      lb_policy: round_robin
      hosts:
        - socket_address:
            address: 127.0.0.1
            port_value: 5001
admin:
  access_log_path: "/dev/null"
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 8081
Now I want to introduce Istio as the control plane, and I'm trying to replace part of this configuration using Istio Pilot. Is it possible to override the route rules from the Envoy config files with Istio route rules? For example:
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: ruleSerA
spec:
  destination:
    name: serviceA
  precedence: 2
  route:
    - labels:
        version: v1
---
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: ruleSerB
spec:
  destination:
    name: serviceB
  precedence: 2
  match:
    request:
      headers:
        uri:
          prefix: /abc
  rewrite:
    uri: /v1/abc
  route:
    - labels:
        version: v1
What else do I need to configure to connect the data plane with the control plane? So far, the Istio route rules have not affected the services at all.
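For reference, RouteRule belongs to the old config.istio.io/v1alpha2 API; in current Istio the same intent is expressed with a networking.istio.io/v1alpha3 VirtualService. A rough sketch of the serviceB rule in that form (the host name and subset label are carried over from the RouteRules above and are assumptions about your setup):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: serviceb-routes
spec:
  hosts:
    - serviceB
  http:
    - match:
        - uri:
            prefix: /abc
      rewrite:
        uri: /v1/abc
      route:
        - destination:
            host: serviceB
            subset: v1        # the subset would be defined in a matching DestinationRule

Also note that Pilot only programs Envoy proxies that are bootstrapped to use it as their xDS server; statically configured Envoys like the ones above will not pick up Istio route rules.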
Best regards, Martin
