Kong error using deck sync - service that already exists - continuous-integration

I'm using deck in a CI pipeline to sync configurations to Kong from a declarative yaml file, like this:
_format_version: "1.1"
_info:
  defaults: {}
  select_tags:
  - ms-data-export
services:
- connect_timeout: 60000
  enabled: true
  host: <the-host-name>
  name: data-export-api
  path: /api/download
  port: <the-port>
  protocol: http
  read_timeout: 60000
  retries: 5
  routes:
  - name: data-export
    https_redirect_status_code: 426
    path_handling: v0
    preserve_host: false
    regex_priority: 0
    request_buffering: true
    response_buffering: true
    strip_path: true
    paths:
    - /api/download
    protocols:
    - http
    plugins:
    - config:
        bearer_only: "yes"
        client_id: kong
        ...
...
The error occurs while running deck sync --kong-addr <kong-gateway> -s <the-above-yaml-file>, even when there are no actual changes to sync from the file (because the particular service already exists):
creating service data-export-api
Summary:
Created: 0
Updated: 0
Deleted: 0
Error: 1 errors occurred:
while processing event: {Create} service data-export-api failed: HTTP status 409 (message: "UNIQUE violation detected on '{name=\"data-export-api\"}'")
data-export-api is the name of a service that already exists in Kong, yet deck tries to create it.
Is there a way to avoid this error?
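One thing worth checking (an assumption on my part, since the tags of the existing service are not shown): with select_tags set, deck only considers entities that carry those tags, so if data-export-api already exists in Kong but without the ms-data-export tag, deck will not see it, will try to create it again, and will hit the 409. Dumping the filtered state makes this easy to verify:

deck dump --kong-addr <kong-gateway> --select-tag ms-data-export -o current.yaml

If the service is missing from current.yaml, adding the ms-data-export tag to the existing service (or doing a one-off sync without select_tags) should let deck match and update it instead of recreating it.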

Related

Alertmanager Webhook configuration for multiple alertId under same receiver

I am currently working with Alertmanager and trying to handle multiple alerts from Prometheus with different IDs under the same receiver:
global:
  resolve_timeout: 5m
route:
  group_by: ['alertname']
  group_wait: 10s
  group_interval: 10s
  receiver: 'web.hook'
  routes:
  - receiver: "web.hook"
    continue: true
  - receiver: "abc.hook"
    match:
      id: 1234567
      severity: CRITICAL
    continue: true
receivers:
- name: 'abc.hook'
  webhook_configs:
  - url: 'http://localhost:8080/services/alert'
- name: 'web.hook'
  webhook_configs:
  - url: 'http://localhost:8005/'
inhibit_rules:
- source_match:
    severity: 'critical'
  target_match:
    severity: 'warning'
  equal: ['alertname', 'dev', 'instance']
So in the above configuration, under the route section, we have a routes list that contains a receiver with the name "abc.hook".
I have two alerts with different IDs (1234567 and 8765432):
- receiver: "abc.hook"
  match:
    id: 1234567
    severity: CRITICAL
  continue: true
Is it possible to allow both alerts with different IDs (1234567 and 8765432) under the receiver mentioned above?
I tried searching for something like this on Google but found nothing helpful.
I also tried something like this:
- receiver: "abc.hook"
  match:
    id: 1234567|8765432
    severity: CRITICAL
  continue: true
Any helpful information on how to achieve the above scenario will be appreciated.
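For what it's worth, Alertmanager routes support regex matching on label values via match_re (plain match is an exact string match, which is why id: 1234567|8765432 under match does not work). A minimal sketch of that route, keeping the rest of the configuration unchanged:

- receiver: "abc.hook"
  match_re:
    id: 1234567|8765432
  match:
    severity: CRITICAL
  continue: true

Alertmanager anchors match_re expressions, so this matches alerts whose id label is exactly 1234567 or 8765432.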

How to route requests to a dynamic endpoint on kong api-gateway

I have a service named alpha (created using python-django) that runs on http://127.0.0.1:9000 and has these two endpoints:
/health returns {"health": "OK"} status 200
/codes/<str:code> returns {"code": code} status 200
I also have a Kong API gateway in DB-less declarative mode that runs on localhost port 80.
In kong.yaml I have two services:
services:
- name: local-alpha-health
  url: http://host.docker.internal:9000/health
  routes:
  - name: local-alpha-health
    methods:
    - GET
    paths:
    - /alpha/health
    strip_path: true
- name: local-alpha-code
  url: http://host.docker.internal:9000/code/ # HOW TO WRITE THIS PART???
  routes:
  - name: local-alpha-code
    methods:
    - GET
    paths:
    - /alpha/code/(?<appcode>\d+) # Is this right???
    strip_path: true
If I send a GET request to http://127.0.0.1/alpha/health, it returns {"health": "OK"} with status 200, which shows Kong is working.
I want to send a request such as http://127.0.0.1/alpha/code/123 and receive {"code": 123} with status 200, but I don't know how to set up the kong.yaml file to do this. If I send a request to http://127.0.0.1/alpha/code/123, I get a 404 (from the alpha Django application), which means Kong is routing the request to the alpha service; but if I send a request to http://127.0.0.1/alpha/code/abc, I get {"message": "no Route matched with those values"}, which shows the regex is working.
I could do this
services:
- name: local-alpha-health
  url: http://host.docker.internal:9000/
  routes:
  - name: local-alpha-health
    methods:
    - GET
    paths:
    - /alpha
    strip_path: true
Then a request sent to http://127.0.0.1/alpha/code/123 would go to http://127.0.0.1:9000/code/123, but I cannot constrain it with a regex.
Any idea how to route requests to a dynamic endpoint on the Kong API gateway?
This content seems related, but I cannot figure out how to set it up:
https://docs.konghq.com/gateway-oss/2.5.x/proxy/
Note that a request like http://127.0.0.1/alpha/code/abc will indeed not match the rule you have added, because of the \d+ (which matches one or more digits). Also, http://127.0.0.1/alpha/code/123 will reach the upstream as a request to /code/ without the 123, since you have strip_path set to true.
I have tested your example with some minor tweaks to proxy to a local httpbin service, which has a similar endpoint (/status/<code>).
Start a local httpbin service:
$ docker run --rm -d -p "8080:80" kennethreitz/httpbin
Start Kong with the following config:
_format_version: "2.1"
services:
- name: local-alpha-code
  url: http://localhost:8080
  routes:
  - name: local-mockbin-status
    methods:
    - GET
    paths:
    - /status/(?<appcode>\d+)
    strip_path: false
Note that strip_path is set to false, so the entire matching path is proxied to the upstream.
Test it out with:
$ http :8000/status/200
HTTP/1.1 200 OK
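Applied back to the alpha service from the question, the same pattern would look roughly like the sketch below (my own adaptation, not part of the tested answer): keep the regex in the route path, set strip_path to false, and point the service at the bare host, so http://127.0.0.1/alpha/code/123 is proxied upstream as /alpha/code/123. The Django URL configuration would then need to accept the /alpha prefix, or a path-rewriting plugin such as request-transformer could remove it.

services:
- name: local-alpha-code
  url: http://host.docker.internal:9000
  routes:
  - name: local-alpha-code
    methods:
    - GET
    paths:
    - /alpha/code/(?<appcode>\d+)
    strip_path: false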

How to sms with prometheus/alertmanager

I have two problems that I can't solve, and I don't know whether I'm missing something or not.
Here is my Alertmanager configuration. I would like to receive alerts via SMS or via Pushover, but it does not work.
global:
  resolve_timeout: 5m
route:
  group_by: ['critical']
  group_wait: 30s
  group_interval: 180s
  repeat_interval: 300s
  receiver: myIT
receivers:
- name: 'myIT'
  email_configs:
  - to: me#myfirm
    from: me#myfirm
    smarthost: ssl0.ovh.net:587
    auth_username: 'me#myfirm'
    auth_identity: 'me#myfirm'
    auth_password: 'ZZZZZZZZZZZZZZZZZ'
- name: Teams
  webhook_configs:
  - url: 'https://teams.microsoft.com/l/channel/19%3xxxxxxxxyyyyuxxxab%40thread.tacv2/Alertes?groupId=xxxxxxxxyyyyuxxx0&tenantId=3caa0abd-0122-496f-a6cf-73cb6d3aaadd'
    send_resolved: true
- name: Sms
  webhook_configs:
  - url: 'https://www.ovh.com/cgi-bin/sms/http2sms.cgi?&account=sms-XXXXXXX-1&login=XXXXX&password=XXXXXXX&from=XXXXXX&to=0123456789&message=Alert '
    send_resolved: true
- name: pushover
  pushover_configs:
  - user_key: xxxxxxxxyyyyuxxx
    token: xxxxxxxxyyyyuxxx
For the Pushover part, it works via my Grafana (and even then, not all the time). For http2sms, it works every time via a browser.
But neither works under Alertmanager. And I would like to be able to differentiate the alerts: simple warnings in Teams or by email, for example, and critical ones by SMS.
Did I forget to install something?
Does anyone have a configuration that could match this need? Thank you
Well, I found it.
route:
  group_by: ['critical']
  group_wait: 30s
  group_interval: 180s
  repeat_interval: 300s
  receiver: myIT
receivers:
- name: 'myIT'
  email_configs:
  - to: me#myfirm
    from: me#myfirm
    smarthost: ssl0.ovh.net:587
    auth_username: 'me#myfirm'
    auth_identity: 'me#myfirm'
    auth_password: 'ZZZZZZZZZZZZZZZZZ'
  webhook_configs:
  - url: 'https://teams.microsoft.com/l/channel/19%3xxxxxxxxyyyyuxxxab%40thread.tacv2/Alertes?groupId=xxxxxxxxyyyyuxxx0&tenantId=3caa0abd-0122-496f-a6cf-73cb6d3aaadd'
    send_resolved: true
  pushover_configs:
  - user_key: xxxxxxxxyyyyuxxx
    token: xxxxxxxxyyyyuxxx
It works fine like that.
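For the other part of the question, sending warnings to Teams or email and critical alerts by SMS, the usual approach is child routes that match on the severity label. A sketch, assuming the alert rules set severity to warning or critical and reusing the Sms and Teams receivers from the first configuration:

route:
  receiver: myIT
  routes:
  - receiver: Sms
    match:
      severity: critical
    continue: true
  - receiver: Teams
    match:
      severity: warning

Alerts labelled critical go to the Sms receiver (and, because of continue: true, keep being evaluated), warnings go to Teams, and anything else falls back to the default myIT receiver.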

Add endpoint as the receiver in the prometheus alert configuration

I am trying to have my Spring Boot application endpoints triggered by alerts when an event defined in the Prometheus alert rules fires, so I want to add my application endpoints as a receiver for alerts from the Prometheus Alertmanager. Can anyone please suggest how to configure an endpoint as a receiver for this receiver label, instead of any other push notifier?
- receiver: 'frontend-pager'
  group_by: [product, environment]
  matchers:
  - team="frontend"
I think a 'webhook receiver' can help you. For more information, refer to the doc: https://prometheus.io/docs/alerting/latest/configuration/#webhook_config
This is an example of a webhook alert based on blackbox_exporter's metric scraping.
Prometheus rule setting
You need to create rule(s) to trigger the alert; a rule named 'http_health_alert' is defined here as an example.
groups:
- name: http
  rules:
  - alert: http_health_alert
    expr: probe_success == 0
    for: 3m
    labels:
      type: http_health
    annotations:
      description: Health check for {{$labels.instance}} is down
Alertmanager setting
'match' is set to http_health_alert, and the alert will be sent to 'http://example.com/alert/receiver' via an HTTP POST (an endpoint I assume you will prepare in advance).
The alert is posted in JSON format to the configured endpoint 'http://example.com/alert/receiver' (see the payload sketch after the configuration below), and you can customize different receiving methods or receiving information in the endpoint/program for different label contents.
global:
route:
  group_by: [alertname, env]
  group_wait: 30s
  group_interval: 3m
  repeat_interval: 1h
  routes:
  - match:
      alertname: http_health_alert
    group_by: [alertname, env]
    group_wait: 30s
    group_interval: 3m
    repeat_interval: 1h
    receiver: webhook_receiver
receivers:
- name: webhook_receiver
  webhook_configs:
  - send_resolved: true
    url: http://example.com/alert/receiver
- name: other_receiver
  email_configs:
  - send_resolved: true
    to: xx
    from: xxx
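For reference, the body Alertmanager POSTs to the webhook URL follows the documented webhook payload format, roughly like this abbreviated sketch (field values are illustrative):

{
  "version": "4",
  "groupKey": "{}:{alertname=\"http_health_alert\"}",
  "status": "firing",
  "receiver": "webhook_receiver",
  "groupLabels": { "alertname": "http_health_alert" },
  "commonLabels": { "alertname": "http_health_alert", "type": "http_health" },
  "commonAnnotations": { "description": "Health check for 10.0.0.1:80 is down" },
  "externalURL": "http://alertmanager:9093",
  "alerts": [
    {
      "status": "firing",
      "labels": { "alertname": "http_health_alert", "instance": "10.0.0.1:80", "type": "http_health" },
      "annotations": { "description": "Health check for 10.0.0.1:80 is down" },
      "startsAt": "2023-01-01T00:00:00Z",
      "endsAt": "0001-01-01T00:00:00Z",
      "generatorURL": "http://prometheus:9090/graph?..."
    }
  ]
}

A Spring Boot endpoint would map this JSON onto a DTO and branch on the labels or status as needed.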

An error occurred: IamRoleLambdaExecution - Maximum policy size of 10240 bytes exceeded for role

I am using serverless-plugin-split-stacks in serverless.yml and getting this error:
An error occurred: IamRoleLambdaExecution - Maximum policy size of 10240 bytes exceeded for role Vkonnect-dev-ap-south-1-lambdaRole (Service: AmazonIdentityManagement; Status Code: 409; Error Code: LimitExceeded; Request ID: 51920d55-4b81-4b6c-99f1-d9f0ba087cc2; Proxy: null).
When I use serverless-plugin-custom-roles I get this error:
The CloudFormation template is invalid: Circular dependency between resources: [GenerateOtpDocLambdaPermissionApiGateway, DoctorUnderscorelistLambdaPermissionApiGateway .......]
serverless.yml
service: Vkonnect # Name of your App
provider:
  name: aws
  runtime: nodejs14.x # Node JS version
  memorySize: 128
  timeout: 10
  stage: dev
  region: ap-south-1 # AWS region
  deploymentBucket:
    name: vkonnectlayers
  # iamRoleStatements:
  #   - Effect: "Allow"
  #     Action:
  #       - "s3:*"
  #     Resource:
  #     # NOTE you can't refer to the LogicalID of S3Bucket, otherwise
  #     # there will be a circular reference in CloudFormation
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "cloudformation:*"
        - "codecommit:*"
        - "apigateway:*"
        - "execute-api:Invoke"
        - "execute-api:ManageConnections"
        - "cloudformation:DescribeStacks"
        - "cloudformation:ListStackResources"
        - "cloudwatch:ListMetrics"
        - "cloudwatch:GetMetricData"
        - "ec2:DescribeSecurityGroups"
        - "ec2:DescribeSubnets"
        - "ec2:DescribeVpcs"
        - "kms:ListAliases"
        - "iam:GetPolicy"
        - "iam:GetPolicyVersion"
        - "iam:GetRole"
        - "iam:GetRolePolicy"
        - "iam:ListAttachedRolePolicies"
        - "iam:ListRolePolicies"
        - "iam:ListRoles"
        - "lambda:*"
        - "logs:DescribeLogGroups"
        - "states:DescribeStateMachine"
        - "states:ListStateMachines"
        - "tag:GetResources"
        - "xray:GetTraceSummaries"
        - "xray:BatchGetTraces"
      Resource:
        - "*"
        - "arn:aws:apigateway:*::/*"
        - "arn:aws:events:*:*:rule/codecommit*"
        - "arn:aws:logs:*:*:log-group:/aws/lambda/*"
plugins:
  - serverless-offline
  - serverless-layers
  - serverless-plugin-split-stacks
  - serverless-plugin-custom-roles
  # - serverless-nested-stack
package:
  individually: true
  exclude:
    - ./**
custom:
  splitStacks:
    perFunction: false
    perType: false
    perGroupFunction: true
    nestedStackCount: 5
  serverless-layers: # All Layers
    - moment:
        name: moment
        excludeDevDependencies: false
        individually: true
        dependenciesPath: ./layers/moment-layer/package.json
package:
  patterns:
    - /**
    - "!node_modules/**"
First create an IAM role in your AWS account with full access to the services that you want (a CLI sketch for creating such a role follows the config below), then do the following:
serverless.yaml
provider:
  name: aws
  runtime: nodejs14.x
  memorySize: 128
  timeout: 5
  stage: prod
  region: us-east-1 # AWS region
  versionFunctions: false
  deploymentBucket:
    name: XXXXXX
  iam:
    role: arn:aws:iam::XXXXXX:role/full # your role arn
plugins:
  - serverless-offline
  - serverless-layers
  - serverless-plugin-split-stacks
package:
  individually: true
  exclude:
    - ./**
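The "create an IAM role first" step could be done in the console, or roughly like this with the AWS CLI (a sketch: the role name, attached policy, and trust-policy file are placeholders, and a policy narrower than AdministratorAccess is preferable):

# trust-policy.json lets Lambda assume the role:
# { "Version": "2012-10-17",
#   "Statement": [ { "Effect": "Allow",
#                    "Principal": { "Service": "lambda.amazonaws.com" },
#                    "Action": "sts:AssumeRole" } ] }
aws iam create-role --role-name full --assume-role-policy-document file://trust-policy.json
aws iam attach-role-policy --role-name full --policy-arn arn:aws:iam::aws:policy/AdministratorAccess

The resulting role ARN is what goes into provider.iam.role above.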
For policy size limit error:
AWS has a limit on policy size. Check this article for reference: https://aws.amazon.com/premiumsupport/knowledge-center/iam-increase-policy-size/
For circular dependency error:
Check this AWS blog: https://aws.amazon.com/blogs/infrastructure-and-automation/handling-circular-dependency-errors-in-aws-cloudformation/
AWS sets limits on a few of the resources like IAM, S3, etc., and resources should not exceed whatever limit is set. You can submit a request to AWS Support to increase the limit.
Before that, you can go to Service Quotas in AWS to check the limits for AWS resources. Based on that, you can decide whether to submit a request to AWS or follow the above document to reduce the size.
