How to parse a specific HTTP status error - elasticsearch

I have nginx-ingress-controller logs in Elasticsearch, but HTTP 503 (Service Unavailable) responses are not caught/parsed and never show up in the "status" field, while HTTP 200 and the other status codes work.
(Screenshot: Elasticsearch Discover, current state)
I already have these annotations configured in the nginx-ingress-controller deployment (the JSON-decoding processor they set up is sketched in plain YAML right after the list):
annotations:
co.elastic.logs/processors.0.dissect.ignore_failure: "true"
co.elastic.logs/processors.0.dissect.target_prefix: dissect
co.elastic.logs/processors.0.dissect.tokenizer: '%{levelandtimestamp} %{pid} %{class} %{message}'
co.elastic.logs/processors.0.dissect.when.regexp.message: ^[IWEF][0-9]{4}.*
co.elastic.logs/processors.1.copy_fields.fields.0.from: dissect.levelandtimestamp
co.elastic.logs/processors.1.copy_fields.fields.0.to: dissect.timestamp
co.elastic.logs/processors.1.copy_fields.ignore_missing: "true"
co.elastic.logs/processors.2.dissect.field: dissect.timestamp
co.elastic.logs/processors.2.dissect.ignore_failure: "true"
co.elastic.logs/processors.2.dissect.target_prefix: dissect
co.elastic.logs/processors.2.dissect.tokenizer: '%{time_normilized}'
co.elastic.logs/processors.2.dissect.trim_chars: IWEF
co.elastic.logs/processors.2.dissect.trim_values: left
co.elastic.logs/processors.3.timestamp.field: dissect.time_normilized
co.elastic.logs/processors.3.timestamp.ignore_missing: "true"
co.elastic.logs/processors.3.timestamp.layouts: 0102 15:04:05.999
co.elastic.logs/processors.3.timestamp.when.has_fields: dissect.time_normilized
co.elastic.logs/processors.4.drop_fields.fields: message
co.elastic.logs/processors.4.drop_fields.ignore_missing: "true"
co.elastic.logs/processors.4.drop_fields.when.regexp.message: ^[IWEF][0-9]{4}.*
co.elastic.logs/processors.5.rename.fields.0.from: dissect.message
co.elastic.logs/processors.5.rename.fields.0.to: message
co.elastic.logs/processors.5.rename.fields.1.from: dissect.class
co.elastic.logs/processors.5.rename.fields.1.to: class
co.elastic.logs/processors.5.rename.ignore_missing: "true"
co.elastic.logs/processors.5.rename.when.has_fields: dissect.message
co.elastic.logs/processors.6.add_fields.fields.level: INFO
co.elastic.logs/processors.6.add_fields.target: ""
co.elastic.logs/processors.7.add_fields.fields.level: ERROR
co.elastic.logs/processors.7.add_fields.target: ""
co.elastic.logs/processors.7.add_fields.when.contains.dissect.levelandtimestamp: E
co.elastic.logs/processors.8.add_fields.fields.level: WARN
co.elastic.logs/processors.8.add_fields.target: ""
co.elastic.logs/processors.8.add_fields.when.contains.dissect.levelandtimestamp: W
co.elastic.logs/processors.9.add_fields.fields.level: FATAL
co.elastic.logs/processors.9.add_fields.target: ""
co.elastic.logs/processors.9.add_fields.when.contains.dissect.levelandtimestamp: F
co.elastic.logs/processors.10.decode_json_fields.fields: message
co.elastic.logs/processors.10.decode_json_fields.max_depth: "1"
co.elastic.logs/processors.10.decode_json_fields.overwrite_keys: "true"
co.elastic.logs/processors.10.decode_json_fields.target: ""
co.elastic.logs/processors.11.timestamp.field: time
co.elastic.logs/processors.11.timestamp.layouts: "2006-01-02T15:04:05+00:00"
co.elastic.logs/processors.11.timestamp.when.has_fields: time
co.elastic.logs/processors.12.dissect.field: request_query
co.elastic.logs/processors.12.dissect.ignore_failure: "true"
co.elastic.logs/processors.12.dissect.target_prefix: ""
co.elastic.logs/processors.12.dissect.tokenizer: '%{request_method} %{request_uri} %{request_protocol}'
co.elastic.logs/processors.12.dissect.when.regexp.message: request_query
co.elastic.logs/processors.13.drop_fields.fields: message
co.elastic.logs/processors.13.drop_fields.ignore_missing: "true"
co.elastic.logs/processors.13.drop_fields.when.has_fields: request_query
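For reference, these co.elastic.logs/processors.* hints are expanded by Filebeat's Kubernetes autodiscover into ordinary processors. Processor 10 above, the one that turns the nginx JSON access line into fields such as status, corresponds roughly to the following snippet in a plain filebeat.yml (a minimal sketch for illustration, not copied from the deployment):

processors:
  - decode_json_fields:
      fields: ["message"]   # the raw nginx access log line
      max_depth: 1
      overwrite_keys: true
      target: ""            # put the decoded keys (status, request_query, ...) at the event root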
Filebeat.yaml - ConfigMap:
Data
====
filebeat.yml:
----
http:
enabled: true
host: localhost
port: 5066
filebeat.inputs:
- type: udp
max_message_size: 10MiB
host: "0.0.0.0:9999"
fields:
event_type: "vault-audit"
fields_under_root: true
processors:
- decode_json_fields:
fields: ["message"]
target: "vault"
process_array: true
overwrite_keys: false
add_error_key: true
- copy_fields:
fields:
- from: vault.response.data.username
to: vault.response.datainfo.username
fail_on_error: false
ignore_missing: true
- drop_fields:
fields: ["vault.response.data"]
ignore_missing: true
- timestamp:
field: vault.time
layouts:
- 'Y'
- type: tcp
max_message_size: 10MiB
host: "0.0.0.0:9000"
fields:
event_type: "vault-audit"
fields_under_root: true
processors:
- decode_json_fields:
fields: ["message"]
target: "vault"
process_array: true
overwrite_keys: false
add_error_key: true
- copy_fields:
fields:
- from: vault.response.data.username
to: vault.response.datainfo.username
fail_on_error: false
ignore_missing: true
- drop_fields:
fields: ["vault.response.data"]
ignore_missing: true
- timestamp:
field: vault.time
layouts:
- 'Y'
filebeat.autodiscover:
providers:
- type: kubernetes
hints.enabled: true
hints.default_config:
type: container
paths:
- /var/lib/docker/containers/${data.kubernetes.container.id}/*.log
processors:
- add_fields:
target: kubernetes
fields:
cluster.name: "k8s-dev"
- drop_event:
when:
and:
- contains:
message: "DEBUG"
- contains:
message: "changes.SessionEntityWrapper"
- drop_event:
when:
equals:
class: "c.i.p.metrics.TelegrafMetricObserver"
- drop_event:
when:
contains:
message: "com.xxx"
- drop_event:
when:
contains:
message: "metrics.xxx.com"
- drop_event:
when:
and:
- equals:
service: "CMC"
- contains:
exception.stacktrace: "org.hibernate.HibernateException: createQuery is not valid without active transaction"
- drop_event:
when:
and:
- equals:
kubernetes.container.name: "selenoid"
- contains:
kubernetes.pod.name: "availability-tests"
- drop_event:
when:
equals:
kubernetes.labels.app: "time-nginx"
- add_cloud_metadata: ~
- rename:
ignore_missing: true
fail_on_error: false
fields:
- from: "kubernetes.labels.k8s-app"
to: "service"
- from: "kubernetes.labels.service"
to: "service"
- rename:
fields:
- from: "kubernetes.labels.tenant-alias"
to: "tenant_alias"
ignore_missing: true
fail_on_error: false
when:
not:
has_fields: ['tenant_alias']
- rename:
fields:
- from: "kubernetes.labels.tenant-id"
to: "tenant_id"
ignore_missing: true
fail_on_error: false
when:
not:
has_fields: ['tenant_id']
- script:
lang: javascript
id: lowercase
source: >
function process(event) {
var level = event.Get("level");
if(level != null) {
event.Put("level", level.toString().toLowerCase());
}
}
- drop_fields:
fields:
- dissect
- ecs
- input
- ts
- tsNs
- stream
- kubernetes.namespace_uid
- kubernetes.namespace_labels
- kubernetes.node.uid
- kubernetes.node.hostname
- kubernetes.node.labels
- kubernetes.pod.uid
- kubernetes.pod.ip
- kubernetes.statefulset
- kubernetes.replicaset
- kubernetes.container.image
- kubernetes.labels
- container.id
ignore_missing: true
logging.metrics.enabled: false
logging.json: true
logging.level: warning
output.kafka:
version: 2.0.0
codec.json:
pretty: false
# escape_html: false
client_id: "logshipper"
hosts: ["kafka-cp.xxx.com:9094"]
topic: "logging-kubernetes"
topics:
- topic: "kubernetes-audit"
when.equals:
event_type: "audit"
- topic: "vault-audit"
when.equals:
event_type: "vault-audit"
partition.round_robin:
group_events: 10
reachable_only: false
required_acks: 1
compression: gzip
max_message_bytes: 1e+06
ssl.certificate_authorities: /tmp/ca.crt
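(For reference, edits to this ConfigMap can be sanity-checked from inside a running Filebeat pod before rolling out; a sketch, assuming the standard DaemonSet layout where the rendered config is mounted at /etc/filebeat.yml, with the pod name as a placeholder:)

kubectl exec -it filebeat-xxxxx -- filebeat test config -c /etc/filebeat.yml
kubectl exec -it filebeat-xxxxx -- filebeat test output -c /etc/filebeat.yml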
How can I get status 503 parsed into Elasticsearch Discover the same way as the other status codes?

The error was in the log format, specifically in log-format-upstream in the nginx-ingress-controller ConfigMap: some fields were not quoted.
old:
log-format-upstream: '{ "time": "$time_iso8601", "remote_addr": "$remote_addr",
"x-forward-for": "$proxy_add_x_forwarded_for", "request_time": $request_time,
"status": $status, "bytes_sent": $bytes_sent, "body_bytes_sent": $body_bytes_sent,
"request_length": $request_length, "request_host": "$http_host", "request_query":
"$request", "http_referrer": "$http_referer", "http_user_agent": "$http_user_agent",
"upstream_name": "$proxy_upstream_name", "upstream_addr": "$upstream_addr", "upstream_response_length":
$upstream_response_length, "upstream_response_time": $upstream_response_time,
"upstream_status": $upstream_status, "request_id": "$req_id" }'
new:
log-format-upstream: '{ "time": "$time_iso8601", "remote_addr": "$remote_addr",
"x-forward-for": "$proxy_add_x_forwarded_for", "request_time": "$request_time",
"status": "$status", "bytes_sent": "$bytes_sent", "body_bytes_sent": "$body_bytes_sent",
"request_length": "$request_length", "request_host": "$http_host", "request_query":
"$request", "http_referrer": "$http_referer", "http_user_agent": "$http_user_agent",
"upstream_name": "$proxy_upstream_name", "upstream_addr": "$upstream_addr", "upstream_response_length":
"$upstream_response_length", "upstream_response_time": "$upstream_response_time",
"upstream_status": "$upstream_status", "request_id": "$req_id" }'

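The most likely reason the quoting matters: on failed requests nginx can emit non-numeric values for the upstream variables, for example "-" when no upstream was reached, or a comma-separated list such as 502, 503 when the request was retried against several upstreams. With the old format those values were written unquoted, so the resulting line was not valid JSON and the decode_json_fields processor could not decode it, which is why the status field never appeared for 503s. A hypothetical, abbreviated 503 line under the old format shows the problem (the unquoted upstream values break the JSON):

{ "time": "2022-03-01T10:00:00+00:00", "status": 503, "upstream_response_time": 0.004, 0.002, "upstream_status": 502, 503, "request_id": "abc123" }

With the new format these become "0.004, 0.002" and "502, 503", which parse cleanly as strings.
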
Related

Filebeat JSON logging broken logs in Elasticsearch

I'm using a Filebeat > Kafka > Logstash > Elasticsearch stack, version 7.15.0.
My logs are getting broken and are not written properly into Elasticsearch, which results in JSON errors such as
error.message:Key 'log' not found
error.type: json
and
error.message:Error decoding JSON: invalid character 's' after array element error.type:json
My logs contain both JSON and non-JSON content.
My Docker stdout logs:
{"log":"{\"instant\":{\"epochSecond\":1643023707,\"nanoOfSecond\":538281000},\"thread\":\"grpc-default-executor-11\",\"level\":\"INFO\",\"loggerName\":\"com.abc.ab.ab.core.service.integration.controller.NotifyCoreGrpcController\",\"message\":\"RMQ_CORE_GRPC_NOTIFY RESP : {\\\"baseBizResponse\\\":{\\\"success\\\":true,\\\"resultCode\\\":\\\"SUCCESS\\\"}} \",\"endOfBatch\":false,\"loggerFqcn\":\"org.apache.logging.slf4j.Log4jLogger\",\"contextMap\":{\"RMQ_ID\":\"2022012401445669241212121212\",\"FLOW_TYPE\":\"RMQ_CORE_GRPC_NOTIFY\",\"MERCHANT_TRANS_ID\":\"bcd4ab1e54abaha122\",\"spanId\":\"4fa1474c078afceb\",\"traceId\":\"bcd4ab1e54abaha122\"},\"threadId\":100,\"threadPriority\":5,\"dateTime\":\"2022-01-24T16:58:27.538+0530\"}\r\n","stream":"stdout","time":"2022-01-24T11:28:27.538354156Z"}
and
[244540.169s][debug][gc,age] GC(51) Desired survivor size 80740352 bytes, new threshold 15 (max threshold 15)
Filebeat config:
filebeat.yml: |
filebeat.inputs:
- type: container
multiline.pattern: '^[[:space:]]'
multiline.negate: false
multiline.match: after
json.keys_under_root: true
json.message_key: log
json.add_error_key: true
enabled: true
paths:
- /var/log/containers/*.log
exclude_files: ['fluentd-*', 'istio-*', 'cluster-logging-*', 'infra-*']
processors:
- add_kubernetes_metadata:
in_cluster: true
host: ${NODE_NAME}
default_matchers.enabled: false
matchers:
- logs_path:
logs_path: "/var/log/containers/"
processors:
- drop_fields:
fields:
- 'kubernetes.node.name'
output.kafka:
enabled: true
hosts: ["kafka1:9092","kafka2:9092","kafka3:9092"]
partition.round_robin:
reachable_only: false
required_acks: 1
compression: gzip
max_message_bytes: 10000000
topics:
- topic: '%{[kubernetes.labels.app]}'
default: 'app-perf-k8s-logs'
filebeat.config.modules:
path: ${path.config}/modules.d/*.yml
filebeat.modules:
- module: nginx
- module: kafka
logging.level: debug
Logstash config:
input {
kafka {
topics_pattern => ".*."
bootstrap_servers => "kafka1:9092,kafka2:9092,kafka3:9092"
client_id => "logstash"
codec => json
decorate_events => true
consumer_threads => 10
heartbeat_interval_ms => "100000"
session_timeout_ms => "300000"
poll_timeout_ms => 300000
partition_assignment_strategy => "org.apache.kafka.clients.consumer.RoundRobinAssignor"
request_timeout_ms => "400000"
group_id => "logConsumer"
auto_offset_reset => "latest"
}
}
output {
elasticsearch {
hosts => "es-logging-perf-lb.abc.com:80"
index => "filebeat-%{[@metadata][kafka][topic]}-%{+YYYY.MM.dd}"
}
}
Please help and suggest.

Debug CloudFormation validation issue

I am using a linter and my template appears valid, yet my deployment is failing with an "AWS::ElasticLoadBalancingV2::ListenerRule Validation exception". There does not appear to be any way to drill down further into this exception in the CloudFormation console. How do I determine why my deployment is invalid?
CloudFormation template
Parameters:
Env:
Type: String
Mappings:
EnvMap:
sandbox:
...
Resources:
HttpsListener:
Type: AWS::ElasticLoadBalancingV2::Listener
Properties:
Certificates:
- CertificateArn: !FindInMap [EnvMap, !Ref Env, CertificateArn]
DefaultActions:
- Type: forward
ForwardConfig:
# TODO: read all this stuff off HTTP listener
TargetGroupStickinessConfig:
Enabled: false
TargetGroups:
- TargetGroupArn:
!FindInMap [EnvMap, !Ref Env, LoadBalancerDefaultTargetArn]
Weight: 1
TargetGroupArn:
!FindInMap [EnvMap, !Ref Env, LoadBalancerDefaultTargetArn]
LoadBalancerArn: !FindInMap [EnvMap, !Ref Env, LoadBalancerArn]
Port: 443
Protocol: HTTPS
HttpsListenerRule:
Type: AWS::ElasticLoadBalancingV2::ListenerRule
Properties:
Actions:
- Type: forward
ForwardConfig:
TargetGroupStickinessConfig:
Enabled: false
TargetGroups:
- TargetGroupArn:
!FindInMap [EnvMap, !Ref Env, LoadBalancerRouteTargetArn]
Weight: 1
TargetGroupArn:
!FindInMap [EnvMap, !Ref Env, LoadBalancerRouteTargetArn]
Conditions:
- Field: path-pattern
PathPatternConfig:
Values:
- /*
Values:
- /*
ListenerArn: !Ref HttpsListener
Priority: 50000
Error
The "Status Reason" from the stack event:
Resource handler returned message: "Invalid request provided: AWS::ElasticLoadBalancingV2::ListenerRule Validation exception" (RequestToken: 16bd4239-0d41-b16f-2963-b0a774009dfd, HandlerErrorCode: InvalidRequest)
Try removing PathPatternConfig from the Conditions key:
HttpsListenerRule:
Type: AWS::ElasticLoadBalancingV2::ListenerRule
Properties:
Actions:
- Type: "forward"
ForwardConfig:
TargetGroupStickinessConfig:
Enabled: false
TargetGroups:
- TargetGroupArn: !FindInMap [ EnvMap, !Ref Env, LoadBalancerRouteTargetArn ]
Weight: 1
TargetGroupArn: !FindInMap [ EnvMap, !Ref Env, LoadBalancerRouteTargetArn ]
Order: 1
Conditions:
- Field: path-pattern
Values:
- "/*"
ListenerArn: !Ref HttpsListener
Priority: 50000
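As far as I can tell, the root cause is that the original Conditions entry specified both PathPatternConfig and a bare Values list for the same condition; the ELBv2 API accepts one form or the other, not both, and that is what surfaces as the generic validation exception. So keeping PathPatternConfig and dropping the duplicate Values should work equally well (a sketch of just the Conditions block):

Conditions:
  - Field: path-pattern
    PathPatternConfig:
      Values:
        - "/*"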
Also, make sure your EnvMap map looks like this:
Parameters:
Env:
Type: String
Default: sandbox
Mappings:
EnvMap:
sandbox:
LoadBalancerRouteTargetArn: "arn:aws:elasticloadbalancing:eu-west-1:111111111111:targetgroup/my-tg-1/222222222222"
prod:
LoadBalancerRouteTargetArn: "arn:aws:elasticloadbalancing:eu-west-1:333333333333:targetgroup/my-tg-2/444444444444"

ECK Filebeat Daemonset Forwarding To Remote Cluster

I wish to forward logs from remote EKS clusters to a centralised EKS cluster hosting ECK.
Versions in use:
EKS v1.20.7
Elasticsearch v7.7.0
Kibana v7.7.0
Filebeat v7.10.0
The setup uses an AWS NLB to forward requests to an Nginx ingress, using host-based routing.
When the DNS lookup for Elasticsearch is tested from Filebeat (filebeat test output), the request validates.
But the logs for Filebeat are telling a different story.
2021-10-05T10:39:00.202Z ERROR [publisher_pipeline_output]
pipeline/output.go:154 Failed to connect to backoff(elasticsearch(https://elasticsearch.dev.example.com:9200)):
Get "https://elasticsearch.dev.example.com:9200": Bad Request
The Filebeat agents can connect to the remote Elasticsearch via the NLB when using a curl request.
The config is below. NB: dev.example.com is the remote cluster hosting ECK.
app:
name: "filebeat"
configmap:
enabled: true
filebeatConfig:
filebeat.yml: |-
filebeat.autodiscover:
providers:
- type: kubernetes
node: ${NODE_NAME}
hints.enabled: true
templates:
- config:
- type: container
paths:
- /var/lib/docker/containers/*/${data.kubernetes.container.id}-json.log
exclude_lines: ["^\\s+[\\-`('.|_]"]
processors:
- drop_event.when.not.or:
- contains.kubernetes.namespace: "apps-"
- equals.kubernetes.namespace: "cicd"
- decode_json_fields:
fields: ["message"]
target: ""
process_array: true
overwrite_keys: true
- add_fields:
fields:
kubernetes.cluster.name: dev-eks-cluster
target: ""
processors:
- add_cloud_metadata: ~
- add_host_metadata: ~
cloud:
id: '${ELASTIC_CLOUD_ID}'
cloud:
auth: '${ELASTIC_CLOUD_AUTH}'
output:
elasticsearch:
enabled: true
hosts: "elasticsearch.dev.example.com"
username: '${ELASTICSEARCH_USERNAME}'
password: '${ELASTICSEARCH_PASSWORD}'
protocol: https
ssl:
verification_mode: "none"
headers:
Host: "elasticsearch.dev.example.com"
proxy_url: "https://example.elb.eu-west-2.amazonaws.com"
proxy_disable: false
daemonset:
enabled: true
version: 7.10.0
image:
repository: "docker.elastic.co/beats/filebeat"
tag: "7.10.0"
pullPolicy: Always
extraenvs:
- name: ELASTICSEARCH_HOST
value: "https://elasticsearch.dev.example.com"
- name: ELASTICSEARCH_PORT
value: "9200"
- name: ELASTICSEARCH_USERNAME
value: "elastic"
- name: ELASTICSEARCH_PASSWORD
value: "remote-cluster-elasticsearch-es-elastic-user-password"
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 100Mi
clusterrolebinding:
enabled: true
namespace: monitoring
clusterrole:
enabled: true
serviceaccount:
enabled: true
namespace: monitoring
deployment:
enabled: false
configmap:
enabled: false
Any tips or suggestions on how to enable Filebeat forwarding would be much appreciated :-)
Update #1 - missing ports:
Even with the ports added as suggested, Filebeat errors with:
2021-10-06T08:34:41.355Z ERROR [publisher_pipeline_output] pipeline/output.go:154 Failed to connect to backoff(elasticsearch(https://elasticsearch.dev.example.com:9200)): Get "https://elasticsearch.dev.example.com:9200": Bad Request
...using a AWS NLB to forward requests to Nginx ingress, using host based routing
How about unsetting proxy_url and proxy_disable, then setting hosts: ["<nlb url>:<nlb listener port>"]?
The final working config:
app:
name: "filebeat"
configmap:
enabled: true
filebeatConfig:
filebeat.yml: |-
filebeat.autodiscover:
providers:
- type: kubernetes
node: ${NODE_NAME}
hints.enabled: true
templates:
- config:
- type: container
paths:
- /var/lib/docker/containers/*/${data.kubernetes.container.id}-json.log
exclude_lines: ["^\\s+[\\-`('.|_]"]
processors:
- drop_event.when.not.or:
- contains.kubernetes.namespace: "apps-"
- equals.kubernetes.namespace: "cicd"
- decode_json_fields:
fields: ["message"]
target: ""
process_array: true
overwrite_keys: true
- add_fields:
fields:
kubernetes.cluster.name: qa-eks-cluster
target: ""
processors:
- add_cloud_metadata: ~
- add_host_metadata: ~
cloud:
id: '${ELASTIC_CLOUD_ID}'
cloud:
auth: '${ELASTIC_CLOUD_AUTH}'
output:
elasticsearch:
enabled: true
hosts: ["elasticsearch.dev.example.com:9200"]
username: '${ELASTICSEARCH_USERNAME}'
password: '${ELASTICSEARCH_PASSWORD}'
protocol: https
ssl:
verification_mode: "none"
daemonset:
enabled: true
version: 7.10.0
image:
repository: "docker.elastic.co/beats/filebeat"
tag: "7.10.0"
pullPolicy: Always
extraenvs:
- name: ELASTICSEARCH_HOST
value: "https://elasticsearch.dev.example.com"
- name: ELASTICSEARCH_PORT
value: "9200"
- name: ELASTICSEARCH_USERNAME
value: "elastic"
- name: ELASTICSEARCH_PASSWORD
value: "remote-cluster-elasticsearch-es-elastic-user-password"
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 100Mi
clusterrolebinding:
enabled: true
namespace: monitoring
clusterrole:
enabled: true
serviceaccount:
enabled: true
namespace: monitoring
deployment:
enabled: false
configmap:
enabled: false
In addition, the following changes were needed:
NLB:
Added a listener on 9200 forwarding to the ingress controller for HTTPS
SG:
Opened up port 9200 on the EKS worker nodes
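(A quick end-to-end check after these changes, before restarting Filebeat, is to hit the NLB listener directly; a sketch, using -k because the Filebeat config above disables certificate verification, and <password> as a placeholder:)

curl -k -u elastic:<password> https://elasticsearch.dev.example.com:9200

If the listener, ingress and Service are wired correctly, this returns the usual Elasticsearch cluster-info JSON instead of a Bad Request.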

Serverless Framework - Lambda@Edge Deployment for a predefined CloudFront Distribution

I have been trying for a day to automate associating a Lambda@Edge function with a CloudFront distribution through the Serverless Framework, but things aren't working well.
The documentation says we can use a predefined CloudFront distribution from resources, but it does not show how.
Here is my Resources.yml, which includes the S3 bucket and the two distribution origins associated with it:
Resources:
ResourcesBucket:
Type: AWS::S3::Bucket
Properties:
BucketName: ${self:custom.resourcesBucketName}
AccessControl: Private
CorsConfiguration:
CorsRules:
- AllowedHeaders: ['*']
AllowedMethods: ['PUT']
AllowedOrigins: ['*']
ResourcesBucketPolicy:
Type: AWS::S3::BucketPolicy
Properties:
Bucket:
Ref: ResourcesBucket
PolicyDocument:
Statement:
# Read permission for CloudFront
- Action: s3:GetObject
Effect: "Allow"
Resource:
Fn::Join:
- ""
-
- "arn:aws:s3:::"
-
Ref: "ResourcesBucket"
- "/*"
Principal:
CanonicalUser: !GetAtt CloudFrontOriginAccessIdentity.S3CanonicalUserId
CloudFrontOriginAccessIdentity:
Type: AWS::CloudFront::CloudFrontOriginAccessIdentity
Properties:
CloudFrontOriginAccessIdentityConfig:
Comment:
Fn::Join:
- ""
-
- "Identity for accessing CloudFront from S3 within stack "
-
Ref: "AWS::StackName"
- ""
# I can use this instead of Fn::Join !Sub 'Identity for accessing CloudFront from S3 within stack #{AWS::StackName}' Getting benefit of
# serverless-pseudo-parameters plugin
# Cloudfront distro backed by ResourcesBucket
ResourcesCdnDistribution:
Type: AWS::CloudFront::Distribution
Properties:
DistributionConfig:
Origins:
# S3 origin for private resources
- DomainName: !Sub '${self:custom.resourcesBucketName}.s3-${self:provider.region}.amazonaws.com'
Id: S3OriginPrivate
S3OriginConfig:
OriginAccessIdentity: !Sub 'origin-access-identity/cloudfront/#{CloudFrontOriginAccessIdentity}'
# S3 origin for public resources
- DomainName: !Sub '${self:custom.resourcesBucketName}.s3-${self:provider.region}.amazonaws.com'
Id: S3OriginPublic
S3OriginConfig:
OriginAccessIdentity: !Sub 'origin-access-identity/cloudfront/#{CloudFrontOriginAccessIdentity}'
Enabled: true
Comment: CDN for public and private static content.
DefaultRootObject: index.html
HttpVersion: http2
DefaultCacheBehavior:
AllowedMethods:
- DELETE
- GET
- HEAD
- OPTIONS
- PATCH
- POST
- PUT
Compress: true
TargetOriginId: S3OriginPublic
ForwardedValues:
QueryString: false
Headers:
- Origin
Cookies:
Forward: none
ViewerProtocolPolicy: redirect-to-https
CacheBehaviors:
-
PathPattern: 'private/*'
TargetOriginId: S3OriginPrivate
AllowedMethods:
- DELETE
- GET
- HEAD
- OPTIONS
- PATCH
- POST
- PUT
Compress: true
ForwardedValues:
QueryString: false
Headers:
- Origin
Cookies:
Forward: none
ViewerProtocolPolicy: redirect-to-https
-
PathPattern: 'public/*'
TargetOriginId: S3OriginPublic
AllowedMethods:
- DELETE
- GET
- HEAD
- OPTIONS
- PATCH
- POST
- PUT
Compress: true
ForwardedValues:
QueryString: false
Headers:
- Origin
Cookies:
Forward: none
ViewerProtocolPolicy: redirect-to-https
PriceClass: PriceClass_200
Now everything is set up on the CloudFront side, and I just want to add a Lambda@Edge function to authenticate my private content (the origin with Id: S3OriginPrivate). So here is my serverless.yml file:
service: mda-app-uploads
plugins:
- serverless-offline
- serverless-pseudo-parameters
- serverless-iam-roles-per-function
custom:
stage: ${opt:stage, self:provider.stage}
resourcesBucketName: ${self:custom.stage}-mda-resources-bucket
provider:
name: aws
runtime: nodejs12.x
stage: ${opt:stage, 'dev'}
region: us-east-1
versionFunctions: true
resources:
- ${file(resources/s3-cloudfront.yml)}
# functions:
functions:
mdaAuthEdge:
handler: mda-edge-auth.handler
events:
- cloudFront:
eventType: viewer-request
origin:
Id: S3OriginPrivate
When deploying I get this error:
TypeError: Cannot read property 'replace' of undefined
As far as I can tell, this says the origin ID already exists and cannot be replaced. My main goal is to get the Lambda@Edge function deployed and associated with the CloudFront distribution from within the Serverless Framework, so in another attempt I moved almost everything into the CloudFormation resources and relied on the Serverless Framework only for deploying the function. Here are my serverless.yml and the resources file:
service: mda-app-uploads
plugins:
- serverless-offline
- serverless-pseudo-parameters
- serverless-iam-roles-per-function
custom:
stage: ${opt:stage, self:provider.stage}
resourcesBucketName: ${self:custom.stage}-mda-resources-bucket
provider:
name: aws
runtime: nodejs12.x
stage: ${opt:stage, 'dev'}
region: us-east-1
versionFunctions: true
resources:
# Buckets
- ${file(resources/s3-cloudfront.yml)}
# functions:
functions:
mdaAuthEdge:
handler: mda-edge-auth.handler
role: LambdaEdgeFunctionRole
The resources:
Resources:
LambdaEdgeFunctionRole:
Type: "AWS::IAM::Role"
Properties:
Path: "/"
ManagedPolicyArns:
- "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
AssumeRolePolicyDocument:
Version: "2012-10-17"
Statement:
-
Sid: "AllowLambdaServiceToAssumeRole"
Effect: "Allow"
Action:
- "sts:AssumeRole"
Principal:
Service:
- "lambda.amazonaws.com"
- "edgelambda.amazonaws.com"
LambdaEdgeFunctionPolicy:
Type: "AWS::IAM::Policy"
Properties:
PolicyName: MainEdgePolicy
PolicyDocument:
Version: "2012-10-17"
Statement:
Effect: "Allow"
Action:
- "lambda:GetFunction"
- "lambda:GetFunctionConfiguration"
Resource: !Ref MdaAuthAtEdgeLambdaFunction.Version #!Join [':', [!GetAtt MdaAuthAtEdgeLambdaFunction.Arn, '2']]
Roles:
- !Ref LambdaEdgeFunctionRole
ResourcesBucket:
Type: AWS::S3::Bucket
Properties:
BucketName: ${self:custom.resourcesBucketName}
AccessControl: Private
CorsConfiguration:
CorsRules:
- AllowedHeaders: ['*']
AllowedMethods: ['PUT']
AllowedOrigins: ['*']
ResourcesBucketPolicy:
Type: AWS::S3::BucketPolicy
Properties:
Bucket:
Ref: ResourcesBucket
PolicyDocument:
Statement:
# Read permission for CloudFront
- Action: s3:GetObject
Effect: "Allow"
Resource:
Fn::Join:
- ""
-
- "arn:aws:s3:::"
-
Ref: "ResourcesBucket"
- "/*"
Principal:
CanonicalUser: !GetAtt CloudFrontOriginAccessIdentity.S3CanonicalUserId
CloudFrontOriginAccessIdentity:
Type: AWS::CloudFront::CloudFrontOriginAccessIdentity
Properties:
CloudFrontOriginAccessIdentityConfig:
Comment:
Fn::Join:
- ""
-
- "Identity for accessing CloudFront from S3 within stack "
-
Ref: "AWS::StackName"
- ""
# I can use this instead of Fn::Join !Sub 'Identity for accessing CloudFront from S3 within stack #{AWS::StackName}' Getting benefit of
# serverless-pseudo-parameters plugin
# Cloudfront distro backed by ResourcesBucket
ResourcesCdnDistribution:
Type: AWS::CloudFront::Distribution
Properties:
DistributionConfig:
Origins:
# S3 origin for private resources
- DomainName: !Sub '${self:custom.resourcesBucketName}.s3-${self:provider.region}.amazonaws.com'
Id: S3OriginPrivate
S3OriginConfig:
OriginAccessIdentity: !Sub 'origin-access-identity/cloudfront/#{CloudFrontOriginAccessIdentity}'
# S3 origin for public resources
- DomainName: !Sub '${self:custom.resourcesBucketName}.s3-${self:provider.region}.amazonaws.com'
Id: S3OriginPublic
S3OriginConfig:
OriginAccessIdentity: !Sub 'origin-access-identity/cloudfront/#{CloudFrontOriginAccessIdentity}'
Enabled: true
Comment: CDN for public and private static content.
DefaultRootObject: index.html
HttpVersion: http2
DefaultCacheBehavior:
AllowedMethods:
- DELETE
- GET
- HEAD
- OPTIONS
- PATCH
- POST
- PUT
Compress: true
TargetOriginId: S3OriginPublic
ForwardedValues:
QueryString: false
Headers:
- Origin
Cookies:
Forward: none
ViewerProtocolPolicy: redirect-to-https
CacheBehaviors:
-
PathPattern: 'private/*'
TargetOriginId: S3OriginPrivate
AllowedMethods:
- DELETE
- GET
- HEAD
- OPTIONS
- PATCH
- POST
- PUT
Compress: true
LambdaFunctionAssociations:
-
EventType: origin-request
LambdaFunctionARN: !Ref MdaAuthEdgeLambdaFunction.Version
#!Join [':', [!GetAtt MdaAuthAtEdgeLambdaFunction.Arn, '2']]
# arn:aws:lambda:eu-west-1:219511374676:function:mda-aws-functions-dev-authLambdaAtEdge:1
ForwardedValues:
QueryString: false
Headers:
- Origin
Cookies:
Forward: none
ViewerProtocolPolicy: redirect-to-https
-
PathPattern: 'public/*'
TargetOriginId: S3OriginPublic
AllowedMethods:
- DELETE
- GET
- HEAD
- OPTIONS
- PATCH
- POST
- PUT
Compress: true
ForwardedValues:
QueryString: false
Headers:
- Origin
Cookies:
Forward: none
ViewerProtocolPolicy: redirect-to-https
PriceClass: PriceClass_200
But I've faced many errors related to defining the function version and so on. I searched, debugged, and investigated for many hours, but the configuration seems hard to get right. Any help on getting Lambda@Edge to work with a predefined CloudFront distribution through the Serverless Framework?
It's a bit tricky to do with the Serverless Framework alone, but I solved it by combining CloudFormation with the Serverless Framework. My answer to another question contains a full description of how to do so:
How to access AWS CloudFront that connected with S3 Bucket via Bearer token of a specific user (JWT Custom Auth)
I don't want to repeat everything here, and since this question is important and many people face it without a concrete solution, please let me know if you run into any issues.
The approach is to create the function inside serverless.yml, then do the rest of the magic in CloudFormation: creating the versions, the roles, and a helper function that publishes the function's ARN so it can be referenced dynamically.
Here is my Serverless.yml:
service: mda-app-uploads
plugins:
- serverless-offline
- serverless-pseudo-parameters
- serverless-iam-roles-per-function
- serverless-bundle
custom:
stage: ${opt:stage, self:provider.stage}
resourcesBucketName: ${self:custom.stage}-mda-resources-bucket
resourcesStages:
prod: prod
dev: dev
resourcesStage: ${self:custom.resourcesStages.${self:custom.stage}, self:custom.resourcesStages.dev}
provider:
name: aws
runtime: nodejs12.x
stage: ${opt:stage, 'dev'}
region: us-east-1
versionFunctions: true
functions:
oauthEdge:
handler: src/mda-edge-auth.handler
role: LambdaEdgeFunctionRole
memorySize: 128
timeout: 5
resources:
- ${file(resources/s3-cloudfront.yml)}
Here is my resources/s3-cloudfront.yml:
Resources:
AuthEdgeLambdaVersion:
Type: Custom::LatestLambdaVersion
Properties:
ServiceToken: !GetAtt PublishLambdaVersion.Arn
FunctionName: !Ref OauthEdgeLambdaFunction
Nonce: "Test"
PublishLambdaVersion:
Type: AWS::Lambda::Function
Properties:
Handler: index.handler
Runtime: nodejs12.x
Role: !GetAtt PublishLambdaVersionRole.Arn
Code:
ZipFile: |
const {Lambda} = require('aws-sdk')
const {send, SUCCESS, FAILED} = require('cfn-response')
const lambda = new Lambda()
exports.handler = (event, context) => {
const {RequestType, ResourceProperties: {FunctionName}} = event
if (RequestType == 'Delete') return send(event, context, SUCCESS)
lambda.publishVersion({FunctionName}, (err, {FunctionArn}) => {
err
? send(event, context, FAILED, err)
: send(event, context, SUCCESS, {FunctionArn})
})
}
PublishLambdaVersionRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: '2012-10-17'
Statement:
- Effect: Allow
Principal:
Service: lambda.amazonaws.com
Action: sts:AssumeRole
ManagedPolicyArns:
- arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
Policies:
- PolicyName: PublishVersion
PolicyDocument:
Version: '2012-10-17'
Statement:
- Effect: Allow
Action: lambda:PublishVersion
Resource: '*'
LambdaEdgeFunctionRole:
Type: "AWS::IAM::Role"
Properties:
Path: "/"
ManagedPolicyArns:
- "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
AssumeRolePolicyDocument:
Version: "2012-10-17"
Statement:
-
Sid: "AllowLambdaServiceToAssumeRole"
Effect: "Allow"
Action:
- "sts:AssumeRole"
Principal:
Service:
- "lambda.amazonaws.com"
- "edgelambda.amazonaws.com"
LambdaEdgeFunctionPolicy:
Type: "AWS::IAM::Policy"
Properties:
PolicyName: MainEdgePolicy
PolicyDocument:
Version: "2012-10-17"
Statement:
Effect: "Allow"
Action:
- "lambda:GetFunction"
- "lambda:GetFunctionConfiguration"
Resource: !GetAtt AuthEdgeLambdaVersion.FunctionArn
Roles:
- !Ref LambdaEdgeFunctionRole
ResourcesBucket:
Type: AWS::S3::Bucket
Properties:
BucketName: ${self:custom.resourcesBucketName}
AccessControl: Private
CorsConfiguration:
CorsRules:
- AllowedHeaders: ['*']
AllowedMethods: ['PUT']
AllowedOrigins: ['*']
ResourcesBucketPolicy:
Type: AWS::S3::BucketPolicy
Properties:
Bucket:
Ref: ResourcesBucket
PolicyDocument:
Statement:
# Read permission for CloudFront
- Action: s3:GetObject
Effect: "Allow"
Resource:
Fn::Join:
- ""
-
- "arn:aws:s3:::"
-
Ref: "ResourcesBucket"
- "/*"
Principal:
CanonicalUser: !GetAtt CloudFrontOriginAccessIdentity.S3CanonicalUserId
- Action: s3:PutObject
Effect: "Allow"
Resource:
Fn::Join:
- ""
-
- "arn:aws:s3:::"
-
Ref: "ResourcesBucket"
- "/*"
Principal:
AWS: !GetAtt LambdaEdgeFunctionRole.Arn
- Action: s3:GetObject
Effect: "Allow"
Resource:
Fn::Join:
- ""
-
- "arn:aws:s3:::"
-
Ref: "ResourcesBucket"
- "/*"
Principal:
AWS: !GetAtt LambdaEdgeFunctionRole.Arn
CloudFrontOriginAccessIdentity:
Type: AWS::CloudFront::CloudFrontOriginAccessIdentity
Properties:
CloudFrontOriginAccessIdentityConfig:
Comment:
Fn::Join:
- ""
-
- "Identity for accessing CloudFront from S3 within stack "
-
Ref: "AWS::StackName"
- ""
# Cloudfront distro backed by ResourcesBucket
ResourcesCdnDistribution:
Type: AWS::CloudFront::Distribution
Properties:
DistributionConfig:
Origins:
# S3 origin for private resources
- DomainName: !Sub '${self:custom.resourcesBucketName}.s3.amazonaws.com'
Id: S3OriginPrivate
S3OriginConfig:
OriginAccessIdentity: !Sub 'origin-access-identity/cloudfront/#{CloudFrontOriginAccessIdentity}'
# S3 origin for public resources
- DomainName: !Sub '${self:custom.resourcesBucketName}.s3.amazonaws.com'
Id: S3OriginPublic
S3OriginConfig:
OriginAccessIdentity: !Sub 'origin-access-identity/cloudfront/#{CloudFrontOriginAccessIdentity}'
Enabled: true
Comment: CDN for public and private static content.
DefaultRootObject: index.html
HttpVersion: http2
DefaultCacheBehavior:
AllowedMethods:
- DELETE
- GET
- HEAD
- OPTIONS
- PATCH
- POST
- PUT
Compress: true
TargetOriginId: S3OriginPublic
ForwardedValues:
QueryString: false
Headers:
- Origin
Cookies:
Forward: none
ViewerProtocolPolicy: redirect-to-https
CacheBehaviors:
-
PathPattern: 'private/*'
TargetOriginId: S3OriginPrivate
AllowedMethods:
- DELETE
- GET
- HEAD
- OPTIONS
- PATCH
- POST
- PUT
Compress: true
LambdaFunctionAssociations:
-
EventType: viewer-request
LambdaFunctionARN: !GetAtt AuthEdgeLambdaVersion.FunctionArn
ForwardedValues:
QueryString: false
Headers:
- Origin
Cookies:
Forward: none
ViewerProtocolPolicy: redirect-to-https
-
PathPattern: 'public/*'
TargetOriginId: S3OriginPublic
AllowedMethods:
- DELETE
- GET
- HEAD
- OPTIONS
- PATCH
- POST
- PUT
Compress: true
ForwardedValues:
QueryString: false
Headers:
- Origin
Cookies:
Forward: none
ViewerProtocolPolicy: redirect-to-https
PriceClass: PriceClass_200
But you will find the full description in my other question's answer.

Envoy INVALID_ARGUMENT:static_resources.clusters[0].hosts[0]: invalid name url: Cannot find field

I'm using the Istio pilot-agent proxy in an OpenShift cluster.
I'm getting this error: INVALID_ARGUMENT: static_resources.clusters[0].hosts[0]: invalid name url: Cannot find field...
Config:
static_resources:
listeners:
- name: listener_0
address:
socket_address: { address: 0.0.0.0, port_value: 8080 }
filter_chains:
- filters:
- name: envoy.http_connection_manager
config:
stat_prefix: egress_http
use_remote_address: true
codec_type: AUTO
route_config:
name: local_route
virtual_hosts:
- name: local-services
domains: ["*"]
routes:
- match: { prefix: "/service-a" }
route: { cluster: service-a }
http_filters:
- name: envoy.router
clusters:
- name: service-a
connect_timeout: 0.25s
# dns_lookup_family: V4_ONLY
lb_policy: round_robin
type: strict_dns
hosts:
- url : tcp://service-a.apps-stage.vm.mos.cloud.sbrf.ru:80
From what I can tell with Envoy, the error "Cannot find field" means that you requested a field name (in this case, url) in a data structure, but Envoy doesn't support that field name in that data structure.
The "hosts" block, in your example, would look like:
hosts:
- socket_address:
address: "service-a.apps-stage.vm.mos.cloud.sbrf.ru"
port_value: 80
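Note that newer Envoy versions deprecate the hosts field on clusters entirely in favor of load_assignment; if you later hit deprecation errors there as well, the same endpoint can be expressed roughly like this (a sketch replacing the hosts block):

load_assignment:
  cluster_name: service-a
  endpoints:
    - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: "service-a.apps-stage.vm.mos.cloud.sbrf.ru"
                port_value: 80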
