Based on the official example on GitHub demonstrating a serverless REST API, I enabled the serverless-localstack plugin so I could develop my services locally. I adjusted the serverless.yml file accordingly:
service: serverless-rest-api-with-dynamodb
frameworkVersion: ">=1.1.0 <2.0.0"
provider:
  name: aws
  runtime: python2.7
  deploymentBucket:
    name: ${self:service}-${opt:stage}-deployment-bucket
  environment:
    DYNAMODB_TABLE: ${self:service}-${opt:stage, self:provider.stage}
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:Query
        - dynamodb:Scan
        - dynamodb:GetItem
        - dynamodb:PutItem
        - dynamodb:UpdateItem
        - dynamodb:DeleteItem
      Resource: "arn:aws:dynamodb:${opt:region, self:provider.region}:*:table/${self:provider.environment.DYNAMODB_TABLE}"
custom:
  localstack:
    stages:
      - local
      - dev
    endpoints:
      S3: http://localhost:4572
      DynamoDB: http://localhost:4570
      CloudFormation: http://localhost:4581
      Elasticsearch: http://localhost:4571
      ES: http://localhost:4578
      SNS: http://localhost:4575
      SQS: http://localhost:4576
      Lambda: http://localhost:4574
      Kinesis: http://localhost:4568
plugins:
  - serverless-localstack
functions:
  create:
    handler: todos/create.create
    events:
      - http:
          path: todos
          method: post
          cors: true
  list:
    handler: todos/list.list
    events:
      - http:
          path: todos
          method: get
          cors: true
  get:
    handler: todos/get.get
    events:
      - http:
          path: todos/{id}
          method: get
          cors: true
  update:
    handler: todos/update.update
    events:
      - http:
          path: todos/{id}
          method: put
          cors: true
  delete:
    handler: todos/delete.delete
    events:
      - http:
          path: todos/{id}
          method: delete
          cors: true
resources:
  Resources:
    TodosDynamoDbTable:
      Type: 'AWS::DynamoDB::Table'
      DeletionPolicy: Retain
      Properties:
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
        TableName: ${self:provider.environment.DYNAMODB_TABLE}
Deploying to LocalStack running in Docker:
SLS_DEBUG=3 serverless deploy --stage local --region us-east-1
it fails with the following messages on the LocalStack side:
localstack_1 | 2019-12-04 10:41:33,692:API: 127.0.0.1 - - [04/Dec/2019 10:41:33] "PUT /serverless-rest-api-with-dynamodb-local-deployment-bucket HTTP/1.1" 200 -
localstack_1 | 2019-12-04 10:41:41,402:API: 127.0.0.1 - - [04/Dec/2019 10:41:41] "GET /serverless-rest-api-with-dynamodb-local-deployment-bucket?location HTTP/1.1" 200 -
localstack_1 | 2019-12-04 10:41:41,470:API: 127.0.0.1 - - [04/Dec/2019 10:41:41] "GET /serverless-rest-api-with-dynamodb-local-deployment-bucket?list-type=2&prefix=serverless%2Fserverless-rest-api-with-dynamodb%2Flocal HTTP/1.1" 200 -
localstack_1 | 2019-12-04 10:41:41,544:API: 127.0.0.1 - - [04/Dec/2019 10:41:41] "PUT /serverless-rest-api-with-dynamodb-local-deployment-bucket/serverless/serverless-rest-api-with-dynamodb/local/1575456101319-2019-12-04T10%3A41%3A41.319Z/compiled-cloudformation-template.json HTTP/1.1" 200 -
localstack_1 | 2019-12-04 10:41:41,569:API: 127.0.0.1 - - [04/Dec/2019 10:41:41] "PUT /serverless-rest-api-with-dynamodb-local-deployment-bucket/serverless/serverless-rest-api-with-dynamodb/local/1575456101319-2019-12-04T10%3A41%3A41.319Z/serverless-rest-api-with-dynamodb.zip HTTP/1.1" 200 -
localstack_1 | 2019-12-04T10:41:41:DEBUG:localstack.services.cloudformation.cloudformation_listener: Error response from CloudFormation (400) POST /: b'<ErrorResponse xmlns="http://cloudformation.amazonaws.com/doc/2010-05-15/">\n <Error>\n <Type>Sender</Type>\n <Code>ValidationError</Code>\n <Message>Stack with id serverless-rest-api-with-dynamodb-local does not exist</Message>\n </Error>\n <RequestId>cf4c737e-5ae2-11e4-a7c9-ad44eEXAMPLE</RequestId>\n</ErrorResponse>'
localstack_1 | 2019-12-04T10:41:41:WARNING:localstack.services.awslambda.lambda_api: Function not found: arn:aws:lambda:us-east-1:000000000000:function:serverless-rest-api-with-dynamodb-local-list
localstack_1 | 2019-12-04T10:41:41:WARNING:localstack.services.awslambda.lambda_api: Function not found: arn:aws:lambda:us-east-1:000000000000:function:serverless-rest-api-with-dynamodb-local-create
localstack_1 | 2019-12-04T10:41:41:WARNING:localstack.services.awslambda.lambda_api: Function not found: arn:aws:lambda:us-east-1:000000000000:function:serverless-rest-api-with-dynamodb-local-update
localstack_1 | 2019-12-04T10:41:41:WARNING:localstack.services.awslambda.lambda_api: Function not found: arn:aws:lambda:us-east-1:000000000000:function:serverless-rest-api-with-dynamodb-local-get
localstack_1 | 2019-12-04T10:41:41:WARNING:localstack.services.awslambda.lambda_api: Function not found: arn:aws:lambda:us-east-1:000000000000:function:serverless-rest-api-with-dynamodb-local-delete
localstack_1 | 2019-12-04T10:41:42:ERROR:localstack.services.generic_proxy: Error forwarding request: Unable to fetch template body (code 404) from URL https://s3.amazonaws.com/serverless-rest-api-with-dynamodb-local-deployment-bucket/serverless/serverless-rest-api-with-dynamodb/local/1575456101319-2019-12-04T10:41:41.319Z/compiled-cloudformation-template.json Traceback (most recent call last):
localstack_1 | File "/opt/code/localstack/localstack/services/generic_proxy.py", line 240, in forward
localstack_1 | path=path, data=data, headers=forward_headers)
localstack_1 | File "/opt/code/localstack/localstack/services/cloudformation/cloudformation_listener.py", line 151, in forward_request
localstack_1 | modified_request = transform_template(req_data)
localstack_1 | File "/opt/code/localstack/localstack/services/cloudformation/cloudformation_listener.py", line 70, in transform_template
localstack_1 | template_body = get_template_body(req_data)
localstack_1 | File "/opt/code/localstack/localstack/services/cloudformation/cloudformation_listener.py", line 105, in get_template_body
localstack_1 | raise Exception('Unable to fetch template body (code %s) from URL %s' % (response.status_code, url[0]))
localstack_1 | Exception: Unable to fetch template body (code 404) from URL https://s3.amazonaws.com/serverless-rest-api-with-dynamodb-local-deployment-bucket/serverless/serverless-rest-api-with-dynamodb/local/1575456101319-2019-12-04T10:41:41.319Z/compiled-cloudformation-template.json
Here we can see that the Serverless deployment is trying to reach Amazon S3 directly, while I would expect it to use the local endpoint defined in the serverless.yml config (http://localhost:4572).
Is there a missing entry in the serverless.yml config file that would force it to use the LocalStack S3 endpoint?
There was a regression in the LocalStack image I was using. With the Docker image localstack/localstack:0.10.5 the deployment is successful.
Explicitly setting the region both in the config files AND in the code also helps.
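For reference, this is roughly what pinning the stage and region in serverless.yml can look like; the defaults shown here are only an illustration and not taken from the original project:

# Hypothetical provider block: defaulting the stage and region explicitly
# so the plugin, the generated ARNs and the deployment bucket all agree.
provider:
  name: aws
  runtime: python2.7
  stage: ${opt:stage, 'local'}
  region: ${opt:region, 'us-east-1'}

On the code side the same idea applies: construct the AWS clients with an explicit region instead of relying on environment defaults.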
Related
How can I use !FindInMap in the UserData section?
In the following UserData I want to populate MainSshKey from the mapping data.
Mappings:
  AccountToStage:
    "123456789012":
      StageName: Beta
  Beta:
    us-east-1:
      MainSshKey: ssh-rsa AAAAB3NzaC
      AdminSshKey: ssh-rsa AAAAB3NzaC1
userdata:
  Fn::Base64: !Sub |
    #cloud-config
    users:
      - name: main
        ssh_authorized_keys:
          - ${MainSshKey}
      - name: admin
        ssh_authorized_keys:
          - ${AdminSshKey}
This is what I have tried:
#cloud-config
users:
  - name: main
    ssh_authorized_keys:
      - ${MainSshKey}
      - MainSshKey: !FindInMap
        - !FindInMap
          - AccountToStage
          - !Ref "AWS::AccountId"
          - StageName
        - !Ref "AWS::Region"
        - MainSshKey
  - name: admin
    ssh_authorized_keys:
      - ${AdminSshKey}
CloudFormation is not able to resolve this.
Note: if I define MainSshKey as a parameter it works fine; it just doesn't seem to work with FindInMap.
Any pointers are much appreciated.
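(For context, the parameter-based variant that the note above refers to would look roughly like this; the parameter declaration is illustrative and not part of the original template:)

# Illustrative only: with MainSshKey declared as a parameter, the plain
# block form of !Sub resolves ${MainSshKey} directly.
Parameters:
  MainSshKey:
    Type: String

# ...and inside the instance resource:
UserData:
  Fn::Base64: !Sub |
    #cloud-config
    users:
      - name: main
        ssh_authorized_keys:
          - ${MainSshKey}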
You have to use the list form of Sub, so your code should be something along these lines. Note that you will probably need to fix the indentation and adjust it further to make it work; nevertheless, the list form of Sub is the key to your issue.
Fn::Base64: !Sub
  - |
    #cloud-config
    users:
      - name: main
        ssh_authorized_keys:
          - ${SubMainSshKey}
      - name: admin
        ssh_authorized_keys:
          - ${AdminSshKey}
  - SubMainSshKey: !FindInMap
      - !FindInMap
        - AccountToStage
        - !Ref "AWS::AccountId"
        - StageName
      - !Ref "AWS::Region"
      - MainSshKey
Also, this is not valid user data as written, so I'm not sure what you want to achieve.
I was able to solve this using the following. The plus after the literal block indicator tells YAML to keep the trailing newline; the second argument tells Sub to substitute the result of the Fn::FindInMap for ${MainSshKey} in the first argument.
Fn::Base64: !Sub
  - |+
    #cloud-config
    users:
      - name: main
        ssh_authorized_keys:
          - ${MainSshKey}
      - name: admin
        ssh_authorized_keys:
          - ${AdminSshKey}
  - MainSshKey:
      Fn::FindInMap:
        - !FindInMap
          - AccountToStage
          - !Ref "AWS::AccountId"
          - StageName
        - !Ref "AWS::Region"
        - MainSshKey
Also, if you need to substitute two values (for example MainSshKey and AdminSshKey), you can use the same list-form Sub construct, as in the example below.
Fn::Base64: !Sub
  - |+
    #cloud-config
    users:
      - name: main
        ssh_authorized_keys:
          - ${MainSshKey}
      - name: admin
        ssh_authorized_keys:
          - ${AdminSshKey}
  - MainSshKey:
      Fn::FindInMap:
        - !FindInMap
          - AccountToStage
          - !Ref "AWS::AccountId"
          - StageName
        - !Ref "AWS::Region"
        - MainSshKey
    AdminSshKey:
      Fn::FindInMap:
        - !FindInMap
          - AccountToStage
          - !Ref "AWS::AccountId"
          - StageName
        - !Ref "AWS::Region"
        - AdminSshKey
In this template we are creating node groups that are to be deployed into an existing EKS cluster and VPC. The stack deploys successfully, but I don't see the node groups inside my existing EKS cluster.
AWSTemplateFormatVersion: "2010-09-09"
Description: Amazon EKS - Node Group
Metadata:
"AWS::CloudFormation::Interface":
ParameterGroups:
- Label:
default: EKS Cluster
Parameters:
- ClusterName
- ClusterControlPlaneSecurityGroup
- Label:
default: Worker Node Configuration
Parameters:
- NodeGroupName
- NodeAutoScalingGroupMinSize
- NodeAutoScalingGroupDesiredCapacity
- NodeAutoScalingGroupMaxSize
- NodeInstanceType
- NodeImageIdSSMParam
- NodeImageId
- NodeVolumeSize
- KeyName
- BootstrapArguments
- Label:
default: Worker Network Configuration
Parameters:
- VpcId
- Subnets
Parameters:
BootstrapArguments:
Type: String
Default: ""
Description: "Arguments to pass to the bootstrap script. See files/bootstrap.sh in https://github.com/awslabs/amazon-eks-ami"
ClusterControlPlaneSecurityGroup:
Type: "AWS::EC2::SecurityGroup::Id"
Description: The security group of the cluster control plane.
ClusterName:
Type: String
Description: The cluster name provided when the cluster was created. If it is incorrect, nodes will not be able to join the cluster.
KeyName:
Type: "AWS::EC2::KeyPair::KeyName"
Description: The EC2 Key Pair to allow SSH access to the instances
NodeAutoScalingGroupDesiredCapacity:
Type: Number
Default: 3
Description: Desired capacity of Node Group ASG.
NodeAutoScalingGroupMaxSize:
Type: Number
Default: 4
Description: Maximum size of Node Group ASG. Set to at least 1 greater than NodeAutoScalingGroupDesiredCapacity.
NodeAutoScalingGroupMinSize:
Type: Number
Default: 1
Description: Minimum size of Node Group ASG.
NodeGroupName:
Type: String
Description: Unique identifier for the Node Group.
NodeImageId:
Type: String
Default: ""
Description: (Optional) Specify your own custom image ID. This value overrides any AWS Systems Manager Parameter Store value specified above.
NodeImageIdSSMParam:
Type: "AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>"
Default: /aws/service/eks/optimized-ami/1.14/amazon-linux-2/recommended/image_id
Description: AWS Systems Manager Parameter Store parameter of the AMI ID for the worker node instances.
NodeInstanceType:
Type: String
Default: t3.medium
AllowedValues:
- a1.medium
- a1.large
- a1.xlarge
- a1.2xlarge
- a1.4xlarge
- c1.medium
- c1.xlarge
- c3.large
- c3.xlarge
- c3.2xlarge
- c3.4xlarge
- c3.8xlarge
- c4.large
- c4.xlarge
- c4.2xlarge
- c4.4xlarge
- c4.8xlarge
- c5.large
- c5.xlarge
- c5.2xlarge
- c5.4xlarge
- c5.9xlarge
- c5.12xlarge
- c5.18xlarge
- c5.24xlarge
- c5.metal
- c5d.large
- c5d.xlarge
- c5d.2xlarge
- c5d.4xlarge
- c5d.9xlarge
- c5d.18xlarge
- c5n.large
- c5n.xlarge
- c5n.2xlarge
- c5n.4xlarge
- c5n.9xlarge
- c5n.18xlarge
- cc2.8xlarge
- cr1.8xlarge
- d2.xlarge
- d2.2xlarge
- d2.4xlarge
- d2.8xlarge
- f1.2xlarge
- f1.4xlarge
- f1.16xlarge
- g2.2xlarge
- g2.8xlarge
- g3s.xlarge
- g3.4xlarge
- g3.8xlarge
- g3.16xlarge
- h1.2xlarge
- h1.4xlarge
- h1.8xlarge
- h1.16xlarge
- hs1.8xlarge
- i2.xlarge
- i2.2xlarge
- i2.4xlarge
- i2.8xlarge
- i3.large
- i3.xlarge
- i3.2xlarge
- i3.4xlarge
- i3.8xlarge
- i3.16xlarge
- i3.metal
- i3en.large
- i3en.xlarge
- i3en.2xlarge
- i3en.3xlarge
- i3en.6xlarge
- i3en.12xlarge
- i3en.24xlarge
- m1.small
- m1.medium
- m1.large
- m1.xlarge
- m2.xlarge
- m2.2xlarge
- m2.4xlarge
- m3.medium
- m3.large
- m3.xlarge
- m3.2xlarge
- m4.large
- m4.xlarge
- m4.2xlarge
- m4.4xlarge
- m4.10xlarge
- m4.16xlarge
- m5.large
- m5.xlarge
- m5.2xlarge
- m5.4xlarge
- m5.8xlarge
- m5.12xlarge
- m5.16xlarge
- m5.24xlarge
- m5.metal
- m5a.large
- m5a.xlarge
- m5a.2xlarge
- m5a.4xlarge
- m5a.8xlarge
- m5a.12xlarge
- m5a.16xlarge
- m5a.24xlarge
- m5ad.large
- m5ad.xlarge
- m5ad.2xlarge
- m5ad.4xlarge
- m5ad.12xlarge
- m5ad.24xlarge
- m5d.large
- m5d.xlarge
- m5d.2xlarge
- m5d.4xlarge
- m5d.8xlarge
- m5d.12xlarge
- m5d.16xlarge
- m5d.24xlarge
- m5d.metal
- p2.xlarge
- p2.8xlarge
- p2.16xlarge
- p3.2xlarge
- p3.8xlarge
- p3.16xlarge
- p3dn.24xlarge
- g4dn.xlarge
- g4dn.2xlarge
- g4dn.4xlarge
- g4dn.8xlarge
- g4dn.12xlarge
- g4dn.16xlarge
- g4dn.metal
- r3.large
- r3.xlarge
- r3.2xlarge
- r3.4xlarge
- r3.8xlarge
- r4.large
- r4.xlarge
- r4.2xlarge
- r4.4xlarge
- r4.8xlarge
- r4.16xlarge
- r5.large
- r5.xlarge
- r5.2xlarge
- r5.4xlarge
- r5.8xlarge
- r5.12xlarge
- r5.16xlarge
- r5.24xlarge
- r5.metal
- r5a.large
- r5a.xlarge
- r5a.2xlarge
- r5a.4xlarge
- r5a.8xlarge
- r5a.12xlarge
- r5a.16xlarge
- r5a.24xlarge
- r5ad.large
- r5ad.xlarge
- r5ad.2xlarge
- r5ad.4xlarge
- r5ad.12xlarge
- r5ad.24xlarge
- r5d.large
- r5d.xlarge
- r5d.2xlarge
- r5d.4xlarge
- r5d.8xlarge
- r5d.12xlarge
- r5d.16xlarge
- r5d.24xlarge
- r5d.metal
- t1.micro
- t2.nano
- t2.micro
- t2.small
- t2.medium
- t2.large
- t2.xlarge
- t2.2xlarge
- t3.nano
- t3.micro
- t3.small
- t3.medium
- t3.large
- t3.xlarge
- t3.2xlarge
- t3a.nano
- t3a.micro
- t3a.small
- t3a.medium
- t3a.large
- t3a.xlarge
- t3a.2xlarge
- u-6tb1.metal
- u-9tb1.metal
- u-12tb1.metal
- x1.16xlarge
- x1.32xlarge
- x1e.xlarge
- x1e.2xlarge
- x1e.4xlarge
- x1e.8xlarge
- x1e.16xlarge
- x1e.32xlarge
- z1d.large
- z1d.xlarge
- z1d.2xlarge
- z1d.3xlarge
- z1d.6xlarge
- z1d.12xlarge
- z1d.metal
ConstraintDescription: Must be a valid EC2 instance type
Description: EC2 instance type for the node instances
NodeVolumeSize:
Type: Number
Default: 20
Description: Node volume size
Subnets:
Type: "List<AWS::EC2::Subnet::Id>"
Description: The subnets where workers can be created.
VpcId:
Type: "AWS::EC2::VPC::Id"
Description: The VPC of the worker instances
Conditions:
HasNodeImageId: !Not
- "Fn::Equals":
- Ref: NodeImageId
- ""
Resources:
NodeInstanceRole:
Type: "AWS::IAM::Role"
Properties:
AssumeRolePolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: Allow
Principal:
Service:
- ec2.amazonaws.com
Action:
- "sts:AssumeRole"
ManagedPolicyArns:
- "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
- "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
- "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
Path: /
NodeInstanceProfile:
Type: "AWS::IAM::InstanceProfile"
Properties:
Path: /
Roles:
- Ref: NodeInstanceRole
NodeSecurityGroup:
Type: "AWS::EC2::SecurityGroup"
Properties:
GroupDescription: Security group for all nodes in the cluster
Tags:
- Key: !Sub kubernetes.io/cluster/${ClusterName}
Value: owned
VpcId: !Ref VpcId
NodeSecurityGroupIngress:
Type: "AWS::EC2::SecurityGroupIngress"
DependsOn: NodeSecurityGroup
Properties:
Description: Allow node to communicate with each other
FromPort: 0
GroupId: !Ref NodeSecurityGroup
IpProtocol: "-1"
SourceSecurityGroupId: !Ref NodeSecurityGroup
ToPort: 65535
ClusterControlPlaneSecurityGroupIngress:
Type: "AWS::EC2::SecurityGroupIngress"
DependsOn: NodeSecurityGroup
Properties:
Description: Allow pods to communicate with the cluster API Server
FromPort: 443
GroupId: !Ref ClusterControlPlaneSecurityGroup
IpProtocol: tcp
SourceSecurityGroupId: !Ref NodeSecurityGroup
ToPort: 443
ControlPlaneEgressToNodeSecurityGroup:
Type: "AWS::EC2::SecurityGroupEgress"
DependsOn: NodeSecurityGroup
Properties:
Description: Allow the cluster control plane to communicate with worker Kubelet and pods
DestinationSecurityGroupId: !Ref NodeSecurityGroup
FromPort: 1025
GroupId: !Ref ClusterControlPlaneSecurityGroup
IpProtocol: tcp
ToPort: 65535
ControlPlaneEgressToNodeSecurityGroupOn443:
Type: "AWS::EC2::SecurityGroupEgress"
DependsOn: NodeSecurityGroup
Properties:
Description: Allow the cluster control plane to communicate with pods running extension API servers on port 443
DestinationSecurityGroupId: !Ref NodeSecurityGroup
FromPort: 443
GroupId: !Ref ClusterControlPlaneSecurityGroup
IpProtocol: tcp
ToPort: 443
NodeSecurityGroupFromControlPlaneIngress:
Type: "AWS::EC2::SecurityGroupIngress"
DependsOn: NodeSecurityGroup
Properties:
Description: Allow worker Kubelets and pods to receive communication from the cluster control plane
FromPort: 1025
GroupId: !Ref NodeSecurityGroup
IpProtocol: tcp
SourceSecurityGroupId: !Ref ClusterControlPlaneSecurityGroup
ToPort: 65535
NodeSecurityGroupFromControlPlaneOn443Ingress:
Type: "AWS::EC2::SecurityGroupIngress"
DependsOn: NodeSecurityGroup
Properties:
Description: Allow pods running extension API servers on port 443 to receive communication from cluster control plane
FromPort: 443
GroupId: !Ref NodeSecurityGroup
IpProtocol: tcp
SourceSecurityGroupId: !Ref ClusterControlPlaneSecurityGroup
ToPort: 443
The problem seems to be somewhere here:
NodeLaunchConfig:
Type: "AWS::AutoScaling::LaunchConfiguration"
Properties:
AssociatePublicIpAddress: "true"
BlockDeviceMappings:
- DeviceName: /dev/xvda
Ebs:
DeleteOnTermination: true
VolumeSize: !Ref NodeVolumeSize
VolumeType: gp2
IamInstanceProfile: !Ref NodeInstanceProfile
ImageId: !If
- HasNodeImageId
- Ref: NodeImageId
- Ref: NodeImageIdSSMParam
InstanceType: !Ref NodeInstanceType
KeyName: !Ref KeyName
SecurityGroups:
- Ref: NodeSecurityGroup
UserData: !Base64
"Fn::Sub": |
#!/bin/bash
set -o xtrace
/etc/eks/bootstrap.sh ${ClusterName} ${BootstrapArguments}
/opt/aws/bin/cfn-signal --exit-code $? \
--stack ${AWS::StackName} \
--resource NodeGroup \
--region ${AWS::Region}
Or maybe here:
NodeGroup:
Type: "AWS::AutoScaling::AutoScalingGroup"
Properties:
DesiredCapacity: !Ref NodeAutoScalingGroupDesiredCapacity
LaunchConfigurationName: !Ref NodeLaunchConfig
MaxSize: !Ref NodeAutoScalingGroupMaxSize
MinSize: !Ref NodeAutoScalingGroupMinSize
Tags:
- Key: Name
PropagateAtLaunch: "true"
Value: !Sub ${ClusterName}-${NodeGroupName}-Node
- Key: !Sub kubernetes.io/cluster/${ClusterName}
PropagateAtLaunch: "true"
Value: owned
VPCZoneIdentifier: !Ref Subnets
UpdatePolicy:
AutoScalingRollingUpdate:
MaxBatchSize: "1"
MinInstancesInService: !Ref NodeAutoScalingGroupDesiredCapacity
PauseTime: PT5M
Outputs:
NodeInstanceRole:
Description: The node instance role
Value: !GetAtt NodeInstanceRole.Arn
NodeSecurityGroup:
Description: The security group for the node group
Value: !Ref NodeSecurityGroup
Although the template gets deployed, the node groups aren't visible in my EKS cluster. Please let me know if there are any updates to be made so that the node groups get deployed into the cluster.
Okay, I was going through the same problem. The issue is with the resource type you chose for the node group: it should be AWS::EKS::Nodegroup, while your template only creates an Auto Scaling group of self-managed workers, which never shows up as a node group in the EKS console. Change the type and your node group will be visible in the cluster.
Here is the link for the same:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-eks-nodegroup.html
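For illustration, a minimal managed node group resource might look roughly like the sketch below. It reuses the parameter names and the NodeInstanceRole from the template above, but it is only a sketch and has not been validated against that exact stack:

# Sketch: AWS::EKS::Nodegroup creates the workers through EKS itself,
# so they appear as a node group in the cluster, unlike a plain ASG.
ManagedNodeGroup:
  Type: AWS::EKS::Nodegroup
  Properties:
    ClusterName: !Ref ClusterName
    NodegroupName: !Ref NodeGroupName
    NodeRole: !GetAtt NodeInstanceRole.Arn
    Subnets: !Ref Subnets
    InstanceTypes:
      - !Ref NodeInstanceType
    ScalingConfig:
      MinSize: !Ref NodeAutoScalingGroupMinSize
      DesiredSize: !Ref NodeAutoScalingGroupDesiredCapacity
      MaxSize: !Ref NodeAutoScalingGroupMaxSize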
I have this configuration:
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/log/messages
    - /var/log/secure
    - /var/log/audit/audit.log
    - /var/log/yum.log
    - /root/.bash_history
    - /var/log/neutron/*.log
    - /var/log/nova/*.log
    - /var/log/keystone/keystone.log
    - /var/log/httpd/error_log
    - /var/log/mariadb/mariadb.log
    - /var/log/glance/*.log
    - /var/log/rabbitmq/*.log
  ignore_older: 72h
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
output.logstash:
  hosts: ["sdsds"]
I would like to tag a log line if it contains the following pattern:
message:INFOHTTP*200*
I want to create a query in Kibana to filter based on an HTTP response code tag. How can I create this? Can you help me create the condition with tags?
These response codes are in the nova-api and neutron server logs.
I don't want to actually filter out the logs; I want to keep everything in Elasticsearch and just add a tag to these kinds of log lines.
UPDATE:
I managed to figure out something, but I'm not sure it is the best way to list them, because I have many response codes:
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/log/messages
    - /var/log/secure
    - /var/log/audit/audit.log
    - /var/log/yum.log
    - /root/.bash_history
    - /var/log/neutron/*.log
    - /var/log/keystone/keystone.log
    - /var/log/httpd/error_log
    - /var/log/mariadb/mariadb.log
    - /var/log/glance/*.log
    - /var/log/rabbitmq/*.log
- type: log
  enabled: true
  paths:
    - /var/log/nova/*.log
  include_lines: ["status: 200"]
  fields_under_root: true
  fields:
    httpresponsecode: 200
  ignore_older: 72h
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
output.logstash:
Do I have to repeat these four lines multiple times?
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/log/messages
    - /var/log/secure
    - /var/log/audit/audit.log
    - /var/log/yum.log
    - /root/.bash_history
    - /var/log/keystone/keystone.log
    - /var/log/neutron/*.log
    - /var/log/httpd/error_log
    - /var/log/mariadb/mariadb.log
    - /var/log/glance/*.log
    - /var/log/rabbitmq/*.log
- type: log
  enabled: true
  paths:
    - /var/log/nova/*.log
  fields_under_root: true
  include_lines: ["status: 200"]
  fields:
    httpresponsecode: 200
- type: log
  enabled: true
  paths:
    - /var/log/nova/*.log
  fields_under_root: true
  include_lines: ["status: 202"]
  fields:
    httpresponsecode: 202
- type: log
  enabled: true
  paths:
    - /var/log/nova/*.log
  fields_under_root: true
  include_lines: ["status: 204"]
  fields:
    httpresponsecode: 204
- type: log
  enabled: true
  paths:
    - /var/log/nova/*.log
  fields_under_root: true
  include_lines: ["status: 207"]
  fields:
    httpresponsecode: 207
- type: log
  enabled: true
  paths:
    - /var/log/nova/*.log
  fields_under_root: true
  include_lines: ["status: 403"]
  fields:
    httpresponsecode: 403
- type: log
  enabled: true
  paths:
    - /var/log/nova/*.log
  fields_under_root: true
  include_lines: ["status: 404"]
  fields:
    httpresponsecode: 404
- type: log
  enabled: true
  paths:
    - /var/log/nova/*.log
  fields_under_root: true
  include_lines: ["status: 500"]
  fields:
    httpresponsecode: 500
- type: log
  enabled: true
  paths:
    - /var/log/nova/*.log
  fields_under_root: true
  include_lines: ["HTTP 503"]
  fields:
    httpresponsecode: 503
  ignore_older: 72h
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
output.logstash:
  hosts: [
What is the best way to do this for multiple files and multiple response codes?
UPDATE2:
My solution doesn't work: at the beginning it ships logs, and then it stops completely.
I hope you can help me.
I hope I understood your question; in that case, I would go the grok route in Logstash.
If you know that your status field always looks like this, then why not use a pattern like this:
match => {
"message" => "<prepending patterns> status: %{NUMBER:httpresponsecode} <patterns that follow>"
}
This would create a field called httpresponsecode which is filled with the number that follows the string "status: ".
However, based on the ECS naming conventions, I'd rather call the field something else, like http.response.status(.keyword).
As for your specified log line, a valid grok pattern might look like this:
%{TIMESTAMP_ISO8601:timestamp} %{NONNEGINT:message.number} %{WORD:loglevel} %{DATA:application} \[-\] %{IP:source.ip} "(?:%{WORD:verb} %{NOTSPACE:http.request.path}(?: HTTP/%{NUMBER:http.version})?|%{DATA:rawrequest})" status: %{NONNEGINT:http.response.status} len: %{NUMBER:http.response.length} time: %{NUMBER:http.response.time}
Find the grok patterns for Logstash in the Logstash repository.
Use the Grok Debugger included in Kibana to see how your pattern would match.
Rename the fields accordingly.
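If you would rather keep the tagging on the Filebeat side, as in your UPDATE, one possible sketch is a single prospector whose include_lines regular expressions cover all the codes at once, plus a tag, instead of one prospector per code. Keep in mind that include_lines drops non-matching lines from that prospector, and that pointing several prospectors at the same files is best avoided, which is why classifying in Logstash as above is usually the cleaner route:

filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/log/nova/*.log
  fields_under_root: true
  # One pattern list instead of one prospector per code; only lines
  # containing "status: <3-digit code>" or "HTTP 503" are shipped here.
  include_lines: ['status: [0-9]{3}', 'HTTP 503']
  tags: ["http-status"]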
I am creating an application where I need to trigger an email on a particular event.
When I run the following command,
python -m elastalert.elastalert --verbose --rule myrules\myrule.yml
I get the following error:
ERROR:root:Error while running alert email: Error connecting to SMTP host: [Error 10013] An attempt was made to access a socket in a way forbidden by its access permissions
Here is the content of my rule file:
es_host: localhost
es_port: 9200
name: Log Level Test
type: frequency
index: testindexv4
num_events: 1
timeframe:
  hours: 4
filter:
- term:
    log_level.keyword: "ERROR"
- query:
    query_string:
      query: "log_level.keyword: ERROR"
alert:
- "email"
email:
- "<myMailId>#gmail.com"
Here is the content of the config.yaml file:
rules_folder: myrules
run_every:
  seconds: 2
buffer_time:
  seconds: 10
es_host: localhost
es_port: 9200
writeback_index: elastalert_status
alert_time_limit:
  days: 2
Here is my SMTP alert configuration:
alert:
- email
email:
- "<myMailId>#gmail.com"
smtp_host: "smtp.gmail.com"
smtp_port: 465
smtp_ssl: true
from_addr: "<otherMailId>#gmail.com"
smtp_auth_file: "smtp_auth_user.yaml"
Here is the content of the smtp_auth_user file:
user: "<myMailId>#gmail.com"
password: "<password>"
What change do I need to make to resolve the issue?
I have a CloudFormation template that appears to be valid when I validate it: I get no errors using the linter tool in Atom, and I have also used an online YAML validation tool which confirms it is correct. But when I go to deploy the template in CloudFormation, it fails with the error:
Template validation error: Template format error: Unresolved resource dependencies [AgentserviceSNSTopic] in the Resources block of the template
I can't see any errors (I am not sure how the formatting will come out, but the template is below).
AWSTemplateFormatVersion: '2010-09-09'
Description: AgentService Web infra
Outputs:
AgentServiceFQDN:
Value:
'Fn::GetAtt':
- AgentServiceELB
- DNSName
Parameters:
AZ:
Default: 'ap-southeast-2a, ap-southeast-2b'
Description: >-
Comma delimited list of AvailabilityZones where the instances will be
created
Type: CommaDelimitedList
InstanceProfile:
Default: >-
arn:aws:iam::112888586165:instance-profile/AdvanceCodeDeployInstanceProfile
Description: >-
Use the full ARN for SimpleCodeDeployInstanceProfile or
AdvancedCodeDeployInstanceProfile
Type: String
InstanceType:
ConstraintDescription: 'Must be a valid EC2 instance type, such as t2.medium'
Default: t2.medium
Description: Provide InstanceType to be used
Type: String
KeyName:
ConstraintDescription: The name of an existing EC2 KeyPair.
Default: LMBRtraining
Description: Name of an existing EC2 KeyPair to enable SSH access to the instances
Type: 'AWS::EC2::KeyPair::KeyName'
PublicSubnets:
Default: 'subnet-bb0a3ade,subnet-fedd8389'
Description: Comma delimited list of public subnets
Type: CommaDelimitedList
VPCID:
Default: vpc-a18eccc4
Description: VPC ID
Type: String
WindowsAMIID:
Default: ami-5a989d39
Description: Windows AMI ID with IIS
Type: String
myIP:
Default: 0.0.0.0/0
Description: 'Enter your IP address in CIDR notation, e.g. 100.150.200.225/32'
Type: String
Resources:
AgentServiceASG:
Properties:
AvailabilityZones:
Ref: AZ
DesiredCapacity: '2'
HealthCheckGracePeriod: '600'
HealthCheckType: ELB
LaunchConfigurationName:
Ref: AgentServiceLaunchConfig
LoadBalancerNames:
- Ref: AgentServiceELB
MaxSize: '2'
MinSize: '2'
NotificationConfiguration:
NotificationTypes:
- 'autoscaling:EC2_INSTANCE_LAUNCH'
- 'autoscaling:EC2_INSTANCE_LAUNCH_ERROR'
- 'autoscaling:EC2_INSTANCE_TERMINATE'
- 'autoscaling:EC2_INSTANCE_TERMINATE_ERROR'
TopicARN:
Ref: AgentServiceSNSTopic
Tags:
- Key: Name
PropagateAtLaunch: 'true'
Value: AgentServiceServer
VPCZoneIdentifier:
Ref: PublicSubnets
Type: 'AWS::AutoScaling::AutoScalingGroup'
AgentServiceAutoscaleDownPolicy:
Properties:
AdjustmentType: ChangeInCapacity
AutoScalingGroupName:
Ref: AgentServiceASG
Cooldown: '300'
ScalingAdjustment: '-1'
Type: 'AWS::AutoScaling::ScalingPolicy'
AgentServiceAutoscaleUpPolicy:
Properties:
AdjustmentType: ChangeInCapacity
AutoScalingGroupName:
Ref: AgentServiceASG
Cooldown: '300'
ScalingAdjustment: '1'
Type: 'AWS::AutoScaling::ScalingPolicy'
AgentServiceCloudWatchCPUAlarmHigh:
Properties:
AlarmActions:
- Ref: AgentServiceAutoscaleUpPolicy
- Ref: AgentServiceSNSTopic
AlarmDescription: SNS Notification and scale up if CPU Util is Higher than 90% for 10 mins
ComparisonOperator: GreaterThanThreshold
Dimensions:
- Name: AutoScalingGroupName
Value:
Ref: AgentServiceASG
EvaluationPeriods: '2'
MetricName: CPUUtilization
Namespace: AWS/EC2
Period: '300'
Statistic: Average
Threshold: '90'
Type: 'AWS::CloudWatch::Alarm'
AgentServiceCloudWatchCPUAlarmLow:
Properties:
AlarmActions:
- Ref: AgentServiceAutoscaleDownPolicy
- Ref: AgentserviceSNSTopic
AlarmDescription: SNS Notification and scale down if CPU Util is less than 70% for 10 mins
ComparisonOperator: LessThanThreshold
Dimensions:
- Name: AutoScalingGroupName
Value:
Ref: AgentServiceASG
EvaluationPeriods: '2'
MetricName: CPUUtilization
Namespace: AWS/EC2
Period: '300'
Statistic: Average
Threshold: '70'
Type: 'AWS::CloudWatch::Alarm'
AgentServiceELB:
Properties:
ConnectionDrainingPolicy:
Enabled: 'true'
Timeout: '60'
CrossZone: true
HealthCheck:
HealthyThreshold: '3'
Interval: '15'
Target: 'HTTP:80/index.html'
Timeout: '5'
UnhealthyThreshold: '3'
Listeners:
- InstancePort: '80'
InstanceProtocol: HTTP
LoadBalancerPort: '80'
Protocol: HTTP
LoadBalancerName: AgentServiceELB
Scheme: internet-facing
SecurityGroups:
- Ref: AgentServiceSecurityGroup
Subnets:
Ref: PublicSubnets
Tags:
- Key: Network
Value: public
Type: 'AWS::ElasticLoadBalancing::LoadBalancer'
AgentServiceLaunchConfig:
Properties:
AssociatePublicIpAddress: 'true'
IamInstanceProfile:
Ref: InstanceProfile
ImageId:
Ref: WindowsAMIID
InstanceType:
Ref: InstanceType
KeyName:
Ref: KeyName
SecurityGroups:
- Ref: AgentServiceSecurityGroup
UserData:
'Fn::Base64':
'Fn::Join':
- ''
- - |
<script>
- |
echo hello world > c:\\inetpub\\wwwroot\\index.html
- |
hostname >> c:\\inetpub\\wwwroot\\index.html
- "if not exist \\"c:\\temp\\" mkdir c:\\temp\\n"
- >
powershell.exe -Command Read-S3Object -BucketName
aws-codedeploy-us-east-1/latest -Key codedeploy-agent.msi -File
c:\\temp\\codedeploy-agent.msi
- >
c:\\temp\\codedeploy-agent.msi /quiet /l
c:\\temp\\host-agent-install-log.txt
- |
powershell.exe -Command Get-Service -Name codedeployagent
- |
</script>
Type: 'AWS::AutoScaling::LaunchConfiguration'
AgentServiceSNSTopic:
Type: 'AWS::SNS::Topic'
AgentServiceSecurityGroup:
Properties:
GroupDescription: AgentServiceSecurityGroup
InstanceAccessHTTPS:
Properties:
CidrIp: 0.0.0.0/0
FromPort: '443'
GroupId: AgentServiceSecurityGroup
IpProtocol: tcp
ToPort: '443'
Type: 'AWS::EC2::SecurityGroupIngress'
InstanceAccessPSremote:
Properties:
CidrIp: 198.18.0.0/24
FromPort: '5985'
GroupId: AgentServiceSecurityGroup
IpProtocol: tcp
ToPort: '5985'
Type: 'AWS::EC2::SecurityGroupIngress'
InstanceAccessRDP:
Properties:
CidrIp: 0.0.0.0/0
FromPort: '3389'
GroupId: AgentServiceSecurityGroup
IpProtocol: tcp
ToPort: '3389'
Type: 'AWS::EC2::SecurityGroupIngress'
InstanceAccessSMB:
Properties:
CidrIp: 198.18.0.0/24
FromPort: '445'
GroupId: AgentServiceSecurityGroup
IpProtocol: tcp
ToPort: '445'
Type: 'AWS::EC2::SecurityGroupIngress'
VpcId:
Ref: VPCID
Type: 'AWS::EC2::SecurityGroup'
It would be interesting to know which online validator accepted your "valid YAML".
The Online YAML Parser and YAML Lint both complain when you use your YAML as input. After changing the line that these YAML parsers indicate as problematic:
- "if not exist \\"c:\\temp\\" mkdir c:\\temp\\n"
which YAML reads as the quoted scalar "if not exist \\" followed by more material (c:\\temp\\" mkdir ...), into:
- "if not exist \"c:\\temp\" mkdir c:\\temp\n"
in which the inner quotes are escaped, or into the more readable:
- |
if not exist "c:\temp" mkdir c:\temp
The Code Beautify YAML Validator complains that your YAML has problems, but, as usual, it cannot deal with the corrected YAML either, so don't use it.
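As an aside on the same readability point, the whole Fn::Join could also be replaced by a single literal block. The sketch below reproduces the commands from the template above with the backslash escaping normalised to single backslashes; no ${...} substitutions are used, so plain Fn::Base64 is enough. Treat it as an untested illustration:

UserData:
  Fn::Base64: |
    <script>
    echo hello world > c:\inetpub\wwwroot\index.html
    hostname >> c:\inetpub\wwwroot\index.html
    if not exist "c:\temp" mkdir c:\temp
    powershell.exe -Command Read-S3Object -BucketName aws-codedeploy-us-east-1/latest -Key codedeploy-agent.msi -File c:\temp\codedeploy-agent.msi
    c:\temp\codedeploy-agent.msi /quiet /l c:\temp\host-agent-install-log.txt
    powershell.exe -Command Get-Service -Name codedeployagent
    </script>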