I am trying to push the memory usage of my EC2 instance to CloudWatch. I have the CloudWatch agent running on the instance, and the config file is stored in AWS SSM Parameter Store. This is how it looks:
{
"agent": {
"metrics_collection_interval": 60,
"run_as_user": "root"
},
"logs": {
"logs_collected": {
"files": {
"collect_list": [
{
"file_path": "/var/log/my-service/*.log",
"log_group_name": "my-service",
"log_stream_name": "{instance_id}"
}
]
}
}
},
"metrics": {
"append_dimensions": {
"AutoScalingGroupName": "${aws:AutoScalingGroupName}",
"ImageId": "${aws:ImageId}",
"InstanceId": "${aws:InstanceId}",
"InstanceType": "${aws:InstanceType}"
},
"metrics_collected": {
"disk": {
"measurement": [
"used_percent"
],
"metrics_collection_interval": 60,
"resources": [
"*"
]
},
"mem": {
"measurement": [
"mem_used_percent"
],
"metrics_collection_interval": 60
},
"statsd": {
"metrics_aggregation_interval": 10,
"metrics_collection_interval": 10,
"service_address": ":8125"
}
}
}
}
I am starting the CloudWatch agent with this command:
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -c ssm:AmazonCloudWatch-linux -s
The logs are being pushed from the locations specified in the config, but I do not see the memory metric being pushed. If I understand correctly, I should see the IP address of my instance under Metrics --> CWAgent --> host. Unfortunately, I do not see this.
I checked the CloudWatch agent logs at /opt/aws/amazon-cloudwatch-agent/logs/amazon-cloudwatch-agent.log, but I do not see any entries saying that it pushed, or tried to push, any metric.
Any help is much appreciated.
I believe you will see the instance name there, not the IP address.
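Either way, you can confirm from the CLI whether the agent has published anything at all; if this returns an empty list, the agent never pushed the metric (assumes the AWS CLI is configured with `cloudwatch:ListMetrics` permission):

```shell
# List mem_used_percent metrics published to the CWAgent namespace.
# An empty "Metrics" array means the agent never published the metric.
aws cloudwatch list-metrics \
  --namespace CWAgent \
  --metric-name mem_used_percent
```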
While running AWSFIS-Run-CPU-Stress I am getting the error below:
Unable to start action, due to a platform mismatch between the specified document and the targeted instances.
I am trying this on a Windows EC2 instance. My experiment template looks like this (confidential server info removed):
{
"description": "Test CPU stress predefined SSM document",
"targets": {
"testInstance": {
"resourceType": "aws:ec2:instance",
"resourceArns": [
"arn:aws:ec2:region:123456789012:instance/instance_id"
],
"selectionMode": "ALL"
}
},
"actions": {
"runCpuStress": {
"actionId": "aws:ssm:send-command",
"parameters": {
"documentArn": "arn:aws:ssm:region::document/AWSFIS-Run-CPU-Stress",
"documentParameters": "{\"DurationSeconds\":\"120\"}",
"duration": "PT5M"
},
"targets": {
"Instances": "testInstance"
}
}
},
"stopConditions": [
{
"source": "aws:cloudwatch:alarm",
"value": "arn:aws:cloudwatch:region:123456789012:alarm:awsec2-instance_id-GreaterThanOrEqualToThreshold-CPUUtilization"
}
],
"roleArn": "arn:aws:iam::123456789012:role/AllowFISSSMActions",
"tags": {}
}
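The error message points at the document's supported platforms, so one way to check them is to describe the SSM document from the AWS CLI (assumes SSM read permissions; if the result only lists "Linux", the document cannot be targeted at a Windows instance):

```shell
# Show the platform types the FIS SSM document supports.
aws ssm describe-document \
  --name AWSFIS-Run-CPU-Stress \
  --query 'Document.PlatformTypes'
```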
I am trying to get a list of all PostgreSQL instances from an Alibaba Cloud account using the Aliyun CLI. But there are no tags in the output, and I would like to filter by tags:
aliyun rds DescribeDBInstances
Output (as from aliyun docs):
{
"RequestId": "1AD222E9-E606-4A42-BF6D-8A4442913CEF",
"PageNumber": 1,
"PageRecordCount": 10,
"TotalRecordCount": 100,
"Items": [
{
"VpcId": "vpc-uf6f7l4fg90xxxxxxx",
"DedicatedHostIdForLog": "dh-bpxxxx",
"CreateTime": "2018-11-05T11:26:02Z",
"PayType": "Postpaid",
"DedicatedHostNameForLog": "testlog",
"MutriORsignle": true,
"DedicatedHostGroupName": "testhostgroup",
"EngineVersion": "5.7",
"DedicatedHostGroupId": "dhg-7a9xxxxxxxx",
"VpcName": "test-huadong",
"DedicatedHostZoneIdForMaster": "cn-hangzhou-c",
"ConnectionString": "rm-uf6wjk5xxxxxxx.mysql.rds.aliyuncs.com",
"InstanceNetworkType": "Classic",
"MasterInstanceId": "rm-uf6wjk5xxxxxxxxxx",
"ExpireTime": "2019-02-27T16:00:00Z",
"DestroyTime": "2018-11-05T11:26:02Z",
"GuardDBInstanceId": "rm-uf64zsuxxxxxxxxxx",
"DedicatedHostNameForMaster": "testmaster",
"ZoneId": "cn-hangzhou-a",
"TipsLevel": 1,
"DBInstanceId": "rm-uf6wjk5xxxxxxxxxx",
"DedicatedHostIdForMaster": "dh-bpxxxx",
"TempDBInstanceId": "rm-uf64zsuxxxxxxxxxx",
"DBInstanceStorageType": "ModuleList.4.ModuleCode",
"ConnectionMode": "Standard",
"LockMode": "Unlock",
"GeneralGroupName": "TestGroup",
"VpcCloudInstanceId": "rm-uf6wjk5xxxxxxx",
"DedicatedHostZoneIdForSlave": "cn-hangzhou-d",
"Tips": "一切正常",
"DedicatedHostZoneIdForLog": "cn-hangzhou-b",
"DedicatedHostNameForSlave": "testslave",
"DBInstanceDescription": "测试数据库",
"DBInstanceNetType": "Internet",
"DBInstanceType": "Primary",
"LockReason": "instance_expired",
"DBInstanceStatus": "Running",
"RegionId": "cn-hangzhou",
"VSwitchId": "vsw-uf6adz52c2pxxxxxxx",
"DedicatedHostIdForSlave": "dh-bpxxxx",
"ResourceGroupId": "rg-acfmyxxxxxxx",
"Category": "Basic",
"Engine": "MySQL",
"DBInstanceClass": "rds.mys2.small",
"SwitchWeight": 100,
"ReadOnlyDBInstanceIds": [
{
"DBInstanceId": "rr-uf6wjk5xxxxxxx"
}
],
"DeletionProtection": true
}
],
"NextToken": "o7PORW5o2TJg**********"
}
As per the documentation, tags are given as a request parameter and are not in the return data. Is there any way to pull tags from DB instances and get them with the return data?
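One possible approach, since DescribeDBInstances accepts engine and tag filters as request parameters, is to filter server-side instead of trying to read tags from the response. A sketch (the region and the tag key/value are illustrative):

```shell
# Ask the API to return only PostgreSQL instances carrying a given tag;
# the Tags request parameter takes a JSON string of key/value pairs.
aliyun rds DescribeDBInstances \
  --RegionId cn-hangzhou \
  --Engine PostgreSQL \
  --Tags '{"env":"prod"}'
```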
I created a CloudWatch alarm and dashboard for the disk space on my EC2 instance, which is fine until autoscaling spins up a new instance and I have to manually edit the alarm and dashboard with the new instance ID.
I suspect the solution lies in setting up a CloudWatch rule for when a new instance starts, combined with Lambda, although I'd really like some help with the Lambda side of things, if anyone can advise.
#Cloudwatch Alarm
{
"region": "eu-west-1",
"metrics": [
[ "CWAgent", "disk_used_percent", "path", "/", "InstanceId", "i-1234567890", "AutoScalingGroupName", "my-autoscalinggrp", "ImageId", "ami-a1b2c3d4e5", "InstanceType", "t3.micro", "device", "nvme0n1p1", "fstype", "xfs", { "stat": "Average" } ]
],
"view": "timeSeries",
"stacked": false,
"period": 300,
"annotations": {
"horizontal": [
{
"label": "disk_used_percent >= 80 for 1 datapoints within 5 minutes",
"value": 80
}
]
},
"title": "disk-used-alarm"
}
#Dashboard
{
"metrics": [
[ "CWAgent", "disk_used_percent", "path", "/", "InstanceId", "i-1234567890", "AutoScalingGroupName", "my-autoscalinggrp", "ImageId", "ami-a1b2c3d4e5", "InstanceType", "t3.small", "device", "nvme0n1p1", "fstype", "xfs", { "label": "Server1" } ]
],
"view": "singleValue",
"stacked": false,
"region": "eu-west-1",
"stat": "Average",
"period": 300,
"title": "Disk Usage Dashboard"
}
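For the dashboard half of the problem, one option that avoids Lambda entirely is a metrics SEARCH expression, which matches instances dynamically instead of hard-coding an instance ID. A sketch using the ASG name from the question (note that SEARCH expressions work in dashboards and graphs, but not in alarms):

```shell
# Write a dashboard widget that SEARCHes for disk_used_percent across all
# instances in the auto scaling group, then sanity-check that it is valid JSON.
cat > dashboard-widget.json <<'EOF'
{
  "view": "timeSeries",
  "region": "eu-west-1",
  "period": 300,
  "title": "Disk Usage Dashboard",
  "metrics": [
    [ { "expression": "SEARCH('{CWAgent,AutoScalingGroupName,ImageId,InstanceId,InstanceType,device,fstype,path} MetricName=\"disk_used_percent\" AutoScalingGroupName=\"my-autoscalinggrp\"', 'Average', 300)" } ]
  ]
}
EOF
python3 -m json.tool dashboard-widget.json > /dev/null && echo "valid JSON"
```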
I'm generating a CloudFormation template with several AWS Lambda functions. As part of the CloudFormation template I also want to add a subscription filter so that CloudWatch logs will be sent to a different account.
However, since I don't know the names of the log groups in advance and couldn't find any way to get a reference to them, I wasn't able to solve it.
Is there a way to do so?
You can try using a custom resource to invoke your Lambda with a test payload or something like that, which will eventually create a log stream; then you can refer to that log group for the subscription, as mentioned by praveen earlier.
You can use a function to get the log group name. For example:
"LogGroupName": {
"Fn::Join": [
"",
[
"/aws/lambda/",
{
"Ref": "MyLambdaFunction"
}
]
]
}
Note that MyLambdaFunction is the logical ID of your Lambda function resource in the CloudFormation template.
The way Serverless does it should work for you: it creates a log group resource whose name matches what your Lambda function will use, and you can then reference that log group wherever you need it. You will have to give your Lambda function an explicit name rather than using the default naming behavior; you can use the stack name to keep it unique.
Something like:
{
"AWSTemplateFormatVersion": "2010-09-09",
"Resources": {
"FunctionLogGroup": {
"Type": "AWS::Logs::LogGroup",
"Properties": {
"LogGroupName": {
"Fn::Sub": "/aws/lambda/MyFunction-${AWS::StackName}"
}
}
},
"MyFunctionNameRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"ManagedPolicyArns": ["arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"],
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [{
"Action": ["sts:AssumeRole"],
"Effect": "Allow",
"Principal": {
"Service": ["lambda.amazonaws.com"]
}
}]
}
}
},
"MyFunction": {
"Type": "AWS::Lambda::Function",
"Properties": {
"Code": {
"ZipFile": "def index(event, context):\n    return 'hello world'\n"
},
"FunctionName": {
"Fn::Sub": "MyFunction-${AWS::StackName}"
},
"Handler": "index.index",
"MemorySize": 128,
"Role": {
"Fn::GetAtt": [
"MyFunctionNameRole",
"Arn"
]
},
"Runtime": "python3.6"
},
"DependsOn": [
"FunctionLogGroup"
]
},
"MySubscriptionFilter": {
"Type" : "AWS::Logs::SubscriptionFilter",
"Properties" : {
"DestinationArn": "TODO TODO",
"FilterPattern": "",
"LogGroupName": {"Ref": "FunctionLogGroup"},
"RoleArn": "TODO TODO"
}
}
}
}
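Before deploying, a template like the one above can be sanity-checked from the CLI (this assumes it is saved as template.json and AWS credentials are available):

```shell
# Validate the template's syntax server-side before creating the stack.
aws cloudformation validate-template \
  --template-body file://template.json
```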
T2 instances can now be started with an additional option that allows more CPU bursting for an additional cost.
SDK: http://docs.aws.amazon.com/aws-sdk-php/v3/api/api-ec2-2016-11-15.html#runinstances
I tried it; I can switch my existing instances to unlimited in the console, so it should be possible.
However, when I added the new configuration option to the array, nothing changed: the instance is still set to "standard" as before.
Here is a JSON dump of the RunInstances options array:
{
"UserData": "....",
"SecurityGroupIds": [
"sg-04df967f"
],
"InstanceType": "t2.micro",
"ImageId": "ami-4e3a4051",
"MaxCount": 1,
"MinCount": 1,
"SubnetId": "subnet-22ec130c",
"Tags": [
{
"Key": "task",
"Value": "test"
},
{
"Key": "Name",
"Value": "unlimitedtest"
}
],
"InstanceInitiatedShutdownBehavior": "terminate",
"CreditSpecification": {
"CpuCredits": "unlimited"
}
}
It starts the EC2 instance successfully just as before, but the CreditSpecification setting is ignored.
Amazon does not allow normal users to contact support, so I hope someone here has a clue about it.
Hmmm... using qualitatively the same RunInstances JSON
{
"ImageId": "ami-bf4193c7",
"InstanceType": "t2.micro",
"CreditSpecification": {
"CpuCredits": "unlimited"
}
}
worked for me: the instance shows "T2 Unlimited: Enabled" in the Description tab after selecting the instance in the EC2 console.
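For what it's worth, the credit option can also be inspected, and changed in place, from the AWS CLI without relaunching the instance (the instance ID below is a placeholder):

```shell
# Show whether each instance is "standard" or "unlimited"
aws ec2 describe-instance-credit-specifications \
  --instance-ids i-1234567890abcdef0

# Flip a running instance to unlimited in place
aws ec2 modify-instance-credit-specification \
  --instance-credit-specifications '[{"InstanceId":"i-1234567890abcdef0","CpuCredits":"unlimited"}]'
```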