Stream AWS CloudWatch logs to Elastic Cloud using Functionbeat - elasticsearch

I cannot find any good documentation that explains how to stream data from AWS CloudWatch to Elastic Cloud.
I have set it up as follows for now, but I cannot see any data in my Elastic Cloud index.
I installed Functionbeat locally and updated the config as follows:
functionbeat.provider.aws.endpoint: "s3.amazonaws.com"
functionbeat.provider.aws.deploy_bucket: "filebeat-deploy"
functionbeat.provider.aws.functions:
  - name: cloudwatch
    enabled: true
    type: cloudwatch_logs
    description: "lambda function for cloudwatch logs"
    triggers:
      - log_group_name: my_log_group_name
cloud.id: "<cloud_id>"
cloud.auth: "<username:password>"
I followed this document: https://www.elastic.co/guide/en/beats/functionbeat/current/configuration-functionbeat-options.html
Then I ran ./functionbeat deploy cloudwatch to deploy the function.
I have confirmed that the deployment artifact is in the filebeat-deploy bucket.
However, I cannot see the logs from my_log_group_name in Elastic Cloud.

This is most likely because AWS cannot make a successful connection to your Elastic Cloud deployment. It is usually either a protocol/endpoint issue (for example, if output.elasticsearch.hosts points at localhost:9200, the Lambda function running in AWS cannot reach that URL unless it is publicly accessible) or a permission issue. If you check the CloudWatch logs of the deployed Functionbeat Lambda function, you should be able to see the actual error. Set logging.level: debug in functionbeat.yml for more detailed logs.
Also, you will not see logs in Kibana immediately after deploying Functionbeat. A successful deployment adds a subscription filter to the log group, and that subscription filter is what triggers the Functionbeat Lambda function. In your case the trigger is configured to get logs from the log group my_log_group_name, so you need to generate new log events in that group rather than invoking the Functionbeat Lambda function directly: whenever a new item is added to that log group, the Functionbeat Lambda function is invoked automatically.
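Since the subscription filter only fires on new events, one way to verify the pipeline end to end is to write a test event into the monitored log group and then check whether it arrives in Elastic Cloud. A minimal sketch using a boto3 CloudWatch Logs client (the client is passed in; the stream name here is an assumption for illustration):

```python
import time

def put_test_event(logs_client, log_group, log_stream, message):
    """Write one log event into a CloudWatch log group/stream.

    Any new event in the group invokes the Functionbeat Lambda via the
    subscription filter, so the event should then appear in Elastic Cloud.
    """
    try:
        logs_client.create_log_stream(logGroupName=log_group,
                                      logStreamName=log_stream)
    except logs_client.exceptions.ResourceAlreadyExistsException:
        pass  # the stream already exists, which is fine
    return logs_client.put_log_events(
        logGroupName=log_group,
        logStreamName=log_stream,
        logEvents=[{"timestamp": int(time.time() * 1000),
                    "message": message}],
    )
```

With a real client (logs_client = boto3.client("logs")) you would call put_test_event(logs_client, "my_log_group_name", "functionbeat-test", "hello"); if nothing reaches Elastic Cloud, check the Functionbeat Lambda's own log group for the actual error.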

Related

How to restart an ec2 server when CloudWatch Synthetics Canary fails?

I have a site I'm working on, and one of the pages retrieves data from another server, let's call it server B.
Occasionally server B fails to return data, and the main site will give a 500 error.
I want to restart server B when that happens, and I was thinking I could use CW synthetics to do that. I've created a CW alarm to trigger, but I don't have a direct way to restart an ec2 server, since it's not associated directly with one.
I've thought of calling a lambda that will restart the server, but I'm wondering if there's a simpler configuration/solution I can use.
Thanks
You can create an Event Bridge rule for a failed canary run. Under Event pattern -> AWS service, select CloudWatch Synthetics, and for Event type select Synthetics Canary TestRun Failure. For the Target, choose AWS service -> EC2 RebootInstances API call and provide the instance ID.
UPDATED:
You can use custom patterns and pass your json which can match the failure pattern.
In your case I would use something like,
{
  "source": ["aws.synthetics"],
  "detail-type": ["Synthetics Canary TestRun Failure"],
  "region": ["us-east-1"],
  "detail": {
    "account-id": ["123456789012"],
    "canary-id": ["EXAMPLE-dc5a-4f5f-96d1-989b75a94226"],
    "canary-name": ["events-bb-1"]
  }
}
Create an Event Bridge rule for a failed canary run, that triggers a Lambda function. Have the Lambda function restart the EC2 server via the AWS API/SDK.
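If you go the Lambda route, the handler only needs one SDK call. A minimal Python sketch (the instance ID and the injected EC2 client are assumptions for illustration; in a real deployment you would create the client with boto3.client("ec2") and read the instance ID from an environment variable):

```python
def make_handler(ec2_client, instance_id):
    """Build a Lambda handler that reboots one EC2 instance.

    Intended as the target of an Event Bridge rule matching
    'Synthetics Canary TestRun Failure' events.
    """
    def handler(event, context):
        # The event carries the canary failure details; for a simple
        # restart we only need to issue the reboot call.
        ec2_client.reboot_instances(InstanceIds=[instance_id])
        return {"rebooted": instance_id}
    return handler
```

The Lambda's execution role needs ec2:RebootInstances permission on the target instance.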

How can I make a fabric8 k8s client in my AWS Lambda function authenticate properly with EKS?

I have a snippet of code that I'm invoking as a Lambda Function in AWS. Inside the lambda function, I'm making a call to an EKS cluster using the fabric8 client. All I'm doing at this point is having the client retrieve unbound PVCs:
KubernetesClient client = new DefaultKubernetesClient();
MixedOperation<PersistentVolumeClaim, PersistentVolumeClaimList,
        DoneablePersistentVolumeClaim,
        Resource<PersistentVolumeClaim, DoneablePersistentVolumeClaim>> operation =
    client.persistentVolumeClaims();
PersistentVolumeClaimList pvcList = operation.inAnyNamespace().list();
java.util.List<PersistentVolumeClaim> unboundItems = pvcList.getItems()
    .stream()
    .filter(pvc -> !"bound".equalsIgnoreCase(pvc.getStatus().getPhase()))
    .collect(Collectors.toList());
One of the DevOps guys at work added my user ARN to a ConfigMap, so if I run this locally and just pass the AWS credentials through system properties, I get results back. He has also experimented with adding the Lambda role ARN, but it doesn't work. If I deploy the code in my Lambda, I get a 403 error:
"errorMessage": "Failure executing: GET at: https://XYZ.us-west-2.eks.amazonaws.com/api/v1/persistentvolumeclaims. Message: persistentvolumeclaims is forbidden: User \"system:anonymous\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope. Received status: Status(apiVersion=v1, code=403, details=StatusDetails(causes=[], group=null, kind=persistentvolumeclaims, name=null, retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, message=persistentvolumeclaims is forbidden: User \"system:anonymous\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope, metadata=ListMeta(_continue=null, remainingItemCount=null, resourceVersion=null, selfLink=null, additionalProperties={}), reason=Forbidden, status=Failure, additionalProperties={}).",
"errorType": "io.fabric8.kubernetes.client.KubernetesClientException"
I'm pretty new to k8s, fabric8, and EKS, so apologies if I'm doing something obviously dumb here. I do have a ~/.kube/config file, which I include in the Lambda ZIP file (I've been told that this is insecure). The config file seems to work fine when I run the code locally. I'm setting the kubeconfig system property to point to the file. I understand that there needs to be some external mechanism like aws-iam-authenticator available to the k8s client to properly authenticate with EKS. What I don't know is if I should use some other mechanism when this runs in a Lambda?
I found an article that describes the general process of having a Lambda talk to EKS, and I've played around with the IAM role configuration as suggested, as well as adding the lambda role ARN to a ConfigMap, but so far nothing has helped. I'm really blocked on this, and if anyone has any tips, I'm all ears. Thanks.
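The "system:anonymous" in the error means no credentials reached the cluster at all. Inside a Lambda, rather than bundling ~/.kube/config, you can generate the same bearer token aws-iam-authenticator would: presign an STS GetCallerIdentity request with the x-k8s-aws-id header set to the cluster name, using the Lambda role's credentials, then encode the presigned URL and hand the token to the fabric8 client (e.g. via its OAuth token config). A minimal Python sketch of just the encoding step, assuming the presigned URL has already been produced (e.g. with botocore's RequestSigner, not shown here):

```python
import base64

def eks_bearer_token(presigned_sts_url):
    """Encode a presigned STS GetCallerIdentity URL as an EKS bearer token.

    EKS accepts tokens of the form 'k8s-aws-v1.' + base64url(presigned URL)
    with the padding stripped; this matches what aws-iam-authenticator emits.
    """
    encoded = base64.urlsafe_b64encode(presigned_sts_url.encode("utf-8"))
    return "k8s-aws-v1." + encoded.decode("utf-8").rstrip("=")
```

For this to work the Lambda role's ARN must also be mapped in the cluster's aws-auth ConfigMap with an RBAC role that can list persistentvolumeclaims.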

how to see console.log in AWS lambda functions

Where do you see the console.log() calls made inside of AWS Lambda functions? I looked at AWS Cloud Watch event log and didn't see them there. Is there a CLI way to see them?
console.log() should definitely end up in the CloudWatch logs for your function. You should be able to find the correct log group in the web console interface for your function under the Monitoring tab - Jump to Logs. Note that you will have a different log stream for each invocation of your function, and there may be a delay between logs being written and logs showing up in a stream, so be patient.
It's possible you do not have the IAM permissions to create log groups or write to log streams. Ashan has provided links on how to fix that.
Additionally, you can use the awslogs tool to list groups/streams, as well as to download or tail groups/streams:
To list available groups: awslogs groups
To list available streams in group app/foo: awslogs streams app/foo
To "tail -f" all streams from a log group app/foo: awslogs get app/foo ALL --watch
Make sure the IAM role assigned to the AWS Lambda function has permission to write to CloudWatch Logs. For more information regarding the policy, refer to Using Identity-Based Policies (IAM Policies) for CloudWatch Logs.
In addition, you should be able to view the CloudWatch log group by clicking on the CloudWatch Logs under Add Triggers in Lambda Console.
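For the CLI route without extra tooling, the same logs can also be fetched programmatically: console.log() output lands in the log group /aws/lambda/<function name>, which can be read with the CloudWatch Logs API. A small sketch with an injected boto3 logs client (function name here is illustrative):

```python
def recent_lambda_logs(logs_client, function_name, limit=50):
    """Fetch recent log messages for a Lambda function.

    console.log() output is written to the function's CloudWatch
    log group, named /aws/lambda/<function name>.
    """
    resp = logs_client.filter_log_events(
        logGroupName="/aws/lambda/" + function_name,
        limit=limit,
    )
    return [e["message"] for e in resp.get("events", [])]
```

With a real client this is recent_lambda_logs(boto3.client("logs"), "my-fn"); remember that events may lag a few seconds behind the invocation.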

Monitoring AWS Lambda errors

I want to view AWS lambda last hour errors of two types:
Lambda function that finished with an error
Lambda function returned http 500
How should I do that?
If you have many lambdas, it can be difficult to identify exactly which lambda caused an error. Here is how to find out, even if you have hundreds of lambdas.
In CloudWatch, go to the Metrics page, then go to the Graph Metrics tab, then navigate to the dropdown menu item “Math expression > Search > Lambda Throttles or Errors.”
This will give you error counts per lambda in a graph, mouse over to get the name of the offending lambda.
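The same per-function error counts can be pulled from the AWS/Lambda "Errors" metric via the CloudWatch API. A sketch that sums the last hour for one function (client injected; function name illustrative):

```python
import datetime

def lambda_errors_last_hour(cloudwatch_client, function_name):
    """Sum the AWS/Lambda 'Errors' metric for one function over the last hour."""
    now = datetime.datetime.utcnow()
    resp = cloudwatch_client.get_metric_statistics(
        Namespace="AWS/Lambda",
        MetricName="Errors",
        Dimensions=[{"Name": "FunctionName", "Value": function_name}],
        StartTime=now - datetime.timedelta(hours=1),
        EndTime=now,
        Period=3600,  # one datapoint covering the whole hour
        Statistics=["Sum"],
    )
    return sum(dp["Sum"] for dp in resp.get("Datapoints", []))
```

The "Errors" metric covers invocations that finished with an error; HTTP 500 responses returned through API Gateway show up separately, in API Gateway's own 5XX error metric rather than in AWS/Lambda.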
Once you launch an AWS Lambda function, it is automatically monitored by CloudWatch.
Lambda function that finished with an error
You can see Lambda function errors from monitoring tab on Lambda default view.
Lambda function returned http 500
I assume your Lambda function serves as a web API. If your web API is backed by a Lambda function, you need to write logs to standard output in order to see them in CloudWatch.
Please see the documentation at Accessing Amazon CloudWatch Logs for AWS Lambda.
NOTE: only if you use serverless:
Alternatively, you can monitor your Lambda function logs using the Serverless CLI.
For example, to get logs from the past hour:
sls logs -f functionName --startTime 1h
You can also filter on the string 'error' over the past hour:
sls logs -f functionName --startTime 1h --filter error
Please check on the doc.
You could enable X-Ray tracing from the Lambda dashboard (Lambda Console -> Enable X-Ray Tracing).
The X-Ray service displays trace maps for Lambda execution results. It is great for checking the outcome of errors within Lambda functions, but if you are looking for detailed error logs, CloudWatch is your best bet.
You could also try something like Logbird that processes CloudWatch streams for all errors in AWS Lambda, API Gateway and other cloud services and can trigger notifications.

Timestamp issue with Amazon CloudWatch Logs integration for Windows event logs

I've just configured my Windows box to send its event logs (System, Security, Application) to CloudWatch Logs (https://blogs.aws.amazon.com/application-management/post/Tx1KG4IKXZ94QFK/Using-CloudWatch-Logs-with-Amazon-EC2-Running-Microsoft-Windows-Server). CloudWatch Logs receives the event logs, but they don't have a timestamp!
It seems we can only set the timestamp for IIS logs, custom logs, etc.; it's not possible to set the "datetime_format" parameter for event logs (configuration file: AWS.EC2.Windows.CloudWatch.json), right?
http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ec2-configuration-cwl.html
If that's right, it's weird! What can I do with a log record that has no timestamp?
Thanks,
If you're viewing your logs from the AWS Management Console you should be able to see the event timestamp by clicking on the "gear icon" from the top right that shows/hides the table columns.
If you're interacting with your log data from the AWS CLI or from the AWS SDK, each log event record should come with a timestamp attribute.
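For the SDK case, each returned log event carries a timestamp field in epoch milliseconds, which is straightforward to convert. A small illustrative sketch:

```python
import datetime

def event_time_utc(log_event):
    """Convert a CloudWatch log event's 'timestamp' field to a UTC datetime.

    The API returns the ingestion-side timestamp as milliseconds since
    the Unix epoch, even when the log line itself has no parsed datetime.
    """
    return datetime.datetime.utcfromtimestamp(log_event["timestamp"] / 1000.0)
```

So even without a datetime_format for Windows event logs, the records are not truly timestamp-less; the event-level timestamp is always available.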
