I've just configured my Windows box to send its event logs (System, Security, Application) to CloudWatch Logs (https://blogs.aws.amazon.com/application-management/post/Tx1KG4IKXZ94QFK/Using-CloudWatch-Logs-with-Amazon-EC2-Running-Microsoft-Windows-Server). CloudWatch Logs receives the event logs, but they don't have timestamps!
It seems you can only set the "datetime_format" parameter for IIS logs, custom logs, etc., but it's not possible to set it for Event logs (configuration file: AWS.EC2.Windows.CloudWatch.json), right?
http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ec2-configuration-cwl.html
If that's right, it's weird! What can I do with a log record that has no timestamp?
Thanks,
If you're viewing your logs from the AWS Management Console, you should be able to see the event timestamp by clicking the "gear icon" at the top right that shows/hides the table columns.
If you're interacting with your log data from the AWS CLI or an AWS SDK, each log event record comes with a timestamp attribute.
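For example (the log group and stream names below are just placeholders), each element of the "events" array returned by the AWS CLI's get-log-events carries its own "timestamp" in epoch milliseconds alongside the "message":

aws logs get-log-events \
    --log-group-name System \
    --log-stream-name i-0123456789abcdef0 \
    --limit 3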
What makes Kibana not show Docker container logs in the APM "Transactions" page under the "Logs" tab?
I verified that the logs are successfully being generated with the associated "trace.id" for proper linking.
I have the exact same environment and configs (7.16.2) up via docker-compose, and there it works perfectly.
I could not figure out why this feature works locally but does not show up in the Elastic Cloud deployment.
UPDATE with Solution:
I just solved the problem.
It's related to the Filebeat version.
From 7.16.0 onward, the transaction/log linking stops working.
I reverted Filebeat to version 7.15.2 and it started working again.
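If you run the stack via docker-compose, pinning the image tag is enough; a minimal sketch (the service layout here is just an assumption about a typical compose file):

services:
  filebeat:
    # pin to 7.15.2 rather than a 7.16.x tag to keep transaction/log linking working
    image: docker.elastic.co/beats/filebeat:7.15.2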
If you are not using Filebeat: we, for example, rolled our own logging implementation that sends logs from a queue in batches using the Bulk API.
We have our own "ElasticLog" class and then use Attributes to match the logs-* schema for the log stream.
In particular, we had to make sure that trace.id was the same as the actual trace's trace.id property. Then the logs started to show up there (it sometimes takes a few minutes).
Some more info on how to get the IDs:
We use the OpenTelemetry exporter for traces and an ILoggerProvider for logs. They fire off batches independently of each other.
We populate the trace IDs at the time of instantiation of the class, as a default value. This way you are still in the context of the Activity. It also helps to set the timestamp to exactly when the log was created.
This LogEntry then gets passed into the ElasticLogger processor and mapped, as described above, to the ElasticLog entry with the attributes needed for ES.
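As a rough illustration of the pattern (our code is .NET, where the ambient span is the Activity; this Java sketch uses the OpenTelemetry equivalent, Span.current(), and the class and field names are made up):

import io.opentelemetry.api.trace.Span;
import java.time.Instant;

// Hypothetical log entry: trace.id and the timestamp are captured as default
// values at instantiation, while still inside the active span's context.
public class ElasticLogEntry {
    private final String traceId = Span.current().getSpanContext().getTraceId();
    private final Instant timestamp = Instant.now();
    private final String message;

    public ElasticLogEntry(String message) {
        this.message = message;
    }

    public String getTraceId() { return traceId; }
    public Instant getTimestamp() { return timestamp; }
    public String getMessage() { return message; }
}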
I have a Java application in which I am using GCP to create VM instances from images.
In this application, I would like to let the user view the VM creation logs in order to stay updated on the status of the creation and to see failure points in detail.
I am sure such logs exist in GCP, but I have been unable to find specific APIs that let me see a specific action, for example the creation of instance "X".
Thanks for the help
When you create a VM, the answer that you get is a JobID (because the creation takes time and the Compute Engine API answers immediately). To know the status of the VM creation (and start), you have to poll this JobID regularly.
In the logs, you can also filter on this JobID to select and view only the logs that you want on the Compute API side (create/start errors).
If you want to see the logs of the VM itself, filter the logs not by the JobID but by the name of the VM and its zone.
In Java, you have client libraries that help you achieve this.
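A rough sketch with the google-cloud-compute and google-cloud-logging client libraries (project, zone, and instance names are placeholders, and the Instance resource is stripped down; a real request also needs machine type, disks, and network):

import com.google.cloud.compute.v1.Instance;
import com.google.cloud.compute.v1.InstancesClient;
import com.google.cloud.compute.v1.Operation;
import com.google.cloud.logging.LogEntry;
import com.google.cloud.logging.Logging;
import com.google.cloud.logging.Logging.EntryListOption;
import com.google.cloud.logging.LoggingOptions;

public class VmCreationLogs {
    public static void main(String[] args) throws Exception {
        String project = "my-project";    // placeholder
        String zone = "us-central1-a";    // placeholder
        String vmName = "instance-x";     // placeholder

        // insertAsync returns a future over the long-running operation (the
        // "JobID" mentioned above); get() polls it until the creation is done.
        try (InstancesClient instances = InstancesClient.create()) {
            Instance resource = Instance.newBuilder().setName(vmName).build();
            Operation done = instances.insertAsync(project, zone, resource).get();
            System.out.println("Create finished with status: " + done.getStatus());
        }

        // To show the user what happened, filter Cloud Logging by VM name and zone.
        Logging logging = LoggingOptions.getDefaultInstance().getService();
        String filter = "resource.type=\"gce_instance\""
                + " AND resource.labels.zone=\"" + zone + "\""
                + " AND protoPayload.resourceName:\"" + vmName + "\"";
        for (LogEntry entry : logging.listLogEntries(EntryListOption.filter(filter)).iterateAll()) {
            System.out.println(entry.getTimestamp() + " " + entry.getPayload());
        }
        logging.close();
    }
}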
I cannot find a good document that gives details about how to stream data from AWS CloudWatch to Elastic Cloud.
I have set it up as follows for now, but I cannot see data in the Elastic Cloud index.
I have installed functionbeat locally and updated the config as follows:
functionbeat.provider.aws.endpoint: "s3.amazonaws.com"
functionbeat.provider.aws.deploy_bucket: "filebeat-deploy"
functionbeat.provider.aws.functions:
  - name: cloudwatch
    enabled: true
    type: cloudwatch_logs
    description: "lambda function for cloudwatch logs"
    triggers:
      - log_group_name: my_log_group_name
cloud.id: "<cloud_id>"
cloud.auth: "<username:password>"
I followed this document - https://www.elastic.co/guide/en/beats/functionbeat/current/configuration-functionbeat-options.html
And then I ran ./functionbeat deploy cloudwatch to deploy the function.
I have checked that I can see the deployment in the bucket filebeat-deploy.
But I cannot see the logs from my_log_group_name in Elastic Cloud.
This is possibly because AWS is not able to make a successful connection to your Elastic Cloud deployment. It is usually a protocol issue (for example, setting the output.elasticsearch host to localhost:9200; AWS cannot reach a localhost URL unless it is public) or a permission issue. If you check the CloudWatch logs of the functionbeat Lambda function itself, you will be able to see the actual issue. Put logging.level: debug in functionbeat.yml for detailed logs.
Also, you cannot see the logs in Kibana right after deploying functionbeat. The trigger is added to the functionbeat Lambda function via a subscription filter on the log group, so after a successful deployment you have to invoke the function you are collecting logs from, not the functionbeat Lambda function itself. In your case, the trigger gets logs from the log group "my_log_group_name", so whenever a new item is added to this log group, it will automatically invoke the functionbeat Lambda function.
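For testing, you can write an event into the watched log group yourself to force that invocation (the stream name and message here are arbitrary; very old AWS CLI versions also require a sequence token for put-log-events):

aws logs create-log-stream \
    --log-group-name my_log_group_name \
    --log-stream-name functionbeat-test
aws logs put-log-events \
    --log-group-name my_log_group_name \
    --log-stream-name functionbeat-test \
    --log-events timestamp=$(date +%s000),message="test event for functionbeat"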
Where do you see the console.log() calls made inside of AWS Lambda functions? I looked at the AWS CloudWatch event log and didn't see them there. Is there a CLI way to see them?
console.log() output should definitely end up in the CloudWatch logs for your function. You should be able to find the correct log group in the web console interface for your function under the Monitoring tab - Jump to Logs. Note that your function's logs are spread across multiple log streams (a new stream per function instance, not one per invocation), and there may be a delay between logs being written and logs showing up in a stream, so be patient.
It's possible you do not have the IAM permissions to create log groups or write to log streams. Ashan has provided links on how to fix that.
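To answer the CLI part of the question: AWS CLI v2 can tail the log group directly (my-function is a placeholder; Lambda log groups follow the /aws/lambda/<function-name> naming pattern):

aws logs tail /aws/lambda/my-function --follow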
Additionally, you can use the awslogs tool to list groups/streams, as well as to download or tail groups/streams:
To list available groups: awslogs groups
To list available streams in group app/foo: awslogs streams app/foo
To "tail -f" all streams from a log group app/foo: awslogs get app/foo ALL --watch
Make sure the IAM role assigned to the AWS Lambda function has permission to write to CloudWatch Logs. For more information regarding the policy, refer to Using Identity-Based Policies (IAM Policies) for CloudWatch Logs.
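For reference, the basic permissions needed are the ones the AWS-managed AWSLambdaBasicExecutionRole policy grants (shown here with a wildcard Resource; scope it down in real use):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    }
  ]
}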
In addition, you should be able to view the CloudWatch log group by clicking on CloudWatch Logs under Add Triggers in the Lambda console.
I created a custom AMI for use in an Elastic Beanstalk environment, as described here. It all works, except for requesting log files from the instances. When using the console, when I click "Request Logs|Last 100 lines", it shows a loading spinner for a rather long time and then leaves the list of downloadable logs empty.
I already tried enabling all logging-related checkboxes in the EC2Config dialog before creating the AMI, but this did not help.