I know you can export Aurora's audit logs to CloudWatch, but I don't see any way to do that for the error log. That would let us set up a CloudWatch alarm for failed scheduled events, etc. Am I just missing it?
It looks like this is supported for RDS for MySQL, but not for Aurora.
I already have code that queries the error log files, but being able to set up an alert would make more sense (plus I wouldn't have to scan the logs every x minutes).
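For reference, the kind of alarm I have in mind, once the error log is actually in CloudWatch Logs, would look roughly like this with the AWS CLI (the log group name, filter pattern, and thresholds below are my own placeholders, not anything AWS prescribes):

# Hypothetical log group; exported Aurora logs land under /aws/rds/cluster/<cluster-id>/<log-type>
LOG_GROUP=/aws/rds/cluster/my-aurora-cluster/error

# Turn matching error-log lines into a custom metric
aws logs put-metric-filter \
  --log-group-name "$LOG_GROUP" \
  --filter-name scheduled-event-errors \
  --filter-pattern '"Event Scheduler" "error"' \
  --metric-transformations metricName=ScheduledEventErrors,metricNamespace=Custom/Aurora,metricValue=1

# Alarm whenever at least one matching line shows up in a 5-minute window
aws cloudwatch put-metric-alarm \
  --alarm-name aurora-scheduled-event-errors \
  --namespace Custom/Aurora \
  --metric-name ScheduledEventErrors \
  --statistic Sum --period 300 --evaluation-periods 1 \
  --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold \
  --treat-missing-data notBreaching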
The AWS documentation does not provide any info about this on the English page, but it does on the French page.
Please see the link here.
You can then translate it with Chrome; the relevant section translates roughly as follows.
Publishing Aurora MySQL logs to CloudWatch Logs with the AWS Management Console.
You can publish Aurora MySQL logs to CloudWatch Logs using the console (an equivalent AWS CLI call is sketched after these steps). To publish Aurora MySQL logs from the console:
Open the Amazon RDS console at https://console.aws.amazon.com/rds/
In the navigation pane, choose Clusters.
Choose the Aurora MySQL DB cluster whose log data you want to publish.
For Actions, choose Modify cluster.
In the Log exports section, choose the logs that you want to start publishing to CloudWatch Logs.
Choose Continue, and then choose Modify DB Cluster on the summary page.
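If you prefer to script the change, the same thing can be done with the AWS CLI; this is a minimal sketch, assuming a cluster named my-aurora-cluster and that you only want the error log (the other Aurora MySQL log types are audit, general, and slowquery):

# Enable publishing of the Aurora MySQL error log to CloudWatch Logs
aws rds modify-db-cluster \
  --db-cluster-identifier my-aurora-cluster \
  --cloudwatch-logs-export-configuration '{"EnableLogTypes":["error"]}' \
  --apply-immediately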
I have the ELK stack set up on an EC2 server, with Beats such as Metricbeat, Filebeat, and Heartbeat.
I have set up Elastic APM for some applications such as Jenkins and SonarQube.
In Uptime I can see only a few monitors, such as SonarQube and Jenkins.
The other applications are missing.
Also, data from yesterday is not available in Elasticsearch for a particular application.
The best way to troubleshoot what is going on is to check whether the events from Heartbeat are being collected. The Uptime application only displays events from Heartbeat, so that is the Beat you need to check.
First, check the connectivity between Heartbeat and the configured output:
heartbeat test output
Secondly, check whether events are being generated. You can do this by commenting out your existing output (likely Elasticsearch/Elastic Cloud) and enabling either the Console output or the File output. Then start Heartbeat and check whether events are being generated. If they are, the problem is probably on the backend side; perhaps Elasticsearch is rejecting the documents and refusing to index them.
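As a concrete sketch of both checks (this assumes a default package install of Heartbeat; the -E flag overrides a setting without editing heartbeat.yml, so the output swap is temporary):

# Step 1: confirm the config parses and Heartbeat (not Metricbeat) can reach the configured output
heartbeat test config
heartbeat test output

# Step 2: temporarily disable the Elasticsearch output and dump events to the console instead
heartbeat -e -E output.elasticsearch.enabled=false -E output.console.enabled=true

If events appear on the console, the publishing side is fine and the problem is on the Elasticsearch side.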
Incidentally, Elastic is implementing a native Jenkins plugin that lets you observe your CI pipeline using OpenTelemetry-compatible backends such as Elastic APM. You can learn more about this plugin here.
I have created the hello world application from the SAP Cloud SDK archetypes and pushed this to the cloud foundry environment, binding it to an application logging service instance. My understanding is that this should already provide me with the ability to analyze all logs in the Kibana dashboard of the cloud platform and previously it also worked this way.
However, this time the Kibana dashboard remains empty, so I am wondering if I missed a step or configuration. Looking at the documentation of the service and the respective tutorial blog, I was not able to identify any additional required steps. In the Logs view on the SCP cockpit I can definitely see the entries, but they are not replicated to the ELK stack in the background.
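For context, the steps I followed were roughly these (the service and plan names are from memory and may differ in your subaccount):

# Create an instance of the application logging service and bind it to the app
cf create-service application-logs lite my-app-logs
cf bind-service my-hello-world-app my-app-logs
# Restage so the binding takes effect
cf restage my-hello-world-app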
The problem was not SDK-related; it seems to have been an incident on the SCP. It now works correctly without any changes.
I am trying to push CloudWatch logs to Elasticsearch using either a Lambda function or Amazon Kinesis. I have the log groups set up and the Elasticsearch domain running using Terraform. Please suggest how I can push the logs from the log group to Elasticsearch, and please share Terraform code for this if you have it.
This answer documents some example Terraform code for creating a Lambda and a CloudWatch subscription that ships logs from a CloudWatch log group to a Sumo Logic HTTP collector (just a basic HTTP POST endpoint). The CloudWatch subscription invokes the Lambda every time a new batch of log entries is posted to the log group.
The cloudwatch-sumologic-lambda referred to in that Terraform code was patterned after the Sumo Logic Lambda example.
I'd imagine you would need to do something similar, but rewrite the Lambda to format the HTTP request however Elasticsearch requires. Some quick googling on your part should turn up plenty of examples.
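If it helps, the wiring underneath boils down to two calls, regardless of whether you express them in Terraform; here is a sketch with the AWS CLI, where the function name, log group, and ARNs are placeholders:

# Allow CloudWatch Logs to invoke the shipping Lambda
aws lambda add-permission \
  --function-name cloudwatch-to-elasticsearch \
  --statement-id cloudwatch-logs-invoke \
  --principal logs.amazonaws.com \
  --action lambda:InvokeFunction \
  --source-arn arn:aws:logs:us-east-1:123456789012:log-group:my-log-group:*

# Subscribe the log group to the Lambda; an empty filter pattern forwards every event
aws logs put-subscription-filter \
  --log-group-name my-log-group \
  --filter-name ship-to-elasticsearch \
  --filter-pattern "" \
  --destination-arn arn:aws:lambda:us-east-1:123456789012:function:cloudwatch-to-elasticsearch

In Terraform these map to an aws_lambda_permission resource and an aws_cloudwatch_log_subscription_filter resource.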
As an alternative to all this Terraform config, though, you can just go to your CloudWatch console, select the log group you're interested in, and choose "Stream to Amazon Elasticsearch Service".
Note that this will only work if you're using the AWS Elasticsearch Service offering; if you installed and configured Elasticsearch on some EC2 instances yourself, it probably won't work.
I've added the autoscaling settings to my Service Fabric template, and after deploying it the portal shows that autoscale is configured. What I am not able to see in my storage account is the WADPerformanceCounters table mentioned in the documentation. So how is autoscaling executed without the information about the counters?
Thanks.
If autoscale cannot find the data it's configured to look at, it will set your capacity equal to the "default" configured in the autoscale rule.
As for what could explain the behavior you're seeing, here are a couple hypotheses:
1) There are two types of metrics in Azure today: host and guest. Host metrics live in Azure-internal data stores and therefore don't require a storage account. Guest metrics, however, do live in a storage account. So depending on how you added autoscale, you may have configured host metrics instead of guest metrics. For more info, see this doc: https://learn.microsoft.com/en-us/azure/monitoring-and-diagnostics/insights-autoscale-common-metrics
2) As you can see in this template using guest metrics, for guest metrics the scale set must have the WAD extension configured to point to the storage account; it's probably worth checking that the storage account specified in the WAD extension config is the same storage account you looked for the table in.
For host metrics, you can find the list of supported metrics here:
https://learn.microsoft.com/en-us/azure/monitoring-and-diagnostics/monitoring-supported-metrics#microsoftcomputevirtualmachinescalesets
For guest metrics, as mentioned above, you need to configure the Windows Azure Diagnostics (WAD) extension correctly on your VMSS. Specifically, the autoscale engine queries the WAD{value}PT1M{value} tables in your configured diagnostics storage account. These tables contain the local one-minute aggregations of the performance counter data.
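One quick way to check (a sketch with the Azure CLI; the storage account name is a placeholder, and you may need to pass --account-key or a SAS token depending on how you authenticate) is to list the tables in the diagnostics storage account and confirm the one-minute WAD tables exist:

# Look for the WAD*PT1M* tables that the autoscale engine reads
az storage table list \
  --account-name mydiagstorageaccount \
  --query "[?contains(name, 'PT1M')].name" \
  --output table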
I am using Big SQL from Analytics for Apache Hadoop in Bluemix and would like to look into logs in order to debug (e.g. the MapReduce job history log, usually available under http://my-mapreduce-server.com:19888/jobhistory, and bigsql.log from the Big SQL worker nodes).
Is there a way in Bluemix to access those logs?
Log files for most IOP components (e.g. the MapReduce Job History log, the Resource Manager log) are accessible from the Ambari console's Quick Links; just navigate to the respective service page. Log files for Big SQL are currently not available. Since the cluster is not hosted as Bluemix apps, the logs cannot be retrieved using the Bluemix cf command.