How to enable disk space metrics in Spring Boot/Micrometer?

I have read over the Spring Metrics docs and I have system metrics enabled in my application.yml file. This, according to the docs, is supposed to give me metrics prefixed with process., system. and disk.. I see results for the first two of these, but I am not getting any metrics about disk space usage. I've even looked in the code and have found a MeterBinder class named DiskSpaceMetrics that seems to send the two values disk.free and disk.total. Can someone please tell me how to get my Spring app to send these disk space metrics?
I am sending my metrics to AWS CloudWatch.
I found this Question: How to enable DiskSpaceMetrics in io.micrometer. It seems to be about seeing the disk space values in Spring's actuator dashboard. I do see the values there. What I want is for those values to be periodically reported as metrics values.

It turns out that my app WAS sending out the metrics. The AWS CloudWatch console just wasn't showing them to me. I brought up the metrics just fine via Grafana. Even once I knew they were there, I could find no way to get the AWS console to show them to me. Strange. I might have to put in a request to AWS asking them what's up with that.
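For anyone who lands here and genuinely isn't seeing disk.free and disk.total: Spring Boot is supposed to bind DiskSpaceMetrics for you (as it turned out to be in my case), but you can also bind it yourself for additional paths. A minimal sketch, assuming micrometer-core is on the classpath; the /data path and bean name are just illustrative:

import io.micrometer.core.instrument.binder.MeterBinder;
import io.micrometer.core.instrument.binder.system.DiskSpaceMetrics;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import java.io.File;

@Configuration
public class DiskSpaceMetricsConfig {

    // Spring Boot binds any MeterBinder bean to the configured registries,
    // so this publishes disk.free and disk.total for the given path.
    @Bean
    public MeterBinder dataVolumeDiskSpaceMetrics() {
        return new DiskSpaceMetrics(new File("/data")); // hypothetical extra mount point
    }
}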

Related

Using ILogger to send logging to x-ray via OpenTelemetry

All,
Thanks in advance for your time. We are moving to OpenTelemetry from ILogger/log4net logging to files. We were on-prem; now that we are moving to the cloud, logging to files is not going to work. We use AWS. I have the aws-otel-collector working with tracing. Logging seems to be to console only; there is no way to get logs to X-Ray via OT. On-prem we made extensive use of file-based logging, and now the auto-instrumentation in OT and AWS does most of what we need. There are times when we all wish we could peek inside the code at runtime and see a few values that the auto-instrumentation does not provide. That is what I would like to log to X-Ray via OT. There are samples (with warnings that say this is not best practice) that explain how to do this in native AWS, but that means I have to run both the aws-otel-collector and the X-Ray daemon. The use of logs would be very limited and judicious, but I would really like to have them covered by one API. Is this possible?
Again - thanks in advance for your time.
Steve
It looks like you aren't differentiating between traces and logs. They are not the same. You can include "logs" (the correct term is "event in the span") in a trace, but that must be done when the traces are generated. If you own the code, check the documentation for how to do that.
OpenTelemetry (OTEL) is designed for metrics, traces, and logs, but the implementation for logs is still not stable. See https://opentelemetry.io/status/#logging
So for now I would use OTEL only for traces (X-Ray) and metrics (AWS Prometheus). Logs should be processed outside of OTEL and stored in a proper log store; that is not X-Ray (which is trace storage) but OpenSearch, CloudWatch Logs, ...
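If you do want a few ad-hoc values to show up alongside the trace, the span-event approach mentioned above looks roughly like this with the OpenTelemetry Java API (the question is .NET, where Activity.AddEvent is the equivalent; the event name and attribute are made up):

import io.opentelemetry.api.common.AttributeKey;
import io.opentelemetry.api.common.Attributes;
import io.opentelemetry.api.trace.Span;

public class OrderHandler {

    public void validate(String orderId) {
        // Attach an event (the "log line") to whatever span the
        // auto-instrumentation already has active for this request.
        Span.current().addEvent("order-validated",
                Attributes.of(AttributeKey.stringKey("order.id"), orderId));
    }
}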

Unable to upload 100 MB file Using Rest Service in PCF

I have a requirement to upload files of up to 150 MB. I have written a Java-based REST service using Spring Boot 1.5. The code works for smaller files, but I am not able to upload larger ones. I have configured all the payload/multipart-related configuration for Tomcat, yet it is not working for large files. I am getting "502: Bad Gateway: Registered endpoint unable to handle the request". The code is deployed in Pivotal Cloud Foundry. My question is: is there any payload size limit configured at the Go Router level that could be causing this issue? Any help is appreciated.
Thanks
Here's what I would suggest:
Run your app locally. Ensure that you can upload a 150M+ file. That will ensure that you have Spring Boot configured correctly, and that there are no limits in Tomcat (embedded) or Spring which would cause this.
When you deploy to a Cloud Foundry installation, there will not be any additional size-based restrictions. Gorouter does not directly limit the size of a file that can be uploaded. However, Gorouter has an upper limit on how much time a request can consume in its entirety (i.e. receive request, process and respond). By default, that is 900s (your CF platform may differ; consult your platform operator to get a specific value).
I mention this because the upload bandwidth of your client comes into play here. If you have a client that is slowly uploading a 150M file, say slowly enough that it would take an hour, then it will fail with a response like the one you're seeing.
To confirm, my suggestion would be to run cf logs and look for the log entry tagged with [RTR] that corresponds to your failed request. It'll have the 502 status code. Now check the response_time field and see if it matches the max request time set on your platform (900s by default). If it's a match, then that's your issue.
If none of that helps, you're going to need to look for more information. Perhaps try increasing the log levels and running cf logs to see if you get any more clues from your application.
Try increasing the multipart limits in your Spring Boot configuration:
spring.http.multipart.max-file-size=200MB
spring.http.multipart.max-request-size=200MB
and, if a standalone Tomcat is involved, its multipart configuration in web.xml, e.g.
webapps/manager/WEB-INF/web.xml
<multipart-config>
    <!-- values are in bytes: 52428800 = 50 MB; raise these to cover 150 MB uploads -->
    <max-file-size>52428800</max-file-size>
    <max-request-size>52428800</max-request-size>
    <file-size-threshold>0</file-size-threshold>
</multipart-config>
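To sanity-check those limits locally (step 1 of the first answer), a minimal upload endpoint is enough. This is only a sketch, not the asker's actual code:

import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.multipart.MultipartFile;

@RestController
public class UploadController {

    // With the multipart limits raised, a ~150 MB file should reach this
    // method instead of being rejected by Spring or Tomcat.
    @PostMapping("/upload")
    public ResponseEntity<String> upload(@RequestParam("file") MultipartFile file) {
        return ResponseEntity.ok("received " + file.getSize() + " bytes");
    }
}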

How to download 300k log lines from my application?

I am running a job on my Heroku app that generates about 300k lines of log within 5 minutes. I need to extract all of them into a file. How can I do this?
The Heroku UI only shows logs in real time, since the moment it was opened, and only keeps 10k lines.
I attached a LogDNA add-on as a drain, but their export also only allows 10k lines. To even have the option to export, I need to apply a search filter (I typed 2020 because all the lines start with a date, but still...). I can scroll through all the logs to see them, but as I scroll up the bottom gets truncated, so I can't even copy-paste them myself.
I then attached Sumo Logic as a drain, which is better because the export limit is 100k. However, I still need to filter the logs into 30s-60s intervals and download them separately. Also, it exports to a CSV file in reverse order (newest first, not what I want), so I still have to work on the file after it's downloaded.
Is there no option to get actual raw log files in full?
Is there no option to get actual raw log files in full?
There are no actual raw log files.
Heroku's architecture requires that logging be distributed. By default, its Logplex service aggregates log output from all services into a single stream and makes it available via heroku logs. However,
Logplex is designed for collating and routing log messages, not for storage. It retains the most recent 1,500 lines of your consolidated logs, which expire after 1 week.
For longer persistence you need something else. In addition to commercial logging services like those you mentioned, you have several options:
Log to a database instead of files. Something like Apache Cassandra might be a good fit.
Send your logs to a logging server via Syslog (my preference):
Syslog drains allow you to forward your Heroku logs to an external Syslog server for long-term archiving.
Send your logs to a custom logging process via HTTPS.
Log drains also support messaging via HTTPS. This makes it easy to write your own log-processing logic and run it on a web service (such as another Heroku app).
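A minimal sketch of such an HTTPS drain as a Spring Boot endpoint; Heroku POSTs batches of syslog-framed log lines to the drain URL, and the path and output file here are arbitrary choices:

import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

@RestController
public class LogDrainController {

    private static final Path LOG_FILE = Paths.get("heroku-drain.log"); // illustrative location

    // Heroku calls this endpoint with each batch of log lines; we simply
    // append the raw body so you end up with one continuous log file.
    @PostMapping("/drain")
    public void receive(@RequestBody String batch) throws IOException {
        Files.write(LOG_FILE, (batch + "\n").getBytes(StandardCharsets.UTF_8),
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }
}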
Speaking solely from the Sumo Logic point of view, since that’s the only one I’m familiar with here, you could do this with its Search Job API: https://help.sumologic.com/APIs/Search-Job-API/About-the-Search-Job-API
The Search Job API lets you kick off a search, poll it for status, and then when complete, page through the results (up to 1M records, I believe) and do whatever you want with them, such as dumping them into a CSV file.
But this is only available to trial and Enterprise accounts.
I just looked at Heroku's docs and it does not look like they have a native way to retrieve more than 1,500 lines, and you do have to forward those logs via syslog to a separate server/service.
I think your best solution is going to depend, however, on your use-case, such as why specifically you need these logs in a CSV.
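For completeness, a rough sketch of that Search Job flow in Java. The endpoint paths and response fields are my reading of the linked docs, so treat them as assumptions; credentials, query and time range are placeholders:

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class SumoSearchJobSketch {

    private static final String API = "https://api.sumologic.com/api/v1/search/jobs";

    public static void main(String[] args) throws Exception {
        String auth = "Basic " + Base64.getEncoder()
                .encodeToString("ACCESS_ID:ACCESS_KEY".getBytes()); // placeholder credentials
        HttpClient http = HttpClient.newHttpClient();
        ObjectMapper json = new ObjectMapper();

        // 1. Create the search job (query + time range as JSON).
        String body = "{\"query\":\"_sourceCategory=my-heroku-app\","
                + "\"from\":\"2020-06-01T00:00:00\",\"to\":\"2020-06-01T01:00:00\","
                + "\"timeZone\":\"UTC\"}";
        HttpResponse<String> created = http.send(HttpRequest.newBuilder()
                .uri(URI.create(API))
                .header("Authorization", auth)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body)).build(),
                HttpResponse.BodyHandlers.ofString());
        String jobId = json.readTree(created.body()).get("id").asText();

        // 2. Poll the job until it has finished gathering results.
        String state;
        do {
            Thread.sleep(2000);
            HttpResponse<String> status = http.send(HttpRequest.newBuilder()
                    .uri(URI.create(API + "/" + jobId))
                    .header("Authorization", auth).GET().build(),
                    HttpResponse.BodyHandlers.ofString());
            state = json.readTree(status.body()).get("state").asText();
        } while (!"DONE GATHERING RESULTS".equals(state));

        // 3. Page through the messages (repeat with a larger offset for more lines)
        //    and write them wherever you like, e.g. one raw line per log message.
        HttpResponse<String> page = http.send(HttpRequest.newBuilder()
                .uri(URI.create(API + "/" + jobId + "/messages?offset=0&limit=10000"))
                .header("Authorization", auth).GET().build(),
                HttpResponse.BodyHandlers.ofString());
        for (JsonNode message : json.readTree(page.body()).get("messages")) {
            System.out.println(message.get("map").get("_raw").asText());
        }
    }
}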

How do I deal with old collected metrics in Prometheus?

I am using a gauge vector in my application for collecting and exposing a particular metric, with labels, in the Prometheus metrics format. The problem is that once I have set a metric value for a particular set of labels, it will keep being scraped by Prometheus even if it is never collected again, until the application restarts and the metric is removed from memory. This means that even if that metric is no longer valid (say it hasn't been set for a day), Prometheus will still scrape it as if it were fresh.
Is it possible to either set an expiry time for collected metrics or to remove the collected metric completely? Or are problems like this dealt with on the Prometheus server side?
These are the correct semantics. Prometheus deals with metrics and metrics don't go away just because they haven't changed in a while. What you should be doing is keeping the gauge up to date.
It sounds like you might want a logs-based monitoring system, such as provided by the ELK stack.
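That said, if a label set really is gone for good (a deleted queue, a decommissioned host), the client libraries do let you drop it explicitly. A small sketch with the Prometheus Java simpleclient (other clients have equivalent remove/clear calls; names here are illustrative):

import io.prometheus.client.Gauge;

public class QueueMetrics {

    // A gauge with labels, i.e. a "gauge vector".
    static final Gauge QUEUE_DEPTH = Gauge.build()
            .name("queue_depth")
            .help("Current depth of each work queue.")
            .labelNames("queue")
            .register();

    void update(String queue, int depth) {
        // Keep the value fresh on every collection cycle.
        QUEUE_DEPTH.labels(queue).set(depth);
    }

    void retire(String queue) {
        // Stop exposing a label set that is no longer valid.
        QUEUE_DEPTH.remove(queue);
    }
}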

Using Grafana with Jmeter

I am trying to make Grafana display all my metrics (CPU, Memory, etc).
I have already configured Grafana on my server, configured InfluxDB, and of course configured the JMeter Backend Listener, but I still cannot display all the graphs. Any idea what I should do to make it work?
It seems that system metrics (CPU/memory, etc.) are not in the scope of the JMeter Backend Listener implementation. Capturing those KPIs is actually part of the PerfMon plugin, which currently doesn't seem to support dumping the metrics to InfluxDB/Graphite (at least it doesn't seem to work for me). It might be a good idea to raise such a request at https://groups.google.com/forum/#!forum/jmeter-plugins. Until this gets done, I guess you also have the option of using alternative metric-collection tools to feed data into InfluxDB/Graphite. Those would depend on the server OS you want to monitor (e.g. Graphite-PowerShell-Functions for Windows or collectd for everything else).
Are you sure that JMeter posts the data to InfluxDB? Did you see the default measurements created in InfluxDB?
I am able to send the data to InfluxDB using the Backend Listener. I have given the steps on this site:
http://www.testautomationguru.com/jmeter-real-time-results-influxdb-grafana/
