Can we retrieve multiple real-time metrics from GCP?

I am looking to retrieve 100+ metrics from GCP for CPU, disk, network, etc.
I went through the Stackdriver API but could not find a way to call the API with multiple metrics.
Also, I don't require time-series data; only aggregated or real-time data is the requirement.
Is there a way to get that from GCP?
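For reference, the Monitoring API's projects.timeSeries.list method accepts an aggregation (a per-series aligner plus a cross-series reducer), which collapses each series to aggregated values rather than raw time-series points. The filter must name a single metric type per call, though, so 100+ metrics means one request per metric. A minimal sketch with the Java client, where the project ID and metric types are placeholder assumptions:

```java
import com.google.cloud.monitoring.v3.MetricServiceClient;
import com.google.monitoring.v3.Aggregation;
import com.google.monitoring.v3.ListTimeSeriesRequest;
import com.google.monitoring.v3.ProjectName;
import com.google.monitoring.v3.TimeInterval;
import com.google.protobuf.Duration;
import com.google.protobuf.util.Timestamps;

import java.util.List;

public class LatestMetrics {
  public static void main(String[] args) throws Exception {
    String projectId = "my-project";                 // assumption: your project ID
    List<String> metricTypes = List.of(              // one request per metric type,
        "compute.googleapis.com/instance/cpu/utilization",        // since the filter
        "compute.googleapis.com/instance/disk/read_bytes_count"); // takes only one

    try (MetricServiceClient client = MetricServiceClient.create()) {
      long now = System.currentTimeMillis();
      // A short recent window approximates "real time".
      TimeInterval interval = TimeInterval.newBuilder()
          .setStartTime(Timestamps.fromMillis(now - 120_000))
          .setEndTime(Timestamps.fromMillis(now))
          .build();
      // Align and reduce so each metric collapses to a single aggregated point.
      Aggregation aggregation = Aggregation.newBuilder()
          .setAlignmentPeriod(Duration.newBuilder().setSeconds(120).build())
          .setPerSeriesAligner(Aggregation.Aligner.ALIGN_MEAN)
          .setCrossSeriesReducer(Aggregation.Reducer.REDUCE_MEAN)
          .build();

      for (String metricType : metricTypes) {
        ListTimeSeriesRequest request = ListTimeSeriesRequest.newBuilder()
            .setName(ProjectName.of(projectId).toString())
            .setFilter("metric.type=\"" + metricType + "\"")
            .setInterval(interval)
            .setAggregation(aggregation)
            .setView(ListTimeSeriesRequest.TimeSeriesView.FULL)
            .build();
        client.listTimeSeries(request).iterateAll()
            .forEach(ts -> System.out.println(metricType + " -> " + ts.getPointsList()));
      }
    }
  }
}
```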

Related

How to set up OpenTelemetry tracing with Grafana Tempo

I have the following scenario: two services (clients) running in one location that also make calls to one central backend in another location. The latency to this backend is not very good, since the physical distance is very large.
I have now instrumented everything using OpenTelemetry. The focus is on traces.
But now I am struggling to decide how to correctly set up the infrastructure, i.e. the storage backend and the OTel Collector. Ideally I would find a solution that collects locally and then gets the data to a central location on a pull basis.
I am new to this topic, so I need some input on whether this idea makes sense at all.
I also don't want too much overhead for the tracing; only what is necessary for it to work properly.
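One pattern that fits this, sketched under assumptions rather than as a definitive recommendation: run an OTel Collector in each location, point each service at its local collector over OTLP, and have the local collectors forward batches to central Grafana Tempo (which accepts OTLP directly). A true pull of trace data between collectors is unusual; forwarding is push-based, but local batching still shields the services from the slow link. On the application side, a minimal Java SDK setup could look like this, with the endpoint, sampling ratio, and service name all assumptions:

```java
import io.opentelemetry.api.OpenTelemetry;
import io.opentelemetry.exporter.otlp.trace.OtlpGrpcSpanExporter;
import io.opentelemetry.sdk.OpenTelemetrySdk;
import io.opentelemetry.sdk.resources.Resource;
import io.opentelemetry.sdk.trace.SdkTracerProvider;
import io.opentelemetry.sdk.trace.export.BatchSpanProcessor;
import io.opentelemetry.sdk.trace.samplers.Sampler;

public class TracingSetup {
  public static OpenTelemetry init() {
    // Export to the collector running in the SAME location as the service,
    // so the slow cross-location hop is handled by the collector, not the app.
    OtlpGrpcSpanExporter exporter = OtlpGrpcSpanExporter.builder()
        .setEndpoint("http://localhost:4317")   // assumption: local collector
        .build();

    SdkTracerProvider tracerProvider = SdkTracerProvider.builder()
        // Batching keeps per-request overhead low.
        .addSpanProcessor(BatchSpanProcessor.builder(exporter).build())
        // Head sampling: keep 10% of traces; tune the ratio to your volume.
        .setSampler(Sampler.parentBased(Sampler.traceIdRatioBased(0.10)))
        .setResource(Resource.getDefault().toBuilder()
            .put("service.name", "client-service") // assumption
            .build())
        .build();

    return OpenTelemetrySdk.builder()
        .setTracerProvider(tracerProvider)
        .build();
  }
}
```

The BatchSpanProcessor plus head sampling keeps per-request overhead to little more than an in-memory queue write, which matches the "only what is necessary" requirement.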

Databricks or AWS Lambda for a low-throughput event-driven architecture

I am looking to set up an event-driven architecture to process messages from SQS and load them into AWS S3. The events will be low volume, and I was looking at either Databricks or AWS Lambda to process them, as these are the two tools we have already procured.
I wanted to understand which one would be best to use. I'm struggling to differentiate them for this task: the throughput is only up to 1,000 messages per day and unlikely to go higher at the moment, so both are capable.
I just wanted to see what other people would consider the differentiators between these two products, so I can make sure this is as future-proofed as possible.
We have used Lambda more where I work, and it may help to keep things consistent since we have more AWS skills in house, but we are looking to build out Databricks capability, and I personally find it easier to use.
If it were big data, the decision would have been easier.
Thanks
AWS Lambda seems to be the much better choice in this case. Here are some benefits of Lambda compared to Databricks.
Pros:
Free of cost: AWS Lambda's free tier covers 1 million requests per month and 400,000 GB-seconds of compute time per month; at 1,000 messages per day (roughly 30,000 per month) you will stay comfortably within it. More details here.
Very simple setup: The Lambda function implementation is straightforward. Connect the SQS queue to your Lambda function using the AWS Console or the AWS CLI (more details here). The function code is just a couple of lines: it receives a message from the SQS queue and writes it to S3 (see the sketch after this answer).
Logging and monitoring: You won't need any separate setup to track performance metrics such as how many messages Lambda processed, how many succeeded, and how long they took; these are generated automatically by AWS CloudWatch. You also get a built-in retry mechanism: just specify the retry policy and AWS Lambda takes care of the rest.
Cons:
One drawback of this approach is that each Lambda invocation writes a separate file to S3, because S3 provides no API to append to existing objects. So you will get 1,000 files in S3 per day. Maybe you are fine with this (it depends on what you want to do with the data in S3). If not, you will either need a separate job to merge the files periodically, or your Lambda must download the existing object, append to it, and upload it back, which makes the function a bit more complex.
Databricks, on the other hand, is built for a different kind of use case: loading large datasets from Amazon S3 and performing analytics, SQL-like queries, building ML models, etc. It won't be suitable for this use case.
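To illustrate how small the function can be, here is a hedged Java sketch (the bucket name and key scheme are assumptions; it needs the aws-lambda-java-core, aws-lambda-java-events, and AWS SDK v2 S3 dependencies):

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.SQSEvent;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

// Minimal SQS-to-S3 handler: each message becomes one S3 object
// (hence the "one file per message" caveat above).
public class SqsToS3Handler implements RequestHandler<SQSEvent, Void> {
  private static final S3Client S3 = S3Client.create();
  private static final String BUCKET = "my-events-bucket"; // assumption

  @Override
  public Void handleRequest(SQSEvent event, Context context) {
    for (SQSEvent.SQSMessage msg : event.getRecords()) {
      // Key the object by message ID so retries overwrite rather than duplicate.
      String key = "events/" + msg.getMessageId() + ".json";
      S3.putObject(
          PutObjectRequest.builder().bucket(BUCKET).key(key).build(),
          RequestBody.fromString(msg.getBody()));
    }
    return null; // failed batches are retried per the queue's redrive policy
  }
}
```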

Measuring HTTP Performance using Micrometer in a Spring Boot Application

I am developing a Spring Boot 2 application with Micrometer for reporting metrics. One piece of functionality sends large amounts of data to a RESTful web service.
I would like to measure the amount of data sent and the time taken to complete the request. Using the Timer metric gives me the time as well as the number of times the request is made, but how can I include the bytes transferred in the same metric? My Grafana dashboard is supposed to plot the amount of data transferred and the time taken to accomplish it.
I looked at Counter and Gauge, but they don't look like the right fit for what I am trying to do. Is there a way to add a custom field to the Grafana metric?
You'd use a DistributionSummary for that. See here and here.
Regarding instrumentation, you currently have to instrument your controllers manually or wire an aspect around them.
IIRC, at least the Tomcat metrics provide some data-in and data-out metrics, but not down to the path level.
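As a hedged sketch of the DistributionSummary approach (the meter names and tags are invented for illustration): time the request with a Timer and record the payload size with a DistributionSummary registered under matching tags, so the two series can be plotted together in Grafana.

```java
import io.micrometer.core.instrument.DistributionSummary;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

public class UploadMetrics {
  private final Timer uploadTimer;
  private final DistributionSummary uploadBytes;

  public UploadMetrics(MeterRegistry registry) {
    // Same tags on both meters so they can be correlated in Grafana.
    this.uploadTimer = Timer.builder("upload.duration")
        .tag("service", "data-export")           // hypothetical tag
        .register(registry);
    this.uploadBytes = DistributionSummary.builder("upload.size")
        .baseUnit("bytes")
        .tag("service", "data-export")
        .register(registry);
  }

  public void send(byte[] payload) {
    Timer.Sample sample = Timer.start();
    try {
      // ... perform the actual HTTP call here ...
    } finally {
      sample.stop(uploadTimer);              // records the elapsed time
      uploadBytes.record(payload.length);    // records the bytes transferred
    }
  }

  public static void main(String[] args) {
    UploadMetrics metrics = new UploadMetrics(new SimpleMeterRegistry());
    metrics.send(new byte[64_000]);
  }
}
```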

Google App Engine Datastore image upload very slow, taking too much time

I am uploading a bitmap image as blob data to Datastore via Endpoints classes. The upload is taking too much time. Is there some way I can improve performance? Are there different 'tiers' with different RAM and clock speed to choose from, as there are in Cloud SQL?
Look at the App Engine Dashboard. It offers information on CPU utilization, memory usage, and latency. Run a few tests and see how they affect the stats. Then look at the logs and compare cpu ms versus total ms: a large gap between the two usually means the request spent its time waiting on I/O rather than computing.
These are good places to start investigating the problem. Once you know what constrains your app's performance, you can start looking for a solution.

Measuring Application Performance

I was wondering if there is a tool to keep track of application performance. What I have in mind is a tool that listens for updates and registers performance metrics published by an application, e.g. the time to serve a request or the time a certain operation took to finish. The tool would then aggregate the data and show performance trends.
If you want to measure your application from the outside, you can use RRDtool to collect the data.
You can use SLAMD for web apps written in Java.
For Django, use the hotshot profiler.
Search for "profiler" plus your language or framework.
Take a look at HP SiteScope. Its ability to drive the system with a web user script, to monitor metrics on the backend (even to the extent of custom shell scripts and database queries), plus the ability to add logic for reports and alerts against these combined data sets, appears to be what you need.
Another mechanism you might consider is a roll-your-own service: use cURL to push information in, query the systems involved to pull metrics or database information, and build your own interface for alerting and reporting (a sketch follows the reference below).
Then it becomes a cost question: can you build that level of functionality for less money than an already existing solution on the open market?
Ref:
HP SiteScope Wiki Page
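To make the roll-your-own push option concrete, here is a hedged sketch using the JDK's built-in HttpClient; the endpoint URL, metric name, and JSON shape are all invented for illustration (the cURL equivalent is a single POST):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class MetricPusher {
  private static final HttpClient CLIENT = HttpClient.newHttpClient();
  // Hypothetical collector endpoint; substitute whatever service you build.
  private static final URI ENDPOINT = URI.create("http://metrics.internal/api/v1/points");

  public static void push(String name, double value) throws Exception {
    String json = String.format(
        "{\"metric\":\"%s\",\"value\":%f,\"ts\":%d}",
        name, value, System.currentTimeMillis());
    HttpRequest request = HttpRequest.newBuilder(ENDPOINT)
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(json))
        .build();
    HttpResponse<Void> response =
        CLIENT.send(request, HttpResponse.BodyHandlers.discarding());
    if (response.statusCode() >= 300) {
      throw new IllegalStateException("push failed: " + response.statusCode());
    }
  }

  public static void main(String[] args) throws Exception {
    long start = System.nanoTime();
    Thread.sleep(25); // stand-in for the operation being measured
    push("request.duration.ms", (System.nanoTime() - start) / 1_000_000.0);
  }
}
```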
