Sentry: Transactions quota 80% depleted

I get this mail from sentry:
Sentry: Transactions quota 80% depleted
Approaching Transactions Quota
Your organization FooBar has consumed 80% of its transactions capacity for the current usage period. It’s important to keep in mind that, should you hit your quota, and consume your on-demand spend, any excess transactions will be dropped until you roll over into the next period after Dec. 18, 2021.
I just want to use Sentry to monitor uncaught exceptions.
But it seems that sentry monitors all my transactions. That's not what I want.
How can I disable the monitoring of my transactions, so that only uncaught exceptions get monitored?

If you reduce the traces_sample_rate, then fewer samples will be sent to Sentry.
sentry_sdk.init(
    dsn="https://39f146...@xxx.ingest.sentry.io/xxx",
    integrations=[DjangoIntegration()],
    traces_sample_rate=0.1,  # <------------------- reduce this
)
Docs: https://docs.sentry.io/platforms/python/guides/bottle/configuration/sampling/
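If the goal is to send only error events and no transactions at all, a minimal sketch is to turn performance monitoring off entirely (the DSN below is a placeholder, not the asker's real one):

```python
import sentry_sdk
from sentry_sdk.integrations.django import DjangoIntegration

sentry_sdk.init(
    dsn="https://<key>@<org>.ingest.sentry.io/<project>",  # placeholder DSN
    integrations=[DjangoIntegration()],
    # Setting traces_sample_rate to 0.0 (or omitting it altogether)
    # disables transaction/performance monitoring; uncaught exceptions
    # are still reported as error events.
    traces_sample_rate=0.0,
)
```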

Related

Kafka consumption rate is low compared to message publish rate on topic

Hi, I am new to Spring Boot's @KafkaListener. Service A publishes messages to a Kafka topic continuously, and my service consumes messages from that topic. The partition count of the topic is the same for both services (Service A and mine), but the rate of consuming messages is lower than the rate of publishing them. I can see consumer lag in Kafka.
How can I fill that lag? Or how can I increase the rate of consuming messages?
Can I have a separate thread for processing messages? I could consume a message into a queue (acknowledging after adding it to the queue) while another thread reads from that queue to process the messages.
Is there any setting or property provided by Spring to increase the rate of consumption?
Lag is something you want to reduce, not "fill".
Can you consume faster? Yes. For example, the consumer's max.poll.records can be increased from its default of 500, based on your I/O rates (do your own benchmarking), to fetch more data per poll from Kafka. However, this increases the surface area for consumer error handling.
You can also consume and immediately ack the offsets, then toss records into a queue for processing. There is a possibility of skipping records in this case, though, since you move processing off the critical path for offset tracking.
Or you could commit only once per consumer poll loop, rather than acking every record, but this may result in duplicate record processing.
As mentioned before, adding partitions is the best way to scale consumption after distributing the producer workload.
You generally will need to increase the number of partitions (and concurrency in the listener container) if a single consumer thread can't keep up with the production rate.
If that doesn't help, you will need to profile your consumer app to see where the bottleneck is.
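The "ack immediately, process later" pattern described above is language-agnostic; here is a minimal Python sketch using stdlib pieces in place of a real Kafka consumer (the batches and the "commit" comment are stand-ins, not Kafka APIs):

```python
import queue
import threading

work_q = queue.Queue(maxsize=1000)   # bounded, so the poll loop gets back-pressure
processed = []

def worker():
    while True:
        record = work_q.get()
        if record is None:                 # sentinel: shut down the worker
            break
        processed.append(record.upper())   # stand-in for real processing
        work_q.task_done()

t = threading.Thread(target=worker)
t.start()

for batch in [["a", "b"], ["c"]]:    # each batch ~ one consumer.poll()
    for record in batch:
        work_q.put(record)           # hand the record to the worker...
    # ...and "ack" (commit offsets for) the whole batch here, before
    # processing finishes -- faster, but a crash now can skip queued records.

work_q.join()                        # wait for the queue to drain
work_q.put(None)
t.join()
print(processed)                     # ['A', 'B', 'C']
```

The bounded queue is the important design choice: without a maxsize, a fast poll loop just moves the lag from Kafka into your process's memory.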

How to Resolve a 403 error: User rate limit exceeded in Google Drive API?

I am getting
"code": 403,
"message": "User Rate Limit Exceeded"
while using Google Drive API in my web app
Although the quota is 10,000 requests per 100 seconds and my average is less than 2.
How can I resolve this error? How to implement exponential backoff as the documents say?
There are several types of quotas with Google APIs.
Project-based quotas affect your project itself. These quotas can be extended: if, for example, your project can make 10,000 requests per 100 seconds, you could request that this limit be raised.
Then there are user-based quotas. These quotas limit how much each user can send.
User Rate Limit Exceeded
means that you are hitting a user rate quota. User rate quotas are flood protection: they ensure that a single user of your application cannot make too many requests at once.
These quotas cannot be extended.
If you are hitting a user rate limit, you need to slow down your application and implement exponential backoff.
How you implement exponential backoff is up to you and the language you are using, but it basically involves retrying the same request, adding a wait time each time it fails.
The graph
The graph in the Google Cloud console is a guesstimate; it is not by any means accurate. If you are getting the error message, go by that and not by what the graph says.
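A minimal sketch of exponential backoff with jitter, as described above. `make_request` is a hypothetical callable, and `RateLimitError` is a stand-in for the 403 userRateLimitExceeded response:

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for the 403 userRateLimitExceeded response."""

def with_backoff(make_request, max_retries=5, base=1.0):
    """Retry make_request, waiting base * 2**attempt (+ jitter) between tries."""
    for attempt in range(max_retries):
        try:
            return make_request()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise                # out of retries: surface the error
            # exponential wait plus random jitter, so retries don't sync up
            time.sleep(base * 2 ** attempt + random.random() * base)

# Example: a hypothetical request that fails twice, then succeeds.
calls = {"n": 0}
def flaky_request():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RateLimitError()
    return "ok"

print(with_backoff(flaky_request, base=0.01))  # ok
```

The jitter matters in practice: if many clients retry on the same schedule, their retries arrive in bursts and keep tripping the same limit.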
After hours of searching and thinking, I found out that
'User Rate Limit Exceeded' has a spam protection which allows a max of 10 requests per second.
Thus I used a lazy trick: delaying the calls using:
usleep(rand(1000000, 2000000));
It simply delays the call by a random duration between one and two seconds.

What is the default API rate limit of Braintree?

What is the default API rate limit of Braintree?
After a number of requests in a time period, I am getting a 403 (Too Many Requests) exception.
Raised when requests associated with your account reach unsafe levels. We may limit API resources by merchant if activity risks negative impact to other merchants.
https://developers.braintreepayments.com/reference/general/exceptions/node#too-many-requests-error

GetStream update limit in free plan

What are the limitations of the free plan in GetStream?
I have 8 members and 5 administrators following each other's feeds, and I always get an email from GetStream alerting me about the rate limit when I make them follow.
Now I have an issue when updating activities. Perhaps I have reached my update limit, because sometimes when I try to create an activity, I get an ERROR TIMEDOUT.
We send out two rate limit messages: one for API calls, and one for feed updates. Our API call rate limit is about 2000 activities per minute, but feed updates are more like 50-100 per minute on the free plan. Setting up a follow relationship will trigger some feed updates as old activities get copied from other feeds to the new follower's feed.
When you do hit a rate limit, we don't stop your incoming traffic, but we de-prioritize it slightly so it takes a little longer to catch up. Our API v2, coming out soon, will report rate limit information in API calls, so you'll have more visibility into how close you are to those limits before getting emails.
Regarding timeouts, which region is your app in (us-east, us-west, eu-central), and where are you located relative to that region? We're going to be rolling out multi-region support later this year to minimize latencies there as well.

Consequences of changing USERPostMessageLimit

One of our legacy applications relies heavily on PostThreadMessage() for inter-thread communication, so we increased USERPostMessageLimit in the registry (way) beyond the normal 10,000.
However, documentation on MSDN states that "This limit should be sufficiently large. If your application exceeds the limit, it should be redesigned to avoid consuming so many system resources." [1]
Can anyone enlighten me as to how exactly consuming too many system resources manifests itself? What exactly are system resources? Can I somehow monitor an application's usage of system resources? Any information would be very helpful in deciding whether it is worth the time and effort to redesign this application.
The resources it is referring to are those used by the threads for receiving/handling the messages. You can monitor the thread pool size and other resources using Task Manager (see View -> Select Columns). It may help you identify the specific resource if the consumer is resource-locked: look for a resource count that tops out even while your thread count is increasing.
However, if you need to increase USERPostMessageLimit, then the message producer is simply overloading the message consumer; by increasing this limit you are compounding your problem, not fixing it. Reduce USERPostMessageLimit back to the default, and if your message producer cannot post a message, have it sleep before retrying, allowing the consuming thread to clear some messages.
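The sleep-and-retry approach suggested above is language-agnostic. Here is a minimal Python sketch, with queue.Queue standing in for the thread's message queue: put_nowait failing on a full queue models PostThreadMessage returning FALSE when the limit is hit.

```python
import queue
import threading
import time

msg_q = queue.Queue(maxsize=3)       # tiny limit, to force the retry path

def post_with_retry(q, msg, retries=100, delay=0.01):
    """Like a failed post: sleep and retry instead of giving up or raising the limit."""
    for _ in range(retries):
        try:
            q.put_nowait(msg)        # fails immediately if the queue is full
            return True
        except queue.Full:
            time.sleep(delay)        # back off; let the consumer drain messages
    return False

for i in range(3):                   # fill the queue to its limit
    assert post_with_retry(msg_q, i)

threading.Timer(0.05, msg_q.get).start()   # "consumer" frees one slot shortly
assert post_with_retry(msg_q, "late")      # spins in the retry loop until then
print(msg_q.qsize())                       # 3
```

The point of the sketch is that the producer now experiences back-pressure: it slows to the consumer's pace instead of piling up an ever-larger backlog behind a bigger limit.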
