I am trying to use Pub/Sub Lite, and after creating 16 topics it came back with a "Quota exceeded" error. My understanding was that there is no limit on the number of topics. The topics have no reservation. Does anyone know how to get past this restriction? It is not mentioned in the limits section either.
Anyone who has used it with thousands of topics, please let me know.
Straightforward:
1. Create a topic
2. Create a subscription to pass to the client
Works for 16 topics, then throws a quota exception.
Please help. I unstaked my Solana on Trust Wallet from 5 or so validators at once. Since then, when I go to claim my rewards, I receive the following error: encoded solana_sdk::transaction::versioned::VersionedTransaction too large: 1876 bytes (max: encoded/raw 1683/1232). There's no option to separate the transactions in Trust Wallet. Please help! -A Noob
I've tried staking new amounts, unstaking that amount, changing the node settings in Trust Wallet, contacting Trust Wallet support, trying to contact Solana, etc.
I'm having the exact same issue. The problem is that Trust Wallet tries to claim from all validators at once, creating a transaction payload larger than the accepted size.
From Solana's website:
"Solana's networking stack uses a conservative MTU size of 1280 bytes which, after accounting for headers, leaves 1232 bytes for packet data like serialized transactions. Developers building applications on Solana must design their on-chain program interfaces within the above transaction size limit constraint."
So the issue needs to be solved by Trust Wallet; there is nothing Solana can do about it.
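To make the constraint concrete: the fix a wallet could ship is to split the claims across several transactions, each below the 1232-byte limit quoted above. Here is a minimal sketch of that batching logic in Python. The per-claim and overhead byte counts are assumptions for illustration, not measured Solana values.

```python
# Sketch: split reward claims across transactions so each serialized
# transaction stays under Solana's 1232-byte packet limit.
TX_SIZE_LIMIT = 1232      # bytes available for a serialized transaction
BASE_TX_OVERHEAD = 300    # assumed: signatures, header, recent blockhash
BYTES_PER_CLAIM = 200     # assumed: one withdraw/claim instruction

def batch_claims(validators):
    """Group validators so each batch fits in one transaction."""
    per_tx = (TX_SIZE_LIMIT - BASE_TX_OVERHEAD) // BYTES_PER_CLAIM
    return [validators[i:i + per_tx] for i in range(0, len(validators), per_tx)]

batches = batch_claims([f"validator{i}" for i in range(5)])
# With the assumed sizes, 4 claims fit per transaction, so 5 validators
# are claimed in 2 transactions instead of one oversized one.
```

With real instruction sizes the batch size would differ, but the principle is the same: the wallet, not the chain, has to do the splitting.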
I reached out to customer support, but they just gave me a standard reply to update the app and make sure I have enough free SOL for transaction fees. I do, and that has nothing to do with this error.
This is really annoying - the funds are stuck in the app while the price is in a freefall.
I'm trying to press them to open a bug ticket and solve this with the next update, but no reply yet.
I referred to the MSDN documentation: Rate limit in Teams.
From this, I understand that we can only send 15,000 messages from a bot to users per hour.
But I am confused about the per-bot, per-thread limit.
Secondly, in order to handle the 429 exception, I am now using this code: Back off Code.
Earlier I was just using connector.Conversations.ReplyToActivityAsync((Activity)reply) to send messages to the user.
Am I doing this right, or are there any other precautions I should take? And how does exponential backoff address the rate limit?
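On the last question: exponential backoff does not raise the limit; it makes your bot wait progressively longer after each 429 so that the service has time to let your quota recover, and jitter keeps many concurrent senders from retrying in lockstep. The idea is language-agnostic; here is a minimal sketch in Python, where `send_fn` and `RateLimitError` are hypothetical stand-ins for your SDK call and its 429 exception:

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for the SDK exception raised on an HTTP 429 response."""

def send_with_backoff(send_fn, max_retries=5, base_delay=1.0):
    """Retry send_fn with exponential backoff plus jitter on 429 errors."""
    for attempt in range(max_retries):
        try:
            return send_fn()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # give up after the final attempt
            # Wait 1s, 2s, 4s, ... plus random jitter so concurrent
            # senders don't all retry at the same instant.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))
```

If the 429 response includes a Retry-After header, honoring that value instead of the computed delay is the safer choice.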
Any help is going to be appreciated. :)
Thanks in advance.
I am trying to create an "alerting" application that has to process information from multiple kafka topics. There exist thousands of topics, but realistically only a few hundred need to be processed at any given time based on the alerting configuration. If I continuously update my topics list with "subscribe" then the latency of rebalancing may delay my alerts.
How can I efficiently implement a consumer group that subscribes to a set of constantly changing topics?
I'd say the answer today is to use assign() instead of subscribe(), manually adding new topic partitions and removing unused ones as needed. Though it might be helpful to take a step back and ask whether it makes more sense for the number of topics to be static, identifying the things to monitor by message keys instead. That might make your life easier.
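The bookkeeping for the assign() approach can be sketched as pure set arithmetic. This is an illustrative sketch, not the Kafka client API: partitions are modeled as (topic, partition) tuples, and `partition_counts` is assumed to come from broker metadata; in a real consumer you would pass the full desired set to consumer.assign() each time it changes.

```python
def next_assignment(current, desired_topics, partition_counts):
    """Compute the partition set to pass to a hypothetical assign() call.

    current: set of (topic, partition) tuples currently assigned
    desired_topics: topics the alerting config needs right now
    partition_counts: {topic: number_of_partitions}, assumed to be
        fetched from broker metadata
    """
    desired = {
        (topic, p)
        for topic in desired_topics
        for p in range(partition_counts[topic])
    }
    added = desired - current    # partitions to start consuming
    removed = current - desired  # partitions to stop consuming
    return desired, added, removed

current = {("orders", 0), ("orders", 1), ("payments", 0)}
desired, added, removed = next_assignment(
    current, {"orders", "shipments"}, {"orders": 2, "shipments": 1}
)
```

assign() takes the full desired set, so no group rebalance occurs; the added/removed sets are still useful for logging and for committing offsets on partitions being dropped.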
Sometimes I am getting the following error:
503: Max Client Queue and Topic Endpoint Flow Exceeded
What do I need to configure to prevent this issue?
The number of "flows" is, roughly speaking, the number of endpoints to which you are subscribed. There are two types: ingress (for messages from your application into Solace) and egress (for messages from Solace into your application). You violated one of those limits. You can tell which by looking at the stack trace.
By default the limit on flows is 100. Before you increase this limit, ask yourself: are you really supposed to be subscribed to more than 100 queues/topics? If not, you may have a leak. Just as you wouldn't fix a memory leak by increasing memory, you shouldn't fix this leak by increasing the max flow. Are you forgetting to close your subscriptions? Are you using temporary queues? Despite their name, temporary queues last for the life of the client session unless you close them.
But if you really are supposed to be subscribed to that many endpoints, you can increase the max ingress and/or max egress. This can be done in SolAdmin by editing the Client Profile and selecting the Advanced Properties tab, or in solacectl by setting max-ingress or max-egress under configure/client-profile/message-spool (as explained here). (There is also a maximum setting per message spool, but you are unlikely to have violated that.)
It looks like the "Max Egress Flows" setting in your client-profile has been exceeded. One egress flow will be used up for each endpoint that your application is binding to.
The "Max Egress Flows" setting can be located under the "Advanced Properties" tab, when you edit the client-profile.
We hit the same issue during our load test. With a mere few hundred messages we started getting 503 errors. We identified that the issue was in our producer's topic creation: a new topic destination object was being created for every publish. Once we added caching of the topic destination object, the issue was resolved.
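The caching fix described above amounts to a per-topic-name lookup table. Here is a minimal sketch of the pattern; `create_topic` is a hypothetical stand-in for whatever factory call your messaging API uses, assumed to be the expensive operation that was exhausting flows:

```python
# Cache destination objects per topic name instead of creating a new one
# for every message published.
_destination_cache = {}

def get_destination(topic_name, create_topic):
    """Return a cached destination, creating it only on first use."""
    dest = _destination_cache.get(topic_name)
    if dest is None:
        dest = create_topic(topic_name)  # assumed expensive: consumes broker resources
        _destination_cache[topic_name] = dest
    return dest
```

With this in place, publishing N messages to the same topic creates the destination once rather than N times.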
I have Queue A, which I have used in one of my message flows that is up and running. I want to know the number of messages put to Queue A on a particular day or time interval.
Kindly help me find this out.
Please have a look at the product manuals, in particular:
Queue statistics information tells how to enable queue statistics and ensure that you are collecting all the relevant data.
Displaying accounting and statistics information which explains, well, pretty much what the title says.
And to keep this on-topic for Stack Overflow, check out the Statistics messages format page which tells you how to programmatically access the queue stats messages. By writing your own code to collect the messages, you can store them to a database, save them off to a file, slice and dice the numbers for pie charts, whatever.
You did not mention which version of MQ you are using, and that is usually important. However, all modern versions of MQ have some queue statistics instrumentation. The links I provided are from the v8.0 Knowledge Center.
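To make the first step concrete, here is a hedged sketch of the MQSC commands that enable queue statistics on a distributed queue manager; attribute names can vary by version, so check your release's documentation before running them:

```
* Enable statistics collection at the queue manager level,
* with a 30-minute (1800-second) collection interval
ALTER QMGR STATQ(ON) STATINT(1800)

* Have queue A inherit the queue manager's statistics setting
ALTER QLOCAL('A') STATQ(QMGR)
```

The resulting statistics messages are written to SYSTEM.ADMIN.STATISTICS.QUEUE, which is where a collector program (per the Statistics messages format page mentioned above) would read them.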