Discord.py - Sending many changes to the API

Situation
Let's say there are 200 people in the voice channel "Lobby".
We want to start our game night, so we type !start.
The bot then has to randomly put those people into rooms with a max size of 10:
The bot has to create 20 channels.
The bot has to randomly move the 200 people into those 20 channels.
Easy so far.
Issue
If we just fire those 220 requests at the API at the same time, it will not respond to every request.
If we put a 1-second sleep between every request, the whole thing takes 220 seconds longer.
Idea
My idea would be to create 10 additional bots which then execute 22 requests each, so everything should be done after 22+x seconds.
Would that be within Discord's terms of service?
Is there a better way?
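The random-grouping step can at least be sketched in plain Python (the function name is made up for illustration; the actual channel creation and member moves would go through discord.py, which, as far as I know, already queues and throttles its HTTP requests based on Discord's rate-limit headers, so firing the moves sequentially through one client is usually enough):

```python
import random

def assign_rooms(members, room_size=10):
    """Shuffle members and split them into rooms of at most room_size."""
    pool = list(members)
    random.shuffle(pool)
    return [pool[i:i + room_size] for i in range(0, len(pool), room_size)]

rooms = assign_rooms(range(200))
print(len(rooms))  # 20 rooms of 10 members each
```

Each room list would then map to one created channel and ten move requests.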

Related

Laravel bulk mail queue (divided by quantity and time)

I have a scheduling system that sends emails with the respective calendars of each system member.
My mailing list has grown significantly (more than 800 recipients), and my email provider is imposing some kind of restriction, something like SMTP tarpitting.
I think I could take all these recipients, split them up, and send in small batches, i.e., I could use Mail::queue().
The point is:
Is there any way to add queued jobs at intervals, for example every 10 minutes, so that new jobs are always added at the end of the queue, even if a new mailing batch arrives?
The idea would be (I don't know if it's the best solution) to take the total amount, for example 800, divide it by 150, which would give about 5 iterations, and in each iteration send 25 emails every 10 minutes (25 × (60/10) × 5 == 750).
You can push all mails onto your queue and then configure the queue to process a specific amount in a given time window (you need Redis for this): https://laravel.com/docs/master/queues#rate-limiting
That way you can focus on what you are doing and less on how you are doing it 😉
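The batching arithmetic from the question can be sketched as follows (a Python illustration of the splitting only; in Laravel you would attach the computed delay to each queued mailable rather than compute timestamps yourself):

```python
from datetime import datetime, timedelta

def schedule_batches(recipients, batch_size=25, interval_minutes=10, start=None):
    """Split recipients into fixed-size batches, each paired with a send time."""
    start = start or datetime.now()
    batches = [recipients[i:i + batch_size]
               for i in range(0, len(recipients), batch_size)]
    return [(start + timedelta(minutes=interval_minutes * n), batch)
            for n, batch in enumerate(batches)]

plan = schedule_batches([f"user{i}@example.com" for i in range(800)])
# 800 recipients / 25 per batch = 32 batches, spread over 310 minutes
```

New mailing batches appended later simply continue from the last scheduled time, which matches the "always added at the end of the queue" requirement.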

Error 429: Insufficient tokens (DefaultGroupUSER-100s) What defines a user?

tl;dr: do 100 devices all using the same Client ID count as 100 users, each with their own limits, or as one user sharing limits?
I have a webpage which reads and writes to a Google Sheet.
Because the webpage needs to know if a cell has changed, it polls the server once every 1000ms:
var pollProcId = window.setInterval(pollForInput, 1000);
where pollForInput does a single:
gapi.client.sheets.spreadsheets.values.get(request).then(callback);
When I tried to use this app with a class of 100 students I got many 429 error codes (more than I got successful reads) in response to google.apps.sheets.v4.SpreadsheetsService.GetValues requests:
Many of my users never got as far as seeing even the first request come back.
As far as I can make out, these are AnalyticsDefaultGroupUSER-100s errors which, according to the error responses page:
Indicates that the requests per 100 seconds per user per project quota
has been exhausted.
But with my app only requesting once per 1000 milliseconds, I wouldn't expect to see this many 429s: with a limit of 100 requests per 100 seconds (1 per second), only users whose session didn't complete within 100 seconds should have received a 429.
I know I should implement Exponential Backoff (which I'll do, I promise) but I'm worried I'm misunderstanding what a "user" in this context is.
Each user is using their own device (so presumably has a different IP address) but they are all using my "Client ID".
Does this scenario count as many users making one request per second, or a single user making a hundred requests per second?
Well, the "user" in the per-user quota refers to a single user making requests. Take the Sheets API: it has a quota of 100 read requests per 100 seconds per user, meaning a single user can effectively make one read request per second. Note that write requests have the same quota values as read requests, but the two quotas are tracked separately and don't share the same limit.
If you want a higher quota than the default, you can apply for one using this form or by visiting your developer console and clicking the pencil icon next to the quota you want to increase.
I also suggest implementing exponential backoff as soon as possible, because it can help you avoid this kind of error.
Hope it helps you.
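A minimal sketch of the exponential backoff recommended above (RateLimitError here is a stand-in for however your client surfaces an HTTP 429; real client libraries raise their own exception types):

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for an HTTP 429 response from the API."""

def with_backoff(call, max_retries=5, base_delay=1.0):
    """Retry `call` on rate-limit errors, doubling the wait each attempt plus jitter."""
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # give up after the last attempt
            delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
            time.sleep(delay)
```

The random jitter matters when many clients (like a classroom of 100 students) hit the limit at the same moment, so their retries don't all collide again.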

Pricing: Are push notifications really free?

According to the parse.com pricing page, push notifications are free up to 1 million unique recipients.
API calls are free up to 30 requests / second.
I want to make sure there is no catch here.
An example will clarify: I have 100K subscribed users. I will send weekly push notifications to them. In a month, that will be 4 push "blasts" with 100K recipients each. Is this covered by the free tier? Would this count as 4 API calls, 400K API calls, or some other amount?
100k users is 1/10 the advertised unique recipient limit, so that should be okay.
Remember that there's a 10-second timeout, too. So the only way to blast 100k pushes within the free-tier resource limits is to create a scheduled job that spends about 2 hours (a safe rate of 15 req/sec) doing pushes and writing state so you can pick up later where you left off.
Assuming there's no hidden gotcha (you'll probably need to discover those empirically), I think the only gotcha in plain sight is the fact that the free tier allows only one (1) scheduled job. Any other long-running processing, and there is bound to be some with 100k users, will have to share that job, making the what-should-this-single-job-work-on-now logic pretty complex.
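The "write state so you can pick up where you left off" idea can be sketched like this (everything here is illustrative: `state` stands in for a persisted object the job loads at startup and saves before the timeout, and `send_push` for one per-user push request):

```python
def run_scheduled_job(user_ids, state, send_push, chunk_size=9000):
    """Process one chunk of users per job run, recording where to resume.

    `state` is a dict standing in for persisted job state; `send_push`
    stands in for one push request to a single user.
    """
    start = state.get("cursor", 0)
    chunk = user_ids[start:start + chunk_size]
    for uid in chunk:
        send_push(uid)
    state["cursor"] = start + len(chunk)
    return len(chunk)  # 0 means the whole list has been processed
```

At 15 req/sec, a 9000-user chunk takes roughly 10 minutes, so about a dozen scheduled runs would cover 100k users.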
You should take a look at the FAQ for Parse.com:
https://www.parse.com/plans/faq
What is considered an API request?
Anytime you make a network call to Parse on behalf of your app using one of the Parse SDKs or REST API, it counts as an API request. This does include things like queries, saves, logins, amongst other kinds of requests. It also includes requests to send push notifications, although this is seen as a single request regardless of how many recipients are targeted. Serving Parse files counts as an API request, including static assets served from Parse Hosting. Analytics requests do have a special exemption. You can send us your analytics events any time without being limited by your app's request limit.

Using Twilio SMS API, can I specify multiple destination phones in one post?

Twilio limits long code SMS to 1/sec. To improve my throughput, I split my batch into 5 phone numbers. I've found each HTTP POST to the Twilio API takes about 0.5 seconds.
One would think using 5 Twilio phone numbers to send a message to 1000 cell phones would take 200 seconds, but it will take 500 seconds just to POST the requests. So two phone numbers will double my throughput, but more would not make a difference.
Am I missing something? I was thinking it would be nice if the API would take a list of phone numbers for the "To" parameter. I don't want to pay for a short code, but even if I do, it seems the maximum throughput is 2/sec unless you resort to the complexity of having multiple threads feeding Twilio.
I've noticed TwiML during a call lets you include multiple Sms nodes when constructing a response, so it seems like there should be a way to do the same for outbound SMS.
Twilio Evangelist here. At the moment, we require that you submit each outgoing SMS message as its own API request.
The current rate limit on a longcode is 1 message per second. If more messages per second are sent, Twilio queues them up and sends them out at a rate of 1 per second.
A potential workaround is to make async requests across multiple phone numbers. This can be accomplished with the twilio node.js module or an evented framework such as EventMachine for Ruby or a similar toolset for your language of choice.
Hope this helps!
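The multi-number workaround boils down to spreading recipients across a pool of senders. A round-robin sketch in Python (illustrative only; note that a real Messaging Service with "sticky sender" would instead pin each recipient to one number rather than rotating):

```python
from itertools import cycle

def assign_senders(recipients, sender_numbers):
    """Spread recipients round-robin across a pool of outbound numbers."""
    senders = cycle(sender_numbers)
    return [(next(senders), recipient) for recipient in recipients]

pairs = assign_senders([f"+1555000{i:04d}" for i in range(1000)],
                       ["+15550001111", "+15550002222", "+15550003333",
                        "+15550004444", "+15550005555"])
# each of the 5 numbers handles 200 recipients, so each number's
# 1 msg/sec queue drains in about 200 seconds instead of 1000
```

The POSTs themselves still need to be issued concurrently (async requests or threads), otherwise the 0.5 s per POST remains the bottleneck, as the question observes.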
Here's a more modern answer. Twilio now supports Messaging Services. It basically lets you create a service that groups multiple outbound phone numbers together. So when you fire off requests for a text to be sent, it can use ALL the numbers in the group to perform the sending. This effectively overcomes the 1-text-per-second limit.
Messaging Services also come with Copilot. It adds several features such as "sticky sender", which ensures the same end user always gets texts from the same number in the pool instead of getting texts from different numbers.
If you are using the trial account, even looping with a 5s timeout between each item in the array did not work for me. And that was for just two numbers. Once I upgraded the account the code worked immediately without needing a timeout.
You know it's the trial account if the SMS you receive (when sending to only one number) says "Sent from your Twilio trial account - ".

google places api requests - OVER_QUERY_LIMIT before actually over the limit

I have developed a Google Places application to get info about places. I have verified my identity with Google, and as per the limits I should be allowed up to 100,000 requests per day. However, after fewer than 300 requests (a different number goes through each day), I get the message back: OVER_QUERY_LIMIT. Any similar experiences, or ideas how to enable the requests to go through?
Thank you.
D Lax
You can track your requests at
https://code.google.com/apis/console/?noredirect#:stats
I ran into this issue as well. I was able to find 3 throttle limits for the Places API, given by Google:
10 api calls per every 1 second
100 api calls per every 100 seconds
50,000 api calls per 1 day
If I were to go over any of these limits, I would receive the OVER_QUERY_LIMIT error and it would return no results for that given address.
I had my program sleep for 11 seconds after calling the Places API with a dataset of 10 addresses, then call the API again with a new dataset of 10 addresses. This gets around both the 10 calls/second and the 100 calls/100 seconds throttle limits. However, I still ran into the OVER_QUERY_LIMIT error once I tried my 25th dataset of 10 addresses (after 240 API calls). So it is clear that there are other, unpublished throttles in place to help protect the Google Maps platform.
But I did see that the limits mentioned above may be raised if you get in contact with the Google API help team and sort it out with them.
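The sleep-11-seconds trick can be generalized into a small limiter that enforces both published windows at once (a sketch; the clock and sleep are injectable only so it can be tested, and it obviously cannot defend against the unpublished throttles described above):

```python
import time
from collections import deque

class SlidingWindowLimiter:
    """Block before each call until it fits under every (max_calls, window_s) limit."""

    def __init__(self, limits, clock=time.monotonic, sleep=time.sleep):
        self.limits = limits            # e.g. [(10, 1.0), (100, 100.0)]
        self.calls = deque()            # timestamps of recent calls, oldest first
        self.clock = clock
        self.sleep = sleep

    def acquire(self):
        horizon = max(window for _, window in self.limits)
        while True:
            now = self.clock()
            while self.calls and self.calls[0] <= now - horizon:
                self.calls.popleft()    # forget calls outside the largest window
            waits = []
            for max_calls, window in self.limits:
                recent = [t for t in self.calls if t > now - window]
                if len(recent) >= max_calls:
                    waits.append(recent[0] + window - now)
            if not waits:
                self.calls.append(now)
                return
            self.sleep(max(waits))

limiter = SlidingWindowLimiter([(10, 1.0), (100, 100.0)])
# call limiter.acquire() before every Places API request
```

Unlike a fixed 11-second sleep per batch of 10, this waits only as long as the tightest window actually requires.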
