What is the quota per month for the Bing Web Search API? [closed]

I cannot find any mention of what the actual monthly quota is.
https://learn.microsoft.com/en-us/bing/search-apis/bing-news-search/reference/error-codes

If you look at the documentation for the Bing Web Search API, you can read that:
The service and your subscription type determine the number of queries per second (QPS) that you can make. Make sure your application includes the logic necessary to stay within your quota. If the QPS limit is met or exceeded, the request fails and returns an HTTP 429 status code. The response includes the Retry-After header, which indicates how long you must wait before sending another request.
And if you head over to the Pricing section, you can find the TPS (transactions per second) limit and the limit per month.
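As a minimal retry sketch in plain PHP (the v7.0 endpoint and the Ocp-Apim-Subscription-Key header are from the Bing docs; the key itself is a placeholder, and the actual quota numbers live on the Pricing page, not in code):

// Honor the Retry-After header when Bing returns HTTP 429 (quota exceeded).
function bingSearch(string $query, string $key): array {
    $url = 'https://api.bing.microsoft.com/v7.0/search?q=' . urlencode($query);
    do {
        $ch = curl_init($url);
        curl_setopt_array($ch, [
            CURLOPT_RETURNTRANSFER => true,
            CURLOPT_HEADER         => true, // keep headers so we can read Retry-After
            CURLOPT_HTTPHEADER     => ['Ocp-Apim-Subscription-Key: ' . $key],
        ]);
        $response   = curl_exec($ch);
        $status     = curl_getinfo($ch, CURLINFO_HTTP_CODE);
        $headerSize = curl_getinfo($ch, CURLINFO_HEADER_SIZE);
        curl_close($ch);

        if ($response === false) {
            return []; // network error; give up
        }
        if ($status === 429) {
            // Wait as long as the Retry-After header tells us to, then retry.
            preg_match('/Retry-After:\s*(\d+)/i', substr($response, 0, $headerSize), $m);
            sleep((int) ($m[1] ?? 1));
        }
    } while ($status === 429);

    return json_decode(substr($response, $headerSize), true) ?? [];
}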

Related

Laravel: how to deal with 120 calls to external APIs? [closed]

I am trying to figure out how to handle 120 calls to paid external APIs.
1 - Should I store every response in the database and serve it from there according to the connected user?
2 - Should I store every JSON response in a folder and serve it according to the connected user?
I am confused about which way to go.
When a user has a valid subscription, the calls to the external APIs will be made as a scheduled job.
What you can do is cache the response you get from the paid API.
use Illuminate\Support\Facades\Cache;

$value = Cache::remember('cache-key', $timeToLiveInSeconds, function () {
    // send the request to the paid API and return the data
});
Check out the official docs
(https://laravel.com/docs/9.x/cache#retrieving-items-from-the-cache).
By default the cache driver is file; you can switch to redis or memcached if need be.
Now what you need to understand is the cache key and the time to live.
Cache key: this is the key Laravel uses to look up the cached data, so if the response depends on, say, the logged-in user, you can use the user id as part of the key.
Time to live in seconds: this says how long the data should stay cached. You have to know how often the paid API's data changes so that you do not keep stale data around for too long.
Now when you try to send a request, Laravel first checks whether the data exists in the cache and has not yet expired based on the time to live. If it is still valid, Laravel returns the cached data; otherwise it runs your closure to call the paid API, caches the result, and returns it.
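For the per-user case described above, a minimal sketch might look like this (the endpoint https://api.example.com/reports and the services.paidapi.token config entry are placeholders, not the asker's real API):

use Illuminate\Support\Facades\Cache;
use Illuminate\Support\Facades\Http;

// Cache the paid API's response per user for 15 minutes (900 seconds), so at
// most one real call per user is made in any 15-minute window.
$data = Cache::remember("paid-api:user:{$user->id}", 900, function () use ($user) {
    return Http::withToken(config('services.paidapi.token'))
        ->get('https://api.example.com/reports', ['user' => $user->id])
        ->json();
});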

Dialogflow CX pricing: by session or by request? [closed]

I would like to understand what cost my chatbot developed in Dialogflow CX will generate.
I have read the page https://cloud.google.com/dialogflow/pricing but it is not entirely clear to me.
With Dialogflow CX, do you pay per session or per request?
And what is considered a request? Is a request generated every time the end user writes a question to the chatbot, or is a request equal to a session?
According to the pricing page, it is $0.007 per request.
For example:
A user opens a conversation with the chatbot
and asks 5 questions, getting 5 responses from the chatbot.
What price would that generate?
$0.007 for the one session,
or $0.007 per question? $0.007 x 5 = $0.035
Thanks
As per the Pricing documentation, a request is defined as any call to the Dialogflow service. Note that a request is not equivalent to a session - a session may consist of multiple requests.
Moreover, also note that a new Dialogflow CX pricing model (2021-09) will take effect on September 1, 2021 - customers will be charged per request. Until that date, the original prices (2020-09) are in effect - customers are currently charged per session.
For your example, where the end-user asks 5 questions by sending text requests to a chatbot and receives 5 responses from the Dialogflow CX agent, this counts as 5 text requests, and the cost would be:
CX Agent (2020-09) (current): $20 per 100 chat sessions
If you have 5 requests that are sent by the same end-user, it is considered as 1 session only. Hence, the cost will be $0.20.
If you have 5 requests where each request is sent by different end-users, it is considered as 5 sessions. Hence, the cost will be $1.
CX Agent (2021-09) (starting September 1, 2021): $0.007 x 5 = $0.035
Basically, each time an end-user sends a query to the agent, the Dialogflow API’s detectIntent or StreamingDetectIntent method gets called and that counts as one text request or one audio request, depending on whether the application sends text or voice data to Dialogflow.
So for the most recent CX agent pricing (2021) you will pay $0.007 per request; using your example, if a user asks 5 questions that's $0.007 x 5 = $0.035.
If you're using the 2020 CX agent pricing you'd be paying $20 per 100 sessions, where a session is any conversation the agent has with one user. So if your user asked 100 questions it would still be considered one session.
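To make the two models concrete, here is a quick sketch that just redoes the arithmetic from the answers above (the rates come from those answers, not from a live price list):

// Rough cost comparison of the two Dialogflow CX pricing models above.
function costPerSessions2020(int $sessions): float {
    return $sessions * (20 / 100); // $20 per 100 chat sessions
}
function costPerRequests2021(int $requests): float {
    return $requests * 0.007; // $0.007 per text request
}

echo costPerSessions2020(1); // 0.2   -> 5 questions from one user = 1 session
echo costPerRequests2021(5); // 0.035 -> 5 questions = 5 requests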

Best model for processing RSS feeds [closed]

I am creating a podcast website and I was wondering what the best way would be to keep a database up to date (within 15 minutes) with the podcast RSS feeds.
Currently I parse a feed on request and store it in a Redis cache for 15 minutes. But I would prefer to keep a database with all the data (feeds and all episodes).
Would it be better to bake the data by constantly hitting all feeds every 15 minutes on a processing server, or to process the feeds when requested?
If I were to update an RSS feed when requested I would have to:
check database -> check if 15 mins old -> done || parse feed -> check for new feeds -> done || add to database -> done
where done = send data to user.
Any thoughts?
That's one way to do it. There are protocols like PubSubHubbub which can help you avoid polling "dumbly" every 15 minutes... You could also use Superfeedr and just wait for us to send you the data we find in the feeds.
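If you do stick with polling, conditional GETs keep the 15-minute loop cheap, since unchanged feeds can answer 304 with no body. A minimal sketch in plain PHP (you'd persist the ETag per feed between runs; the function name is just illustrative):

// Fetch a feed only if it changed: send the ETag we saw last time and skip
// parsing entirely when the server answers 304 Not Modified.
function fetchFeedIfChanged(string $url, ?string $etag): ?string {
    $ch = curl_init($url);
    $headers = $etag !== null ? ['If-None-Match: ' . $etag] : [];
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_HTTPHEADER     => $headers,
        CURLOPT_TIMEOUT        => 10,
    ]);
    $body   = curl_exec($ch);
    $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);

    return ($status === 304 || $body === false) ? null : $body; // null = nothing new
}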

Restricting randomly generated email addresses on site registration [closed]

I want to stop users from registering on my site with randomly generated throwaway email addresses, for example from Mailinator.com.
How can I restrict those emails when a user registers with one of them?
Notice that Mailinator has many different domain names. You should check where the A or MX records of the domain part resolve to in order to filter Mailinator effectively. Notice that doing this will also cause me not to use your service:
% host mailinator.com
mailinator.com has address 207.198.106.56
mailinator.com mail is handled by 10 mailinator.com.
% host suremail.info
suremail.info has address 207.198.106.56
suremail.info mail is handled by 10 suremail.info.
So effectively you'd want your blacklist to block by all of these (a sketch follows below):
- the domain part of the address
- the A record of the domain
- the A record of the highest-priority MX record of the domain
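A minimal sketch of that three-part check in plain PHP (dns_get_record and gethostbyname are standard PHP functions; the blocked lists here only contain the Mailinator entries from the output above):

// Check the domain part directly, then resolve it to its A record and to the
// A record of its highest-priority MX host, and compare against blocked IPs.
function isBlacklistedEmail(string $email, array $blockedDomains, array $blockedIps): bool {
    $at = strrchr($email, '@');
    if ($at === false) {
        return true; // malformed address, reject outright
    }
    $domain = substr($at, 1);

    if (in_array(strtolower($domain), $blockedDomains, true)) {
        return true; // domain part of the address is blacklisted directly
    }

    $ips = [gethostbyname($domain)]; // A record of the domain itself

    $mx = dns_get_record($domain, DNS_MX) ?: [];
    usort($mx, fn ($a, $b) => $a['pri'] <=> $b['pri']); // lowest pri = highest priority
    if (isset($mx[0]['target'])) {
        $ips[] = gethostbyname($mx[0]['target']); // A record of the top MX host
    }

    return count(array_intersect($ips, $blockedIps)) > 0;
}

var_dump(isBlacklistedEmail('foo@suremail.info', ['mailinator.com'], ['207.198.106.56'])); // bool(true), per the lookups above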
There is one more way, but I am not sure whether it will work. There is the PhpBB blacklisted-email list; you can add those addresses to a database table named blacklists
(per CakePHP's model-naming requirement).
Then in the signup function compare the submitted email against that table:
$email = $this->request->data['User']['email'];
// look the submitted address up in the blacklists table
$blacklisted = $this->Blacklist->findByEmail($email);
if ($blacklisted) {
    // the address matches the blacklist, so kick that user out
}
But I am not sure this will work, because programming functions have their own limits.
You can also use preg_match or FILTER_VALIDATE_EMAIL to validate the address before comparing.

Session in "SQL*Net message from client", but is ACTIVE [closed]

I've Googled around, and my impression is that
SQL*Net message from client
suggests the Oracle DBMS is waiting for the client to send new commands to the DBMS, and therefore any time spent in this event should be client-side time and should not consume DB server CPU. In other words, normally, if a session is in this event, it should be "INACTIVE" rather than "ACTIVE".
What's puzzling to us is that starting from this week (after we started using connection pools [we use dbcp]), we occasionally see sessions in the
SQL*Net message from client
event showing "ACTIVE" at the same time, for extended periods of time. And during all this time, CPU usage on the DB is high.
Can anyone shed some light on what this means? If the DB session is waiting for the client to send a message, what can it be "ACTIVE" and consuming CPU cycles for?
If you see this event in the V$SESSION view you need to check the value of the STATE column as well to determine if the session is idle or is in fact working.
This is based on the following Oracle Magazine article:
you cannot look at the EVENT column alone to find out what the session is waiting for. You must look at the STATE column first to determine whether the session is waiting or working and then inspect the EVENT column.
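For instance, a sketch of that check against the standard V$SESSION view (if STATE is 'WAITING' the session really is idle in this wait; any of the 'WAITED ...' values means EVENT is only the last wait and the session is actually on CPU, i.e. working):

-- Show STATE next to EVENT so "idle in this wait" and "working, this was
-- merely the last wait" can be told apart.
SELECT sid, state, event, seconds_in_wait
FROM   v$session
WHERE  event = 'SQL*Net message from client';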
