Best model for processing RSS feeds [closed] - performance

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 9 years ago.
I am creating a podcast website and I was wondering what the best way would be to keep a database of podcast RSS feeds up to date within 15 minutes.
Currently I parse a feed on request and store it in a Redis cache for 15 minutes, but I would prefer to keep a database with all the data (feeds and all episodes).
Would it be better to bake the data by polling all feeds every 15 minutes on a processing server, or to process each feed when it is requested?
If I were to update an RSS feed when requested, I would have to (sketched below):
check database -> check if 15 mins old -> done || parse feed -> check for new episodes -> done || add to database -> done
where done = send data to user.
Any thoughts?
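A minimal sketch of that on-request flow, assuming a phpredis client, the 15-minute TTL from the question, and hypothetical parseFeed()/saveNewEpisodes() helpers (none of these names come from the question):

function getFeed(string $feedUrl, Redis $redis): array
{
    // Illustrative cache key; the 15-minute TTL matches the question.
    $cacheKey = 'feed:' . md5($feedUrl);

    // 1. Check the cache: if the feed is less than 15 minutes old, we are done.
    $cached = $redis->get($cacheKey);
    if ($cached !== false) {
        return json_decode($cached, true);
    }

    // 2. Otherwise parse the feed, store any new episodes, and cache the result.
    $feed = parseFeed($feedUrl);                          // hypothetical parser
    saveNewEpisodes($feed);                               // hypothetical DB upsert
    $redis->set($cacheKey, json_encode($feed), 15 * 60);  // expire after 15 minutes

    return $feed; // "done" = send data to the user
}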

That's a way to do it. There are protocols like PubSubHubbub which can help you avoid polling "dumbly" every 15 minutes... You could also use Superfeedr and just wait for us to send you the data we find in the feeds.

Related

What is the quota per month for the Bing Web Search API? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 8 days ago.
I cannot find any mention of what the actual monthly quota is.
https://learn.microsoft.com/en-us/bing/search-apis/bing-news-search/reference/error-codes
If you look inside the documentation for Bing Web Search API, you can read that:
The service and your subscription type determine the number of queries per second (QPS) that you can make. Make sure your application includes the logic necessary to stay within your quota. If the QPS limit is met or exceeded, the request fails and returns an HTTP 429 status code. The response includes the Retry-After header, which indicates how long you must wait before sending another request.
And if you head over to the Pricing section, you can find the TPS (transactions per second) limit and the limit per month.
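As an illustration of the "stay within your quota" advice, here is a hedged PHP sketch that retries a Bing Web Search request when the service answers 429, waiting for the number of seconds given in the Retry-After header. The endpoint and header names follow the public docs; the function name and retry policy are assumptions.

function bingSearch(string $query, string $apiKey, int $maxAttempts = 3): ?string
{
    $url = 'https://api.bing.microsoft.com/v7.0/search?q=' . urlencode($query);

    for ($attempt = 1; $attempt <= $maxAttempts; $attempt++) {
        $ch = curl_init($url);
        curl_setopt_array($ch, [
            CURLOPT_RETURNTRANSFER => true,
            CURLOPT_HEADER         => true, // keep headers so we can read Retry-After
            CURLOPT_HTTPHEADER     => ['Ocp-Apim-Subscription-Key: ' . $apiKey],
        ]);
        $response   = curl_exec($ch);
        $status     = curl_getinfo($ch, CURLINFO_HTTP_CODE);
        $headerSize = curl_getinfo($ch, CURLINFO_HEADER_SIZE);
        curl_close($ch);

        if ($response === false) {
            return null; // transport error, nothing to retry on
        }
        if ($status !== 429) {
            return substr($response, $headerSize); // return the body on success (or non-quota errors)
        }

        // Quota exceeded: honour Retry-After before trying again (default to 1 second).
        $delay = 1;
        if (preg_match('/^Retry-After:\s*(\d+)/mi', substr($response, 0, $headerSize), $m)) {
            $delay = (int) $m[1];
        }
        sleep($delay);
    }

    return null; // still throttled after $maxAttempts attempts
}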

Laravel: how to deal with 120 calls to external APIs? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 4 months ago.
I am trying to figure out how to handle 120 calls to paid APIs.
1 - Should I store all the responses in the DB and fetch them from the DB according to the connected user?
2 - Should I store all the JSON responses in a folder and fetch them according to the connected user?
I am confused about the right way to deal with this.
When a user has a valid subscription, the calls to the external APIs will be made as a scheduled job.
What you can do is cache the response you get from the paid API.
$value = Cache::remember('cache-key', $timeToLiveInSeconds, function () {
    // send the request to the paid API and return the data
});
Check out the official docs:
https://laravel.com/docs/9.x/cache#retrieving-items-from-the-cache
By default, the cache driver is file; you can switch to Redis or Memcached if need be.
The two things you need to understand are the cache key and the time to live in seconds.
Cache key: this is the key Laravel uses to look up the cached data, so if the request depends on, say, the logged-in user, you can include the user id in the key.
Time to live in seconds: this is how long the data should be cached, so you need to know how often the paid API changes in order not to keep stale data for too long.
When you try to send a request, Laravel first checks whether the data exists in the cache and has not expired. If it is still valid, the cached data is returned; otherwise the closure runs, the paid API is called, and the result is cached and returned.
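For example, a hedged sketch of caching the paid-API response per user for one hour (PaidApiClient::fetchFor() is a hypothetical wrapper around the external call, not part of the original answer):

use Illuminate\Support\Facades\Cache;

// Cache per user so each connected user gets their own entry; 3600 seconds = 1 hour.
$data = Cache::remember('paid-api:user:' . $user->id, 3600, function () use ($user) {
    return PaidApiClient::fetchFor($user); // hit the paid API only on a cache miss
});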

Polling two APIs in a single infinite for loop but with different delays [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 10 months ago.
I have two APIs on a server. Let's say API A and API B. I want to call API A every 3 seconds and API B every 200 seconds. I have coded the program in the following structure:
Main: handles authentication and the API calls.
API-A: calls API A and processes its data.
API-B: calls API B and processes its data.
Can anyone tell me how I can implement both API calls in a single program (Main)? Right now I run a single for loop for API A that sleeps for 3 seconds; now I want to fit in API B with its own sleep interval.
I want both of them to run simultaneously, each with its own interval, inside the one program (Main), since it handles authentication and I don't want to make two separate programs for these two APIs.
You can set up two timers and wait for events on both channels in a loop.
// Fire every 3 seconds for API A and every 200 seconds for API B.
aTicker := time.NewTicker(time.Second * 3)
bTicker := time.NewTicker(time.Second * 200)
defer aTicker.Stop()
defer bTicker.Stop()

// Block until either ticker fires, then call the matching API and wait again.
for {
    select {
    case <-aTicker.C:
        callApiA()
    case <-bTicker.C:
        callApiB()
    }
}

Locking a concert ticket to prevent other users from booking it in Laravel [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 1 year ago.
I am working on a ticketing system using Laravel. Are there any known techniques to prevent double bookings in a ticketing system?
Scenario
Ticket A has only 6 tickets available. User A comes in and adds 4 to their basket. I intend to make an API call to the database to deduct 4 from the available ticket number and then start a 10-minute timer within which User A must complete payment, or else the tickets will be added back to the database.
However, a big flaw in my method is this: if the user simply closes the window, I have no way of checking the elapsed time to re-add the tickets. Any ideas or other known techniques that I can make use of?
I already took a look at this question but still run into the same issue/flaw.
Locking while accessing the model will solve most of your worries, and don't let core business logic be enforced on the front end.
Use database transactions to ensure that only one row is modified at a time, and check that the requested number of tickets is available or fail otherwise. This can produce database locks, which should be handled for a better user experience. Nothing is written to the database until the transaction completes without errors.
Throwing an exception cancels the operation and keeps it atomic.
$ticketsToBeBought = 4;

DB::transaction(function () use ($ticketsToBeBought) {
    // Lock the row so no other transaction can read-modify-write it concurrently.
    $ticket = Ticket::where('id', 'ticket_a')->lockForUpdate()->firstOrFail();

    $availableTickets = $ticket->tickets_available;
    $afterBuy = $availableTickets - $ticketsToBeBought;
    if ($afterBuy < 0) {
        // The exception aborts the whole transaction, so nothing is written.
        throw new NoMoreTicketsException();
    }

    $ticket->tickets_available = $afterBuy;
    $ticket->save();

    // create ticket models or similar for the sale
});
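For completeness, a hedged sketch of how a caller might handle the exception; bookTickets() and the error response are assumptions, not part of the original answer:

try {
    // Hypothetical wrapper around the transaction shown above.
    $this->bookTickets('ticket_a', 4);
} catch (NoMoreTicketsException $e) {
    // Nothing was written: the transaction rolled back when the exception was thrown.
    return back()->withErrors(['tickets' => 'Not enough tickets available.']);
}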
This is a fairly simple approach to a very complex problem that big companies normally tackle, but I hope it gets you going in the right direction; this is my approach to never overselling tickets.

Laravel: which is better, dispatching a single job for multiple emails or a separate job for each email? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 2 years ago.
I am working on a task in which I have to send thousands of emails in Laravel. My question: is it better to create a single job that sends all the emails, or to create a job that sends a single email and dispatch it multiple times?
If you have a million emails, a single job may quickly run out of memory or time out. It is better to send each email in its own job, for two reasons (a dispatch sketch follows the list):
There is less chance of memory exhaustion and timeouts.
If there is a failure, e.g. a wrong email address or a network error, you can isolate the failed email via its failed job and take the necessary action on it.
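A minimal sketch of dispatching one queued job per recipient, assuming a hypothetical SendNewsletterEmail job class and an App\Models\User model (neither name comes from the question):

use App\Jobs\SendNewsletterEmail; // hypothetical job class
use App\Models\User;

// Walk the users table in chunks so the dispatching process itself stays memory-friendly.
User::query()->chunkById(500, function ($users) {
    foreach ($users as $user) {
        // Each email gets its own job, its own retries, and its own failed_jobs record.
        SendNewsletterEmail::dispatch($user);
    }
});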
