I've Googled around, and my impression is that
SQL*Net message from client
indicates the Oracle DBMS is waiting for the client to send a new command. Any time spent in this event should therefore be client-side time and should not consume DB server CPU. In other words, a session in this event should normally be "INACTIVE" rather than "ACTIVE".
What's puzzling to us is that starting from this week (after we started using connection pools [we use DBCP]), we occasionally see sessions in the
SQL*Net message from client
event that show "ACTIVE" at the same time, for extended periods. And during all this time, CPU usage on the DB is high.
Can anyone shed some light on what this means? If the DB session is waiting for the client to send a message, why would it be "ACTIVE" and consuming CPU cycles?
If you see this event in the V$SESSION view, you need to check the value of the STATE column as well to determine whether the session is idle or is in fact working.
This is based on the following Oracle Magazine article:
you cannot look at the EVENT column alone to find out what the session is waiting for. You must look at the STATE column first to determine whether the session is waiting or working and then inspect the EVENT column.
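Assuming access to the V$SESSION view, a query along these lines separates the two cases (STATE = 'WAITING' means the session really is idle in the event; any other STATE value means EVENT is only the last wait and the session is currently working):

```sql
-- Which sessions sitting in this event are idle vs. actually working?
SELECT sid,
       state,            -- 'WAITING' = idle; otherwise EVENT is just the last wait
       event,
       seconds_in_wait
FROM   v$session
WHERE  event = 'SQL*Net message from client';
```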
I cannot find any mention of what the actual monthly quota is.
https://learn.microsoft.com/en-us/bing/search-apis/bing-news-search/reference/error-codes
If you look inside the documentation for Bing Web Search API, you can read that:
The service and your subscription type determine the number of queries per second (QPS) that you can make. Make sure your application includes the logic necessary to stay within your quota. If the QPS limit is met or exceeded, the request fails and returns an HTTP 429 status code. The response includes the Retry-After header, which indicates how long you must wait before sending another request.
And if you head over to the Pricing section, you can find the TPS (transactions per second) limit and the limit per month.
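If you do hit the limit, the quoted paragraph tells you how to recover: wait the number of seconds given in Retry-After and try again. A minimal, transport-agnostic Python sketch (the do_request callable and its (status, headers, body) shape are assumptions for illustration, not part of the Bing API):

```python
import time

def fetch_with_retry(do_request, max_retries=3, sleep=time.sleep):
    """Call do_request() until it stops returning 429, honoring Retry-After.

    do_request must return (status_code, headers, body); headers is a dict.
    """
    for attempt in range(max_retries + 1):
        status, headers, body = do_request()
        if status != 429:
            return status, body
        if attempt == max_retries:
            break
        # The Retry-After header says how many seconds to wait before retrying
        sleep(int(headers.get("Retry-After", "1")))
    raise RuntimeError("still throttled after retries")
```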
I am trying to figure out how to work with the 120 calls allowed on a paid API:
1 - Should I store all responses in a DB and serve them from the DB according to the connected user?
2 - Should I store all JSON responses in a folder and serve them according to the connected user?
I am confused about which way to deal with this.
When a user has a valid subscription, calls will be made to the external APIs as a scheduled job.
What you can do is cache the response you get from the paid API.
$seconds = 3600; // time to live: tune this to how often the paid API's data changes

$value = Cache::remember('cache-key', $seconds, function () {
    // send the request to the paid API and return the data to be cached
});
Check out the official docs
(https://laravel.com/docs/9.x/cache#retrieving-items-from-the-cache).
By default the cache driver is file; you can switch to redis or memcached if need be.
Now what you need to understand are the cache key and the time to live in seconds.
Cache key: this is the key Laravel associates the cached data with, so if the response depends on, say, the logged-in user, you can use the user ID as part of the key.
Time to live in seconds: this tells Laravel how long the data should stay cached. You have to know how often the paid API's data changes so that you do not keep stale data for too long.
Now when you try to send a request, Laravel first checks whether the data exists in the cache and has not yet expired. If so, it returns the cached data; otherwise it sends the request to the paid API, caches the result, and returns it.
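The check-then-fetch behaviour described above can be sketched in a few lines of framework-neutral Python (the dictionary stands in for Laravel's cache store; all names are illustrative):

```python
import time

_cache = {}  # key -> (expires_at, value)

def remember(key, ttl_seconds, fetch):
    """Return the cached value for key, or call fetch() and cache it for ttl_seconds."""
    now = time.time()
    entry = _cache.get(key)
    if entry is not None and entry[0] > now:   # present and not yet expired
        return entry[1]
    value = fetch()                            # e.g. the request to the paid API
    _cache[key] = (now + ttl_seconds, value)
    return value
```

Keying by user ID means each user spends at most one paid-API call per TTL window, no matter how often they refresh.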
I am working on a ticketing system using Laravel. Are there any known techniques to prevent double bookings in a ticketing system?
Scenario
Ticket A has only 6 tickets available, User A comes in and adds 4 into their basket. I intend to make an API call to the database to deduct 4 from the available ticket number and then start a 10 minute timer within which User A must complete payment or else the tickets will be added back to the database.
However, a big flaw in my method is this: if the user simply closes the window, I have no way of checking the elapsed time to re-add the tickets back. Any ideas or other known techniques that I can make use of?
I already took a look at this question but still run into the same issue/flaw.
Locking while accessing the model will solve most of your worries; don't let core business logic be enforced on the front end.
Use a database transaction to ensure only one request modifies the row at a time, and check that the requested number of tickets is available or else fail. This can produce database locks, which should be handled for a better user experience. Nothing is written to the database unless the transaction completes without errors.
Throwing the exception cancels the operation and keeps it atomic.
$ticketsToBeBought = 4;

DB::transaction(function () use ($ticketsToBeBought) {
    // lock the row so concurrent buyers wait for this transaction to finish
    $ticket = Ticket::where('id', 'ticket_a')->lockForUpdate()->firstOrFail();

    $afterBuy = $ticket->tickets_available - $ticketsToBeBought;

    if ($afterBuy < 0) {
        throw new NoMoreTicketsException();
    }

    $ticket->tickets_available = $afterBuy;
    $ticket->save();

    // create ticket models or similar for the sale
});
This is a fairly simple approach to a very complex problem that big companies normally tackle, but I hope it gets you going in the right direction; this is my approach to never overselling tickets.
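The window-close flaw from the question can be handled without trusting the client at all: store an expiry timestamp with every hold and reclaim expired holds lazily (or from a scheduled job) instead of relying on a client-side timer. A framework-neutral Python sketch; the class and method names are mine, not Laravel's:

```python
import time

class HoldStore:
    """In-memory sketch of ticket holds with expiry, so abandoned baskets are reclaimed."""

    def __init__(self, available):
        self.available = available
        self.holds = {}  # hold_id -> (quantity, expires_at)

    def hold(self, hold_id, quantity, ttl_seconds=600):
        """Reserve tickets for ttl_seconds (the 10-minute payment window)."""
        self.reap()
        if quantity > self.available:
            raise ValueError("not enough tickets available")
        self.available -= quantity
        self.holds[hold_id] = (quantity, time.time() + ttl_seconds)

    def reap(self):
        """Return expired holds to the pool; run on each request or from a scheduled job."""
        now = time.time()
        for hold_id, (quantity, expires_at) in list(self.holds.items()):
            if expires_at <= now:
                self.available += quantity
                del self.holds[hold_id]

    def confirm(self, hold_id):
        """Payment completed within the window: the hold becomes a sale."""
        del self.holds[hold_id]
```

Because expiry is checked on the server side, it makes no difference whether the user closes the window; the tickets come back as soon as the hold lapses and anything touches the store.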
What is event-driven programming, and does event-driven programming have anything to do with threading? I came to this question while reading about servers and how they handle user requests and manage data. When a user sends a request, the server begins to process data and writes the state to a table. Why is that so? Does the server stop processing data for that user and start processing data for another user, or does processing for every user run in a different thread (a multithreaded server)?
Event-driven programming != threaded programming, but they can (and should) overlap.
Threaded programming is used when multiple actions need to be handled by a system "simultaneously." I use simultaneously loosely, as most OSes use a time-sharing model for threaded activity, or at least they do when there are more threads than processors available. Either way, not germane to your question.
I would use threaded programming when I need an application to do two or more things at once - like receiving user input from a keyboard (thread 1) and running calculations based upon the received input (thread 2).
Event-driven programming is a little different, but in order for it to scale, it must utilize threaded programming. I could have a single thread that waits for an event/interrupt and then processes things on the event's occurrence. If it were truly single-threaded, any additional events coming in would be blocked or lost while the first event was being processed. If I had a multi-threaded event-processing model, additional threads would be spun up as events came in. I'm glossing over the producer/worker mechanisms required, but again, not germane to the level of your question.
Why does a server start processing / storing state information when an event is received? Well, because it was programmed to. :-) State handling may or may not be related to the event processing; it is a separate subject, just like events are different from threads.
That should answer all of the questions you raised. Jonny's first comment / point is worth heeding - being more specific about what you don't understand will get you better answers.
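The single-threaded versus multi-threaded event handling described above can be sketched in Python (names are illustrative; a None event is used as a stop sentinel):

```python
import queue
import threading

events = queue.Queue()
handled = []
lock = threading.Lock()

def handle(event):
    with lock:
        handled.append(event)

def run_single_threaded():
    """One thread: each event must finish before the next is even looked at."""
    while True:
        event = events.get()
        if event is None:          # sentinel: stop the loop
            break
        handle(event)

def run_with_worker_threads():
    """The loop only dispatches; worker threads do the actual processing."""
    workers = []
    while True:
        event = events.get()
        if event is None:
            break
        t = threading.Thread(target=handle, args=(event,))
        t.start()
        workers.append(t)
    for t in workers:
        t.join()
```

In the first model a slow handler blocks every later event; in the second, the dispatch loop stays free to accept new events while workers run.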
I am creating a podcast website, and I was wondering what would be the best way to keep a database up to date with the podcast RSS feeds to within 15 minutes.
Currently I parse a feed on request and store the result in a Redis cache for 15 minutes. But I would prefer to keep a database with all the data (feeds and all episodes).
Would it be better to bake the data by hitting every feed each 15 minutes on a processing server, or to process the feeds only when requested?
If I were to update rss feed when requested I would have to:
check database -> less than 15 mins old? -> done || parse feed -> no new episodes? -> done || add to database -> done
where done = send data to user.
Any thoughts?
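The check-then-parse flow above can be sketched in Python (fetch_feed is a placeholder for the real RSS download and parse; the dictionary stands in for the database):

```python
import time

FRESH_SECONDS = 15 * 60

db = {}  # feed_url -> {"fetched_at": timestamp, "episodes": [...]}

def fetch_feed(url):
    # placeholder for the real RSS fetch + parse (hypothetical)
    return ["episode-1"]

def get_feed(url, now=None):
    """Serve from the database unless the stored copy is older than 15 minutes."""
    now = time.time() if now is None else now
    entry = db.get(url)
    if entry and now - entry["fetched_at"] < FRESH_SECONDS:
        return entry["episodes"]          # still fresh: no network hit
    episodes = fetch_feed(url)            # stale or missing: re-parse the feed
    db[url] = {"fetched_at": now, "episodes": episodes}
    return episodes
```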
That's one way to do it. There are protocols like PubSubHubbub which can help you avoid polling "dumbly" every 15 minutes... You could also use Superfeedr and just wait for us to send you the data we find in the feeds.