One of my API routes is associated with a Laravel controller that returns an image stored on AWS S3.
I have a function that looks like this:
public function getImage($params) {
    // ... $image is fetched from the database

    // Stream the file from the S3 disk back to the client.
    return Storage::disk('s3')->response("some_path/".$image->filename);
}
This code works fine when I'm requesting a few images, but when I use it inside a list that can be scrolled very quickly, some of the requests fail. What am I doing wrong?
Because you are quickly scrolling and populating your list, a lot of requests are being made to your server.
Laravel has a throttle middleware enabled by default on its API routes to mitigate abuse.
In your case you are hitting the throttle limit, resulting in 429 error codes.
Your PHP code is correct; your front-end code should be less greedy when fetching images.
Alternatively, you could raise the allowed throttle limit in Laravel, or remove it altogether, but I wouldn't recommend removing it.
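If you do decide to raise the limit, a minimal sketch of what that might look like in a Laravel 9/10-style RouteServiceProvider (the figure of 120 requests per minute is only an example, not a recommendation):

// app/Providers/RouteServiceProvider.php
use Illuminate\Cache\RateLimiting\Limit;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\RateLimiter;

protected function configureRateLimiting()
{
    // The default "api" limiter allows 60 requests per minute; raise it here.
    // Per route, the equivalent is the 'throttle:120,1' middleware.
    RateLimiter::for('api', function (Request $request) {
        return Limit::perMinute(120)->by($request->user()?->id ?: $request->ip());
    });
}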
I am trying to figure out how to handle multiple (120) calls to a paid API.
1 - Should I store all the responses in a database and serve them from the DB according to the connected user?
2 - Should I store all the JSON responses in a folder and read them according to the connected user?
I am confused about how to deal with this.
When a user has a valid subscription, the calls to the external APIs will be made as a scheduled job.
What you can do is cache the response you get from the paid API.
$value = Cache::remember('cache-key', $timeToLiveInSeconds, function () {
    // Send the request to the paid API and return the data.
});
Check out the official docs:
https://laravel.com/docs/9.x/cache#retrieving-items-from-the-cache
By default the cache driver is file; you can switch to Redis or Memcached if need be.
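For example, in your .env file (CACHE_DRIVER is the variable name up to Laravel 10; Laravel 11 renamed it to CACHE_STORE):

CACHE_DRIVER=redis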
Now what you need to understand are the cache key and the time to live in seconds.
Cache key: this is the key Laravel uses to look up the cached data, so if the response depends on, say, the logged-in user, you can include the user ID in the key.
Time to live in seconds: this tells Laravel how long the data should be cached. You need to know how often the paid API's data changes so that you don't keep stale data around for too long.
Now when you call Cache::remember, Laravel first checks whether the data exists in the cache and is still within its time to live. If it is, the cached data is returned; otherwise the closure runs, the paid API request is sent, and the result is cached and returned.
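For example, caching per user from the scheduled job could look roughly like this (the key format, the 30-minute TTL, and the paid-API endpoint and token config are all hypothetical placeholders):

use Illuminate\Support\Facades\Cache;
use Illuminate\Support\Facades\Http;

// Cache the paid-API response per user; pick a TTL that matches how often
// the paid API's data actually changes.
$data = Cache::remember('paid-api:user:'.$user->id, now()->addMinutes(30), function () use ($user) {
    return Http::withToken(config('services.paid_api.token')) // hypothetical config key
        ->get('https://api.example.com/v1/data/'.$user->id)   // hypothetical endpoint
        ->throw()
        ->json();
});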
I am working on a ticketing system using Laravel. Are there any known techniques to prevent double bookings in a ticketing system?
Scenario
Ticket A has only 6 tickets available. User A comes in and adds 4 to their basket. I intend to make an API call to the database to deduct 4 from the available ticket count and then start a 10-minute timer within which User A must complete payment, or else the tickets will be added back to the database.
However, a big flaw in my method is this: if the user simply closes the window, I have no way of checking the elapsed time so that I can add the tickets back. Any ideas or other known techniques I can make use of?
I already took a look at this question but still ran into the same issue/flaw.
Locking while accessing the model will solve most of your worries, and don't let core business logic be enforced only on the front end.
Use a database transaction to ensure that only one request modifies the row at a time, and check that the requested ticket amount is available or else fail. This can produce database locks, which should be handled for a better user experience. Nothing will be written to the database unless the transaction completes without errors.
Throwing the exception cancels the operation and keeps it atomic.
$ticketsToBeBought = 4;

DB::transaction(function () use ($ticketsToBeBought) {
    // Lock the row so concurrent checkouts cannot read a stale count.
    $ticket = Ticket::where('id', 'ticket_a')->lockForUpdate()->firstOrFail();

    $availableTickets = $ticket->tickets_available;
    $afterBuy = $availableTickets - $ticketsToBeBought;

    if ($afterBuy < 0) {
        throw new NoMoreTicketsException();
    }

    $ticket->tickets_available = $afterBuy;
    $ticket->save();

    // Create ticket models or similar for the sale.
});
This is a fairly simple approach to a very complex problem that big companies normally tackle, but I hope it gets you going in the right direction; this is my approach to never overselling tickets.
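As a usage sketch, the code that calls this transaction could surface the exception to the user instead of letting it bubble up as a server error (the reserveTickets() wrapper and the error message are hypothetical):

try {
    // Hypothetical wrapper around the transaction shown above.
    $this->reserveTickets('ticket_a', $ticketsToBeBought);
} catch (NoMoreTicketsException $e) {
    return back()->withErrors(['tickets' => 'Not enough tickets are available.']);
}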
I'm refactoring a monolith into microservices. I am not clear on data responsibility and access with microservices. From what I have read, we should take vertical slices.
So each service should be responsible for its own UI/WebAPI/DB, with distinct responsibility.
For example if I had a monolith shopping cart app, I could break it into the following services:
CustomerAccount
ProductSearch
ProductMaintenance
ShoppingCart
Ordering
What do I do with shared data, how do I determine what part of the system is responsible for it?
e.g. In my shopping cart example...
The CustomerAccount, ShoppingCart and Ordering services need to know about customer data.
The ProductSearch, ProductMaintenance, ShoppingCart and Ordering services need to know about product data.
The Ordering service will update the number of products available, but so should ProductMaintenance.
So should the services send messages back and forth to get data from one another,
or should there be a master service, which handles the communication/workflow between services
or should they read/write from a common database
or something else?
This may be a little late to answer, but it could be useful for future readers.
A microservice calling another microservice is totally fine. What you should be aware of is that if the communication between microservices becomes too chatty, you should look at a different solution (maybe duplicating the data across services, or keeping it within the same service).
In your case I would build a separate service for each entity that you consider common and re-evaluate the situation afterwards.
Hope this helps
Best regards,
Burim
I want to prevent users from registering on my site with randomly generated disposable email addresses, for example from Mailinator.com.
How can I block those emails when a user registers with them?
Notice that Mailinator has many different domain names. You should check where the A or MX records of the domain-name part resolve to in order to filter Mailinator effectively. Note that it will also cause me not to use your service:
% host mailinator.com
mailinator.com has address 207.198.106.56
mailinator.com mail is handled by 10 mailinator.com.
% host suremail.info
suremail.info has address 207.198.106.56
suremail.info mail is handled by 10 suremail.info.
So effectively you'd want your blacklist to block on all of these (a sketch follows after the list):
- the domain part of the address
- the A record of the domain
- the A record of the highest-priority MX record of the domain
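A rough PHP sketch of that check (the lists of blocked IPs and domains are illustrative and would need to be maintained):

// Resolve the domain part of the address and compare the domain's A record and
// the highest-priority MX target's A record against a blacklist of known hosts.
function isDisposable(string $email, array $blockedIps, array $blockedDomains): bool
{
    $domain = strtolower(substr(strrchr($email, '@'), 1));
    if (in_array($domain, $blockedDomains, true)) {
        return true;
    }

    // A records of the domain itself.
    $aRecords = dns_get_record($domain, DNS_A) ?: [];

    // A records of the highest-priority MX host.
    $mxRecords = dns_get_record($domain, DNS_MX) ?: [];
    usort($mxRecords, fn ($a, $b) => $a['pri'] <=> $b['pri']);
    if (!empty($mxRecords)) {
        $aRecords = array_merge($aRecords, dns_get_record($mxRecords[0]['target'], DNS_A) ?: []);
    }

    foreach ($aRecords as $record) {
        if (in_array($record['ip'], $blockedIps, true)) {
            return true;
        }
    }

    return false;
}

// Example with the addresses from the lookups above:
// isDisposable('someone@suremail.info', ['207.198.106.56'], ['mailinator.com']); // true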
There is one more way, but I am not sure whether it will work. This is a link to the phpBB blacklisted email list; you can add those domains to a database table named blacklists
(following CakePHP's model naming requirements).
Then, in the signup function, compare both emails:
$mailchk    = $this->request->data['User']['email'];
$mailexists = $this->request->data['Blacklist']['email'];

Compare both emails, and if they match, reject that user.
But this is just an idea; I am not sure whether it will work, because programming functions have their own limits.
You can use preg_match or FILTER_VALIDATE_EMAIL to compare/validate both values.
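A rough sketch of that comparison in a CakePHP 2-style signup action (the Blacklist model and its email column are assumptions; they would hold one blocked address or domain per row):

$email  = strtolower($this->request->data['User']['email']);
$domain = substr(strrchr($email, '@'), 1);

// Count blacklist rows matching either the full address or just its domain.
$blocked = $this->Blacklist->find('count', array(
    'conditions' => array('Blacklist.email' => array($email, $domain)), // becomes an IN (...) clause
));

if ($blocked > 0) {
    // Reject the registration, e.g. flag a validation error on the email field.
    $this->User->invalidate('email', 'Please use a non-disposable email address.');
}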
We have functionality on our registration form that uses an AJAX call to check whether a username is available.
It's quite straightforward:
Make a call to our service
Check the username against the database
If a record of the username is found, return "taken"; otherwise return "available".
We execute the call to our service once a user stops typing for a couple of seconds.
Our problem, however, is that an attacker could brute-force our service and compile a list of all our usernames.
Does anyone know of any good ways to help prevent this sort of "attack"?
The only one I could think of was asking for a Captcha up front, but that wouldn't be a good user experience and might put people off filling out our form.
If it helps at all, we're using ASP.NET MVC, C#, SQL Server.
Any help would be greatly appreciated, thanks!
I suppose the best way is to rate limit it, either by allowing a user only a certain number of requests or by adding a 0.5-1 second delay to each request. By doing either of those, it becomes much harder for an attacker to enumerate a decent number of usernames in a reasonable amount of time.
I think a better way of securing your application, however, would be to treat it as if everyone already has a list of your users and work from there. Assuming an attacker knows all your users, how would you protect against brute-force attacks? By rate limiting password attempts. By allowing only a few password attempts per 10 minutes or so, you will secure your application's users substantially.
Personally I believe that all passwords that are non-obvious (such as "password" and "qwerty") ought to be secure - for example, "soccerfan" should be a secure password. Why? Because you aren't going to guess "soccerfan" immediately. It will maybe be 100th or so in a brute-forcer's dictionary, and by the time they have attempted to log in anywhere near that many times, they should be banned and the user should have been notified. (By the way, I'm not suggesting people should use such passwords; the more complex the better.)
You could check that the AJAX request has come from the same origin, put some sort of throttle on it, and also sign the request.
By throttling, we mean, for example, that one IP address is allowed a maximum of 10 requests per day.
Another approach is to have the client compute something that takes a few seconds.
On the server side, the computed value can be checked with very little CPU time, and the request is only processed if the result computed by the client is correct.
Such schemes are essentially client puzzles or proof-of-work: expensive for the client to compute, but cheap for the server to verify.
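For illustration only (the question is ASP.NET/C#, but the idea is language-agnostic), a minimal server-side verification could look like the following sketch, where the hash choice and difficulty are arbitrary assumptions:

// The client must find a nonce such that sha256(challenge . nonce) starts with
// $difficulty zero hex digits; finding it takes many attempts, checking it takes one hash.
function verifyClientPuzzle(string $challenge, string $nonce, int $difficulty = 5): bool
{
    $hash = hash('sha256', $challenge . $nonce);
    return strncmp($hash, str_repeat('0', $difficulty), $difficulty) === 0;
}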