I have some simple websites (not Laravel applications) with forms where people can enter their postal code and house number, and the street and city fields are automatically filled in with the associated information. To accomplish this I make an API call with an Ajax request to my Laravel application, which returns the associated street and city. My Laravel application then calls a third-party API which costs me around € 0.01 per request.
Now I want to prevent unwanted and unauthorized access to my Laravel API calls, because each call costs me money. At the moment it is very easy to replicate such calls, and someone with bad intentions could write a script that performs thousands of calls per minute.
So my question is: how can I prevent unwanted and unauthorized API calls? I have already read about Sanctum and Passport, but from what I read these only apply to authenticated users. And using a token in the request header seems pointless, because anybody with a little knowledge can trace the token and reuse it.
Note that the people who fill in the forms can be anyone; they don't have an account.
There are probably many approaches. A simple but effective one would be sessions: you can save the user in a session and count their API accesses there. As soon as the count exceeds the allowed limit, you block their requests and record the block in the session as well. But pay attention to the session duration; it must be long enough.
A user with bad intentions can simply get a new session, though. To counter this, you can also put their IP on an internal blacklist for a day.
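A minimal sketch of that session counter in plain PHP (the endpoint name and the limit of 20 calls are just for illustration):

```php
<?php
// lookup.php - hypothetical endpoint in front of the paid third-party API
session_start();

// Count this session's API accesses
$_SESSION['api_calls'] = ($_SESSION['api_calls'] ?? 0) + 1;

// Block the session once it exceeds the allowed number of calls,
// and remember the block in the session itself
if (($_SESSION['blocked'] ?? false) || $_SESSION['api_calls'] > 20) {
    $_SESSION['blocked'] = true;
    http_response_code(429); // Too Many Requests
    exit('Rate limit exceeded');
}

// ... perform the paid postal code lookup here ...
```

Since the endpoint is a Laravel route anyway, the framework's built-in throttle middleware (e.g. Route::middleware('throttle:20,1')) gives you per-client rate limiting without writing any of this yourself.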
Note: an open API is always a point of attack, though.
I need to implement PayPal payment in my Laravel site. I was going for the server-side integration, in order to save all the data and transactions and know what people actually bought. But it turns out that [the older] server-side integration method [that I was looking at] is archived and no longer the preferred method. Instead, they suggest using the Smart Buttons, with front-end integration only.
Questions:
1. Is front-end-only integration safe? What prevents the user from messing with the JavaScript and editing the sum to whatever they want?
2. How do I know what they ordered if it is all front end?
3. What would I have to do if the paid sum does not correspond to the articles in the cart?
4. What should I be aware of with this system?
Smart Payment Buttons can be used with or without a server-side component.
Here is the front-end pattern that communicates with a server-side integration: https://developer.paypal.com/demo/checkout/#/pattern/server
Notice the fetches to two '/demo/...' placeholder endpoints, which need to be replaced with actual routes of yours. The first should create an order via the v2 Orders API and return the orderID; the second should capture that order after the payer approves it via the Smart Payment Buttons.
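To make that concrete, here is a rough sketch of what those two routes could look like in a Laravel app, using PayPal's v2 Orders API (the config keys, route names, and amount are made up for illustration, and error handling is omitted):

```php
// routes/web.php - rough sketch, not production code
use Illuminate\Support\Facades\Http;
use Illuminate\Support\Facades\Route;

// Exchange the REST app's client credentials for an OAuth access token
function paypalToken(): string {
    return Http::asForm()
        ->withBasicAuth(config('services.paypal.id'), config('services.paypal.secret'))
        ->post('https://api-m.sandbox.paypal.com/v1/oauth2/token', [
            'grant_type' => 'client_credentials',
        ])->json('access_token');
}

Route::post('/paypal/create-order', function () {
    // Compute the amount server-side from your own cart data -
    // never trust a total sent by the browser
    $response = Http::withToken(paypalToken())
        ->post('https://api-m.sandbox.paypal.com/v2/checkout/orders', [
            'intent' => 'CAPTURE',
            'purchase_units' => [[
                'amount' => ['currency_code' => 'EUR', 'value' => '10.00'],
            ]],
        ]);

    return response()->json(['id' => $response->json('id')]);
});

Route::post('/paypal/capture-order/{orderId}', function (string $orderId) {
    // Capture after the payer approves the order in the Smart Buttons popup
    $response = Http::withToken(paypalToken())
        ->withBody('{}', 'application/json')
        ->post("https://api-m.sandbox.paypal.com/v2/checkout/orders/{$orderId}/capture");

    return response()->json($response->json());
});
```

The Smart Buttons' createOrder callback then fetches the first route and returns the id, and the onApprove callback fetches the second; that is exactly the wiring the demo page above shows.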
The answers to your questions re: a serverless (client-side only) pattern are:
1. Nothing.
2. Only what you program the JavaScript to tell you (and which it actually, successfully, tells you), or what you read via email or in your PayPal account or app notifications.
3. Refund the transaction.
4. It's for people who don't want to do the work of implementing server-side routes and API calls.
I have about 1 million URI logs of user activity on my network, and I want to know how many of those 1 million are for Facebook, how many are for Twitter, and so on.
It's easy to link URIs like cdn.xyz.twitter.com or platform.twitter.com to Twitter.
However, the problem I'm facing is that I'm unable to link more than 40% of the captured URLs to real websites. A URL like xys.1234.com can be something in Facebook, for example, but there is no visible link between that URL and the facebook.com domain, so it will just be listed as a stand-alone website, which is wrong (or not what I want).
Also, API calls can't easily be linked to their domains either, because some websites may be using Amazon Web Services, and that is what gets logged.
And many of the URIs are generated by ad services; I want to know where an ad came from (on what website or mobile application did the user click on it?).
Here are some snapshots of the URIs so you can see the whole picture:
https://imgur.com/a/2Ocqi
https://imgur.com/a/bmhNv
So you're trying to match up outgoing requests? How do you expect to know that a user who accessed xyz.1234.com did it through Facebook rather than independently by typing the URL into the address bar? Or by clicking a link from some other page? Your log doesn't contain information that tells you which URLs are linked from which page. Without another source of information, you can't be sure.
You could examine the requests for multiple users and infer relationships. That is, if you notice that all (or a majority of) requests to xyz.1234.com occur after a Facebook request, you can infer that the request occurred as a result of a click on a Facebook page. Doing so will require some interesting pattern matching. How well it works will depend on how much data you have to work with, how well you write the pattern matching, and how much time you're willing to let the algorithm run.
There's no simple answer, though. If you don't have data that explicitly says, "this request was made by clicking on a link from Twitter," then you have to either get another source of information or you have to write code that will infer that information.
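To make the inference idea concrete, here is a toy sketch in PHP: it attributes a host to Facebook when most hits to that host arrive shortly after a facebook.com request by the same user. The CSV format, the 10-second window, and the 80% threshold are all made-up assumptions, and the log is assumed to be sorted by timestamp:

```php
<?php
// infer.php - toy relationship inference over a user,timestamp,host CSV log

$lastFacebookHit = [];  // user => timestamp of their last facebook.com request
$total = [];            // host => total number of hits
$afterFacebook = [];    // host => hits arriving shortly after a facebook.com hit

foreach (file('access.csv', FILE_IGNORE_NEW_LINES) as $line) {
    [$user, $ts, $host] = explode(',', $line);
    $ts = (int) $ts;

    if (str_ends_with($host, 'facebook.com')) {
        $lastFacebookHit[$user] = $ts;
        continue;
    }

    $total[$host] = ($total[$host] ?? 0) + 1;
    if (isset($lastFacebookHit[$user]) && $ts - $lastFacebookHit[$user] <= 10) {
        $afterFacebook[$host] = ($afterFacebook[$host] ?? 0) + 1;
    }
}

foreach ($total as $host => $count) {
    if (($afterFacebook[$host] ?? 0) / $count >= 0.8) {
        echo "$host looks Facebook-related\n";
    }
}
```

How trustworthy the output is depends entirely on the thresholds and on how much traffic you have, as noted above.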
I want to implement a tool (a website that can edit a user's own websites) that receives uploads from the browser and stores them in a website specified in the request. However, I want to protect the user from other sites creating requests to my endpoint and doing dirty things with the user's data.
The industry standard for this is to include a randomized token in every rendering of the page, submit it together with the input data, and check the validity of the token on the server side before processing the submitted request.
Is there an automated mechanism for this in the Boomla framework, or is something like this planned?
Implemented, no. Planned, yes.
Currently (v0.9.1), I believe Boomla does check the Referer header, but it stops there. Until that lands, maybe you could implement a cryptographic solution yourself?
How pressing is the issue for you?
Consider that currently, side effects (e.g. sending data) are not possible, so data leaks are not possible; and it won't cause data loss, since we have built-in version control. (We are going to expose a casual version control mechanism that works automatically, without committing, so you'll be backed up even without committing.) Thus, in effect, your users are safe.
Please disagree if you think otherwise.
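For reference, here is a minimal sketch of the token mechanism described in the question, in plain PHP with session storage (Boomla's own APIs will obviously differ; this only illustrates the pattern):

```php
<?php
// form.php - issue a per-session token and embed it in every rendering
session_start();
if (empty($_SESSION['csrf_token'])) {
    $_SESSION['csrf_token'] = bin2hex(random_bytes(32));
}
?>
<form method="post" action="upload.php">
    <input type="hidden" name="csrf_token"
           value="<?= htmlspecialchars($_SESSION['csrf_token']) ?>">
    <!-- ... the actual upload fields ... -->
</form>
```

```php
<?php
// upload.php - refuse any request whose token doesn't match the session
session_start();
if (empty($_SESSION['csrf_token'])
    || !hash_equals($_SESSION['csrf_token'], $_POST['csrf_token'] ?? '')) {
    http_response_code(403);
    exit('Invalid CSRF token');
}
// ... the token matches, safe to process the upload ...
```

The constant-time hash_equals comparison avoids leaking the token through timing differences.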
I am developing a user-generated content site. The goal is that users are rewarded if their content is viewed by a certain number of people. Whereas a user account is required to post content, an account is not required to view content.
I am currently developing the algorithm to count the number of valid views, and I am concerned about the possibility that users create bots to falsely increase their number of views. I would exclude views from the content generator’s IP, but I do not want to exclude valid views from other users with the same external IP address. The same external IP address could in fact account for a large amount of valid views in a college campus or corporate setting.
The site is implemented in Python and hosted on Apache servers. The question is more theoretical in nature: how can I establish whether or not traffic from the same IP is legitimate? I can't find any content management systems that do this, and was just going to implement it myself.
You cannot reliably do this. Any method you create can be automated.
That said, you can raise the bar. For instance, every page viewed can have a random number encoded into a piece of JavaScript that will submit an Ajax request. Any view with the corresponding Ajax request is probably a real browser, and is likely to be a real human, since few bots handle JavaScript correctly. But absolutely nothing stops someone from having an automatic script drive a real browser.
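A minimal sketch of that idea in PHP (file names and storage are made up; a real version would tie the token to the content ID and an expiry):

```php
<?php
// view.php - serve the content and embed a one-time token in the page
session_start();
$token = bin2hex(random_bytes(16));
$_SESSION['pending_view'] = $token;
?>
<script>
  // Only clients that actually execute JavaScript will confirm the view
  fetch('confirm_view.php', {
      method: 'POST',
      headers: {'Content-Type': 'application/x-www-form-urlencoded'},
      body: 'token=<?= $token ?>'
  });
</script>
```

```php
<?php
// confirm_view.php - count the view only if the embedded token comes back
session_start();
if (isset($_SESSION['pending_view'])
    && hash_equals($_SESSION['pending_view'], $_POST['token'] ?? '')) {
    unset($_SESSION['pending_view']); // one-time use
    // ... increment the view counter for the content here ...
}
```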
Well... you can make them log in (through Facebook or Google ID etc., if you don't want to create your own infrastructure). This way it is much easier to track views.
I have searched the Internet, but I can't find the info I'm looking for, so I'm sorry if this is a simple question or has been asked a million times.
I'm developing a website with (probably) a lot of Ajax functions. Now I'm wondering how Facebook does this, for example with the 'Like' button. If I use an Ajax call to a page addLike.php?post_id=1, then a visitor with evil intentions can use this URL to manipulate my DB by passing random values for post_id.
How can I prevent this? Or what's the best way to do this?
First and foremost, calls that mutate data (something stored in the database, files, etc.) should never be accessible via a GET HTTP request, e.g. deleting a post in a forum. That is to say, if you're adding a like to a post, you should use $.post to perform the Ajax request (supposing you're using jQuery).
Also, authentication and authorization should be done before responding to every request. That means if a user wants to add a like to a post, they should be authenticated and also permitted to perform that specific action. Prevailing web frameworks will help you achieve this automatically (with configuration).
Furthermore, you should also prevent XSS attacks through data sanitization. You can google it for more details.
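Putting the first two points together, a rough sketch of what addLike.php could look like (the table layout, session field, and DSN are placeholders):

```php
<?php
// addLike.php - rough sketch of the checks described above
session_start();

// 1. Mutations only via POST, never GET
if ($_SERVER['REQUEST_METHOD'] !== 'POST') {
    http_response_code(405); // Method Not Allowed
    exit;
}

// 2. Authentication and authorization before touching the database
if (empty($_SESSION['user_id'])) {
    http_response_code(401);
    exit;
}

// 3. Validate the input instead of trusting it
$postId = filter_input(INPUT_POST, 'post_id', FILTER_VALIDATE_INT);
if ($postId === false || $postId === null) {
    http_response_code(400);
    exit;
}

// A prepared statement keeps post_id from injecting SQL; a unique key
// on (post_id, user_id) plus INSERT IGNORE keeps likes to one per user
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$stmt = $pdo->prepare('INSERT IGNORE INTO likes (post_id, user_id) VALUES (?, ?)');
$stmt->execute([$postId, $_SESSION['user_id']]);
```

On the front end that becomes a simple $.post('addLike.php', { post_id: 1 });, and the random-value problem goes away because the server only counts likes from logged-in users, once each.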