Bypass reCAPTCHA from a specified origin - recaptcha

We have a page on our site that uses Google's reCAPTCHA before allowing the user to download a file.
It works great and we totally stopped all the evil bots from spamming our servers.
Now we want to allow a specific entity (user, domain, whatever) to be able to automatically download files without solving the challenge. Or maybe solving it once per session (which will be longer than 2 minutes) and not once per file.
Is there some way we can issue them a multi-use token or have them get a token from Google that will allow them (temporary?) unfettered access to our file downloads? Can we whitelist their domain in the Google admin settings?
Or is this something I need to build myself?
EDIT: It turns out I didn't get all the requirements for this assignment. Whitelisting will not satisfy the requirements, since there are apparently multiple entities, and that will indubitably change in the future.

reCAPTCHA does not provide specific whitelisting for users or domains.
Instead, you should look at making this dynamic on your side. For example, disable reCAPTCHA for signed-in users, or generate a token with an expiry time on your server, set it as a cookie on the client, and skip reCAPTCHA for requests that carry a valid token.
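A minimal sketch of that token-plus-cookie idea in Go (the cookie name, handler paths, and signing key below are illustrative assumptions, not anything reCAPTCHA itself provides):

    package main

    import (
        "crypto/hmac"
        "crypto/sha256"
        "encoding/base64"
        "fmt"
        "net/http"
        "strconv"
        "strings"
        "time"
    )

    // secretKey is a hypothetical server-side key; load it from config in practice.
    var secretKey = []byte("replace-with-a-real-secret")

    // signExpiry returns "expiryUnix.signature" so the server can verify it later
    // without storing any state.
    func signExpiry(expiry time.Time) string {
        payload := strconv.FormatInt(expiry.Unix(), 10)
        mac := hmac.New(sha256.New, secretKey)
        mac.Write([]byte(payload))
        return payload + "." + base64.RawURLEncoding.EncodeToString(mac.Sum(nil))
    }

    // issueBypassCookie is called after the client has solved reCAPTCHA once
    // (the verification step itself is omitted here).
    func issueBypassCookie(w http.ResponseWriter, r *http.Request) {
        expiry := time.Now().Add(1 * time.Hour)
        http.SetCookie(w, &http.Cookie{
            Name:     "captcha_ok",
            Value:    signExpiry(expiry),
            Expires:  expiry,
            HttpOnly: true,
            Secure:   true,
        })
        fmt.Fprintln(w, "captcha bypass token issued")
    }

    // hasValidBypass reports whether the request carries an unexpired, correctly
    // signed cookie; if so, the download handler can skip the reCAPTCHA check.
    func hasValidBypass(r *http.Request) bool {
        c, err := r.Cookie("captcha_ok")
        if err != nil {
            return false
        }
        parts := strings.SplitN(c.Value, ".", 2)
        if len(parts) != 2 {
            return false
        }
        unix, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil || time.Now().After(time.Unix(unix, 0)) {
            return false
        }
        mac := hmac.New(sha256.New, secretKey)
        mac.Write([]byte(parts[0]))
        expected := base64.RawURLEncoding.EncodeToString(mac.Sum(nil))
        return hmac.Equal([]byte(expected), []byte(parts[1]))
    }

    func main() {
        http.HandleFunc("/captcha-ok", issueBypassCookie)
        http.HandleFunc("/download", func(w http.ResponseWriter, r *http.Request) {
            if !hasValidBypass(r) {
                http.Error(w, "solve the reCAPTCHA first", http.StatusForbidden)
                return
            }
            fmt.Fprintln(w, "here is your file")
        })
        http.ListenAndServe(":8080", nil)
    }

The point is that only the first download in a session pays the reCAPTCHA cost; subsequent downloads are gated by the signed, expiring cookie instead.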

Related

Debug redirect_url in oauth2 flow

We are using Go server-side code to interact with the Google Ads REST API. Namely, we authenticate with the help of the "golang.org/x/oauth2" package.
In May (and again recently) we got an email from Google regarding the deprecation of the out-of-band (OOB) flow, essentially a rewording of this one. But in addition to the general information, the Google email listed the account we use to authenticate as using the OOB flow and about to be blocked.
We checked our sources and the available sources of the mentioned packages, but were not able to find any redirect URI that is said to be used for the OOB flow, i.e. one of these:
redirect_uri=urn:ietf:wg:oauth:2.0:oob
urn:ietf:wg:oauth:2.0:oob:auto
oob
We explicitly use http://localhost in our code and a long-lived refresh token (which seemingly never expires).
We also tried to use tcpdump to monitor our API calls, but were not able to learn much from it, because the calls are made over HTTPS and are therefore encrypted.
We considered using a man-in-the-middle style proxy like https://www.charlesproxy.com/, but haven't tried it yet, because it is no longer free and because of the complexity of the setup.
We tried to log our requests to the API endpoint with a custom RoundTripper, but have not spotted anything suspicious. It seems that we use the refresh token only, and an exchange of an authorization code for a refresh token simply never happens in the code. Because of this, we don't think that further logging or monitoring with decrypted HTTPS traffic will help (but we are open to suggestions on how to do it better).
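For reference, the kind of logging RoundTripper we mean looks roughly like this (a simplified sketch, not our exact code; the request URL at the end is a placeholder):

    package main

    import (
        "context"
        "log"
        "net/http"

        "golang.org/x/oauth2"
    )

    // loggingTransport logs every outgoing request URL, which is enough to spot
    // an OOB redirect_uri (urn:ietf:wg:oauth:2.0:oob) in an authorization request;
    // for token POSTs the redirect_uri travels in the form-encoded body, so a more
    // thorough version would also dump the body.
    type loggingTransport struct {
        base http.RoundTripper
    }

    func (t *loggingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
        log.Printf("%s %s", req.Method, req.URL.String())
        return t.base.RoundTrip(req)
    }

    func main() {
        // Put the logging client into the context so the oauth2 package uses it
        // for token refreshes as well as for API calls.
        httpClient := &http.Client{Transport: &loggingTransport{base: http.DefaultTransport}}
        ctx := context.WithValue(context.Background(), oauth2.HTTPClient, httpClient)

        conf := &oauth2.Config{} // client ID, secret, endpoint, redirect URL go here
        token := &oauth2.Token{RefreshToken: "stored-refresh-token"}
        client := conf.Client(ctx, token)

        // Placeholder request; every call (including token refreshes) is now logged.
        if _, err := client.Get("https://googleads.googleapis.com/"); err != nil {
            log.Println(err)
        }
    }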
Finally, we decided to create a new OAuth 2 client in the Google console with a fresh set of client ID, client secret, and refresh token. We obtained a new refresh token with oauth2l and replaced the credentials in our configuration. But we are still not sure that the new client will not be blocked by Google due to the OOB deprecation, because it seemingly looks the same as the old one.
Questions:
Why might Google mark our account as using OOB?
How can we ensure that the newly created account will not be blocked?
Same here.
I found an answer that says the "Desktop" type of credentials uses OOB by default. You probably need to create new credentials with the type "Web".
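For what it's worth, with a "Web application" client the redirect URI gets pinned explicitly in code, something like the sketch below (client ID, secret, port, and callback path are placeholders; the scope shown is assumed to be the Google Ads one):

    package main

    import (
        "context"
        "fmt"

        "golang.org/x/oauth2"
        "golang.org/x/oauth2/google"
    )

    func main() {
        // Credentials created in the Google console with type "Web application";
        // all concrete values below are placeholders.
        conf := &oauth2.Config{
            ClientID:     "YOUR_CLIENT_ID",
            ClientSecret: "YOUR_CLIENT_SECRET",
            Endpoint:     google.Endpoint,
            // An explicit loopback/site redirect, never the deprecated
            // urn:ietf:wg:oauth:2.0:oob value.
            RedirectURL: "http://localhost:8080/oauth2/callback",
            Scopes:      []string{"https://www.googleapis.com/auth/adwords"},
        }

        // The authorization URL carries redirect_uri, so printing it is an easy
        // way to confirm that no OOB value is being sent.
        url := conf.AuthCodeURL("state-token", oauth2.AccessTypeOffline)
        fmt.Println("Visit:", url)

        // After the user authorizes, the code arrives at the callback and is
        // exchanged once for a (long-lived) refresh token.
        code := "code-received-at-callback" // placeholder
        tok, err := conf.Exchange(context.Background(), code)
        if err != nil {
            fmt.Println("exchange failed:", err)
            return
        }
        fmt.Println("refresh token:", tok.RefreshToken)
    }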

Secure public api for mobile app with laravel

Hi guys, I have to create a mobile app that needs to make requests to a Laravel endpoint. The app does not require registration or login. What is the best way to protect my API, to make sure that only my application can call it?
Thanks!
There's no foolproof method of securing your API, because with the right tools and by following some tutorials on the web, anyone can view your whole API request: headers, tokens, and so on.
Anything you do or store in the app is already compromised, so signatures, SSL, encryption, tokens, etc. are not that helpful if malicious users have access to the app. They can make things more troublesome for malicious users, but a dedicated one could overcome them.
Using authentication at least forces users to register before they can use your API, and you can block a user when needed. Along with requiring email verification, users who wish to misuse your API would then at least need valid email addresses. But since you mention securing without authentication, this is out of scope.
You can secure your API somewhat by using rate limiting. Laravel has built-in rate limiting with the throttle middleware. You can use this to restrict the number of times the API can be called by an IP address within a particular time interval.
Next would be IP blocking. If any malicious activity is found, you could block the IP address. But this can be overcome with a VPN, and a malicious user could also get someone else's IP blocked in this manner.
A captcha can help against bots, but it would also annoy regular users.
Another method would be restriction with CORS; those who have faced CORS issues know exactly how annoying it can be, but it won't work on native apps (or you could try a PWA).
And in a worst-case scenario you could fall back on terms and conditions and some legal action.
A simple solution: you can create a devices table with an API key generated for each device's copy of the app, always send that key with requests to the API endpoint, and use it to fetch data from the REST API. It is the same process as logging in, except you use the API key instead, and the key is fixed rather than refreshed every time. A rough sketch of that check is shown below.
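(Sketched here in Go rather than Laravel, just to show the shape of it; the in-memory "table", the X-Api-Key header name, and the key format are all assumptions.)

    package main

    import (
        "crypto/rand"
        "encoding/hex"
        "log"
        "net/http"
    )

    // deviceKeys stands in for the devices table; in a real app this would be a
    // database lookup keyed by the stored API key.
    var deviceKeys = map[string]bool{}

    // newDeviceKey generates a random key to store against a device row when the
    // app first registers itself.
    func newDeviceKey() (string, error) {
        buf := make([]byte, 32)
        if _, err := rand.Read(buf); err != nil {
            return "", err
        }
        key := hex.EncodeToString(buf)
        deviceKeys[key] = true
        return key, nil
    }

    // requireAPIKey rejects requests whose X-Api-Key header (an illustrative
    // header name) is not in the devices table.
    func requireAPIKey(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            if !deviceKeys[r.Header.Get("X-Api-Key")] {
                http.Error(w, "unknown device", http.StatusUnauthorized)
                return
            }
            next.ServeHTTP(w, r)
        })
    }

    func main() {
        // Register one device and print its key (in a real app this happens when
        // the app first installs and the key is then stored on the device).
        key, err := newDeviceKey()
        if err != nil {
            log.Fatal(err)
        }
        log.Println("issued device key:", key)

        api := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte(`{"data":"ok"}`))
        })
        http.Handle("/api/data", requireAPIKey(api))
        log.Fatal(http.ListenAndServe(":8080", nil))
    }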

Method(s) for securing embedded images delivered to authenticated users?

As part of an application, my users can create documents with embedded images/files/text etc. Viewing and editing this content requires the user to log in. At the moment, though, the images and files are delivered as permanent links, so if those links are shared, any non-authenticated user can access them forever.
I would like to make these files secure. My initial thought was to use the login token and the user's ID to check whether they have access and only deliver the files if they do. But then I started working on it, and it seems the most practical solution would involve generating a link that expires at some point in the future. This doesn't remove the exposure to unauthenticated access, but maybe it reduces it enough.
The questions that come to mind are:
Is there a common approach or a few options on how this should be implemented?
I've seen URLs with expiration periods used
Google Docs seems to do something more sophisticated for its embedded images, but I can't tell what
Others?
Basic design points?
Pros/Cons of each?
Yes, it reduces unauthenticated access to a fixed time window, but theoretically it still allows unauthenticated access, so a security professional will say it has no authentication. This kind of timed-expiry link is usually used to safeguard one-time unauthenticated access, such as a password reset (along with an expiring token independent of the time).
What is your goal? From whom are you trying to protect the data? Is it users who already have access to the files, whose access you want to limit with an expiry time? From the question, you need to secure access to the files/documents, which contain text and embedded images, from everyone. You are right about the timed-expiry design, but it will not guarantee authentication, and if the link is served over non-secure HTTP it will not even guarantee the integrity of the document against a potential adversary.
You can use cookies (secure cookies) over HTTPS. As long as the user has a non-expired cookie, allow access to the files/documents. The cookie approach needs distributed cookie management if you host the solution on multiple boxes with a reverse proxy in front. Though cross-site scripting is a threat, most major web application providers still use cookie-based solutions. Please note that cookies break the REST nature of the web application.
Another approach (similar to cookies) is to generate authenticated tokens tied to a user/document, valid for N attempts within a time period set when the token is generated. This method has to be used over HTTPS to avoid unwanted listeners. A sketch of a signed, expiring link of this kind follows below.
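One concrete way to implement the timed-expiry link is an HMAC-signed URL, roughly like this (a sketch in Go; the query parameter names, path, and signing key are assumptions):

    package main

    import (
        "crypto/hmac"
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "net/url"
        "strconv"
        "time"
    )

    var signingKey = []byte("server-side-secret") // illustrative key

    // signedImageURL returns /files/<id>?expires=<unix>&sig=<hmac>, valid until the
    // expiry time, with no per-link state stored on the server.
    func signedImageURL(fileID string, ttl time.Duration) string {
        expires := strconv.FormatInt(time.Now().Add(ttl).Unix(), 10)
        mac := hmac.New(sha256.New, signingKey)
        mac.Write([]byte(fileID + "|" + expires))
        sig := hex.EncodeToString(mac.Sum(nil))
        return "/files/" + url.PathEscape(fileID) + "?expires=" + expires + "&sig=" + sig
    }

    // verifySignedURL is what the file handler runs before streaming the bytes.
    func verifySignedURL(fileID, expires, sig string) bool {
        unix, err := strconv.ParseInt(expires, 10, 64)
        if err != nil || time.Now().After(time.Unix(unix, 0)) {
            return false // expired or malformed
        }
        mac := hmac.New(sha256.New, signingKey)
        mac.Write([]byte(fileID + "|" + expires))
        expected := hex.EncodeToString(mac.Sum(nil))
        return hmac.Equal([]byte(expected), []byte(sig))
    }

    func main() {
        u := signedImageURL("report-42.png", 15*time.Minute)
        fmt.Println(u)
        // The file handler would parse expires and sig from the query string and
        // call verifySignedURL before serving the file.
    }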
An always-changing link is very costly to manage and does not scale over time, because it is too much state to track, and an application crash makes it even more costly. Redirecting to authentication is a safe bet for you, provided you already have cookie management in place or only have one application instance to take care of.
Or you can use HTTP digest authentication, provided your framework supports it, so that you do not have to worry about cookie hell. Please note that you may need to write some client-side JavaScript, depending on your use case.

Restrict Google+ Sign-In to specific Apps Domain

Currently using the OAuth server side one-time-code flow, discussed here:
https://developers.google.com/+/web/signin/server-side-flow
Works perfectly for Google login.
I want the ability, though, to limit this login to only work for users that belong to a specific apps domain.
Is there any way to enforce this through the api?
Or am I limited to doing this only on my end, after Google authentication, by regexing the email domain? (I would like to avoid this.)
Thanks!
There is no support for doing this through Google login. We could allow a developer to set some restrictions on the client ID if there are good use cases and a lot of developers would benefit from it. The primary issue I see with this is the error message that we would have to display to the user. It is better to display that error (and explain it) on your site.
In general, as a good practice, you would always want to do the checks regarding the authorized user (e.g. checking the domain) on your own system/services.
The only way I can see to do this through the API is to use the fully server-side flow (OpenID Connect).
The docs are here:
https://developers.google.com/accounts/docs/OpenIDConnect
With the parameter of interest here:
https://developers.google.com/accounts/docs/OpenIDConnect#hd-param
It doesn't appear to be possible with the server-side one-time-code flow.
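In Go terms, using the hd parameter (and still re-checking the domain yourself, as recommended above) might look roughly like this (client ID, secret, redirect URL, and domain are placeholders):

    package main

    import (
        "fmt"
        "strings"

        "golang.org/x/oauth2"
        "golang.org/x/oauth2/google"
    )

    func main() {
        conf := &oauth2.Config{
            ClientID:     "YOUR_CLIENT_ID",
            ClientSecret: "YOUR_CLIENT_SECRET",
            Endpoint:     google.Endpoint,
            RedirectURL:  "https://example.com/oauth2/callback",
            Scopes:       []string{"openid", "email", "profile"},
        }

        // hd only pre-selects the hosted domain on Google's side; it is not
        // enforcement, so the email/hd claim must still be checked server side.
        authURL := conf.AuthCodeURL("state-token",
            oauth2.SetAuthURLParam("hd", "yourdomain.com"))
        fmt.Println(authURL)

        // After exchanging the code and decoding the ID token (not shown),
        // verify the domain yourself:
        email := "someone@yourdomain.com" // claim taken from the ID token
        if !strings.HasSuffix(email, "@yourdomain.com") {
            fmt.Println("reject: wrong domain")
        }
    }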

How is CORS safer than no cross domain restrictions? It seems to me that it can be used maliciously

I've done a bit of reading on working around the cross domain policy, and am now aware of two ways that will work for me, but I am struggling to understand how CORS is safer than having no cross domain restriction at all.
As I understand it, the cross-domain restriction was put in place because, theoretically, a malicious script could be inserted into a page the user is viewing and cause data to be sent to a server that is not associated with (i.e. not on the same domain as) the site the user actually loaded.
Now, with the CORS feature, it seems like this can be worked around by the malicious guys, because it's the malicious server itself that is allowed to authorise the cross-domain request. So if a malicious script decides to send details to a malicious server that has Access-Control-Allow-Origin: * set, it can now receive that data.
I'm sure I've misunderstood something here; can anybody clarify?
I think #dystroy has a point there, but not all of what I was looking for. This answer also helped. https://stackoverflow.com/a/4851237/830431
I now understand that it has nothing to do with preventing data from being sent, and more to do with preventing unauthorised actions.
For example: a site you are logged in to (e.g. a social network or bank) may have a trusted session open with your browser. If you then visit a dodgy site, it will not be able to abuse the sites you are logged in to (e.g. post spammy status updates, get personal details, or transfer money from your account) because of the cross-domain restriction policy. The only way it could perform that kind of cross-site attack would be if the browser didn't have the cross-domain restriction enabled, or if the social network or bank had implemented CORS to include requests from untrusted domains.
If a site (e.g. a bank or social network) decides to implement CORS, then it should be sure that this can't result in unauthorised actions or unauthorised data being retrieved, but something like a news website content API or Yahoo Pipes has nothing to lose by enabling CORS on *.
You may set a more precise origin filter than "*".
If you decide to open a specific page up to being included in another page, it means you'll handle the consequences.
But the main problem cannot be that a server can receive strange data: that's nothing new; everything a server receives is suspect. The protection is mainly for the user, who must not be abused by an abnormal composition of sources (the enclosing page being able to read the enclosed data, for example). So if you allow all origins for a page, don't put data in it that you want to share only with your user.
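As an illustration of a precise filter, a server-side allow-list instead of "*" might look like this (a sketch in Go; the allowed origin and paths are assumptions):

    package main

    import "net/http"

    // allowedOrigins is the precise filter: only these origins get CORS headers.
    var allowedOrigins = map[string]bool{
        "https://app.example.com": true,
    }

    func withCORS(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            origin := r.Header.Get("Origin")
            if allowedOrigins[origin] {
                // Echo back only known origins instead of "*".
                w.Header().Set("Access-Control-Allow-Origin", origin)
                w.Header().Set("Vary", "Origin")
            }
            if r.Method == http.MethodOptions {
                // Preflight response; the browser blocks the real request if the
                // Allow-Origin header above was not set.
                w.Header().Set("Access-Control-Allow-Methods", "GET, POST")
                w.WriteHeader(http.StatusNoContent)
                return
            }
            next.ServeHTTP(w, r)
        })
    }

    func main() {
        api := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte(`{"public":"data"}`))
        })
        http.Handle("/api", withCORS(api))
        http.ListenAndServe(":8080", nil)
    }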

Resources