reCAPTCHA: where should the hostname for domain validation come from?

When our server verifies the user's response token, the reply from Google's server contains a hostname key.
I have to validate not only the user's response token, but also that this hostname matches our own, e.g. mywebsite.com.
The documentation at https://developers.google.com/recaptcha/docs/domain_validation warns that "Turning off this protection by itself poses a large security risk", so I do want to verify it.
However, I don't understand where the hostname to validate against should come from.
Should it arrive in the request headers or the request body when the user makes a request to our server? Or do I need to install some script that injects the hostname/website name so it is sent along with the request?
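One common approach is to compare the hostname Google reports against a list you configure on your own server, rather than anything taken from the incoming request. A minimal sketch in Python, assuming the requests library; the EXPECTED_HOSTNAMES set and the secret value are illustrative placeholders, while the siteverify URL and the success/hostname keys are Google's documented API:

```python
import requests

# Hostnames you consider valid -- configured server-side, not taken from
# the incoming request (hypothetical example values).
EXPECTED_HOSTNAMES = {"mywebsite.com", "www.mywebsite.com"}

RECAPTCHA_SECRET = "your-secret-key"  # placeholder

def verify_recaptcha(token: str) -> bool:
    """Verify the user's response token and check the hostname Google reports."""
    resp = requests.post(
        "https://www.google.com/recaptcha/api/siteverify",
        data={"secret": RECAPTCHA_SECRET, "response": token},
        timeout=10,
    )
    result = resp.json()
    # 'hostname' is the site on which the reCAPTCHA was solved, as reported
    # by Google -- compare it against your own configuration.
    return result.get("success", False) and result.get("hostname") in EXPECTED_HOSTNAMES
```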

Related

EWS Autodiscover endpoints

I need to get values for the X-AnchorMailbox and X-PublicFolderMailbox headers for public folder requests. I was using both of those articles, first and second, to retrieve the header values, but a problem occurred during the autodiscover process.
To send the autodiscover request I use a derived endpoint, because I write my application in C++ and use only SOAP/POX requests to retrieve data from EWS. If I understood correctly, this kind of endpoint should be derived from the user's e-mail address. So if the user has the address user@test.onmicrosoft.com, one of the endpoints should be https://test.onmicrosoft.com/autodiscover/autodiscover.xml (for POX). But this endpoint doesn't work at all.
Is there any way to get the correct endpoint, or are there other ways to retrieve values for these headers?
There are multiple endpoints (https and the http redirect), plus the endpoints from AD and DNS.
Start at Autodiscover for Exchange.
In your particular case (a redirect to a hosted M365 mailbox), you will most likely end up going through the unsecured http://autodiscover.YourDomain.demo/autodiscover/autodiscover.xml redirect (301, 302, 307, 308) to https://outlook.office365.com/autodiscover/autodiscover.xml.
You can also see the autodiscover steps if you run the connectivity analyzer at
https://testconnectivity.microsoft.com/tests/Ola/input
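For illustration, a rough sketch of that flow in Python rather than C++, purely to show the sequence: build the derived POX endpoints from the address's domain, harvest the redirect target from the unsecured http endpoint, then POST the standard POX request body to each candidate. The credentials and error handling are placeholders, not a production implementation:

```python
import requests

POX_NS = "http://schemas.microsoft.com/exchange/autodiscover/outlook/requestschema/2006"
RESPONSE_SCHEMA = "http://schemas.microsoft.com/exchange/autodiscover/outlook/responseschema/2006a"

def pox_body(email: str) -> str:
    return (
        f'<Autodiscover xmlns="{POX_NS}">'
        f"<Request><EMailAddress>{email}</EMailAddress>"
        f"<AcceptableResponseSchema>{RESPONSE_SCHEMA}</AcceptableResponseSchema>"
        f"</Request></Autodiscover>"
    )

def autodiscover(email: str) -> str:
    domain = email.split("@", 1)[1]
    candidates = [
        f"https://{domain}/autodiscover/autodiscover.xml",
        f"https://autodiscover.{domain}/autodiscover/autodiscover.xml",
    ]
    # Probe the unsecured http endpoint only to harvest the redirect target
    # (301/302/307/308), e.g. https://outlook.office365.com/autodiscover/autodiscover.xml
    try:
        r = requests.get(f"http://autodiscover.{domain}/autodiscover/autodiscover.xml",
                         allow_redirects=False, timeout=10)
        if r.is_redirect:
            candidates.append(r.headers["Location"])
    except requests.RequestException:
        pass

    for url in candidates:
        try:
            r = requests.post(url, data=pox_body(email),
                              headers={"Content-Type": "text/xml"},
                              auth=(email, "password"),  # placeholder credentials
                              timeout=10)
            if r.status_code == 200 and "<Autodiscover" in r.text:
                return r.text  # parse this XML for the settings you need
        except requests.RequestException:
            continue
    raise RuntimeError("Autodiscover failed for all candidate endpoints")
```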

Maintain State Between HTTP Requests to Keycloak in JMeter

So I am trying to automate a JMeter script that creates Keycloak users and then signs them in.
First it GETs the login page and stores the code; here is an example request:
GET http://Keycloak.com:8001//auth/realms/REALM/protocol/openid-connect/auth?response_type=code&client_id=CLIENT&scope=openid%20profile%20email&nonce=N5b3a2da23c04a&response_mode=form_post&resource=RESOURCE&state=2SJwtlVZrswlGkw&redirect_uri=REDIRECTURI
However, when I then GET the registration page, the code changes and the tab_id changes as well. How can I keep Keycloak from generating a new code token with every HTTP request in a thread?
In addition, why does each HTTP request from JMeter act like a new session instead of the next request in a series?
EDIT:
I am using Regular Expression Extractors to track the code and execution variables, along with an HTTP Cookie Manager and an HTTP Cache Manager for the thread.
Looking at my POST request, both variables are the same as in the previous HTTP request, and all of my cookies are being maintained, yet every time I try this automated login I get a 400 error and the Keycloak event log shows an invalid_code error.
Edit:
As requested, here is a screenshot of all my sign-in requests.
Most probably your Regular Expression Extractor is not nested inside the HTTP Request sampler you are trying to extract data from.
If its scope is too wide, it applies to all HTTP Requests, so the first extraction succeeds, but on the next request, which does not contain the token, the extractor runs again and overwrites the old value with an empty one.
See scoping rules in JMeter:
https://jmeter.apache.org/usermanual/test_plan.html#scoping_rules
You need to maintain the correlation between hits. Please go through the blog below:
https://www.blazemeter.com/blog/how-to-handle-correlation-in-jmeter
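As a conceptual illustration of that correlation (in Python rather than JMeter; the URL, realm, and form field names are only examples): extract the form action, which already carries session_code, execution, and tab_id, from the response you just received, and send the very next request to that extracted action using the same cookie jar.

```python
import re
import requests

AUTH_URL = ("http://keycloak.example:8001/auth/realms/REALM/protocol/openid-connect/auth"
            "?response_type=code&client_id=CLIENT&scope=openid%20profile%20email"
            "&redirect_uri=https://app.example/callback&state=abc123")

with requests.Session() as s:   # one session = one cookie jar, like the HTTP Cookie Manager
    page = s.get(AUTH_URL)
    # The login form's action already contains session_code, execution and tab_id;
    # extract it from *this* response rather than fetching the page again.
    match = re.search(r'action="([^"]+)"', page.text)
    if not match:
        raise RuntimeError("login form not found")
    login_action = match.group(1).replace("&amp;", "&")

    # POST the credentials straight back to the extracted action, same session/cookies.
    result = s.post(login_action,
                    data={"username": "testuser", "password": "secret"},
                    allow_redirects=False)
    print(result.status_code, result.headers.get("Location"))
```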
According to Keycloak, you must use HTTPS if you are using a public hostname like keycloak.com.
Keycloak can run out of the box without SSL as long as you stick to private IP addresses like localhost, 127.0.0.1, 10.0.x.x, 192.168.x.x, and 172.16.x.x. If you don't have SSL/HTTPS configured on the server, or you try to access Keycloak over HTTP from a non-private IP address, you will get an error.
So you have three options: use a private IP address, use a reverse proxy or load balancer to handle HTTPS, or enable HTTPS for the Keycloak server.

XHR2 allow origin - can this be faked?

With regards to the way resources accessed over XHR2/CORS can block the request unless it came from a whitelisted domain:
which header is read to determine the referring domain - is it the standard HTTP_REFERER?
could someone send a request pretending to be from another domain somehow?
I'm aware CORS is not a reliable means of securing data - I ask only as a point of curiosity.
The header that is read is Origin. Like any HTTP header, it can be faked. The idea behind CORS is to enable sending data while still protecting the user and preventing abuse of the user's session. Cross-domain requests are forbidden to protect the user, not the server.
An attacker would have to both send the request pretending to come from another domain AND send the cookies the user has, and this is not something you can achieve via XSS alone: you would have to steal the cookie and send the request yourself, but you cannot steal the cookie for site A from site B. If A accepts requests from any domain, however, then via XSS on B you can trick the user's browser into sending a request to A; the browser will send the cookies, and you can read the response back.
The Origin header contains the requesting domain.
The browser is in complete control of this header, and it cannot be faked from within the page: the browser sets it on behalf of the user, and users cannot override its value in JS code.
Note that I said "the browser"; as with any HTTP request, a user could craft a curl request with any Origin header. But this has limited use as an attack vector, since the attacker would have to trick a valid user into issuing the crafted curl request, which is unlikely.
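To make both points concrete, a small sketch (Python; the URLs and allow-list are hypothetical): outside a browser the Origin header can be set to anything, but such a request carries none of the victim's cookies, and on the server the usual pattern is an allow-list check before echoing the origin back.

```python
import requests

# Outside the browser, Origin is just another header -- trivially "faked",
# but the request carries none of the victim's cookies.
r = requests.get("https://api.example.com/data",
                 headers={"Origin": "https://trusted.example.com"})
print(r.headers.get("Access-Control-Allow-Origin"))

# Server side, the usual pattern is an allow-list check before echoing the origin back.
ALLOWED_ORIGINS = {"https://trusted.example.com"}

def cors_headers(request_origin: str) -> dict:
    if request_origin in ALLOWED_ORIGINS:
        return {"Access-Control-Allow-Origin": request_origin, "Vary": "Origin"}
    return {}  # the browser will then refuse to expose the response to the page
```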

Is it secure to pass login credentials as plain text in an HTTPS URL?

Is it secure to pass login credentials as plain text in an HTTPS URL?
https://domain.com/ClientLogin?Email=jondoe@gmail.com&Passwd=123password
Update: So let's say this is not being entered in the browser, but is being generated programmatically and requested with a POST request (not a GET request). Is that secure?
Solution:
It is not secure to use this type of URL in a GET request (i.e. typing the URL into the browser), because the requested URL will be saved in the browser history and in server logs.
However, it is secure to submit the credentials as part of the body of a POST request to https://domain.com/ClientLogin (i.e. submitting a form), since the POST body is encrypted and is only sent after the TLS connection to the requested host has been established. So the form action would be https://domain.com/ClientLogin and the form field values would be passed in the POST body.
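A quick sketch of the difference (Python; the endpoint and field names are the ones from the question):

```python
import requests

# Wrong: credentials end up in the URL (browser history, server/proxy logs),
# even though the TLS connection itself is encrypted.
requests.get("https://domain.com/ClientLogin",
             params={"Email": "jondoe@gmail.com", "Passwd": "123password"})

# Better: same endpoint, but the credentials travel in the POST body,
# which is only sent after the TLS connection has been established.
requests.post("https://domain.com/ClientLogin",
              data={"Email": "jondoe@gmail.com", "Passwd": "123password"})
```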
Here are some links that helped me understand this better:
Answer to StackOverflow Question: Are https URLs encrypted?
Straightforward Explanation of SSL and HTTPS
Google Answers: HTTPS - is URL string itself secure?
HTTP Made Really Easy
No. They won't be seen in transit, but they will remain in:
browser history
server logs
If it's at all possible, use POST over HTTPS for authentication and then set an "authenticated" cookie, or use HTTP Digest authentication over HTTPS, or even HTTP Basic auth over HTTPS - but whatever you do, don't put secret/sensitive data in the URL.
Edit: when I wrote "use POST", I meant "send sensitive data over HTTPS in POST fields". Sending a POST http://example.com/ClientLogin?password=hunter2 is every bit as wrong as sending it with GET.
TL;DR: Don't put passwords in the URL. Ever.
Passing login info in URL parameters is not secure, even with SSL.
Passing login info in the POST body over SSL is considered secure.
If you're using SSL, consider HTTP Basic authentication. While this is horribly problematic without SSL, it is no worse than POST with credentials; it achieves what you want, but does so according to an established standard rather than custom field names.
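A minimal sketch of that suggestion (Python; the URL is hypothetical): with HTTP Basic auth over HTTPS the credentials travel in the Authorization header, not in the URL, and TLS keeps that header confidential in transit.

```python
import requests

# HTTP Basic auth over HTTPS: credentials go in the Authorization header,
# never in the URL.
resp = requests.get("https://domain.com/api/resource",
                    auth=("jondoe@gmail.com", "123password"))
print(resp.status_code)
```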

Posting HTML Form Values Over HTTPS

I have a website that currently uses HTTPS for secure login and transactions. You can't navigate to the main site unless you log in.
I have had a request from a partner who has asked whether they can navigate seamlessly to our site from their own web application, without logging in. Their site also uses HTTPS.
I've set up a "PartnerLoginPage.aspx" page and allowed them to POST HTML form values into it (they have the correct user login details). I then authenticate them based on the posted values and redirect them to the main site. They don't need to log in then; I've already authenticated them, and it works perfectly.
My biggest concern is that this is not a secure way of authenticating the user. If you POST HTML form values to an HTTPS page, is the data still encrypted? Just out of interest, if their site was not an HTTPS site (it is), would the data still be encrypted?
e.g. THEIR HTTPS -> FORM POST VALUES -> OUR HTTPS -> ARE THE FORM POST VALUES ENCRYPTED?
and
THEIR HTTP (note: no 's') -> FORM POST VALUES -> OUR HTTPS -> ARE THE FORM POST VALUES ENCRYPTED?
Thanks for any help,
Stuart
Assuming all certificates are valid, of course...
What decides whether the POST values are encrypted is the URL the form posts to, not the page the form lives on. If the form action is an https URL, the request (including the POST values) and the response that comes back over the same connection are both encrypted, regardless of where the form was served from.
If the form is served from a plain http page but posts to an https URL, the POST itself is still encrypted in transit; the weakness is that the http page could have been tampered with before it reached the user (for example, to change the form action). Posting to an http URL is never encrypted, no matter where the form came from.
So: their https page -> POST to your https page = encrypted; their http page -> POST to your https page = the POST is encrypted, but the page that produced it was not protected.
Of course, there are levels of security that can be set at the script level, but I don't think your question is about that.
Quick postscript:
Why not give the partner sites a service account, something like "username: partners, pw: sheswithme"? You could use cURL to set up the cookie and pass the server variables, and have them point their form at a script that makes the request, instead of their users having semi-direct access to your script.
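A rough sketch of that postscript (Python; the hostname, service-account values, and cookie name are hypothetical): the partner's backend, not the end user's browser, POSTs the service credentials over HTTPS and then inspects the redirect and cookie your site returns.

```python
import requests

# Runs on the partner's server (not in the end user's browser), as the
# postscript suggests: the partner's backend makes the request on the user's behalf.
session = requests.Session()
resp = session.post(
    "https://yoursite.example/PartnerLoginPage.aspx",         # endpoint from the question
    data={"username": "partners", "password": "sheswithme"},  # hypothetical service account
    allow_redirects=False,
)

print(resp.status_code)
print(resp.headers.get("Location"))   # e.g. the redirect into the main site
print(session.cookies.get_dict())     # whatever auth cookie your site issues
```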
