ModSecurity rule to filter by specific country

How can I make a ModSecurity rule that only allows a specific country's IP list to access a given file? For example, I want to block any IP outside of Indonesia from accessing register.php.
Below is the rule I have, which only blocks:
SecRule REQUEST_HEADERS:User-Agent "@pmFromFile china_ip.txt" "id:999999,rev:1,severity:2,deny,log,msg:'Block China'"

I'm not a ModSecurity specialist, but I believe you may use the positive security model to deny access to all requests that don't fulfill specific rules.
In your case, you first want to check whether the requested URI is register.php, and allow access if the IP is from Indonesia. I think you can chain the two conditions; it would look something like this:
SecRule REQUEST_URI "@beginsWith /register.php" "id:999999,phase:1,nolog,allow,chain"
    SecRule REMOTE_ADDR "@ipMatchFromFile indonesia_ip.txt"
I have no way to test it right now, but I hope this serves as a hint of what to do.
Keep in mind, though, that there is no way to detect proxy-routed access, so IP-based blocks only ward off direct connections.
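If you'd rather not rely on a site-wide positive-security default, a minimal sketch of the same idea with an explicit deny (my own illustration; it assumes ModSecurity 2.7+ with @ipMatchFromFile, an indonesia_ip.txt of IP/CIDR entries, and that id 1000001 is free in your rule set) might look like this:
# Deny register.php to any client whose IP is not in the Indonesian list
SecRule REQUEST_URI "@beginsWith /register.php" \
    "id:1000001,phase:1,deny,status:403,log,msg:'register.php restricted to Indonesian IPs',chain"
    SecRule REMOTE_ADDR "!@ipMatchFromFile indonesia_ip.txt"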

Related

Changing domains in IIS environment

I am working with IIS (Internet Information Services) on a Windows server with URL Rewrite.
I need to redirect a URL (https://page.olddomain.com) to a new URL (https://page.newdomain.com). Everything remains the same; I just need the URL to change when a user goes to https://page.olddomain.com.
Wondering if I'm following the right process of thought here.
I have an Inbound Rule created that should work.
Match URL
Requested URL: Matches the Pattern
Using: Exact Match
Pattern: https://page.olddomain.com (ignore case)
No conditions set
No server variables
Action
Action Type: Rewrite
Action Properties
Rewrite URL: https://page.newdomain.com
Append query string: checked
Am I missing anything here?
For this problem, if the redirect doesn't work, I think it is caused by the wrong input in the Pattern box, which is the same point made in the comment by Lex Li.
We do not need to put the base URL (such as https://page.olddomain.com) in Pattern; we only need to put the part of the URL that comes after the base URL. For your requirement, you just need to configure it as described below.
I suggest you use "Regular Expressions" instead of "Exact Match"; that will let you implement your requirement.
And by the way, you might want to put / in Pattern (ignoring the base URL), but that will not work, so please use .* as the Pattern. Apart from this, you'd better also define a condition specifying that HTTP_HOST equals your old host name.
For the comment you mentioned about SSL, I think it will not be affected by the redirect/rewrite rule.
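As a rough illustration of what the rule described above ends up looking like in web.config (my own sketch; the host name pattern, the Permanent redirect type, and the use of a Redirect action rather than Rewrite for the cross-domain change are assumptions):
<system.webServer>
  <rewrite>
    <rules>
      <!-- Match any path on the old host and send the browser to the new domain -->
      <rule name="Redirect old domain" stopProcessing="true">
        <match url=".*" />
        <conditions>
          <add input="{HTTP_HOST}" pattern="^page\.olddomain\.com$" />
        </conditions>
        <action type="Redirect" url="https://page.newdomain.com/{R:0}" redirectType="Permanent" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>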

REST API path with many-to-many relationship

I am defining a REST path for a many-to-many relationship.
I want to get a list of users who are guests of a company.
Is the path below sufficient?
/api/v1/users/companies/{companyId}/guests
Because I put it in UserController, it cannot be
/api/v1/companies/{companyId}/guests
Do you have any suggestion?
I primarily intended to write a comment, but it turned out way too long, so I've written an answer instead.
First of all, REST is an architectural style and not a cookbook for designing URIs. REST doesn't enforce any URI design (as long as it's compliant with RFC 3986), and it's totally up to you to pick the URIs that best identify your resources.
Do you have any suggestion?
Answers to this question will tend to be almost entirely based on opinions, rather than facts, references, or specific expertise. What you'll read from this point is my personal opinion.
If the guests and the companies resources can be managed independently, I would use the following mappings:
/companies
/guests
Then you can use a query parameter to filter the guests for a given company:
GET /guests?company={id} HTTP/1.1
Host: example.org
To create a guest resource for a given company, you could use:
POST /guests HTTP/1.1
Host: example.org
Content-Type: application/json
{
"name": "John Appleseed",
"companyId": 1
}
REST doesn't care what spellings you use for your resource identifiers, so long as they are consistent with the production rules defined in RFC 3986.
/api/v1/users/companies/{companyId}/guests
That's fine
/api/v1/companies/{companyId}/guests
That would also be fine.
/d4158568-c40f-4c51-93cd-25642f6f42e2
So would that.
/api/v1/companies/guests?companyId={companyId}
On the web, you are perhaps more likely to see an identifier like this one; forms are a useful way of enabling a client to provide data, and HTML has production rules for creating a URI from data in a form. Of course, HTTP also has mechanisms that allow you to redirect the client's attention from one URI to another, so you don't have to restrict identifiers to those appropriate for a specific client.
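For instance (a purely hypothetical exchange; companyId 12 and the status code are placeholders), a server could keep accepting one spelling and simply point clients at the identifier it prefers:
GET /api/v1/users/companies/12/guests HTTP/1.1
Host: example.org

HTTP/1.1 301 Moved Permanently
Location: /api/v1/companies/12/guests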

Sanitizing URL and parameters

Currently, my software has the following workflow:
The user performs a search through a REST API and selects an item
The server performs the same search again to validate the user's selection
In order to implement step 2, the user has to send the URL params they used for their search as a string (e.g. age=10&gender=M).
The server will then http_get(url + "?" + params_str_submitted_by_user)
Can a malicious user make the server connect to an unintended server by manipulating params_str_submitted_by_user?
What is the worst case scenario if even newlines are left in and the user can arbitrarily manipulate the HTTP headers?
As you are appending params_str_submitted_by_user to the base URL after the ? delimiter, you are safe from the type of attack where the domain context is changed into a username or password:
Say the URL was http://example.com and params_str_submitted_by_user was @evil.com, and you did not have the / or ? characters in your URL string concatenation.
This would make your URL http://example.com@evil.com, which actually means username example.com at domain evil.com.
However, the username cannot contain the ? (nor the slash) character, so you should be safe, as the ? forces everything after it into the query string. In your case the URL becomes:
http://example.com?@evil.com
or
http://example.com/?@evil.com
if you include the slash in your base URL (better practice). These are safe, as all they do is pass @evil.com to your website as a query string value; it will no longer be interpreted as a domain by the parser.
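To make the difference concrete, here is a small Python illustration (my own sketch, not part of the question's setup) of how a standards-compliant parser splits the two forms:
from urllib.parse import urlsplit

# Without the ? the userinfo@host rule applies, so evil.com becomes the host.
u = urlsplit("http://example.com@evil.com")
print(u.hostname, u.username)    # evil.com example.com

# With /? in place, everything after the ? is only the query string.
v = urlsplit("http://example.com/?@evil.com")
print(v.hostname, v.query)       # example.com @evil.com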
What is the worst case scenario if even newlines are left in and the user can arbitrarily manipulate the HTTP headers?
This depends on how good your http_get function is at sanitizing values. If http_get does not strip newlines internally it could be possible for an attacker to control the headers sent from your application.
e.g. If http_get internally created the following request
GET <url> HTTP/1.1
Host: <url.domain>
so under legitimate use it would work like the following:
http_get("https://example.com/foo/bar")
generates
GET /foo/bar HTTP/1.1
Host: example.com
an attacker could set params_str_submitted_by_user to
<space>HTTP/1.1\r\nHost: example.org\r\nCookie: foo=bar\r\n\r\n
this would cause your code to call
http_get("https://example.com/" + "?" + "<space>HTTP/1.1\r\nHost: example.org\r\nCookie: foo=bar\r\n\r\n")
which would cause the request to be
GET / HTTP/1.1
Host: example.org
Cookie: foo=bar
HTTP/1.1
Host: example.com
Depending on how http_get parses the domain, this might not cause the request to go to example.org instead of example.com; it is just manipulating the headers (unless example.org is another site on the same IP address as your site). However, the attacker has managed to manipulate headers and add their own cookie value. The advantage to the attacker depends on what can be gained under your particular setup. There is not necessarily any general advantage; it would be more of a logic-flaw exploit if they could trick your code into behaving in an unexpected way by causing it to make requests under the attacker's control.
What should you do?
To guard against the unexpected and unknown, either use a version of http_get that handles header injection properly; many modern languages now deal with this situation internally.
Or, if http_get is your own implementation, make sure it sanitizes or rejects URLs that contain characters that are invalid in a URL, such as carriage returns or line feeds. See this question for a list of valid characters.
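As a rough sketch of that second option in Python (my own illustration; the allowed character set is an assumption and should be tightened to whatever your search parameters actually need):
import re

# Only allow characters that legitimately appear in a query string such as
# "age=10&gender=M"; CR, LF, spaces, '#', '@' and friends are all rejected.
_SAFE_QUERY = re.compile(r"[A-Za-z0-9_.~%&=-]*")

def build_validation_url(base_url, params_str_submitted_by_user):
    if not _SAFE_QUERY.fullmatch(params_str_submitted_by_user):
        raise ValueError("query string contains characters that are not allowed")
    return base_url + "?" + params_str_submitted_by_user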

Order of intercept-url patterns in Spring Security

In appSecurity.xml I have this:
<intercept-url pattern="/users/profile/**" access="hasRole('VIEW_PROFILES')" />
<intercept-url pattern="/users/profile/edit/**" access="hasRole('EDIT_PROFILES')" />
I have a page /users/profile/edit/addnew, and when a user with role VIEW_PROFILES tries to access this page, he gets it successfully, but access for a user with role EDIT_PROFILES is blocked.
What am I doing wrong?
Since "/users/profile/edit/**" is more specific than "/users/profile/**", it should be placed higher in the list.
Why
Patterns are always evaluated in the order they are defined. Thus it is important that more specific patterns are defined higher in the list than less specific patterns. This is reflected in our example above, where the more specific /secure/super/ pattern appears higher than the less specific /secure/ pattern. If they were reversed, the /secure/ pattern would always match and the /secure/super/ pattern would never be evaluated.
Source: Core Security Filters
Both John Farrelly and Ritesh are correct. The intercept-url patterns are matched in the order listed. As soon as a match is found, the rest of the patterns specified are ignored. This is why you should list more specific patterns earlier.
In your case, the pattern for /users/profile/edit/somepage matches the pattern specified in the first intercept-url pattern, so Spring is appropriately checking to see if the user in question has the access role specified. Apparently, your EDIT_PROFILES users do not have VIEW_PROFILES authority, so they are being denied access. Likewise, your intention to restrict access to ../edit/ to users with EDIT_PROFILES authority is being undermined by the earlier statement which grants access to users with VIEW_PROFILES authority.
Switch the order for the easy fix, and you probably want to give your EDIT_PROFILES users VIEW_PROFILES authority (in addition to EDIT_PROFILES authority). Alternatively, consider using access="hasAnyRole('VIEW_PROFILES','EDIT_PROFILES')" rather than a single hasRole(...) check, to simplify the access statements.
Make sure that your EDIT_PROFILES rule is above the VIEW_PROFILES rule. If you take a look at the expression for VIEW_PROFILES, you will see that it matches every URL that would also match EDIT_PROFILES. That means that if the VIEW_PROFILES rule comes first, Spring Security will never bother to try the EDIT_PROFILES rule.
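Putting those suggestions together, a sketch of the reordered configuration (the hasAnyRole expression is just one way to let both viewers and editors read profiles; adjust it to your setup) would be:
<intercept-url pattern="/users/profile/edit/**" access="hasRole('EDIT_PROFILES')" />
<intercept-url pattern="/users/profile/**" access="hasAnyRole('VIEW_PROFILES','EDIT_PROFILES')" />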

Is it possible to validate that a domain belongs to somebody?

For example, I have a field that lets a user type in their domain. The user can type any domain here, but I don't validate that the domain belongs to that user. Of course, I can generate a text file containing a random number for the user to upload to their site, and if I can fetch it and it matches, I can treat them as the valid domain holder. But apart from this method, is there any other way to do so? Thanks.
Options I have seen:
Have the user create a text file in the document root, and check for it
Send an email to the contacts listed in WHOIS (or other role-type accounts such as postmaster, hostmaster, etc.), with a token they need to return
Have them create an 'A' record in their DNS that is unique and that you can query for
There really isn't any other way of telling whether they have control over the domain. Using WHOIS information isn't 100% accurate, as people don't update it, their info isn't registered to them, or it is hidden behind something like Domains by Proxy. There is no standard information in DNS that can tell you ownership. Since Google uses the DNS method and the text file method (I think), you can probably safely assume those are good ways to verify it.
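A rough sketch of the DNS 'A' record option in Python (my own illustration; the label format, the verification IP, and the plain gethostbyname lookup are all assumptions):
import secrets
import socket

# Ask the user to create <label>.<their-domain> as an 'A' record
# pointing at TARGET_IP, then try to resolve it.
TARGET_IP = "203.0.113.10"        # hypothetical verification address
label = secrets.token_hex(8)      # unique, hard-to-guess label

def domain_verified(domain: str, label: str) -> bool:
    try:
        return socket.gethostbyname(f"{label}.{domain}") == TARGET_IP
    except socket.gaierror:
        # record not created yet, or it does not resolve
        return False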
