Could not set variable "USER.user_false_counter" as the collection does not exist — ModSecurity on Apache (Windows OS)

We are using the Apache module for our web server (Windows OS). We need to prevent unsuccessful authentication attempts by a user, so we decided to use the ModSecurity module. I use the following standard configuration in "modsecurity-minimal.conf":
SecStatusEngine On
SecRule IP:bf_block "@eq 1" \
    "id:'2000004',phase:4,deny,\
    logdata:'Access denied [by IP] IP: %{REMOTE_ADDR}, user: %{USER.name}'"
SecRule USER:bf_block "@eq 1" \
    "id:'2000005',phase:4,deny,\
    logdata:'Access denied [by USER] IP: %{REMOTE_ADDR}, user: %{USER.name}'"
SecRule REQUEST_HEADERS:authorization "Basic ([a-zA-Z0-9]+=*)$" "phase:3,nolog,pass,id:2000012,chain,capture"
    SecRule TX:1 "^([-a-zA-Z0-9_]+):" "t:base64Decode,chain,capture"
        SecAction "initcol:USER=%{TX.1},setvar:USER.name=%{TX.1},initcol:IP=%{REMOTE_ADDR}"
SecRule RESPONSE_STATUS "401" \
    "phase:5,pass,id:2000015,chain,logdata:'basic auth de %{IP}, var: %{IP.begin}, user: %{USER.name}, ufc: %{USER.user_false_counter}, block: %{USER.bf_block}, IPblock: %{IP.bf_block}, ifc: %{IP.ip_false_counter}'"
    SecAction "setvar:USER.user_false_counter=+1,setvar:IP.ip_false_counter=+1,expirevar:USER.user_false_counter=300,expirevar:IP.ip_false_counter=300"
# Check for too many failures for a single username, blocking 30 seconds after 3 tries
SecRule USER:user_false_counter "@ge 2" \
    "id:'2000020',phase:3,t:none,pass,\
    setvar:USER.bf_block,\
    setvar:!USER.user_false_counter,\
    expirevar:USER.bf_block=30"

# Check for too many failures from a single IP address. Block for 5 minutes after 10 tries.
SecRule IP:ip_false_counter "@ge 2" \
    "id:'2000021',phase:3,pass,t:none,\
    setvar:IP.bf_block,\
    setvar:!IP.ip_false_counter,\
    expirevar:IP.bf_block=300"
However, when I look at modsec_debug.log, I see the following errors:
Could not set variable "USER.user_false_counter" as the collection does not exist.
Could not set variable "IP.ip_false_counter" as the collection does not exist.
Please help me resolve this issue.

This is a very complicated rule set (is it taken from the ModSecurity Handbook?) and it may take hours to debug, so it is not likely you will get complete support here.
What I can see immediately is that you are not always initializing the collections, and there is a chance rule 2000015 fires without the initialization having happened. That is, when a browser requests a resource without basic auth, the server responds with 401; your rule 2000015 fires then, and only on the subsequent request does the browser re-request the same URI with the Authorization header.
So it looks to me as if your logic / rule architecture was garbled.
When I write complicated rule sets like this, I log every rule, and I write and test them step by step. Only when every rule works on its own do I start to put them together; then I optimize them, and finally I set most of them to nolog.
This may take some time, so be warned.
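To make the idea concrete, here is a minimal sketch of such a guard (the rule IDs 2000001/2000016 and the unconditional phase-1 initcol are my additions, not part of the original rule set): initialize the IP collection on every request so it always exists, and only increment the failure counters when the 401 response belongs to a request that actually carried an Authorization header:

# Sketch: always initialize the IP collection, so IP.* setvar targets exist.
SecAction "id:2000001,phase:1,nolog,pass,initcol:IP=%{REMOTE_ADDR}"

# Sketch: count a failure only if the 401 answers a request that actually
# carried basic-auth credentials (so the USER collection was initialized).
SecRule RESPONSE_STATUS "@streq 401" "id:2000016,phase:5,nolog,pass,chain"
    SecRule REQUEST_HEADERS:Authorization "@beginsWith Basic" \
        "setvar:USER.user_false_counter=+1,expirevar:USER.user_false_counter=300,\
        setvar:IP.ip_false_counter=+1,expirevar:IP.ip_false_counter=300"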

Related

Ansible Try Multiple Passwords for Same User

I need to log in to 50 hosts and perform a specific task.
Each host has one of 2 passwords (ex: pass1 and pass2) for a specific user (ex: foo).
I do not know on which hosts "foo" is set up with "pass1" and on which with "pass2". I have both passwords in a vault file.
Using Ansible, how can I create a task that first tries to log in as "foo" with "pass1", falls back to "pass2" if that fails, and finally sets a fact with the correct vault value (i.e. the password with which "foo" managed to log in)?
I then want to use that fact to perform additional tasks on that same host.
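A rough sketch of one possible approach (the names vault_pass1, vault_pass2, foo_password and the placeholder task are illustrative, not from the question; treat this as untested): probe the host with the first password using ignore_unreachable, then set the fact based on which attempt got through.

- name: Try pass1 for user foo
  ansible.builtin.ping:
  vars:
    ansible_user: foo
    ansible_password: "{{ vault_pass1 }}"
  ignore_unreachable: true
  register: try_pass1

- name: Record whichever password worked as a fact
  ansible.builtin.set_fact:
    foo_password: "{{ vault_pass1 if not (try_pass1.unreachable | default(false)) else vault_pass2 }}"

- name: Use the discovered password for the actual task
  ansible.builtin.command: /usr/bin/some_task   # placeholder for the real task
  vars:
    ansible_user: foo
    ansible_password: "{{ foo_password }}"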

How to if/else properly in Ansible and registering variables parallelly

What I am trying to do is register variables accordingly. I am pinging an interface and then trying to verify whether the ping was successful. However, my code is not working the way I imagined. I have simplified it to the following sample:
- name: Verify ping
  ansible.builtin.shell: echo "yes it is online"
  register: tmp
  when: ping_output is search("5/5")

- name: Verify ping
  ansible.builtin.shell: echo "no it is offline"
  register: tmp
  when: ping_output is search("0/5")
The variable ping_output was registered in a previous task. What I have observed during my tests is that tmp is sometimes not set when I want to evaluate it. This happens when the first conditional is true: tmp.stdout is set to "yes it is online", but even though Ansible skips the second task (the "else statement"), the skipped task still unsets tmp.stdout, and at some point I receive "tmp.stdout is not defined". Do you have better suggestions for solving this?
Thanks in advance
EDIT:
Hi, thanks for the review. I'm trying to store information in variables, and yes, later I reference tmp.stdout. I actually ping from a Cisco device and save the output. There are several pings, so several outputs. Then there are tasks which evaluate these outputs. The goal is to prepare an appropriate description which I send via JSON as an API call. What the end user sees in the end is a description in an external trouble-ticketing system, with information like which interfaces were pingable, which were not, etc.
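One way around this, sketched below (using set_fact in a single task is my suggestion, not something from the thread): a skipped task still overwrites its register target with a "skipped" result that has no stdout, so computing the value in one task avoids the problem entirely.

- name: Record ping status
  ansible.builtin.set_fact:
    ping_status: "{{ 'yes it is online' if ping_output is search('5/5') else 'no it is offline' }}"

ping_status can then be referenced safely in the later tasks that build the ticket description.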

gcloud cli failing to add record when contents start with dash

I'm working with the LetsEncrypt dns-01 challenge system, which entails dynamically creating a TXT record in Google Cloud DNS with specific content so LE can assert proof of ownership for generating a wildcard certificate (so I can't use http-01). The problem is that sometimes LE tells me to create a TXT record that starts with a "-", for example -E_DFDFHJKF1783FSHDJ. I cannot get the gcloud cli to properly accept this data no matter what I do.
Example:
gcloud dns record-sets transaction start --zone=myzone
gcloud dns record-sets transaction add "-E_ASDFSDF" --ttl=30 --zone=myzone --name=test --type=TXT
gcloud dns record-sets transaction remove "-A_DSFKHSDF" --ttl=30 --zone=myzone --name=test2 --type=TXT
If you run those commands and inspect the resulting transaction.yaml, you can see whether it properly contains the right string. If it worked, you should see something like:
- kind: dns#resourceRecordSet
  name: test.
  rrdatas:
  - '"ASDFASDF"'
  ttl: 30
  type: TXT
I am executing this via Node's child_process, but I have the issue even when I execute it directly from bash, so Node isn't really the issue here. I've tried echoing the value in, and I've tried setting an environment variable and using that in the string.
No matter what I do I get an error like the following:
ERROR: (gcloud.dns.record-sets.transaction.add) unrecognized arguments: -E_ASDFSDF
It turns out some characters need to be escaped in the CLI. I can confirm that the following works:
gcloud dns --project=myprojectid record-sets transaction add "\-test123" --name=test.mydomain.com. --ttl=300 --type=TXT --zone=myzoneid
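If the record data arrives in a variable (as it does in the dns-01 flow), the same escaping can be applied programmatically; a small bash sketch (the TOKEN variable name is illustrative, not from the question):

TOKEN='-E_DFDFHJKF1783FSHDJ'    # value handed out by LetsEncrypt

# Prepend a backslash only when the data starts with a dash,
# so gcloud does not parse it as a flag.
case "$TOKEN" in
  -*) TOKEN="\\${TOKEN}" ;;
esac

gcloud dns record-sets transaction add "$TOKEN" \
  --ttl=30 --zone=myzone --name=test --type=TXT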

Login test using Taurus

Test login action using Taurus
execution:
- concurrency: 5
  ramp-up: 5
  hold-for: 1m
  scenario: Buyer-logs-in

scenarios:
  Buyer-logs-in:
    variables:
      baseurl: http://localhost:3000
    default-address: ${baseurl}
    data-sources:
    - path: './login.csv'
      delimiter: ','
      variable-names: userName, password
    keepalive: true
    retrieve-resources: false
    requests:
    - url: 'http://localhost:3000/login'
      label: login
      method: POST
      body:
        user[email]: {userName}
        user[password]: {password}
      assert:
      - contains:
        - 200
        subject: http-code
    - url: 'http://localhost:3000/action'
      label: page1
      method: GET
      assert:
      - contains:
        - 200
        subject: http-code
This is my sample Taurus code to simulate login and measure performance.
In my app, only one user can be logged in at a time, and my CSV file has 2 users. The test still works when I set a concurrency of 5, and Taurus says 5 users logged in. How is that possible? When the same user logs in again, he is kicked out of the first browser where he logged in. So with 2 user logins, how does Taurus simulate 5 users?
With that asked, does Taurus really log in using the credentials I give in the CSV file? Or should I use Selenium/Taurus to simulate it?
What really confused me was that when I deleted all users in the CSV file, the test still did not give me 200 for the login and page1.
TIA
If you don't specify an executor, Taurus will use JMeter as the default, which means your YAML config is translated into an Apache JMeter test plan.
You can see the generated test plan by running the bzt your-test.yaml -gui command.
data-sources is translated to a JMeter CSV Data Set Config element, which means that each thread (virtual user) will pick up a new value from the CSV file on each iteration, like:
virtual user 1 - iteration 1 - 1st line
virtual user 2 - iteration 1 - 2nd line
virtual user 3 - iteration 1 - 1st line
virtual user 1 - iteration 2 - 2nd line
etc.
I don't think so; you're reading the credentials from the CSV file but not using them anywhere. The correct syntax for JMeter variables is ${variable_name_here}, so you need to set the login request body to:
user[email]: ${userName}
user[password]: ${password}
As long as you properly configure JMeter to behave like a real browser, there is no need to use Selenium.
You might be getting false-positive results because your Response Assertion doesn't do anything useful: JMeter automatically considers HTTP status codes below 400 as successful. So instead of checking the status code, I would rather recommend verifying that the user is logged in, i.e. that a "Welcome" message is there, or that the API response has some specific text for a successful login and/or doesn't contain errors.
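In Taurus terms, such a check could look like the sketch below (the "Welcome" string is an assumption; substitute whatever marker your application returns on a successful login):

      assert:
      - contains:
        - 'Welcome'      # text present only after a successful login
        subject: body    # inspect the response body instead of the status code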

OpenLDAP as a Proxy cache only, no local database

I am trying to get a local LDAP proxy cache running. The idea is this:
Currently a computer (A) sends all LDAP requests to a remote LDAP server (L).
Instead, there should be a proxy cache "server" running on A to act as an intermediary between A and L. The cache would store all queries and all their attributes (until it fills up, at which point it starts "recycling").
OpenLDAP's Proxy Cache Engine looks pretty good, but there is not much information about how to set it up. There is an example config file, but I cannot get it to work.
When connected to the internet, running this command will successfully bind me.
ldapwhoami -vvv -h localhost -D "CN=Melka Martin,OU=something,OU=else,(...),DC=int,DC=somedomain,DC=com" -x -w <passwd>
However, each following request still polls the remote LDAP server (as shown by sniffing the connection; when the machine is disconnected from the internet, the local bind fails).
In the slapd output there is a lot of stuff, but the relevant lines are:
56449abd QUERY NOT ANSWERABLE
56449abd QUERY CACHEABLE
This is the current config file, which should cache all the bind requests
database ldap
suffix "dc=int,dc=somedomain,dc=com"
rootdn "cn=admin,dc=int,dc=somedomain,dc=com"
rootpw <something>
uri ldap://dc-04.int.somedomain.com:389
overlay pcache
pcache hdb 100000 1 1000 100
pcacheAttrset 0 *
pcacheTemplate (sn=) 0 3600
pcacheBind (sn=) 0 3600 sub dc=int,dc=somedomain,dc=com
cachesize 200
directory /var/lib/ldap
index objectClass eq
index cn eq,sub
I have created the /var/lib/ldap directory, added a default DB_CONFIG file there, and then edited the slapd.conf file. If there is more to do to set this up properly, could you instruct me?
I am a little confused about the rootdn/rootpw directives. They are used to write into the remote LDAP server, correct?
Edit: Below here is the original issue, which was resolved by using the full proper DN.
As this is supposed to only be a proxy cache, I shouldn't need to set up a local database. So the config file looks like this:
include /etc/openldap/schema/core.schema
include /etc/openldap/schema/cosine.schema
include /etc/openldap/schema/inetorgperson.schema
include /etc/openldap/schema/nis.schema
moduleload pcache.la
database ldap
suffix "dc=int,dc=somedomain,dc=com"
rootdn "dc=int,dc=somedomain,dc=com"
uri ldap://dc-04.int.somedomain.com:389
overlay pcache
pcache hdb 100000 1 1000 100
pcacheAttrset 0 *
pcacheTemplate (sn=) 0 3600
cachesize 20
directory /var/lib/ldap
index objectClass eq
index cn eq,sub
Now I would expect that any request to ldap://localhost would mirror to the remote LDAP, if not in the cache.
I use this command to test the auth on the remote server:
ldapwhoami -vvv -h dc-04.int.somedomain.com -p 389 -D melka@somedomain.com -x -w <passwd>
Which works well, I get the auth.
However, when I try to run the same command on localhost:
ldapwhoami -vvv -h localhost -p 389 -D melka@somedomain.com -x -w <passwd>
It fails, saying
ldap_initialize( ldap://localhost:389 )
ldap_bind: Invalid DN syntax (34)
additional info: invalid DN
Slapd is listening on localhost, netstat contains this line:
tcp 0 0 0.0.0.0:389 0.0.0.0:* LISTEN 10352/slapd
Is there something I am missing?
Thanks
melka@somedomain.com
That may be a DN in the target LDAP system, who knows, but it certainly isn't in OpenLDAP. You need to provide a proper Distinguished Name.
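For comparison, the bind that does work (from the edit at the top of the question) supplies the full DN:

ldapwhoami -vvv -h localhost -p 389 -D "CN=Melka Martin,OU=something,OU=else,(...),DC=int,DC=somedomain,DC=com" -x -w <passwd>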
