Disable LDAP referrals - Windows

I'm currently trying to integrate an SSO with Active Directory. The SSO Service has told me that my server is responding with LDAP "referrals".
Is there a way to disable these referrals? There is only one server/domain, and the server is the domain controller, so I don't know why I would even be getting these in the first place. Any help is appreciated. Thanks!

Turns out it was that the "base DN" in the search wasn't specific enough. Apparently you'll get a referral if you don't pinpoint the exact OU or CN that the user resides in. Since I only really have one active OU, I just hard-pointed the base DN at it and everything seems to be working now.
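For illustration, this is roughly what the narrowed search looks like from a generic LDAP client - here Node's ldapjs; the DC hostname, bind account, OU and username are made-up placeholders:

```ts
import ldap from "ldapjs";

// Hypothetical domain controller and service account -- substitute your own values.
const client = ldap.createClient({ url: "ldap://dc.example.local:389" });

client.bind("CN=svc_sso,OU=Service Accounts,DC=example,DC=local", "secret", (err) => {
  if (err) throw err;

  // Point the base DN at the OU the users actually live in, not the domain root.
  const base = "OU=Staff,DC=example,DC=local";

  client.search(base, { scope: "sub", filter: "(sAMAccountName=jdoe)", attributes: ["cn", "mail"] }, (err, res) => {
    if (err) throw err;
    res.on("searchEntry", (entry) => console.log("entry:", entry.dn.toString()));
    res.on("searchReference", (ref) => console.log("referral:", ref.uris)); // what the SSO was tripping over
    res.on("end", () => client.unbind());
  });
});
```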

Instead of port 389, use the Microsoft-specific port 3268.
From MSDN:
Avoid unnecessary SearchResultReference referral chasing
With referral chasing enabled, your code could go from domain to domain in the Active Directory tree trying to satisfy the request if the query cannot be satisfied by the initial domain. This method can be extremely time-consuming. When performing a query for objects and the domain for the objects is unknown, use the global catalog as a base for the search instead of using referral chasing.
then:
Connecting to the Global Catalog
There are several ways to connect to a global catalog. If you are using LDAP, then use port 3268 in the ldap_open or ldap_init calls.
You may think everything is satisfied by the initial (only!) domain, but... this is a bureaucracy, and a list of 1 thing is still a list.
When you create a Security Group, you can make it Global or Domain Local. If the user belongs to a Global Group, as in my case, AD automatically assumes there may be more information to be found in the Global Catalog, so a query to port 389 will generate 3 referrals. There are probably other reasons referrals are triggered.
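As a sketch of the global-catalog variant (again with a hypothetical host and account): point the client at port 3268 and you can search from the domain root without the SearchResultReference entries coming back:

```ts
import ldap from "ldapjs";

// The global catalog listens on 3268 (3269 for LDAPS) on DCs that hold the GC role.
const gc = ldap.createClient({ url: "ldap://dc.example.local:3268" });

gc.bind("svc_sso@example.local", "secret", (err) => {
  if (err) throw err;

  // Searching from the domain root is fine here; the GC answers for the whole forest.
  // Note the GC only carries a partial attribute set, so request attributes replicated to it.
  gc.search("DC=example,DC=local", { scope: "sub", filter: "(sAMAccountName=jdoe)", attributes: ["cn"] }, (err, res) => {
    if (err) throw err;
    res.on("searchEntry", (entry) => console.log(entry.dn.toString()));
    res.on("end", () => gc.unbind());
  });
});
```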
I had to solve this issue because I had many OUs directly below the top level, all of which I wanted to query in one authentication pass.
In particular, ProFTPD's mod_ldap.c was distracted by these referrals: it followed them in separate LDAP transactions without binding with the same credentials as the initial query. Although the referrals added nothing, the LDAP library must have returned an opaque error.

Related

How to remotely connect to a local Elasticsearch server - in a secure way of course

I have been playing around with creating a web app that uses Elasticsearch to perform queries. Currently everything is in production on the local host; let's say Elasticsearch runs at 123.123.123.123:9200. All fun and games, but once the web application (React) is finished, it should be able to send its queries to that local Elasticsearch database.
I have been reading around on how to get this done in a proper and, most of all, secure way. My summary of it all is currently:
"First off, exposing an Elasticsearch node directly to the internet without protections in front of it is usually bad, bad news." (see here: Accessing elasticsearch from a public domain name or IP).
Another interesting blog I found: https://code972.com/blog/2017/01/dont-be-ransacked-securing-your-elasticsearch-cluster-properly-107.
The problem with the above-mentioned sources is that they are a bit older, so I am not sure whether they are still up to date.
Therefore the following questions:
Is nginx sufficient to act as a secure middleman, passing the queries from the end-users to elastic?
What is the difference at that point with writing a backend into the react application (e.g. using node and express)?
What is the added value, taking into account Elasticsearch's built-in security (usernames, passwords, API keys, certificates, HTTPS, ...)?
I am reading a lot about using a VPN or tunneling. I have the impression that these solutions are more geared towards a corporate/collaborative approach. Say I am running my front-end on a live server: I can use tunneling to show my work to colleagues or my employer. A VPN would be more realistic for allowing employees (wish I had them, I'm just a CS student) to access, for example, the database within my private network - say an employee needs to access Kibana to adapt something like an API key (just making something up here); he/she could use a VPN connection for that.
Thank you so much for helping me clarify the above-mentioned points!
TLS, authorisation and access control are free for the Elastic Stack, and have been for a while. I'd start by looking at the docs, as it's an easy way to natively secure your cluster.
As for nginx: it can be useful for rate limiting or blocking specific queries, for example, but it's another thing to configure and maintain.
From a client point of view it would really only matter if you are using the official Elasticsearch clients and you use nginx to change the way the API responds to the client (e.g. path rewrites, rate limiting).
On the built-in security: it's free, it's native, and it's easy to manage via Kibana.
On VPNs and tunneling: I'd follow the docs to secure Elasticsearch and then see if you need this at some point in the future. This would be handled outside Elasticsearch anyway, and you'd still want to secure Elasticsearch itself.
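To make the native-security option concrete, here is a rough sketch using the official @elastic/elasticsearch Node client with v8-style options; the node URL, CA certificate path and API key are placeholders:

```ts
import { readFileSync } from "node:fs";
import { Client } from "@elastic/elasticsearch";

// Placeholder node URL, CA certificate and API key -- substitute your own.
const client = new Client({
  node: "https://123.123.123.123:9200",
  auth: { apiKey: "BASE64_ENCODED_API_KEY" },
  tls: { ca: readFileSync("./http_ca.crt"), rejectUnauthorized: true },
});

async function run() {
  // Any query issued by the backend goes over HTTPS with the API key attached.
  const result = await client.search({ index: "my-index", query: { match_all: {} } });
  console.log(result.hits.hits);
}

run().catch(console.error);
```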
The point about exposing Elasticsearch nodes directly to the internet is that it increases your vulnerability in principle. You should follow the rule of exposing the least possible "surface" of your system to the internet.
A good practice is to hide from the internet whatever doesn't need to be there, even when it is well protected. It takes ~20 minutes for cyber attacks to hit any newly exposed service (see a showcase).
So I suggest you set up a private network, such as a traditional VPN or an SDP product such as Shieldoo Mesh.

Orbit-db security, how to prevent illicit docstore updates

I have a specific use case for orbit-db but I am a bit fuzzy about a certain security implication.
I'm developing a web app where a user logs in through a Tronweb wallet account.
People can post questions, and other people can answer them and get paid for it.
In order to do so, I add the public key of the respondent to the question and save it to Orbit db.
Now, it is my understanding that access to any orbit-db instance is by default given to the app (identity) that creates it, or you can add custom access controllers. Let's say I want to create a db to manage tags: I could create an identity based on my own Tronlink account and require a login to create those tags. Nobody else would be able to access that db.
Now what I am a bit fuzzy about is what happens in this flow:
An OP creates a question, and a respondent registers an answer.
When the OP accepts the answer, payment through the Tronlink plugin will be executed.
But since OrbitDB can run without a server (i.e. p2p, based on a local IPFS node), what prevents anybody from setting a breakpoint in the client-side JavaScript code, getting a handle to the db instance, and executing an update call to change the respondent's address locally in that question document, after which it will be synced to other nodes?
I store the public key of the respondent in OrbitDB, but the transaction is still confirmed by the OP through the Tronlink wallet plugin, of course. But still, you cannot expect all users to check it every time.
Since there is no server involved, I don't see how you can prevent corruption of the db client-side by unwanted parties.
Could you enlighten me? Let me know if my question is not clear.
what prevents anybody from setting a breakpoint in the client-side JavaScript code, getting a handle to the db instance, and executing an update call to change the respondent's address locally in that question document
Nothing.
after which it will be synced to other nodes?
This is what makes OrbitDB secure. Each document that is synced to a new node needs to pass through that node's validator function. So honest nodes (the ones that have a correct AccessController) will filter out malicious changes.
What does that mean? A node can be "corrupted", but it can't "corrupt" other nodes in the network.
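To make that concrete, a custom access controller boils down to a canAppend() check that every honest peer runs before accepting an entry. A rough sketch, assuming the orbit-db-access-controllers package (exact import paths and option plumbing vary by version, and the allow-list logic here is purely illustrative):

```ts
// Assumed package layout; check your orbit-db-access-controllers version for the exact paths.
const AccessControllers = require("orbit-db-access-controllers");
const AccessController = require("orbit-db-access-controllers/src/access-controller-interface");

class QuestionAccessController extends AccessController {
  private allowedIds: string[];

  constructor(allowedIds: string[]) {
    super();
    this.allowedIds = allowedIds;
  }

  static get type() { return "question-ac"; }

  // Every node runs this for entries it appends locally *and* entries it replicates
  // from peers, so a tampered local copy is simply rejected by honest nodes.
  async canAppend(entry: any, identityProvider: any): Promise<boolean> {
    if (!this.allowedIds.includes(entry.identity.id)) return false;  // not an allowed writer
    return identityProvider.verifyIdentity(entry.identity);          // signature must be genuine
  }

  static async create(_orbitdb: any, options: any) {
    return new QuestionAccessController(options.allowedIds ?? []);
  }
}

AccessControllers.addAccessController({ AccessController: QuestionAccessController });
// Then open the docstore with { accessController: { type: "question-ac", allowedIds: [...] } }.
```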

How to handle user roles with Rethinkdb?

In RethinkDB, there does not seem to be built-in support for user roles/access permissions.
This seems to be a common feature in most established databases, including MongoDB. We are worried that this gives processes that have access to the database too much access and us as developers little control over who can access what, leading to potential security issues.
I'm wondering: How big of an issue is this? Is there an alternative way to replicate this functionality without RethinkDB supporting it out of the box?
EDIT:
As of RethinkDB 2.3, which was just released, you can now add users and ACLs!
2.3 Release Blog Post
Users documentation
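As a rough sketch of what the 2.3 users and grants look like from the official JavaScript driver (database, table, user name and password below are hypothetical):

```ts
import r from "rethinkdb";

async function setupUser() {
  const conn = await r.connect({ host: "localhost", port: 28015 });

  // Users live in the system table rethinkdb.users (RethinkDB 2.3+).
  await r.db("rethinkdb").table("users")
    .insert({ id: "app_reader", password: "a-strong-password" })
    .run(conn);

  // Grant read-only access to a single table.
  await r.db("app").table("posts")
    .grant("app_reader", { read: true, write: false })
    .run(conn);

  await conn.close();
}

setupUser().catch(console.error);
```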
Original Answer
Access control (sometimes called ACLs) for RethinkDB is on the roadmap, but in the meantime I recommend setting up multiple RethinkDB instances divided by user permissions, along with an auth key:
https://rethinkdb.com/docs/security/#securing-the-driver-port
RethinkDB allows you to set an authentication key by modifying the cluster_config system table. Once you set an authentication key, client drivers will be required to pass the key to the server in order to connect.
Hope that helps!
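And for the pre-2.3 auth-key approach, the driver-side change is just the connection options; host, port and key below are placeholders, and the exact option name may vary between driver versions:

```ts
import r from "rethinkdb";

async function connectWithAuthKey() {
  // Every client has to present the cluster-wide auth key to connect.
  const conn = await r.connect({
    host: "db.internal",
    port: 28015,
    authKey: "my-cluster-auth-key",
  });

  const cursor = await r.db("app").table("posts").limit(5).run(conn);
  console.log(await cursor.toArray());
  await conn.close();
}

connectWithAuthKey().catch(console.error);
```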

Inter-Gear Communication for Openshift?

I'm trying to create an app such that gear 2, according to this model, can be accessed by gears 3, 4, ... n when using the --scaling option.
The idea is for this structure to be the head of a chain of relays. I'm trying to find where the relevant information is so that all the following gears have the same behavior.
I've found no documentation that describes how to reach gear 2 (the Primary DNAS) with a URL (internal/external IP:port) or otherwise, so I'm a little lost as to how to let the app scale properly.
I should mention that so far I've only used bash scripting, but I'm not opposed to writing the program in other languages, so long as it follows that structure in OpenShift.
The end result is hopefully a scalable instance of SHOUTcast on OpenShift.
To Be Clear:
I'm developing a cartridge, not using the DIY cartridge. All I understand of OpenShift is in this guide, but of course I'm limited because I'm new.
I'm stuck trying to figure out how to have the cartridge handle additional gears using the first gear as a relay. I am not confused about how OpenShift routes requests externally to the gears and load balances them. I'm not lost on how to use port forwarding to connect to my app; the goal would be to design the cartridge so this wouldn't be a requirement at all, and to only use external routes.
The problem, as described above, is that additional gears need some extra configuration: they need an available source (what better than the first gear?). In fact the solution to my issue might be to somehow set up this cartridge to bypass haproxy with an external route that only goes to the first gear.
GitHub for those interested - pass it around, it'll remain public. Currently this works only as a standalone; scaling it (what I'd like to fix) causes issues. I've been working on this too long by myself, so have at it :)
There's a great KB article that explains how the routing works on OpenShift gears here https://help.openshift.com/hc/en-us/articles/203263674-What-external-ports-are-available-on-OpenShift-.
On a scalable application, haproxy handles all the traffic routing to your gears. The only way to access your gears is through the ports mentioned in the article above. rhc does however provide a port-forwarding option that allows you to access things like MySQL directly from your local machine.
Please note: We don't allow arbitrary binding of ports on the externally accessible IP address.
It is possible to bind to the internal IP with port range: 15000 - 35530. All other ports are reserved for specific processes to avoid conflicts. Since we're binding to the internal IP, you will need to use port forwarding to access it: https://openshift.redhat.com/community/blogs/getting-started-with-port-forwarding-on-openshift

1 A-record for every subdomain (10000+); any potential issues? Any other solution?

Most solutions I've read here for supporting subdomain-per-user at the DNS level are to point everything to one IP using *.domain.com.
It is an easy and simple solution, but what if I want to point first 1000 registered users to serverA, and next 1000 registered users to serverB? This is the preferred solution for us to keep our cost down in software and hardware for clustering.
(Diagram quoted from the MS IIS site: http://learn.iis.net/file.axd?i=1101)
The most logical solution seems to be 1 A-record per subdomain in the zone data files. BIND doesn't seem to have any size limit on zone files; it is only restricted by available memory.
However, my team is worried about the latency of getting a new subdomain up and ready, since creating a new subdomain consists of inserting a new A-record and restarting the DNS server.
Is the performance impact of restarting the DNS server something we should worry about?
Thank you in advance.
UPDATE:
Seems like most of you suggest I use a reverse proxy setup instead:
(ARR is IIS7's reverse proxy solution; diagram: http://learn.iis.net/file.axd?i=1102)
However, here are the CONS I can see:
single point of failure
cannot strategically set up servers in different locations based on IP geolocation.
Use the wildcard DNS entry, then use load balancing to distribute the load between servers, regardless of which client they are.
While you're at it, skip the URL rewriting step and have your application determine which account it is based on the URL as entered (you can just as easily determine what X is in X.domain.com as in domain.com?user=X).
EDIT:
Based on your additional info, you may want to develop a "broker" that stores which clients are to access which servers. Make that public-facing, then draw from the resources associated with the client as recorded by the broker. Your front-end can be load balanced, and then you can pull from the file/db servers based on who they are.
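As a bare-bones illustration of deriving the account from the host name (hypothetical domain, plain Node HTTP for brevity):

```ts
import http from "node:http";

// Derive the account from the left-most label of the Host header,
// instead of rewriting X.domain.com to domain.com?user=X. "domain.com" is a placeholder.
const server = http.createServer((req, res) => {
  const host = (req.headers.host ?? "").split(":")[0]; // strip any :port suffix
  const account = host.endsWith(".domain.com") ? host.split(".")[0] : null;
  res.end(account ? `Rendering site for account "${account}"\n` : "Unknown account\n");
});

server.listen(8080);
```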
The front-end proxy with a wild-card DNS entry really is the way to go with this. It's how big sites like LiveJournal work.
Note that this is not just a TCP-layer load balancer - there are plenty of solutions that will examine the host part of the URL to figure out which back-end server to forward the query to. You can easily do it with Apache running on a low-spec server with suitable configuration.
The proxy ensures that each user's session always goes to the right back-end server and most any session handling methods will just keep on working.
Also the proxy needn't be a single point of failure. It's perfectly possible and pretty easy to run two or more front-end proxies in a redundant configuration (to avoid failure) or even to have them share the load (to avoid stress).
I'd also second John Sheehan's suggestion that the application just look at the left-hand part of the URL to determine which user's content to display.
If using Apache for the back-end, see this post too for info about how to configure it.
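As a sketch of what such a host-aware front end boils down to (in Node rather than Apache; the subdomain-to-server map is hypothetical and would in practice come from wherever you record which client lives on which server):

```ts
import http from "node:http";

// Hypothetical map of client subdomains to back-end servers.
const backends: Record<string, string> = { alice: "10.0.0.11", bob: "10.0.0.12" };

const proxy = http.createServer((clientReq, clientRes) => {
  const sub = (clientReq.headers.host ?? "").split(".")[0];
  const target = backends[sub] ?? "10.0.0.10"; // fall back to a default pool member

  const upstream = http.request(
    { host: target, port: 80, path: clientReq.url, method: clientReq.method, headers: clientReq.headers },
    (upstreamRes) => {
      clientRes.writeHead(upstreamRes.statusCode ?? 502, upstreamRes.headers);
      upstreamRes.pipe(clientRes);
    }
  );

  upstream.on("error", () => { clientRes.writeHead(502); clientRes.end(); });
  clientReq.pipe(upstream);
});

proxy.listen(80);
```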
If you use tinydns, you don't need to restart the nameserver if you modify its database and it should not be a bottleneck because it is generally very fast. I don't know whether it performs well with 10000+ entries though (it would surprise me if not).
http://cr.yp.to/djbdns.html
