In RethinkDB, there does not seem to be built-in support for user roles/access permissions.
This seems to be a common feature in most established databases, including MongoDB. We are worried that this gives processes that have access to the database too much access and us as developers little control over who can access what, leading to potential security issues.
I'm wondering: How big of an issue is this? Is there an alternative way to replicate this functionality without RethinkDB supporting it out of the box?
EDIT:
As of RethinkDB 2.3, which was just released, you can now add users and ACLs!
2.3 Release Blog Post
Users documentation
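In practice, with the 2.3+ Python driver, adding a user and granting it scoped permissions looks roughly like this (the database, table, and user names below are just placeholders):

    # Sketch against RethinkDB 2.3+ using the Python driver; names are placeholders.
    import rethinkdb as r

    conn = r.connect(host="localhost", port=28015, user="admin", password="")

    # Create a user account in the system users table...
    r.db("rethinkdb").table("users").insert(
        {"id": "report_user", "password": "secret"}
    ).run(conn)

    # ...and grant it read-only access to a single table.
    r.db("crm").table("contacts").grant(
        "report_user", {"read": True, "write": False}
    ).run(conn)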
Original Answer
Access control (ACLs) for RethinkDB is on the roadmap, but in the meantime I recommend setting up multiple RethinkDB instances, divided by user permissions, along with an auth key:
https://rethinkdb.com/docs/security/#securing-the-driver-port
RethinkDB allows you to set an authentication key by modifying the
cluster_config system table. Once you set an authentication key,
client drivers will be required to pass the key to the server in order
to connect.
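For example, a minimal sketch with the Python driver (host and key are placeholders):

    # Pre-2.3 sketch: set a cluster-wide auth key, then connect with it.
    import rethinkdb as r

    admin_conn = r.connect(host="localhost", port=28015)

    # Write the key into the cluster_config system table.
    r.db("rethinkdb").table("cluster_config").get("auth").update(
        {"auth_key": "long-random-secret"}
    ).run(admin_conn)

    # From now on, drivers must pass the key in order to connect.
    conn = r.connect(host="localhost", port=28015, auth_key="long-random-secret")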
Hope that helps!
Related
So I have decided to rewrite an application I have been writing in Node.js in Elixir, because Elixir provides out of the box much of what added extra complexity when working with Node.
My issue is something I didn't have quite right in Node; it is becoming just as complex in Elixir, and I am not entirely sure how to approach it.
I am trying to recreate a lot of how Discord does permissions. I am essentially building a CRM system, with different roles like "Sales Manager", "Sales", "Customer Service Rep", etc., but they are all able to do different things based on their "role".
Some things I need to be able to do: update a permission on the fly for a person or role. Maybe the "Sales Manager" role can't look at company financial data like an "Accountant" can, but we need to give that specific person access for a few days. Or we have a "Customer Service Rep" and we give that entire role the ability to add things to a calendar. I also would like to have the ability to kill sessions.
So there are a few approaches I've seen suggested around the Elixir forums, like:
Using Guardian: I really want to like tokens, and not having to hit the database on every request sounds wonderful, but I don't think it's practical for this unless there is a good solution for updating tokens on the fly, which I haven't found.
Giving each person their own process and just killing and restarting that process whenever their permissions change. This seems pretty neat, but I'd rather not kill processes unless there is an actual error; I think this solution will come with big problems, like tracing problems. Although I am not familiar enough to know whether it would actually cause problems, or whether it is a bad solution for other reasons.
Use Guardian with Guardian_DB, which then defeats the purpose of using tokens, but at least I'd have a trackable session. My only problem with this is that I do plan on using a load balancer, so that if a socket connection dies I can reconnect it to the same server, and I am not sure there is a way to do that with tokens, or whether the socket itself has a session attached to it. This is not really that big of an issue, though, and is pretty close to what I had with Node.js.
Use Redis, which I'd like to stay away from: update session data in Redis keyed by user_id when changes occur, and hit Redis on every request to see if the user has permissions. I plan to put this across multiple servers eventually, which means ETS is not viable unless I can load-balance socket connections like I could in Node.js.
So I guess my questions are,
Can I attach sessions to sockets? Is this a bad idea?
Should I still use a token, and just use Redis to check the token on every request?
Is a token still a better choice than a session?
Is there a much better/easier solution that I have not even mentioned?
I'm sorry this was pretty drawn out and long; I've never had to do something as permission-bound as this project professionally, and I am pretty new to Elixir.
Phoenix channels are stateful. You can put data in the assigns field and it stays there for the duration of the connection. That is where you normally put your user_id after authenticating the user on join.
I also use the channel assigns to store client state that I need on the server.
As for the role-to-permissions question, I'm doing exactly this: I load the role permissions from the database on startup and build an ETS store with them. You can do the same with a Task or a GenServer. If the permissions change for a given role, I update the database and the ETS table.
My user model supports a list of roles for each user.
When I need to validate the permissions for a given user, I call the Permission model API, like Permission.has_permission?("create-room", user, scope). I have two levels of permissions, global and per room. That is what the scope is used for.
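The general shape of that pattern, sketched in Python purely for illustration (in the answer above the cache is an ETS table owned by a Task or GenServer, and the rows come from the database rather than a hard-coded list):

    # Illustrative sketch only; role, scope, and permission names are made up.
    ROLE_PERMISSIONS = {}  # in-memory cache, rebuilt from the database on startup

    def load_permissions(rows):
        """rows: iterable of (role, scope, permission) tuples loaded from the database."""
        ROLE_PERMISSIONS.clear()
        for role, scope, permission in rows:
            ROLE_PERMISSIONS.setdefault((role, scope), set()).add(permission)

    def has_permission(permission, user_roles, scope="global"):
        """Check the cached table; called for every authorization decision."""
        return any(
            permission in ROLE_PERMISSIONS.get((role, scope), set())
            for role in user_roles
        )

    # Example: a user whose roles include "sales_manager" asks to create a room.
    load_permissions([("sales_manager", "global", "create-room")])
    print(has_permission("create-room", ["sales_manager"]))  # True

When a role's permissions change, you update the database and then rebuild (or patch) the cached table, which is the ETS refresh described above.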
I would like to store user profile information. After researching a bit online, I am confused between the following options:
Use an LDAP server (example: OpenDJ) - I can write Java clients which can interact with the LDAP server using LDAP APIs.
Store user profiles in a database as JSON documents (like in Elastic DB) - the NoSQL databases can then index the documents to improve lookup time.
What are the factors that I should keep in mind before selecting one of the approaches?
For a start, if you are storing passwords, then using LDAP is a no-brainer IMO. See http://smart421.com/smart-identity-and-fraud/why-bother-with-an-ldap-anyway/ .
Otherwise I would recommend you do a PoC with each solution (do not forget to add indexes for OpenDJ, and you may also use Rest2LDAP) and see how they fill your needs. Both products are open source, so it's easy to get started.
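As a rough idea of the LDAP side of such a PoC, here is a sketch using Python's ldap3 library; the host, bind DN, base DN, and attribute names are placeholders for whatever your OpenDJ instance uses:

    # Hypothetical PoC lookup against a local OpenDJ instance.
    from ldap3 import ALL, Connection, Server

    server = Server("ldap://localhost:1389", get_info=ALL)
    conn = Connection(server, "cn=Directory Manager", "password", auto_bind=True)

    # Look up a user profile by uid and read a few profile attributes.
    conn.search(
        search_base="ou=People,dc=example,dc=com",
        search_filter="(uid=jdoe)",
        attributes=["cn", "mail", "telephoneNumber"],
    )
    for entry in conn.entries:
        print(entry.entry_to_json())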
If your user population is a known group that may already have accounts in an existing LDAP repository, or if user account information needs to be shared between systems, then it makes sense to use and add on to the existing LDAP repository.
If you are starting out from scratch and have mainly external, unknown users who have no other interaction with your infrastructure but this one application, then LDAP is not a good choice IMO because of the overhead of creating and managing the server. In that case a lightweight JSON approach seems better suited (even though the L in LDAP stands for "lightweight").
The number of expected users is less of a consideration - you need to tread carefully with very large populations in either scenario.
See this question as well for additional insights: Reasons to store users' data in LDAP instead of RDBMS
I am learning Elasticsearch. I wanted to know how safe (in terms of access control and validating user access) it is to access the ES server directly from the JavaScript API rather than going through your backend. Is it safe to access ES directly from the JavaScript API?
Depends on what you mean by "safe".
If you mean "safe to expose to the internet", then no, definitely not, as there isn't any access control and anyone will be able to insert data or even drop all the indexes.
This discussion gives a good overview of the issue. Relevant section:
Just as you would not expose a database directly to the Internet and let users send arbitrary SQL, you should not expose Elasticsearch to the world of untrusted users without sanitizing the input. Specifically, these are the problems we want to prevent:
Exposing private data. This entails limiting the searches to certain indexes, and/or applying filters to the searches.
Restricting who can update what.
Preventing expensive requests that can overwhelm or crash nodes and/or the entire cluster.
Preventing arbitrary code execution through dynamic scripts.
It's most certainly possible, and it can be "safe" if, say, you're using it as an internal tool behind some kind of authentication. In general, no, it's not secure. The Elasticsearch API can create, delete, update, and search, which is not something you want to give a client access to, or they could essentially do any or all of those things.
You could, in theory, create role-based auth with Elasticsearch Shield, but it's far from standard practice. It's not really any more difficult to implement search on your backend and just have a simple call return the search results.
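To illustrate what "search on your backend" can look like, here is a sketch using the official Python client; the index name, fields, and tenant filter are made-up examples - the point is only that the query is built server-side, so the browser never sends raw Elasticsearch requests:

    # Sketch of a constrained, server-side search wrapper (elasticsearch-py, 7.x-style call).
    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")

    def search_products(user_query, tenant_id, size=10):
        """Fixed index, server-built query, capped result size."""
        body = {
            "size": min(size, 50),  # cap size to avoid expensive requests
            "query": {
                "bool": {
                    "must": {"match": {"title": user_query}},
                    # Filter applied server-side so clients cannot read other tenants' data.
                    "filter": {"term": {"tenant_id": tenant_id}},
                }
            },
        }
        return es.search(index="products", body=body)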
I am trying to develop an application with Tomcat running on several computers on the same LAN, representing several nodes, each of which runs an application with a single shared session (e.g. a shared document editor such as Google Docs). In my understanding so far, I need a single shared session, several users need to update the document simultaneously, and each other's updates should be reflected in their web interfaces almost immediately. Can I achieve this with Tomcat's clustering feature (http://tomcat.apache.org/tomcat-7.0-doc/cluster-howto.html#Configuration_Example), or is that just a failure recovery system?
Tomcat's clustering feature is meant for failover - if one node fails, the user can carry on working while being transparently sent to another node, without needing to log in again.
What you are trying to achieve is a totally different scenario, and I think using the session for this is just wrong. If you go back to the Google Docs example, how would you achieve granting (or revoking) document access to another user? What do you do when the session times out - create the document again? Also, how would you define which users are able to access selected documents?
You would need to persist this data somewhere (a DB?) anyway, so implement or reuse an existing ACL system where you can share information about users and document permissions.
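As a sketch of how small that ACL layer can be, here is an illustrative example; SQLite is used only to keep it self-contained, and in practice the table would live in whatever database all your Tomcat nodes share:

    # Hypothetical document-permission check backed by a shared database.
    import sqlite3

    db = sqlite3.connect("acl.db")
    db.execute(
        """CREATE TABLE IF NOT EXISTS document_permissions (
               user_id INTEGER, document_id INTEGER, permission TEXT
           )"""
    )

    def can_access(user_id, document_id, permission):
        """True if the user has been granted this permission on the document."""
        row = db.execute(
            "SELECT 1 FROM document_permissions"
            " WHERE user_id = ? AND document_id = ? AND permission = ?",
            (user_id, document_id, permission),
        ).fetchone()
        return row is not None

    # Grant user 1 edit access to document 42, then check it from any node.
    db.execute("INSERT INTO document_permissions VALUES (1, 42, 'edit')")
    print(can_access(1, 42, "edit"))  # True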
I am wondering if there are any risks or pitfalls involved in using the WindowsAuthentication_OnAuthenticate event to create a FormsAuthentication ticket to store user roles. I grab these roles from a couple different queries (I don't have permission to change the db schema so...). I don't want to use ASP.NET's role manager, and my concern is that if I don't use a cookie (at least one that expires every 30 mins or so) then performance might be an issue, since WindowsAuthentication_OnAuthenticate would get called for every request and I'd be making these db calls constantly (not to mention having to decrypt the cookie and build a custom principal for my Context on the Application_AuthenticateRequest event).
Yes and no. If it's compromised, yes; if not, no. From a security standpoint it's not a good idea, and this approach has been compromised before, although it was quickly patched (see the POET vulnerability).
It's for you to decide whether the risk is worth it, which it generally isn't.
Why not consider a server-side cache to store this data instead, and only check the database for the roles when the cache is empty?