I have set up my class to control access using ACLs. Only the creator of the object can view or edit the object. There are many thousands of these objects in use in a production environment.
I have a new requirement to, under a certain circumstance, allow a user to remove another user's objects. The simplest analogy is to imagine an admin feature that lets moderators remove all comments by a certain abusive user.
Since the client cannot do this, I am defining a cloud function to handle it, which will be able to use the master key. I pass the user ID from the client to the cloud function, and it should remove all comments by this user.
However, the cloud function is not able to find the comments since they are tied to the user only by ACL. As far as I can tell, it is not possible to query by ACL. Is this accurate?
What is the correct approach here? Do I need an additional column besides the ACL to identify the commenter a second time, simply so I can query by it? This seems duplicative. I will also need to update the many existing records, copying the user specified in the ACL into the new column. Is this even possible?
Or is there some way to build an ACL for use by the cloud function and use it instead of the master key so that the query searches as though it were the user in question?
My final (last-resort) option is to fetch all of the objects and then iterate over them, checking the ACLs. This is obviously a poor solution for scale and performance, since I would need to fetch potentially hundreds of thousands of items to check them all.
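For concreteness, here is a minimal sketch of the extra-column approach I'm considering (the Comment class name and the owner pointer column are placeholders I would be adding):

// Cloud Code sketch: the master key bypasses ACLs, and the hypothetical
// "owner" pointer column duplicates the user already implied by the ACL
// so that it can be queried.
Parse.Cloud.define('removeCommentsByUser', async (request) => {
  const owner = Parse.User.createWithoutData(request.params.userId);

  const query = new Parse.Query('Comment');
  query.equalTo('owner', owner);

  // each() pages through all matches, so it copes with large result sets.
  let removed = 0;
  await query.each(async (comment) => {
    await comment.destroy({ useMasterKey: true });
    removed += 1;
  }, { useMasterKey: true });

  return removed;
});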
I'm working on a contract in Rust that creates subaccounts and transfers some tokens to them as part of its functionality. Since I'm repeating the process many times while testing the functions, I wonder if I'm leaving many subaccounts behind. Is there a way to list all the subaccounts an account has?
There is no simple way, because each subaccount is an independent account and the parent account has no control over its subaccounts.
Though you can try to use Indexer for Explorer public database to query accounts table with something like:
SELECT account_id FROM accounts WHERE account_id LIKE '%.youraccount.near';
From the README
NOTE: Please keep in mind that access to the database is shared with everyone in the world, so it is better to make sure you limit the number of queries and that individual queries are efficient.
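For example, you could run that query from Node with the pg client roughly like this (the connection string is a placeholder; the read-only credentials are published in the Indexer for Explorer README):

const { Client } = require('pg');

async function listSubaccounts(parentAccount) {
  // Placeholder: point this at the public Indexer for Explorer database.
  const client = new Client({ connectionString: process.env.INDEXER_DB_URL });
  await client.connect();

  // Parameterized form of the LIKE query above.
  const { rows } = await client.query(
    'SELECT account_id FROM accounts WHERE account_id LIKE $1',
    ['%.' + parentAccount]
  );

  await client.end();
  return rows.map((row) => row.account_id);
}

// listSubaccounts('youraccount.near').then(console.log);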
I'm designing a solution and want to leverage some of Elasticsearch's query capabilities (version 7.x). We are expected to have around 10M documents per index.
Documents might have different 'associations' to what we call 'users' (not necessarily same meaning as in ES) -
associated to all, queryable in any context.
associated to single user, should appear only in this user context searches.
associated to a 'group' of users (of size up to 1000K), should appear in queries for users of this group.
We expect to have a lot of users, in the 100Ks or so, which also means we might have a lot of different groups; any 2 users might form a custom group.
I've been investigating ES's capabilities, and it looks like each solution I came up with has disadvantages:
RBAC - will require creating a lot of roles (per user + per group; can ES even handle that many?)
ABAC - will require creating a lot of users (can ES even handle that many?)
Simple AND clauses on dedicated properties (a complex query template, as explained here) - a sketch of this approach follows right after this list
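To make option 3 concrete, this is roughly the kind of query I have in mind (the visibility, allowed_users and allowed_groups fields are names I'm assuming, not existing mappings), using the official JavaScript client:

const { Client } = require('@elastic/elasticsearch');
const client = new Client({ node: 'http://localhost:9200' });

// Documents visible to a user: ones associated to all, ones shared with the
// user directly, or ones shared with any of the user's groups.
async function searchForUser(userId, groupIds, text) {
  return client.search({
    index: 'documents',
    body: {
      query: {
        bool: {
          must: { match: { content: text } },
          filter: {
            bool: {
              should: [
                { term: { visibility: 'all' } },
                { term: { allowed_users: userId } },
                { terms: { allowed_groups: groupIds } },
              ],
              minimum_should_match: 1,
            },
          },
        },
      },
    },
  });
}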
It is important to note that I have a single user that I will use to query on behalf of the users I will create, should I choose to go down this path.
I came across this question, but I figured things might have evolved since it was answered: Document access control in ElasticSearch
Any other suggestions that I should check out? Maybe even custom 3rd-party solutions?
I'm trying to make a database table for every single username. I see that for every username I can add more columns to its row, but I want to attribute a full table to each one. How can I do that?
Thanks,
Eli
First let me say: what you are trying to do sounds like really, really bad database design, and you should rethink your idea of creating a table per user. To get help with this, you should add much more detail about your reasoning to the question in order to get a good answer. As far as I know, there is also a maximum number of classes you can create on Parse, so sooner or later you will run into problems, either performance-wise or due to technical limitations of the platform.
That being said, you can use the Schema API to programmatically create/delete/update tables of your Parse app. It always requires the master key, so doing this from the client side is not recommended for security reasons. You could put this into a Cloud Code function, for example, and call it from your app/admin tool to create a new table for a user on the fly, or to delete a user's table.
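For example (class and field names below are purely illustrative), such a Cloud Code function using the Schema API could look roughly like this:

// Creates a dedicated class ("table") for one user via the Schema API.
// Cloud Code has access to the master key, which the Schema API requires.
Parse.Cloud.define('createUserTable', async (request) => {
  const className = 'UserData_' + request.params.userId;

  const schema = new Parse.Schema(className);
  schema.addString('title');
  schema.addNumber('score');
  await schema.save();

  return className;
});

// Deleting a user's table works the same way; the class must be empty,
// so purge() removes its rows first:
//   const schema = new Parse.Schema(className);
//   await schema.purge();
//   await schema.delete();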
Again, it's better not to do this; think about a better way to design your database, though it would be out of scope to discuss that here.
In Meteor, I have a little confusion between Session and Local Collection.
I know that Session is a temporary reactive key-value store, client-side only, and is cleaned on page refresh.
A local collection seems to be the same: reactive, temporary client-side storage, cleared on page refresh, but with more flexible functions like insert, update & remove queries, just like a server-side Mongo collection.
So I guess I could manage everything in Local Collection without Session, or, everything in Session without Local Collection.
But what is the best and most efficient way to use Session and/or local collections?
Simply put: when should I use Session, and when should I not?
And when should I use a local collection, and when not?
As I read your question, I told myself that this would be a very easy question, but then I found myself scratching my head. I tried to come up with an example that you could accomplish only with Session or only with collections, but I didn't find any such use case. So let's roll things up from the beginning. Basically, you already answered the question on your own, because it is the little bit of sugar that makes collections something special.
When to use a collection?
Basically, a collection is a database artifact. Imagine you have a client-server application: all the data is persisted in the server-side storage. Now you can use a client-side collection to provide the user with a small subset of the server's collection. So a client collection is a database with a reduced amount of data. The advantage is that you can access the collection with queries, and you can use the same queries on server and client. In addition, a collection always contains multiple objects of the same type. Sometimes you produce data on the client, for the client, with no server interaction needed. Then you can use a local collection. A local collection provides the same functionality as a normal collection without server communication. This should be used if you have multiple objects with the same structure, especially if you'd like to use query operators.
You can also save the data inside a Session object. Session objects can contain multiple objects as well. But imagine you want to find an object in an object array indexed by a particular id: then you need to iterate through the whole array in order to find this object. You have to write additional logic that a collection handles like magic. Further, collections return cursors. A cursor is a reactive object that only changes if the selected data changes. That means that if you use find with an id, the result only rerenders when the object for this id changes. With Session you can't do that: when a Session variable changes, you need to rerender all depending objects.
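A small client-side sketch of the difference (the names are made up):

// Local collection: null means client-only, nothing is synced to a server.
const Drafts = new Mongo.Collection(null);
Drafts.insert({ _id: 'a1', title: 'First draft', done: false });

// Reactive, query-based lookup: this only reruns when document 'a1' changes.
Tracker.autorun(() => {
  console.log('draft:', Drafts.findOne('a1'));
});

// Session: one reactive key-value store; every computation that reads the
// key reruns whenever the key changes.
Session.set('selectedDraftId', 'a1');
Tracker.autorun(() => {
  console.log('selected id:', Session.get('selectedDraftId'));
});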
When to use a session?
For everything else. Sessions are often just small objects that contain some configuration logic; it is basically a single object rather than multiple occurrences of equal objects. I don't have time now to go into detail, but if your data does not fit the collection use cases, you can use Session.
Have a look at this post that describes why sessions should not be overused.
I assume that by local collection you mean: new Mongo.Collection(null)
The difference is that local collections do not survive hot code pushes. A refresh will erase Session, but a hot code push will not; there's special code in Meteor to persist the values of Session variables in the case of a hot code push.
You would use Session whenever you're storing temporary values that do NOT need to be persisted to the database.
Trivial examples could include a user's selection of filters or the item in an index view that is currently selected.
Manipulated data in minimongo (insert, update, delete, etc.) is intended to be sent back to the server and stored in the database. For example, this could be updating a user's profile information.
I need to synchronize my relational database (Oracle or MySQL) to CouchDB. Does anyone have any idea how this is possible? If it is possible, how can we notify CouchDB of any changes that happen on the relational DB?
Thanks in advance.
First of all, you need to change the way you think about database modeling. Synchronizing to CouchDB is not just creating documents for all your tables and pushing them to Couch.
I'm using CouchDB for a site in production, I'll describe what I did, maybe it will help you:
From the start, we have been using MySQL as our primary database. I had entities mapped out, including their relations. In an attempt to speed up the front-end, I decided to use CouchDB as a content repository. The benefit was having fully prepared documents that contained all the relational data, so data could be fetched with much less overhead.
Because the documents can contain related entities - say, a question document that contains all answers - I first decided which top-level entities I wanted to push to Couch. In my example, only questions would be pushed to Couch, and those documents would contain the answers and possibly some metadata, such as tags, user info, etc. When requesting a question on the frontend, I would only need to fetch one document to have all the information I need at that point.
Now for your second question: how to notify CouchDB of changes. In our case, all the changes in our data are done using a CMS. I have a single point in my code which all edit actions call. That's the place where I hooked in a function that persisted the object being saved to CouchDB. The function determines if this object needs persisting (ie: is it a top level entity), then creates a document of this object (think about some sort of toArray function), and fetches all its relations, recursively. The complete document is then pushed to CouchDB.
Now, in your case, the variables may be completely different, but the basic idea is the same: figure out which documents you want saved and what they should look like. Then write a function that composes these documents, and make sure it is called whenever changes are made to your relational database.
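As a rough sketch (the entity names, the loadQuestionWithRelations() helper, and the local CouchDB URL are all made up for illustration), the persist hook could look something like this:

// Called from the single edit point in the CMS whenever a question is saved.
async function persistToCouch(questionId) {
  // Stand-in for whatever fetches the row plus its answers/tags from MySQL.
  const question = await loadQuestionWithRelations(questionId);

  // Compose one self-contained document: the question plus its relations.
  const doc = {
    _id: 'question-' + question.id,
    type: 'question',
    title: question.title,
    body: question.body,
    answers: question.answers, // embedded, so the frontend needs one fetch
    tags: question.tags,
  };

  const url = 'http://localhost:5984/content/' + doc._id;

  // Fetch the current revision (if any) so the PUT updates instead of conflicting.
  const existing = await fetch(url);
  if (existing.ok) {
    doc._rev = (await existing.json())._rev;
  }

  await fetch(url, {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(doc),
  });
}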
Notifying CouchDB of a change
CouchDB is very simple. Probably the easiest thing is directly updating an existing document. Two ways to implement this come to mind:
The easiest way is a normal CouchDB update: Fetch the current document by id; modify it; then send it back to Couch with HTTP PUT or POST.
If you have clear, application-specific changes (e.g. "the views value was incremented"), then writing an _update function seems prudent. Update functions are very simple: they receive an HTTP request and a document; they modify the document; and then CouchDB stores the new version. You write update functions in JavaScript and they run on the server. It is a great way to "compress" common actions into simpler (and fewer) HTTP requests.
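As a small illustration of that second option (the design document, database, and field names are made up), an update handler that increments a view counter could look like:

// Stored in a design document, e.g. _design/app, under "updates".
// CouchDB runs this on the server, so one HTTP request replaces the usual
// GET + modify + PUT round trip.
const designDoc = {
  _id: '_design/app',
  updates: {
    'increment-views': (function (doc, req) {
      if (!doc) {
        return [null, 'document not found'];
      }
      doc.views = (doc.views || 0) + 1;
      // First element: the document to store; second: the HTTP response body.
      return [doc, 'ok'];
    }).toString(),
  },
};

// Invoked with, for example:
//   PUT /content/_design/app/_update/increment-views/question-123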