I like the simplicity of the Pundit gem and I would like to make policies dynamic by storing them in the database.
Basically, I'm looking for a way to change policies without needing to redeploy the application.
1st way
A Pundit policy is pure Ruby code, so unless you are willing to keep code in the database and evaluate it dynamically, I'd say the answer is no. That approach is unsafe. You may give it a go, though.
2nd way
But nothing prevents you from creating a model which keeps the rules as simple JSON and checking them from within Pundit, e.g.:
class PostPolicy < ApplicationPolicy
  def update?
    # Look up the rule set stored in the database for this policy class
    access_setting = PolicySetting.find_by(key: self.class.name)
    # Permit the action only if the user's role is listed for it
    user.role.in?(access_setting['roles'])
  end
end
Of course, the complexity and the flexibility of such a tool directly depend on each other.
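For illustration, a minimal sketch of what the backing model could look like (the PolicySetting name and the key/roles columns are just the ones assumed in the snippet above; adjust to your own schema and Rails version):

class CreatePolicySettings < ActiveRecord::Migration[5.2]
  def change
    create_table :policy_settings do |t|
      t.string :key, null: false, index: { unique: true } # e.g. "PostPolicy"
      t.json   :roles, default: []                        # roles allowed to perform the action
      t.timestamps
    end
  end
end

class PolicySetting < ApplicationRecord
  validates :key, presence: true, uniqueness: true
end

# An admin UI (or the console) can now change access without a redeploy:
PolicySetting.find_by(key: "PostPolicy").update!(roles: %w[admin editor])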
3rd way
This is just a workaround. You may keep your authorisation project separate from the main one, so that its deploys (zero-downtime, of course) would not affect the main project.
4th way
Create your own DSL to be stored in the database.
5th way
Use something like json-logic-ruby to store the logic in the database.
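A rough sketch of that idea (this assumes the gem exposes an apply(rule, data) entry point like the JSON Logic reference implementations do, and that the rule lives on a PolicySetting record with a rule attribute; check the gem's README for the exact API):

require 'json_logic'

class PostPolicy < ApplicationPolicy
  def update?
    # Rule stored in the DB as JSON, e.g. {"in": [{"var": "role"}, ["admin", "editor"]]}
    rule = PolicySetting.find_by(key: self.class.name).rule
    JSONLogic.apply(rule, { 'role' => user.role })
  end
end

Editing the stored JSON rule then changes the policy's behaviour without a redeploy.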
I'm working with Laravel 5, but I think this question can be applied beyond the scope of a single framework or language. The last few days I've been all about writing interfaces and implementations for repositories, and then binding services to the IoC container and all that stuff. It feels extremely slow.
If I need a new method in my service, say Store::getReviews(), I must create the relationship in my entity model class (the data source, in this case Eloquent), then I must declare the method in the repo interface to make it required for any other implementation, then I must write the actual method in the repo implementation, then I have to create another method on the service that calls on the repo to extract all reviews for the store... (intentional run-on sentence) It feels like too much.
Creating a new model now isn't as simple as extending a base model class anymore. There are so many files I have to write and keep track of. Sometimes I'll get confused as to where exactly I should put something, or find halfway through setting up a method that I'm in the wrong class. I also lost Eloquent's query building in the service. Every time I need something that Eloquent has, I have to implement it in the repo and the service.
The idea behind this architecture is awesome, but I am finding the actual implementation extremely tedious. Is there a better, faster way to do things? I feel I'm being too messy, even though I put common methods and stuff in abstract classes. There's just too much to write.
I've wrestled with all this stuff as I moved to Laravel 5. That's when I decided to change my approach (it was a tough decision). During this process I've come to the following conclusions:
I've decided to drop Eloquent (and the Active Record pattern). I don't even use the query builder. I do still use the DB facade, as it's handy for things like parameterized query binding, transactions, logging, etc. Developers should know SQL, and if they are required to know it, then why force another layer of abstraction on them (a layer that cannot replace SQL fully or efficiently)? And remember, the bridge from the OOP world to the relational database world is never going to be pretty. Bear with me, keep reading...
Because of #1, I switched to Lumen where Eloquent is turned off by default. It's fast, lean, and still does everything I needed and loved in Laravel.
Each query fits in one of two categories (I suppose this is a form of CQRS):
3.1. Repositories (commands): These deal with changing state (writes) and situations where you need to hydrate an object and apply some rules before changing state (sometimes you have to do some reads to make a write; also, sometimes you do bulk writes and hydration may not be efficient, so just create repository methods that do this too). So I have a folder called "Domain" (for Domain-Driven Design) and inside are more folders, each representing how I think of my business domain. With each entity I have a paired repository. An entity here is a class that is like what others may call a "model": it holds properties and has methods that help me keep the properties valid or do work on them that will eventually be persisted by the repository. The repository is a class with a bunch of methods that represent all the types of querying I need to do relating to that entity (e.g. $repo->save()). The methods may accept a few parameters (to allow for a bit of dynamic query action inside, but not too much), and inside you'll find the raw queries and some code to hydrate the entities. You'll find that repositories typically accept and/or return entities.
3.2. Queries (a.k.a. screens?): I have a folder called "Queries" where I have different classes whose methods contain raw queries to perform display work. The classes kind of just help with grouping things together but aren't the same as repositories (i.e. they don't do hydrating, writes, return entities, etc.). The goal is to use these for reads and most display purposes. (A rough sketch of both kinds follows this list.)
Don't create interfaces unnecessarily. Interfaces are good for polymorphic situations where you need them, i.e. situations where you know you will be switching between multiple implementations. They are unneeded extra work when you are working 1:1. Plus, it's easy to take a class and turn it into an interface later. You never want to optimize prematurely.
Because of #4, you don't need lots of service providers. I think it would be overkill to have a service provider for all my repositories.
If the almost mythological time comes when you want to switch out database engines, then all you have to do is go to two places: the two kinds of classes mentioned in #3 above. You replace the raw queries inside. This is good, since you have a list of all the persistence methods your app needs. You can tailor each raw query inside those methods to work with the new data store in the unique way that data store calls for. The method stays the same, but the internal querying gets changed. It is important to remember that the work needed to change out a database will obviously grow as your app grows, but the complexity in your app has to go somewhere. Each raw query represents complexity. But you've encapsulated those raw queries, so you've done your best to shield the rest of your app!
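For illustration, here is a rough sketch of the pairing described in #3 (all class, table and column names are made up; it uses the DB facade from #1, which in Lumen requires facades to be enabled):

// Domain/Orders/Order.php - a plain entity, not an Active Record model
class Order
{
    public $id;
    public $customerId;
    public $status;

    public function __construct($id, $customerId, $status = 'new')
    {
        $this->id = $id;
        $this->customerId = $customerId;
        $this->status = $status;
    }

    // Business rules live on the entity; persistence does not
    public function markShipped()
    {
        $this->status = 'shipped';
    }
}

// Domain/Orders/OrderRepository.php - commands: hydrate, apply rules, write
class OrderRepository
{
    public function find($id)
    {
        $row = \DB::selectOne('SELECT * FROM orders WHERE id = ?', [$id]);
        return $row ? new Order($row->id, $row->customer_id, $row->status) : null;
    }

    public function save(Order $order)
    {
        \DB::update('UPDATE orders SET status = ? WHERE id = ?', [$order->status, $order->id]);
    }
}

// Queries/OrderScreens.php - read side: raw rows straight to the view
class OrderScreens
{
    public function recentForCustomer($customerId)
    {
        return \DB::select(
            'SELECT id, status, created_at FROM orders WHERE customer_id = ? ORDER BY created_at DESC LIMIT 10',
            [$customerId]
        );
    }
}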
I'm successfully using this approach, inspired by DDD concepts. Once you are utilizing the repository approach, there is little need to use Eloquent IMHO. And I find I'm not writing extra stuff (as you mention in your question), all while still keeping my app flexible for future changes. Here is another approach from a fellow Artisan (although I don't necessarily agree with using Doctrine ORM). Good luck and happy coding!
Laravel's Eloquent is an Active Record implementation, and this technology demands a lot of processing. Domain entities are better understood as plain objects; for that purpose, try using Doctrine ORM. I built a facilitator for using Lumen with Doctrine ORM; follow the link.
https://github.com/davists/Lumen-Doctrine-DDD-Generator
For accurate performance analysis there is KCachegrind:
http://kcachegrind.sourceforge.net/html/Home.html
I am using GATE in one of my applications and I have a few queries related to multi-tenancy. My requirements are given below.
I have a set of keywords specific to each user, and depending on which user is signed in, I need to initialise the gazetteer with the applicable set of keywords.
At a given time there could be multiple users logged into my application, and I want to make sure that the multi-tenancy approach will not be inefficient.
I don't want to store the keywords for each user in the .lst file(s), but rather store them in a DB (Mongo) and inject them only at runtime.
I searched the web for a few samples, and though I found some thoughts on working with Processing Resources, I have no idea how the performance will be affected.
Your help is much appreciated.
Thanks in advance,
Sajith
That's an interesting use-case for a GATE gazetteer.
One thing I believe you should definitely do is add the user ID as a feature when you're creating the document. This way you'll be able to make your MongoDB query in a processing resource later on.
When you're processing the document, you have several options:
Create a custom PR which calls MongoDB and replicates the DefaultGazetteer code but with an overridden "init" method (or inherits from it or wraps it; I haven't looked in much detail into whether that's possible). Instead of the default init method, you should provide your list of keywords, then set the needed fields and call execute(). (A rough sketch of this option follows this list.)
If you don't have too many keywords, create a custom PR (or a Groovy scripting PR) which calls MongoDB and does a simple regex search like the one in this thread.
They also suggest the stringsearch library in the comments. Then just use the start and end indices to create Lookup annotations on your own.
You said you don't want this, but still: several million words can be handled by both the default and the Hash gazetteer. You should be careful, though, as GATE documents can be very memory-intensive if you have too many annotations - in your case, Lookups for all user keywords.
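For option 1, a very rough sketch of how such a PR could look (the MongoDB database/collection/field names and the naive substring matching are made up purely for illustration; a real implementation would reuse the gazetteer's matching machinery and a shared Mongo client rather than connecting per document):

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import gate.AnnotationSet;
import gate.Factory;
import gate.FeatureMap;
import gate.creole.AbstractLanguageAnalyser;
import gate.creole.ExecutionException;
import gate.util.InvalidOffsetException;
import static com.mongodb.client.model.Filters.eq;

public class MongoGazetteerPR extends AbstractLanguageAnalyser {

  @Override
  public void execute() throws ExecutionException {
    // The user ID was stored as a document feature when the document was created
    String userId = (String) getDocument().getFeatures().get("userId");
    String text = getDocument().getContent().toString();
    AnnotationSet outputAS = getDocument().getAnnotations();

    try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
      MongoCollection<org.bson.Document> keywords =
          client.getDatabase("tenants").getCollection("keywords");
      for (org.bson.Document kw : keywords.find(eq("userId", userId))) {
        String term = kw.getString("term");
        int idx = text.indexOf(term);
        while (idx >= 0) {
          try {
            FeatureMap fm = Factory.newFeatureMap();
            fm.put("majorType", kw.getString("majorType"));
            outputAS.add((long) idx, (long) (idx + term.length()), "Lookup", fm);
          } catch (InvalidOffsetException e) {
            throw new ExecutionException("Invalid offset: " + e.getMessage());
          }
          idx = text.indexOf(term, idx + 1);
        }
      }
    }
  }
}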
Hope this helps.
I want to offer Tumblr-like functionality where you can select a template, optionally customize its HTML in the browser, and save it.
Current stack: Mongo, Sinatra (for REST API) for prototype. Will likely be moving to a compiled, statically-typed language later.
Wondering how best to accomplish this. Options I've considered:
Store the HTML in Mongo and duplicate it for all user accounts, so the HTML for the template you choose gets written into your account. Obvious cons: space inefficiency and the need to update every user that uses that template (if un-customized; once you customize it, it becomes your own and I won't ever touch it) whenever the template changes.
Store the templates in a templates collection, and put custom templates either into this same collection or into the user collection with the owner of the template. The user references a template id. This is quite clearly better than option 1, I believe, especially because I won't need to pull the template every time the user object is pulled.
Some third party library? Open to suggestions here.
File system.
I will need to package up these templates (inserting JS and other stuff the user shouldn't be exposed to) and then serve them. Any advice on how best to approach this is greatly appreciated.
Your approach will depend on how often you foresee people customizing the template versus just going with a standard one. How about a hybrid approach?
That is, have a field in the user document that is created lazily (on use) that either stores the custom template, or maybe a diff from one of the standards (not sure about the level of customization you are planning to allow).
Then you can have the template field you describe in option 2 above, with a "special" setting for custom templates. While you still have the concern about pulling a template each time, you do have the advantage of knowing that these are some of your more dedicated users - saving a trip to the DB might be advantageous, or you might not care.
If you don't care about two trips to the DB for every user, then take approach 2: add the custom templates to the templates collection and simply reference the new ID for each user that customizes.
It's a balancing act: is the extra data overhead of pulling the template each time worth saving a round trip to the DB, or do you want efficiency in terms of the data you get each time at the cost of multiple queries to the DB? Only you can answer that, based on how you design your app and how people use it.
For the linked approach you might want to take a look at Database References and Schema Design in the MongoDB docs.
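A minimal sketch of the hybrid document shapes, using the Ruby mongo driver (field names like template_id and custom_template are illustrative):

require 'mongo'

client    = Mongo::Client.new('mongodb://127.0.0.1:27017/myapp')
users     = client[:users]
templates = client[:templates]

# Standard user: references a shared template by id
#   { _id: ..., name: "alice", template_id: BSON::ObjectId('...') }
# Customizing user: template_id flips to a "custom" sentinel and the HTML
# (or a diff against a standard template) is stored lazily on the user itself
#   { _id: ..., name: "bob", template_id: "custom", custom_template: "<html>...</html>" }

def template_html_for(user, templates)
  if user['template_id'] == 'custom'
    user['custom_template'] # already on the user document: one DB trip
  else
    templates.find(_id: user['template_id']).first['html'] # second trip for shared templates
  end
end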
After developing in CodeIgniter for a while, I find it difficult to decide when to create a custom library and when to create a custom helper.
I do understand that both allow having business logic in them and are reusable across the framework (callable from different controllers, etc.).
But I strongly believe that since the CI core developers separate libraries from helpers, there has to be a reason behind it, and I guess this is the reason waiting for me to discover and be enlightened by.
CI developers out there, please advise.
I think it's better to include an example.
I could have a
class notification_lib {
    function set_message() { /* ... */ }
    function get_message() { /* ... */ }
    function update_message() { /* ... */ }
}
Alternatively, I could also include all the functions in a helper.
In a notification_helper.php file, I would include set_message(), get_message(), update_message(), etc.
Either way, it can still be reused. So this got me thinking about the decision point: when exactly do we create a library versus a helper, particularly in CI?
In a normal (framework-less) PHP app, the choice is clear: as there are no helpers, you would just create a library in order to reuse code. But here, in CI, I would like to understand the core developers' separation of libraries and helpers.
Well, the choice comes down to a set of functions versus a class. The choice is almost the same as an instance class versus a static class.
If you just have a simple group of functions, then you only need to make a group of functions. If this group of functions shares a lot of data, then you need to make a class with an instance that stores this data between the method (class function) calls.
Do you have many public or private properties to store relating to your notification messages?
If you use a class, you could set multiple messages through the system then get_messages() could return a private array of messages. That would make it perfect for being a library.
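To make the contrast concrete, a rough CI-style sketch (file paths follow CodeIgniter conventions; the message logic is purely illustrative):

// application/libraries/Notification_lib.php - a library: keeps state between calls
class Notification_lib {

    protected $messages = array();

    public function set_message($message)
    {
        $this->messages[] = $message; // state accumulates on the instance
    }

    public function get_messages()
    {
        return $this->messages; // state survives across method calls
    }
}

// application/helpers/notification_helper.php - a helper: stateless functions
if ( ! function_exists('format_notification'))
{
    function format_notification($message)
    {
        return '<div class="notification">' . htmlspecialchars($message) . '</div>';
    }
}

The library would be loaded with $this->load->library('notification_lib') and keeps its messages on the instance, while the helper functions take everything they need as arguments.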
There is a question I ask myself when deciding this that I think will help you as well. The question is: Am I providing a feature to my framework or am I consolidating?
If you have a feature that you are adding to your framework, then you'll want to create a library for that. Form validation, for example, is a feature that you are adding to a framework. Even though you can do form validation without this library, you're creating a standard system for validation which is a feature.
However, there is also a form helper, which helps you create the HTML of forms. The big difference from the form validation library is that the form helper isn't creating a new feature; it's just a set of related functions that help you write the HTML of forms properly.
Hopefully this differentiation will help you as it has me.
First of all, you should be sure that you understand the difference between a CI library and a helper. A helper is anything that helps with a pre-made thing such as arrays, strings, URIs, etc.; PHP already provides functions for these, but you still create a helper to add more functionality to them.
On the other hand, a library can be anything you are creating for the first time - a solution which might not necessarily already be out there.
Once you understand this difference fully, making the decision should not be that difficult.
A helper contains a group of functions to help you do a particular task.
Available helpers in CI
Libraries usually contain non-CI-specific functionality, like an image library - something which is portable between applications.
Available libraries in CI
Source link
If someone asks me which way I follow when the time comes to create helpers or libraries,
I think of these differences:
Class: In a nutshell, a class is a blueprint for an object. An object encapsulates the conceptually related state and responsibility of something in your application, and usually offers a programming interface with which to interact with these. This fosters code reuse and improves maintainability.
Functions: A function is a piece of code which takes one or more inputs in the form of parameters, does some processing and returns a value. You have already seen many functions, like fopen() and fread(). They are built-in functions, but PHP gives you the option to create your own as well.
So go for a class, i.e. a library, if any one of these points matches:
A global variable would need to be used in two or more functions, or even one (I hate using the global keyword).
Default initialization is needed on each call or load.
Some tasks are private to the entity and not publicly open (think of it this way: plain functions can never have visibility modifiers).
There are function-to-function dependencies, i.e. the tasks are separate but two or more of them need the same one. Think of a validate_email check in an email-sending script: to, cc, bcc, etc. all need validate_email.
And last but not least, all related tasks (i.e. functions) should be placed in a single object or file; it's easier for reference and to remember.
For helpers: any point which doesn't match the ones above.
Personally, I use libraries for big things, say an FTP library I built that is a lot faster than CodeIgniter's shipped library. This is a class with a lot of methods that share data with each other.
I use helpers for smaller tasks that are not related to a lot of other functionality. Small functions like decorating strings might be an example, or copying a directory recursively to another location.
I consider myself still pretty new to the TDD scene, but no matter which method I use (a mocking framework or stubbing my own objects), I find that I have to write a lot of code to create mock data. I like the idea of loading up objects to create an in-memory database. But what I don't like is cluttering up my tests with a ton of code for the sole purpose of creating mock data. This is especially the case when the data needs to account for all the different cases.
I'd love some suggestions for a better way of doing this.
It seems to me that I should be able to load the data once into a known state from some data store, and then use a snapshot of that state, loaded in the test setup/initialize before each test method is executed. This would satisfy proper testing practices while providing convenience, and let me focus on writing tests instead of writing code to create test data "by hand".
Maybe you could try the NBuilder library. It provides a very fluent interface and is easy to use. You can use it to generate single instances of a class with default values, or to generate lists with default or overridden values. You can have a look at this one.
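For example, a quick sketch (the Product class is made up; NBuilder fills unspecified properties with generated default values - check the library's docs for the exact API):

using FizzWare.NBuilder;

public class Product
{
    public int Id { get; set; }
    public string Title { get; set; }
    public decimal Price { get; set; }
}

// A single instance, overriding only what the test cares about
var product = Builder<Product>.CreateNew()
    .With(p => p.Price = 9.99m)
    .Build();

// A list of 10 products, the first three marked as specials
var products = Builder<Product>.CreateListOfSize(10)
    .TheFirst(3)
        .With(p => p.Title = "Special offer")
    .Build();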
If you are using .NET, try NDbUnit.
You populate your store, and then it reverts your DB to a known state at test time, for each test. The Autumn of Agile screencast series shows this in pretty good detail.
Or you can do this manually: build a stored procedure or whatever to truncate your tables and copy in the data in your teardown method.
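A bare-bones sketch of that manual route (the connection string, procedure name and ADO.NET plumbing are placeholders for whatever your project uses):

using System.Data.SqlClient;

public static class TestDatabase
{
    // Called from the test fixture's setup/teardown to reset the DB to a known state
    public static void Reset(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            // The stored procedure truncates the tables and re-inserts the seed rows
            using (var command = new SqlCommand("EXEC dbo.ResetTestData", connection))
            {
                command.ExecuteNonQuery();
            }
        }
    }
}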
You can have Builder class(es) that help you build the instances you need - in this case, the ones you would use with the repository.
Have the builders use appropriate defaults, and in your tests override only what you need. This helps you avoid having every single case of "data" mixed up for all the different tests (which introduces problems, because usually there are cases that aren't compatible across different tests).
Update 1: Take a look at www.markhneedham.com/blog/2009/01/21/c-builder-pattern-still-useful-for-test-data
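A minimal sketch of such a test-data builder (the Customer class and its defaults are made up for illustration):

public class Customer
{
    public string Name { get; set; }
    public string Country { get; set; }
    public bool IsActive { get; set; }
}

public class CustomerBuilder
{
    // Sensible defaults so most tests need no setup at all
    private string _name = "Test Customer";
    private string _country = "US";
    private bool _isActive = true;

    public CustomerBuilder Named(string name) { _name = name; return this; }
    public CustomerBuilder From(string country) { _country = country; return this; }
    public CustomerBuilder Inactive() { _isActive = false; return this; }

    public Customer Build()
    {
        return new Customer { Name = _name, Country = _country, IsActive = _isActive };
    }
}

// In a test, override only what matters for that scenario:
var customer = new CustomerBuilder().From("DE").Inactive().Build();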
I know exactly what you mean. I think a good approach to solving this problem is to actually have a separate MockFramework project that houses all your mock data, outside the test project. This way you can generate mock data separately, store it in memory if you want to, or not, and then reference the mock framework from the test project. If you use a third party framework to do this, all the better, but you can still wrap that third party framework in your own mock framework so you can get all that "glue" that creates the mock data the way you need it out of your tests so the tests can really be only what they need to be.
Thanks for all the suggestions. I think the solution requires a little bit of everything. I don't want these tests to end up being regression tests, but without some kind of existing data store, everything still boils down to creating the data by "manually" building the objects.
What would really be nice would be a framework that let me use my existing DAL to either script the data to code for me, or get the data in memory and access it like an in-memory database.
Untils.org covers this way better than I ever could.
Their whole guide is actually very good.
But basically, if your units require "a lot of data", they may not be unit tests anymore. I'd recommend trying to test the smaller pieces individually.