Dynamically Generating ORM Classes - ruby

I'm working on a Sinatra-based project that uses the DataMapper ORM. I'd like to be able to define criteria for the DM validations in an external YAML file so that less-experienced users of the system can easily tweak the setup. I have this working pretty well as a proof of concept, but I suspect there could be a much easier, or at least less processor-intensive, way to approach this.
Right now, the script loads the YAML file and generates the DM classes with a series of eval statements (I know this already places me on thin ice). The problem is that this process has to happen with every request. My bright idea is to check the YAML for changes, regenerate the classes and export to static source if changes are detected, and include the static files if no changes are detected.
This is proving more difficult than I anticipated because exporting code blocks to strings for serialization isn't as trivial as I expected.
Is this ridiculous? Am I approaching this in an entirely wrong-headed way?
I'm new to Ruby and the world of ORMs, so please forgive my ignorance.
Thanks!

DM validations in an external YAML file so that less-experienced users of the system can easily tweak the setup
A DSL for a DSL. Not having seen your YAML, I still wonder how much simpler than the DM validations themselves it can really get:
require 'dm-validations'

class User
  include DataMapper::Resource

  property :name, String

  # Manual validation
  validates_length_of :name, :max => 42

  # Auto-validation
  property :bio, Text, :length => 100..500
end
Instead of going for YAML I would provide the less-experienced users with a couple of relevant validation examples and possibly also a short guideline based on the dm-validations documentation.

It does seem a little crazy to go and put everything in YAML, as that's only a shade easier than writing the validations in Ruby. What you could do is make a DSL in Ruby that makes defining validations much easier, then expose that to your users instead of the whole class.
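For what it's worth, here is a rough sketch of that second idea. Every name in it (ValidationLoader, validation_rules.rb, the rule keywords) is invented for illustration, and the mapping to dm-validations calls is only a guess at what such a DSL might look like:

# validation_rules.rb -- the file less-experienced users would edit:
#
#   validate :name, :presence => true, :max_length => 42
#   validate :bio,  :min_length => 100
#
# The loader evaluates that file and translates each line into
# dm-validations calls on an already-defined model class.
class ValidationLoader
  def initialize(model)
    @model = model
  end

  # Each `validate` line becomes one or more dm-validations calls.
  def validate(field, options = {})
    @model.validates_presence_of(field) if options[:presence]
    @model.validates_length_of(field, :max => options[:max_length]) if options[:max_length]
    @model.validates_length_of(field, :min => options[:min_length]) if options[:min_length]
  end

  # Run the rules file in the context of this loader.
  def load(path)
    instance_eval(File.read(path), path)
  end
end

# ValidationLoader.new(User).load("validation_rules.rb")

The rules file stays nearly as simple as YAML, but nothing is eval'd into generated class source on every request; the calls are applied once when the app boots.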

Related

Self-Documenting ActiveRecord Class Files in Rails 4 without attr_accessible

Question:
In a post-attr_accessible Rails 4 world, how do you recommend, if at all, annotating your ActiveRecord model class files to communicate their (database) attributes?
Further Thoughts
As part of a Rails 3 -> 4 upgrade, we are making a switch, and happily so, away from attr_accessible and to strong parameters in the controller. I understand and agree with the improvement in security via this switch. If you want to know more about this, the information is out there, and it's not hard to find.
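For reference, the switch being described looks roughly like this (controller and attribute names here are just placeholders):

# Rails 3 style: the whitelist lives in the model.
class Person < ActiveRecord::Base
  attr_accessible :fname, :lname, :age
end

# Rails 4 style: the whitelist lives in the controller (strong parameters).
class PeopleController < ApplicationController
  def create
    Person.create(person_params)
  end

  private

  def person_params
    params.require(:person).permit(:fname, :lname, :age)
  end
end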
But, I enjoyed, in my Rails 3 world, having those reminders of what attributes made up a class up there at the top of the model file. Especially since we're moving toward a world in which ActiveRecord classes are just DAOs, what else is the class but a collection of database attributes? I don't want to go to the schema.rb file just to remember them.
Am I thinking about this incorrectly? In a DAO world, should I be creating my ActiveRecord model class file and then never opening it again?
I know about the annotate_models gem and used it way back in the day. I was not a fan of having the attributes described in commented-out lines (unreadable, hackish, fragile).
Thoughts? Opinions?
How about:
Person.column_names
If you are using an IDE or an editor that has a console feature, this becomes an easy way to be reminded of what attributes there are. I am no Ruby or Rails expert, still pretty new here, but I've been using Rails 4 almost exclusively and it just seems like you wouldn't need to see the attributes that often in the model. The params get whitelisted in the controller because that is where they will usually be used, no? If you don't want to use comments you could store an array of the attributes in the model:
my_attr = [:fname, :lname, :age, :height, :weight]
But is that really any more useful than a comment? Would there be a case of attributes that would have been in attr_accessible that wouldn't be in the whitelist in your controller? It would be a neat trick to put some code in a rake task that runs every time you run
rake db:...
that would update the my_attr array in your model so you wouldn't have to remember to do it when you modified the model. I go into my models to add class methods and scopes, so I do see a value in it. But I work in RubyMine so I just click on the DB tab on the left side if I need to be reminded of columns that aren't in my whitelist.
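A speculative sketch of that rake-task idea (the task name and output are made up, and it only prints the columns rather than rewriting the model, since editing source files from a task is its own can of worms):

# lib/tasks/dump_columns.rake
namespace :db do
  task :dump_columns => :environment do
    # Print the current column list so it can be copied into the model.
    puts "Person columns: #{Person.column_names.inspect}"
  end
end

# Run the dump automatically after every migration.
Rake::Task['db:migrate'].enhance do
  Rake::Task['db:dump_columns'].invoke
end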

How to set a namespace in ruby at runtime?

I have an ActiveRecord based application that is used via command line utilities. Having the models namespaced in such an application is advantageous for keeping the Object namespace clean.
I'm starting to build a rails application around these ActiveRecord models and, though I have overcome some of my initial troubles with using models in a namespace, I'm finding things are more verbose than I'd like.
What I want is to programmatically set a namespace for my ActiveRecord classes when they are used in the command line utilities, and to programmatically not set a namespace for these models when they are used in the Rails app.
I know that the files themselves could be altered at runtime before being required, but I'm looking for something in the Ruby language itself to accomplish this cleanly.
It's hard to offer a great suggestion without seeing some code, but here are two possibilities.
It sounds like you have two clients for this code. Maybe make it an engine (just a fancy gem): you can add your paths to the autoload paths, then use it from the gem without all the railsy crap getting in the way.
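A bare-bones sketch of the engine idea (the gem and module names here are invented):

# lib/my_models/engine.rb, inside the shared gem
require 'rails'

module MyModels
  class Engine < ::Rails::Engine
    # An engine ships its own app/models directory, and Rails adds it to
    # the host application's autoload paths, so the web app and the
    # command line utilities can share the same ActiveRecord classes.
  end
end

# The command line utilities can skip the engine entirely and just
# require the model files from the gem directly.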
Maybe create a constant then reopen it in the models:
in some initializer
ActualNamespace = Class.new
DynamicNamespace = ActualNamespace
in your model file
class DynamicNamespace
  class MyModel
  end
end
DynamicNamespace::MyModel # => ActualNamespace::MyModel
Then for your command line app
DynamicNamespace = Object
Which is the same as not having a namespace:
DynamicNamespace::MyModel # => MyModel
Now you might wind up having difficulties with some of the Rails magic, which is largely based on reflection. I don't know exactly what you'll face, but I'd expect forms to start generating the wrong keys when submitting data. You can probably fix this by defining something like DynamicNamespace.name or something along those lines.
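I have not tried it, but another (untested) angle on that fix is to override model_name on the model itself, since that is what the form helpers consult when building param keys; whether this covers every bit of Rails reflection is an open question:

class ActualNamespace::MyModel
  def self.model_name
    # Report a non-namespaced name to ActiveModel::Naming consumers, so
    # form fields come out as my_model[...] rather than
    # actual_namespace_my_model[...].
    ActiveModel::Name.new(self, nil, "MyModel")
  end
end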
Autoloading is also likely to become an issue, but I think you can declare autoload paths somehow (I don't know for sure, but googling "rails autoloading" gives some promising results; it looks like it just hooks into Ruby's autoloading, though I think that is going away in Ruby 2.0). Worst case, you can probably define a railtie to eager-load the dirs for you. This is all a bit out of my league, but I'd assume you need the railties defined before the app is initialized, so you may need to require the railtie in config/application.rb.
Unfortunately, at the end of the day, when you start deviating from Rails conventions, life starts getting hard, and all that magic you never had to think about breaks down so you suddenly have to go diving into the Rails codebase to figure out what it was doing.

Is there a lightweight, all purpose validation library/DSL for Ruby?

I'm doing a lot of bulk data validation on various kinds of data sources and I find myself writing boilerplate code like this:
if summed_payments != row['Total']
  raise "The sum of the payments, #{summed_payments} != #{row['Total']}"
end
I was wondering if there is a way to apply a DSL like Minitest's, but for purposes that don't involve application testing: for example, finding and logging errors during a bulk data import and validation script. It's a quick-and-dirty script that I don't want to write a test suite for, but that I do want to perform various kinds of validation on.
I think standalone ActiveModel should be good for this.
Watch this railscast for more information: http://railscasts.com/episodes/219-active-model
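A minimal sketch of what that looks like outside Rails (the class and attribute names are invented to match the question's example):

require 'active_model'

class ImportedRow
  include ActiveModel::Validations

  attr_accessor :total, :summed_payments

  validates :total, :numericality => true
  validate :payments_match_total

  def initialize(total, summed_payments)
    @total = total
    @summed_payments = summed_payments
  end

  private

  # Custom check, equivalent to the raise-based boilerplate in the question.
  def payments_match_total
    if summed_payments != total
      errors.add(:summed_payments, "(#{summed_payments}) do not match the total (#{total})")
    end
  end
end

row = ImportedRow.new(100, 90)
row.valid?               # => false
row.errors.full_messages # => ["Summed payments (90) do not match the total (100)"]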
If you like, you can check out http://rubygems.org/gems/validates_simple, which is a gem for doing simple validation of hashes that tries to mimic the interface of the ActiveModel validation methods.
I know this is a little old, but our Veto gem is likely what you're looking for: https://github.com/kodio/veto
Standalone validation of plain old Ruby objects, without dependencies.

How can I modularize a Rails model?

I'm implementing several classes which do not hold data themselves, just logic. These classes implement an access control policy for data, which depends on several parameters taken from data in other models.
I initially tried to find an answer to "Where do I store such classes?" here, and the answer was the app/models directory. That's OK, but I'd like to clearly separate these classes from the ActiveRecord-derived classes in the hierarchy, both as files and as classes.
So, I created classes inside a Logic module, like Logic::EvaluationLogic or Logic::PhaseLogic. I also wanted to have constants which are passed between these logic classes, and I prefer to place those constants in the Logic module too. Thus, I implemented it like this:
# in logic/phase_logic.rb
module Logic
  PHASE_INITIAL = 0
  PHASE_MIDDLE = 1000

  class PhaseLogic
    def self.some_phase_control_code(phase)
    end
  end
end

# in logic/evaluation_logic.rb
module Logic
  class EvaluationLogic
    def self.some_other_code
      Logic::PhaseLogic.some_phase_control_code(Logic::PHASE_INITIAL)
    end
  end
end
Now, it works just fine with RSpec (it passes the tests I wrote without issues), but not with the development server, since it can't find the Logic::PHASE_INITIAL constant.
I suspect this is related to a mismatch between Rails' autoloading scheme and what I wanted to do. I tried to tweak Rails, but had no luck, and ended up eliminating the module Logic wrapper.
Now the question I want to ask: how can I organize these classes with Rails?
I'm using Rails 3.2.1 at the moment.
I posted a follow-up question, "How I can organize namespace of classes in app/modules with rails?"
I am not sure whether I really understand your classes, but couldn't you create a Logic module, or (I would rather do this) PhaseLogic and EvaluationLogic classes, in the /lib directory?
It is not a given that a "model" is always a descendant of ActiveRecord. If an object belongs to the "business logic" then it is a model. You can have models which do not touch the database in any way. So, if your classes are "business objects", place them in app/models and use them like any other model.
Another question is whether you should use inheritance or modules - but I would rather think about including a module in PhaseLogic, and not about defining PhaseLogic in a module. Of course, all this depends heavily on the intended role of your objects.
Because in Ruby the class of an object is not important, you do not need to use inheritance. If you want to 'plug' the logic objects into other objects, just take care that all '*Logic' classes have the required methods. I know that all of this is very vague, but I cannot give you more concrete suggestions without knowing more about the role of these objects.
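To illustrate that duck-typing point with made-up names: as long as every *Logic class responds to the same method, callers never need a shared parent class or module.

class PhaseLogic
  def self.accessible?(record)
    # phase-based rules would go here
    true
  end
end

class EvaluationLogic
  def self.accessible?(record)
    # evaluation-based rules would go here
    false
  end
end

record = Object.new
# Callers rely only on the shared method name, not on a shared ancestor:
[PhaseLogic, EvaluationLogic].map { |logic| logic.accessible?(record) }
# => [true, false]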
Ah, and one more thing!
If you find yourself fighting with Rails class autoloading, just use the old require "lib/logic.rb" in all the classes where you are using Logic::PHASE_INITIAL constants.
In this case I suppose your problem was caused by the load order: logic/evaluation_logic.rb was loaded before logic/phase_logic.rb. The problem may disappear if you create a logic.rb somewhere class autoloading can find it, and define these constants in that file.
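A minimal sketch of that last suggestion, assuming app/models is on the autoload path (the usual Rails 3.2 default):

# in app/models/logic.rb -- autoloading loads this file the first time the
# Logic constant is needed, so the shared constants are defined before
# either logic class references them.
module Logic
  PHASE_INITIAL = 0
  PHASE_MIDDLE = 1000
end

# in app/models/logic/phase_logic.rb
module Logic
  class PhaseLogic
    def self.some_phase_control_code(phase)
      # ...
    end
  end
end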
Don't name your classes or modules Logic; use specific names. Start by extracting logic into separate classes and then try to break them into smaller ones. Use namespaces to distinguish them from each other in the lib folder; after these steps you will be able to extract some of the logic into separate gems and reduce the codebase and the complexity of the application. Also take a look at the presenter pattern.

What separates a Ruby DSL from an ordinary API

What are some defining characteristics of a Ruby DSL that separate it from just a regular API?
When you use an API you instantiate objects and call methods in an imperative manner. A good DSL, on the other hand, should be declarative, representing rules and relationships in your problem domain rather than instructions to be executed. Moreover, a DSL should ideally be readable and modifiable by somebody who is not a programmer (which is not the case with APIs).
Also please keep in mind the distinction between internal and external DSLs.
An internal domain-specific language is embedded in a programming language (e.g. Ruby). It's easy to implement, but the structure of the DSL depends on the parent language it is embedded in.
An external domain-specific language is a separate language designed with the particular domain in mind. It gives you greater flexibility when it comes to syntax, but you have to implement the code to interpret it. It's also more secure, as the person editing the domain rules doesn't have access to the full power of the parent language.
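To make the distinction concrete, here is a small invented example; neither half is taken from any real library:

# Internal DSL: still just Ruby method calls, so it must remain legal Ruby
# syntax, but it reads like domain rules.
class RuleSet
  def initialize
    @rules = []
  end

  def rule(name, &check)
    @rules << [name, check]
    self
  end

  # Return the names of the rules an order fails.
  def violations(order)
    @rules.reject { |_, check| check.call(order) }.map(&:first)
  end
end

rules = RuleSet.new
rules.rule("total must be positive") { |order| order[:total] > 0 }

rules.violations(:total => -5)  # => ["total must be positive"]

# External DSL: plain text with whatever grammar you like; you must write
# the parser, but the rule author never sees Ruby at all:
#
#   rule "total must be positive"
#     when total > 0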
DSL (domain specific language) is an over-hyped term. If you are simply using a sub-set of a language (say Ruby), how is it a different language than the original? The answer is, it isn't.
However, if you do some preprocessing of the source text to introduce new syntax or new semantics not found in the core language then you indeed have a new language, which may be domain-specific.
The combination of Ruby's poetry mode and operator overloading does present the possibility of having something that is at the same time legal Ruby syntax and a reasonable DSL.
And the continued aggravation that is XML does show that perhaps the simple DSL built into all those config files wasn't completely misguided.
Creating a DSL:
Adding new methods to the Object class so that you can just call them as if they were built-in language constructs. (see rake)
Creating methods on a custom object or set of objects, and then having script files run the statements in the context of a top-level object. (see capistrano)
API design:
Creating methods on a custom object or set of objects, so the user creates an object to use the methods.
Creating methods as class methods, so that the user prefixes the classname in front of all the methods.
Creating methods as a mixin that users include or extend to use the methods in their custom objects.
So yes, the line is thin between them. It's trivial to turn a custom set of objects into a DSL by adding one method that runs a script file in the right context.
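A small sketch of that last point (all names invented): take a perfectly ordinary API object, add one method that evaluates a script file in its context, and the script suddenly reads like a DSL.

class Deployment
  def initialize
    @tasks = {}
  end

  # Ordinary API method.
  def task(name, &block)
    @tasks[name] = block
  end

  def run(name)
    @tasks[name].call
  end

  # The one method that turns the API into a DSL: statements in the file
  # execute as if they were written inside this object.
  def load_script(path)
    instance_eval(File.read(path), path)
  end
end

# deploy.rb, as written by a user, no longer looks like object-oriented code:
#
#   task :restart do
#     puts "restarting..."
#   end
#
# deploy = Deployment.new
# deploy.load_script("deploy.rb")
# deploy.run(:restart)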
The difference between a DSL and an API to me is that a DSL could be at least understood (and verified) if not written as a sub-language of Ruby by someone in that domain.
For example, you could have financial analysts writing rules for a stock trading application in a Ruby DSL and they would never have to know they were using Ruby.
They are, in fact, the same thing. DSLs are generally implemented via the normal language mechanisms in Ruby, so technically they're all APIs.
However, for people to recognize something as a DSL, it usually ends up adding what look like declarative statements to existing classes. Something like the validators and relationship declarations in ActiveRecord.
class Foo < ActiveRecord::Base
  validates_uniqueness_of :name
  validates_numericality_of :number, :only_integer => true
end
looks like a DSL, while the following doesn't:
class Foo < ActiveRecord::Base
  def validate
    unless unique? name
      errors.add(:name, "must be unique")
    end
    unless number.to_s.match?(/\A-?\d+\z/)
      errors.add(:number, "must be an integer")
    end
  end
end
They're both going to be implemented by normal Ruby code. It's just that one looks like you've got cool new language constructs, while the other seems rather pedestrian (and overly verbose, etc. etc.)

Resources