I want to dynamically create and query tables using DataMapper.
While DataMapper allows you to work with legacy tables and schemas, and in this way lets you set the table name a model uses, this happens only during initialisation, not while the application is running.
Is there an easy way to tell DataMapper to migrate/upgrade a model with an assigned table name at run-time, and then to query that table?
This should not be a problem.
All Ruby classes can be created and re-defined at run-time. Even initialization is run-time code; it just happens to execute first, before the rest of your code.
That is why monkey-patches work so easily: they are just additional code, run at initialization, that re-opens classes to add extra methods, variables, etc.
There is no Ruby code that is "special" in the sense that it runs only at compile time. Ruby is an interpreted language.
To dynamically create a class, see Dynamically creating class in Ruby.
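For illustration, a minimal sketch of creating a DataMapper model at run-time (assuming DataMapper 1.2; the Widget name is purely for illustration):

klass = Class.new do
  include DataMapper::Resource
  # Fully qualify Serial: constants in a Class.new block resolve against
  # the surrounding lexical scope, not the new class.
  property :id,   DataMapper::Property::Serial
  property :name, String
end

# Name the anonymous class so DataMapper can derive a storage (table) name.
Object.const_set(:Widget, klass)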
Assuming you don't need to dynamically create classes from an array of strings, you can define additional methods with define_method, or call DataMapper methods at run-time to add attributes.
To define new methods in a class:
Post.send :define_method, :new_method_name do
  # method body goes here
end
To define a new property using the DataMapper property method:
class Post
  include DataMapper::Resource

  property :title, String # the static way
end

Post.send :property, :title, String # add a property the dynamic way (at run-time)
Do note that any tables or properties you define at run-time will not be available after you restart your server, unless the code that dynamically generates them is re-executed.
To update your tables at runtime, you simply do the same thing as normal, that is, call:
DataMapper.auto_upgrade!
To upgrade only a single table, you can also do:
Post.auto_upgrade!
Second warning: if you have multiple processes, the dynamic code will need to be run in each process, or the additional models and properties will not be available.
This is a problem if you have multiple worker processes, as often happens in production (e.g. Nginx in front of multiple Unicorn workers, or multiple Mongrel workers behind HAProxy).
If you have a single-process server, this is not a problem. However, with multiple worker processes, you must run the dynamic code that generates these extra classes and properties in EACH process to make them available.
This is actually the same as with initialization, because each process goes through initialization (or, if forked, inherits whatever initialization has already run).
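For example, with Unicorn you could re-run the dynamic definitions in each worker via after_fork (a sketch; define_dynamic_models is a hypothetical helper you would write yourself):

# unicorn.rb
after_fork do |server, worker|
  # Hypothetical helper that re-runs your dynamic model/property code.
  define_dynamic_models
end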
The easiest way, without changing anything under the hood, is to use separate databases instead of tables (assuming any relationships will also be stored in the separate database) and open a connection to an additional repository inside a block.
DataMapper.setup(:external, "adapter://username:password@hostname/dbname")

DataMapper.repository(:external) do
  # ... queries in here run against the :external repository
end
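Putting it together, a fuller hedged sketch (the adapter URI, model, and field names are all assumptions for illustration):

DataMapper.setup(:external, "mysql://user:secret@localhost/reports")

class Report
  include DataMapper::Resource

  property :id,   Serial
  property :name, String
end

DataMapper.repository(:external) do
  Report.auto_upgrade!           # create/alter the table in the external DB
  Report.create(name: "monthly") # writes inside the block go to :external
  Report.all(name: "monthly")    # reads too
end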
I want to use FactoryGirl to build in-memory stubs of models, then have all ActiveRecord queries run against only those. For example:
# Assume we start with an empty database, a Foo model,
# and a Foo factory definition.

# foo_spec.rb
stubbed_foo = FactoryGirl.build_stubbed(:foo)

# Elsewhere, deep in the guts of the application
Foo.first # Ideally would return the stubbed_foo we created
          # in the test. Currently this returns nil.
The solution might be to use an in-memory database. But is the above scenario possible?
If your reason for avoiding the database is to speed up your tests, then there are better ways.
Use FactoryGirl.build as much as possible instead of create. This works as long as the record won't be fetched from the database by your code. This works well for unit tests with well-structured code. (For example, it helps to use Service Objects and unit test them independently.)
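As a quick sketch of the difference (the :foo factory name is taken from the question):

foo = FactoryGirl.build(:foo)  # instantiated in memory, not saved; fast
foo = FactoryGirl.create(:foo) # persisted; needed when code reads the DB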
For tests that actually need to read from the database (as in your Foo.first example call), you can use FactoryGirl.create and use transactional fixtures. This opens a database transaction at the beginning of each test example and rolls it back at the end of the example. This can cause problems when you use callbacks such as after_commit in your ActiveRecord models, because the wrapping transaction is never committed, so those callbacks never fire.
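With rspec-rails, transactional examples are a one-line setting (a sketch of the usual spec_helper.rb configuration):

RSpec.configure do |config|
  # Wrap each example in a transaction and roll it back afterwards.
  config.use_transactional_fixtures = true
end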
If you use after_commit or other callbacks in your models that require the database transaction to close (or you use explicit transactions in your code), I recommend setting up DatabaseCleaner. Here's an example of how to configure and use it: https://gist.github.com/RobinDaugherty/9f4e5f782d9fdbe191a23de30ad8b539
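If you don't want to follow the gist, a typical DatabaseCleaner setup looks roughly like this (a sketch; the strategy choices are assumptions):

require 'database_cleaner'

RSpec.configure do |config|
  config.before(:suite) do
    DatabaseCleaner.strategy = :transaction
    DatabaseCleaner.clean_with(:truncation)
  end

  config.around(:each) do |example|
    DatabaseCleaner.cleaning do
      example.run
    end
  end
end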
I want all of my db interactions for a specific model to go through the mongo primary in my cluster, so I set the model to use strong consistency.
class Photo
  include Mongoid::Document

  with consistency: :strong

  field :number, type: Integer

  # let's say a photo number is unique in the db
  validate :unique_number
end
But this does not seem to work, because I still run into validation errors when I save two photos very close together.
photo1 # db has number=1 for this object
photo1.update_attributes(number: 2)
photo2.number = 1
photo2.save! # <= this raises a validation exception
My understanding of strong consistency is that there shouldn't be a race here. It should do the write and then do the read, and since it's all off the primary there shouldn't be a conflict. What am I missing?
What you are experiencing looks like a persistence issue. The update_attributes call makes an atomic change to the document, but it looks like your in-memory copy of photo1 is not being refreshed. Your photo2 validation fires within the persistence layer (i.e. on the Rails server, not in Mongo) and looks at the records it has loaded. Running photo1.reload after the photo1.update_attributes may sort this out for you.
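In code, the suggestion above looks like this:

photo1.update_attributes(number: 2)
photo1.reload # refresh the in-memory copy from the database

photo2.number = 1
photo2.save!  # the uniqueness validation now sees fresh data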
It's been a while since I used Mongoid 3; 4 has been the staple for a while and I recently upgraded to 5. You won't find this type of issue in Mongoid 4.
If the reload does not help, please post photo2.errors so I can pinpoint the issue for you.
It turns out that calling with(consistency: :strong) at the class level only applies it to the next query. The class method is called when the class is loaded, setting strong consistency for the first query, but subsequent queries don't trigger the same class method, leaving their persistence operations to run with eventual consistency. From the Mongoid 3.1.7 documentation:
Tell the next persistance [sic] operation to store in a specific collection, database or session.
This method does not enforce the persistence options that can be passed in (like a few other methods in the class), so we can also pass in consistency: :strong.
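One alternative, then, is to pass the option per operation instead of at the class level (a sketch against the Mongoid 3 API):

Photo.with(consistency: :strong).create(number: 3)
Photo.with(consistency: :strong).where(number: 1).first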
Hack Fix
In order to apply this to every* persistence operation, I added it to a default_scope.
class App
  default_scope -> { with(consistency: :strong); where({}) }
end
In this case, the default scope expects a Mongoid Criteria object to be returned, so we return a no-op where clause after setting the consistency level on the in-progress persistence operation.
* This will not be applied if the developer decides to call unscoped and strip off the default_scope.
I'm using DataMapper as my ORM framework after many years of experience with AR. For that reason I sometimes try to find a specific DM function that mirrors some behaviour from AR. Sometimes I'm lucky, sometimes I'm not. With the #reload method, I'm kind of in limbo. The method exists, but somehow doesn't do what I expected it to. Basically, instead of the AR behaviour, in which the instance attributes would be refreshed from the DB, DM somehow marks every attribute of the instance as "not loaded".
Can somebody tell me if this is possible to achieve using DM?
DataMapper marks the attributes as not loaded and will load them on the next access.
This is a result of the support for lazy-loading groups. DM-1 waits to see which attribute is accessed next and then loads only a limited set of attributes.
By default, all attributes are in the :default group, so most likely all attributes are loaded once you access one of them.
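For illustration, this is what an explicit lazy-load group looks like in DM-1 (the model and field names are assumptions):

class Article
  include DataMapper::Resource

  property :id,    Serial
  property :title, String                    # in the :default group
  property :body,  Text, lazy: [:details]    # loaded only on access
end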
In case this lazy behavior is not wanted you can do the following:
resource = YourClass.first(:some => :stuff)

# Full, non-lazy reload. (Make sure you do not keep a reference to the
# old object somewhere, which could cause confusion.)
resource = resource.model.get(resource.id)
I am creating a Mongoid-based application with a class (called Question) whose objects are stored in two different ways for different purposes. One group of these objects needs to be stored in an N:N relationship with the class Page, and another group of the same objects needs to be stored as embedded (1:N) entries in a different class (FilledPage).
I need to be able to copy a Question Object which has been referenced in a Page into a FilledPage and for the purposes of speed, I need that to be an embedded relationship.
I have tried creating a Superclass with the information and then two child classes, but I can't convert from one child class to the other without considerable work (and this same design needs to be used in a few other areas with much greater complexity).
Is there any way to support both embedding and references in the same class, or some other solution that achieves something similar?
Nothing blocks the same class from being both embedded and standalone (with references). The limitation is in linking from a master document to an embedded document: that is not easy with MongoDB, because you would need to fetch the master document and extract the embedded one from it.
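If copying is acceptable, one hedged approach is to keep the referenced Question as-is and embed a copy of its attributes in the FilledPage (the questions association name is an assumption):

# Copy the referenced question's data into an embedded document,
# dropping the original _id so the copy gets its own.
filled_page.questions.build(question.attributes.except("_id"))
filled_page.save!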
I am designing a class for log entries from my mail server. I have parsed the log entries and created the class hierarchy. Now I need to save the in-memory representation to disk, to multiple destinations like MySQL and disk files. I am at a loss to find the proper way to design the persistence mechanism. The challenges are:
1. How to pass persistence initialization information, like a filename or DB connection parameters, to each mechanism. The options I can think of are all ugly, e.g.:
1.1 Constructor: it becomes ugly as I add more persistence mechanisms.
1.2 Method: Object.mysql_params(" "), again butt ugly.
2. The "correct" method name to call for each persistence mechanism, e.g. Object.save_mysql and Object.save_file, or Object.save(mysql) and Object.save(file).
I am sure there is some pattern that solves this particular problem. I am using Ruby as my language, without Rails, i.e. pure Ruby code. Any clue is most welcome.
Personally I'd break things out a bit: the object representing a log entry really shouldn't worry about how to save itself, so I'd probably create a MySQLObjectStore and a FileObjectStore, which you configure separately and which get passed the object to save. You could give your Object class a class variable containing the store, to be used on save.
class Object
  # cattr_accessor comes from ActiveSupport
  # (require "active_support/core_ext/module/attribute_accessors").
  cattr_accessor :store

  def save
    @@store.save(self)
  end
end
class MySQLObjectStore
  def initialize(connection_string)
    # Connect to DB etc...
  end

  def save(obj)
    # Write to database
  end
end
store = MySQLObjectStore.new("user:password@localhost/database")
Object.store = store

obj = Object.new(foo)
obj.save
Unless I completely misunderstood your question, I would recommend the Strategy pattern. Instead of having this one class try to write to all of those different sources, delegate that responsibility to another class. Have a set of LogWriter classes, each with the responsibility of persisting the object to a particular data store. So you might have a MySqlLogWriter, a FileLogWriter, etc.
Each of these objects can be instantiated on its own, and then the object to persist can be passed to it:
lw = FileLogWriter.new("log_file.txt")
lw.write(log)
You really should separate your concerns here. The message and the way the message is saved are two separate things. In fact, in many cases it would also be more efficient not to open a new MySQL connection or a new file handle for every message.
I would create a Saver class, extended by FileSaver and MysqlSaver, each of which has a save method that is passed your message. The saver is responsible for pulling out the parts of the message that apply and saving them to the medium it is responsible for.
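A minimal sketch of that idea (the mysql2 client usage and the log_entries table name are assumptions):

require 'mysql2'

class Saver
  def save(message)
    raise NotImplementedError
  end
end

class FileSaver < Saver
  def initialize(path)
    @file = File.open(path, 'a') # one file handle, reused for every message
  end

  def save(message)
    @file.puts(message.to_s)
    @file.flush
  end
end

class MysqlSaver < Saver
  def initialize(client) # e.g. Mysql2::Client.new(host: "...", ...)
    @client = client     # one connection, reused for every message
  end

  def save(message)
    body = @client.escape(message.to_s)
    @client.query("INSERT INTO log_entries (body) VALUES ('#{body}')")
  end
end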