Is this Ruby class really badly designed?

I'm quite new to OOP and I'm concerned that this class I've written is really poorly designed. It seems to disobey several principles of OOP:
It doesn't contain its own data, but relies on a YAML file for values.
Its methods need to be called in a particular order.
It has a lot of instance variables and methods.
It does work, however. It's robust, but I'll need to modify the source code to add new getter methods every time I add page elements.
It's a model of an HTML document used in an automated test suite. I keep thinking that some of the methods could be put in subclasses, but I'm concerned that I'd have too many classes then.
What do you think?
class BrandFlightsPage < FlightSearchPage
  attr_reader :route, :date, :itinerary_type, :no_of_pax,
              :no_results_error_container, :submit_button_element

  def initialize(browser, page, brand)
    super(browser, page)
    # Get reference to config file
    config_file = File.join(File.dirname(__FILE__), '..', 'config', 'site_config.yml')
    # Store hash of config values in local variable
    config = YAML.load_file config_file
    @brand = brand # brand is specified by the customer in the features file
    # Define instance variables from the hash keys
    config.each do |k, v|
      instance_variable_set("@#{k}", v)
    end
  end

  def visit
    @browser.goto(@start_url)
  end

  def set_origin(origin)
    self.text_field(@route[:attribute] => @route[:origin]).set origin
  end

  def set_destination(destination)
    self.text_field(@route[:attribute] => @route[:destination]).set destination
  end

  def set_departure_date(outbound)
    self.text_field(@route[:attribute] => @date[:outgoing_date]).set outbound
  end

  def set_journey_type(type)
    if type == "return"
      self.radio(@route[:attribute] => @itinerary_type[:single]).set
    else
      self.radio(@route[:attribute] => @itinerary_type[:return]).set
    end
  end

  def set_return_date(inbound)
    self.text_field(@route[:attribute] => @date[:incoming_date]).set inbound
  end

  def set_number_of_adults(adults)
    self.select_list(@route[:attribute] => @no_of_pax[:adults]).select adults
  end

  def set_no_of_children(children)
    self.select_list(@route[:attribute] => @no_of_pax[:children]).select children
  end

  def set_no_of_seniors(seniors)
    self.select_list(@route[:attribute] => @no_of_adults[:seniors]).select seniors
  end

  def no_flights_found_message
    @browser.div(@no_results_error_container[:attribute] => @no_results_error_container[:error_element]).text
    raise UserErrorNotDisplayed, "Expected user error message not displayed" unless divFlightResultErrTitle.exists?
  end

  def submit_search
    self.link(@submit_button_element[:attribute] => @submit_button_element[:button_element]).click
  end
end

If this class is designed as a Facade, then it's not (too) bad design. It provides a coherent, unified way to perform related operations that rely on a variety of unrelated behavior holders.
That said, the separation of concerns is poor, in that this class essentially couples together all the various implementation details, which might turn out to be somewhat tricky to maintain.
Finally, the fact that the methods need to be called in a specific order may hint that you're trying to model a state machine - in which case it should probably be broken down into several classes (one per "state"). I don't think there's a "too many methods" or "too many classes" point you'd reach; what matters is that the features provided by each class are coherent and make sense. Where to draw the line is up to you and your specific domain's requirements.
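As an illustration of that kind of split (purely a hypothetical sketch - the PassengerSelection name and its locator hash are invented here, not taken from the original suite), the passenger-count fields could live in a small collaborator that the page object delegates to:
class PassengerSelection
  def initialize(page, attribute, locators)
    @page = page            # the Watir-style page/browser wrapper
    @attribute = attribute  # e.g. :id or :name
    @locators = locators    # e.g. { adults: "paxAdults", children: "paxChildren" }
  end

  def set_adults(count)
    @page.select_list(@attribute => @locators[:adults]).select count
  end

  def set_children(count)
    @page.select_list(@attribute => @locators[:children]).select count
  end
end
The page object would then expose a single passenger_selection reader instead of three setters, and adding a new passenger type only touches this one class.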

Related

Same dataset-/filter-logic on different models in sequel (DRY)

I am building a Sinatra web app with a Sequel database backend. The primary task of this app is collecting status messages from different robots, storing them in a database and providing various ways to view them. A common denominator in these messages is that they provide a WGS84 position in lat/lon.
Now I want to provide various filters for querying messages based on their positions, but I want to write these filters only once, test them only once, and re-use them in all model classes with a lat/lon entry.
To boil it down to a very simple example:
Sequel.migration do
  up do
    create_table(:auvmessages) do
      primary_key :id
      Float :lat
      Float :lon
      String :message
    end
    create_table(:asvmessages) do
      primary_key :id
      Float :lat
      Float :lon
      Integer :chargestate
    end
  end
end

class Auvessage < Sequel::Model
  dataset_module do
    def north_of(lat)
      self.where { latitude > lat }
    end
  end
end

class Asvessage < Sequel::Model
  dataset_module do
    def north_of(lat)
      self.where { latitude > lat }
    end
  end
end
Both model classes have north_of(lat) to filter for messages which originate north of a given latitude. This function is fairly simple and you could easily repeat it two or three times, but what about more complex cases?
I have played around a bit with modules outside of dataset_module, but nothing seemed right.
Is there a preferred way to re-use filters across different models? I have searched a lot, but didn't find a satisfying answer.
Edit:
To make my question a bit more precise: I want to move all functions like north_of(lat) (there are a lot more) into a service class. What I want to know now is the best way to integrate that service class into a Sequel model:
"Just" include it?
Extend dataset_module, and if so, how?
Writing a dataset-plugin?
...
You can pass an existing module to dataset_module:
module NorthOf
  def north_of(lat)
    where { latitude > lat }
  end
end
Auvessage.dataset_module NorthOf
Asvessage.dataset_module NorthOf
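Illustrative usage under the setup above (the latitude value is arbitrary); because both datasets are extended with the module, the filter is available on each:
Auvessage.dataset.north_of(53.5).all
Asvessage.dataset.north_of(53.5).count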
As a followup: I have taken @jeremy-evans' answer and extended it with a parametrisation scheme for modules. From now on I can test my filters by mocking, and my model classes just have a list of includes in their dataset_module.
I like it.
As an explanation, here is my slightly modified example:
Sequel.migration do
  up do
    create_table(:auvmessages) do
      primary_key :id
      Float :lat
      Float :lon
      String :message
    end
    create_table(:asvmessages) do
      primary_key :id
      Float :gps_lat
      Float :gps_lon
      Integer :chargestate
    end
  end
end

module GPSFilter
  def self.create(lat_name, lon_name)
    Module.new do
      include GPSFilter
      define_method :lat_col_name do
        lat_name
      end
      define_method :lon_col_name do
        lon_name
      end
    end
  end

  def north_of(lat)
    where("#{lat_col_name} > ?", lat)
  end

  ##### default parameters #####
  def lon_col_name
    "lon"
  end

  def lat_col_name
    "lat"
  end
end

class Auvmessage < Sequel::Model
  dataset_module do
    include GPSFilter
  end
end

class Asvmessage < Sequel::Model
  dataset_module do
    include GPSFilter.create :gps_lat, :gps_lon
  end
end
Here is a link to Uncle Bob's Screaming Architecture blog post which might be of help.
Now, answering your question, it seems that north_of, as well as many other methods, are actually part of your domain logic. This logic should not go in persistence abstractions, or controllers, or views, etc.
Design, build and write tests for the set of objects that solves your problem in the language of the domain of your problem. Then, you'll have at hand a rich set of functionality that you can simply use on Models, Controllers, CLIs, etc.
I usually put my service objects in a lib/ directory and write simple unit tests, without any of the persistence boilerplate that sets up test databases. They usually run very fast as well.
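As a rough sketch of that idea (GeoFence is an invented name, not something from the answer above), the filtering rule can live in a plain object with no database involved at all:
# Plain domain object: no Sequel, no database, trivially unit-testable.
class GeoFence
  def initialize(min_lat)
    @min_lat = min_lat
  end

  def north_of?(position)
    position.fetch(:lat) > @min_lat
  end
end

# A fast test needs nothing but plain hashes:
fence = GeoFence.new(53.0)
raise "expected north" unless fence.north_of?(lat: 60.0, lon: 10.0)
raise "expected south" if fence.north_of?(lat: 40.0, lon: 10.0)
The model's dataset method then only has to translate this rule into SQL, while the rule itself stays fast to test.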

Ruby mixins looking for a best practice

I'm writing a Ruby gem where I have a Connection module for the Faraday configuration:
module Example
  module Connection
    private

    def connection
      Faraday.new(url: 'http://localhost:3000/api') do |conn|
        conn.request :url_encoded             # form-encode POST params
        conn.response :logger                 # log requests to STDOUT
        conn.adapter Faraday.default_adapter  # make requests with Net::HTTP
        conn.use Faraday::Response::ParseJson
        conn.use FaradayMiddleware::RaiseHttpException
      end
    end
  end
end
The second module, which makes the API requests, looks like this:
module Example
  module Request
    include Connection

    def get(uri)
      connection.get(uri).body
    end

    def post(url, attributes)
      response = connection.post(url) do |request|
        request.body = attributes.to_json
      end
    end

    def self.extended(base)
      base.include(InstanceMethods)
    end

    module InstanceMethods
      include Connection

      def put(url, attributes)
        response = connection.put(url) do |request|
          request.body = attributes.to_json
        end
      end
    end
  end
end
The Customer class where I use Request looks like this:
module Example
  class Customer
    extend Request

    attr_accessor :id, :name, :age

    def initialize(attrs)
      attrs.each do |key, value|
        instance_variable_set("@#{key}", value)
      end
    end

    def self.all
      customers = get('v1/customer')
      customers.map { |cust| new cust }
    end

    def save
      params = {
        id: self.id,
        age: self.age,
        name: self.name
      }
      put("v1/customers/#{self.id}", params)
    end
  end
end
So here you can see that in the Customer.all class method I'm calling the Request#get method, which is available because I extended Request in Customer. Then I'm using the self.extended hook in the Request module to make Request#put available to Customer instances. My question is: is this a good approach to using mixins, or do you have any suggestions?
Mixins are a strange beast. Best practices vary depending on who you talk to. As far as reuse goes, you've achieved that here with mixins, and you have a nice separation of concerns.
However, mixins are a form of inheritance (you can take a peek at #ancestors). I would challenge you on this: you shouldn't use inheritance here, because a Customer doesn't have an "is-a" relationship with Connection. I would recommend you use composition instead (e.g. pass in the Connection/Request), as it makes more sense in this case and gives stronger encapsulation.
One guideline for writing mixins is to make everything end in "-able", so you would have Enumerable, Sortable, Runnable, Callable, etc. In this sense, mixins are generic extensions that provide some sort of helpers that depend on a very specific interface (e.g. Enumerable depends on the class implementing #each).
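As a tiny illustration of that style (Summable and NumberBag are made-up names for this sketch), the mixin relies only on the including class providing #each, just as Enumerable does:
# Made-up "-able" mixin: depends only on the including class implementing #each.
module Summable
  def sum_all
    total = 0
    each { |x| total += x }
    total
  end
end

class NumberBag
  include Summable

  def initialize(*numbers)
    @numbers = numbers
  end

  def each(&block)
    @numbers.each(&block)
  end
end

NumberBag.new(1, 2, 3).sum_all # => 6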
You could also use mixins for cross-cutting concerns. For example, we've used mixins in the past in our background jobs so that we could add logging for example without having to touch the source code of the class. In this case, if a new job wants logging, then they just mixin the concern which is coupled to the framework and will inject itself properly.
My general rule of thumb is: don't use them if you don't have to. They make understanding the code a lot more complicated in most cases.
EDIT: adding an example of composition. In order to maintain the interface you have above, you'd need some sort of global connection state, so it may not make sense. Here's an alternative that uses composition:
class CustomerConnection
  # CustomerConnection is composed of a Connection and retains isolation
  # of responsibilities. It also uses constructor injection (e.g. takes
  # its dependencies in the constructor), which means easy testing.
  def initialize(connection)
    @connection = connection
  end

  def all_customers
    @connection.get('v1/customers').map { |res| Customer.new(res) }
  end
end

connection = Connection.new
CustomerConnection.new(connection).all_customers
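To show why that constructor injection makes testing easy, here is a hypothetical stub in place of the real Faraday-backed connection (StubConnection is invented for this sketch, and it assumes the Customer class from the question is loaded):
# Stand-in for the real connection: returns canned data, no HTTP involved.
class StubConnection
  def get(_uri)
    [{ 'id' => 1, 'name' => 'Ada', 'age' => 36 }]
  end
end

customers = CustomerConnection.new(StubConnection.new).all_customers
raise "expected exactly one customer" unless customers.size == 1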

Sharing a class instance between two classes

I have two different classes that both represent objects that need to be persisted to my database and now I want to share the database client object between the two classes. I want to avoid instantiating the client object more than once.
Currently I do this by using a global variable
$client = Mysql2::Client.new(:database => "myDb", :user => "user", :password => "password", :host => "localhost")

class Person
  def save
    $client.query("INSERT INTO persons")
  end
end

class Car
  def save
    $client.query("INSERT INTO cars")
  end
end
This works, but I am wondering if there are more correct ways to do this and why they are more correct?
You can inherit from a parent class. This allows you to share common functionality across objects and follows the DRY (don't repeat yourself) principle. It will also allow you to protect your DB connection with locks, rescues, queues, pools, and whatever else you may want, without having to worry about it in your child classes.
class Record
  @table_name = nil
  @@client = Mysql2::Client.new(:database => "myDb", :user => "user", :password => "password", :host => "localhost")

  def save
    # @table_name is set at the class level, so read it from the class
    table = self.class.instance_variable_get(:@table_name)
    @@client.query("INSERT INTO #{table}") if table
  end
end

class Person < Record
  @table_name = "persons"
end

class Car < Record
  @table_name = "cars"
end
While we are on the subject, you should look at using ActiveRecord for handling your database models and connections. It already does pretty much everything you'll need and will be more compatible with other gems already out there. It can be used without Rails.
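For reference, a minimal sketch of ActiveRecord outside Rails could look like this (gem and connection details are assumptions mirroring the example above, and the persons table is assumed to have a name column):
require 'active_record'

# Standalone connection, no Rails involved (adapter/credentials are placeholders).
ActiveRecord::Base.establish_connection(
  adapter:  'mysql2',
  database: 'myDb',
  username: 'user',
  password: 'password',
  host:     'localhost'
)

class Person < ActiveRecord::Base
  self.table_name = 'persons' # ActiveRecord would otherwise expect "people"
end

Person.create(name: 'Alice') # INSERT INTO persons ...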
As an alternative to using inheritance, why not consider a simple Singleton pattern? This could make your models cleaner by moving that responsibility outside your classes, and it eliminates the need for inheritance.
The example below illustrates this. Only a single instance of the DataManager class can exist, so you'll only instantiate it once - but you can use it everywhere:
require 'singleton'

class DataManager
  include Singleton
  attr_accessor :last_run_query

  def initialize
    if @client.nil?
      p "Initialize the Mysql client here - note that this'll only be called once..."
    end
  end

  def query(args)
    # do your magic here
    @last_run_query = args
  end
end
Next, calling it using the .instance accessor is a breeze - and will always point to one single instance, like so:
# Fetch, or create a new singleton instance
first = DataManager.instance
first.query('drop table mother')
p first.last_run_query
# Again, fetch or create a new instance
# this'll actually just fetch the first instance from above
second = DataManager.instance
p second.last_run_query
# last line prints: "drop table mother"
For the record, the Singleton pattern can have some downsides and using it frequently results in a never-ending debate on whether you should use it or not. But in my opinion it's a decent alternative to your specific question.

Issue with class loading order (EDIT: works, although some odd behavior along the way)

I'm working on a project to recreate some of the functionality of ActiveRecord. Here's the portion that isn't working
module Associations
  def belongs_to(name, params)
    self.class.send(:define_method, :other_class) do |name, params|
      (params[:class_name] || name.camelize).constantize
    end

    self.class.send(:define_method, :other_table_name) do |other_class|
      other_class.table_name
    end
    .
    .
    .
    o_c = other_class(name, params)
    # puts this and other (working) values in a query
    query = <<-SQL
      ...
    SQL
    # sends it off with db.execute(query)...
I'm building towards this testing file:
require 'all_files' # holds SQLClass & others

pets_db_file_name = File.expand_path(File.join(File.dirname(__FILE__), "pets.db"))
DBConnection.open(pets_db_file_name)

# class Person
# end

class Pet < SQLClass
  set_table_name("pets")
  set_attrs(:id, :name, :owner_id)
  belongs_to :person, :class_name => "Person", :primary_key => :id, :foreign_key => :owner_id
end

class Person < SQLClass
  set_table_name("people")
  set_attrs(:id, :name)
  has_many :pets, :foreign_key => :owner_id
end
.
.
.
Without any changes I received
.../active_support/inflector/methods.rb:230:in `block in constantize': uninitialized constant Person (NameError)
Just to make sure that it was an issue with the order of loading the classes in the file, I began the file with the empty Person class, which, as predicted, gave me
undefined method `table_name' for Person:Class (NoMethodError)
Since this is a learning project I don't want to change the test to make my code work (i.e. open all the classes, set all the tables/attributes, then reopen them for belongs_to). But I'm stuck on how else to proceed.
EDIT SQLClass:
class SQLClass < AssignmentClass
  extend SearchMod
  extend Associations

  def self.set_table_name(table_name)
    @table_name = table_name
  end

  def self.table_name
    @table_name
  end

  # some more methods for finding rows, and creating new rows in existing tables
And the relevant part of AssignmentClass uses send on attr_accessor to give functionality to set_attrs and makes sure that before you initialize a new instance of a class all the names match what was set using set_attrs.
This highlights an important difference between dynamic, interpreted Ruby (et al) and static, compiled languages like Java/C#/C++. In Java, the compiler runs over all your source files, finds all the class/method definitions, and matches them up with usages. Ruby doesn't work like this -- a class "comes into existence" after executing its class block. Before that, the Ruby interpreter doesn't know anything about it.
In your test file, you define Pet first. Within the definition of Pet, you have belongs_to :person. belongs_to does :person.constantize, attempting to get the class object for Person. But Person doesn't exist yet! Its definition comes later in the test file.
There are a couple ways I can think that you could try to resolve this:
One would be to do what Rails does: define each class in its own file, and make the file names conform to some convention. Override const_missing, and make it automatically load the file which defines the missing class. This will make load-order problems resolve themselves automatically.
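A rough sketch of that idea (the file-naming convention and load path are assumptions, and this is far simpler than what Rails actually does):
# Naive convention-based autoloading: "Person" -> person.rb on the load path.
def Object.const_missing(name)
  file = name.to_s.downcase # handles single-word class names only
  require file
  if const_defined?(name)
    const_get(name)
  else
    raise NameError, "expected #{file}.rb to define #{name}"
  end
end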
Another solution would be to make belongs_to lazy. Rather than looking up the Person class object immediately, it could just record the fact that there is an association between Pet and Person. When someone tries to call pet.person, use a method_missing hook to actually define the method. (Presumably, by that time all the class definitions will have been executed.)
Another way would be to do something like:
define_method(belongs_to) do
  belongs_to_class = belongs_to.constantize
  self.class.send(:define_method, belongs_to) do
    # put actual definition here
  end
  self.send(belongs_to)
end
This code is not tested, it's just to give you an idea! Though it's a pretty mind-bending idea, perhaps. Basically, you define a method which redefines itself the first time it is called. Just like using method_missing, this allows you to delay the class lookup until the first time the method is actually used.
If I can say one more thing: though you say you don't want to "overload" method_missing, I don't think that's as much of a problem as you think. It's just a matter of extracting code into helper methods to keep the definition of method_missing manageable. Maybe something like:
def method_missing(name, *a, &b)
  if has_belongs_to_association?(name)
    invoke_belongs_to_association(name, a, b)
  elsif has_has_many_association?(name)
    invoke_has_many_association(name, a, b)
  # more...
  else
    super
  end
end
Progress! Inspired by Alex D's suggestion to use method_missing to delay the creation, I instead used define_method to create a method for the name, like so:
define_method(:other_class) do |name, params|
  (params[:class_name] || name.camelize).constantize
end

define_method(:other_table_name) do |other_class|
  other_class.table_name
end

# etc.

define_method(name) do # |params| turns out I didn't need to pass in `params` at all, but:
  # p "---#{params}" (This is line 31: when testing this out I got the strangest error
  # .rb:31:in `block in belongs_to': wrong number of arguments (0 for 1) (ArgumentError)
  # if anyone can explain this I would be grateful.
  # I had declared an @params class instance variable and a getter for it,
  # but nothing that should make params require an argument)
  f_k = foreign_key(name, params)
  p f_k
  o_c = other_class(name, params)
  o_t_n = other_table_name(o_c)
  p_k = primary_key(params)
  query = <<-SQL
    SELECT *
    FROM #{o_t_n}
    WHERE #{p_k} = ?
  SQL
  row = DBConnection.execute(query, self.send(f_k))
  o_c.parse_all(row)
end

Should the Applicant class "require 'mad_skills'" or "include 'mad_skills'"?

Also, what does "self.send attr" do? Is attr assumed to be a private instance variable of the ActiveEngineer class? Are there any other issues with this code in terms of Ruby logic?
class Applicant < ActiveEngineer
  require 'ruby'
  require 'mad_skills'
  require 'oo_design'
  require 'mysql'

  validates :bachelors_degree

  def qualified?
    [:smart, :highly_productive, :curious, :driven, :team_player].all? do |attr|
      self.send attr
    end
  end
end

class Employer
  include TopTalent
  has_millions :subscribers, :include=>:mostly_women
  has_many :profits, :revenue
  has_many :recent_press, :through=>[:today_show, :good_morning_america,
    :new_york_times, :oprah_magazine]
  belongs_to :south_park_sf
  has_many :employees, :limit=>10

  def apply(you)
    unless you.build_successful_startups
      raise "Not wanted"
    end
    unless you.enjoy_working_at_scale
      raise "Don't bother"
    end
  end

  def work
    with small_team do
      our_offerings.extend you
      subscribers.send :thrill
      [:scaling, :recommendation_engines, : ].each do |challenge|
        assert intellectual_challenges.include? challenge
      end
      %w(analytics ui collaborative_filtering scraping).each { |task| task.build }
    end
  end
end

def to_apply
  include CoverLetter
  include Resume
end
require 'mad_skills' loads the code in mad_skills.rb (or it loads mad_skills.so/.dll, depending on which one exists). You need to require a file before being able to use classes, methods, etc. defined in that file (though in Rails, files are automatically loaded when trying to access classes that have the same name as the file). Putting require inside a class definition does not change its behaviour at all (i.e. putting it at the top of the file would not make a difference).
include MadSkills takes the module MadSkills and includes it into Applicant's inheritance chain, i.e. it makes all the methods in MadSkills available to instances of Applicant.
self.send attr executes the method with the name specified in attr on self and returns its return value. E.g. with attr = "hello", self.send(attr) is the same as self.hello. In this case it executes the methods smart, highly_productive, curious, driven, and team_player and checks that all of them return a truthy value.
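A quick illustration of that last point (this Applicant is a stripped-down stand-in, not the class from the question):
class Applicant
  def smart
    true
  end
end

applicant = Applicant.new
attr = :smart
applicant.send(attr) # => true, same as calling applicant.smart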
