Ruby plugin architecture - ruby

I have to develop plugin-based software in Ruby. What's the best architecture to use?
I am thinking about plugins like this, each in a separate .rb file:
class MyPlugin < Plugin
  def info
    # return plugin info
  end

  def run
    # run
  end
end
How can I write a plugin manager to call these plugins?

You'd have to clearly define what "calling the plugins" exactly means.
For a start, you can check out here how to require all the files from a directory: put your plugins into a single directory and require them all.
Then you need to somehow pick which one to use, whether by:
passing its class name as a string through a command-line argument or a config file parameter, and looking the class up by that name using const_get, or
presenting the user with a list of all plugins (all descendants of your Plugin class) - check out here how to do it
Finally, you instantiate your plugin and do whatever you need to do with it.
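A minimal sketch of such a manager, assuming the plugins live in a plugins/ directory next to the script and every plugin subclasses Plugin (the directory name and the registry method are assumptions):

class Plugin
  # Keep a registry of every subclass as it gets loaded
  def self.descendants
    @descendants ||= []
  end

  def self.inherited(subclass)
    descendants << subclass
    super
  end
end

# Load every plugin file, then instantiate and run each registered plugin
Dir[File.join(__dir__, 'plugins', '*.rb')].sort.each { |file| require file }

Plugin.descendants.each do |plugin_class|
  plugin = plugin_class.new
  puts plugin.info
  plugin.run
end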

Related

RSpec/Capybara without Rails Setup for Page Objects/Support Files?

So I'm going to be writing a framework that uses RSpec/Capybara and Selenium to do some automated testing on an app. I'm using purely RSpec (no Rails) and I'm having some difficulties getting everything set up correctly.
The problem I am running into is when I'm trying to include page object files into my specs. Right now my directory looks like:
spec (Actual .spec files go here, spec_helper is also here)
spec/support (Page Object Files go here)
In my page object files, my first problem is that they can't find the Capybara DSL unless I do include Capybara::DSL at the top of each page object file (which apparently isn't a good idea, according to the warning message it gives me).
The other problem is that when including my page object files in my tests I need to do a require_relative to the specific file (which is sort of a pain), otherwise it can't find the class.
Is there something in spec_helper that will fix this? I'm not an expert in Ruby, so I assume I'm missing something. I can't figure out the Capybara DSL problem either; for example, requiring 'capybara/rspec' doesn't seem to help.
You have a few options for Capybara::DSL -
Include Capybara::DSL into whatever base class you're using for your page objects so the Capybara methods are available in that class (as opposed to on the global object) - i.e. the include Capybara::DSL goes inside the class definition, not at the top of the file (see the sketch after this list).
Keep an instance of Capybara::Session in your page object instances and call all capybara methods on that session instance - #my_session.find(...), etc.
Always use Capybara.current_session when calling Capybara methods - Capybara.current_session.find(...), etc
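A minimal sketch of the first option, assuming a BasePage class that your page objects inherit from (the class and method names here are just illustrations):

require 'capybara'

class BasePage
  include Capybara::DSL   # Capybara methods become instance methods of page objects, not global
end

class LoginPage < BasePage
  def open
    visit('/login')        # visit comes from Capybara::DSL
  end
end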
As for the require issue - you can either use require_relative to specify files to require relative to the current file, or use require and specify the location from the project's root (current working directory). If you have a directory of files you want loaded, you can add something like the following to your spec_helper.rb to tell it to load all the files:
Dir[File.dirname(__FILE__) + "/support/**/*.rb"].each {|f| require f }

World vs Include in cucumber/ruby

Can anyone tell me the difference between include and World? In my env.rb file I want to expose a module, and I can't really see the difference between the approaches...
Module example:
module Test
  ...
end
In my env.rb file:
World(Test)
or
include Test
Besides that, can anyone state the pros/cons of using World rather than include in cucumber/ruby?
World(Test) is the preferred method.
The purpose of using World(Test) is that you don't pollute the global namespace, yet you still have all the functionality of that module available in your stepdefs.
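For illustration, a minimal env.rb sketch (the module name and method are made up):

# features/support/env.rb
module KnowsTestHelpers
  def default_user
    'test_user'
  end
end

World(KnowsTestHelpers)
# Step definitions can now call default_user directly, without the module
# leaking into every object the way a top-level include would.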

Svn2Rally Connector Extension

I have used the latest connector from Rally and set it up with Task Scheduler to run periodically. It works well; however, I have encountered problems when trying to extend it.
Within the config.yml file used, there is an UpdateArtifactState flag that, I believe, when set to True makes use of the statex.rb file. This file handles how the commit message from SVN is parsed. It is here that I find my problem:
I want to extend the connector to allow the # symbol to be included in the Rally task identifier (DE55555 -> #DE55555, for example). However, upon testing, this file does not seem to be used.
Question: if my assumption is correct, and the statex.rb file is merely an example and not used in execution, how can I extend the Rally Connector to pick up tags the way I see fit?
You may try the following:
Make a new Ruby class and put it into the extension subdirectory.
Example: extension/my_state_extractor.rb
In this file, define a class named 'MyStateExtractor'.
Pattern your file on the statex.rb file.
Then, in your config in the Rally section, you'll need an entry of
StateExtractorClass : MyStateExtractor(message)
Customarily this entry will follow the entry for UpdateArtifactState : True
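As a rough illustration only (the real class interface has to mirror whatever statex.rb defines; the method name and regex below are assumptions), the extension file might look something like this:

# extension/my_state_extractor.rb
class MyStateExtractor
  def initialize(message)
    @message = message
  end

  # Pull Rally artifact identifiers out of the commit message,
  # accepting an optional leading '#', e.g. DE55555 or #DE55555
  def artifact_ids
    @message.scan(/#?\b((?:DE|TA|US)\d+)\b/).flatten
  end
end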

Best way of using Ruby module for configuration data

I am somewhat new to Ruby, especially the more advanced concepts like modules and mixins, so I might be using module totally out of context.
I am currently writing an internal test framework using Capybara and I am trying to figure out the best/easiest way of handling configuration data. I have a file called config.rb and within it I want to store configuration settings per environment. For example:
module QAConfiguration
  # config data goes here
end

module DevConfiguration
  # config data goes here
end
The simplest example of configuration data is usernames and passwords. QA and Dev, of course, use different users. I am thinking of two different ways of going about this, but I want to make sure I am following at least a decent practice and not going into the weeds.
module QAConfiguration
  USERNAME = 'test'
  PASSWORD = 'test'
end
or..
module QAConfiguration
  def username
    'test'
  end
end
And so on. Which is the best way of approaching this?
A module probably isn't the best way to implement this. Generally when testing (I use rspec) we use helper files that contain reusable code.
Object attributes like usernames and passwords are usually handled by Factories. A great gem for factories is FactoryGirl.
A common way to store configuration in Ruby/Rails is in a YAML file. You could, for example, create a file called config/test_conf.yml in YAML format:
username: 'your_user'
password: 'somepass'
Then where you need the config data:
config = YAML.load(File.read("#{Rails.root}/config/test_conf.yml"))
puts config['username']
And finally, you will usually only put a test_conf.yml.example in git/svn, and note in your app's setup README that users need to cp config/test_conf.yml.example config/test_conf.yml and edit the file.
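Since the question distinguishes QA and Dev, one way to extend this (a sketch; the keys and environment variable are assumptions) is to nest the settings per environment and pick one at load time:

# config/test_conf.yml
# qa:
#   username: 'qa_user'
#   password: 'qa_pass'
# dev:
#   username: 'dev_user'
#   password: 'dev_pass'

require 'yaml'

env    = ENV.fetch('TEST_ENV', 'qa')   # choose the environment at run time
config = YAML.load(File.read('config/test_conf.yml'))[env]
puts config['username']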

How can I build a modular command-line interface using rubygems?

I've written a command-line tool for manipulating genome scaffolds called "Scaffolder". At the moment all the tools I want to use are hard-coded into the library. For instance, these tools "validate" or "build" the scaffold. I'd like to split these tools out into their own gems to make it more modular and to allow third parties to write their own commands.
The ideal case would be that I run "gem install scaffolder-validate" and this gem-bundled command would then be available as part of Scaffolder. I know a couple of libraries make it easy to build a command-line interface: thor, commander, gli, .... However, I don't think any of them caters for this type of functionality.
My question is: how can I use the gem structure to create a module structure for installing these commands? Specifically, how can the installed commands be auto-detected and loaded? With some prefix in the gem name, scaffolder-*, and then searching RubyGems? How could I test this with Cucumber?
So, one thing you can do is to decide on a canonical name for your plugins, and then use that convention to load things dynamically.
It looks like your code is all under a module Scaffolder, so you can create plugins according to these rules:
Scaffolder gems must be named scaffold-tools-plugin-pluginname
All plugins expose one class, named Scaffolder::Plugin::Pluginname
That class must conform to some interface you document, and possibly provide a base class for
Given that, you can then accept a command-line argument of the plugins to load (assuming OptionParser):
plugin_names = []
opts.on('--plugins PLUGINS', 'List of plugins') do |plug|
  plugin_names << plug
end
Then:
plugin_classes = []
plugin_names.each do |plugin_name|
  require "scaffold-tools-plugin-#{plugin_name}"
  # Look up the class following the convention above; note the capitalized constant name
  plugin_classes << Kernel.const_get("Scaffolder::Plugin::#{plugin_name.capitalize}")
end
Now plugin_classes is an Array of the class objects for the plugins configured. Supposing they all conform to some common constructor and some common methods:
plugin_classes.each do |plugin_class|
  plugin = plugin_class.new(args)
  plugin.do_its_thing(other, args)
end
Obviously, when doing a lot of dynamic class loading like this, you need to be careful and trust the code that you are running. I'm assuming that for such a small domain it won't be a concern, but just be wary of requiring random code.
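For the auto-detection part of the question, one possible approach (a sketch; the scaffolder- prefix, and the assumption that each gem's main file matches its name, are illustrative) is to scan the installed gem specifications for a naming prefix instead of passing --plugins:

require 'rubygems'

# Find every installed gem whose name starts with the plugin prefix
plugin_specs = Gem::Specification.select { |spec| spec.name.start_with?('scaffolder-') }

plugin_specs.each do |spec|
  require spec.name   # assumes each gem ships lib/<gem name>.rb
end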
Hm, tricky one. One simple idea I have is that the main gem just tries to require all the others, catches the LoadError when they are not there, and disables the respective features. I do this in one of my gems: if HighLine is present, the user gets prompted for a password; if it isn't, there has to be a config file.
begin
  require 'highline'
  highline = true
rescue LoadError
  highline = false
end
If you have a lot of gems this could become ugly though...
