I'm attempting to clean up my view by moving Rails' sanitizer method to a helper, but it's not producing the desired result. So below is what my index action looks like. I know it's ugly and not very OOP, but I simplified it down so I could follow what was happening when debugging.
I'm attempting to loop through all the sources' attributes, running the sanitizer on any attribute that is a non-empty string, replacing the original strings with the sanitized ones (transform_values!), and writing over the original @sources (map!).
I tried storing them in a different variable than @sources and using .each instead of .map!, but the sanitized values don't make it through.
def index
  @sources = Source.all
  @sources.map! { |source|
    source.attributes.transform_values! { |attr|
      attr.blank? || !attr.is_a?(String) ? attr : ActionController::Base.helpers.sanitize(attr)
    }
  }
end
However, after examining my list of sources in the view, I see it's removing the source instances and instead returning a nondescript array of hashes. I can loop through these, but I can't call specific attributes like source.author, which is not great.
Here are two screenshots for reference: the first ("Unsanitized sources") shows what it should look like, and the second ("Sanitized sources") shows what I'm currently getting.
map! replaces each item in the array with the result of the block. That is not what you intend, because you just want to mutate the items, not replace them with something else. Using a plain each instead of map! would do the trick.
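For example, here is a minimal sketch of the each-based version. One assumption in this sketch: because source.attributes returns a copy of the attributes hash, the sanitized value is written back onto the record with []= so the change is visible when the view renders; nothing is saved to the database.

def index
  @sources = Source.all
  @sources.each do |source|
    source.attributes.each do |name, value|
      next if value.blank? || !value.is_a?(String)

      # Write the sanitized value back onto the record itself;
      # mutating the hash returned by #attributes would be lost.
      source[name] = ActionController::Base.helpers.sanitize(value)
    end
  end
end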
On another note, sanitization is actually a responsibility of the view (that's why it's defined in a helper). If you often need to sanitize with the same options, define your own helper:
module ApplicationHelper
  def sany(str)
    sanitize(str, tags: %w[...])
  end
end
<%= sany(source.some_attr) %>
You could also set the default sanitization options following the documentation:
# In config/application.rb
config.action_view.sanitized_allowed_tags = ['strong', 'em', 'a']
config.action_view.sanitized_allowed_attributes = ['href', 'title']
I'm using SitePrism to create some POM tests. One of my page classes looks like this:
class HomePage < SitePrism::Page
  set_url '/index.html'

  element :red_colour_cell, "div[id='colour-cell-red']"
  element :green_colour_cell, "div[id='colour-cell-green']"
  element :blue_colour_cell, "div[id='colour-cell-blue']"

  def click_colour_cell(colour)
    case colour
    when 'red'
      has_red_colour_cell?
      red_colour_cell.click
    when 'green'
      has_green_colour_cell?
      green_colour_cell.click
    when 'blue'
      has_blue_colour_cell?
      blue_colour_cell.click
    end
  end
end
The method click_colour_cell() gets its string value from a Capybara test step that calls it.
If I need to create additional similar methods in the future, it can become rather tedious and unwieldy having so many case switches to determine the code flow.
Is there some way I can create a variable that is dynamically named by the string value of another variable? For example, I would like to do something for click_colour_cell() that resembles the following:
def click_colour_cell(colour)
  has_@colour_colour_cell?
  @colour_colour_cell.click
end
where @colour represents the value of the passed colour argument, and would be interpreted by Ruby as:
def click_colour_cell('blue')
  has_blue_colour_cell?
  blue_colour_cell.click
end
Isn't this what instance variables are used for? I've tried the above proposal as a solution, but I receive the ambiguous error:
syntax error, unexpected end, expecting ':'
end
^~~ (SyntaxError)
If it is an instance variable that I need to use, then I'm not sure I'm using it correctly. If it's something else I need to use, please advise.
Instance variables are used to define properties of an object.
Instead, you can achieve this with the send method and string interpolation.
Try the following:
def click_colour_cell(colour)
  send("has_#{colour}_colour_cell?")
  send("#{colour}_colour_cell").click
end
About send:
send is a method defined on Object (the parent class of all classes).
As the documentation says, it invokes the method identified by the given String or Symbol. You can also pass arguments to the method you are invoking.
In the snippet below, send will look up a method named testing and invoke it.
class SendTest
  def testing
    puts 'Hey there!'
  end
end

obj = SendTest.new
obj.send("testing")
obj.send(:testing)
OUTPUT
Hey there!
Hey there!
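send can also forward arguments to the method it invokes. Here is a small sketch; the greet method is just an illustration, not part of the original code:

class SendTest
  def greet(name, punctuation: '!')
    puts "Hey #{name}#{punctuation}"
  end
end

# Arguments after the method name are passed straight through to greet.
SendTest.new.send(:greet, 'there', punctuation: '?')
# => prints "Hey there?"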
In your case, consider that the argument passed for colour is 'blue':
"has_#{colour}_colour_cell?" will return the string "has_blue_colour_cell?", and send will dynamically invoke the method named has_blue_colour_cell?. The same goes for the method blue_colour_cell.
Direct answer to your question
You can dynamically get/set instance vars with:
instance_variable_get("@build_string_as_you_see_fit")
instance_variable_set("@build_string_as_you_see_fit", value_for_ivar)
But...
A Warning!
I think dynamically creating variables here, and/or building method names as strings to pass to send, is a bad idea that will greatly hinder future maintainability.
Think of it this way: any time you see method names like this:
click_blue_button
click_red_button
click_green_button
it's the same thing as doing:
add_one_to(1)   # instead of 1 + 1, i.e. 1.+(1)
add_two_to(1)   # instead of 1 + 2, i.e. 1.+(2)
add_three_to(1) # instead of 1 + 3, i.e. 1.+(3)
Instead of passing a meaningful argument into a method, you've ended up hard-coding values into the method name! Continue this and eventually your whole codebase will have to deal with "values" that have been hard-coded into the names of methods.
A Better Way
Here's what you should do instead:
class HomePage < SitePrism::Page
  set_url '/index.html'

  elements :color_cells, "div[id^='colour-cell-']"

  def click_cell(color)
    cell = color_cells.find_by(id: "colour-cell-#{color}") # just an example, I don't know how to do element queries in site-prism
    cell.click
  end
end
Or if you must have them as individual elements:
class HomePage < SitePrism::Page
  set_url '/index.html'

  COLORS = %i[red green blue]

  COLORS.each do |color|
    element :"#{color}_colour_cell", "div[id='colour-cell-#{color}']"
  end

  def cell(color:) # every other usage should call this method instead
    @cells ||= COLORS.index_with do |color|
      send("#{color}_colour_cell") # do the dynamic `send` in only ONE place
    end
    @cells.fetch(color)
  end
end
home_page.cell(color: :red).click
I'm working on some ERB templates that are not my own, and I see the developer has used statements like this a lot:
<%= p("object.property.foo") %>
Where object is an OpenStruct. This method call results in the value of object.property.foo being printed (dot access, as in JavaScript or most languages I know), which is awesome because it is much simpler than writing:
<%= object["property"]["foo"] %>
My questions are:
Why am I able to access properties with "." notation?
Why do I pass a string to p and not the object itself?
Why is p preferable in this case? (I know p vs. puts, but why use p here?)
<%= %> tells the ERB parser to evaluate the content as a ruby expression, and to include the return value in the resulting HTML text.
p( ) is likely to be a view helper function that creates some HTML tags; this is not obvious from the code fragment. Apparently it evaluates the string argument as another Ruby expression. A p that behaves this way is not a standard Rails or Ruby method (the built-in Kernel#p just inspects and prints its arguments; it does not evaluate strings).
object is, according to the questioner, an OpenStruct. OpenStruct is a data structure that combines the behaviour of a Hash with method-call syntax for reading values. It is documented here: http://ruby-doc.org/stdlib-2.1.0/libdoc/ostruct/rdoc/OpenStruct.html
object.property asks the OpenStruct to apply property to object. OpenStruct replies with a stored value, something like @value[property], where @value would be a Hash. You do not need square-bracket syntax, because OpenStruct provides dynamic access methods. The '.' is the Ruby operator for applying a method to an object. The internal implementation of OpenStruct's data storage does not have to be a Hash at all. According to the questioner, the return value is another instance of OpenStruct.
object.property.foo calls method foo on the instance of OpenStruct that was returned from object.property. Now we receive the value of a nested OpenStruct object structure.
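For reference, here is a minimal nested-OpenStruct example of the behaviour described above; the names are illustrative, not taken from the questioner's templates:

require 'ostruct'

# Nested OpenStructs: each '.' call is a dynamically defined reader
# backed by the struct's internal table.
object = OpenStruct.new(property: OpenStruct.new(foo: 'bar'))

object.property.foo        # => "bar"
object['property']['foo']  # => "bar"  (hash-style access also works)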
It seems to me that the developer may have aliased p to eval somewhere earlier in the code.
require 'ostruct'
o = OpenStruct.new(key: 5)
o.key # returns 5
alias p eval
p("o.key") # returns 5
eval is simply a function that executes any string passed into it as ruby code.
Regarding your question
Why is p preferable in this case? (I know p vs. puts, but why use p here?)
I don't believe you are using the classic p function here. By default that p does not "eval" strings passed into it. The p function must have been overwritten with the eval function. Check the code base for something like alias p eval or alias_method :p, :eval
I'm having some issues with creating a Mongoid document that includes an array of custom objects.
In my particular case I intend to store an array of BaseDevice objects. The BaseDevice class is already set up for Mongoid and serializes from/to a plain hash using Mongoid's custom fields support. This works pretty well on a single object.
For storing an array of BaseDevice, I've created the following class:
class BaseDeviceArray < Array
  class << self
    def demongoize(object)
      object ? object.map { |obj| BaseDevice.demongoize obj } : new
    end

    def evolve(object)
      case object
      when BaseDeviceArray then object.mongoize
      else object
      end
    end
  end

  def mongoize
    self.map(&:mongoize)
  end
end
The Mongoid document looks like this:
class MongoPeriph
  include Mongoid::Document

  field :devices, type: BaseDeviceArray
end
Let's say some_devices is an array containing two BaseDevice instances.
What happens is the following: when I assign some_devices to the devices field of the MongoPeriph instance, that works correctly:
mp = MongoPeriph.create
mp.devices = some_devices
mp.devices # => [#<BaseDevice:0x007fa84bac0080>,#<BaseDevice:0x007fa84baaff78>]
When I try to send push, pop, shift, or unshift to the devices field within the Mongoid document, nothing seems to happen: the changes do not appear on the mp object. Also, when referencing one of the objects by index (i.e. when calling mp.devices[0].some_method), the underlying field does not change.
When popping objects from the array, a new object is returned on every pop. This is expected, as the deserializer instantiates a new BaseDevice object for every read, but the internal field is not updated, i.e. the objects stay there and one can pop endlessly.
Using the BaseDeviceArray separately from a Mongoid document works as expected:
foo = BaseDeviceArray.new
foo << BaseDevice.new
results in an array with a BaseDevice object.
Btw, I found one other approach to this on the net. It is a more generalized way of implementing what I need, but it monkey-patches Mongoid, which is something I try to avoid. Moreover, that solution seems to have the same issue as my approach.
The issue in your code is that you have a #mongoize (instance) method, but you actually need a ::mongoize (class) method. You never create an instance of BaseDeviceArray, so instance methods are never called.
Here's an example of how I implemented ::mongoize in a case where Mongo actually stores a Hash with a single key whose value is an array. I also wanted to turn the resulting array into a hash keyed by id for easier lookup.
def demongoize(hash)
  return validate_hash(hash)["TestRecord"].each_with_object({}) do |r, m|
    rec = TestRecord.new(r)
    m[rec.case_id] = rec
  end
end

def mongoize(object)
  case object
  when Array then {"TestRecord" => object.map(&:mongoize)}
  when Hash
    if object["TestRecord"]
      # this gets actually called when doing TestRun.new(hash)
      mongoize(demongoize(object))
    else
      {"TestRecord" => object.values.map(&:mongoize)}
    end
  else raise("dunno how to convert #{object.class} into records JSON")
  end
end

def evolve(object)
  # can't see how we want to process this here yet
  # docs.mongodb.com/ruby-driver/master/tutorials/6.0.0/mongoid-documents
  object
end
I guess the OP's task was done long ago, but I thought somebody might find this useful.
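Applied to the original BaseDeviceArray, a minimal sketch of the class-method version might look like this (assuming BaseDevice follows Mongoid's custom-field contract, i.e. it provides an instance #mongoize and a class-level ::demongoize):

class BaseDeviceArray < Array
  class << self
    # Called when data is read from MongoDB: rebuild BaseDevice instances.
    def demongoize(object)
      return new unless object
      new.concat(object.map { |obj| BaseDevice.demongoize(obj) })
    end

    # Called when a value is assigned to the field: convert to plain BSON types.
    def mongoize(object)
      case object
      when Array then object.map(&:mongoize) # covers BaseDeviceArray as well
      else object
      end
    end

    # Called when the field is used in queries.
    def evolve(object)
      mongoize(object)
    end
  end
end

Note that this alone does not make in-place calls like mp.devices.push persist: as observed in the question, each read demongoizes a fresh array, so mutating the returned object is not reflected. Reassigning the whole field (e.g. mp.devices = mp.devices + [new_device]) is the reliable way to register the change.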
I've been looking through the Jekyll source code, and stumbled upon this method:
# Public: Generate a Jekyll configuration Hash by merging the default
# options with anything in _config.yml, and adding the given options on top.
#
# override - A Hash of config directives that override any options in both
# the defaults and the config file. See Jekyll::DEFAULTS for a
# list of option names and their defaults.
#
# Returns the final configuration Hash.
def self.configuration(override)
  # Convert any symbol keys to strings and remove the old key/values
  override = override.reduce({}) { |hsh,(k,v)| hsh.merge(k.to_s => v) }

  # _config.yml may override default source location, but until
  # then, we need to know where to look for _config.yml
  source = override['source'] || Jekyll::DEFAULTS['source']

  # Get configuration from <source>/_config.yml or <source>/<config_file>
  config_file = override.delete('config')
  config_file = File.join(source, "_config.yml") if config_file.to_s.empty?

  begin
    config = YAML.safe_load_file(config_file)
    raise "Configuration file: (INVALID) #{config_file}" if !config.is_a?(Hash)
    $stdout.puts "Configuration file: #{config_file}"
  rescue SystemCallError
    # Errno:ENOENT = file not found
    $stderr.puts "Configuration file: none"
    config = {}
  rescue => err
    $stderr.puts " " +
      "WARNING: Error reading configuration. " +
      "Using defaults (and options)."
    $stderr.puts "#{err}"
    config = {}
  end

  # Merge DEFAULTS < _config.yml < override
  Jekyll::DEFAULTS.deep_merge(config).deep_merge(override)
end
I can't figure out what it does despite the comments. reduce({}) especially bothers me - what does it do?
Also, the method that is called just before configuration is:
options = normalize_options(options.__hash__)
What does __hash__ do?
Let's look at the code in question:
override.reduce({}) { |hsh,(k,v)| hsh.merge(k.to_s => v) }
Now let's look at the docs for Enumerable#reduce:
Combines all elements of enum by applying a binary operation, specified by a block or a symbol that names a method or operator.
If you specify a block, then for each element in enum the block is passed an accumulator value (memo) and the element. If you specify a symbol instead, then each element in the collection will be passed to the named method of memo. In either case, the result becomes the new value for memo. At the end of the iteration, the final value of memo is the return value for the method.
So, override is going to be your typical Ruby options hash, like:
{
debug: 'true',
awesomeness: 'maximum'
}
So what happens when you use that reduce on override?
It will combine all the elements of the enum (key => value pairs of the override hash) using the binary function merge. Merge takes a hash and merges it into the receiver. So what's happening here?
hsh starts out as {} and the first key/value pair is merged: {}.merge(:debug.to_s => "true").
hsh is now {"debug" => "true"}.
The next key/value pair is merged into that: {"debug" => "true"}.merge(:awesomeness.to_s => "maximum").
hsh is now {"debug" => "true", "awesomeness" => "maximum"}
There are no more elements, so this value of hsh is returned.
This matches up with the code comment, which says "Convert any symbol keys to strings and remove the old key/values", although technically the old values are not removed. Rather, a new hash is constructed, and the old hash with the old values is discarded once the variable is reassigned; it will eventually be garbage-collected, along with the intermediate hashes created by the merges inside the reduce. As an aside, this means merge! would be slightly more efficient than merge here, because it would not create those intermediate objects.
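Here is the same idiom as a self-contained snippet you can run in irb (the hash is the illustrative override from above, not from the Jekyll source):

override = { debug: 'true', awesomeness: 'maximum' }

# Non-mutating: each step builds a new hash via merge.
override.reduce({}) { |hsh, (k, v)| hsh.merge(k.to_s => v) }
# => {"debug"=>"true", "awesomeness"=>"maximum"}

# Mutating variant: merge! updates the accumulator in place,
# avoiding the intermediate hashes.
override.reduce({}) { |hsh, (k, v)| hsh.merge!(k.to_s => v) }
# => {"debug"=>"true", "awesomeness"=>"maximum"}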
__foo__ is a ruby idiom for a quasi-private and/or 'core' method that you want to make sure isn't redefined, e.g., __send__ because things like Socket want to use send. In Ruby, hash is the hash value of an object (computed using a hash function, used when the object is used as a hash key), so __hash__ probably points to an instance variable of the options object that stores its data as a hash. Here's a class from a gem that does just that. You'd have to look at the docs for whatever type of object options is to be sure though. (You'd have to look at the code to be really sure. ;)
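A tiny illustration of that pattern (hypothetical, not the actual class from any gem):

class Options
  def initialize(**opts)
    @hash = opts
  end

  # Expose the underlying storage under a name that is unlikely to clash
  # with anything else, in the spirit of __send__ / __id__.
  def __hash__
    @hash
  end
end

Options.new(source: '.', safe: true).__hash__
# => {:source=>".", :safe=>true}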
reduce is often used to build an array or hash, in a way that is similar to using map or collect, by iteratively adding each element to that container, usually after some manipulation to the element.
I use each_with_object instead as it's more intuitive for that sort of operation:
[:foo, :bar].each_with_object({}) do |e, h|
  h[e.to_s] = e
end
Notice that each_with_object doesn't need to have the "remembered" value returned from the block like reduce or inject wants. reduce and inject are great for other types of summing magic that each_with_object doesn't do though, so leave those in your toolbox too.
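To make the contrast concrete, a couple of tiny illustrative examples:

# inject/reduce reads naturally when folding values into a single result:
[1, 2, 3, 4].inject(:+)
# => 10

# each_with_object reads naturally when filling a container:
%w[foo bar].each_with_object({}) { |s, h| h[s] = s.length }
# => {"foo"=>3, "bar"=>3}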
I'm trying to build a system for programmatically filtering timeseries data and wonder if this problem has been solved, or at least hacked at, before. It seems that it's a perfect opportunity to do some Ruby block magic, given the scope and passing abilities; however, I'm still a bit short of fully grokking how to take advantage of blocks.
To wit:
Pulling data from my database, I can create either a hash or an array, let's use array:
data = [[timestamp0, value0],[timestamp1,value1], … [timestampN, valueN]]
Then I can add a method to Array, maybe something like:
class Array
  def filter &block
    …
    self.each_with_index do |v, i|
      …
      # Always call with timestep, value, index
      block.call(v[0], v[1], i)
      …
    end
  end
end
I understand that one of the powers of Ruby blocks is that the passed block of code executes within the scope of its closure. So somehow calling data.filter should allow me to work with that scope, but I can only figure out how to do it without taking advantage of the scope. To wit:
# average if we have a single null value, assumes data is correctly ordered
data.filter do |t, v, i|
  # Of course, we do some error checking…
  (data[i-1] + data[i+1]) / 2 if v.nil?
end
What I actually want to do is (allow the user to) build up mathematical filters programmatically, but taking it one step at a time, we'll build some functions:
def average_single_values(args)
  # average over single null values
  # return filterable array
end

def filter_by_std(args)
  # limit results to those within N standard deviations
  # return filterable array
end

def pull_bad_values(args)
  # delete or replace values seen as "bad"
  # return filterable array
end
my_filters = [average_single_values, filter_by_std, pull_bad_values]
Then, having a list of filters, I figure (somehow) I should be able to do:
data.filter do |t, v, i|
  my_filters.each do |f|
    f.call t, v, i
  end
end
or, assuming a different filter implementation:
filtered_data = data.filter my_filters
which would probably be a better way to design it, as it returns a new array and is non-destructive
The result being an array that has been run through all of the filters. The eventual goal, is to be able to have static data arrays that can be run through arbitrary filters, and filters that can be passed (and shared) as objects the way that Yahoo! Pipes does so with feeds. I'm not looking for too generalized a solution right now, I can make the format of the array/returns strict.
Has anyone seen something similar in Ruby? Or have some basic pointers?
The first half of your question about working in the scope of the array seems unnecessary and irrelevant to your problem. As for creating operations to manipulate data with blocks, you can use Proc instances ("procs"), which essentially are blocks stored in an object. For example, if you want to store them with names, you can create a hash of filters:
my_filters = {}

my_filters[:filter_name] = lambda do |*args|
  # filter body here...
end
You do not need to name them, of course, and can use arrays. Then, to run some data through an ordered series of filters, use the helpful Enumerable#inject method:
# .values because my_filters above is a hash; a plain array of procs
# could be injected over directly.
my_filters.values.inject(data) do |result, filter|
  filter.call(result)
end
It uses no monkeypatching too!
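Putting it together, here is a self-contained sketch of the approach; the filter name and the averaging logic are illustrative only:

data = [[0, 1.0], [1, nil], [2, 3.0]]

my_filters = {}

# Replace a single nil value with the average of its neighbours.
my_filters[:average_single_values] = lambda do |rows|
  rows.each_with_index.map do |(t, v), i|
    next [t, v] unless v.nil?

    prev_v = i > 0 ? rows[i - 1].last : nil
    next_v = i < rows.size - 1 ? rows[i + 1].last : nil
    [t, prev_v && next_v ? (prev_v + next_v) / 2.0 : v]
  end
end

# Chain the filters: each one receives the previous result and
# returns a new array, leaving the original data untouched.
filtered = my_filters.values.inject(data) { |result, filter| filter.call(result) }
# => [[0, 1.0], [1, 2.0], [2, 3.0]]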