In Cucumber, one of the best features is passing table data from a feature to a step. But if I want to add additional data to a table, or create table data inside my step_definitions, how can I do that? What type is the table (hash? map? list? array?)?
To illustrate, below is one of my steps. It accepts a table from the feature and passes it along to a function; I'd like to append some data to it first. How can I do that?
Then(/^posted JSON should have the below attributes$/) do |table|
  ## Here I want to append some data to my table. How to do it?
  posted_json_attribute_table_check table
end
Then I have a function that uses it to compare against the parsed JSON:
def posted_json_attribute_table_check(table)
  json = JSON.parse $post_result.lines.first
  data = table.raw
  data.each do |entry|
    status = entry[0]
    value = entry[1]
    expect(json[status].to_s).to eq(value)
  end
end
Thanks!
The table object is of type Cucumber::Core::Ast::DataTable; its source can be found here: https://github.com/cucumber/cucumber-ruby-core/blob/master/lib/cucumber/core/ast/data_table.rb
# Creates a new instance. +raw+ should be an Array of Array of String
# or an Array of Hash
# You don't typically create your own DataTable objects - Cucumber will do
# it internally and pass them to your Step Definitions.
#
def initialize(raw, location)
  raw = ensure_array_of_array(rubify(raw))
  verify_rows_are_same_length(raw)
  @raw = raw.freeze
  @location = location
end
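Since +raw+ is just an Array of Arrays of Strings, the simplest way to append data is to take table.raw, add rows to that plain array, and rebuild a table (or just pass the array along, since your check function only uses table.raw anyway). A rough sketch, assuming a recent cucumber-ruby where DataTable.from is available; the ['session', 'active'] row is made up for illustration:

Then(/^posted JSON should have the below attributes$/) do |table|
  # table.raw is a plain Array of Arrays of Strings; append to a copy of it
  extended_raw = table.raw + [['session', 'active']] # illustrative row only

  # Rebuild a table from the raw data. DataTable.from may not exist in older
  # cucumber-ruby versions; passing extended_raw directly also works if the
  # check function is changed to accept a raw array.
  new_table = Cucumber::MultilineArgument::DataTable.from(extended_raw)
  posted_json_attribute_table_check new_table
end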
Hey guys, so I am trying to parse an Excel file with the Ruby gem creek. It parses the rows accurately, but I want to retrieve just the columns, such as only the data in column "A". The code below outputs the whole Excel document correctly.
require 'creek'

creek = Creek::Book.new 'Final.xlsx'
sheet = creek.sheets[0]

sheet.rows.each do |row|
  puts row # => {"A1"=>"Content 1", "B1"=>nil, "C1"=>nil, "D1"=>"Content 3"}
end
Any suggestions will be much appreciated.
Creek doesn't make it easy to extract column information because it stores the column and row smashed together in a string hash key.
The more popular Roo allows you to do things like sheet.column(1) and get an entire column. Very simple.
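For example (a minimal sketch against the same 'Final.xlsx'; Roo::Excelx is the gem's xlsx reader):

require 'roo'

xlsx = Roo::Excelx.new('Final.xlsx')
col_a = xlsx.sheet(0).column(1) # => Array of every cell in column A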
If you absolutely must have creek, I noticed that there is an add-on to Creek called Ditch which adds some column-fetching capability. Example:
sheet.rows.each { |r|
puts "#{r.index} #{r.get('A')} - #{r.get('B')}"
}
Finally, if you want to do it with Creek and no add-ons, use Hash#select:
sheet.rows.each do |row|
  # k[0] is the column letter of keys like "A1" (this assumes single-letter columns A-Z)
  puts row.select { |k, v| ["A", "B"].include? k[0] }
end
To read individual columns you can use the Creek::Sheet#simple_rows method.
For example, to read the first and third columns:
require 'creek'

creek = Creek::Book.new 'Final.xlsx'
sheet_first = creek.sheets.first

# read the first column, A
col_first = sheet_first.simple_rows.map { |row| row['A'] } #=> Array containing the first column

# read the third column, C
col_third = sheet_first.simple_rows.map { |row| row['C'] } #=> Array containing the third column
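One caveat, based on Creek's streaming design: each enumeration of simple_rows re-reads the sheet, so if you need several columns it may be cheaper to materialize the rows once:

rows = sheet_first.simple_rows.to_a # parse the sheet a single time
col_first = rows.map { |row| row['A'] }
col_third = rows.map { |row| row['C'] }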
I have a CSV import method that renders a confirmation / preview page of the data about to be imported, and I want to pass the data from the preview to the actual import method.
In the preview, the CSV has already been turned into a hash of rows and I want to pass that hash to the import method. I've tried simply doing:
<%= hidden_field_tag "my_hash", @final %>
where @final is the hash of data. But it passes the hash as a string, and in the params the data looks like JSON:
"wi_hash"=>"{
\"name_fail\"=>[{\"scale_id\"=>\"509\",
\"name\"=>\"John Doe\",
\"date\"=>\"<no data>\",
\"current_weight\"=>\"999\",
\"bmi\"=>\"999\",
\"body_fat\"=>\"999\",
\"visceral_fat\"=>\"999\",
\"tbw\"=>\"999\",
\"muscle_mass\"=>\"999\",
\"basal_metabolic_rate\"=>\"999\"
....
}
How else can I pass @final so that it maintains its hash format?
I found this useful helper in another question:
def hash_to_hidden_fields(hash)
  query_string = Rack::Utils.build_nested_query(hash)
  pairs = query_string.split(Rack::Utils::DEFAULT_SEP)
  tags = pairs.map do |pair|
    key, value = pair.split('=', 2).map { |str| Rack::Utils.unescape(str) }
    hidden_field_tag(key, value)
  end
  tags.join("\n").html_safe
end
It allows you to pass the hash as an argument.
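Usage would look something like this (a sketch; my_hash is just an illustrative key, and all values come back as strings):

<%= hash_to_hidden_fields(my_hash: @final) %>

On submit, params[:my_hash] then arrives as a nested hash instead of one inspect-style string.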
I'm trying to transfer all the information from my Ruby file into a Postgres database. I am able to transfer the information when I do not have an array column, so I am assuming the error message I am getting is because of the array column I am trying to add. The error message I am getting is:
in `exec_prepared': ERROR: missing dimension value (PG::InvalidTextRepresentation)
Here is the code I used to connect my Ruby file to my Postgres database:
require 'pg'

class Postgres
  # Create the connection instance. Scraping is the name of the database I am adding this information to.
  def connect
    @conn = PG.connect(:dbname => 'scraping')
  end

  # Create our venue table.
  def createVenueTable
    @conn.exec("CREATE TABLE venues (venue_number varchar(15) UNIQUE,...,img_array varchar[]);")
  end

  ...

  def prepareInsertVenueStatement
    @conn.prepare("insert_venue", "insert into venues(venue_number,...,img_array) values ($1,...,$24)")
  end

  # Add a venue with the prepared statement.
  def addVenue(venue_number,...,img_array)
    @conn.exec_prepared("insert_venue", [venue_number,...,img_array])
  end
end
When I check my Postgres database, the img_array column is made, however, I am unable to populate it. Please help! Thank you.
I would suggest using serialization to handle this so that you are actually just writing a string rather than an actual array.
require 'pg'
require 'yaml'

class Postgres
  # Create the connection instance. Scraping is the name of the database I am adding this information to.
  def connect
    @conn = PG.connect(:dbname => 'scraping')
  end

  # Create our venue table.
  def createVenueTable
    # changed img_array to a varchar(8000) for storing the serialized Array
    @conn.exec("CREATE TABLE venues (venue_number varchar(15) UNIQUE,...,img_array varchar(8000));")
  end

  ...

  def prepareInsertVenueStatement
    @conn.prepare("insert_venue", "insert into venues(venue_number,...,img_array) values ($1,...,$24)")
  end

  # Add a venue with the prepared statement.
  def addVenue(venue_number,...,img_array)
    @conn.exec_prepared("insert_venue", [venue_number,...,serialized(img_array)])
  end

  # serialize the Object
  def serialized(obj)
    YAML.dump(obj)
  end

  # deserialize the Object
  def deserialized(obj)
    YAML.load(obj)
  end
end
An abstracted usage example, just to show the serialization:
a = [1,2,3,4,5]
serialized = YAML.dump(a)
#=> "---\n- 1\n- 2\n- 3\n- 4\n- 5\n"
YAML.load(serialized)
#=> [1,2,3,4,5]

# Also works on Hash Objects
h = {name: "Image", type: "jpeg", data: [1,2,3,4,5]}
serial = YAML.dump(h)
#=> "---\n:name: Image\n:type: jpeg\n:data:\n- 1\n- 2\n- 3\n- 4\n- 5\n"
YAML.load(serial)
#=> {:name=>"Image", :type=>"jpeg", :data=>[1, 2, 3, 4, 5]}
Hope this helps you out with handling this issue.
If you need to store more than 8000 characters, switch the column to text or to varchar with no length limit; in PostgreSQL the two are stored the same way under the hood, so either handles arbitrarily long serialized strings.
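As an aside, the original error occurs because Postgres expects array literals in its {...} text format, which a stringified Ruby Array doesn't match. If you would rather keep the varchar[] column, the pg gem (0.18+) ships a text encoder for exactly this; a hedged sketch:

require 'pg'

encoder = PG::TextEncoder::Array.new
img_array_literal = encoder.encode(['a.jpg', 'b.jpg']) #=> "{a.jpg,b.jpg}"
# pass img_array_literal as the img_array parameter instead of the raw Ruby Array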
I have a list called @domain_skill_technical_skills. It comes from a join table relating domain_skills and technical_skills, and is retrieved by domain_skill_id.
I need to return all technical_skills from the technical_skills table, but with one extra attribute named assigned, which is true if the technical_skill_id is present in the @domain_skill_technical_skills list.
My code so far is below:
def assigned_technical_skills
  @domain_skill_technical_skills = DomainSkillsTechnicalSkill.select('id, technical_skill_id').where(domain_skill_id: params[:domain_skill_id])
  Rails.logger.info(@domain_skill_technical_skills.to_a)

  TechnicalSkill.all.where(record_status: 1).each do |skill|
    if @domain_skill_technical_skills.include?(skill.id) then
      Rails.logger.info(skill.id.to_s + skill.name)
    end
  end

  respond_with @domain_skill_technical_skills
end
Please guide me to correct the if statement; I also need to select only the needed fields (id, name, assigned) and build one collection to return.
You might want to replace your code with this, if DomainSkillsTechnicalSkill has a technical_skill_id column:
def assigned_technical_skills
  # just collect technical_skill_id
  @domain_skill_technical_skill_ids = DomainSkillsTechnicalSkill.select('id, technical_skill_id').where(domain_skill_id: params[:domain_skill_id]).collect(&:technical_skill_id)
  Rails.logger.info(@domain_skill_technical_skill_ids.to_a)

  @technical_skills = []
  TechnicalSkill.select('id, name').where(record_status: 1).each do |skill|
    @technical_skills << skill.as_json.merge!(
      { assigned: @domain_skill_technical_skill_ids.include?(skill.id) }
    )
  end

  respond_with @technical_skills
end
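If you prefer, the accumulator loop can be collapsed with map; this sketch is behaviorally the same:

@technical_skills = TechnicalSkill.select('id, name').where(record_status: 1).map do |skill|
  skill.as_json.merge(assigned: @domain_skill_technical_skill_ids.include?(skill.id))
end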
I'm trying to use the Ruby standard CSV library to dump an array of objects to a CSV file called 'a.csv':
http://ruby-doc.org/stdlib-1.9.3/libdoc/csv/rdoc/CSV.html#method-c-dump
dump(ary_of_objs, io = "", options = Hash.new)
But with this method, how can I dump into a file? There are no examples of it in the docs, and googling turned up none either.
Also, the docs say:
The next method you can provide is an instance method called csv_headers(). This method is expected to return the second line of the document (again as an Array), which is to be used to give each column a header. By default, ::load will set an instance variable if the field header starts with an @ character or call send() passing the header as the method name and the field value as an argument. This method is only called on the first object of the Array.
Does anyone know how to provide the instance method csv_headers() to this dump function?
I haven't tested this out yet, but it looks like io should be set to a file. According to the doc you linked: "The io parameter can be used to serialize to a File".
Something like:
f = File.open("filename", "w")
CSV.dump(ary_of_objs, f)
The accepted answer doesn't really answer the question so I thought I'd give a useful example.
First of all if you look at the docs at http://ruby-doc.org/stdlib-1.9.3/libdoc/csv/rdoc/CSV.html, if you hover over the method name for dump you see you can click to show source. If you do that you'll see that the dump method attempts to call csv_headers on the first object you pass in from ary_of_objs:
obj_template = ary_of_objs.first
...snip...
headers = obj_template.csv_headers
Then later you see that the method will call csv_dump on each object in ary_of_objs and pass in the headers:
ary_of_objs.each do |obj|
  begin
    csv << obj.csv_dump(headers)
  rescue NoMethodError
    csv << headers.map do |var|
      if var[0] == ?@
        obj.instance_variable_get(var)
      else
        obj[var[0..-2]]
      end
    end
  end
end
So we need to augment each entry in ary_of_objs to respond to those two methods. Here's an example wrapper class that takes a Hash, returns the hash keys as the CSV headers, and dumps each row based on those headers.
class CsvRowDump
  def initialize(row_hash)
    @row = row_hash
  end

  def csv_headers
    @row.keys
  end

  def csv_dump(headers)
    headers.map { |h| @row[h] }
  end
end
There's one more catch, though. This dump method writes an extra line at the top of the CSV file before the headers, and there's no way to skip it, due to this code at the top:
# write meta information
begin
  csv << obj_template.class.csv_meta
rescue NoMethodError
  csv << [:class, obj_template.class]
end
Even if you return '' from CsvRowDump.csv_meta, that will still be a blank line where a parser expects the headers. So instead, let dump write that line and then remove it afterwards. This example assumes you have an array of hashes that all have the same keys (which become the CSV header).
@rows = @hashes.map { |h| CsvRowDump.new(h) }

File.open(@filename, "wb") do |f|
  str = CSV::dump(@rows)
  f.write(str.split(/\n/)[1..-1].join("\n"))
end
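To make the result concrete, here is a hypothetical run (the data and filename are invented for illustration):

@hashes   = [{ "name" => "Alice", "age" => 30 }, { "name" => "Bob", "age" => 25 }]
@filename = "a.csv"
# After the File.open block above runs, a.csv contains:
#   name,age
#   Alice,30
#   Bob,25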