Not able to style Excel with spreadsheet gem (Ruby)

I'm trying to style an Excel file following "ruby spreadsheet row background color", but nothing is happening for me.
Here is my code.
My formats:
pass_format = Spreadsheet::Format.new :color=> :blue, :pattern_fg_color => :green, :pattern => 1
fail_format = Spreadsheet::Format.new :color=> :blue, :pattern_fg_color => :red, :pattern => 1
skip_format = Spreadsheet::Format.new :color=> :blue, :pattern_fg_color => :yellow, :pattern => 1
Trying to use them here (just showing one; the rest are chosen by if/else):
sheet1.row(counter).default_format = skip_format
sheet1[counter, 3] = 'Skipped'
sheet1.row(counter).default_format = skip_format
sheet1.row(counter).set_format(3, skip_format)
counter is the row I am currently on. I am not sure whether I should apply the format first or write the cell first. What am I doing wrong? How do I fix this?
Actually, the format is getting applied, as I found from .inspect:
#<Spreadsheet::Format:0x007f082f9c1d58 #font=#<Spreadsheet::Font:0x007f082f9c1a88 #name="Arial", #color=:red, #previous_fast_key=nil, #size=nil, #weight=nil, #italic=nil, #strikeout=nil, #outline=nil, #shadow=nil, #escapement=nil, #underline=nil, #family=:swiss, #encoding=nil>, #number_format="GENERAL", #rotation=0, #pattern=1, #bottom_color=:black, #top_color=:black, #left_color=:black, #right_color=:black, #diagonal_color=:black, #pattern_fg_color=:yellow, #pattern_bg_color=:pattern_bg, #regexes={:date=>/[YMD]/, :date_or_time=>/[hmsYMD]/, :datetime=>/([YMD].*[HS])|([HS].*[YMD])/, :time=>/[hms]/, :number=>/[#]/}, #used_merge=0>
but even though it shows the color red here, in Excel it's black. :(
I am editing the original file and then writing it out as a new file with
book.write "Result.xls"
Is that the wrong approach? I am going to try making a new workbook before editing, and will update.

Well, it was not possible to format the existing Excel file and then write it out as a new one; the formatting was lost in the process.
To work around this, I created a new Excel workbook (populated with the existing data read from the old file), formatted it the way I wanted, and then used
book.write "xxx.xls"

Related

Using a CSV file to insert values using Ruby

I have some sample code I can execute for our Nexpose server and I need to do some mass asset tagging. Here is an example of the code.
nsc = Nexpose::Connection.new('your_nexpose_instance', 'username', 'password', 3780)
nsc.login
criterion = Nexpose::Tag::Criterion.new('IP_RANGE', 'IN', ['ip1', 'ip2'])
criteria = Nexpose::Tag::Criteria.new(criterion)
tag = Nexpose::Tag.new("tagname", Nexpose::Tag::Type::Generic::CUSTOM)
tag.search_criteria = criteria
tag.save(nsc)
I have a file with the following data.
ip1,ip2,tagname
192.168.1.1,192.168.1.255,Workstations
How would I go about looping over the CSV file to run the above code for each row? I have no experience with Ruby and tried to follow some examples, but I'm confused at this point.
There's a CSV library in Ruby's standard library that you can use.
Basic example based on your code example and data, not tested:
require 'csv'
nsc = Nexpose::Connection.new('your_nexpose_instance', 'username', 'password', 3780)
nsc.login
CSV.foreach("path/to/file.csv", headers: true) do |row|
criterion = Nexpose::Tag::Criterion.new('IP_RANGE', 'IN', [row['ip1'], row['ip2'])
criteria = Nexpose::Tag::Criteria.new(criterion)
tag = Nexpose::Tag.new(row['tagname'], Nexpose::Tag::Type::Generic::CUSTOM)
tag.search_criteria = criteria
tag.save(nsc)
end
I made a directory with input.csv and main.rb
input.csv
ip1,ip2,tagname
192.168.1.1,192.168.1.255,Workstations
main.rb
require "csv"
CSV.foreach("input.csv", headers: true) do |row|
puts "ip1: #{row['ip1']}"
puts "ip2: #{row['ip2']}"
puts "tagname: #{row['tagname']}"
end
the output is
ip1: 192.168.1.1
ip2: 192.168.1.255
tagname: Workstations
I hope this can help. If you have questions I'm here :)
If you just need to loop through each line of the file and fire that chunk of code for each line, you could do something like this:
require 'net/http'

file = Net::HTTP.get(URI(<whatever_your_file_name_is>))
file.each_line.with_index do |line, index|
  next if index == 0
  split_line = line.chomp.split(',')
  ip1 = split_line[0]
  ip2 = split_line[1]
  tagname = split_line[2]
  nsc = Nexpose::Connection.new('your_nexpose_instance', 'username', 'password', 3780)
  nsc.login
  criterion = Nexpose::Tag::Criterion.new('IP_RANGE', 'IN', [ip1, ip2])
  criteria = Nexpose::Tag::Criteria.new(criterion)
  tag = Nexpose::Tag.new(tagname, Nexpose::Tag::Type::Generic::CUSTOM)
  tag.search_criteria = criteria
  tag.save(nsc)
end
NOTE: This code example is assuming that the CSV file is stored remotely, not locally.
ALSO: In case you're wondering, the next if index == 0 is there to skip your header record.
UPDATE
To use this approach for a local file, you can use File.read instead of Net::HTTP.get, like so:
file = File.read(<whatever_your_file_name_is>)
Two things to note:
Make sure you use the full path to the file - i.e. ~/folder/folder/filename.csv instead of just filename.csv (note that Ruby does not expand ~ for you, so wrap the path in File.expand_path if you use it).
If the files you're going to be loading are enormous, this might not be an ideal approach because it's actually reading the whole file into memory. But considering your file only has 3 columns, you'd have to have an extreme number of rows in the file for this to be an issue.
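If you go the local-file route, you can also let the CSV library handle the header row and any quoting for you instead of splitting lines by hand. A rough, untested sketch combining the two answers (same placeholder file name as above):
require 'csv'

CSV.parse(File.read(<whatever_your_file_name_is>), headers: true) do |row|
  ip1, ip2, tagname = row['ip1'], row['ip2'], row['tagname']
  # ...same Nexpose tagging code as in the examples above...
end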

clusteredPoints of cluster result disappear [mahout]

I got CSV and TEXT format results like the following with clusterdump.
CSV:
0,Sports_38.txt
1,Sports_23.txt
2,Sports_36.txt
3,Sports_13.txt
4,Sports_31.txt,Sports_32.txt
5,Sports_28.txt,Sports_29.txt
6,Sports_2.txt
9,Sports_15.txt
TEXT:
{"identifier":"VL-1","r":[],"c":[...,"n":7}
Top Terms:
什 => 15.829998016357422
利物浦 => 13.629814147949219
克 => 11.317766189575195
格 => 10.938775062561035
特 => 10.842317581176758
尔 => 10.447234153747559
切尔西 => 9.742402076721191
比赛 => 8.247735023498535
表现 => 7.909337520599365
批评 => 7.462332725524902
I noticed that there is just one point for VL-1 in the CSV file, but 7 points for VL-1 in the TEXT file (VL-1's "n" equals 7).
Why did some points disappear? And how can I get every point's cluster assignment?
Thanks a lot.
I also got empty clusteredPoints when the data was a little bigger.
I finally found the reason myself:
clusterClassificationThreshold (the 8th parameter of Kmeans.run in Mahout 1.0) should be 0.
Check this: http://mail-archives.apache.org/mod_mbox/mahout-user/201211.mbox/%3C50B62629.5020700#windwardsolutions.com%3E

Generating KML files with Ruby

I'm using the ruby_kml gem right now to try to generate KML from some data in my model.
I also tried georuby.
Both of them, when they generate the XML, seem to return it escaped like this:
"<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<kml xmlns=\"http://earth.google.com/kml/2.1\">\n <Folder>\n <name>San Francisco</name>\n <LineStyle>\n <color>#0D7215</color>\n </LineStyle>\n <Placemark>\n <name>21 Google Bus</name>\n <description>\n <![CDATA[Click to add description.]]>\n </description>\n <LineString>\n <coordinates>37.784282779035216,-122.42228507995605 37.784144999999995,-122.42225699999999,37.784084,-122.42274499999999,37.785472,-122.423023,37.785391,-122.423564,37.785364,-122.423839,37.785418,-122.424714,37.785410999999996,-122.42497999999999,37.785391,-122.42522,37.784839,-122.42956,37.784631,-122.431297,37.782576,-122.43086799999999,37.776969,-122.42975399999999,37.776759999999996,-122.431384,37.776368,-122.431305 37.776368,-122.431305,37.777699999999996,-122.431575,37.778746999999996,-122.42335399999999,37.773609,-122.42231199999999,37.773013999999996,-122.42222799999999,37.772974999999995,-122.42222799999999,37.772915,-122.42226799999999,37.772774,-122.422446,37.772636999999996,-122.422585,37.772562,-122.42263399999999,37.772521999999995,-122.422643,37.771588,-122.42253799999999,37.771631,-122.421759</coordinates>\n </LineString>\n </Placemark>\n <LineStyle>\n <color>#0071CA</color>\n </LineStyle>\n <Placemark>\n <name>45 Inverter</name>\n <description>\n <![CDATA[Click to add description.]]>\n </description>\n <LineString>\n <coordinates>37.792490234462946,-122.40863800048828 37.792516,-122.408429,37.793068,-122.408541,37.792957,-122.409357,37.792051,-122.409189,37.788289999999996,-122.40841499999999,37.785495,-122.407866,37.785713,-122.406229,37.785713,-122.40591599999999,37.785699,-122.40576999999999,37.785658,-122.40568999999999,37.783249999999995,-122.40270699999999,37.778850999999996,-122.40827499999999,37.779104,-122.408577</coordinates>\n </LineString>\n </Placemark>\n <LineStyle>\n <color>#AD0101</color>\n </LineStyle>\n <Placemark>\n <name>82 X Wing</name>\n <description>\n <![CDATA[Click to add description.]]>\n </description>\n <LineString>\n <coordinates></coordinates>\n </LineString>\n </Placemark>\n <LineStyle>\n <color>#AD0101</color>\n </LineStyle>\n <Placemark>\n <name>93 X Wing</name>\n <description>\n <![CDATA[Click to add description.]]>\n </description>\n <LineString>\n <coordinates></coordinates>\n </LineString>\n </Placemark>\n </Folder>\n</kml>\n"
I'm not sure why it's coming out escaped, since that is definitely not valid XML.
georuby does the same.
Does anyone know why it's coming out escaped and also how to unescape it?
Here's the code I'm using:
map = self
kml = KMLFile.new
folder = KML::Folder.new(:name => map[:name])
map.lines.each do |line|
  folder.features << KML::LineStyle.new(
    color: line.color,
  )
  folder.features << KML::Placemark.new(
    :name => line.name,
    :geometry => KML::LineString.new(:coordinates => line.coordinates),
    :description => line.description
  )
end
kml.objects << folder
kml.render
Thanks!!!

Amazon API and hardcover versus paperback

When searching Amazon using their API, is there a way to separate out hardcover, paperback, Kindle, and pre-orderable books?
I'm using Ruby and the Amazon Product gem, and have been searching through the documentation looking for info on this but haven't managed to find anything yet.
update:
I have come across some starting points that I am working through. There seems to be a way to get this information via the RelatedItems ResponseGroup, as described here. The KindleStore hierarchy seems relevant.
update II:
Binding is possibly the field I need to look at, and Amazon's API provides a way of getting the AlternateVersions of an ASIN using the AlternateVersions ResponseGroup type for books.
I will assume you know how to set up the Amazon Product gem request object correctly. From there, you can do...
search = req.search( 'Ender\'s Game' )
search.each('Item') do |item|
  asin = item["ASIN"]
  title = item['ItemAttributes']['Title']
  hash = {
    :response_group => ['ItemAttributes']
  }
  items = req.find( asin, hash )
  items.each('Item') do |ia|
    puts "[#{asin}]: #{title} => [#{ia['ItemAttributes']['Binding']}]"
  end
end
which produces output like
[0812550706]: Ender's Game (Ender, Book 1) => [Mass Market Paperback]
[0765362430]: The Ender Quartet Box Set: Ender's Game, Speaker for the Dead, Xenocide, Children of the Mind => [Mass Market Paperback]
[0812550757]: Speaker for the Dead (Ender, Book 2) => [Mass Market Paperback]
[0765342405]: Ender's Shadow (Ender, Book 5) => [Paperback]
[B003G4W49C]: Ender's Game => [Kindle Edition]
[0785135820]: Ender's Game: Command School => [Hardcover]
[0785135804]: Ender's Game: Battle School (Ender's Game Gn) => [Hardcover]
[0765362449]: The Ender's Shadow Series Box Set: Ender's Shadow, Shadow of the Hegemon, Shadow Puppets, Shadow of the Giant => [Paperback]
[0785136096]: Ender's Game: Formic Wars: Burning Earth => [Hardcover]
[0812565959]: Shadow of the Hegemon (Ender, Book 6) => [Mass Market Paperback]
You could try looking at the BrowseNodes. For example, if you see the Kindle Store BrowseNode, you know it's a Kindle book.
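Building on the request pattern from the answer above, here is a rough (untested) sketch of asking for the BrowseNodes response group for an ASIN; req and asin are assumed to be set up exactly as in that answer, and the shape of the returned hash may differ:
items = req.find(asin, :response_group => ['BrowseNodes'])
items.each('Item') do |item|
  # Inspect the browse node tree; a node named "Kindle Store" (or one of its
  # ancestors) indicates a Kindle edition.
  puts item['BrowseNodes'].inspect
end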

Working with nested hashes in Rails 3

I'm working with the Koala gem and the Facebook Graph API, and I want to break down the results I get for a user's feed into separate variables for inserting into a MySQL database, probably using ActiveRecord. Here is the code I have so far:
@token = Service.where(:provider => 'facebook', :user_id => session[:user_id]).first.token
@graph = Koala::Facebook::GraphAPI.new(@token)
@feeds = params[:page] ? @graph.get_page(params[:page]) : @graph.get_connections("me", "home")
And here is what @feeds looks like:
[{"id"=>"1519989351_1799856285747", "from"=>{"name"=>"April Daggett Swayne", "id"=>"1519989351"},
"picture"=>"http://photos-d.ak.fbcdn.net/hphotos-ak-ash4/270060_1799856805760_1519989351_31482916_3866652_s.jpg",
"link"=>"http://www.facebook.com/photo.php?fbid=1799856805760&set=a.1493877356465.2064294.1519989351&type=1", "name"=>"Mobile Uploads",
"icon"=>"http://static.ak.fbcdn.net/rsrc.php/v1/yx/r/og8V99JVf8G.gif", "type"=>"photo", "object_id"=>"1799856805760", "application"=>{"name"=>"Facebook for Android",
"id"=>"350685531728"}, "created_time"=>"2011-07-03T03:14:04+0000", "updated_time"=>"2011-07-03T03:14:04+0000"}, {"id"=>"2733058_10100271380562998", "from"=>{"name"=>"Joshua Ramirez",
"id"=>"2733058"}, "message"=>"Just posted a photo",
"picture"=>"http://platform.ak.fbcdn.net/www/app_full_proxy.php?app=124024574287414&v=1&size=z&cksum=228788edbab39cb34861aecd197ff458&src=http%3A%2F%2Fimages.instagram.com%2Fmedia%2F2011%2F07%2F02%2F2ad9768378cf405fad404b63bf5e2053_7.jpg",
"link"=>"http://instagr.am/p/G1tp8/", "name"=>"jtrainexpress's photo", "caption"=>"instagr.am",
"icon"=>"http://photos-e.ak.fbcdn.net/photos-ak-snc1/v27562/10/124024574287414/app_2_124024574287414_6936.gif", "actions"=>[{"name"=>"Comment",
"link"=>"http://www.facebook.com/2733058/posts/10100271380562998"}, {"name"=>"Like", "link"=>"http://www.facebook.com/2733058/posts/10100271380562998"}], "type"=>"link",
"application"=>{"name"=>"Instagram", "id"=>"124024574287414"}, "created_time"=>"2011-07-03T02:07:37+0000", "updated_time"=>"2011-07-03T02:07:37+0000"},
{"id"=>"588368718_10150230423643719", "from"=>{"name"=>"Eric Bailey", "id"=>"588368718"}, "link"=>"http://www.facebook.com/pages/Martis-Camp/105474549513998", "name"=>"Martis Camp",
"caption"=>"Eric checked in at Martis Camp.", "description"=>"Rockin the pool", "icon"=>"http://www.facebook.com/images/icons/place.png", "actions"=>[{"name"=>"Comment",
"link"=>"http://www.facebook.com/588368718/posts/10150230423643719"}, {"name"=>"Like", "link"=>"http://www.facebook.com/588368718/posts/10150230423643719"}],
"place"=>{"id"=>"105474549513998", "name"=>"Martis Camp", "location"=>{"city"=>"Truckee", "state"=>"CA", "country"=>"United States", "latitude"=>39.282813917575,
"longitude"=>-120.16736760768}}, "type"=>"checkin", "application"=>{"name"=>"Facebook for iPhone", "id"=>"6628568379"}, "created_time"=>"2011-07-03T01:58:32+0000",
"updated_time"=>"2011-07-03T01:58:32+0000", "likes"=>{"data"=>[{"name"=>"Mike Janes", "id"=>"725535294"}], "count"=>1}}]
I have looked around for clues on this, and haven't found it yet (but I'm still working on my stackoverflow-foo). Any help would be greatly appreciated.
That isn't a Ruby Hash; that's a fragment of a JSON string. First you need to decode it into a Ruby data structure:
# If your JSON string is in json...
h = ActiveSupport::JSON.decode(json) # Or your favorite JSON decoder.
Now you'll have a Hash in h so you can access it like any other Hash:
array = h['data']
puts array[0]['id']
# prints out 1111111111_0000000000000
puts array[0]['from']['name']
# prints Jane Doe
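Once you have the data as a Ruby array of hashes (whether it came back from Koala that way or after decoding), you can pull the fields out of each post in the same fashion. A rough sketch for the feed shown in the question; Post is a hypothetical ActiveRecord model and the attribute names are illustrative only:
@feeds.each do |post|
  Post.create(
    :fb_id        => post['id'],
    :author_name  => post['from'] && post['from']['name'],
    :message      => post['message'],        # nil for photo/checkin posts without text
    :post_type    => post['type'],
    :created_time => post['created_time']
  )
end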
