Skip Runtime error inside loop cycle - ruby

I created this Ruby script, which basically searches all the nodes/servers in an environment based on the recipes on each server node. The issue is that if a node is stopped, the script fails with a runtime error, and because of the rescue clause (h[thread_id] << "ReadTimeout Error"; next) it jumps straight to the next microservice.
file = File.read("./#{srv}.json")
data_hash = JSON.parse(file)
h = {}
threads = []
service = data_hash.keys
service.each do |microservice|
  threads << Thread.new do
    thread_id = Thread.current.object_id.to_s(36)
    begin
      h[thread_id] = "#{microservice}"
      port = data_hash["#{microservice}"]['adport']
      h[thread_id] << "\nPort: #{port}\n"
      nodes = "knife search 'chef_environment:#{env} AND recipe:#{microservice}' -i 2>&1 | sed '1,2d'"
      node = %x[ #{nodes} ].split
      node.each do |n|
        h[thread_id] << "\n Node: #{n} \n"
        uri = URI("http://#{n}:#{port}/healthcheck?count=10")
        res = Net::HTTP.get_response(uri)
        status = Net::HTTP.get(uri)
        h[thread_id] << "#{res.code}"
        h[thread_id] << status
        h[thread_id] << res.message
      end
    rescue => e
      h[thread_id] << "ReadTimeout Error"
      next
    end
  end
end
threads.each do |thread|
  thread.join
end
ThreadsWait.all_waits(*threads)
h.values.join("\n")
How can I skip only the failing node and still collect results for the other nodes found inside this loop, before it moves on to the next microservice?
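A hedged sketch of one fix: move the begin/rescue inside node.each, so a single dead node is recorded and skipped while the remaining nodes are still checked and the microservice loop is unaffected. check_node below is a hypothetical stand-in for the Net::HTTP healthcheck calls from the script.

```ruby
require 'net/http'
require 'uri'

# Per-node error handling: the rescue wraps ONE node's healthcheck, so a
# timeout on one node no longer aborts the rest of the node list.
def check_nodes(nodes, log, check_node)
  nodes.each do |n|
    begin
      log << "\n Node: #{n} \n"
      log << check_node.call(n) # may raise Net::ReadTimeout, Errno::ECONNREFUSED, ...
    rescue => e
      # Record the failure for this node only, then carry on with the next one.
      log << "#{e.class}: #{e.message}"
    end
  end
  log
end
```

In the original script the same change amounts to moving the begin/rescue from around node.each to inside it; the outer rescue for the knife search can stay where it is.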

Related

Create a symlink using ruby

I am trying to create a symlink for the created file, but I get an error like: File exists - (/etc/nginx/sites-available/sushant.com, /etc/nginx/sites-enabled/sushant.com) (Errno::EEXIST)
Here is my code
require 'fileutils'
open('/etc/hosts') do |f|
  matches = []
  vhosts = []
  f.readlines.each do |lines|
    matches << lines if lines =~ /.*.com/
  end
  matches.each do |val|
    val.split.each do |x|
      vhosts << x if x =~ /.*.com/
    end
  end
  vhosts.each do |domain|
    # put the path to sites-enabled
    unless File.file? "/etc/nginx/sites-available/#{domain}"
      open("/etc/nginx/sites-available/#{domain}", 'w') do |g|
        g << "server { \n"
        g << "\tlisten 80 default_server;\n"
        g << "\tlisten [::]:80 default_server ipv6only=on;\n"
        g << "\troot /usr/share/nginx/html;\n"
        g << "\tindex index.html index.htm;\n"
        g << "\tserver_name localhost;\n"
        g << "\tlocation / {\n"
        g << "\t\ttry_files $uri $uri/ =404;\n"
        g << "\t}\n"
        g << "}\n"
        g << "server {\n"
        g << "\tpassenger_ruby /path/to/ruby;\n"
        g << "\trails_env development;\n"
        g << "\tlisten 80;\n"
        g << "\tserver_name #{domain};\n"
        g << "\troot /usr/share/nginx/html/#{domain}/public;\n"
        g << "\tpassenger_enabled on;\n"
        g << "}\n"
      end
      File.symlink "/etc/nginx/sites-available/#{domain}", "/etc/nginx/sites-enabled/#{domain}"
    end
  end
  p vhosts
end
Why does the EEXIST error occur after I run the script? Am I missing something?
I have found out that I should have placed the File.symlink "/etc/nginx/sites-available/#{domain}", "/etc/nginx/sites-enabled/#{domain}" call first, and then the action that creates the file.

Why do these threads stop working?

Since Shopify's default products importer (via CSV) is really slow, I'm using multithreading to add ~24000 products to a Shopify store using the API. The API has a call limit of 2 per second. With 4 threads the calls are within the limit.
But after a while all threads stop working except one. I don't get any error messages; the code keeps running but doesn't print any product information. I can't seem to figure out what's going wrong.
Here's the code I'm using:
require 'shopify_api'
require 'open-uri'
require 'json'
require 'base64'

begin_time = Time.now
my_threads = []
shop_url = "https://<API_KEY>:<PASSWORD>@<SHOPNAME>.myshopify.com/admin"
ShopifyAPI::Base.site = shop_url
raw_product_data = JSON.parse(open('<REDACTED>') { |f| f.read }.force_encoding('UTF-8'))

# Split raw product data
one, two, three, four = raw_product_data.each_slice((raw_product_data.size / 4.0).round).to_a

def category_to_tag(input)
  <REDACTED>
end

def bazookah(array, number)
  array.each do |item|
    single_product_begin_time = Time.now
    # Store item data in variables
    vendor = item['brand'].nil? ? 'Overige' : item['brand']
    title = item['description']
    item_size = item['salesUnitSize']
    body = "#{vendor} - #{title} - #{item_size}"
    type = item['category'].nil? ? 'Overige' : item['category']
    tags = category_to_tag(item['category']) unless item['category'].nil?
    variant_sku = item['itemId']
    variant_price = item['basePrice']['price']
    if !item['images'].nil? && !item['images'][2].nil?
      image_src = item['images'][2]['url']
    end
    image_time_begin = Time.now
    image = Base64.encode64(open(image_src) { |io| io.read }) unless image_src.nil?
    image_time_end = Time.now
    total_image_time = image_time_end - image_time_begin
    # Create new product
    new_product = ShopifyAPI::Product.new
    new_product.title = title
    new_product.body_html = body
    new_product.product_type = type
    new_product.vendor = vendor
    new_product.tags = item['category'].nil? ? 'Overige' : tags
    new_product.variants = [ <REDACTED> ]
    new_product.images = [ <REDACTED> ]
    new_product.save
    creation_time = Time.now - single_product_begin_time
    puts "#{number}: #{variant_sku} - P: #{creation_time.round(2)} - I: #{image_src.nil? ? 'No image' : total_image_time.round(3)}"
  end
end

puts '====================================================================================='
puts "#{raw_product_data.size} products loaded. Starting import at #{begin_time}..."
puts '-------------------------------------------------------------------------------------'
my_threads << Thread.new { bazookah(one, 'one') }
my_threads << Thread.new { bazookah(two, 'two') }
my_threads << Thread.new { bazookah(three, 'three') }
my_threads << Thread.new { bazookah(four, 'four') }
my_threads.each { |thr| thr.join }
puts '-------------------------------------------------------------------------------------'
puts "Done. It took #{Time.now - begin_time} minutes."
puts '====================================================================================='
What could I try to solve this?
It most likely has something to do with this:
http://docs.shopify.com/api/introduction/api-call-limit
I'd suspect that you are being rate limited by Shopify. You are trying to add 24000 records, via the API, from a single IP address. Most people don't like that kind of thing.
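To make the rate limiting explicit rather than hoping four threads naturally stay under 2 calls per second, one generic (not Shopify-specific) sketch is a single limiter shared by all threads, with each thread calling limiter.wait immediately before new_product.save:

```ruby
# Minimal thread-safe rate limiter (a sketch, not a library API): each call
# to #wait reserves the next available time slot under a mutex, so the
# COMBINED rate of all threads stays under the limit.
class RateLimiter
  def initialize(calls_per_second)
    @interval = 1.0 / calls_per_second
    @mutex = Mutex.new
    @next_slot = Time.now
  end

  def wait
    sleep_for = @mutex.synchronize do
      now = Time.now
      @next_slot = now if @next_slot < now # don't bank unused slots
      slot = @next_slot
      @next_slot += @interval              # reserve the following slot
      slot - now
    end
    sleep(sleep_for) if sleep_for > 0
  end
end
```

With RateLimiter.new(2) shared across the threads, saves are spaced at least 0.5 s apart overall, so adding a fifth thread can no longer push you over the limit.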

Ruby Watir: cannot launch browser in a thread in Linux

I'm trying to run this code in Red Hat Linux, and it won't launch a browser. The only way I can get it to work is if I ALSO launch a browser OUTSIDE of the thread, which makes no sense to me. Here is what I mean:
require 'watir-webdriver'
$alphabet = ["A", "B", "C"]
$alphabet.each do |z|
  puts "pshaw"
  Thread.new {
    Thread.current["testPuts"] = "ohai " + z.to_s
    Thread.current["myBrowser"] = Watir::Browser.new :ff
    puts Thread.current["testPuts"]
  }
  $browser = Watir::Browser.new :ff
end
the output is:
pshaw
(launches browser)
ohai A
(launches browser)
pshaw
(launches browser)
ohai B
(launches browser)
pshaw
(launches browser)
ohai C
(launches browser)
However, if I remove the browser launch that is outside of the thread, as so:
require 'watir-webdriver'
$alphabet = ["A", "B", "C"]
$alphabet.each do |z|
  puts "pshaw"
  Thread.new {
    Thread.current["testPuts"] = "ohai " + z.to_s
    Thread.current["myBrowser"] = Watir::Browser.new :ff
    puts Thread.current["testPuts"]
  }
end
The output is:
pshaw
pshaw
pshaw
What is going on here? How do I fix this so that I can launch a browser inside a thread?
EDIT TO ADD:
The solution Justin Ko provided worked on the pseudocode above, but it's not helping with my actual code:
require 'watir-webdriver'
require_relative 'Credentials'
require_relative 'ReportGenerator'
require_relative 'installPageLayouts'
require_relative 'PackageHandler'
Dir[(Dir.pwd.to_s + "/bmx*")].each { |file| require_relative file } # this includes all the files in the directory with names starting with bmx

module Runner
  def self.runTestCases(orgType, *caseNumbers)
    $testCaseArray = Array.new
    caseNumbers.each do |thisCaseNum|
      $testCaseArray << thisCaseNum
    end
    $allTestCaseResults = Array.new
    $alphabet = ["A", "B", "C"]
    @count = 0
    @multiOrg = 0
    @peOrg = 0
    @eeOrg = 0
    @threads = Array.new
    $testCaseArray.each do |thisCase|
      $alphabet[@count] = Thread.new {
        puts "working one"
        Thread.current["tBrowser"] = Watir::Browser.new :ff
        puts "working two"
        if ((thisCase.declareOrg().downcase == "multicurrency") || (thisCase.declareOrg().downcase == "mc"))
          currentOrg = $multicurrencyOrgArray[@multiOrg]
          @multiOrg += 1
        elsif ((thisCase.declareOrg().downcase == "enterprise") || (thisCase.declareOrg().downcase == "ee"))
          currentOrg = $eeOrgArray[@eeOrg]
          @eeOrg += 1
        else # default to single currency PE
          currentOrg = $peOrgArray[@peOrg]
          @peOrg += 1
        end
        setupOrg(currentOrg, thisCase.testCaseID, currentOrg.layoutDirectory)
        runningTest = thisCase.actualTest()
        if runningTest.crashed != "crashed" # changed this to read the attr_reader instead of the deleted caseStatus method from TestCase.rb
          cleanupOrg(thisCase.testCaseID, currentOrg.layoutDirectory)
        end
        @threads << Thread.current
      }
      @count += 1
    end
    @threads.each do |thisThread|
      thisThread.join
    end
    writeReport($allTestCaseResults)
  end

  def self.setupOrg(thisOrg, caseID, layoutPath)
    begin
      thisOrg.logIn
      pkg = PackageHandler.new
      basicInstalled = "false"
      counter = 0
      until ((basicInstalled == "true") || (counter == 5))
        pkg.basicInstaller()
        if Thread.current["tBrowser"].text.include? "You have attempted to access a page"
          thisOrg.logIn
        else
          basicInstalled = "true"
        end
        counter += 1
      end
      if !((caseID.include? "bmxb") || (caseID.include? "BMXB"))
        moduleInstalled = "false"
        counter2 = 0
        until ((moduleInstalled == "true") || (counter == 5))
          pkg.packageInstaller(caseID)
          if Thread.current["tBrowser"].text.include? "You have attempted to access a page"
            thisOrg.logIn
          else
            moduleInstalled = "true"
          end
          counter2 += 1
        end
      end
      installPageLayouts(layoutPath)
    rescue
      $allTestCaseResults << TestCaseResult.new(caseID, caseID, 1, "SETUP FAILED!" + "<p>#{$!}</p><p>#{$@}</p>").hashEmUp
      writeReport($allTestCaseResults)
    end
  end

  def self.cleanupOrg(caseID, layoutPath)
    begin
      uninstallPageLayouts(layoutPath)
      pkg = PackageHandler.new
      pkg.packageUninstaller(caseID)
      Thread.current["tBrowser"].close
    rescue
      $allTestCaseResults << TestCaseResult.new(caseID, caseID, 1, "CLEANUP FAILED!" + "<p>#{$!}</p><p>#{$@}</p>").hashEmUp
      writeReport($allTestCaseResults)
    end
  end
end
The output it's generating is:
working one
working one
working one
It's not opening a browser or doing any of the subsequent code.
It looks like the code is having the problem mentioned in the Thread class documentation:
If we don't call thr.join before the main thread terminates, then all
other threads including thr will be killed.
Basically your main thread is finishing pretty much instantaneously. However, the threads, which create browsers, take a lot longer than that. As a result, the threads get terminated before the browser opens.
By adding a long sleep at the end, you can see that your browsers can be opened by your code:
require 'watir-webdriver'
$chunkythread = ["A", "B", "C"]
$chunkythread.each do |z|
  puts "pshaw"
  Thread.new {
    Thread.current["testwords"] = "ohai " + z.to_s
    Thread.current["myBrowser"] = Watir::Browser.new :ff
    puts Thread.current["testwords"]
  }
end
sleep(300)
However, for more reliability, you should join all the threads at the end:
require 'watir-webdriver'
threads = []
$chunkythread = ["A", "B", "C"]
$chunkythread.each do |z|
  puts "pshaw"
  threads << Thread.new {
    Thread.current["testwords"] = "ohai " + z.to_s
    Thread.current["myBrowser"] = Watir::Browser.new :ff
    puts Thread.current["testwords"]
  }
end
threads.each { |thr| thr.join }
For the actual code example, putting @threads << Thread.current inside the thread will not work: the join is evaluated while @threads is still empty. You could try doing the following:
$testCaseArray.each do |thisCase|
  @threads << Thread.new {
    puts "working one"
    Thread.current["tBrowser"] = Watir::Browser.new :ff
    # Do your other thread stuff
  }
  $alphabet[@count] = @threads.last
  @count += 1
end
@threads.each do |thisThread|
  thisThread.join
end
Note that I am not sure why you want to store the threads in $alphabet. I put in the $alphabet[@count] = @threads.last, but it could be removed if not in use.
I uninstalled Watir 5.0.0 and installed Watir 4.0.2, and now it works fine.

How to get my rexml/nokogiri script to run faster

I have this Ruby script that collects 46344 XML links and then collects 16 element nodes from every XML file. The last part of the process stores them in a CSV file. The problem I have is that it takes too long, more than 1-2 hours.
Here is the script, without the link that has all the XML links; I can't provide the link because it's company stuff. I hope that's cool.
The script works, but it takes too long:
require 'rubygems'
require 'nokogiri'
require 'open-uri'
require 'rexml/document'
require 'csv'
include REXML

@urls = Array.new
@ID = Array.new
@titleSv = Array.new
@titleEn = Array.new
@identifier = Array.new
@typeOfLevel = Array.new
@typeOfResponsibleBody = Array.new
@courseTyp = Array.new
@credits = Array.new
@degree = Array.new
@preAcademic = Array.new
@subjectCodeVhs = Array.new
@descriptionSv = Array.new
@visibleToSweApplicants = Array.new
@lastedited = Array.new
@expires = Array.new

# Fetch all the XML links
htmldoc = Nokogiri::HTML(open('A SITE THAT HAVE ALL THE LINKS'))
# Grab the links to the XML files and save them in the urls array
htmldoc.xpath('//a/@href').each do |links|
  @urls << links.content
end

@urls.each do |url|
  # Loop through the XML files and grab the element nodes
  xmldoc = REXML::Document.new(open(url).read)
  # Root element
  root = xmldoc.root
  # Fetch the info id
  @ID << root.attributes["id"]
  # TitleSv
  xmldoc.elements.each("/ns:educationInfo/ns:titles/ns:title[1]") { |e| @titleSv << e.text }
  # TitleEn
  xmldoc.elements.each("/ns:educationInfo/ns:titles/ns:title[2]") { |e| @titleEn << e.text }
  # Identifier
  xmldoc.elements.each("/ns:educationInfo/ns:identifier") { |e| @identifier << e.text }
  # typeOfLevel
  xmldoc.elements.each("/ns:educationInfo/ns:educationLevelDetails/ns:typeOfLevel") { |e| @typeOfLevel << e.text }
  # typeOfResponsibleBody
  xmldoc.elements.each("/ns:educationInfo/ns:educationLevelDetails/ns:typeOfResponsibleBody") { |e| @typeOfResponsibleBody << e.text }
  # courseTyp
  xmldoc.elements.each("/ns:educationInfo/ns:educationLevelDetails/ns:academic/ns:courseOfferingPackage/ns:type") { |e| @courseTyp << e.text }
  # credits
  xmldoc.elements.each("/ns:educationInfo/ns:credits/ns:exact") { |e| @credits << e.text }
  # degree
  xmldoc.elements.each("/ns:educationInfo/ns:degrees/ns:degree") { |e| @degree << e.text }
  # preAcademic
  xmldoc.elements.each("/ns:educationInfo/ns:prerequisites/ns:academic") { |e| @preAcademic << e.text }
  # subjectCodeVhs
  xmldoc.elements.each("/ns:educationInfo/ns:subjects/ns:subject/ns:code") { |e| @subjectCodeVhs << e.text }
  # DescriptionSv
  xmldoc.elements.each("/educationInfo/descriptions/ct:description/ct:text") { |e| @descriptionSv << e.text }
  # Fetch the document's expiry date
  @expires << root.attributes["expires"]
  # Fetch the document's lastEdited date
  @lastedited << root.attributes["lastEdited"]
  # Store them in uni.CSV
  CSV.open("eduction_normal.csv", "wb") do |row|
    (0..@ID.length - 1).each do |index|
      row << [@ID[index], @titleSv[index], @titleEn[index], @identifier[index], @typeOfLevel[index], @typeOfResponsibleBody[index], @courseTyp[index], @credits[index], @degree[index], @preAcademic[index], @subjectCodeVhs[index], @descriptionSv[index], @lastedited[index], @expires[index]]
    end
  end
end
If it's network access that is slow, you could start threading it and/or switch to JRuby, which can use all the cores on your processor. If you have to do this often, you will have to work out a read/write strategy that serves you best without blocking.
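A hedged sketch of that threading suggestion, using a fixed pool of workers fed from a thread-safe Queue; the block is left generic here, but in the script it would do the open(url).read and the REXML element extraction:

```ruby
# Sketch: run a block over many items with a fixed pool of worker threads.
# Results are written by index, so the output keeps the input order.
def parallel_map(items, workers: 8)
  queue = Queue.new
  items.each_with_index { |item, i| queue << [item, i] }
  results = Array.new(items.size)
  pool = Array.new(workers) do
    Thread.new do
      begin
        loop do
          item, i = queue.pop(true) # non-blocking pop; raises when empty
          results[i] = yield(item)
        end
      rescue ThreadError
        # queue drained: this worker is done
      end
    end
  end
  pool.each(&:join)
  results
end
```

For example, something like parallel_map(@urls, workers: 16) { |url| open(url).read } (hypothetical usage) would overlap the downloads. On MRI, threads only help here because the time is spent waiting on the network; for CPU-bound parsing, JRuby, as suggested above, can use all cores.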

How to generate xml_builder ruby code from an XML file

I've got an xml file. How could I generate xml_builder ruby code out of that file?
Notice - I'm sort of going backwards here (instead of generating xml, I'm generating ruby code).
Pretty formatting isn't a big deal - I can always run it through a formatter later.
It's sort of impossible, not unlike asking "how do I generate a Ruby script that outputs the number 3?", for the answer could be:
puts 3
or
puts 2+1
or
puts [1,2,3].count
etc.
So, one answer to your question would be:
xml = File.read('your.xml')
puts "puts <<EOF\n#{xml}\nEOF"
Anyway, if you just want to generate a Builder-based script that reproduces your XML node-for-node, I guess it would be easiest using XSLT. That's a language constructed exactly for such purposes: transforming XML.
Here's what I eventually came up with:
#!/usr/bin/env ruby
require "rexml/document"

filename = ARGV[0]
if filename
  f = File.read(filename)
else
  raise "Couldn't read file: `#(unknown)'"
end

doc = REXML::Document.new(f)

def self.output_hash(attributes = {})
  count = attributes.size
  str = ""
  index = 0
  attributes.each do |key, value|
    if index == 0
      str << " "
    end
    str << "#{key.inspect} => "
    str << "#{value.inspect}"
    if index + 1 < count
      str << ", "
    end
    index += 1
  end
  str
end

def self.make_xml_builder(doc, str = "")
  doc.each do |element|
    if element.respond_to?(:name)
      str << "xml.#{element.name}"
      str << "#{output_hash(element.attributes)}"
      if element.length > 0
        str << " do \n"
        make_xml_builder(element, str)
        str << "end\n"
      else
        str << "\n"
      end
    elsif element.class == REXML::Text
      string = element.to_s
      string.gsub!("\n", "")
      string.gsub!("\t", "")
      if !string.empty?
        str << "xml.text!(#{string.inspect})\n"
      end
    end
  end
  str
end

puts make_xml_builder(doc)
After generating that, I then formatted it in Emacs.
