I am trying to create a symlink for the created file, but I get an error like File exists - (/etc/nginx/sites-available/sushant.com, /etc/nginx/sites-enabled/sushant.com) (Errno::EEXIST).
Here is my code:
require 'fileutils'

open('/etc/hosts') do |f|
  matches = []
  vhosts = []
  f.readlines.each do |line|
    matches << line if line =~ /.*.com/
  end
  matches.each do |val|
    val.split.each do |x|
      vhosts << x if x =~ /.*.com/
    end
  end
  vhosts.each do |domain|
    # put the path to sites-enabled
    unless File.file? "/etc/nginx/sites-available/#{domain}"
      open("/etc/nginx/sites-available/#{domain}", 'w') do |g|
        g << "server { \n"
        g << "\tlisten 80 default_server;\n"
        g << "\tlisten [::]:80 default_server ipv6only=on;\n"
        g << "\troot /usr/share/nginx/html;\n"
        g << "\tindex index.html index.htm;\n"
        g << "\tserver_name localhost;\n"
        g << "\tlocation / {\n"
        g << "\t\ttry_files $uri $uri/ =404;\n"
        g << "\t}\n"
        g << "}\n"
        g << "server {\n"
        g << "\tpassenger_ruby /path/to/ruby;\n"
        g << "\trails_env development;\n"
        g << "\tlisten 80;\n"
        g << "\tserver_name #{domain};\n"
        g << "\troot /usr/share/nginx/html/#{domain}/public;\n"
        g << "\tpassenger_enabled on;\n"
        g << "}\n"
      end
      File.symlink "/etc/nginx/sites-available/#{domain}", "/etc/nginx/sites-enabled/#{domain}"
    end
  end
  p vhosts
end
Why does the EEXIST error occur when I run the script? Am I missing something?
I have found that it works if I call File.symlink "/etc/nginx/sites-available/#{domain}", "/etc/nginx/sites-enabled/#{domain}" first and only then create the file.
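A guard on the symlink call also avoids the error on re-runs. This is a minimal sketch, assuming the link may be left over from a previous run of the script:
link = "/etc/nginx/sites-enabled/#{domain}"
# File.symlink? also catches dangling links that File.exist? would miss.
unless File.symlink?(link) || File.exist?(link)
  File.symlink("/etc/nginx/sites-available/#{domain}", link)
end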
The issue is with the 'chap << "\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF"' line:
chaps = []
ctoc = "toc1\x00"
ctoc << [3, chapters.size].pack("CC")
chapters.each_with_index do |ch, i|
  num = i + 1
  title = ch[:title]
  description = ch[:description]
  link = ch[:link]
  ctoc << "ch#{num}\x00"
  chap = "ch#{num}\x00"
  chap << [ch[:start] * 1000, ch[:end] * 1000].pack("NN")
  chap << "\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF"
  title_tag = [title.encode("utf-16")].pack("a*")
  chap << "TIT2"
  chap << [title_tag.size + 1].pack("N")
  chap << "\x00\x00\x01"
  chap << title_tag
  chaps << chap
end
I have added the following to the top of the file, but that didn't fix the issue. Any other ideas to try?
# encoding: utf-8
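If the error is an Encoding::CompatibilityError (an assumption, since the original message isn't quoted), the usual cause is mixing UTF-8 string literals with the binary data coming from pack and from the UTF-16 title. A sketch that forces every piece to binary (ASCII-8BIT) with String#b before concatenating:
chap = "ch#{num}\x00".b                                  # start the frame in binary
chap << [ch[:start] * 1000, ch[:end] * 1000].pack("NN")  # pack already returns binary
chap << ("\xFF" * 8).b                                   # the eight 0xFF offset bytes
title_tag = title.encode("utf-16").b                     # UTF-16 bytes, treated as raw binary
chap << "TIT2".b
chap << [title_tag.bytesize + 1].pack("N")
chap << "\x00\x00\x01".b
chap << title_tag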
The example below
require 'gnuplot'
require 'gnuplot/multiplot'

def sample
  x = (0..50).collect { |v| v.to_f }
  mult2 = x.map { |v| v * 2 }
  squares = x.map { |v| v * 4 }
  Gnuplot.open do |gp|
    Gnuplot::Multiplot.new(gp, layout: [2, 1]) do |mp|
      Gnuplot::Plot.new(mp) { |plot| plot.data << Gnuplot::DataSet.new([x, mult2]) }
      Gnuplot::Plot.new(mp) { |plot| plot.data << Gnuplot::DataSet.new([x, squares]) }
    end
  end
end
works pretty well. But how can I send the output to a file instead of the screen? Where should I put plot.terminal "png enhanced truecolor" and plot.output "data.png"?
Indeed, I don't even know where I should call the #terminal and #output methods, since the plot objects are inside a multiplot block.
As a workaround, the following would work as expected.
Gnuplot.open do |gp|
  ...
end
The block parameter gp here is the IO object through which commands are piped to gnuplot. Thus, we can send the "set terminal" and "set output" commands directly to gnuplot via gp.
Gnuplot.open do |gp|
  gp << 'set terminal png enhanced truecolor' << "\n"
  gp << 'set output "data.png"' << "\n"
  Gnuplot::Multiplot.new(gp, layout: [2, 1]) do |mp|
    Gnuplot::Plot.new(mp) { |plot| plot.data << Gnuplot::DataSet.new([x, mult2]) }
    Gnuplot::Plot.new(mp) { |plot| plot.data << Gnuplot::DataSet.new([x, squares]) }
  end
end
I created this Ruby script, which basically searches all the nodes/servers in an environment based on the recipes on each server node. The issue is that if a node is stopped, the script fails with a runtime error, hits the rescue, appends "ReadTimeout Error" to h[thread_id], and moves on to the next microservice via next.
require 'json'
require 'net/http'
require 'thwait'

file = File.read("./#{srv}.json")
data_hash = JSON.parse(file)
h = {}
threads = []
service = data_hash.keys
service.each do |microservice|
  threads << Thread.new do
    thread_id = Thread.current.object_id.to_s(36)
    begin
      h[thread_id] = "#{microservice}"
      port = data_hash["#{microservice}"]['adport']
      h[thread_id] << "\nPort: #{port}\n"
      nodes = "knife search 'chef_environment:#{env} AND recipe:#{microservice}' -i 2>&1 | sed '1,2d'"
      node = %x[ #{nodes} ].split
      node.each do |n|
        h[thread_id] << "\n Node: #{n} \n"
        uri = URI("http://#{n}:#{port}/healthcheck?count=10")
        res = Net::HTTP.get_response(uri)
        status = Net::HTTP.get(uri)
        h[thread_id] << "#{res.code}"
        h[thread_id] << status
        h[thread_id] << res.message
      end
    rescue => e
      h[thread_id] << "ReadTimeout Error"
      next
    end
  end
end
threads.each do |thread|
  thread.join
end
ThreadsWait.all_waits(*threads)
h.values.join("\n")
How can I skip only the failing node and still get results for the other nodes found inside this loop, before it moves on to the next microservice?
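One way to do that (a sketch, assuming the timeouts come from the per-node HTTP calls) is to move the begin/rescue inside the node.each loop, so the rescue skips only the node that failed:
node.each do |n|
  h[thread_id] << "\n Node: #{n} \n"
  begin
    uri = URI("http://#{n}:#{port}/healthcheck?count=10")
    res = Net::HTTP.get_response(uri)
    h[thread_id] << "#{res.code}\n#{res.body}\n#{res.message}\n"
  rescue => e
    # Only this node is skipped; the loop continues with the next node.
    h[thread_id] << "ReadTimeout Error on #{n} (#{e.class})\n"
    next
  end
end
Using res.body also avoids issuing a second request per node via Net::HTTP.get.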
I have this Ruby script that collects 46,344 XML links and then collects 16 element nodes from every XML file. The last part of the process stores everything in a CSV file. The problem is that it takes too long: more than 1-2 hours.
Here is the script, without the link that lists all the XML links; I can't provide that link because it's company stuff. I hope that's OK.
It works, but it is too slow:
require 'rubygems'
require 'nokogiri'
require 'open-uri'
require 'rexml/document'
require 'csv'

include REXML

@urls = Array.new
@ID = Array.new
@titleSv = Array.new
@titleEn = Array.new
@identifier = Array.new
@typeOfLevel = Array.new
@typeOfResponsibleBody = Array.new
@courseTyp = Array.new
@credits = Array.new
@degree = Array.new
@preAcademic = Array.new
@subjectCodeVhs = Array.new
@descriptionSv = Array.new
@visibleToSweApplicants = Array.new
@lastedited = Array.new
@expires = Array.new

# Fetch all the XML links
htmldoc = Nokogiri::HTML(open('A SITE THAT HAVE ALL THE LINKS'))
# Fetch all the links to the XML files and store them in the urls array
htmldoc.xpath('//a/@href').each do |link|
  @urls << link.content
end

# Loop through the XML files and grab the element nodes
@urls.each do |url|
  xmldoc = REXML::Document.new(open(url).read)
  # Root element
  root = xmldoc.root
  # Fetch the info id
  @ID << root.attributes["id"]
  # TitleSv
  xmldoc.elements.each("/ns:educationInfo/ns:titles/ns:title[1]") { |e| @titleSv << e.text }
  # TitleEn
  xmldoc.elements.each("/ns:educationInfo/ns:titles/ns:title[2]") { |e| @titleEn << e.text }
  # Identifier
  xmldoc.elements.each("/ns:educationInfo/ns:identifier") { |e| @identifier << e.text }
  # typeOfLevel
  xmldoc.elements.each("/ns:educationInfo/ns:educationLevelDetails/ns:typeOfLevel") { |e| @typeOfLevel << e.text }
  # typeOfResponsibleBody
  xmldoc.elements.each("/ns:educationInfo/ns:educationLevelDetails/ns:typeOfResponsibleBody") { |e| @typeOfResponsibleBody << e.text }
  # courseTyp
  xmldoc.elements.each("/ns:educationInfo/ns:educationLevelDetails/ns:academic/ns:courseOfferingPackage/ns:type") { |e| @courseTyp << e.text }
  # credits
  xmldoc.elements.each("/ns:educationInfo/ns:credits/ns:exact") { |e| @credits << e.text }
  # degree
  xmldoc.elements.each("/ns:educationInfo/ns:degrees/ns:degree") { |e| @degree << e.text }
  # preAcademic
  xmldoc.elements.each("/ns:educationInfo/ns:prerequisites/ns:academic") { |e| @preAcademic << e.text }
  # subjectCodeVhs
  xmldoc.elements.each("/ns:educationInfo/ns:subjects/ns:subject/ns:code") { |e| @subjectCodeVhs << e.text }
  # DescriptionSv
  xmldoc.elements.each("/educationInfo/descriptions/ct:description/ct:text") { |e| @descriptionSv << e.text }
  # Fetch the document's expiry date
  @expires << root.attributes["expires"]
  # Fetch the document's lastEdited date
  @lastedited << root.attributes["lastEdited"]
  # Store everything in eduction_normal.csv
  CSV.open("eduction_normal.csv", "wb") do |row|
    (0..@ID.length - 1).each do |index|
      row << [@ID[index], @titleSv[index], @titleEn[index], @identifier[index], @typeOfLevel[index], @typeOfResponsibleBody[index], @courseTyp[index], @credits[index], @degree[index], @preAcademic[index], @subjectCodeVhs[index], @descriptionSv[index], @lastedited[index], @expires[index]]
    end
  end
end
If the bottleneck is network access, you could start threading it and/or start using JRuby, which can use all the cores on your processor. If you have to do this often, you will have to work out a read/write strategy that serves you best without blocking.
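A minimal sketch of the threading idea, assuming the XML fetches are independent and the server tolerates a handful of parallel connections (process_xml is a hypothetical helper wrapping the REXML extraction above):
require 'open-uri'

# Download in small batches of parallel threads, then parse sequentially
# so the shared arrays are never mutated from several threads at once.
@urls.each_slice(8) do |batch|
  docs = batch.map { |url| Thread.new { open(url).read } }.map(&:value)
  docs.each { |xml| process_xml(xml) } # hypothetical helper: the REXML code above
end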
I want to have this:
["(GKA) GOROKA, GOROKA, PAPUA NEW
GUINEA"]
instead of:
[
  [
    "(GKA)",
    "GOROKA",
    "GOROKA",
    "PAPUA NEW GUINEA"
  ]
]
I have this code so far:
#aeropuertos = ""
f = File.open("./public/aeropuertos/aeropuertos.cvs", "r")
f.each_line { |line|
fields = line.split(':')
if (fields[2] == "N/A")
#line = "(" << fields[1] << ")" << ",," << fields[3] << "," << fields[4]
else
#line = "(" << fields[1] << ")" << "," << fields[2] << "," << fields[3] << "," << fields[4]
end
#aeropuertos += #line << "\n"
}
return CSV.parse(#aeropuertos).to_json
What should I do?
#aeropuertos = ""
f = File.open("./public/aeropuertos/aeropuertos.cvs", "r")
f.each_line { |line|
fields = line.split(':')
if (fields[2] == "N/A")
#line = "(" << fields[1] << ")" << ",," << fields[3] << "," << fields[4]
else
#line = "(" << fields[1] << ")" << "," << fields[2] << "," << fields[3] << "," << fields[4]
end
#aeropuertos += #line << "\n"
}
res = []
CSV.parse(#aeropuertos).each do |c|
res << c.join(',')
end
return res.to_json
There's no need for the CSV parser. Just create the structure you want as you read each line. That is, instead of making a large string in @aeropuertos and parsing it with a CSV parser, make @aeropuertos an array, and add each @line to the array.
So, instead of this:
@aeropuertos += @line << "\n"
Do this:
@aeropuertos << @line
But be sure to say this at the beginning:
@aeropuertos = []
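Putting that together, a sketch of the whole loop under that advice (same file path and field layout as in the question):
require 'json'

@aeropuertos = []
File.foreach("./public/aeropuertos/aeropuertos.cvs") do |line|
  fields = line.chomp.split(':')
  city = fields[2] == "N/A" ? "" : fields[2]
  # Build each row as one comma-joined string; no CSV round-trip needed.
  @aeropuertos << "(#{fields[1]}),#{city},#{fields[3]},#{fields[4]}"
end
@aeropuertos.to_json # => e.g. ["(GKA),GOROKA,GOROKA,PAPUA NEW GUINEA", ...]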