I have a basic Ruby program, which I've been using for several months, that retrieves hardware and virtual guest details for my account. Until roughly 3 days ago it always ran fine and relatively fast. Since then it often crashes, and/or the virtual guest retrieval takes 20-30 times longer than it used to. What could be the issue here? The crash stack is not very informative.
Program:
require 'rubygems'
require 'softlayer_api'
require 'pp'

client = SoftLayer::Client.new(:username => user, :api_key => api_key, :timeout => 999999)
account = client['Account'].object_mask("mask[virtualGuestCount,hardwareCount]").getObject()

virtual_machines_count = account["virtualGuestCount"]
bare_metal_machines_count = account["hardwareCount"]
bare_metal_machines_count_index = 0
virtual_machines_count_index = 0

for i in 0..(bare_metal_machines_count/10.0).ceil - 1
  list_of_baremetal_machines = client['Account'].result_limit(i*10,10).object_mask("mask[id, hostname, fullyQualifiedDomainName, provisionDate, datacenter[name], billingItem[recurringFee, associatedChildren[recurringFee], orderItem[description, order[userRecord[username], id]]], operatingSystem[id, softwareLicense[id, softwareDescription[longDescription]]], tagReferences[tagId, tag[name]], primaryIpAddress, primaryBackendIpAddress]").getHardware
  for x in 0..list_of_baremetal_machines.length - 1
    bare_metal_machines_count_index = bare_metal_machines_count_index + 1
    if bare_metal_machines_count_index == bare_metal_machines_count
      pp("Finished retrieving " + bare_metal_machines_count.to_s + " bare metal machines")
    end
  end
end

for i in 0..(virtual_machines_count/10.0).ceil - 1
  list_of_virtual_machines = client['Account'].result_limit(i*10,10).object_mask("mask[id, hostname, fullyQualifiedDomainName, provisionDate, datacenter[name], billingItem[recurringFee, associatedChildren[recurringFee], orderItem[description, order[userRecord[username], id]]], operatingSystem[id, softwareLicense[id, softwareDescription[longDescription]]], tagReferences[tagId, tag[name]], primaryIpAddress, primaryBackendIpAddress]").getVirtualGuests
  for x in 0..list_of_virtual_machines.length - 1
    virtual_machines_count_index = virtual_machines_count_index + 1
    if virtual_machines_count_index == virtual_machines_count
      pp("Finished retrieving " + virtual_machines_count.to_s + " virtual machines")
    end
  end
end
The crash looks like this (process.rb line 552 contains the getVirtualGuests call):
/opt/cds/ruby/lib/ruby/2.1.0/xmlrpc/client.rb:271:in `call': An error has occurred while processing your request. Please try again later. (XMLRPC::FaultException)
from /opt/cds/ruby/gems/gems/softlayer_api-3.1.0/lib/softlayer/Service.rb:267:in `call_softlayer_api_with_params'
from /opt/cds/ruby/gems/gems/softlayer_api-3.1.0/lib/softlayer/APIParameterFilter.rb:194:in `method_missing'
from /home/dashadmin/manas/cds-health-dashboard-sensu/lib/sensu/server/process.rb:552:in `block (5 levels) in retrieve_softlayer_inventory_information'
from /opt/cds/ruby/lib/ruby/gems/2.1.0/gems/activesupport-4.2.4/lib/active_support/core_ext/range/each.rb:7:in `each'
from /opt/cds/ruby/lib/ruby/gems/2.1.0/gems/activesupport-4.2.4/lib/active_support/core_ext/range/each.rb:7:in `each_with_time_with_zone'
from /home/dashadmin/manas/cds-health-dashboard-sensu/lib/sensu/server/process.rb:551:in `block (4 levels) in retrieve_softlayer_inventory_information'
from /home/dashadmin/manas/cds-health-dashboard-sensu/lib/sensu/server/process.rb:511:in `each'
from /home/dashadmin/manas/cds-health-dashboard-sensu/lib/sensu/server/process.rb:511:in `block (3 levels) in retrieve_softlayer_inventory_information'
from /opt/cds/ruby/gems/gems/sensu-em-2.5.1/lib/eventmachine.rb:1054:in `call'
from /opt/cds/ruby/gems/gems/sensu-em-2.5.1/lib/eventmachine.rb:1054:in `block in spawn_threadpool'
Thank you in advance for any help.
This exception is likely caused by the amount of data being retrieved in a single request.
My first suggestion would be to use result limits in your script, but I can see that you already are. In that case, try reducing the result limit values.
Another possibility is that the query generated from those object masks is too intensive and is causing the failures. In that case, reduce the properties in the object masks, perhaps splitting the retrieval into multiple calls, and additionally add a cool-down period while iterating over your virtual guests.
This rough Ruby sketch shows what I mean (the propertyN names are placeholders for real mask properties):
list_of_virtual_machines.each do |vsi|
  vsi_service = client['Virtual_Guest'].object_with_id(vsi['id'])
  # Call 1 with reduced properties in the mask
  part1 = vsi_service.object_mask("mask[property1,property2,property3,property4]").getObject
  # Call 2 with reduced properties in the mask
  part2 = vsi_service.object_mask("mask[property5,property6,property7]").getObject
  # Call 3 with reduced (nested) properties in the mask
  part3 = vsi_service.object_mask("mask[property8[property9[property10[property11]]]]").getObject
  vsi_details = part1.merge(part2).merge(part3)
  sleep(1) # cool down 1 second before moving to the next VSI
end
I hope this alternative helps you; the result is a lighter query each time.
Related
I'm self-studying Python and this is my first program.
I work on analyzing logs from our servers; usually I need to analyze a full day of logs. I created this script (a simplified example of the logic) just to check speed. With plain sequential code, analyzing 20 million rows takes about 12-13 minutes. I need to process 200 million rows in 5 minutes.
What I tried:
Multiprocessing (I hit a shared-memory issue, which I think I fixed). But the result: 300K rows = 20 seconds, no matter how many processes. (PS: I also need to control the number of processes in advance.)
Threading (I found it gives no speedup: 300K rows = 2 seconds, but the plain code is also 300K = 2 seconds).
Asyncio (I thought the script was slow because it reads many files). The result is the same as threading: 300K = 2 seconds.
In the end, I think all three of my scripts are incorrect and don't work as intended.
PS: I try to avoid specific Python modules (like pandas), because they would make the script harder to run on different servers; it's better to stick to common libraries.
Please help me check the first one, multiprocessing.
import csv
import os
from multiprocessing import Process, Queue, Value, Manager

file = {"hcs.log", "hcs1.log", "hcs2.log", "hcs3.log"}

def argument(m, a, n):
    proc_num = os.getpid()
    a_temp_m = a["vod_miss"]
    a_temp_h = a["vod_hit"]
    with open(os.getcwd() + '/' + m, newline='') as hcs_1:
        hcs_2 = csv.reader(hcs_1, delimiter=' ')
        for j in hcs_2:
            if j[3].find('MISS') != -1:
                a_temp_m[n] = a_temp_m[n] + 1
            elif j[3].find('HIT') != -1:
                a_temp_h[n] = a_temp_h[n] + 1
    a["vod_miss"][n] = a_temp_m[n]
    a["vod_hit"][n] = a_temp_h[n]

if __name__ == '__main__':
    procs = []
    manager = Manager()
    vod_live_cuts = manager.dict()
    i = "vod_hit"
    ii = "vod_miss"
    cpu = 1
    n = 1
    vod_live_cuts[i] = manager.list([0] * cpu)
    vod_live_cuts[ii] = manager.list([0] * cpu)
    for m in file:
        proc = Process(target=argument, args=(m, vod_live_cuts, (n-1)))
        procs.append(proc)
        proc.start()
        if n >= cpu:
            n = 1
            proc.join()
        else:
            n += 1
    [proc.join() for proc in procs]
    [proc.close() for proc in procs]
I expect each file to be processed by an independent process via def argument, and finally all results to be saved in the dict vod_live_cuts. For each process I added an independent list in the dict; I thought that would help avoid cross-process contention on this parameter. But maybe it's the wrong way :(
Using IPC is costly, so only use "shared objects" for saving the final result, not for intermediate results while parsing the file.
Limiting the number of processes is done by using a multiprocessing.Pool; the following code uses one to reach the maximum hard-disk speed, so you only need to post-process the results.
You can only parse data as fast as your HDD can read it (typically 30-80 MB/s), so if you need to improve performance further you should use an SSD or RAID0 for higher disk speed; you cannot get much faster than this without changing your hardware.
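As a rough feasibility check (assuming, for illustration, ~150 bytes per log row): 200 million rows is about 30 GB, and at 60 MB/s that is roughly 500 seconds (about 8 minutes) of pure disk reads, so a 5-minute target on a single HDD is likely disk-bound no matter how the parsing is parallelized.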
import csv
import os
from multiprocessing import Process, Queue, Value, Manager, Pool

file = {"hcs.log", "hcs1.log", "hcs2.log", "hcs3.log"}

def argument(m, a):
    proc_num = os.getpid()
    a_temp_m_n = 0  # make it local to the process
    a_temp_h_n = 0  # as shared lists use IPC
    with open(os.getcwd() + '/' + m, newline='') as hcs_1:
        hcs_2 = csv.reader(hcs_1, delimiter=' ')
        for j in hcs_2:
            if j[3].find('MISS') != -1:
                a_temp_m_n = a_temp_m_n + 1
            elif j[3].find('HIT') != -1:
                a_temp_h_n = a_temp_h_n + 1
    a["vod_miss"].append(a_temp_m_n)
    a["vod_hit"].append(a_temp_h_n)

if __name__ == '__main__':
    manager = Manager()
    vod_live_cuts = manager.dict()
    i = "vod_hit"
    ii = "vod_miss"
    cpu = 1
    vod_live_cuts[i] = manager.list()
    vod_live_cuts[ii] = manager.list()
    with Pool(cpu) as pool:
        tasks = []
        for m in file:
            task = pool.apply_async(argument, args=(m, vod_live_cuts))
            tasks.append(task)
        for task in tasks:
            task.get()
    print(list(vod_live_cuts[i]))
    print(list(vod_live_cuts[ii]))
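If only the overall totals are needed, a minimal post-processing sketch (run inside the same __main__ block, after the pool has finished) could look like this:
# Sum the per-file counts into overall totals.
total_hits = sum(list(vod_live_cuts["vod_hit"]))
total_misses = sum(list(vod_live_cuts["vod_miss"]))
print("HIT:", total_hits, "MISS:", total_misses)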
def calculation(minutes, seconds, miles):
    pace = (int(minutes) + (int(seconds)/60)/miles)
    speed = (float(miles)/(int(minutes) + (int(seconds)/60)
minutes = raw_input("Minutes ==> ")
seconds = raw_input("Seconds ==> ")
miles = raw_input("Miles ==> ")
I'm attempting to take user input and calculate the pace and speed from the entered values, but I keep getting syntax errors starting from the fourth line down. I'm very new to this, for the record, so it's probably something simple, but any help is appreciated!
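For reference, a minimal corrected sketch (assuming pace means minutes per mile and speed means miles per hour, and Python 2 given the raw_input calls): the speed line is missing two closing parentheses, which is why the error is reported on the following line; the pace expression also needs parentheses around the whole time term, int(seconds)/60 should be /60.0 to avoid integer division, and the function must return its results.
def calculation(minutes, seconds, miles):
    total_minutes = int(minutes) + int(seconds) / 60.0  # total time in minutes
    pace = total_minutes / float(miles)                 # minutes per mile
    speed = float(miles) / (total_minutes / 60.0)       # miles per hour
    return pace, speed

minutes = raw_input("Minutes ==> ")
seconds = raw_input("Seconds ==> ")
miles = raw_input("Miles ==> ")

pace, speed = calculation(minutes, seconds, miles)
print "Pace: %.2f min/mile, speed: %.2f mph" % (pace, speed)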
I have read in this SO post that you can combine any_of with between like this:
webshop = Webshop.first
webshop.orders.any_of(
  webshop.orders.between(:datetime_pending, [Time.zone.now-7.days, Time.zone.now]).selector, # An error is raised here.
  webshop.orders.between(:datetime, [Time.zone.now-7.days, Time.zone.now]).selector
)
But when I try this query, using Mongoid 4, I get the error:
ArgumentError: wrong number of arguments (2 for 0..1).
/Users/christoffer/project/vendor/gems/ruby/2.0.0/gems/origin-1.1.0/lib/origin/selectable.rb:63:in `between'
/Users/christoffer/project/vendor/gems/ruby/2.0.0/bundler/gems/mongoid-b91705b0ded8/lib/mongoid/relations/referenced/many.rb:413:in `block in method_missing'
/Users/christoffer/project/vendor/gems/ruby/2.0.0/bundler/gems/mongoid-b91705b0ded8/lib/mongoid/scopable.rb:238:in `with_scope'
/Users/christoffer/project/vendor/gems/ruby/2.0.0/bundler/gems/mongoid-b91705b0ded8/lib/mongoid/relations/referenced/many.rb:412:in `method_missing'
What am I missing here?
According to the documentation, Queryable#between takes a hash (a key with a range value).
Have you tried passing that instead of an array?
webshop = Webshop.first
now = Time.zone.now
seven_days = now - 7.days
webshop_orders = webshop.orders

webshop_orders.any_of(
  webshop_orders.between(datetime_pending: seven_days..now).selector,
  webshop_orders.between(datetime: seven_days..now).selector
)
I am trying to input data from an array into a web site; however, I am getting an error. I believe the error means that I can't put floats into text fields, so I changed the float into a string, but again it did not work. I have only included the part of the code that I felt was relevant.
  eee = Watir::Browser.new
  eee.goto(fulllink)
  eee.text_field(:name => "txtAttr").set Headings[j]
  eee.wait
  p = j + 1
  strings = body.at(0).at(p)
  String(strings)
  eee.text_field(:name => "txtValue").set strings
  eee.wait
  eee.link(:index => 4).click
  eee.wait
  eee.close
end
i += 1
C:\Users\Pure.itloaner1-12\Google Drive\ruby>ruby ExST.rb
hello world
Alpha Numeric Unit #
tables filled
200.0
C:/Ruby193/lib/ruby/gems/1.9.1/gems/watir-classic-3.0.0/lib/watir-classic/input_elements.rb:356:in `characters_in': undefined method `each_char' for 200.0:Float (NoMethodError)
        from C:/Ruby193/lib/ruby/gems/1.9.1/gems/watir-classic-3.0.0/lib/watir-classic/input_elements.rb:337:in `type_by_character'
        from C:/Ruby193/lib/ruby/gems/1.9.1/gems/watir-classic-3.0.0/lib/watir-classic/input_elements.rb:299:in `set'
        from ExST.rb:92:in `<main>'
Try changing String(strings) to strings = String(strings) and see if that works. String(strings) on its own returns the converted value but never reassigns the variable, so strings is still a Float when it reaches set.
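In the context of the code above, that would look like this (a sketch assuming body.at(0).at(p) returns the Float 200.0 seen in your output; strings = strings.to_s would work equally well):
strings = body.at(0).at(p)                       # e.g. 200.0, a Float
strings = String(strings)                        # reassign the converted value: "200.0"
eee.text_field(:name => "txtValue").set strings  # set now receives a String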
Ruby 1.9.1 + ActiveRecord 2.3.5 + Postgres 8.3.7
Here is a rough sketch of my code; ignore any obvious syntax details left out. The models below inherit from ActiveRecord::Base, connected to a Postgres 8.3.7 database via ActiveRecord 2.3.5.
class TableA
  has_many :tableB
end

class TableB
  belongs_to :tableA
  has_many :tableC
end

class TableC
  belongs_to :tableB
  has_many :tableD
end

class TableD
  belongs_to :tableC
  has_many :tableE
end

class TableE
  belongs_to :tableD
end
# Note that tableA has fids that are referenced in tableE but is not part of this model
#
# Later in the script, in the same global scope, I want to add entries to these tables if
# I cannot find what I need. Bear in mind that this part betrays much Ruby noobiness.
toAdd.each do |widget|
  add_tableA = TableA.find_by_sql().first # assumes I will get one back based on earlier sanity checks

  add_tableB = TableB.find_by_sql().first
  if (add_tableB == nil)
    new_tableB = TableB.new() # value assignments
    new_tableB.save
    add_tableB = TableB.find_by_sql().first
  end

  add_tableC = TableC.find_by_sql().first
  if (add_tableC == nil)
    new_tableC = TableC.new() # value assignments
    new_tableC.save
    add_tableC = TableC.find_by_sql().first
  end

  add_tableD = TableD.find_by_sql().first
  if (add_tableD == nil)
    new_tableD = TableD.new() # value assignments
    new_tableD.save
    add_tableD = TableD.find_by_sql().first
  end

  # I step into TableA again because items in TableE are linked to items in TableA, but they are
  # distinct from the "high level" item I grabbed from TableA earlier.
  add_tableA = TableA.find_by_sql().first
  if (add_tableA == nil)
    new_tableA = TableA.new() # value assignments
    new_tableA.save
    add_tableA = TableA.find_by_sql().first
  end

  # Now that I have a TableA id to put into TableE, just create the TableE row, because I know this
  # does not exist yet.
  new_tableE = TableE.new() # value assignments -- again, assumed to be new based on earlier checks
  new_tableE.save
end
What always happens is I get the following stack trace:
/...gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract_adapter.rb:219:in `rescue in log': PGError: no connection to the server (ActiveRecord::StatementInvalid)
: ROLLBACK
from .../gems/1.9.1/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract_adapter.rb:202:in `log'
from .../gems/1.9.1/gems/activerecord-2.3.5/lib/active_record/connection_adapters/postgresql_adapter.rb:550:in `execute'
from .../gems/1.9.1/gems/activerecord-2.3.5/lib/active_record/connection_adapters/postgresql_adapter.rb:576:in `rollback_db_transaction'
from .../gems/1.9.1/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/database_statements.rb:143:in `rescue in transaction'
from .../gems/1.9.1/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/database_statements.rb:125:in `transaction'
from .../gems/1.9.1/gems/activerecord-2.3.5/lib/active_record/transactions.rb:182:in `transaction'
from .../gems/1.9.1/gems/activerecord-2.3.5/lib/active_record/transactions.rb:200:in `block in save_with_transactions!'
from .../gems/1.9.1/gems/activerecord-2.3.5/lib/active_record/transactions.rb:208:in `rollback_active_record_state!'
from .../gems/activerecord-2.3.5/lib/active_record/transactions.rb:200:in `save_with_transactions!'
.... regardless of whether I call save, save!, or use create instead of new and save.
strace reveals that I can only get one BEGIN..INSERT..COMMIT transaction to work for each run of this. Any subsequent attempt to INSERT within a transaction, either in the same run of the loop or the next one, ends with the connection being dropped before a COMMIT is sent. Clearly, I'm doing something wrong here with how I'm stepping through the ActiveRecord model.
I see the following strace only just before the first successful INSERT statement is set up. Is there something in ActiveRecord that allows me to preserve this as I step through tables, or am I simply Doing It Wrong?
rt_sigaction(SIGPIPE, {0x1, [], SA_RESTORER|SA_RESTART, 0x3876c0eb10}, {0x4b2ff0, [], SA_RESTORER|SA_RESTART, 0x3876c0eb10}, 8) = 0
sendto(3, "Q\0\0\2e SELECT attr.attna"..., 614, 0, NULL, 0) = 614
rt_sigaction(SIGPIPE, {0x4b2ff0, [], SA_RESTORER|SA_RESTART, 0x3876c0eb10}, {0x1, [], SA_RESTORER|SA_RESTART, 0x3876c0eb10}, 8) = 0
poll([{fd=3, events=POLLIN|POLLERR}], 1, -1) = 1 ([{fd=3, revents=POLLIN}])
recvfrom(3, "T\0\0\0:\0\2attname\0\0\0\4\341\0\2\0\0\0\23\0#\377\377\377\377\0"..., 16384, 0, NULL, NULL) = 541
Any help here is greatly appreciated.
Thanks everyone, and I apologize for taking anyone's time on this. This instance of Postgres depends on a second process that handles pushing trigger events out to other processes. That process was not running, so the database server dropped the connection after the first committed INSERT. It's a custom in-house kind of thing.