I have this piece of Ruby code that works intermittently. I can call it in the shell multiple times (Ctrl-C-ing it when it hangs), and it will work instantaneously half of the time and hang forever the other half.
require 'resolv'
puts "initializing"
txt = Resolv::DNS.open do |dns|
  records = dns.getresources("www.google.com", Resolv::DNS::Resource::IN::A)
  records.empty? ? nil : records.map{|rec| rec.address}.compact
end
puts "records are #{txt}"
Here is the output I see in both cases:
[ ~]$ ruby test.rb
initializing
records are 216.58.217.132
[ ~]$ ruby test.rb
initializing
records are 216.58.217.132
[ ~]$ ruby test.rb
initializing
^C/usr/lib/ruby/1.8/resolv.rb:620:in `select': Interrupt
from /usr/lib/ruby/1.8/resolv.rb:620:in `request'
from /usr/lib/ruby/1.8/resolv.rb:489:in `each_resource'
from /usr/lib/ruby/1.8/resolv.rb:975:in `resolv'
from /usr/lib/ruby/1.8/resolv.rb:973:in `each'
from /usr/lib/ruby/1.8/resolv.rb:973:in `resolv'
from /usr/lib/ruby/1.8/resolv.rb:972:in `each'
from /usr/lib/ruby/1.8/resolv.rb:972:in `resolv'
from /usr/lib/ruby/1.8/resolv.rb:970:in `each'
from /usr/lib/ruby/1.8/resolv.rb:970:in `resolv'
from /usr/lib/ruby/1.8/resolv.rb:481:in `each_resource'
from /usr/lib/ruby/1.8/resolv.rb:468:in `getresources'
from test.rb:4
from /usr/lib/ruby/1.8/resolv.rb:307:in `open'
from test.rb:3
I understand that DNS is an external service and it might be flaky, but that does not explain how it can work immediately sometimes and hang forever other times. Also, when I use the host www.google.com command, it always returns immediately.
How can I make this work predictably?
I tried your code and cannot reproduce the problem. Note that for your specific example the response is not always the same, which makes sense for www.google.com.
tmp> ruby resolv.rb
# initializing
# records are [#<Resolv::IPv4 216.58.211.100>]
tmp> ruby resolv.rb
# initializing
# records are [#<Resolv::IPv4 74.125.136.105>, #<Resolv::IPv4 74.125.136.106>, #<Resolv::IPv4 74.125.136.147>, #<Resolv::IPv4 74.125.136.99>, #<Resolv::IPv4 74.125.136.103>, #<Resolv::IPv4 74.125.136.104>]
I think the problem you are experiencing lies outside of your code. The host command does a bit more than make a DNS request: it also checks the /etc/hosts file, and there might be a local cache involved. You should test your DNS, maybe with the dig command, and check whether the answers really always come from the DNS server and not from a cache.
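If you want to see from Ruby itself what each source returns, here is a small sketch using only the standard resolv library (the hostname is the one from your script; this is just a diagnostic, not part of the fix):
require 'resolv'

# Addresses taken only from /etc/hosts (no network involved).
hosts_addrs = Resolv::Hosts.new.getaddresses("www.google.com")

# Addresses taken only from the configured DNS servers (what your snippet does).
dns_addrs = Resolv::DNS.open { |dns| dns.getaddresses("www.google.com") }

puts "/etc/hosts: #{hosts_addrs.inspect}"
puts "DNS:        #{dns_addrs.inspect}"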
In your code, you might want to enable a timeout, e.g.:
dns.timeouts = 3
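A minimal sketch of how that could look in your original snippet, assuming a Ruby version where Resolv::DNS#timeouts= is available (it is not in 1.8):
require 'resolv'

begin
  txt = Resolv::DNS.open do |dns|
    dns.timeouts = 3   # give up after ~3 seconds per attempt instead of hanging
    records = dns.getresources("www.google.com", Resolv::DNS::Resource::IN::A)
    records.empty? ? nil : records.map { |rec| rec.address }
  end
  puts "records are #{txt.inspect}"
rescue Resolv::ResolvError => e
  puts "lookup failed: #{e.message}"
end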
Ohai plugins usually run periodically to collect host parameters, and some plugins are typically added to every node in the company. This can be sensitive in terms of resource usage and how Ohai handles it. So I have two questions here.
The first one is: what will happen if I accidentally put in an infinite loop? Does Ohai/Ruby have a maximum heap size or any memory limits?
The second question is about shelling out in Ohai. Is it possible to reduce the timeout? Do you know of more protections, just in case?
For now I only use a plain Ruby timeout:
require 'timeout'
begin
  status = Timeout::timeout(600) {
    # all code here
  }
rescue Timeout::Error
  puts 'timeout'
end
The chef-client run won't start/succeed if Ohai hangs; you should notice this in some kind of monitoring.
Regarding the timeout part: searching the source code reveals this:
def shell_out(cmd, **options)
  # unless specified by the caller timeout after 30 seconds
  options[:timeout] ||= 30
  so = Mixlib::ShellOut.new(cmd, options)
So you should be able to set the timeout as you like (2 seconds in this case):
so = shell_out("/bin/your-command", :timeout => 2)
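For context, here is a hedged sketch of where that option would sit inside a custom plugin (the plugin name, attribute, and command are made up; only the :timeout usage matters):
# Hypothetical plugin, attribute and command -- shown only to place the :timeout option.
Ohai.plugin(:MyCustomFacts) do
  provides "my_custom_facts"

  collect_data(:default) do
    # Fail after 2 seconds instead of the 30-second default.
    cmd = shell_out("/bin/your-command", :timeout => 2)
    my_custom_facts cmd.stdout.strip
  end
end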
Regarding the third sub-question
Do you know of more protections, just in case?
you are getting pretty broad. Try to solve the problems that actually occur, and stop over-engineering.
Just for the sake of completeness: Chef does not guard against broken or malicious Ohai plugins. If you put sleep 1 while true in your Ohai plugin, it will happily sit there forever.
It seems I have found a solution to limit Ohai resources on Red Hat Linux in terms of CPU, disk space usage, disk I/O, long-run timeout, and heap size (memory limit), so the plugin will not affect the host's other components. In an ideal world you write optimized, correct code, but memory leaks are a common problem and can happen, so I think protections are needed, especially when an Ohai plugin is deployed to hundreds or thousands of production servers.
CPU -
If I'm right, an Ohai plugin gets the lowest CPU priority (-19?). Please confirm this if you know. If so, an Ohai plugin cannot affect your production app in terms of CPU.
Disk space -
An Ohai plugin should only write to node attributes, so it should not consume disk space on the host.
Protection against an unexpectedly long run -
require 'timeout'
begin
  status = Timeout::timeout(600) {
    # Ohai plugin code is here
  }
rescue Timeout::Error
  puts 'timeout'
end
Protection against an unexpectedly long shell_out:
so = shell_out("/bin/your-command", :timeout => 30)
Memory (RAM) heap size limit -
require "thread"

# This thread is a memory watcher. It runs separately and exits the process if the heap size limit is reached.
# RSS is checked for this process (including children but excluding shared memory),
# which should be fine for an Ohai plugin; I'm assuming memory is not shared.
# Exit if the limit (10,000 KB) is reached or anything unexpected happens.
Thread.start {
  loop do
    sleep 1
    current_memory_rss = `ps ax -o pid,rss | grep -E "^[[:space:]]*#{$$}"`.strip.split.map(&:to_i)[1].to_i
    if current_memory_rss != nil && current_memory_rss > 0 && $$ != nil && $$.to_i > 0
      exit if current_memory_rss > 10_000
    else
      exit
    end
  end
}

# Your Ohai code begins here.
# For testing, include any code that makes memory grow constantly, e.g. an infinite loop that prints the current RSS:
loop do
  puts `ps ax -o pid,rss | grep -E "^[[:space:]]*#{$$}"`.strip.split.map(&:to_i)[1].to_s + ' KB'
end
Please let me know if you have better solutions, but it seems to work.
Heavy disk I/O (reads) -
The timeout should help here, but it is recommended to avoid commands like find and similar ones.
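As a sketch of one extra mitigation (assuming a Linux box where ionice is installed; the command and path are made up), a disk-heavy command could be pushed into the idle I/O scheduling class on top of the timeout:
# ionice -c3 = "idle" class: the command only gets disk time nobody else wants.
so = shell_out("ionice -c3 find /var/log -name '*.gz'", :timeout => 30)
old_archives = so.stdout.split("\n")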
So I have some code that checks if there's a certain file on a remote SFTP server:
def size
  adapter.sftp.stat(path).size
end
where sftp is a Net::SFTP::Session object defined in this case as
@sftp = Net::SFTP.start(host, username, password: password)
and path is the file path to the object that I want to call stat() on.
Unfortunately, when I try to execute this code, I get this error:
NoMethodError:
undefined method `send_data' for nil:NilClass
# /usr/local/lib/ruby/gems/2.2.0/gems/net-sftp-2.1.2/lib/net/sftp/session.rb:814:in `send_packet'
# /usr/local/lib/ruby/gems/2.2.0/gems/net-sftp-2.1.2/lib/net/sftp/protocol/base.rb:45:in `send_request'
# /usr/local/lib/ruby/gems/2.2.0/gems/net-sftp-2.1.2/lib/net/sftp/protocol/01/base.rb:90:in `open'
# /usr/local/lib/ruby/gems/2.2.0/gems/net-sftp-2.1.2/lib/net/sftp/session.rb:830:in `request'
# /usr/local/lib/ruby/gems/2.2.0/gems/net-sftp-2.1.2/lib/net/sftp/session.rb:182:in `open'
# /usr/local/lib/ruby/gems/2.2.0/gems/net-sftp-2.1.2/lib/net/sftp/session.rb:191:in `open!'
# /usr/local/lib/ruby/gems/2.2.0/gems/net-sftp-2.1.2/lib/net/sftp/operations/file_factory.rb:40:in `open'
# /Users/Ben/remote_filesystem/lib/remote_filesystem/path/sftp.rb:46:in `size'
# ./sftp_spec.rb:72:in `block (3 levels) in <top (required)>'
As far as I can tell from looking at the source code for Net::SFTP::Session, on line 814 of session.rb, channel.send_data is called, but apparently my SFTP session has a nil channel for some reason. Can anyone explain how to fix this issue?
If you're caching sftp, the cached session might have been invalidated. I came across this exception because I had tried to call ftp.file.open on a connection that was no longer open.
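If that is the case here, a minimal sketch of guarding against it, assuming the adapter memoizes the session roughly like the question's @sftp line (method and variable names are illustrative):
# Re-open the session if the cached one has been closed underneath us.
def sftp
  if @sftp.nil? || @sftp.session.closed?
    @sftp = Net::SFTP.start(host, username, password: password)
  end
  @sftp
end

def size
  sftp.stat!(path).size   # stat! is the synchronous call that returns the attributes
end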
As mentioned earlier, it means your SFTP session has been terminated.
Check the TCP logs (Wireshark is your friend); the session may have been terminated by the peer in the meantime.
One case where this error happens is when you do a write(data) operation with a data length that exceeds the TCP window size on the receiving side. A fix would be to repeat the write operation with a buffer, like:
require 'stringio'

# BUFFER_SIZE is whatever chunk size you choose, e.g. 32 * 1024.
io = StringIO.new(data)
sftp_session.file.open(filename, "w") do |file|
  while buffer = io.read(BUFFER_SIZE)
    file.write(buffer)
  end
end
An example of output from Capistrano:
INFO [94db8027] Running /usr/bin/env uptime on leehambley#example.com:22
DEBUG [94db8027] Command: /usr/bin/env uptime
DEBUG [94db8027] 17:11:17 up 50 days, 22:31, 1 user, load average: 0.02, 0.02, 0.05
INFO [94db8027] Finished in 0.435 seconds command successful.
As you can see, each line starts with "{type} {hash}". I assume the hash is some unique identifier for either the server or the running thread, as I've noticed that if I run Capistrano over several servers, each one has its own distinct hash.
My question is, how do I get this value? I want to manually output some message during execution, and I want to be able to match my output, with the server that triggered it.
Something like: puts "DEBUG ["+????+"] Something happened!"
What do I put in the ???? there? Or is there another, built in way to output messages like this?
For reference, I am using Capistrano Version: 3.2.1 (Rake Version: 10.3.2)
This hash is a command UUID. It is tied not to the server but to the specific command currently being run.
If all you want is to distinguish between servers, you may try the following:
task :some_task do
  on roles(:app) do |host|
    debug "[#{host.hostname}:#{host.port}] something happened"
  end
end
My code:
IMAGE_DIR = 'D:\File_Server\Nisa_Costcutter\Master Nisa CC Logos'
require 'net/ssh'
require 'net/scp'
def scopy_file(file)
  puts "Transferring #{file.path}"
  Net::SCP.upload!('192.168.254.5',
                   'passenger',
                   file,
                   '/var/www/pinpointlms.co.uk/shared/logos',
                   :ssh => {password: '*****'})
end
puts "Starting Upload"
Dir.foreach(IMAGE_DIR) do |name|
  if name.length > 4 && name[-4..-1].upcase == '.BMP'
    filename = name.strip
    file = File.new(File.join(IMAGE_DIR, filename))
    if (Time.now - file.mtime) > 86400
      scopy_file(file)
    end
  end
end
puts "End of Transfer"
puts "End of Transfer"
I am trying to copy some files from a Windows box to an Ubuntu box using Ruby, but I get the following output:
Starting Upload
Transferring D:\File_Server\Nisa_Costcutter\Master Nisa CC Logos/Z2579.BMP
C:/Ruby200-x64/lib/ruby/gems/2.0.0/gems/net-scp-1.1.2/lib/net/scp.rb:359:in `block (3 levels) in start_command': SCP did not finish successfully (1)
(Net::SCP::Error) from C:/Ruby200-x64/lib/ruby/gems/2.0.0/gems/net-ssh-2.8.0/lib/net/ssh/connection/channel.rb:591:in `call'
from C:/Ruby200-x64/lib/ruby/gems/2.0.0/gems/net-ssh-2.8.0/lib/net/ssh/connection/channel.rb:591:in `do_close'
from C:/Ruby200-x64/lib/ruby/gems/2.0.0/gems/net-ssh-2.8.0/lib/net/ssh/connection/session.rb:586:in `channel_close'
from C:/Ruby200-x64/lib/ruby/gems/2.0.0/gems/net-ssh-2.8.0/lib/net/ssh/connection/session.rb:465:in `dispatch_incoming_packets'
from C:/Ruby200-x64/lib/ruby/gems/2.0.0/gems/net-ssh-2.8.0/lib/net/ssh/connection/session.rb:221:in `preprocess'
from C:/Ruby200-x64/lib/ruby/gems/2.0.0/gems/net-ssh-2.8.0/lib/net/ssh/connection/session.rb:205:in `process'
from C:/Ruby200-x64/lib/ruby/gems/2.0.0/gems/net-ssh-2.8.0/lib/net/ssh/connection/session.rb:169:in `block in loop'
from C:/Ruby200-x64/lib/ruby/gems/2.0.0/gems/net-ssh-2.8.0/lib/net/ssh/connection/session.rb:169:in `loop'
from C:/Ruby200-x64/lib/ruby/gems/2.0.0/gems/net-ssh-2.8.0/lib/net/ssh/connection/session.rb:169:in `loop'
from C:/Ruby200-x64/lib/ruby/gems/2.0.0/gems/net-ssh-2.8.0/lib/net/ssh/connection/session.rb:118:in `close'
from C:/Ruby200-x64/lib/ruby/gems/2.0.0/gems/net-scp-1.1.2/lib/net/scp.rb:205:in `ensure in start'
from C:/Ruby200-x64/lib/ruby/gems/2.0.0/gems/net-scp-1.1.2/lib/net/scp.rb:205:in `start'
from C:/Ruby200-x64/lib/ruby/gems/2.0.0/gems/net-scp-1.1.2/lib/net/scp.rb:221:in `upload!'
from C:/Users/administrator.GASKANDHAWLEY/Desktop/copy_images2.rb:8:in `scopy_file'
from C:/Users/administrator.GASKANDHAWLEY/Desktop/copy_images2.rb:24:in `block in <main>'
from C:/Users/administrator.GASKANDHAWLEY/Desktop/copy_images2.rb:17:in `foreach'
from C:/Users/administrator.GASKANDHAWLEY/Desktop/copy_images2.rb:17:in `<main>'
I am a Ruby beginner, so any help you can give me on how to debug this code further will be much appreciated.
Thanks
You probably don't have permission to upload (write) that file to the given directory '/var/www/pinpointlms.co.uk/shared/logos' on your Ubuntu server.
Try to save it without giving a full path, so it ends up in the user's home directory on the Ubuntu server. If this works, then your problem is related to the user's permissions on the server.
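For example, a one-off check along those lines might look like this (host, user, and password taken from the question; the file name is just one of the BMPs; the remote path is relative, so the file should land in the user's home directory):
require 'net/scp'

# No directory in the remote path, so the upload targets the user's home directory.
Net::SCP.upload!('192.168.254.5', 'passenger',
                 'D:/File_Server/Nisa_Costcutter/Master Nisa CC Logos/Z2579.BMP',
                 'Z2579.BMP',
                 :ssh => { password: '*****' })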
In case it's helpful for others, I got the exact same error when trying to upload a file to a path that didn't yet exist. scp from a shell will allow you to do this, but Net::SCP will fail with this error.
i.e.
scp "my.file" "/foo/bar/"
if /foo exists but /foo/bar/ does not, scp will create /foo/bar and put your file there (assuming permissions allow you to do this).
However - under the same circumstances - this will fail with the error given in the question
scp.upload!(my_file, "/foo/bar/")
The only solution I've found is to first create the path you want locally, and then upload it with the :recursive option on, like so:
scp.upload!("bar/", "/foo", :recursive => true)
where ./bar contains my_file.
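Another option that avoids the local staging directory is to create the remote path over SSH first and then upload just the file. A sketch, with a hypothetical host and credentials:
require 'net/ssh'
require 'net/scp'

Net::SSH.start('example.com', 'user', password: 'secret') do |ssh|
  ssh.exec!('mkdir -p /foo/bar')                 # create the missing directory
  ssh.scp.upload!('my.file', '/foo/bar/my.file') # then upload to the full remote path
end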
I have a pretty big spec suite (watirspec) that I am running against a Ruby gem (safariwatir), and there are a lot of failures:
1002 examples, 655 failures, 1 pending
When I make a change in the gem and run the suite again, sometimes a lot of previously failing specs pass (52 in this example):
1002 examples, 603 failures, 1 pending
I would like to know which previously failing specs are now passing, and of course whether any previously passing specs are now failing. What I do now to compare the results is run the tests with the --format documentation option, output the results to a text file, and then diff the files:
rspec --format documentation --out output.txt
Is there a better way? Comparing text files is not the easiest way to see what changed.
Just save the results to a file like you're doing right now and then diff those results with a diffing tool.
I don't know of anything out there that can do exactly that. That said, if you need it badly enough that you don't mind spending some time hacking your own formatter, take a look at Spec::Runner::Formatter::BaseFormatter. It is pretty well documented.
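For what it's worth, here is a bare-bones sketch of such a formatter under RSpec 2 (the class name, file name, and output format are all mine) that records each example's status so two runs can be diffed:
# spec/status_recording_formatter.rb
require 'rspec/core/formatters/base_text_formatter'

class StatusRecordingFormatter < RSpec::Core::Formatters::BaseTextFormatter
  def example_passed(example)
    super
    record('passed', example)
  end

  def example_failed(example)
    super
    record('failed', example)
  end

  def example_pending(example)
    super
    record('pending', example)
  end

  private

  # Append "status <TAB> description" so two runs can be compared with diff.
  def record(status, example)
    File.open('statuses.txt', 'a') { |f| f.puts "#{status}\t#{example.full_description}" }
  end
end
Running rspec --require ./spec/status_recording_formatter.rb --format StatusRecordingFormatter would then leave a statuses.txt that can be diffed against the one from the previous run.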
I've implemented @Serabe's solution for you. See the gist: https://gist.github.com/1142145.
Put the file my_formatter.rb into your spec folder and run rspec --format MyFormatter. The formatter will compare the current run's results with the previous run's results and output the difference as a table.
NOTE: The formatter creates/overwrites the file result.txt in the current folder.
Example usage:
D:\Projects\ZPersonal\equatable>rspec spec --format MyFormatter
..........
No changes since last run
Finished in 0.011 seconds
10 examples, 0 failures
The "No changes since last run" line was added by the formatter.
And now I intentionally broke one spec and reran rspec:
D:\Projects\ZPersonal\equatable>rspec spec --format MyFormatter
..F.......
Affected tests (1).
PS CS Description
. F Equatable#== should be equal to the similar sock
PS - Previous Status
CS - Current Status
Failures:
1) Equatable#== should be equal to the similar sock
Failure/Error: subject.should == Sock.new(10, :black, 0)
expected: #<Sock:0x2fbb930 @size=10, @color=:black, @price=0>
got: #<Sock:0x2fbbae0 @size=10, @color=:black, @price=20> (using ==)
Diff:
@@ -1,2 +1,2 @@
-#<Sock:0x2fbb930 @color=:black, @price=0, @size=10>
+#<Sock:0x2fbbae0 @color=:black, @price=20, @size=10>
# ./spec/equatable_spec.rb:30:in `block (3 levels) in <top (required)>'
Finished in 0.008 seconds
10 examples, 1 failure
Failed examples:
rspec ./spec/equatable_spec.rb:29 # Equatable#== should be equal to the similar sock
The table with affected specs was added by the formatter:
Affected tests (1).
PS CS Description
. F Equatable#== should be equal to the similar sock
PS - Previous Status
CS - Current Status
If a spec's status differs between the current and the previous run, the formatter outputs the previous status, the current status, and the spec description. '.' stands for passed specs, 'F' for failed, and 'P' for pending.
The code is far from perfect, so feel free to criticize and change it as you want.
Hope this helps. Let me know if you have any questions.