I need to perform several operations during an SSH session. At the moment I am using SSH.start and SCP.start for the remote commands and uploads respectively. Here is an example:
Net::SSH.start(host, user, options) do |session|
  output = session.exec!(cmd)
end

Net::SCP.start(host, user, options) do |scp|
  scp.upload!(src, dst, :recursive => true)
  scp.upload!(src1, dst1, :recursive => true)
end

Net::SSH.start(host, user, options) do |session|
  output = session.exec!(cmd)
end
The problem is that for every operation the SSH connection needs to be re-established and this is affecting the overall performance.
Is there a way to open a session and then perform all the required operations such as commands, uploads and downloads?
The SSH protocol allows multiple channels per connection, so technically it is possible.
I do not know the Ruby net-ssh implementation, but from its API it seems to support this.
The constructor of the Net::SCP class takes an existing SSH session:
# Creates a new Net::SCP session on top of the given Net::SSH +session+
# object.
def initialize(session)
So pass your existing Net::SSH session to the Net::SCP constructor, instead of starting a new session using the .start method.
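For example, a minimal sketch (reusing the host, user, options, cmd, src, and dst variables from the question) that keeps one connection open for both commands and uploads:

require 'net/ssh'
require 'net/scp'

Net::SSH.start(host, user, options) do |session|
  output = session.exec!(cmd)

  # Build an SCP object on top of the already-open SSH session
  # instead of opening a second connection with Net::SCP.start.
  scp = Net::SCP.new(session)
  scp.upload!(src, dst, :recursive => true)
  scp.upload!(src1, dst1, :recursive => true)

  output = session.exec!(cmd)
end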
I am trying to ping a large number of URLs and retrieve information regarding the certificate of each URL. As I read in the thoughtbot article Thoughtbot Threads and others, the best way to do this is by using Threads. When I implement threads, however, I keep running into Timeout errors and other problems for URLs that I can retrieve successfully on their own. I've been told in another related question that I asked earlier that I should not use Timeout with Threads. However, the examples I see wrap API/Net::HTTP/TCPSocket calls in the Timeout block, and based on what I've read, that entire API/Net::HTTP/TCPSocket call will be nested within the Thread. Here is my code:
require 'socket'
require 'openssl'
require 'timeout'

class SslClient
  attr_reader :url, :port, :timeout

  def initialize(url, port = '443', timeout = 30)
    @url = url
    @port = port
    @timeout = timeout
  end

  def ping_for_certificate_info
    context = OpenSSL::SSL::SSLContext.new
    certificates = nil
    verify_result = nil
    Timeout.timeout(timeout) do
      tcp_client = TCPSocket.new(url, port)
      ssl_client = OpenSSL::SSL::SSLSocket.new tcp_client, context
      ssl_client.hostname = url
      ssl_client.sync_close = true
      ssl_client.connect
      certificates = ssl_client.peer_cert_chain
      verify_result = ssl_client.verify_result
      tcp_client.close
    end
    { certificate: certificates.first, verify_result: verify_result }
  rescue => error
    puts url
    puts error.inspect
  end
end
[VERY LARGE LIST OF URLS].map do |url|
  Thread.new do
    ssl_client = SslClient.new(url)
    cert_info = ssl_client.ping_for_certificate_info
    puts cert_info
  end
end.map(&:value)
If you run this code in your terminal, you will see many Timeout errors and Errno::ETIMEDOUT errors for sites like fandango.com, fandom.com, mcaffee.com, google.de etc. that should return information. When I run these individually, however, I get the information I need. When I run them in the thread they tend to fail, especially for domains that have a foreign domain name. What I'm asking is whether I am using Threads correctly. This snippet of code that I've pasted is part of a larger piece of code that interacts with ActiveRecord objects in Rails depending on the results given. Am I using Timeout and Threads correctly? What do I need to do to make this work? Why would a ping work individually but not wrapped in a thread? Help would be greatly appreciated.
There are several issues:
You should not spawn thousands of threads; use a connection pool (e.g. https://github.com/mperham/connection_pool) so you have a maximum of 20-30 concurrent requests going (this maximum should be determined by testing the point at which network performance drops and you get these timeouts). A gem-free sketch of the idea follows this list.
It's difficult to guarantee that your code is not broken when you use threads, which is why I suggest you use something where others have figured it out for you, like https://github.com/httprb/http (with examples for thread safety and concurrent requests at https://github.com/httprb/http/wiki/Thread-Safety). There are other libs out there (Typhoeus, Patron), but this one is pure Ruby, so basic thread safety is easier to achieve.
You should not use Timeout (see https://jvns.ca/blog/2015/11/27/why-rubys-timeout-is-dangerous-and-thread-dot-raise-is-terrifying and https://medium.com/@adamhooper/in-ruby-dont-use-timeout-77d9d4e5a001). Use IO.select or something else.
Also, I suggest you learn about threading issues like deadlocks, starvation, and all the gotchas. In your case you are causing starvation of network resources because all the threads are fighting for bandwidth.
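To illustrate the capped-concurrency point, here is a minimal sketch (the urls variable and pool size are assumptions) of a fixed pool of worker threads draining a Queue, reusing the SslClient class from the question:

require 'thread'

POOL_SIZE = 20 # assumed cap, per the 20-30 suggestion above
queue = Queue.new
urls.each { |url| queue << url }

workers = POOL_SIZE.times.map do
  Thread.new do
    loop do
      # Non-blocking pop; Queue#pop(true) raises ThreadError once drained.
      url = begin
        queue.pop(true)
      rescue ThreadError
        break
      end
      puts SslClient.new(url).ping_for_certificate_info
    end
  end
end
workers.each(&:join)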
An older version of Net::SSH had #send_signal. Since that method no longer seems to be available, I have tried sending "\cC" via #send_data, and also tried closing the channel, but the remote command continues running.
What's the right way to send signals to a Net::SSH::Channel now?
You want to use request_pty to create a pseudo-terminal. You can use that terminal to send an interrupt command.
Unless you have a PTY, you can't send control characters. This can be as easy as calling request_pty.
It took me a while to figure this out, mostly as my system is a bit more complicated: I have multiple SSH sessions running, and another thread needs to cause all channels to terminate, so my code looks something like this:
def do_funny_things_to_server(host, user)
  Net::SSH.start(host, user) do |ssh|
    @connections << ssh
    ssh.open_channel do |chan|
      chan.on_data do |ch, data|
        puts data
      end
      # Get a PTY with no echo back; this is mostly to hide the "^C" that
      # is otherwise output when I interrupt the channel.
      chan.request_pty(modes: [[Net::SSH::Connection::Term::ECHO, 0]])
      chan.exec "tail -f /some/log/file"
    end
    ssh.loop(0.1)
    @connections.delete ssh
  end
end
def cancel_all
  @connections.each do |ssh|
    ssh.channels.each do |id, chan|
      chan.send_data "\x03" # Ctrl-C
    end
  end
end
### Rant:
There's so little documentation about how to use the request_pty parameters that it can be said not to exist. I had to figure out modes: by reading both the source of net-ssh and the SSH RFC, and by sprinkling some educated guesses about the meaning of the word "flags" in section 8 of the RFC.
Someone pointed out in another relevant (though not Ruby-specific) answer that there's a "signal" message that can be used to send signals over the SSH connection, but also that OpenSSH (the server implementation I use) does not support it. If your server supports it, you might want to try to use it like this:
channel.send_channel_request 'signal', :string, 'INT'
See "signal" channel message in the RFC and the buffer.rb source file to understand the parameters. Insert here the expected rant about complete lack of documentation of how to use send_channel_request. The above suggestion is mostly to document for myself how to use this method.
The answer linked above also mentions an SSH extension called "break" which is supposedly supported by OpenSSH, but I couldn't get it to work to interrupt a session.
I am looking for some assistance on where to start to run concurrent tests with Cucumber/Capybara. I need to do this without the parallel_tests gem, the reason being that I can't seem to have separate users log in for each process.
I was thinking that I could have a shared pool of users, most likely in an array, but I can't share this data across separate processes with the gem.
Some feedback I have received is to use IO.pipe, but as yet I do not know enough about it.
I have a standalone Cucumber framework, no Rails etc.
Thought I would post my solution in case it helps anyone else. I have ended up separating the user pool away from my application and storing the users in Redis.
I then have a simple method that picks a random user from a Set in Redis and puts it back once it has finished with it:
require 'redis'

def choose_redis_user
  @redis = Redis.new
  @randUser = @redis.spop("users")
  $user_username = @redis.hget(@randUser, "username")
  $user_password = @redis.hget(@randUser, "password")
end

def return_redis_user
  @redis.sadd("users", @randUser)
end
Then within my tests I can run:
login_page.username.set($user_username)
login_page.password.set($user_password)
This works really well with multiple parallel test processes.
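For completeness, a minimal sketch (assuming untagged Cucumber Before/After hooks) pairing the two helpers so a popped user always goes back into the pool:

Before do
  choose_redis_user
end

After do
  # Return the user even if the scenario failed, so the pool never drains.
  return_redis_user
end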
I am using the parallel_tests gem to be able to run multiple features at the same time. The problem I face with my scenario is that I have user-based sessions (SSO), so only one user can be logged in at a time.
To combat this I was thinking of randomly selecting users if they are available, but tracking their login status globally presents an issue for me.
My setup
Before and after each scenario a user will log in:
Before('@login_automated_user') do
  @user = FactoryGirl.attributes_for(:automated_user)
  login_steps("#{APP}")
end

After('@login_automated_user') do
  logout_steps
end
I was thinking of having a pool of users in an array and randomly selecting one to use, and once finished, returning the user to the pool:
module UserSession
  # Memoize the pool so choose_user and return_user operate on the same array.
  def user_pool
    @user_pool ||= factory_girl_users.values
  end

  def choose_user
    @user = user_pool.pop if user_pool.length > 0
  end

  def return_user
    user_pool << @user
  end

  def factory_girl_users
    { :user_1 => FactoryGirl.attributes_for(:automated_user),
      :user_2 => FactoryGirl.attributes_for(:automated_user_1) }
  end
end

World(UserSession)
This would then make my Before and After hooks look like:
Before('@login_automated_user') do
  @user = choose_user
  login_steps("#{APP}")
end

After('@login_automated_user') do
  logout_steps
  return_user
end
One issue I can see here is that I'm using @user across two sessions (or more if I had more users), so do I need to separate them out?
Would anyone offer some tips/solutions to contemplate, please?
I'm not sure I understand your exact problem, but I assume it's that a single browser is shared by tests that run in parallel, so one user's login affects another test running at the same time.
One solution is having a named session per test. I'm not sure it's reliable.
Another is having each of your parallel processes open its own browser; then you won't have any session problems.
That requires more memory, but doesn't seem to affect test speed.
Also, instead of logging out after each test, you could just use Capybara.reset_sessions! in a global teardown.
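For example, a minimal sketch of that teardown (assuming a Cucumber After hook and Capybara as the driver):

After do
  # Drop all Capybara sessions instead of clicking through logout steps.
  Capybara.reset_sessions!
end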
I have an application that I am coding to send its logging info over a TCPSocket to a server, and to have a monitor client connect to the server to view the logging data. So far I am able to get to the stage where the info is sent to the server; however, I need some thoughts on how to go about the next stage. Using Ruby's TCPServer, what methodologies can I use to have the server resend the incoming data to a client? How can I have data stored across threads?
require "socket"
server_socket = TCPServer.new('localhost', 2200)
loop do
# Create a new thread for each connection.
Thread.start(server_socket.accept) do |session|
# check if received is viewer request
line = session.gets
if line =~ /viewer/
#filter = line[/\:(.*?)\:/]
session.puts "Listining for #{filter}"
loop do
if (log = ### need input here from logging app ###)
# Show if filter is set for all or cli matches to filter
if #filter == ':all:' || log =~ /\:(.*?)\:/
session.puts log
end
# Read trace_viewer input.
if session.gets =~ /quit/
# close the connections
session.puts "Closing connection. Bye!"
session.close
break
end
end
end
end
end
end
With clients connecting to the server, it sounds like a typical client/server configuration, with some clients sending data and others requesting it. Is there any reason you don't use a standard HTTP client/server, i.e., a web server, instead of reinventing the wheel? Use Sinatra or Padrino, or even Rails, and you'll be mostly finished.
How can I have data stored across threads?
Ruby's Thread library includes Queue, which is good for moving data around between threads. The documentation page has an example which should help.
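As a quick illustration, a minimal sketch (all names are my own) of one thread handing log lines to another through a Queue:

require 'thread'

log_queue = Queue.new

# Producer: stands in for the logging app pushing lines in.
producer = Thread.new do
  5.times { |i| log_queue << "log line #{i}" }
  log_queue << :done # sentinel so the consumer knows to stop
end

# Consumer: stands in for a viewer session draining the queue.
consumer = Thread.new do
  while (line = log_queue.pop) != :done
    puts line
  end
end

[producer, consumer].each(&:join)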
The same ideas for using a queue would apply to using database tables. I'd recommend using a database as the queue rather than doing it in memory: a power outage or app crash will lose all the data in an in-memory queue if the clients haven't retrieved everything, whereas writing and reading a database means the data would survive such problems.
Keep the schema simple, put a reasonable index on it, and it should be fast enough for most uses. You'll need some housekeeping code to keep the database clean, but that's easy using SQL, or an ORM like Sequel or ActiveRecord.
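To make that concrete, a minimal sketch (SQLite, Sequel, and all table and column names are my assumptions) of using a table as a durable queue, with a housekeeping step at the end:

require 'sequel'

# A table acting as a durable queue: producers insert rows,
# consumers read undelivered rows and mark them as delivered.
DB = Sequel.sqlite('logs.db')
DB.create_table? :log_entries do
  primary_key :id
  String    :message
  Time      :created_at
  TrueClass :delivered, default: false
  index :delivered
end

logs = DB[:log_entries]

# Producer: the logging app inserts a row per message.
logs.insert(message: 'something happened', created_at: Time.now)

# Consumer: a viewer session drains undelivered rows.
logs.where(delivered: false).each do |row|
  puts row[:message]
  logs.where(id: row[:id]).update(delivered: true)
end

# Housekeeping: purge delivered rows older than a day.
logs.where(delivered: true).where { created_at < Time.now - 86_400 }.delete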