Assume that we have two applications:
MasterApp
SlaveApp
MasterApp executes SlaveApp with some arguments, e.g.: slaveapp --param1 100 --param2 "hello"
You can't see that directly, but somebody may inspect the arguments provided to slaveapp and execute it from the console.
I want slaveapp to be executable only by masterapp, so that a user can't run it from the console (or from another app). I was thinking about providing some unique_string and md5(unique_string + salt), but if somebody inspects the arguments they may work out what's going on. Is there some way to do it only by providing a unique, trusted argument that can't be used twice (and with no resource sharing, such as files with private/public keys)?
How about just encrypting the parameters with a pre-defined encryption key and including a check string of some kind (e.g. the EPOCH time)? Then decode the parameters in slaveapp and verify that the check string (in this example, the EPOCH time) is within a certain range or is a certain value.
Here is a simple Ruby example. It's in a single file, so you would need to modify it to handle command-line arguments etc.
require 'openssl'
require 'digest'

c = OpenSSL::Cipher.new("aes-256-cbc")
c.encrypt
# Your passphrase is what is used to derive the encryption/decryption key.
# AES-256 needs a 32-byte key, which SHA-256 conveniently produces.
c.key = key = Digest::SHA256.digest("1094whfiubf9qwer8y32908u3209fn2032")
c.iv = iv = c.random_iv
e = c.update("#{Time.now.to_i}")
e << c.final
puts "encrypted: #{e}\n"

#sleep(15) # if you uncomment this, the validation will fail

c = OpenSSL::Cipher.new("aes-256-cbc")
c.decrypt
c.key = key
c.iv = iv
d = c.update(e)
d << c.final
if Time.now.to_i - d.to_i < 10
  puts "decrypted: #{d}\n"
  puts "Validated EPOCH Time"
else
  puts "Validation FAILED."
end
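To show how this idea might be packaged for an actual command line, here is a sketch in which the master encodes "timestamp|params" into a single Base64 token and the slave decodes and checks it. The passphrase and helper names are assumptions for illustration; the IV is prepended to the ciphertext so no extra state has to be shared between the two processes.

```ruby
require 'openssl'
require 'digest'

# Shared secret compiled into both apps (illustrative value).
KEY = Digest::SHA256.digest("1094whfiubf9qwer8y32908u3209fn2032")

# Master side: encrypt "timestamp|params" and Base64-encode it for argv.
def build_token(params)
  cipher = OpenSSL::Cipher.new("aes-256-cbc")
  cipher.encrypt
  cipher.key = KEY
  iv = cipher.random_iv
  ciphertext = cipher.update("#{Time.now.to_i}|#{params}") + cipher.final
  [iv + ciphertext].pack("m0") # Base64 without line breaks, safe for a command line
end

# Slave side: decrypt and check that the embedded timestamp is recent.
def verify_token(token, max_age: 10)
  raw = token.unpack1("m0")
  decipher = OpenSSL::Cipher.new("aes-256-cbc")
  decipher.decrypt
  decipher.key = KEY
  decipher.iv = raw[0, 16] # AES-CBC uses a 16-byte IV
  plain = decipher.update(raw[16..]) + decipher.final
  stamp, params = plain.split("|", 2)
  Time.now.to_i - stamp.to_i <= max_age ? params : nil
end
```

With this, `verify_token(build_token("--param1 100"))` returns the original parameter string when checked within the allowed window, and nil otherwise. Note this still doesn't stop replays inside that window, which is the limitation discussed below.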
It is basically impossible to avoid replay attacks if your communication channel only goes master -> slave. Signing the request with a timestamp in it could help, but even that isn't perfect (especially if the attacker has some control of the clock).
The better strategy is to establish a two-way communication between master and slave. I'm not sure what language you're working in, but usually there's a way for the master to talk to the slave after it is forked, other than just the command line.
Using that channel, you can have the slave generate a random nonce, send that to the master, have the master sign it, send it back to the slave, and check the signature in the slave.
Make sure the slave app is owned by the same user the master app runs as, and make sure it's not world readable or executable.
I am trying to ping a large number of URLs and retrieve information about each URL's certificate. As I read in this thoughtbot article on threads and others, the best way to do this is by using threads. When I implement threads, however, I keep running into timeout errors and other problems for URLs that I can retrieve successfully on their own. I've been told in another related question that I asked earlier that I should not use Timeout with threads. However, the examples I see wrap API/Net::HTTP/TCPSocket calls in the Timeout block, and based on what I've read, that entire API/Net::HTTP/TCPSocket call will be nested within the thread. Here is my code:
class SslClient
  attr_reader :url, :port, :timeout

  def initialize(url, port = '443', timeout = 30)
    @url = url
    @port = port
    @timeout = timeout
  end

  def ping_for_certificate_info
    context = OpenSSL::SSL::SSLContext.new
    certificates = nil
    verify_result = nil
    Timeout.timeout(timeout) do
      tcp_client = TCPSocket.new(url, port)
      ssl_client = OpenSSL::SSL::SSLSocket.new tcp_client, context
      ssl_client.hostname = url
      ssl_client.sync_close = true
      ssl_client.connect
      certificates = ssl_client.peer_cert_chain
      verify_result = ssl_client.verify_result
      tcp_client.close
    end
    { certificate: certificates.first, verify_result: verify_result }
  rescue => error
    puts url
    puts error.inspect
  end
end

[VERY LARGE LIST OF URLS].map do |url|
  Thread.new do
    ssl_client = SslClient.new(url)
    cert_info = ssl_client.ping_for_certificate_info
    puts cert_info
  end
end.map(&:value)
If you run this code in your terminal, you will see many Timeout errors and Errno::ETIMEDOUT errors for sites like fandango.com, fandom.com, mcafee.com, google.de etc. that should return information. When I run these individually, however, I get the information I need. When I run them in threads they tend to fail, especially for domains that have a foreign domain name. What I'm asking is whether I am using threads correctly. This snippet is part of a larger piece of code that interacts with ActiveRecord objects in Rails depending on the results. Am I using Timeout and threads correctly? What do I need to do to make this work? Why would a ping work individually but not wrapped in a thread? Help would be greatly appreciated.
There are several issues:
You should not spawn thousands of threads; use a connection pool (e.g. https://github.com/mperham/connection_pool) so you have at most 20-30 concurrent requests going (this maximum should be determined by testing the point at which network performance drops and you start getting these timeouts).
It's difficult to guarantee that your code is not broken when you use threads, which is why I suggest you use something where others have figured it out for you, like https://github.com/httprb/http (with examples of thread safety and concurrent requests at https://github.com/httprb/http/wiki/Thread-Safety). There are other libs out there (Typhoeus, Patron), but this one is pure Ruby, so basic thread safety is easier to achieve.
You should not use Timeout (see https://jvns.ca/blog/2015/11/27/why-rubys-timeout-is-dangerous-and-thread-dot-raise-is-terrifying and https://medium.com/@adamhooper/in-ruby-dont-use-timeout-77d9d4e5a001). Use IO.select or something else.
Also, I suggest you learn about threading issues like deadlocks, starvation, and all the other gotchas. In your case the threads are starving each other of network resources, because they are all fighting for bandwidth.
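To make the bounded-concurrency point concrete, here is a minimal sketch of a fixed worker pool draining a shared queue of URLs; `fetch_cert_info` is a hypothetical stand-in for the SslClient call from the question:

```ruby
# A fixed pool of worker threads drains a shared queue, so at most
# `workers` requests are in flight at once, however long the URL list is.
def check_all(urls, workers: 20)
  queue = Queue.new
  urls.each { |u| queue << u }
  results = Queue.new # Queue is thread-safe, unlike a plain Hash

  threads = Array.new(workers) do
    Thread.new do
      loop do
        url = begin
          queue.pop(true) # non-blocking pop ...
        rescue ThreadError
          break # ... raises ThreadError once the queue is drained
        end
        results << [url, fetch_cert_info(url)]
      end
    end
  end
  threads.each(&:join)
  Array.new(results.size) { results.pop }.to_h
end

# Placeholder for the real TLS handshake from the question.
def fetch_cert_info(url)
  "cert info for #{url}"
end

p check_all(%w[example.com example.org], workers: 5)
```

The same shape works with a real connection pool; the key property is that the number of worker threads, not the number of URLs, caps concurrency.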
An older version of Net::SSH had #send_signal. Since that method no longer seems to be available, I have tried sending "\cC" via #send_data, and also tried closing the channel, but the remote command continues running.
What's the right way to send signals to a Net::SSH::Channel now?
You want to use request_pty to create a pseudo-terminal. You can use that terminal to send an interrupt command.
Unless you have a PTY, you can't send control characters. This can be as easy as calling request_pty.
It took me a while to figure this out, mostly because my system is a bit more complicated: I have multiple SSH sessions running, and another thread needs to cause all channels to terminate, so my code looks something like this:
def do_funny_things_to_server(host, user)
  Net::SSH.start(host, user) do |ssh|
    @connections << ssh
    ssh.open_channel do |chan|
      chan.on_data do |ch, data|
        puts data
      end
      # Get a PTY with no echo back; this is mostly to hide the "^C" that
      # is otherwise output when I interrupt the channel.
      chan.request_pty(modes: [[Net::SSH::Connection::Term::ECHO, 0]])
      chan.exec "tail -f /some/log/file"
    end
    ssh.loop(0.1)
    @connections.delete ssh
  end
end

def cancel_all
  @connections.each do |ssh|
    ssh.channels.each do |id, chan|
      chan.send_data "\x03"
    end
  end
end
### Rant
There's so little documentation about request_pty's parameters that it can fairly be said not to exist. I had to figure out modes: by reading both the net-ssh source and the SSH RFC, and by making some educated guesses about the meaning of the word "flags" in section 8 of the RFC.
Someone pointed out in another relevant (though not Ruby-specific) answer that there's a "signal" channel message that can be used to send signals over the SSH connection, but also that OpenSSH (the server implementation I use) does not support it. If your server supports it, you might want to try it like this:
channel.send_channel_request 'signal', :string, 'INT'
See the "signal" channel message in the RFC and the buffer.rb source file to understand the parameters. Insert here the expected rant about the complete lack of documentation for send_channel_request. The above suggestion is mostly to document for myself how to use this method.
The answer linked above also mentions an SSH extension called "break" which is supposedly supported by OpenSSH, but I couldn't get it to work to interrupt a session.
I'm trying to implement a memory based, multi process shared mutex, which supports timeout, using Redis.
I need the mutex to be non-blocking, meaning that I just need to be able to know if I was able to fetch the mutex or not, and if not - simply continue with execution of fallback code.
something along these lines:
if lock('my_lock_key', timeout: 1.minute)
  # Do some job
else
  # exit
end
An un-expiring mutex could be implemented using Redis's SETNX mutex 1:
if redis.setnx("#{mutex}", '1')
  # Do some job
  redis.del("#{mutex}")
else
  # exit
end
But what if I need a mutex with a timeout mechanism (for example, to avoid a situation where the Ruby code fails before the redis.del command, leaving the mutex locked forever; though not for this reason only)?
Doing something like this obviously doesn't work:
redis.multi do
  redis.setnx("#{mutex}", '1')
  redis.expire("#{mutex}", key_timeout)
end
since I'm re-setting the expiration on the mutex EVEN if I wasn't able to acquire it (setnx returned 0).
Naturally, I would've expected to have something like setnxex which atomically sets a key's value with an expiration time, but only if the key does not exist already. Unfortunately, Redis does not support this as far as I know.
I did however, find renamenx key otherkey, which lets you rename a key to some other key, only if the other key does not already exist.
I came up with something like this (for demonstration purposes, I wrote it down monolithically, and didn't break it down to methods):
result = redis.multi do
  dummy_key = "mutex:dummy:#{Time.now.to_f}#{key}"
  redis.setex dummy_key, key_timeout, 0
  redis.renamenx dummy_key, key
end

if result.length > 1 && result.second == 1
  # do some job
  redis.del key
else
  # exit
end
Here, I'm setting an expiration for a dummy key and trying to rename it to the real key (in one transaction).
If the renamenx operation fails, then we weren't able to obtain the mutex, but no harm done: the dummy key will expire (it can be optionally deleted immediately by adding one line of code) and the real key's expiration time will remain intact.
If the renamenx operation succeeded, then we were able to obtain the mutex, and the mutex will get the desired expiration time.
Can anyone see any flaw with the above solution? Is there a more standard solution for this problem? I would really hate using an external gem in order to solve this problem...
If you're using Redis 2.6+, you can do this much more simply with the Lua scripting engine. The Redis documentation says:
A Redis script is transactional by definition, so everything you can do with a Redis transaction, you can also do with a script, and usually the script will be both simpler and faster.
Implementing it is trivial:
LUA_ACQUIRE = "return redis.call('setnx', KEYS[1], 1) == 1 and redis.call('expire', KEYS[1], KEYS[2]) and 1 or 0"

def lock(key, timeout = 3600)
  if redis.eval(LUA_ACQUIRE, [key, timeout]) == 1
    begin
      yield
    ensure
      redis.del key
    end
  end
end
Usage:
lock("somejob") { do_exclusive_job }
Starting from Redis 2.6.12 you can do redis.set(key, 1, nx: true, ex: 3600), which is actually SET key 1 NX EX 3600.
I was inspired by the simplicity of both Chris's and Mickey's solutions, and created a gem, simple_redis_lock, with this code (and some features and RSpec specs):
def lock(key, timeout)
  if @redis.set(key, Time.now, nx: true, px: timeout)
    begin
      yield
    ensure
      release key
    end
  end
end
I explored some other awesome alternatives:
mlanett/redis-lock
PatrickTulskie/redis-lock
leandromoreira/redlock-rb
dv/redis-semaphore
but they had too many blocking features for acquiring the lock, and didn't use this single atomic SET key 1 NX EX 3600 Redis statement.
I am writing a Ruby script which automatically crawls websites for data analysis, and now I have a requirement which is fairly complicated: I have to be able to simulate access from a variety of countries, about 20 different ones. The website will contain different information depending on the IP location, so the only way to get it done is to request it from a server which is actually in that country.
Since I don't want to buy servers in each of those 20 countries, I chose to give Tor a try - as many of you will know, by editing the torrc configuration file it is possible to specify the exit node and hence the country from which the actual request will originate.
When I do this manually, e.g. by editing the torrc file to use an Argentinian server, then disconnecting and reconnecting Tor via Vidalia and rerunning the request, it works fine. However, I want to automate this process entirely and do it as efficiently as possible. Tor is written in C, and I'd like to avoid taking apart its entire source code for this. Any idea of the easiest way to automate the whole process using only Ruby?
Also, if I'm missing something and there's a simpler alternative to this whole ordeal, let me know.
Thanks!
Please take a look at the Tor control protocol. You can control circuits using telnet.
http://thesprawl.org/memdump/?entry=8
To switch to a new circuit, which switches to a new endpoint:
require 'net/telnet'

def switch_endpoint
  localhost = Net::Telnet::new("Host" => "localhost", "Port" => "9051", "Timeout" => 10, "Prompt" => /250 OK\n/)
  localhost.cmd('AUTHENTICATE ""') { |c| print c; raise "Cannot authenticate to Tor" if c != "250 OK\n" }
  localhost.cmd('signal NEWNYM') { |c| print c; raise "Cannot switch Tor to new route" if c != "250 OK\n" }
  localhost.close
end
Be aware of the delay in building a new circuit; it may take a couple of seconds, so you'd better add a delay in the code, or check whether your address has changed by calling some remote IP-detection site.
Is there a way to find out how many bytes of data are available on a TCPSocket in Ruby? I.e. how many bytes can be read without blocking?
The standard library io/wait might be useful here. Requiring it gives stream-based I/O (sockets and pipes) some new methods, among which is ready?. According to the documentation, ready? returns non-nil if there are bytes available without blocking. It just so happens that, in MRI, the non-nil value it returns is the number of bytes that are available.
Here's an example which creates a dumb little socket server, and then connects to it with a client. The server just sends "foo" and then closes the connection. The client waits a little bit to give the server time to send, and then prints how many bytes are available for reading. The interesting stuff for you is in the client:
require 'socket'
require 'io/wait'

# Server
server_socket = TCPServer.new('localhost', 0)
port = server_socket.addr[1]
Thread.new do
  session = server_socket.accept
  sleep 0.5
  session.puts "foo"
  session.close
end

# Client
client_socket = TCPSocket.new('localhost', port)
puts client_socket.ready? # => nil
sleep 1
puts client_socket.ready? # => 4
Don't use that server code in anything real. It's deliberately short in order to keep the example simple.
Note: According to the Pickaxe book, io/wait is only available if the "FIONREAD" feature of ioctl(2) is present, which it is on Linux. I don't know about Windows and others.
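As a related aside, io/wait also adds IO#nread, which returns the available byte count directly (0 when nothing is buffered). A quick sketch using a socket pair, on platforms that support socketpair(2):

```ruby
require 'socket'
require 'io/wait'

r, w = UNIXSocket.pair # a connected pair of local sockets
w.write "hello"
sleep 0.1 # give the data a moment to land in r's receive buffer
p r.nread # => 5
```

This avoids relying on the implementation detail that ready?'s truthy return value happens to be a byte count in MRI.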