I am trying to programmatically upload a large (10+ MB) file to a server that supports XMLRPC.
For those interested, the background is:
The server is Confluence.
I am trying to upload a file as attachment to a Confluence page.
I am using the Confluence XMLRPC API. The API call is addAttachment.
The code looks something like this:
require 'xmlrpc/client'
server = XMLRPC::Client.new2(server_url,"",600) # one attempt to set timeout
server.instance_variable_get(:@http).instance_variable_set(:@verify_mode, OpenSSL::SSL::VERIFY_NONE)
server.timeout=6000 # another attempt to set timeout
@conf = server.proxy("confluence2")
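The upload itself is then done with something along these lines (sketched here with placeholders: file_path, page_id and token are not shown above, token coming from an earlier login call, and the attachment hash keys follow the Confluence remote API docs):
data = File.binread(file_path) # raw bytes of the 10+ MB file
attachment = {
  'fileName'    => File.basename(file_path),
  'contentType' => 'application/octet-stream'
}
@conf.addAttachment(token, page_id, attachment, XMLRPC::Base64.new(data))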
I know that the XMLRPC client will wait for TIMEOUT seconds before it aborts, so it's natural that I got the following response when trying to upload the large file (smaller files don't throw this error).
C:/Ruby193/lib/ruby/1.9.1/openssl/buffering.rb:318:in `syswrite': An established connection was aborted by the software
in your host machine. (Errno::ECONNABORTED)
from C:/Ruby193/lib/ruby/1.9.1/openssl/buffering.rb:318:in `do_write'
from C:/Ruby193/lib/ruby/1.9.1/openssl/buffering.rb:336:in `write'
from C:/Ruby193/lib/ruby/1.9.1/net/protocol.rb:199:in `write0'
from C:/Ruby193/lib/ruby/1.9.1/net/protocol.rb:173:in `block in write'
from C:/Ruby193/lib/ruby/1.9.1/net/protocol.rb:190:in `writing'
from C:/Ruby193/lib/ruby/1.9.1/net/protocol.rb:172:in `write'
from C:/Ruby193/lib/ruby/1.9.1/net/http.rb:1938:in `send_request_with_body'
from C:/Ruby193/lib/ruby/1.9.1/net/http.rb:1920:in `exec'
However, I am unable to properly set the response timeout. I have tried:
server.timeout
the timeout parameter in the new2 call
Neither of these works. I have verified that the client still waits the default 30 seconds for the response and then throws the error.
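In case it matters, the only other thing I have tried is reaching into the underlying Net::HTTP object (the same @http the verify_mode workaround above uses) and setting its timeouts directly; whether XMLRPC::Client respects values set this way is exactly what I'm unsure about:
http = server.instance_variable_get(:@http)
http.read_timeout = 600 # seconds to wait for the response
http.open_timeout = 600 # seconds to wait when opening the connection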
So, what am I doing wrong?
Related
I have created a simple HTML page to control the movement of a simulated Gazebo Turtlebot using roslaunch rosbridge_server rosbridge_websocket.launch following this tutorial.
However, in the Web Console of the HTML page (F12) it shows the error "Firefox can't establish a connection to the server at ws://localhost:9090/." I am using the default rosbridge WebSocket port (9090). In the terminal I am also receiving the errors:
[-] failing WebSocket opening handshake ('WebSocket connection denied: origin 'null' not allowed')
[-] dropping connection to peer tcp4:127.0.0.1:41290 with abort=False: WebSocket connection denied: origin 'null' not allowed.
Does anyone have any suggestions on how I can fix this?
Given that you have followed the ROS tutorial and created an HTML file as shown in the Rosbridge tutorial, you then have to run:
roscore
rosrun rospy_tutorials add_two_ints_server
roslaunch rosbridge_server rosbridge_websocket.launch
Now that you have these up and running, you need to serve the HTML/JavaScript file (e.g. simple.html), start the services, etc. For example, you can serve simple.html using SimpleHTTPServer; see the example below (e.g. simplehttpserver_test.py):
#!/usr/bin/env python
import SimpleHTTPServer
import SocketServer
class MyRequestHandler(SimpleHTTPServer.SimpleHTTPRequestHandler):
    def do_GET(self):
        # Serve simple.html when the root path is requested.
        if self.path == '/':
            self.path = '/simple.html'
        return SimpleHTTPServer.SimpleHTTPRequestHandler.do_GET(self)

Handler = MyRequestHandler
# Use a port that does not clash with the rosbridge WebSocket port (9090).
server = SocketServer.TCPServer(('127.0.0.1', 9089), Handler)
server.serve_forever()
Once you run simplehttpserver_test.py, you can open your browser at 127.0.0.1:9089 and you should have it working.
Note that SimpleHTTPServer serves files from the current directory and below, directly mapping the directory structure to HTTP requests, which means that simple.html should be in the same directory as (or below) simplehttpserver_test.py. Lastly, the port for simplehttpserver_test.py should differ from the one used by the Rosbridge WebSocket server (whose default is 9090).
I am trying to install Redmine (a bug tracker that runs on Ruby). I use WEBrick; it starts up fine, but when I access http://IP:3000/, it throws the following error in the server logs and the page does not load in the browser.
ERROR Errno::ECONNRESET: Connection reset by peer # io_fillbuf - fd:11
/root/.rbenv/versions/2.5.1/lib/ruby/2.5.0/webrick/httpserver.rb:82:in `eof?'
/root/.rbenv/versions/2.5.1/lib/ruby/2.5.0/webrick/httpserver.rb:82:in `run'
/root/.rbenv/versions/2.5.1/lib/ruby/2.5.0/webrick/server.rb:307:in `block in start_thread'
I am a bit stuck here; any help would be greatly appreciated.
Thanks in advance.
In the end, it was because port 3000 was being blocked by the network the server was part of. Once the block was removed, it started to work. Thanks @mike-k for your tips.
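For anyone hitting something similar, a quick way to check whether the port is reachable from another machine before digging into WEBrick itself (a sketch of my own; replace SERVER_IP with the actual address):
require 'socket'

begin
  # Attempt a plain TCP connection to the Redmine port with a short timeout.
  Socket.tcp('SERVER_IP', 3000, connect_timeout: 5) { puts 'port 3000 is reachable' }
rescue SystemCallError => e
  puts "port 3000 looks blocked or closed: #{e.class}"
end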
I'm experiencing an unresponsive socket with my Puma setup after a random amount of time. Up to this point I don't have a clue what's causing the issue. I was hoping somebody here can help me with some answers or point me in the right direction. I have the following setup:
I'm using the official Docker ruby-2.2.3-slim image together with the latest Puma release, 2.15.3. I've also installed Nginx as a reverse proxy, but I'm fairly sure Nginx isn't the problem here, because I've tried to verify whether the socket was working using this script. The socket wasn't working: I got a timeout there as well, so I could rule out Nginx.
This is a testing environment, so the server isn't experiencing any extreme load. I've also checked memory consumption; it still has several GBs free, so that can't be the issue either.
What triggered me to look at the Puma socket was the error message I got in my Nginx error log:
upstream timed out (110: Connection timed out) while reading response header from upstream
Also, I couldn't find anything in Puma's logs indicating what is going wrong. Here is my Puma setup:
threads 0, 16
app_dir = ENV.fetch('APP_HOME')
environment ENV['RAILS_ENV']
daemonize
bind "unix://#{app_dir}/sockets/puma.sock"
stdout_redirect "#{app_dir}/log/puma.stdout.log", "#{app_dir}/log/puma.stderr.log", true
pidfile "#{app_dir}/pids/puma.pid"
state_path "#{app_dir}/pids/puma.state"
activate_control_app
on_worker_boot do
  require 'active_record'
  ActiveRecord::Base.connection.disconnect! rescue ActiveRecord::ConnectionNotEstablished
  ActiveRecord::Base.establish_connection(YAML.load_file("#{app_dir}/config/database.yml")[ENV['RAILS_ENV']])
end
And this is the output in my Puma state file:
---
pid: 43
config: !ruby/object:Puma::Configuration
  cli_options:
  conf:
  options:
    :min_threads: 0
    :max_threads: 16
    :quiet: false
    :debug: false
    :binds:
    - unix:///APP/sockets/puma.sock
    :workers: 1
    :daemon: true
    :mode: :http
    :before_fork: []
    :worker_timeout: 60
    :worker_boot_timeout: 60
    :worker_shutdown_timeout: 30
    :environment: staging
    :redirect_stdout: "/APP/log/puma.stdout.log"
    :redirect_stderr: "/APP/log/puma.stderr.log"
    :redirect_append: true
    :pidfile: "/APP/pids/puma.pid"
    :state: "/APP/pids/puma.state"
    :control_url: unix:///tmp/puma-status-1449260516541-37
    :config_file: config/puma.rb
    :control_url_temp: "/tmp/puma-status-1449260516541-37"
    :control_auth_token: cda8879717be7a645ea323d931b88d4b
    :tag: APP
The application itself is a Rails app on the latest version, 4.2.5; it's deployed on GCE (Google Container Engine).
If somebody could give me some pointers on how to debug this further, it would be very much appreciated, because right now I don't see any output anywhere that could help me.
EDIT
I replaced the Unix socket with a TCP connection to Puma with the same result; it still hangs after some amount of time.
I'd start with:
How many requests get processed successfully per instance of puma?
Make sure you log the beginning and end of each request, together with the id of the thread executing it (see the sketch at the end of this answer). What do you see?
Not knowing more about your application, I'd say it's likely the threads get stuck doing some long/blocking calls without timeouts, or spinning on some computation, until the whole thread pool gets depleted.
We'll see.
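For reference, here is a minimal Rack middleware sketch for that kind of request/thread logging (my own illustration, not part of the setup above; the class name and the insertion point are assumptions):
require 'logger'

class RequestThreadLogger
  def initialize(app, logger = Logger.new($stdout))
    @app = app
    @logger = logger
  end

  def call(env)
    thread_id = Thread.current.object_id
    started = Time.now
    @logger.info "start #{env['REQUEST_METHOD']} #{env['PATH_INFO']} thread=#{thread_id}"
    @app.call(env)
  ensure
    @logger.info "end #{env['PATH_INFO']} thread=#{thread_id} elapsed=#{(Time.now - started).round(3)}s"
  end
end

# e.g. in a Rails initializer or application.rb (placement is an assumption):
# Rails.application.config.middleware.insert_before Rack::Runtime, RequestThreadLogger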
I finally found out why my application was behaving the way it was.
After trying a TCP connection and switching to Unicorn, I started looking into other possible sources.
That's when I thought maybe my connection to Google Cloud SQL could be the problem. When I read the Cloud SQL FAQ, it mentioned that you have to tweak your Compute Engine instances to ensure they keep your DB connection open. So I performed the steps they recommend, and that solved the problem for me. I've added them here just in case:
# Display the current tcp_keepalive_time value.
$ cat /proc/sys/net/ipv4/tcp_keepalive_time
# Set tcp_keepalive_time to 60 seconds and make it permanent across reboots.
$ echo 'net.ipv4.tcp_keepalive_time = 60' | sudo tee -a /etc/sysctl.conf
# Apply the change.
$ sudo /sbin/sysctl --load=/etc/sysctl.conf
# Display the tcp_keepalive_time value to verify the change was applied.
$ cat /proc/sys/net/ipv4/tcp_keepalive_time
I have a script that reads lines one by one from a CSV file and indexes them into Elasticsearch in the same order. My Elasticsearch host is the same machine the script is running on. Everything works fine except for a few rows, when I suddenly start getting the following error:
W, [2012-10-09T14:46:00.899876 #11567] WARN -- : Cannot assign requested address connect(2)
D, [2012-10-09T14:46:00.900037 #11567] DEBUG -- : ["/home/azitabh/.rvm/rubies/ruby-1.9.2-p320/lib/ruby/1.9.1/net/http.rb:644:in `initialize'", "/home/azitabh/.rvm/rubies/ruby-1.9.2-p320/lib/ruby/1.9.1/net/http.rb:644:in `open'", "/home/azitabh/.rvm/rubies/ruby-1.9.2-p320/lib/ruby/1.9.1/net/http.rb:644:in `block in connect'", "/home/azitabh/.rvm/rubies/ruby-1.9.2-p320/lib/ruby/1.9.1/timeout.rb:44:in `timeout'", "/home/azitabh/.rvm/rubies/ruby-1.9.2-p320/lib/ruby/1.9.1/timeout.rb:89:in `timeout'", "/home/azitabh/.rvm/rubies/ruby-1.9.2-p320/lib/ruby/1.9.1/net/http.rb:644:in `connect'", "/home/azitabh/.rvm/rubies/ruby-1.9.2-p320/lib/ruby/1.9.1/net/http.rb:637:in `do_start'", "/home/azitabh/.rvm/rubies/ruby-1.9.2-p320/lib/ruby/1.9.1/net/http.rb:626:in `start'", "/home/azitabh/.rvm/gems/ruby-1.9.2-p320/gems/rest-client-1.6.7/lib/restclient/request.rb:172:in `transmit'", "/home/azitabh/.rvm/gems/ruby-1.9.2-p320/gems/rest-client-1.6.7/lib/restclient/request.rb:64:in `execute'", "/home/azitabh/.rvm/gems/ruby-1.9.2-p320/gems/tire-0.4.2/lib/tire/http/client.rb:11:in `get'", "/home/azitabh/.rvm/gems/ruby-1.9.2-p320/gems/tire-0.4.2/lib/tire/search.rb:94:in `perform'", "/home/azitabh/.rvm/gems/ruby-1.9.2-p320/gems/tire-0.4.2/lib/tire/search.rb:20:in `results'", "models/test.rb:31:in `get_details'", "models/test.rb:56:in `block in index_test'", "/home/azitabh/.rvm/rubies/ruby-1.9.2-p320/lib/ruby/1.9.1/csv.rb:1768:in `each'", "/home/azitabh/.rvm/rubies/ruby-1.9.2-p320/lib/ruby/1.9.1/csv.rb:1202:in `block in foreach'", "/home/azitabh/.rvm/rubies/ruby-1.9.2-p320/lib/ruby/1.9.1/csv.rb:1340:in `open'", "/home/azitabh/.rvm/rubies/ruby-1.9.2-p320/lib/ruby/1.9.1/csv.rb:1201:in `foreach'", "models/test.rb:40:in `index_test'", "models/test.rb:85:in `<main>'"]
The errors are not related to the values in those rows of the CSV; I get them at different locations at different times.
There is another error, "WAIT_TIMEOUT", which I get sometimes. I couldn't put the trace here as I didn't get that error this time.
I am coding in Ruby and using the Tire gem to talk to Elasticsearch. I don't think these are responsible in any way, though.
Java was using 8% of my system's memory at the time I got this error, which is much below the assigned value ES_MIN_MEM=2g.
Thanks in advance
-Azitabh
This is happening because Tire doesn't close the TCP connection by itself. Once all the available ports are engaged, no further connection is possible until the system itself closes the connections left in the waiting state. That takes some time, and any attempt to establish a new connection in the meantime results in the WAIT_TIMEOUT.
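If you want to watch this happening while the script runs, something like the following rough sketch (my own addition, assuming a Linux host with /proc available) counts the sockets stuck in the waiting state:
# Count sockets in the TIME_WAIT state (hex state code 06 in /proc/net/tcp).
waiting = File.readlines('/proc/net/tcp').count { |line| line.split[3] == '06' }
puts "sockets in TIME_WAIT: #{waiting}"
When that number approaches the size of the ephemeral port range, new outgoing connections start to fail.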
I'm trying to set the timeout for subsequent HTTP calls to a very unreliable API. I have made multiple attempts at using Ruby's built-in Timeout.timeout() method but have had no luck getting it to extend to sub-calls. For example, Timeout.timeout(300) will set the first timeout to 300, but sub-calls go back to 60. I added a print of the seconds_delay and here is what I saw:
[16:55:16 miker#laughwhat-lm ~/optisol/src/rails/tools_app/trunk/adhoc/ticket] $ bundle exec ruby buck.rb
300
nil
warning: peer certificate won't be verified in this SSL session
60
60
nil
warning: peer certificate won't be verified in this SSL session
60
Here is the error I receive with full stack trace:
[16:49:50 miker#laughwhat-lm ~/optisol/src/rails/tools_app/trunk/adhoc/ticket] $ bundle exec ruby buck.rb
warning: peer certificate won't be verified in this SSL session
warning: peer certificate won't be verified in this SSL session
warning: peer certificate won't be verified in this SSL session
warning: peer certificate won't be verified in this SSL session
warning: peer certificate won't be verified in this SSL session
warning: peer certificate won't be verified in this SSL session
warning: peer certificate won't be verified in this SSL session
/Users/miker/.rvm/rubies/ree-1.8.7-2011.03/lib/ruby/1.8/timeout.rb:64:in `rbuf_fill': execution expired (Timeout::Error)
from /Users/miker/.rvm/rubies/ree-1.8.7-2011.03/lib/ruby/1.8/net/protocol.rb:134:in `rbuf_fill'
from /Users/miker/.rvm/rubies/ree-1.8.7-2011.03/lib/ruby/1.8/net/protocol.rb:116:in `readuntil'
from /Users/miker/.rvm/rubies/ree-1.8.7-2011.03/lib/ruby/1.8/net/protocol.rb:126:in `readline'
from /Users/miker/.rvm/rubies/ree-1.8.7-2011.03/lib/ruby/1.8/net/http.rb:2028:in `read_status_line'
from /Users/miker/.rvm/rubies/ree-1.8.7-2011.03/lib/ruby/1.8/net/http.rb:2017:in `read_new'
from /Users/miker/.rvm/rubies/ree-1.8.7-2011.03/lib/ruby/1.8/net/http.rb:1051:in `request_without_fakeweb'
from /Users/miker/.rvm/gems/ree-1.8.7-2011.03/gems/fakeweb-1.3.0/lib/fake_web/ext/net_http.rb:50:in `request'
from /Users/miker/.rvm/rubies/ree-1.8.7-2011.03/lib/ruby/1.8/net/http.rb:845:in `post'
from /Users/miker/.rvm/rubies/ree-1.8.7-2011.03/lib/ruby/1.8/soap/netHttpClient.rb:93:in `post'
from /Users/miker/.rvm/rubies/ree-1.8.7-2011.03/lib/ruby/1.8/soap/netHttpClient.rb:116:in `start'
from /Users/miker/.rvm/rubies/ree-1.8.7-2011.03/lib/ruby/1.8/net/http.rb:543:in `start'
from /Users/miker/.rvm/rubies/ree-1.8.7-2011.03/lib/ruby/1.8/soap/netHttpClient.rb:115:in `start'
from /Users/miker/.rvm/rubies/ree-1.8.7-2011.03/lib/ruby/1.8/soap/netHttpClient.rb:92:in `post'
from /Users/miker/.rvm/rubies/ree-1.8.7-2011.03/lib/ruby/1.8/soap/streamHandler.rb:170:in `send_post'
from /Users/miker/.rvm/rubies/ree-1.8.7-2011.03/lib/ruby/1.8/soap/streamHandler.rb:109:in `send'
from /Users/miker/.rvm/rubies/ree-1.8.7-2011.03/lib/ruby/1.8/soap/rpc/proxy.rb:170:in `route'
from /Users/miker/.rvm/rubies/ree-1.8.7-2011.03/lib/ruby/1.8/soap/rpc/proxy.rb:141:in `call'
from /Users/miker/.rvm/rubies/ree-1.8.7-2011.03/lib/ruby/1.8/soap/rpc/driver.rb:178:in `call'
from /Users/miker/.rvm/rubies/ree-1.8.7-2011.03/lib/ruby/1.8/soap/rpc/driver.rb:232:in `getByBuyer'
from buck.rb:9
from /Users/miker/.rvm/gems/ree-1.8.7-2011.03/gems/yieldmanager-0.8.2/lib/yieldmanager/client.rb:131:in `session'
from buck.rb:8
from /Users/miker/.rvm/rubies/ree-1.8.7-2011.03/lib/ruby/1.8/timeout.rb:67:in `timeout'
from /Users/miker/.rvm/rubies/ree-1.8.7-2011.03/lib/ruby/1.8/timeout.rb:101:in `timeout'
from buck.rb:6
So I guess my question would be: how can I go about patching protocol.rb's BufferedIO method to look like this:
class BufferedIO
  private

  def rbuf_fill
    puts "working"
    timeout(300) { # forced 300 second timeout
      @rbuf << @io.sysread(BUFSIZE)
    }
  end
end
Adding that to my Ruby file before or after I do my requires/includes does not have an effect (i.e. no "working" is ever printed out). Hope someone has a solution. Thanks!
Of all the various Ruby libraries, the clumsiest and ugliest to work with, in my opinion, would have to be Net::HTTP. Have you considered switching to something like this:
https://github.com/dbalatero/typhoeus
The requests in Typhoeus go through libcurl, so they aren't bound by Ruby's timeout method, which is not so reliable when nested or threaded.
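For example, a request with an explicit timeout looks roughly like this (a sketch only; the URL is a placeholder, the API shown follows more recent releases of the gem, and the timeout option's units have changed between Typhoeus versions, so check the README for yours):
require 'typhoeus'

request = Typhoeus::Request.new(
  "https://unreliable-api.example.com/endpoint", # placeholder URL
  method: :get,
  timeout: 300 # enforced by libcurl rather than Ruby's Timeout
)
response = request.run
puts response.code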
In terms of the rbuf_fill issue, I'm not sure that will get you there. If I remember correctly, when a timeout exception fires, it always shows the location the code is currently at in the stack, and that location is only incidental. Take the example below, which I just ran in irb. Notice how it tells you the timeout occurred in "sleep"? Where the timeout is reported is just what the code happened to be doing at that moment, not necessarily where the timeout is implemented, nor do you know which timeout is tripping if there are multiple. I'd have to chase down rbuf_fill to confirm this for you though, and I have to run at the moment...
irb> timeout(2){sleep 5}
Timeout::Error: execution expired
from (irb):3:in `sleep'
from (irb):3:in `block in irb_binding'
from (irb):3
from /home/ebelan