Net::ReadTimeout with local URI - ruby

I'm trying to understand why the following code is blocking my app.
url = 'http://192.168.1.33/assets/my_small_pic.jpg'
image_file = open(url).read
It's working perfectly when I try it in the console. But when I do it from an API method, it blocks my app, and after a long while I get the following error:
Net::ReadTimeout (Net::ReadTimeout)
Why does my app not like my way of reading the file?

I assume you're using 'open-uri' and the API is part of the same Rails app you're sending the request to. In that case your app is simply blocked by the first request while you send it a second request, so the second request times out. You should see this issue in development only. In production things will be a bit different, since static assets are served by Nginx or Apache. Additionally, Rails 4 is thread-safe by default in production, which means it can serve multiple requests at a time. So if you're on Rails 4, calls like this will also work in production. On Rails 3 you would have to explicitly enable config.threadsafe!
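For reference, on Rails 3 that flag normally goes in the production environment file; a minimal sketch (MyApp is a placeholder for your application's module name):
# config/environments/production.rb (Rails 3)
MyApp::Application.configure do
  # Let the app serve multiple requests concurrently, so a request that
  # calls back into the same app over HTTP does not deadlock it.
  config.threadsafe!
end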
In general I would recommend accessing the app's own resources directly rather than making API calls back to it; it's more efficient. In your example above you can read the file like this:
File.read(File.join(Rails.root, 'public/assets/my_small_pic.jpg'))
If you still want to send the HTTP request, then to make it work in development you have to start it in a new thread, similar to this:
Thread.new do
open('http://192.168.1.33/assets/my_small_pic.jpg').read
end

Related

GoLang SPA returning 500 Internal Server Error on Refresh only

I have a golang API app that is currently serving double-duty as a Single Page App server with static content using the method shown here: https://hackandsla.sh/posts/2021-11-06-serve-spa-from-go/
Everything is working great in terms of navigation until users try to refresh URIs with encoded JSON in them. For example:
/licenses
will refresh fine and draw the page as it would normally have appeared through the internal history.push()
/licenses/show/%7B"options":%7B"container":"home","field":"date","order":"desc"%7D,"license":"00001a"%7D
will cause the 500 error.
I did the initial development with IIS as the web server, so these refresh errors never happened in that environment. When the server is ready to be deployed I plan to use Caddy and reverse proxy the API, and I'm assuming it will handle refreshes with the same aplomb as IIS.
But for now I am hoping to run tests against my simple server so I'd like to solve this issue out of curiosity in addition to development expediency.
Bottom line: what causes golang http.ListenAndServe to return 500 errors?
UPDATE:
As I need to be able to test and hand this off to others, I have converted to a query string, which http.ListenAndServe is happy with:
/licenses/show/%7B"options":%7B"container":"home","field":"date","order":"desc"%7D,"license":"00001a"%7D
causes 500 error
/licenses/show?state=%7B"options":%7B"container":"home","field":"date","order":"desc"%7D,"license":"00001a"%7D
works fine

curl php returning status code 0 for golang api

I have created a getList api in golang. Now I am trying to call the getList api from my php function using php-curl.
I am making thousands of requests from my php function. Around 15k requests are served properly, but somewhere past 15k-20k requests (the number varies),
curl's CURLINFO_HTTP_CODE returns 0, the response is "", curl_error returns an empty string, and curl_errno returns 7.
My golang getList api is simple: it takes data from the db and returns it. It does not contain any goroutines.
I don't understand why it starts giving me empty responses after 15k-20k requests. I don't know whether it is a php-curl problem or a golang api problem; it could also be that my golang api is refusing to serve the requests.
Please help.
Have you tried to test it with HTTP testing tools like ab, httperf, jmeter or the like?
* Try to run them with different numbers of total requests and simultaneous requests (an example invocation is sketched below).
First put a static file on the web server and try to fetch it in the same manner. Do you see the same problem? If yes, there may be problems with the network configuration: too few buffers, sockets, max open files and so on.
If not, try to serve the same static file from the golang app. If you see problems there, investigate the golang settings.
If that also works, check your app with the db enabled. If there are problems, check the DB connections; maybe they're not being closed properly and get exhausted.
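For instance, an ApacheBench run along these lines (URL and counts are placeholders) lets you vary the total and concurrent request numbers independently, first against the static file and later against the golang endpoint:
# 20000 requests total, 50 concurrent, against a static file on the web server;
# swap in the golang getList URL for the later steps.
ab -n 20000 -c 50 http://192.168.1.10/static/test.jpg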

Problems attempting to upload image to Twitter via POST in Sinatra

I'm using Sinatra 1.2.6 in Ruby 1.8.7 and I have something like a Twitter client that I'm writing. I am using the Twitter gem version 1.7.2 written by John Nunemaker. For database ORM I'm using Sequel 3.29.0.
Overall, things are working great. I've got a good Oauth sequence working and any user who goes through the Oauth process can post Tweets to my application.
I cannot however for the life of me get media upload working using update_with_media. I'm trying to upload a multi-part octet-stream image file, keep it in memory and then give it to Twitter.
post '/file_upload' do
  user_name = params[:user]
  if params[:action] == "FILE-UPLOAD"
    unless params[:name].match(/\.jpg|png|jpeg/).nil?
      # Assume these 3 lines work, and properly authorize to Twitter
      current_user = User[:user_name => user_name, :current_account => "1"]
      client = current_user.authorize_to_twitter # Handles the Oauth keys/process
      client.update("Text status updates work properly")
      # Something incorrect is happening in the next two lines.
      # I'm either handling the file upload wrong, or posting wrong to Twitter
      datafile = params[:file]
      client.update_with_media("File upload from Skype: ", datafile)
      return "File uploaded ok"
    end
  end
end
Yet, when I try this, I'm getting:
Twitter::Unauthorized - POST https://upload.twitter.com/1/statuses/update_with_media.json: 401: Could not authenticate with OAuth.
It says the line causing this error is the client.update_with_media line.
I am trying to use Rack::RawUpload, but I don't know if I'm using it correctly. If I don't need to use it I won't, but I'm currently stuck. The only thing outside of this code snippet that uses it is this, at the top of my code:
require 'rack/raw_upload'
use Rack::RawUpload
Any help on this would be massively appreciated. I've tried messing around with Tempfile.new() as well, but that didn't seem to help much, and I was getting either 401 or 403 errors. I'm fairly new to Ruby, so being as explicit as possible about any changes needed would be really helpful.
I should note that I'd like to avoid putting the file on the filesystem if possible. I'm really just passing along the upload here, and I never need access in my scenario to the file on-disk afterward. Keeping the files in-memory is much preferred.
You need to check how your library's HTTP headers are set up and how they relate to the POST method you have written here. The thing is that for update_with_media, the Twitter API in this gem version requires the http://upload.twitter.com upload endpoint instead of the default API endpoint.
The gem may be forcing the default API host, so while the OAuth-based status update works fine, it breaks when you try it with an image. You will need to check the gem documentation to figure out how to force the upload.twitter.com host into the request for this method.
Alternatively, consider updating to the latest twitter gem. This is what I got from http://rdoc.info/gems/twitter
The Twitter::API#update_with_media method no longer uses the custom upload.twitter.com endpoint, so media_endpoint configuration has been removed. Likewise, the Twitter::API#search method no longer uses the custom search.twitter.com endpoint, so search_endpoint configuration has also been removed.
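If you do stay on the 1.x gem, the media_endpoint setting mentioned in that changelog is the knob the first suggestion is pointing at. A hedged sketch of what the configuration might look like; the exact option name should be verified against the 1.7.2 docs, and the keys/tokens are placeholders:
# Hedged sketch for the 1.x twitter gem; verify the option names against
# the gem's own documentation. All credentials below are placeholders.
Twitter.configure do |config|
  config.consumer_key       = 'YOUR_CONSUMER_KEY'
  config.consumer_secret    = 'YOUR_CONSUMER_SECRET'
  config.oauth_token        = 'USER_OAUTH_TOKEN'
  config.oauth_token_secret = 'USER_OAUTH_TOKEN_SECRET'
  config.media_endpoint     = 'https://upload.twitter.com'  # endpoint used by update_with_media
end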

Any way to get around the browser http timeout during debugging?

I am currently working on a Django project. There is a problem which isn't a true problem, but it is very annoying. Often, when I try to debug my Django app by putting down some breakpoints, I get this error on the server end:
error: [Errno 32] Broken pipe
After reading this other post, Django + WebKit = Broken pipe, I have learned that this has nothing to do with the server but with the client browser being used. Basically, what happens is that the browser has an HTTP request timeout. If it doesn't receive a response within the timeout, it closes the connection with the server.
I find this timeout isn't really needed, and it causes headaches during debugging. Is there any way I can lift or increase this timeout in my browser (Chrome)? Or maybe there is a substitute browser that doesn't have this constraint?
Note: Although I am using Django and have mentioned it, this isn't a Django-related question. It's more a question about how to make my debugging process more effective.
I prefer using the Linux/Unix curl command for debugging web applications. It's a good approach, especially if you want to focus on one specific request, for example: a POST that does not work for some set of parameters, or cookies that are not set as expected.
Of course it may take some time at the beginning to figure out how to use it, but then you have total control over every single piece of the request: timeouts, cookies, headers and so on. It's very helpful, because you can be sure that what you wanted to send is actually what was sent (no additional data is added by the web browser).
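For example, an invocation along these lines (URL, cookie and form fields are placeholders) pins down the timeout, cookies and headers explicitly:
# -v prints the full request and response; --max-time is the overall timeout
# in seconds (omit it so curl simply waits while you sit at a breakpoint);
# -b sends a cookie, -H adds a header, -d sends a form-encoded POST body.
curl -v --max-time 600 \
  -b "sessionid=abc123" \
  -H "X-Requested-With: XMLHttpRequest" \
  -d "field=value" \
  http://localhost:8000/some/view/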

Receiving a 404 HTTPError on a working page in a Ruby script

This is my first time asking a question, please be gentle!
I have a Rails application that handles content for a whole bunch of domains (over 100 so far). Each domain either points to where my app is hosted (Heroku, if you're interested) or to the original place it was hosted. Every time a domain is ready, it needs to point to the Heroku servers so that my app can serve content for it.
To check to see if a domain has successfully been changed over from its original location to my application, I'm writing a script that looks for a special hidden tag I included in them. If it finds the tag, then the domain is pointing to my app. If not, it hasn't been changed, which I record.
The problem is that, at least for one domain so far, my script gets a 404 OpenURI::HTTPError exception, which is strange, because I can visit the site just fine and I can even fetch it via curl. Does anyone know why a working site would produce an error like this? Here's the important snippet:
require 'rubygems'
require 'open-uri'
require 'hpricot'
...
url = "http://www.#{domainname}.com"
doc = Hpricot(open(url)) #<---- Problem right here.
...
Thanks for all of your help!
Welcome to SO!
Here would be my debugging method:
See if you can replicate in irb with open-uri alone, no Hpricot:
$ irb -rubygems -ropen-uri
>> open('http://www.somedomain.com')
Look in your Heroku log to see if it even touches the server.
Look in your original server's log for the same.
Throw open something like Wireshark to see the HTTP transaction, and see if a 404 is indeed coming back.
Start with that, and come back with your results.
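If step 1 does reproduce the 404, a small snippet like the following (the domain is a placeholder) prints the exact status line and headers that open-uri received, which you can then compare against what curl reports:
require 'rubygems'
require 'open-uri'

begin
  open('http://www.somedomain.com')
rescue OpenURI::HTTPError => e
  # open-uri attaches the HTTP status and headers to the response body object.
  puts "Status:  #{e.io.status.inspect}"  # e.g. ["404", "Not Found"]
  puts "Headers: #{e.io.meta.inspect}"
end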
