Request read through WebSocket getting garbled - Ruby

I am attempting to create a minimal WebSocket implementation using the Cramp framework.
While Cramp successfully renders normal web content, I run into trouble when I try to use HTML5 WebSockets.
My action class is as follows:
Cramp::Websocket.backend = :thin

class HomeAction < Cramp::Action
  self.transport = :websocket
  keep_connection_alive
  on_data :recv_data

  def recv_data(data)
    puts "got message"
    puts "#{data}"
    render "Hello world"
  end
end
My JavaScript code is as follows:
$(function(){
  window.socket = new WebSocket("ws://localhost:3000/game");

  socket.onmessage = function(evt){
    console.log(evt.data);
    socket.close();
  };

  socket.onclose = function(evt) {
    console.log("end");
  };

  socket.onopen = function() {
    console.log("Now open!");
    socket.send("Hello");
  };
});
The server (Thin) detects when data is sent, but the text that is read is garbled.
The encoding of the data is ASCII-8BIT (puts data.encoding prints "ASCII-8BIT"). However, forcing UTF-8 encoding through data.force_encoding('UTF-8') does not resolve the issue. In addition, after forcing the encoding, data.valid_encoding? returns false, whereas it was true before forcing.
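For reference, this is roughly how I am checking the encoding inside recv_data (the comments show what I observe; the garbled text itself is omitted):

def recv_data(data)
  puts data.encoding          # => ASCII-8BIT
  puts data.valid_encoding?   # => true
  data.force_encoding('UTF-8')
  puts data.valid_encoding?   # => false, and the text is still garbled
  render "Hello world"
end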
I have tested the app with ruby-1.8.7 as well as ruby-1.9.3; the output is the same in both.
Another weird thing is that on the client side the onmessage event is never fired.
Also, if I remove the keep_connection_alive call from HomeAction, the connection terminates immediately after the data is received, and the client still does not receive the data sent by the server ("Hello world").
I have tested the app in Google Chrome (latest version) and Mozilla Firefox (latest version); the problem is exactly the same in both. My operating system is Ubuntu 12.04 LTS (Precise Pangolin).
Any help in this regard would be greatly appreciated.

I have been running into the same thing, and it seems to be an issue with the released version of the cramp 0.15.1 gem versus what you get from the GitHub repo (https://github.com/lifo/cramp), though it is still marked as 0.15.1.
Try this experiment, which works for me:
Clone the GitHub repo locally.
Copy the bin/ and lib/ folders, as well as the cramp.gemspec file, from the repo into your test Cramp project.
Change your Gemfile: instead of just
gem 'cramp'
include the local copy of the code (a full sketch of the resulting Gemfile is at the end of this answer):
gemspec
gem 'cramp', :path => File.dirname(__FILE__)
Erase your Gemfile.lock and re-bundle; bundler should now report that it will use the local copy of the cramp gem.
Try your app again. In my scenario, this now works exactly as expected.
It would appear there is either a fix on GitHub they have not released yet (without incrementing the version in their gemspec) or some other version snafu, but either way the code on GitHub works, whereas a plain gem install cramp doesn't give you working code for WebSockets.
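For reference, a minimal sketch of what the resulting Gemfile might look like (assuming the copied cramp.gemspec sits at the project root; keep whatever other gems your app already lists):

# Gemfile (sketch)
source 'https://rubygems.org'

# Pull in the dependencies declared in the copied cramp.gemspec
gemspec

# Use the local copy of cramp instead of the released 0.15.1 gem
gem 'cramp', :path => File.dirname(__FILE__)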

Prevent PhantomJS from raising Capybara::Poltergeist::StatusFailError when requesting never-ending assets

I am having some issues with Capybara::Poltergeist::Driver.
When I visit the following URL with Poltergeist, I am experiencing an issue where an asset that seemingly doesn't exist takes forever to load, and eventually an error gets raised: https://www.feinstein.senate.gov/public/index.cfm/e-mail-me
My setup:
$ brew install phantomjs
$ gem install capybara -v 2.17.0
$ gem install poltergeist -v 1.7.0
$ gem install selenium-webdriver -v 2.53.4
Then in irb:
require 'capybara/poltergeist'

module Drivers
  class Poltergeist < Capybara::Poltergeist::Driver
    def needs_server?
      false
    end
  end
end

Capybara.register_driver :poltergeist_errorless do |app|
  options = ['--load-images=no', '--ignore-ssl-errors=yes', '--ssl-protocol=any', '--disk-cache=true', '--max-disk-cache-size=500000']
  Drivers::Poltergeist.new(app, js_errors: false, phantomjs_options: options)
end

session = Capybara::Session.new(:poltergeist_errorless)
session.visit('https://www.feinstein.senate.gov/public/index.cfm/e-mail-me')
session = Capybara::Session.new(:poltergeist_errorless)
session.visit('https://www.feinstein.senate.gov/public/index.cfm/e-mail-me')
After 10-20 seconds, the request fails, and I get back a Capybara::Poltergeist::StatusFailError exception with a message that says:
Request to 'https://www.feinstein.senate.gov/public/index.cfm/e-mail-me' failed to reach server, check DNS and/or server status - Timed out with the following resources still waiting https://sdc1.senate.gov/NEED_VALUE/wtid.js
But if I then call:
session.save_screenshot('/tmp/sc.png', full: true)
the resulting screenshot shows that the rest of the page loaded just fine. If this were any other browser, it would just continue to function happily without worrying about an asset that is taking forever to load.
Is there any way to configure PhantomJS not to wait for this asset and not to raise this exception?
The easiest way to deal with that is to use Poltergeist's blacklist to block the URL - https://github.com/teampoltergeist/poltergeist#customization -
and/or - https://github.com/teampoltergeist/poltergeist#url-blacklisting--whitelisting
If your situation is more dynamic, you could rescue the exception, parse the URL out of the message, add it to the blacklist, and then retry the visit.
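A rough sketch of both approaches, assuming a Poltergeist version that supports URL blacklisting (both the blacklist entry and the exception-message parsing below are illustrative, not an exact recipe):

# Static approach: blacklist the stuck asset up front when registering the driver
# (the :url_blacklist driver option is described in the README linked above).
Capybara.register_driver :poltergeist_errorless do |app|
  options = ['--load-images=no', '--ignore-ssl-errors=yes', '--ssl-protocol=any']
  Drivers::Poltergeist.new(app, js_errors: false, phantomjs_options: options,
                           url_blacklist: ['https://sdc1.senate.gov/NEED_VALUE/wtid.js'])
end

# Dynamic approach: rescue the failure, pull the stuck URL out of the message,
# add it to the blacklist, and retry the visit.
def visit_ignoring_stuck_assets(session, url)
  blacklist = []
  begin
    session.driver.browser.url_blacklist = blacklist unless blacklist.empty?
    session.visit(url)
  rescue Capybara::Poltergeist::StatusFailError => e
    stuck = e.message[/still waiting\s+(\S+)/, 1]   # illustrative parsing of the message
    raise if stuck.nil? || blacklist.include?(stuck)
    blacklist << stuck
    retry
  end
end

Calling visit_ignoring_stuck_assets(session, 'https://www.feinstein.senate.gov/public/index.cfm/e-mail-me') would then retry the visit with the stuck asset blocked.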
Additionally, there is no need to override needs_server?. If you don't pass a second parameter (the app to run) to Session#new (which you aren't doing), then needs_server? is irrelevant.
I'll play around with the timeout param; since Capybara::Session.new only takes a driver name and an optional app, that presumably means passing :timeout to the Poltergeist driver instead:
Drivers::Poltergeist.new(app, js_errors: false, timeout: ASSET_LOAD_TIME, phantomjs_options: options)

Downloading a track from Soundcloud using Ruby SDK

I am trying to download a track from SoundCloud using the Ruby SDK (soundcloud 0.2.0 gem). I have registered the app on SoundCloud and the client_secret is correct; I know this because I can see my profile info and tracks using the app.
Now, when I try to download a track using the following code
@track = current_user.soundcloud_client.get(params[:track_uri])
data = current_user.soundcloud_client.get(@track.download_url)
File.open("something.mp3", "wb") { |f| f.write(data) }
and when I open the file, it has nothing in it. I've tried many approaches, including the following one:
data = current_user.soundcloud_client.get(@track.download_url)
file = File.read(data)
And this one gives me an error:
can't convert nil into String
on line 13, which is
app/controllers/store_controller.rb:13:in `read'
that is, the File.read call.
I have double-checked that the track I am trying to download is public and downloadable.
I tried the download_url explicitly by copying it from the console and sending a request with Postman, and it worked. I am not sure why it is not working in the app when everything else works so well.
What I want is to either download the track or at least get the data so I can store it somewhere.
Version details:
ruby 1.9.3p194 (2012-04-20 revision 35410) [x86_64-linux]
Rails 3.2.18
soundcloud 0.2.0
There are a few things you have to understand before doing this.
Not every track on SoundCloud can be downloaded! Only tracks that are flagged as downloadable can be downloaded, and your code has to account for that.
Your track URL has to be "resolved" before you get a download_url, and once you have the download_url you have to append your client_id to get the final download URL.
Tracks can be big, and downloading them takes time! You should never do tasks like this straight from your Rails controller or model. If a task can run long, hand it off to a background worker or some other kind of background processing, Sidekiq for example.
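As a rough illustration of that last point only, a minimal Sidekiq worker might look like the following (the worker name is hypothetical, and its body would contain the same resolve-and-download logic as the command-line client below, just outside the request cycle):

require 'sidekiq'

class TrackDownloadWorker
  include Sidekiq::Worker

  def perform(track_url)
    # Resolve the track, check "downloadable", and stream download_url to disk,
    # exactly as in the command-line client below.
  end
end

# Enqueued from a controller or model:
# TrackDownloadWorker.perform_async("http://soundcloud.com/forss/flickermood")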
Command-line client example
This is an example of a working client that you can use to download tracks from SoundCloud. It uses the official SoundCloud API Wrapper for Ruby, assumes Ruby 1.9.x, and does not depend on Rails in any way.
# We use Bundler to manage our dependencies
require 'bundler/setup'

# We store SC_CLIENT_ID and SC_CLIENT_SECRET in .env
# and the dotenv gem loads that for us
require 'dotenv'; Dotenv.load

require 'soundcloud'
require 'open-uri'

# Ruby 1.9.x has a problem with following redirects, so we use this
# "monkey-patch" gem to fix that. Not needed in Ruby >= 2.x
require 'open_uri_redirections'

# First there is the authentication part.
client = SoundCloud.new(
  client_id: ENV.fetch("SC_CLIENT_ID"),
  client_secret: ENV.fetch("SC_CLIENT_SECRET")
)

# Track URL, publicly visible...
track_url = "http://soundcloud.com/forss/flickermood"

# We call the SoundCloud API to resolve the track URL
track = client.get('/resolve', url: track_url)

# If the track is not downloadable, abort the process
unless track["downloadable"]
  puts "You can't download this track!"
  exit 1
end

# We take the track id and use it to name our local file
track_id = track.id
track_filename = "%s.aif" % track_id.to_s

download_url = "%s?client_id=%s" % [track.download_url, ENV.fetch("SC_CLIENT_ID")]

File.open(track_filename, "wb") do |saved_file|
  open(download_url, allow_redirections: :all) do |read_file|
    saved_file.write(read_file.read)
  end
end
puts "Your track was saved to: #{track_filename}"
Also note that the files are in AIFF (Audio Interchange File Format). To convert them to MP3 you can do something like this with ffmpeg:
ffmpeg -i 293.aif final-293.mp3

Serving static files with Ruby Espresso

I'm trying to serve some assets using the el gem but can't seem to get it to work. I've referred to another question posted here: Assets in espresso breaks my app
My setup looks like this:
require 'e'
require 'el'
...
app = E.new(true){
  assets_url '/pub', true
}
But hitting localhost:5252/pub/hello.txt (yes, this file exists) results in a 404. What am I missing?
You forgot to append any paths to the Sprockets environment.
http://espresso.github.io/Periphery/Assets.html#sprockets
Please try:
app = E.new(true){
  assets_url '/pub', true
  assets.append_path 'relative-path-to-static-files'
}
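For example, if hello.txt lives in a pub/ directory at the root of your app, appending that directory should (as far as I can tell from the assets docs linked above) make it resolvable; the directory name here is a guess, so use whatever path actually holds your files:

app = E.new(true){
  assets_url '/pub', true
  # /pub/hello.txt should now resolve to ./pub/hello.txt
  assets.append_path 'pub'
}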

wrong number of arguments (3 for 2) after upgrading from Rails 3.0.14 to Rails 3.1.4

Everything was working fine in Rails 3.0.14, but after changing gem 'rails', '3.0.14' to gem 'rails', '3.1.4' and running bundle update rails, I now get the following error:
Started GET "/" for 127.0.0.1 at 2012-03-16 11:11:44 -0400
Processing by PagesController#index as HTML
Completed 500 Internal Server Error in 54ms
ArgumentError (wrong number of arguments (3 for 2)):
app/controllers/application_controller.rb:37:in `customize_by_subdomain'
The most popular answer seemed to be that sqlite3 needed to be updated, but I did bundle update sqlite3 and I still have the same problem.
Here is the full trace: https://gist.github.com/2050530
The method that it is complaining about looks like this:
35 def customize_by_subdomain
36   subdomain = (request.subdomain.present? && request.subdomain != 'www' && request.subdomain) || 'launch'
37   @current_org = Organization.find_by_subdomain(subdomain) || Organization.find_by_subdomain('launch')
38 end
I have looked at the multitude of similar questions and have not found anything that solves my problem. The closest question to mine was: wrong number of arguments (3 for 1) after upgrading rails from 3.1.1 to 3.1.3, but I am using Authlogic and the version I am using didn't change after upgrading Rails.
The only other interesting thing is that my entire test suite passes, except for one request/integration spec which goes through the process of creating a new user. It seems strange that my request specs work fine when I can't even access a page in development.
Any ideas on what I can do to get to the bottom of this?
It looks like your New Relic plugin may need to be updated to a newer version. In your stack trace, the first line is from the New Relic code in your plugins folder. From their site, it looks like they released new Rails 3.1-specific code:
http://blog.newrelic.com/2011/07/29/for-the-active-record-new-relic-support-for-rails-3-1-is-here/
In the blog post, they talk about changes to the way ActiveRecord does logging, and your exception was triggered on the log_with_instrumentation method.
It looks like now you should install it as a gem rather than a plugin:
https://github.com/newrelic/rpm
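In practice that usually means removing the vendored plugin and adding the gem to your Gemfile instead, roughly like this (the plugin folder name varies; pin whatever version New Relic recommends for Rails 3.1):

# Gemfile
# after deleting vendor/plugins/newrelic_rpm (or however the plugin folder is named):
gem 'newrelic_rpm'

then running bundle install.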
Hope this helps.

Job handler serialization incorrect when running delayed_job in production with Thin or Unicorn

I recently brought delayed_job into my Rails 3.1.3 app. In development everything is fine. I even staged my DJ release on the same VPS as my production app using the same production application server (Thin), and everything was fine. Once I released to production, however, all hell broke loose: none of the jobs were entered into the jobs table correctly, and I started seeing the following in the logs for all processed jobs:
2012-02-18T14:41:51-0600: [Worker(delayed_job host:hope pid:12965)] NilClass# completed after 0.0151
2012-02-18T14:41:51-0600: [Worker(delayed_job host:hope pid:12965)] 1 jobs processed at 15.9666 j/s, 0 failed ...
NilClass and no method name? Certainly not correct. So I looked at the serialized handler on the job in the DB and saw:
"--- !ruby/object:Delayed::PerformableMethod\nattributes:\n id: 13\n
event_id: 26\n name: memememe\n api_key: !!null \n"
No indication of a class or method name. And when I load the YAML into an object and call #object on the resulting PerformableMethod, I get nil. For kicks I then fired up the console on the broken production app and delayed the same job. This time the handler looked like:
"--- !ruby/object:Delayed::PerformableMethod\nobject: !ruby/
ActiveRecord:Domain\n attributes:\n id: 13\n event_id: 26\n
name: memememe\n api_key: !!null \nmethod_name: :create_a\nargs: []
\n"
And sure enough, that job runs fine. Puzzled, I then recalled reading something about DJ not playing nice with Thin. So I tried Unicorn and was sad to see the same result. Hours of research later, I think this has something to do with how the app server is loading the YAML libraries Psych and Syck and DJ's interaction with them. I cannot, however, pin down exactly what is wrong.
Note that I'm running the official delayed_job 3.0.1, but I have tried upgrading to the master branch and have even tried downgrading to 2.1.4.
Here are some notable differences between my stage and production setups:
In stage I run 1 Thin server on a TCP port -- no web proxy in front.
In production I run 2+ Thin servers and proxy to them with Nginx; they talk over a UNIX socket.
When I tried Unicorn it was 1 app server proxied to by Nginx over a UNIX socket.
Could the web proxying/Nginx have something to do with it? Please, any insight is greatly appreciated. I've spent a lot of time integrating delayed_job and would hate to have to shelve the work or, worse, toss it. Thanks for reading.
I fixed this by not using #delay. Instead I replaced all of my "model.delay.method" code with custom jobs. Doing so works like a charm and is ultimately more flexible. This fix works fine with Thin; I haven't tested it with Unicorn.
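A minimal sketch of the custom-job approach, using the class and method names that appear in the YAML above (the Struct-based job is the usual delayed_job pattern; the exact names are illustrative):

# Replaces domain.delay.create_a
class CreateAJob < Struct.new(:domain_id)
  def perform
    Domain.find(domain_id).create_a
  end
end

# Enqueue it wherever #delay used to be called:
Delayed::Job.enqueue CreateAJob.new(domain.id)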
I'm running into a similar problem with Rails 3.0.10 and DJ 2.1.4. It's most certainly a different YAML library being loaded when running from the console vs. from the app server (Thin, Unicorn, Nginx). I'll share any solution I come up with.
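A quick way to confirm which engine each environment is using (Ruby 1.9.x only, since YAML::ENGINE was removed in 2.x) is to run this both in rails console and from inside the app server, e.g. logged from an initializer:

require 'yaml'
# Prints "psych" or "syck"; if the two environments disagree,
# that is the serialization mismatch.
puts YAML::ENGINE.yamler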
OK, so removing these lines from config/boot.rb fixed this issue for me:
require 'yaml'
YAML::ENGINE.yamler = 'syck'
This had been placed there to fix a YAML parsing error by forcing YAML to use Syck. Removing it required me to fix the underlying issues with the .yml files. More on this here.
Now my delayed_job record handlers match between those created via the server (Unicorn in my case) and the console. Both my server and delayed_job workers are kicked off within Bundler:
Unicorn
cd #{rails_root} && bundle exec unicorn_rails -c #{rails_root}/config/unicorn.rb -E #{rails_env} -D
DJ
export LANG=en_US.utf8; export GEM_HOME=/data/reception/current/vendor/bundle/ruby/1.9.1; cd #{rails_root}; /usr/bin/ruby1.9.1 /data/reception/current/script/delayed_job start staging
