Sinatra always prints "!! Invalid request" on standard output - Ruby

I was messing around and recklessly trying to implement SSL with Sinatra, using scripts I found on the internet without actually knowing what they did. I then realized I didn't need SSL, and now Sinatra is completely broken.
No matter what my app is, it always prints "!! Invalid request" in the terminal whenever it receives a request from the browser. I've also noticed that http://localhost:4567 always switches to https://localhost:4567 in the browser's address bar.
I would like Sinatra to behave the way it did when I originally installed it.
I'm new to this...

Uninstall & reinstall Sinatra and Ruby
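After a clean reinstall, a minimal sketch like this should respond over plain HTTP at http://localhost:4567 again (the route and message are just placeholders for a sanity check):
# Bare-bones Sinatra app used only to confirm the fresh install serves plain HTTP on the default port 4567.
require 'sinatra'

get '/' do
  'Sinatra is serving plain HTTP again'
end
Run it with ruby app.rb and visit http://localhost:4567, typing the http:// explicitly.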

Related

Chrome DevTools Protocol addScriptToEvaluateOnNewDocument using ruby chrome_remote

I am trying to inject a script to run on any page using addScriptToEvaluateOnNewDocument on Chrome 79, but it doesn't seem to be working.
I am using the Ruby gem chrome_remote, which gives fairly basic access to the CDP.
Here is an example in Ruby:
require 'chrome_remote'

scpt = <<EOF
window.THIS_WAS_SET = 1
EOF
ChromeRemote.client.send_cmd "Page.addScriptToEvaluateOnNewDocument", source: scpt
ChromeRemote.client.send_cmd "Page.navigate", url: "http://localhost:4567/test"
I then start Chrome with --remote-debugging-port=9222.
Page.addScriptToEvaluateOnNewDocument always returns {"identifier"=>"1"}, even if I call it multiple times with different scripts.
When I open the console on the opened tab in Chrome (which works, so I know CDP in general is working) and check the value of window.THIS_WAS_SET, it is undefined.
Is there any way to verify the command was sent to the browser, such as a log in the browser showing it was received? Any way to see which scripts were injected? Why does each call always return a ScriptIdentifier of 1? That seems problematic.
Does anyone have a similar example working?
You should call "Page.enable" first.
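A minimal sketch of that suggestion, based on the snippet above and assuming a single shared chrome_remote client for all three commands:
# Assumes Chrome is already running with --remote-debugging-port=9222.
require 'chrome_remote'

client = ChromeRemote.client
client.send_cmd "Page.enable"   # enable the Page domain before using its commands
client.send_cmd "Page.addScriptToEvaluateOnNewDocument", source: "window.THIS_WAS_SET = 1"
client.send_cmd "Page.navigate", url: "http://localhost:4567/test"
Reusing one client keeps the enable call and the later commands in the same CDP session.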

Get request with curl works but Net::HTTP and OpenURI fail

My Rails application regularly polls partners' ICS files, and sometimes it fails for no apparent reason. When I run:
curl https://www.airbnb.es/calendar/ical/234892374.ics?s=23412342323
(parameter values faked here)
I get output matching the content of the ICS file. Opening it in the browser works fine as well.
When I use:
Net::HTTP.get(URI(a.ics_link))
I get a "503 Service Temporarily Unavailable" response. I also tried the same with OpenURI with similar results.
Why is the server treating requests from curl or a browser differently from requests made in Ruby?
Is there some way to get around this in Ruby?
It's an HTTPS issue... not sure why, but switch your URL in Ruby to https and it should work.
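A minimal sketch of fetching the feed over HTTPS explicitly with Net::HTTP (the URL below is the placeholder one from the question):
# Fetch the ICS over HTTPS, making sure use_ssl is enabled for the connection.
require 'net/http'
require 'uri'

uri = URI("https://www.airbnb.es/calendar/ical/234892374.ics?s=23412342323")
response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
  http.get(uri.request_uri)
end
puts response.code                               # expect "200" once the scheme matches
puts response.body if response.is_a?(Net::HTTPSuccess)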

I am getting an RPC error message when running my Cucumber tests against a password-protected HTTPS URL

What I am using:
I am using the following gems on Ruby 1.9.3:
capybara, commander, cucumber, cucumber-rails, fakeweb, factory_girl_rails, flexmock, gherkin, parallel, parallel_tests, poltergeist, rspec, rspec-rails, sauce, sauce-connect, sauce-cucumber, selenium-webdriver
For my config file I am using YAML, so config.yml.
To access the homepage of the site I am testing, my config has:
base_url: https://<username>:<password>#the.url.com
When running my Cucumber tests (using Poltergeist), the following message is shown several times while the tests are running:
Invalid rpc message origin. https://username#the.url.com vs https://the.url.com
It does not cause the tests to fail, but it is incredibly untidy and I would really like to get rid of it.
I have been, and still am, investigating a solution, but if someone gets there first that would be amazing.
Some things I have tried, and what I know is happening:
I know my tests are working, as I have run them in a browser (Firefox) and none of these messages appear.
If I remove the s from https, the message disappears, but that will not work because the site requires HTTPS.
Putting the URL in double quotes does not solve the problem.
I have pinpointed the issue to the config, specifically to putting the username and password in the URL.
It looks like your username and password have gone AWOL:
Invalid rpc message origin. https://#the.url.com vs https://the.url.com
The first part should be https://username:password#the.url.com. Could it be because your config is lacking the double slashes?
base_url: https:<username>:<password>#the.url.com
should probably be
base_url: https://<username>:<password>#the.url.com

Any way to get around the browser http timeout during debugging?

I am currently working on a Django project. There is a problem which isn't a true problem, but it is very annoying. Often, when I try to debug my Django app by setting breakpoints, I get this error on the server side:
error: [Errno 32] Broken pipe
After reading another post, Django + WebKit = Broken pipe, I learned that this has nothing to do with the server but with the client browser: the browser has an HTTP request timeout, and if it doesn't receive a response within that timeout, it closes the connection to the server.
This timeout isn't really needed during debugging and just causes headaches. Is there any way I can lift or increase this timeout for my browser (Chrome)? Or is there a substitute browser that doesn't have this constraint?
Note: although I am using Django and have mentioned it, this isn't a Django-specific question. It's more a question about how to make my debugging process more effective.
I prefer using the Linux/Unix curl command for debugging web applications. It's a good approach, especially if you want to focus on one specific request, for example when a POST does not work for some set of parameters, or cookies are not set as expected.
Of course it may take some time at the beginning to learn how to use it, but then you have total control over every piece of the request: timeouts, cookies, headers and so on. It's very helpful, because you can be sure that what you wanted to send is actually what was sent (no additional data added by the web browser).
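A hypothetical example of that kind of control: a POST with explicit headers, a cookie, and a long client-side timeout so the request doesn't give up while the server sits at a breakpoint (the URL and payload are made up):
# -v shows the full request/response; --max-time sets a generous client-side timeout in seconds.
curl -v --max-time 600 \
     -H "Content-Type: application/json" \
     --cookie "sessionid=abc123" \
     -d '{"name": "test"}' \
     http://localhost:8000/api/items/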

Receiving a 404 HTTPError on a working page in a Ruby script

This is my first time asking a question, please be gentle!
I have a Rails application that handles content for a whole bunch of domains (over 100 so far). Each domain either points to where my app is hosted (Heroku, if you're interested), or the original place it was hosted. Every time a domain is ready, it needs to point to the heroku servers, so that my app can serve content for it.
To check to see if a domain has successfully been changed over from its original location to my application, I'm writing a script that looks for a special hidden tag I included in them. If it finds the tag, then the domain is pointing to my app. If not, it hasn't been changed, which I record.
The problem is that, for at least one domain so far, my script gets a 404 OpenURI::HTTPError exception, which is strange because I can visit the site just fine and can even fetch it with curl. Does anyone know why a working site would produce an error like this? Here's the important snippet:
require 'rubygems'
require 'open-uri'
require 'hpricot'
...
url = "http://www.#{domainname}.com"
doc = Hpricot(open(url)) #<---- Problem right here.
...
Thanks for all of your help!
Welcome to SO!
Here's the debugging method I would use:
See if you can replicate in irb with open-uri alone, no Hpricot:
$ irb -rubygems -ropen-uri
>> open('http://www.somedomain.com')
Look in your Heroku log to see if it even touches the server.
Look in your original server's log for the same.
Throw open something like Wireshark to see the HTTP transaction, and see if a 404 is indeed coming back.
Start with that, and come back with your results.
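If the error does reproduce in irb, a small sketch like this (the domain is a placeholder) shows the status line and headers the server actually returned, without leaving Ruby:
# Rescue the OpenURI error and inspect what the server sent back.
require 'open-uri'

begin
  open('http://www.somedomain.com').read
rescue OpenURI::HTTPError => e
  puts e.message            # e.g. "404 Not Found"
  puts e.io.status.inspect  # ["404", "Not Found"]
  puts e.io.meta.inspect    # response headers
end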
