I am using Ruby 1.9.3 and am running the following command:
open("ftp://user name:password@datafeeds.domain.com/1234/1234.txt.gz")
which returns:
URI::InvalidURIError: bad URI(is not URI?)
Encoding the user name (replacing spaces with %20) does not work either:
Net::FTPPermError: 530 Invalid userid/password
The URI works fine in all browsers and FTP clients tested - just not when using OpenURI. Also, using Net::FTP (which is wrapped by OpenURI) works fine as well:
require 'net/ftp'
ftp = Net::FTP.new
ftp.connect("datafeeds.domain.com", 21)
ftp.login("user name", "password")
ftp.getbinaryfile("/1234/1234.txt.gz")
Any idea why the OpenURI method does not work, while the Net::FTP method does? Thanks.
By definition in the specification, URL user names only allow these characters:
user      = alphanum2 [ user ]
[...]
alphanum2 = alpha | digit | - | _ | . | +
Browsers are notorious for ignoring the specifications, so saying they support it isn't good proof. Per the spec, they shouldn't.
If cURL supports them, then use the Curb gem and see if that'll let you use them.
According to this StackOverflow answer, you should be able to just escape the special characters in your username and password. You could do something like:
login = URI.escape('user name') + ':' + URI.escape('password')
open("ftp://#{login}@datafeeds.domain.com/1234/1234.txt.gz")
open-uri seems broken in that regard. I had a similar issue with a password that contained the # character.
I ended up bypassing set_password, which doesn't allow the # character in a password, by setting the URI's @password instance variable directly.
uri = URI.parse(...)
if uri.password
  uri.instance_variable_set "@password", "password_with_#_char"
end
open(uri) # now this works fine
It's hacky, but does the job.
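A less hacky alternative, sketched below (untested against a live FTP server; the host, credentials, and path are the poster's), is to percent-encode the userinfo before URI.parse ever sees it. Note that ERB::Util.url_encode encodes a space as %20, unlike CGI.escape, which would produce a +:

```ruby
require 'uri'
require 'erb'

# Percent-encode user and password up front so characters like ' ' or '#'
# can't break URI parsing. ERB::Util.url_encode encodes ' ' as %20 and
# '#' as %23 (CGI.escape would turn the space into '+').
user     = ERB::Util.url_encode('user name')   # "user%20name"
password = ERB::Util.url_encode('pass#word')   # "pass%23word"

uri = URI.parse("ftp://#{user}:#{password}@datafeeds.domain.com/1234/1234.txt.gz")
puts uri.user   # still the encoded form
```

Caveat: this only fixes the parsing step. Whether open() decodes the userinfo again before performing the FTP login depends on the Ruby version; the 530 error above suggests 1.9.3's open-uri passes the still-encoded name to the server, in which case the Net::FTP approach remains the reliable one.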
Related
We're cleaning up some errors on our site after migration from ruby 1.8.7 to 1.9.3, Rails 3.2.12. We have one encoding error left -- Bing is sending requests for URLs in the form
/search?q=author:\"Andr\xc3\xa1s%20Guttman\"
(This reads /search?q=author:"András Guttman", where the á is escaped).
In fairness to Bing, we were the ones that gave them those bogus URLs, but ruby 1.9.3 isn't happy with them any more.
Our server is currently returning a 500. Rails is returning the error "Encoding::CompatibilityError: incompatible character encodings: UTF-8 and ASCII-8BIT"
I am unable to reproduce this error in a browser, or via curl or wget from OS X or Linux command line.
I want to send a 301 redirect back with a properly encoded URL.
I am guessing that I want to:
- detect that the URL has the old UTF-8 encoding (and do the following steps only if it is malformed)
- use String#encode to get from old to new UTF-8
- use CGI.escape() to %-encode the URL
- 301 redirect to the corrected URL
So I have read a lot and am not sure how (or if) I can detect this bogus URL. I need to detect because otherwise I would have to 301 everything!
When I try in irb I get these results:
1.9.3p392 :015 > foo = "/search?q=author:\"Andr\xc3\xa1s%20Guttman\""
=> "/search?q=author:\"András%20Guttman\""
1.9.3p392 :016 > "/search?q=author:\"Andr\xc3\xa1s%20Guttman\"".encoding
=> #<Encoding:UTF-8>
1.9.3p392 :017 > foo.encoding
=> #<Encoding:UTF-8>
I have read this SO post but I am not sure if I have to go this far or even if this applies.
[Update: since posting, we have added a call to the code in the SO post linked above prior to all requests.]
So the question is: how can I detect the old-style encoding so that I can do the other steps.
First, let's look at the string manipulation side of things. It looks like using the URI module to unescape and then re-escape will just work:
2.0.0p0 :007 > foo = "/search?q=author:\"Andr\xc3\xa1s%20Guttman\""
=> "/search?q=author:\"András%20Guttman\""
2.0.0p0 :008 > URI.unescape foo
=> "/search?q=author:\"András Guttman\""
2.0.0p0 :009 > URI.escape URI.unescape foo
=> "/search?q=author:%22Andr%C3%A1s%20Guttman%22"
So the next question is where to do that. I'd say the problem with trying to detect strings containing the \x escape sequence is that you can't GUARANTEE those strings were not supposed to contain a literal backslash-x rather than an escaped byte (although, in practice, maybe that is an okay assumption).
You might consider just adding a small rack middleware that does this. See this Railscast for more on rack. Assuming you only get these in the parameters (i.e., after the ? in the URL), then your middleware would look something like (untested, just for illustration; place in your /lib folder as reescape_parameters.rb):
require 'uri' # possibly not needed?

class ReescapeParameters
  def initialize(app)
    @app = app
  end

  def call(env)
    env['QUERY_STRING'] = URI.escape(URI.unescape(env['QUERY_STRING']))
    status, headers, body = @app.call(env)
    [status, headers, body]
  end
end
Then you use the middleware by adding a line to your application config or an initializer. For example, in /config/application.rb (or, alternatively, in an initializer):
config.middleware.use "ReescapeParameters"
Note that you will probably need to catch these parameters before any parameter handling by Rails. I'm not sure where in the Rack stack you'll need to put it, but you will more likely need:
config.middleware.insert_before ActionDispatch::ParamsParser, ReescapeParameters
This would put it in the stack before ActionDispatch::ParamsParser. You'll need to figure out the correct middleware to position it against; this is just a guess. (FYI: there is an insert_after as well.)
UPDATE (REVISED)
If you MUST detect these and then send a 301, you could try:
def call(env)
  if env['QUERY_STRING'].encoding.name == 'ASCII-8BIT' # could be 'ASCII_8BIT' ?
    location = URI.escape URI.unescape env['QUERY_STRING']
    [301, {'Content-Type' => 'text', 'Location' => location}, '']
  else
    status, headers, body = @app.call(env)
    [status, headers, body]
  end
end
This is a trial -- it might match everything. But hopefully, "regular" strings are being encoded as something else (and hence you only get the error for the ASCII-8BIT encoding).
Per one of the comments, you could also convert instead of unescape and escape:
location = env['QUERY_STRING'].encode('UTF-8')
but you might still need to URI escape the resulting string anyway (not sure, depends on your circumstances).
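For the convert route, a minimal standalone sketch (not middleware): Rack typically hands QUERY_STRING over tagged as ASCII-8BIT even when its bytes are valid UTF-8, so the usual 1.9-era idiom is to relabel the bytes with force_encoding and fall back to transcoding via the 'binary' source encoding when they are not valid UTF-8:

```ruby
# Rack typically yields QUERY_STRING tagged ASCII-8BIT even when the
# bytes are valid UTF-8. force_encoding relabels the bytes in place;
# encoding from 'binary' with :invalid/:undef replacement is the safe
# fallback when the bytes are not valid UTF-8 (works on 1.9.3+).
raw = "Andr\xC3\xA1s".force_encoding('ASCII-8BIT')

utf = raw.dup.force_encoding('UTF-8')
unless utf.valid_encoding?
  utf = raw.encode('UTF-8', 'binary', invalid: :replace, undef: :replace)
end

puts utf   # "András", tagged UTF-8
```

You would still need to URI-escape the result before putting it in a Location header, as noted above.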
Please use CGI::unescapeHTML(string)
I made a very small app for the raspberry pi, that uses Sinatra:
https://github.com/khebbie/SpeakPi
The app lets the user input some text in a textarea and asks Google to create an mp3 file for it.
In there I have a shell script called speech2.sh which calls Google and plays the mp3 file:
#!/bin/bash
say() {
  wget -q -U Mozilla -O out.mp3 "http://translate.google.com/translate_tts?tl=da&q=$*";
  local IFS=+; omxplayer out.mp3;
}
say $*
When I call speech2.sh from the command line like so:
./speech2.sh %C3%A6sel
It pronounces %C3%A6 like the danish letter 'æ', which is correct!
I call speech2.sh from a Sinatra route like so:
post '/say' do
  message = params[:body]
  system('/home/pi/speech2.sh ' + message)
  haml :index
end
And when I do so, Google pronounces some very weird chars, like 'a broken pipe...', which is wrong!
All chars a-z are pronounced correctly.
I have tried some URL encoding and decoding; nothing worked.
I tried outputting the message to the command line, and it was exactly "%C3%A6", which makes the wrong pronunciation all the more puzzling.
Do you have any idea what I am doing wrong?
EDIT
To Sum it up and simplify - if I type like so in bash:
./speech2.sh %C3%A6sel
It works
If I start an irb session and type:
system('/home/pi/speech2.sh', '%C3%A6sel')
It does not work!
Since it is handling UTF-8, make sure the encoding stays right all the way through the process: add the # encoding: UTF-8 magic comment at the top of the Ruby script, and pass the ie=UTF-8 parameter in the query string when calling Google Translate.
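As a sketch of the shell side of this (paths are the poster's; untested on a Pi): the one-string form of system runs through /bin/sh, while the multi-argument form execs the script directly with no shell in between, which changes how the argument is split and expanded. Either way, untrusted input like params[:body] should be shell-escaped before interpolation; Shellwords is in the stdlib:

```ruby
require 'shellwords'

# system("cmd " + arg) goes through /bin/sh: the shell performs word
# splitting and metacharacter expansion before speech2.sh ever runs.
# system("cmd", arg) bypasses the shell entirely. To interpolate user
# input safely in the single-string form, escape it first:
message = 'hello & goodbye'
escaped = Shellwords.escape(message)   # each space and & gets backslash-escaped
# system('/home/pi/speech2.sh ' + escaped)   # path is the poster's
```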
What is the equivalent in Rails of this (PHP)?
hash_hmac('sha512', $password . $salt, $siteSalt);
I got as far as this:
Digest::SHA512.hexdigest(password + salt)
But have no idea how to incorporate the site salt into the equation, all online examples I've seen do not pass the salt to the hexdigest method. When I've tried it I get an error for too many arguments.
And this notation with a colon (which I saw somewhere):
salted = password + salt
Digest::SHA512.hexdigest("#{salted}:site_salt")
Doesn't produce the same hash.
Thanks
Edit
I stumbled upon this that looks closer to what I need (sorry, I'm very new to the whole hashing thing):
OpenSSL::HMAC.hexdigest('sha512', site_salt, salted)
But it still produces a different hash than the one stored in the database.
I'm using Rails 4 and @brian's Rails code didn't compile for me.
Here is what worked for me.
Rails shell:
2.1.2 :001 > Digest::HMAC.hexdigest("password"+"salt","siteSalt",Digest::SHA512)
=> "15b45385a00b10eb25c3aa8198d747862575a796a89a6c79b5a0b8ea332a8d75b1ec0dc1f0c9f7930d30c9359279e86df29067bbbc5d9bcf87839f855ac7a677"
PHP (from command line)
$ php -r 'print hash_hmac("sha512", "password" . "salt", "siteSalt") . "\n";'
15b45385a00b10eb25c3aa8198d747862575a796a89a6c79b5a0b8ea332a8d75b1ec0dc1f0c9f7930d30c9359279e86df29067bbbc5d9bcf87839f855ac7a677
I think this will do what you want:
HMAC::SHA512.hexdigest(site_salt, password + salt)
It looks like the PHP code you're referencing is using the siteSalt as the key for the HMAC function, with the password and salt concatenated specified as the value to be hashed.
I checked this by running this code in PHP:
% php -r 'print hash_hmac("sha512", "password" . "salt", "siteSalt") . "\n";'
15b45385a00b10eb25c3aa8198d747862575a796a89a6c79b5a0b8ea332a8d75b1ec0dc1f0c9f7930d30c9359279e86df29067bbbc5d9bcf87839f855ac7a677
And then in the Rails shell:
>> HMAC::SHA512.hexdigest('siteSalt', 'password' + 'salt')
=> "15b45385a00b10eb25c3aa8198d747862575a796a89a6c79b5a0b8ea332a8d75b1ec0dc1f0c9f7930d30c9359279e86df29067bbbc5d9bcf87839f855ac7a677"
It turns out the salt was empty in the PHP code, hence the discrepancy. But now both methods return the same.
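For later Rubies, note that HMAC::SHA512 comes from the old ruby-hmac gem, and the stdlib's Digest::HMAC was eventually removed as well; the OpenSSL equivalent, which is available in current Rubies, produces the same digest as the PHP command shown above:

```ruby
require 'openssl'

# Same HMAC as PHP's hash_hmac('sha512', 'password' . 'salt', 'siteSalt'):
# the key is the site salt, the message is password + salt.
digest = OpenSSL::HMAC.hexdigest('SHA512', 'siteSalt', 'password' + 'salt')
puts digest
# 15b45385a00b10eb25c3aa8198d747862575a796a89a6c79b5a0b8ea332a8d75b1ec0dc1f0c9f7930d30c9359279e86df29067bbbc5d9bcf87839f855ac7a677
```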
We have an automatic build system that spits out packages, regression-tested & wrapped into a neat installer, ready for end-users to download & deploy.
We track end-user support requests/bug reports via Redmine. So far we have uploaded the packages manually to the respective 'Files' section of the Redmine project, via the web interface.
What I'd like to do is to automate this step.
I imagine this would require a few lines of Ruby to interface with Redmine's db. I have zero knowledge about Redmine's internals. :)
Basically I want the equivalent of a
mv package-x.y.z.tbz /usr/local/redmine/files/
as a Ruby (or whatever language suits the need) script that creates the right filename and registers the file in redmine's db so it shows up as if it had been uploaded through the Web interface, manually.
Cheers!
I've been frustrated with Redmine about things like this before. But before I go much further: is there a specific reason why you're using the Files section for this? It seems another tool (such as SSH/SFTP for uploading to someplace accessible to HTTP) might be a better fit for your needs. It would also be easily scriptable. Just point people to some constant URL like http://yourcompany.com/productname-current.zip.
If you really need to use Redmine for managing this, you might check out Mechanize: http://mechanize.rubyforge.org/. They should have a RESTful API also, but I've never used it.
I found this post, hope it can help you
Automating packaging and RedMine
I'm a bit late, but I've written a Redmine upload tool in Perl, using the WWW::Mechanize module.
Please find it on http://github.com/subogero/redgit
As already stated, you can use Mechanize for that.
There's a Python script written by Gert van Dijk: https://github.com/gertvdijk/redmine-file-uploader
To use it, you'll have to install the Python Mechanize package first:
easy_install mechanize
If you prefer Ruby, you can use:
require 'mechanize'
# Replaces \ with / and removes "
ARGV.map!{|a|a.gsub('\\','/').gsub(/^"(.+)"$/,'\\1')}
filename = ARGV[0] || abort('Filename must be specified')
puts "File: #{filename}"
url = ARGV[1] || abort('Redmine URL must be specified')
puts "Redmine URL: #{url}"
username = ARGV[2] || abort('Redmine username must be specified')
puts "Username: #{username}"
password = ARGV[3] || abort('Redmine password must be specified')
puts "Password: #{'*' * password.length}"
project = ARGV[4] || abort('Redmine project must be specified')
puts "Project: #{project}"
login_page_path = '/login'
files_page_path = '/projects/' + project + '/files'
agent = Mechanize.new
# No certificate verification (I had to use this hack because our server is bound to custom port)
# agent.agent.http.verify_mode = OpenSSL::SSL::VERIFY_NONE
agent.get(URI.join(url, login_page_path)) do |login_page|
  login_page.form_with(:action => login_page_path) do |login_form|
    login_form.username = username
    login_form.password = password
  end.submit
end

agent.get(URI.join(url, files_page_path + '/new')) do |upload_page|
  upload_page.form_with(:action => files_page_path) do |upload_form|
    upload_form.file_uploads.first.file_name = filename
  end.submit
end
And don't forget to install gem first:
gem install mechanize
Is there a cURL library for Ruby?
Curb and Curl::Multi provide cURL bindings for Ruby.
If you like it less low-level, there is also Typhoeus, which is built on top of Curl::Multi.
Use OpenURI and
open("http://...", :http_basic_authentication=>[user, password])
for accessing sites/pages/resources that require HTTP authentication.
Curb-fu is a wrapper around Curb which in turn uses libcurl. What does Curb-fu offer over Curb? Just a lot of syntactic sugar - but that can be often what you need.
HTTP clients is a good page to help you make decisions about the various clients.
You might also have a look at Rest-Client
If you know how to write your request as a curl command, there is an online tool that can turn it into ruby (2.0+) code: curl-to-ruby
Currently, it knows the following options: -d/--data, -H/--header, -I/--head, -u/--user, --url, and -X/--request. It is open to contributions.
The eat gem is a "replacement" for OpenURI, so you need to install the eat gem first:
$ gem install eat
Now you can use it
require 'eat'
eat('http://yahoo.com') #=> String
eat('/home/seamus/foo.txt') #=> String
eat('file:///home/seamus/foo.txt') #=> String
It uses HTTPClient under the hood. It also has some options:
eat('http://yahoo.com', :timeout => 10) # timeout after 10 seconds
eat('http://yahoo.com', :limit => 1024) # only read the first 1024 chars
eat('https://yahoo.com', :openssl_verify_mode => 'none') # don't bother verifying SSL certificate
Here's a little program I wrote to get some files with.
base = "http://media.pragprog.com/titles/ruby3/code/samples/tutthreads_"

for i in 1..50
  url  = "#{base}#{i}.rb"
  file = "tutthreads_#{i}.rb"
  File.open(file, 'w') do |f|
    system "curl -o #{f.path} #{url}"
  end
end
I know it could be a little more elegant, but it serves its purpose. Check it out. I just cobbled it together today because I got tired of going to each URL to get the code for the book that was not included in the source download.
There's also Mechanize, which is a very high-level web scraping client that uses Nokogiri for HTML parsing.
Adding a more recent answer: HTTPClient is another Ruby HTTP library that supports concurrent requests and many of curl's goodies. I use HTTPClient and Typhoeus for any non-trivial apps.
To state the maybe-too-obvious: backticks execute shell code in Ruby as well. Provided your Ruby code is running in an environment where curl is available:
puts `curl http://www.google.com?q=hello`
or
result = `
curl -X POST https://www.myurl.com/users \
-d "name=pat" \
-d "age=21"
`
puts result
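One caveat with backticks: they capture stdout but discard stderr, and they bury the exit status in $?. If you're scripting curl this way, the stdlib's Open3 returns all three explicitly. A sketch, using echo as a network-free stand-in for the curl call:

```ruby
require 'open3'

# capture3 returns stdout, stderr, and a Process::Status object.
# 'echo' stands in here for the curl invocation shown above.
stdout, stderr, status = Open3.capture3('echo', 'hello')
puts stdout if status.success?
```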
A nice minimal reproducible example to copy/paste into your rails console:
require 'open-uri'
require 'nokogiri'
url = "https://www.example.com"
html_file = URI.open(url)
doc = Nokogiri::HTML(html_file)
doc.css("h1").text
# => "Example Domain"