I'm trying to get the query_string from a Ruby file. For example:
http://localhost/rubyfile.rb?hello=world
I would like to be able to ask what hello is and have it print "world", but for the life of me I cannot find the correct syntax/way to do it anywhere. Even the Ruby documentation seems dazed.
#!/program files (x86)/ruby/bin/ruby
require 'cgi'
cgi_request = CGI::new("html4")
This simply starts a new CGI spawn when the file is run, but how do I find the query_string?
puts cgi.params[query_string]
Doesn't seem to work. I assume there is something I'm completely missing and I'm being stupid, but...
It should be simple, shouldn't it?
Thanks
The following should work:

require "cgi"

cgi_request = CGI::new("html4")

# a CGI script must print the header(s) and a blank line before any output
puts "Content-Type: text/html; charset=UTF-8"
puts

puts cgi_request['hello']        # first value for "hello"  => world
puts cgi_request.query_string    # the raw query string     => hello=world
puts cgi_request.params['hello'] # values come as an array  => ["world"]
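To try this without a web server, you can fake the request from the shell, since CGI reads the query from the environment when REQUEST_METHOD is GET (a minimal sketch; Unix shell syntax shown, on Windows set the variables with set first):

$ REQUEST_METHOD=GET QUERY_STRING="hello=world" ruby rubyfile.rb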
I need to replace a large set of broken HTML links in a file. For that, I'd need to do a find/replace with any kind of regular expression disabled, i.e. the kind of basic find/replace you would do in Notepad.
I came across a Ruby one-liner which should do exactly that:
ruby -p -i -e "gsub('Home', 'NEWLINK')" test.txt
However, the file test.txt is not changed, nor is any output returned. (I don't know much about Ruby, so I might just be missing something obvious.)
Is there any other tool which does what I need?
Edit: I'd expect that the following test.txt file:
Home
....is changed to:
NEWLINK
Thanks
Instead of a regular expression, consider using an HTML parser which actually understands HTML and won't leave you with a broken HTML document.
# link_parser.rb
require 'bundler/inline'

gemfile do
  source 'https://rubygems.org'
  gem 'nokogiri'
end

fn = ARGV[0]

if File.exist?(fn)
  puts "Processing #{fn}..."
  File.open(fn, 'r+') do |file|
    doc = Nokogiri::HTML(file)
    links = doc.css('a[href="index.php?option=com_content&view=article&id=130&catid=111&Itemid=324"]')
    if links.any?
      links.each do |link|
        link['href'] = "NEWLINK"
      end
      file.rewind
      file.write(doc.to_s)
      file.truncate(file.pos) # drop leftover bytes if the new HTML is shorter
      puts "#{links.length} links replaced"
    else
      puts "No links found"
    end
  end
else
  puts "File not found."
end
Run it with:
ruby link_parser.rb path/to/file.html
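If all you actually need is the literal, regex-free find/replace from the question, a tiny script also works and sidesteps any shell quoting issues with the one-liner (a minimal sketch; String#gsub treats a string pattern literally, not as a regex):

text = File.read("test.txt")
File.write("test.txt", text.gsub("Home", "NEWLINK"))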
I was trying to run Sinatra and Ruby on my MacBook, and all was working fine. Then, suddenly, I tried again and it just hangs with no output.
I can't access localhost or anything. I don't know what to do. I've been researching for hours. Please help me.
This is what my ruby code looks like:
require 'sinatra'
gets '/ejemplo1' do
puts 'Hello World'
end
Seems to be a typo. It should be get, not gets. (At the top level, gets blocks waiting to read a line of input, so the script hangs before Sinatra ever starts the server.)
require 'sinatra'
get '/ejemplo1' do
puts 'Hello World'
end
Additional info:
gets in Ruby is a way to read a line of user input:
name = gets
puts "Your name is #{name}"
As mentioned by @Norly Canarias, you should use get for routing in Sinatra. Moreover, if you use a puts statement in a get block, it will print only to the terminal where you ran your code, not to the webpage you see at localhost. The correct way to make it display in the webpage is to return the string from the block, as given below:
require 'sinatra'
get '/ejemplo1' do
'Hello World'
end
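Run the file with ruby app.rb (app.rb standing in for whatever the script is called) and open http://localhost:4567/ejemplo1 in the browser; 4567 is Sinatra's default port, and the string returned by the block is what Sinatra renders as the response body.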
I want to be able to have one Ruby file that requires all the common dependencies, so that other files can just have one require of this shared file. For example: I have foo.rb, bar.rb and allrequires.rb. I want to have the line require "allrequires.rb" in both foo.rb and bar.rb, but bar.rb doesn't need all the requires.
Does it matter if I use require in a .rb file that does not really need everything required? Could it have an impact on performance, maybe?
I am currently on ruby 1.8.7 (2010-08-16 patchlevel 302) [i386-mingw32]
Update
It looks like it is not the best idea to share all requires between both .rb files. What would be a solution to that?
Right now I can think of using the file name in a condition.
There are two main performance penalties:
1. The time taken to do the require itself. In Ruby 1.9.1 and 1.9.2, the time taken to do all the requires scaled worse than linearly: if you doubled the number of requires, it took more than twice as long (around four times as long, as I remember it).
2. The time taken to execute the code in the file being required. If you have code like the following, then executing it will take a non-trivial amount of time:
class MyClass
  # the file is read at require time, every time this file is loaded
  MY_CONSTANT = File.read("data.txt")
end
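If you want to see what a given require actually costs on your setup, the standard Benchmark module gives a quick answer (a minimal sketch; 'json' is just an example library):

require 'benchmark'

seconds = Benchmark.realtime { require 'json' }
puts "require 'json' took %.2f ms" % (seconds * 1000)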
Another approach is conditional requiring. The following script raises no error on the JSON parser because it is named require1.rb; in scripts named anything other than require1.rb or script2.rb, the gem isn't required:
# require JSON only when this file's name appears in the list;
# String#[string] returns the substring if found, nil otherwise
require 'json' if "require1.rb, script2.rb"[File.basename(__FILE__)]

p File.basename(__FILE__)

text = '[{ "name" : "car", "status": "good"},
         { "name" : "bus", "status": "bad"},
         { "name" : "taxi", "status": "soso"},
         { "noname":"", "status" : "worse"}]'

data = JSON.parse(text)
p data.collect { |item| item['name'] }
EDIT: here is a version that uses an array:
["require1.rb","script1.rb"].find{|script|require 'json' if script===File.basename(__FILE__)}
Yes, there will be a speed penalty; you can benchmark how much, to decide whether it really matters.
With multiple requires, I put them in my code like this so that they don't take up much screen space:
['green_shoes','Hpricot'].each(&method(:require))
You could also do a conditional require, but that would be ugly to have scattered all around your code:
begin
  data = JSON.parse(text)
rescue
  # if the first parse fails because JSON isn't loaded,
  # fall back to the pure-Ruby implementation and retry
  require 'json_pure'
  data = JSON.parse(text)
end
So in short, give each .rb file its own requires.
My final solution is (in case someone finds it useful):
requires.rb is called from web.rb or testweb.rb, rufus.rb or testrufus.rb:

# figure out which file required us
called_from = caller[0].split(":")[0]

puts "loading web 'requires' for file: #{called_from} ..." if (["web"].any?{|s| called_from[s]})
puts "loading rufus 'requires' for file: #{called_from} ..." if (["rufus"].any?{|s| called_from[s]})
puts "loading settings 'requires' for file: #{called_from} ..." if (["settings"].any?{|s| called_from[s]})

require 'rubygems'
require 'socket' if (["web","settings"].any?{|s| called_from[s]})
require 'ruby-growl' if (["web","settings","rufus"].any?{|s| called_from[s]})
require 'sinatra' if (["web"].any?{|s| called_from[s]})
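The same idea can be written with a lookup table, so each gem and its callers are listed only once (a sketch based on the code above; the hash contents are illustrative):

called_from = caller[0].split(":")[0]

REQUIRES = {
  'socket'     => ["web", "settings"],
  'ruby-growl' => ["web", "settings", "rufus"],
  'sinatra'    => ["web"]
}

REQUIRES.each do |gem_name, names|
  require gem_name if names.any? { |s| called_from[s] }
end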
Thank you @Andrew for the explanation and @peter for the hint on how to solve this.
I've installed the latest version of the Stanford Parser and the Ruby wrapper library for it. When trying to test it with a simple example from the website:

vi test.rb:

require "stanfordparser"

preproc = StanfordParser::DocumentPreprocessor.new
puts preproc.getSentencesFromString("This is a sentence. So is this.")

Running ruby -rubygems test.rb prints:
This
is
a
sentence
.
So
is
this
.
This is a sanity check really - am I doing something wrong, or is this a bug in the parser or wrapper?
You might be confused about how puts is formatting the output. Try this:
x = preproc.getSentencesFromString("This is a sentence. So is this.")
puts x.inspect
to make sure that you're getting what you're supposed to be getting.
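For context: puts flattens an array and prints each element on its own line, while inspect (or p) shows the array's structure, so the one-word-per-line output above doesn't necessarily mean the preprocessor is broken. A quick illustration:

words = ["This", "is", "a", "sentence", "."]
puts words   # prints one element per line
p words      # => ["This", "is", "a", "sentence", "."]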
Is there a cURL library for Ruby?
Curb and Curl::Multi provide cURL bindings for Ruby.
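A basic GET with Curb looks something like this (a minimal sketch, assuming the curb gem is installed; example.com is a placeholder):

require 'curb'

http = Curl::Easy.http_get("http://www.example.com")
puts http.response_code
puts http.body_str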
If you like it less low-level, there is also Typhoeus, which is built on top of Curl::Multi.
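And roughly the equivalent with Typhoeus (again a sketch, under the same assumptions):

require 'typhoeus'

response = Typhoeus.get("http://www.example.com")
puts response.code
puts response.body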
Use OpenURI with
open("http://...", :http_basic_authentication => [user, password])
for accessing sites/pages/resources that require HTTP authentication.
Curb-fu is a wrapper around Curb, which in turn uses libcurl. What does Curb-fu offer over Curb? Just a lot of syntactic sugar - but that can often be what you need.
HTTP clients is a good page to help you make decisions about the various clients.
You might also have a look at Rest-Client.
If you know how to write your request as a curl command, there is an online tool that can turn it into ruby (2.0+) code: curl-to-ruby
Currently, it knows the following options: -d/--data, -H/--header, -I/--head, -u/--user, --url, and -X/--request. It is open to contributions.
The eat gem is a "replacement" for OpenURI, so you need to install the eat gem in the first place:
$ gem install eat
Now you can use it
require 'eat'
eat('http://yahoo.com') #=> String
eat('/home/seamus/foo.txt') #=> String
eat('file:///home/seamus/foo.txt') #=> String
It uses HTTPClient under the hood. It also has some options:
eat('http://yahoo.com', :timeout => 10) # timeout after 10 seconds
eat('http://yahoo.com', :limit => 1024) # only read the first 1024 chars
eat('https://yahoo.com', :openssl_verify_mode => 'none') # don't bother verifying SSL certificate
Here's a little program I wrote to get some files with.
base = "http://media.pragprog.com/titles/ruby3/code/samples/tutthreads_"
for i in 1..50
url = "#{ base }#{ i }.rb"
file = "tutthreads_#{i}.rb"
File.open(file, 'w') do |f|
system "curl -o #{f.path} #{url}"
end
end
I know it could be a little more elegant, but it serves its purpose. Check it out. I just cobbled it together today because I got tired of going to each URL to get the code for the book, which was not included in the source download.
There's also Mechanize, which is a very high-level web scraping client that uses Nokogiri for HTML parsing.
Adding a more recent answer: HTTPClient is another Ruby HTTP client library that supports parallel threads and lots of the curl goodies. I use HTTPClient and Typhoeus for any non-trivial apps.
To state the maybe-too-obvious: backticks execute shell code in Ruby as well. Provided your Ruby code is running in a shell that has curl:
puts `curl http://www.google.com?q=hello`
or
result = `
curl -X POST https://www.myurl.com/users \
-d "name=pat" \
-d "age=21"
`
puts result
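One caveat: backticks capture only standard output. To find out whether the command actually succeeded, check the $? status afterwards (a small sketch; -f makes curl return a failure status on HTTP errors, -sS silences the progress bar but keeps error messages):

body = `curl -fsS http://www.example.com`
unless $?.success?
  warn "curl failed with exit status #{$?.exitstatus}"
end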
A nice minimal reproducible example to copy/paste into your rails console:
require 'open-uri'
require 'nokogiri'
url = "https://www.example.com"
html_file = URI.open(url)
doc = Nokogiri::HTML(html_file)
doc.css("h1").text
# => "Example Domain"