Using Mechanize to log into https://kindle.amazon.com/login - ruby

I am trying to use Mechanize to log into my Kindle account at Amazon.
The login page URL is https://kindle.amazon.com/login
I can log into this page manually without issue, but if I try it using the following code it always fails with an error:
require 'mechanize'
mechanize_agent = Mechanize.new
mechanize_agent.user_agent_alias = 'Windows Mozilla'
signin_page = mechanize_agent.get("https://kindle.amazon.com/login")
signin_form = signin_page.form("signIn")
signin_form.email = "email@example.com"
signin_form.password = "password"
post_signin_page = mechanize_agent.submit(signin_form)
The result is always an error page instead of a successful sign-in (again, I'm certain my script is using valid values).

It looks like Mechanize is trying to submit the form without the proper action. Try using the Continue button, and submit the form with that button:
# ...
submit_button = signin_form.buttons.find { |b| b.value == "Continue" }
post_signin_page = mechanize_agent.submit signin_form, submit_button
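Putting it together, here is a complete sketch of the login flow; the form name "signIn" and the button label "Continue" are taken from the question and may differ if Amazon changes the page:
require 'mechanize'
agent = Mechanize.new
agent.user_agent_alias = 'Windows Mozilla'
signin_page = agent.get("https://kindle.amazon.com/login")
signin_form = signin_page.form("signIn")
signin_form.email = "email@example.com"
signin_form.password = "password"
# Locate the Continue button explicitly so its name/value pair is included in the POST.
continue_button = signin_form.button_with(:value => "Continue")
post_signin_page = agent.submit(signin_form, continue_button)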

Related

Ruby Mechanize - how to wait until a web page is fully loaded

This is my first time using Ruby Mechanize to perform web crawling. It works fine except on one website, where it never returns an expected URL link because the web page isn't fully loaded by the time Mechanize has already searched it.
Below is my code
agent = Mechanize.new
page = agent.get(website_url)
page.links.each do |link|
  tmp_href_str = link.href.to_s
  job_title = link.text.strip.gsub(/\r\n?/, ' ')  # collapse line breaks in the link text
  job_title_length = link.text.strip.length
end

Mechanize form submission

I have a website that I am attempting to scrape using Mechanize.
When I submit the form, the form is submitted with a URL of the following format:
https://www.website.com/Login/Options?returnURL=some_form_options
(If I enter that URL in the browser, it will send me to a nice error page saying that the requested page does not exist)
Whereas, if I submit the form from the website, the returned URL will be of the following format:
https://www.website.com/topic/country/list_of_form_options
The website has a login form that is not necessary to fill in to be able to submit a search query.
Any idea why I would get a different URL when submitting the same form with Mechanize? And how can I work around that?
I cannot process the URL I get after "mechanizing" the form.
Thanks!
You can find the exact form that you want to submit and then submit it. If you cannot locate it directly, you can even add form fields with Mechanize and submit the form that way. Here is the code I used in my project (and see the sketch after the rake task for a quick way to list the forms Mechanize sees).
I created a rake task for this:
namespace :test_namespace do
  task :mytask => [:environment] do
    site = "http://www.website.com/search/search.aspx?term=search term"
    # prepare user agent
    ua = Mechanize.new
    page = ua.get(site)

    loop do
      # each result row lives in a div with class "resultsNoBackground"
      page.search("//div[@class='resultsNoBackground']").each do |res|
        puts res.at("table").at('tr').at('td').text
        link_text = res.at_css('strong').at('a').text
        link_href = res.at_css('strong').at('a')['href']
        link_href = "http://www.website.com" + link_href

        page_content = ''
        res.css('span').each do |ss|
          ss.css('strong').remove
          page_content = ss.text.gsub(/Vi.*s\)/, '')
        end
        # puts "HERE IS THE SUMMARY ......#{page_content}"
      end

      # the "next" link is an ASP.NET postback: set __EVENTTARGET by hand
      # and resubmit the form to fetch the next page of results
      if page.search("#ctl00_ContentPlaceHolder1_ctrlResults_gvResults_ctl01_lbNext").count > 0
        form = page.forms.first
        form.add_field! "__EVENTTARGET", "ctl00$ContentPlaceHolder1$ctrlResults$gvResults$ctl01$lbNext"
        form.add_field! "__EVENTARGUMENT", ""
        page = form.submit
      else
        break
      end
    end
  end
end
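If you are not sure which form Mechanize is picking up in the first place, a rough inspection sketch like this lists every form on the page with its name, action, and fields so you can choose the right one before submitting (search_page_url is a placeholder for the page that actually contains the form):
agent = Mechanize.new
page = agent.get(search_page_url)  # placeholder for the page holding the search form
page.forms.each_with_index do |form, i|
  puts "form #{i}: name=#{form.name.inspect} action=#{form.action.inspect}"
  form.fields.each { |field| puts "  #{field.name} = #{field.value.inspect}" }
end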

Crawl data using Ruby Mechanize

I am crawling data from http://www.mca.gov.in/DCAPortalWeb/dca/MyMCALogin.do?method=setDefaultProperty&mode=53
Below is the code I have tried :
uri = "http://www.mca.gov.in/DCAPortalWeb/dca/MyMCALogin.do?method=setDefaultProperty&mode=53"
@html, html_content = @mobj.get_data(uri)
agent = Mechanize.new
html_page = agent.get uri
html_form = html_page.form
html_form.radiobuttons_with(:name => 'search', :value => '2')[0].check
html_form.submit
puts html_page.content
Error:
/var/lib/gems/1.9.1/gems/mechanize-2.7.3/lib/mechanize/http/agent.rb:308:in `fetch': 500 => Net::HTTPInternalServerError for http://www.mca.gov.in/DCAPortalWeb/dca/ProsecutionDetailsSRAction.do -- unhandled response (Mechanize::ResponseCodeError)
from /var/lib/gems/1.9.1/gems/mechanize-2.7.3/lib/mechanize.rb:1281:in `post_form'
from /var/lib/gems/1.9.1/gems/mechanize-2.7.3/lib/mechanize.rb:548:in `submit'
from /var/lib/gems/1.9.1/gems/mechanize-2.7.3/lib/mechanize/form.rb:223:in `submit'
from ministry_corp_aff.rb:32:in `start'
from ministry_corp_aff.rb:52:in `<main>'
If I manually click on the 3rd radio button and then submit, I get a .zip file. I was trying to fetch data from the .xls file inside that zip.
The radio button has an onclick event handler that triggers the execution of some javascript. In addition, clicking on the Submit <a> tag also causes some javascript to execute. That javascript probably sets some values that are returned with the form, which the server examines.
Mechanize cannot execute the javascript. You need Selenium WebDriver for that.
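A minimal selenium-webdriver sketch of that approach; the locators here are guesses (the :name => 'search' comes from the Mechanize snippet above, the 'Submit' link text is an assumption), and actually saving the downloaded .zip needs extra browser download configuration:
require 'selenium-webdriver'

driver = Selenium::WebDriver.for :firefox
driver.get "http://www.mca.gov.in/DCAPortalWeb/dca/MyMCALogin.do?method=setDefaultProperty&mode=53"

# Click the third radio button so its onclick javascript runs.
driver.find_elements(:name, 'search')[2].click

# The submit control is an <a> tag backed by javascript; 'Submit' is assumed link text.
driver.find_element(:link_text, 'Submit').click

driver.quit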

Ruby Mechanize 405 Net::HTTPMethodNotAllowed Error While Scraping Fedex Billing

I have a script that goes into Fedex Billing each week when they mail me my invoice, digs out information and posts it to xpenser.com. After the recent Fedex Billing site redesign, when I run this code:
agent = Mechanize.new
page = agent.get 'http://fedex.com/us/fcl/pckgenvlp/online-billing/'
form = page.form_with(:name => 'logonForm')
form.username = FEDEX['username']
form.password = FEDEX['password']
page = agent.submit form
pp page
I receive this error:
Mechanize::ResponseCodeError: 405 => Net::HTTPMethodNotAllowed
I see there is a javascript auth function that seems to build a URL that sets hidden variables. I've tried to pass various combinations of variable strings in without success.
While Mechanize doesn't support javascript, it will pass variable strings along, and if you hit the correct combination you can authenticate that way. I'm hoping to do that here.
Using mechanize-1.0.0 the following works:
agent = Mechanize.new
page = agent.get 'http://fedex.com/us/fcl/pckgenvlp/online-billing/'
form = page.form_with(:name => 'logonForm')
form.username = FEDEX['username']
form.password = FEDEX['password']
form.add_field!('field_name', 'Page$2')
page = agent.submit form
pp page
Try this; it may help you.
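If 'Page$2' doesn't work for you, it can help to dump the form's hidden fields first, since that is usually where the javascript auth function puts its values. A small inspection sketch, using the same form lookup as above:
agent = Mechanize.new
page = agent.get 'http://fedex.com/us/fcl/pckgenvlp/online-billing/'
form = page.form_with(:name => 'logonForm')
# Print every hidden input already on the form; whatever the javascript would
# normally fill in has to end up in one of these (or in a field you add_field!).
form.hiddens.each do |hidden|
  puts "#{hidden.name} = #{hidden.value.inspect}"
end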

How to pass cookies from one page to another using curl in Ruby?

I am writing a video crawler in Ruby. It has to log in to a page with cookies enabled and then download pages behind the login. For that I am using the curl (Curb) library in Ruby. I can successfully log in, but I can't download the inner pages with curl. How can I fix this, or download the pages some other way?
My code is
curl = Curl::Easy.new(1st url)
curl.follow_location = true
curl.enable_cookies = true
curl.cookiefile = "cookie.txt"
curl.cookiejar = "cookie.txt"
curl.http_post(1st url,field)
curl.perform
curl = Curl::Easy.perform(2nd url)
curl.follow_location = true
curl.enable_cookies = true
curl.cookiefile = "cookie.txt"
curl.cookiejar = "cookie.txt"
curl.http_get
code = curl.body_str
What I've seen in writing my own similar "post-then-get" script is that ruby/Curb (I'm using version 0.7.15 with ruby 1.8) seems to ignore the cookiejar/cookiefile fields of a Curl::Easy object. If I set either of those fields and the http_post completes successfully, no cookiejar or cookiefile file is created. Also, curl.cookies will still be nil after your curl.http_post, however, the cookies ARE set within the curl object. I promise :)
I think where you're going wrong is here:
curl = Curl::Easy.perform(2nd url)
The curb documentation states that this creates a new object. That new object doesn't have any of your existing cookies set. If you change your code to look like the following, I believe it should work. I've also removed the curl.perform for the first url since curl.http_post already implicitly does the "perform". You were basically http_post'ing twice before trying your http_get.
curl = Curl::Easy.new(1st url)
curl.follow_location = true
curl.enable_cookies = true
curl.http_post(1st url,field)
curl.url = 2nd url
curl.http_get
code = curl.body_str
If this still doesn't seem to be working for you, you can verify if the cookie is getting set by adding
curl.verbose = true
before
curl.http_post
Your Curl::Easy object will dump all the headers that it gets in the response from the server to $stdout, and somewhere in there you should see a line stating that it added/set a cookie. I don't have any example output right now but I'll try to post a follow-up soon.
HTTPClient automatically enables cookies, as does Mechanize.
From the HTTPClient docs:
clnt = HTTPClient.new
clnt.get_content(url1) # receives Cookies.
clnt.get_content(url2) # sends Cookies if needed.
Posting a form is easy too:
body = { 'keyword' => 'ruby', 'lang' => 'en' }
res = clnt.post(uri, body)
Mechanize makes this sort of thing really simple (it will handle storing the cookies, among other things).
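For comparison, a rough Mechanize version of the same post-then-get flow; the form and field names below are placeholders, but the cookie handling is automatic because the agent keeps a cookie jar and replays it on every request:
require 'mechanize'

agent = Mechanize.new
login_page = agent.get(first_url)      # first_url / second_url stand in for your two URLs
login_form = login_page.forms.first    # pick the real login form here
login_form['username'] = 'user'        # placeholder field names
login_form['password'] = 'secret'
agent.submit(login_form)

# The session cookie from the login response lives in agent.cookie_jar
# and is sent automatically with the next request.
page = agent.get(second_url)
puts page.body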
