Ruby Mechanize 405 Net::HTTPMethodNotAllowed Error While Scraping Fedex Billing - ruby

I have a script that goes into Fedex Billing each week when they mail me my invoice, digs out information and posts it to xpenser.com. After the recent Fedex Billing site redesign, when I run this code:
agent = Mechanize.new
page = agent.get 'http://fedex.com/us/fcl/pckgenvlp/online-billing/'
form = page.form_with(:name => 'logonForm')
form.username = FEDEX['username']
form.password = FEDEX['password']
page = agent.submit form
pp page
I receive this error:
Mechanize::ResponseCodeError: 405 => Net::HTTPMethodNotAllowed
I see there is a JavaScript auth function that appears to build a URL and set hidden form variables. I've tried passing in various combinations of variable strings without success.
While Mechanize doesn't execute JavaScript, it will submit whatever field values you set, and if you hit the correct combination you can authenticate that way. I'm hoping to do that here.

Using mechanize-1.0.0 the following works:
agent = Mechanize.new
page = agent.get 'http://fedex.com/us/fcl/pckgenvlp/online-billing/'
form = page.form_with(:name => 'logonForm')
form.username = FEDEX['username']
form.password = FEDEX['password']
form.add_field!('field_name', 'Page$2')
page = agent.submit form
pp page
Try this; it may help you.
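
If that particular field doesn't work, it helps to see which fields the form already carries, since the login JavaScript usually just fills some of them in before submitting. A minimal sketch using Mechanize's standard form API (nothing here is specific to the FedEx page):
form = page.form_with(:name => 'logonForm')
# print every field, including hidden ones, to see what the server expects
form.fields.each do |field|
  puts "#{field.name} => #{field.value.inspect}"
end
form.hiddens.each do |hidden|
  puts "hidden: #{hidden.name} => #{hidden.value.inspect}"
end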

Related

Using Mechanize to log into https://kindle.amazon.com/login

I am trying to use Mechanize to log into my Kindle account at Amazon.
The login page URL is https://kindle.amazon.com/login
I can manually log into this page without issue, but if I try it using the following code it always fails with an error.
require 'mechanize'
mechanize_agent = Mechanize.new
mechanize_agent.user_agent_alias = 'Windows Mozilla'
signin_page = mechanize_agent.get("https://kindle.amazon.com/login")
signin_form = signin_page.form("signIn")
signin_form.email = "email@example.com"
signin_form.password = "password"
post_signin_page = mechanize_agent.submit(signin_form)
The result is always an error page, even though I'm certain my script is using valid values.
It looks like Mechanize is trying to submit the form without the proper action. Try using the Continue button, and send the form with that button:
# ...
submit_button = signin_form.buttons.find { |b| b.value == "Continue" }
post_signin_page = mechanize_agent.submit signin_form, submit_button
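
Equivalently, Mechanize can look the button up for you and submit through the form itself; a small variant of the same idea (the "Continue" value is taken from the answer above):
submit_button = signin_form.button_with(:value => "Continue")
post_signin_page = signin_form.submit(submit_button)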

Ruby Mechanize - how to wait till a web page is fully loaded

This is my first time using Ruby Mechanize for web crawling. It works fine except on one website, where it never returns the expected URL link because the page isn't fully loaded by the time Mechanize searches it.
Below is my code
agent = Mechanize.new
page = agent.get(website_url)
page.links.each do |link|
  tmp_href_str = link.href.to_s
  job_title = link.text.strip.gsub(/\r\n?/, ' ')
  job_title_length = link.text.strip.length
end
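
Mechanize only fetches the raw HTML and never executes the page's JavaScript, so there is nothing it can "wait" for; links injected by scripts will simply never appear. If that is the case here, a real browser driver is needed. A minimal sketch with the selenium-webdriver gem (the :chrome driver and the bare 'a' selector are assumptions, not taken from the question):
require 'selenium-webdriver'

driver = Selenium::WebDriver.for :chrome
driver.get(website_url)
# wait up to 10 seconds for the dynamically inserted links to appear
wait = Selenium::WebDriver::Wait.new(:timeout => 10)
wait.until { driver.find_elements(:css => 'a').any? }
driver.find_elements(:css => 'a').each do |link|
  puts "#{link.text.strip} #{link.attribute('href')}"
end
driver.quit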

Mechanize form submission

I have a website that I am attempting to scrape using Mechanize.
When I submit the form, it is submitted with a URL of the following format:
https://www.website.com/Login/Options?returnURL=some_form_options
(If I enter that URL in the browser, it sends me to an error page saying that the requested page does not exist.)
Whereas, if I submit the form from the website, the returned URL will be of the following format :
https://www.website.com/topic/country/list_of_form_options
The website has a login form that is not necessary to fill in to be able to submit a search query.
Any idea why I would get a different URL submitting the same form with Mechanize? And how to counter that?
I cannot process the URL I get after "mechanizing" the form.
Thanks!
You can locate the exact form you want to submit and submit it; if the field you need isn't present, you can add it with Mechanize's add_field! and then submit the form. Here is code I used in one of my projects, wrapped in a rake task:
namespace :test_namespace do
  task :mytask => [:environment] do
    site = "http://www.website.com/search/search.aspx?term=search term"
    # prepare user agent
    ua = Mechanize.new
    page = ua.get(site)
    while true
      # each search result sits in a div with class "resultsNoBackground"
      page.search("//div[@class='resultsNoBackground']").each do |res|
        puts res.at("table").at('tr').at('td').text
        link_text = res.at_css('strong').at('a').text
        link_href = res.at_css('strong').at('a')['href']
        link_href = "http://www.website.com" + link_href
        page_content = ''
        res.css('span').each do |ss|
          ss.css('strong').remove
          page_content = ss.text.gsub(/Vi.*s\)/, '')
        end
      end
      # ASP.NET paging: if a "next" link exists, mimic __doPostBack by
      # setting __EVENTTARGET on the form and resubmitting
      if page.search("#ctl00_ContentPlaceHolder1_ctrlResults_gvResults_ctl01_lbNext").count > 0
        form = page.forms.first
        form.add_field! "__EVENTTARGET", "ctl00$ContentPlaceHolder1$ctrlResults$gvResults$ctl01$lbNext"
        form.add_field! "__EVENTARGUMENT", ""
        page = form.submit
      else
        break
      end
    end
  end
end
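
As for the original question, getting a different URL usually means Mechanize submitted a different form than the browser did (the login form, for example) or dropped the submit button's name/value pair. A quick diagnostic, assuming page is the fetched search page:
# list every form Mechanize sees so you can pick the right one
page.forms.each_with_index do |f, i|
  puts "#{i}: name=#{f.name.inspect} action=#{f.action} method=#{f.method}"
end
# then submit that specific form together with its button, e.g.
# form = page.forms[1]
# page = form.submit(form.buttons.first)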

Crawl data using ruby mechanize

I am crawling data from http://www.mca.gov.in/DCAPortalWeb/dca/MyMCALogin.do?method=setDefaultProperty&mode=53
Below is the code I have tried:
uri = "http://www.mca.gov.in/DCAPortalWeb/dca/MyMCALogin.do?method=setDefaultProperty&mode=53"
@html, html_content = @mobj.get_data(uri)
agent = Mechanize.new
html_page = agent.get uri
html_form = html_page.form
html_form.radiobuttons_with(:name => 'search',:value => '2')[0].check
html_form.submit
puts html_page.content
Error:
var/lib/gems/1.9.1/gems/mechanize-2.7.3/lib/mechanize/http/agent.rb:308:in `fetch': 500 => Net::HTTPInternalServerError for http://www.mca.gov.in/DCAPortalWeb/dca/ProsecutionDetailsSRAction.do -- unhandled response (Mechanize::ResponseCodeError)
from /var/lib/gems/1.9.1/gems/mechanize-2.7.3/lib/mechanize.rb:1281:in `post_form'
from /var/lib/gems/1.9.1/gems/mechanize-2.7.3/lib/mechanize.rb:548:in `submit'
from /var/lib/gems/1.9.1/gems/mechanize-2.7.3/lib/mechanize/form.rb:223:in `submit'
from ministry_corp_aff.rb:32:in `start'
from ministry_corp_aff.rb:52:in `<main>'
If I manually select the third radio button and then submit, I get a .zip file. I was trying to fetch data from the .xls file inside that zip.
The radio button has an onclick event handler that triggers some JavaScript. In addition, clicking the Submit <a> tag also executes JavaScript. That JavaScript probably sets some values that are returned with the form, which the server examines.
Mechanize cannot execute JavaScript; you need Selenium WebDriver for that.
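
A minimal sketch of the same flow with selenium-webdriver (the radio locator mirrors the Mechanize attempt above, but the 'Submit' link text is an assumption about the page):
require 'selenium-webdriver'

driver = Selenium::WebDriver.for :firefox
driver.get(uri)
# select the radio button; name/value are taken from the Mechanize code above
driver.find_element(:css => "input[name='search'][value='2']").click
# clicking the link lets its onclick JavaScript run, unlike Mechanize
driver.find_element(:link_text => 'Submit').click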

Ruby webscrape script for GoDaddy

I'm new to Ruby, and for my first scripting assignment I've been asked to write a web-scraping script to grab elements of our DNS listings from GoDaddy.
I'm having issues scraping the links, which I then need to follow. I need to get the link from the "GoToSecondaryDNS" JS element below. I'm using Mechanize and Nokogiri:
<td class="listCellBorder" align="left" style="width:170px;">
<div style="padding-left:4px;">
<div id="gvZones21divDynamicDNS"></div>
<div id="gvZones21divMasterSlave" cicode="41022" onclick="GoToSecondaryDNS('iwanttoscrapethislink.com',0)" class="listFeatureButton secondaryDNSNoPremium" onmouseover="ShowSecondaryDNSAd(this, event);" onmouseout="HideAdInList(event);"></div>
<div id="gvZones21divDNSSec" cicode="41023" class="listFeatureButton DNSSECButtonNoPremium" onmouseover="ShowDNSSecAd(this, event);" onmouseout="HideAdInList(event);" onclick="UpgradeLinkActionByID('gvZones21divDNSSec'); return false;" useClick="true" clickObj="aDNSSecUpgradeClicker"></div>
<div id="gvZones21divVanityNS" onclick="GoToVanityNS('iwanttoscrapethislink.com',0)" class="listFeatureButton vanityNameserversNoPremium" onmouseover="ShowVanityNSAd(this, event);" onmouseout="HideAdInList(event);"></div>
<div style="clear:both;"></div>
</div>
</td>
How can I scrape the link 'iwanttoscrapethislink.com' and then interact with the onclick to follow the link and scrape content on the following page with Ruby?
So far, I have a simple start to the code:
require 'rubygems'
require 'mechanize'
require 'open-uri'
def get_godaddy_data(url)
  web_agent = Mechanize.new
  result = nil
  ### login to GoDaddy admin
  page = web_agent.get('https://dns.godaddy.com/Default.aspx?sa=')
  ## there is only one form and it is the first form on the page
  form = page.forms.first
  form.username = 'blank'
  form.password = 'blank'
  web_agent.submit(form, form.buttons.first)
  site_name = page.css('div.gvZones21divMasterSlave onclick td')
  ### export dns zone data
  page = web_agent.get('https://dns.godaddy.com/ZoneFile.aspx?zone=' + site_name + '&zoneType=0&refer=dcc')
  form = page.forms[3]
  web_agent.submit(form, form.buttons.first).save(uri.host + 'scrape.txt')
  ### read export file
  ## return File.open(uri.host + 'scrape.txt', 'rb') { |file| file.read }
end
def scrape_dns(url)
  site_name = page.css('div.gvZones21divMasterSlave onclick td')
  list_url = "https://dns.godaddy.com/ZoneFile.aspx?zone=" + site_name + '&zoneType=0&refer=dcc'
  page = Nokogiri::HTML(open(list_url))
  # not sure how to scrape onclick urls and then how to click through to continue
  # scraping on the second page for each individual DNS
end
You can't interact with "onclick" because Nokogiri isn't a JavaScript engine.
You can extract the contents and then use that as the URL for a subsequent web request. Assuming doc contains the parsed HTML:
doc.at('div[onclick^="GoToSecondaryDNS"]')['onclick']
will give you the value of the onclick attribute. ^= means "attribute value starts with", so that lets us rule out other <div> tags with onclick attributes, and it returns:
"GoToSecondaryDNS('iwanttoscrapethislink.com',0)"
Using a simple regex [/'(.+)'/,1] will get you the hostname:
doc.at('div[onclick^="GoToSecondaryDNS"]')['onclick'][/'(.+)'/,1]
=> "iwanttoscrapethislink.com"
The rest, such as how to get access to Mechanize's internal Nokogiri document, and how to create the new URL, are left for you to figure out.
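
To tie the pieces together: Mechanize exposes its parsed Nokogiri document through Mechanize::Page#parser, and the asker's own code already shows the zone-file URL pattern, so a hedged sketch of the follow-up request (assuming page is the logged-in Mechanize page and web_agent the agent from above):
doc = page.parser  # the underlying Nokogiri::HTML::Document
host = doc.at('div[onclick^="GoToSecondaryDNS"]')['onclick'][/'(.+)'/, 1]
zone_page = web_agent.get("https://dns.godaddy.com/ZoneFile.aspx?zone=#{host}&zoneType=0&refer=dcc")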