I am trying to scrape
https://www.maybank.co.id/others/locate-us?Keyword=&LocType=branch&LocSubType=all
to obtain the branch name and address for all bank branches. There are 44 pages I need to scrape, but the URL doesn't change between pages, so I can't iterate over them.
import re
import requests
from bs4 import BeautifulSoup

Branch_list, Address_list = [], []

for page_no in range(1, 45):
    payload = 'page=' + str(page_no) + '&PageSize=9&id=%7B5066AC98-FE40-407A-B4FE-03C814BED5F5%7D&keyword=&LocType=branch&LocSubType=all'
    response = requests.post(url, data=payload)  # url is the page linked above
    soup = BeautifulSoup(response.text, 'html.parser')
    print('Page', page_no)
    for i in soup.find_all('div', class_="col-md-4 col-sm-6 col-xs-12 property-item"):
        Branch = i.find_all('h3') if i.find_all('h3') else ''
        Address = i.find_all('p') if i.find_all('p') else ''
        for j in Address:
            j = re.sub(r'<(.*?)>', '', str(j))
            j = j.strip()
            Address_list.append(j)
        for k in Branch:
            k = re.sub(r'<(.*?)>', '', str(k))
            Branch_list.append(k)
Can someone suggest what should be done here?
You should use the API to get what you need.
Try this:
from urllib.parse import urlencode
import requests
from bs4 import BeautifulSoup
api_url = "https://www.maybank.co.id/api/sitecore/MapsLocation/MapsLocationListPaging?"
payload = {
    "PageSize": "9",
    "id": "{5066AC98-FE40-407A-B4FE-03C814BED5F5}",
    "keyword": "",
    "LocType": "branch",
    "LocSubType": "all",
}
headers = {
    "user-agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.183 Safari/537.36",
    "x-requested-with": "XMLHttpRequest",
}

for page_no in range(1, 45):
    payload["page"] = page_no
    html = requests.get(f"{api_url}{urlencode(payload)}", headers=headers).text
    # each response is an HTML fragment; grab the first branch card on the page
    soup = BeautifulSoup(html, "html.parser").find("div", {"class": "col-md-4 col-sm-6 col-xs-12 property-item"})
    branch_data = [
        soup.find("h3").getText(strip=True),
        [p.getText(strip=True) for p in soup.find_all("p")],
        soup.find("a")["href"],
    ]
    print(branch_data)
Output:
['KC MANADO', ['Jl. Kawasan Mega Mas Jl. Pierre Tendean Boulevard Blok I C1 No. 24,25,26 dan Blok I C2 No. 27,28,29 Manado', 'Closed until 03.30 PM0431 - 860543'], '/others/locate-us/locate-us-detail?id=337&loctype=Branch&locsubtype=']
['KC SUNSET ROAD, DPS', ['Jl. Sunset Road No 811, Kuta - Badung, Bali', 'Closed until 03.30 PM0361 - 3003811'], '/others/locate-us/locate-us-detail?id=294&loctype=Branch&locsubtype=']
['KCP BSB CITY', ['Ruko Taman Niaga Bukit Semarang Baru (BSB) Blok E No. 3A, Semarang', 'Closed until 03.30 PM(024) 76670611'], '/others/locate-us/locate-us-detail?id=217&loctype=Branch&locsubtype=']
['KCP GRAHA IRAMA', ['Jl. HR Rasuna Said Kav. 1-2 Ground Floor Blok B Jakarta Selatan', 'Closed until 03.30 PM021-5261330-4'], '/others/locate-us/locate-us-detail?id=111&loctype=Branch&locsubtype=']
['KCP KLP. GADING BULEVARD II', ['Jl. Raya Boulevard I-3 no. 4, Jakarta', 'Closed until 03.30 PM021 - 4515253'], '/others/locate-us/locate-us-detail?id=199&loctype=Branch&locsubtype=']
['KCP PALM SPRING BATAM CENTER', ['Komplek Palm Spring BTC Blok D1 No. 10, Batam Centre', 'Closed until 03.30 PM0778 - 6053070'], '/others/locate-us/locate-us-detail?id=26&loctype=Branch&locsubtype=']
and so on...
I was trying to deploy my NLP model to Heroku, and I got the following error in the logs upon trying to predict the result of the inputs-
2022-09-07T15:36:35.497488+00:00 app[web.1]: if self.n_features_ != n_features:
2022-09-07T15:36:35.497488+00:00 app[web.1]: AttributeError: 'DecisionTreeClassifier' object has no attribute 'n_features_'
2022-09-07T15:36:35.498198+00:00 app[web.1]: 10.1.22.85 - - [07/Sep/2022:15:36:35 +0000] "POST /predict HTTP/1.1" 500 290 "https://stocksentimentanalysisapp.herokuapp.com/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/104.0.0.0 Safari/537.36"
This specific line is strange considering I never used a Decision Tree Classifier, only Random Forest-
AttributeError: 'DecisionTreeClassifier' object has no attribute 'n_features_'
The model runs perfectly well in Jupyter Notebook. This issue began only when I tried to deploy it.
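(As a side note, a RandomForestClassifier is internally an ensemble of DecisionTreeClassifier estimators, which may explain why the traceback names a class I never used directly. A quick check with made-up data, assuming scikit-learn is installed:)

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# toy data, purely illustrative
X, y = make_classification(n_samples=50, n_features=5, random_state=0)
rfc = RandomForestClassifier(n_estimators=3).fit(X, y)

# each tree inside the fitted forest is a DecisionTreeClassifier
print(type(rfc.estimators_[0]).__name__)  # DecisionTreeClassifier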
Here is my model-
import pandas as pd
import pickle

df = pd.read_csv(r'D:\Sa\Projects\Stock Sentiment Analysis\data\data.csv', encoding='ISO-8859-1')

train = df[df['Date'] < '20150101']
test = df[df['Date'] > '20141231']

# Removing non-alphabetic characters
data = train.iloc[:, 2:27]
data.replace('[^a-zA-Z]', ' ', regex=True, inplace=True)

# Renaming columns to numerical index
idx_list = [i for i in range(25)]
new_index = [str(i) for i in idx_list]
data.columns = new_index

for index in new_index:
    data[index] = data[index].str.lower()

combined_headlines = []
for row in range(0, len(data.index)):
    combined_headlines.append(' '.join(str(x) for x in data.iloc[row, 0:25]))

from sklearn.ensemble import RandomForestClassifier

# Bag of words
from sklearn.feature_extraction.text import CountVectorizer
count_vectorizer = CountVectorizer(ngram_range=(2,2))
train_data = count_vectorizer.fit_transform(combined_headlines)
pickle.dump(count_vectorizer, open('countVectorizer.pkl', 'wb'))

rfc = RandomForestClassifier(n_estimators=200, criterion='entropy')
rfc.fit(train_data, train['Label'])

test_transform = []
for row in range(0, len(data.index)):
    test_transform.append(' '.join(str(x) for x in data.iloc[row, 2:27]))

test_data = count_vectorizer.transform(test_transform)
predictions = rfc.predict(test_data)

# Saving model to disk
pickle.dump(rfc, open('randomForestClassifier.pkl', 'wb'))
Please help me understand what is going wrong.
I am trying to extract product descriptions: the first loop runs through each product listing, and a nested loop enters each product page and grabs the description.
import json, time
import requests
from bs4 import BeautifulSoup

url = 'https://www.guitarguitar.co.uk'  # base for building product links below
productsPage = []
result = []

for page in range(1, 2):
    guitarPage = requests.get('https://www.guitarguitar.co.uk/guitars/acoustic/page-{}'.format(page)).text
    soup = BeautifulSoup(guitarPage, 'lxml')
    guitars = soup.find_all(class_='col-xs-6 col-sm-4 col-md-4 col-lg-3')
This is the loop for each product:
    for guitar in guitars:
        title_text = guitar.h3.text.strip()
        print('Guitar Name: ', title_text)
        price = guitar.find(class_='price bold small').text.strip()
        print('Guitar Price: ', price)
        priceSave = guitar.find('span', {'class': 'price save'})
        if priceSave is not None:
            priceOf = priceSave.text
            print(priceOf)
        else:
            print("No discount!")
        image = guitar.img.get('src')
        print('Guitar Image: ', image)
        productLink = guitar.find('a').get('href')
        linkProd = url + productLink
        print('Link of product', linkProd)
Here I am adding the collected links to a list:
        productsPage.append(linkProd)
Here is my attempt at entering each product page and extracting the description:
        for products in productsPage:
            response = requests.get(products)
            soup = BeautifulSoup(response.content, "lxml")
            productsDetails = soup.find("div", {"class": "description-preview"})
            if productsDetails is not None:
                description = productsDetails.text
                # print('product detail: ', description)
            else:
                print('none')
            time.sleep(0.2)

        if None not in (title_text, price, image, linkProd, description):
            products = {
                'title': title_text,
                'price': price,
                'discount': priceOf,
                'image': image,
                'link': linkProd,
                'description': description,
            }
            result.append(products)

        with open('datas.json', 'w') as outfile:
            json.dump(result, outfile, ensure_ascii=False, indent=4, separators=(',', ': '))
            # print(result)
        print('--------------------------')
        time.sleep(0.5)
The outcome should be
{
"title": "Yamaha NTX700 Electro Classical Guitar (Pre-Owned) #HIM041005",
"price": "£399.00",
"discount": null,
"image": "https://images.guitarguitar.co.uk/cdn/large/150/PXP190415342158006-3115645f.jpg?h=190&w=120&mode=crop&bg=ffffff&quality=70&anchor=bottomcenter",
"link": "https://www.guitarguitar.co.uk/product/pxp190415342158006-3115645--yamaha-ntx700-electro-classical-guitar-pre-owned-him",
"description": "\nProduct Overview\nThe versatile, contemporary styled NTX line is designed with thinner bodies, narrower necks, 14th fret neck joints, and cutaway designs to provide greater comfort and playability f... read more\n"
},
but the description works for the first one and does not change later on.
[
{
"title": "Yamaha APX600FM Flame Maple Tobacco Sunburst",
"price": "£239.00",
"discount": "Save £160.00",
"image": "https://images.guitarguitar.co.uk/cdn/large/150/190315340677008f.jpg?h=190&w=120&mode=crop&bg=ffffff&quality=70&anchor=bottomcenter",
"link": "https://www.guitarguitar.co.uk/product/190315340677008--yamaha-apx600fm-flame-maple-tobacco-sunburst",
"description": "\nProduct Overview\nOne of the world's best-selling acoustic-electric guitars, the APX600 series introduces an upgraded version with a flame maple top. APX's thinline body combines incredible comfort,... read more\n"
},
{
"title": "Yamaha APX600FM Flame Maple Amber",
"price": "£239.00",
"discount": "Save £160.00",
"image": "https://images.guitarguitar.co.uk/cdn/large/150/190315340676008f.jpg?h=190&w=120&mode=crop&bg=ffffff&quality=70&anchor=bottomcenter",
"link": "https://www.guitarguitar.co.uk/product/190315340676008--yamaha-apx600fm-flame-maple-amber",
"description": "\nProduct Overview\nOne of the world's best-selling acoustic-electric guitars, the APX600 series introduces an upgraded version with a flame maple top. APX's thinline body combines incredible comfort,... read more\n"
},
{
"title": "Yamaha AC1R Acoustic Electric Concert Size Rosewood Back And Sides with SRT Pickup",
"price": "£399.00",
"discount": "Save £267.00",
"image": "https://images.guitarguitar.co.uk/cdn/large/105/11012414211132.jpg?h=190&w=120&mode=crop&bg=ffffff&quality=70&anchor=bottomcenter",
"link": "https://www.guitarguitar.co.uk/product/11012414211132--yamaha-ac1r-acoustic-electric-concert-size-rosewood-back-and-sid",
"description": "\nProduct Overview\nOne of the world's best-selling acoustic-electric guitars, the APX600 series introduces an upgraded version with a flame maple top. APX's thinline body combines incredible comfort,... read more\n"
}
]
This is the result I am getting. It changes all the time; sometimes it shows the previous product's description.
It does loop, but it seems there are protective measures in place server-side, and the pages that fail change from run to run. I checked the pages that failed and they did contain the content being searched for. No single countermeasure seemed to suffice in my testing (I didn't try sleeps over 2 seconds, but I did try some IP and User-Agent changes with sleeps <= 2).
You could try alternating IPs and User-Agents, retries with backoff, and varying the time between requests.
Changing proxies: https://www.scrapehero.com/how-to-rotate-proxies-and-ip-addresses-using-python-3/
Changing User-Agent: https://pypi.org/project/fake-useragent/
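As a rough sketch of those ideas combined (the retry settings, delays, proxy URL and page list below are illustrative assumptions, not values from the original code):

import random
import time

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry
from fake_useragent import UserAgent  # pip install fake-useragent

ua = UserAgent()

session = requests.Session()
# retry transient failures and throttling responses with exponential backoff
retries = Retry(total=5, backoff_factor=1, status_forcelist=[429, 500, 502, 503, 504])
session.mount("https://", HTTPAdapter(max_retries=retries))

# placeholder proxy; with a real pool you would rotate through several entries
proxies = {"https": "http://user:pass@proxy.example.com:8080"}

product_urls = ["https://www.guitarguitar.co.uk/guitars/acoustic/page-1"]  # pages to fetch

for url in product_urls:
    headers = {"User-Agent": ua.random}  # a different User-Agent per request
    response = session.get(url, headers=headers, proxies=proxies, timeout=30)
    # ... parse response.text with BeautifulSoup here ...
    time.sleep(random.uniform(1, 3))  # vary the delay between requests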
I am using this page:
https://www.google.com/search?q=ford+fusion+msrp&oq=ford+fusion+msrp&aqs=chrome.0.0l6.2942j0j7&sourceid=chrome&ie=UTF-8
I am trying to get this element: class="_XWk"
page = HTTParty.get('https://www.google.com/search?q=ford+fusion+msrp&oq=ford+fusion+msrp&aqs=chrome.0.0l6.11452j0j7&sourceid=chrome&ie=UTF-8')
parse_page = Nokogiri::HTML(page)
parse_page.css('_XWk')
Here I can see the whole page in parse_page, but when I try .css('classname') I don't get anything back. Am I using the method the wrong way?
Check out the SelectorGadget Chrome extension to grab css selectors by clicking on the desired element in the browser.
It's because of a simple typo, i.e. the missing . (dot) before the selector, as ran already mentioned.
In addition, the next problem might occur because no HTTP user-agent is specified, so Google will eventually block the request and you'll receive completely different HTML containing an error message or something similar, without the actual data you were looking for. See: What is my user-agent.
Pass a user-agent:
headers = {
  "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36 Edge/18.19582"
}
HTTParty.get("https://www.google.com/search", headers: headers)
Iterate over the container to extract titles from Google Search results:
data = doc.css(".tF2Cxc").map do |result|
  title = result.at_css(".DKV0Md")&.text
  # ...
end
Code and example in the online IDE:
require "httparty"
require "nokogiri"

headers = {
  "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36 Edge/18.19582"
}

params = {
  q: "ford fusion msrp",
  num: "20"
}

response = HTTParty.get("https://www.google.com/search",
                        query: params,
                        headers: headers)
doc = Nokogiri::HTML(response.body)

data = doc.css(".tF2Cxc").map do |result|
  title = result.at_css(".DKV0Md")&.text
  link = result.at_css(".yuRUbf a")&.attr("href")
  displayed_link = result.at_css(".tjvcx")&.text
  snippet = result.at_css(".VwiC3b")&.text
  puts "#{title}#{snippet}#{link}#{displayed_link}\n\n"
end
-------
'''
2020 Ford Fusion Prices, Reviews, & Pictures - Best Carshttps://cars.usnews.com/cars-trucks/ford/fusionhttps://cars.usnews.com › Cars › Used Cars › Used Ford
Ford® Fusion Retired | Now What?Not all vehicles qualify for A, Z or X Plan. All Mustang Shelby GT350® and Shelby® GT350R prices exclude gas guzzler tax. 2. EPA-estimated city/hwy mpg for the ...https://www.ford.com/cars/fusion/https://www.ford.com › cars › fusion
...
'''
Alternatively, you can achieve this by using Google Organic Results API from SerpApi. It's a paid API with a free plan.
The difference in your case is that you don't need to figure out what the correct selector is or why results are different in the output since it's already done for the end-user.
Basically, the only thing that needs to be done is just to iterate over structured JSON and get the data you were looking for.
Example code:
require 'google_search_results'

params = {
  api_key: ENV["API_KEY"],
  engine: "google",
  q: "ford fusion msrp",
  hl: "en",
  num: "20"
}

search = GoogleSearch.new(params)
hash_results = search.get_hash

data = hash_results[:organic_results].map do |result|
  title = result[:title]
  link = result[:link]
  displayed_link = result[:displayed_link]
  snippet = result[:snippet]
  puts "#{title}#{snippet}#{link}#{displayed_link}\n\n"
end
-------
'''
2020 Ford Fusion Prices, Reviews, & Pictures - Best Carshttps://cars.usnews.com/cars-trucks/ford/fusionhttps://cars.usnews.com › Cars › Used Cars › Used Ford
Ford® Fusion Retired | Now What?Not all vehicles qualify for A, Z or X Plan. All Mustang Shelby GT350® and Shelby® GT350R prices exclude gas guzzler tax. 2. EPA-estimated city/hwy mpg for the ...https://www.ford.com/cars/fusion/https://www.ford.com › cars › fusion
...
'''
P.S - I wrote a blog post about how to scrape Google Organic Search Results.
Disclaimer, I work for SerpApi.
It looks like something is swapping the classes, so what you see in the browser is not what you get from the HTTP call. In this case, from _XWk to _tA.
page = HTTParty.get('https://www.google.com/search?q=ford+fusion+msrp&oq=ford+fusion+msrp&aqs=chrome.0.0l6.11452j0j7&sourceid=chrome&ie=UTF-8')
parse_page = Nokogiri::HTML(page)
parse_page.css('._tA').map(&:text)
# >>["Up to 23 city / 34 highway", "From $22,610", "175 to 325 hp", "192″ L x 73″ W x 58″ H", "3,431 to 3,681 lbs"]
Change parse_page.css('_XWk') to parse_page.css('._XWk')
Note the dot (.) difference. The dot references a class.
Using parse_page.css('_XWk'), Nokogiri doesn't know whether _XWk is a class, an id, a data attribute, etc.
I want to create a small Excel sheet, somewhat like Bloomberg's Launchpad, to monitor live stock market prices. So far, out of all the available free data sources, I have only found that Google Finance provides real-time prices for the list of exchanges I need. The issue with Google Finance is that they have already closed down their finance API. I am looking for a way to programmatically retrieve the real-time price I circled in the chart below and have it update live in my Excel sheet.
I have been searching around, to no avail so far. I read this post:
How does Google Finance update stock prices? but the method suggested in the answer retrieves a time series of chart data rather than the live-updating price I need. I have also examined the page's network traffic in Chrome's inspector and didn't find any request that returns the real-time price. Any help is greatly appreciated; some sample code (in languages other than VBA is fine) would be very beneficial. Thanks everyone!
There are many ways to do this: VBA, VB, C#, R, Python, etc. Below is a way to download statistics from Yahoo Finance using VBA.
Sub DownloadData()
    Set ie = CreateObject("InternetExplorer.Application")
    With ie
        .Visible = True
        .navigate "https://finance.yahoo.com/quote/AAPL/key-statistics?p=AAPL"
        ' Wait for the page to fully load; you can't do anything if the page is not fully loaded
        Do While .Busy Or .readyState <> 4
            DoEvents
        Loop
        ' Set a reference to the data elements that will be downloaded. We can download either 'td' data elements or 'tr' data elements. This site happens to use 'tr' data elements.
        Set Links = ie.document.getElementsByTagName("tr")
        RowCount = 1
        ' Scrape out the innertext of each 'tr' element.
        With Sheets("DataSheet")
            For Each lnk In Links
                .Range("A" & RowCount) = lnk.innerText
                RowCount = RowCount + 1
            Next
        End With
    End With
    MsgBox ("Done!!")
End Sub
I will leave it up to you to find other technologies that do the same thing. For instance, R and Python can do exactly the same thing, although the scripts will look a bit different from the VBA script that does this kind of work.
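For instance, a rough Python sketch of the same idea (pulling the key-statistics tables from Yahoo Finance) could use pandas; the User-Agent header and the use of read_html here are assumptions on my part, not part of the VBA code above:

from io import StringIO

import pandas as pd
import requests

url = "https://finance.yahoo.com/quote/AAPL/key-statistics?p=AAPL"
# Yahoo tends to reject requests without a browser-like User-Agent
headers = {"User-Agent": "Mozilla/5.0"}

html = requests.get(url, headers=headers, timeout=30).text

# read_html parses every <table> on the page into a DataFrame (requires lxml or html5lib)
tables = pd.read_html(StringIO(html))
for table in tables:
    print(table)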
It's fairly easy to make it work in Python. You'll need a few libraries:
- requests to make a request to Google Finance and return the HTML.
- bs4 to process the returned HTML.
- pandas to easily save to CSV/Excel.
Code and full example in the online IDE:
from bs4 import BeautifulSoup
from itertools import zip_longest
import requests, lxml
import pandas as pd

def scrape_google_finance(ticker: str):
    # https://docs.python-requests.org/en/master/user/quickstart/#passing-parameters-in-urls
    params = {
        "hl": "en",  # language
    }
    # https://docs.python-requests.org/en/master/user/quickstart/#custom-headers
    # https://www.whatismybrowser.com/detect/what-is-my-user-agent
    headers = {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/100.0.4896.60 Safari/537.36",
    }
    html = requests.get(f"https://www.google.com/finance/quote/{ticker}", params=params, headers=headers, timeout=30)
    soup = BeautifulSoup(html.text, "lxml")

    ticker_data = {"right_panel_data": {},
                   "ticker_info": {}}

    ticker_data["ticker_info"]["title"] = soup.select_one(".zzDege").text
    ticker_data["ticker_info"]["current_price"] = soup.select_one(".AHmHk .fxKbKc").text

    right_panel_keys = soup.select(".gyFHrc .mfs7Fc")
    right_panel_values = soup.select(".gyFHrc .P6K39c")

    for key, value in zip_longest(right_panel_keys, right_panel_values):
        key_value = key.text.lower().replace(" ", "_")
        ticker_data["right_panel_data"][key_value] = value.text

    return ticker_data

# tickers to iterate over
tickers = ["DIS:NYSE", "TSLA:NASDAQ", "AAPL:NASDAQ", "AMZN:NASDAQ", "NFLX:NASDAQ"]

# temporarily store the data before saving to the file
tickers_prices = []

for ticker in tickers:
    # extract ticker data
    ticker_data = scrape_google_finance(ticker=ticker)
    # append to temporary list
    tickers_prices.append({
        "ticker": ticker_data["ticker_info"]["title"],
        "price": ticker_data["ticker_info"]["current_price"]
    })

# create dataframe and save to csv/excel
df = pd.DataFrame(data=tickers_prices)
# to save to excel use to_excel()
df.to_csv("google_finance_live_stock.csv", index=False)
Outputs:
ticker,price
Walt Disney Co,$137.06
Tesla Inc,"$1,131.21"
Apple Inc,$176.99
"Amazon.com, Inc.","$3,321.61"
Netflix Inc,$384.93
Returned data from ticker_data
{
"right_panel_data": {
"previous_close": "$138.61",
"day_range": "$136.66 - $139.20",
"year_range": "$128.38 - $191.67",
"market_cap": "248.81B USD",
"volume": "9.98M",
"p/e_ratio": "81.10",
"dividend_yield": "-",
"primary_exchange": "NYSE",
"ceo": "Bob Chapek",
"founded": "Oct 16, 1923",
"headquarters": "Burbank, CaliforniaUnited States",
"website": "thewaltdisneycompany.com",
"employees": "166,250"
},
"ticker_info": {
"title": "Walt Disney Co",
"current_price": "$136.66"
}
}
If you want to scrape more data with a line-by-line explanation, there's a Scrape Google Finance Ticker Quote Data in Python blog post of mine that also covers scraping time-series chart data.
I have a list of objects like this:
{
"pattern": "Mozilla/5.0 (*Mac OS X 10?4*) AppleWebKit/* (KHTML, like Gecko) Chrome/46.*Safari/*",
"name": "Macintosh",
"brand": "Apple"
}
{
"pattern": "Mozilla/5.0 (*Windows NT 5.1*rv:46.0*) Gecko/*/",
"name": "Windows",
"brand": "Microsoft"
}
or like this (the same, but in regular expression notation):
{
"pattern": "mozilla/5\.0 \(.*linux.*android.4\.4.*gxt_dongle_3188 build/.*\) applewebkit/.* \(khtml, like gecko\) version/.* chrome/.*safari/.* bdbrowserhd_i18n/1\.(\d).*",
"name": "Macintosh",
"brand": "Apple"
}
This is a dictionary of browser user agents with 7,000 items. I have a user-agent string, for example:
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.86 Safari/537.36"
And I need to find the name and brand as fast as possible. Currently I split the dictionary into chunks (100 patterns each) and glue each chunk into one big regexp. Then I try to match the user agent against this big regexp; if it matches, I walk through all items of that chunk.
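For illustration, here is a minimal sketch of that chunking approach in Python; the wildcard-to-regex translation and the chunk size are assumptions based on the patterns shown above:

import re

# each item: a wildcard pattern plus the metadata to return on a match
items = [
    {"pattern": "Mozilla/5.0 (*Mac OS X 10?4*) AppleWebKit/* (KHTML, like Gecko) Chrome/46.*Safari/*",
     "name": "Macintosh", "brand": "Apple"},
    # ... ~7000 more items ...
]

def wildcard_to_regex(pattern):
    # assumed translation: '*' matches anything, '?' matches a single character
    return re.escape(pattern).replace(r"\*", ".*").replace(r"\?", ".")

CHUNK_SIZE = 100
chunks = []
for start in range(0, len(items), CHUNK_SIZE):
    chunk = items[start:start + CHUNK_SIZE]
    # one big alternation per chunk, used as a cheap pre-filter
    big_regex = re.compile("|".join(wildcard_to_regex(i["pattern"]) for i in chunk), re.IGNORECASE)
    compiled = [(re.compile(wildcard_to_regex(i["pattern"]), re.IGNORECASE), i) for i in chunk]
    chunks.append((big_regex, compiled))

def lookup(user_agent):
    for big_regex, compiled in chunks:
        if big_regex.match(user_agent):      # the chunk contains at least one candidate
            for regex, item in compiled:     # walk only this chunk to find which one
                if regex.match(user_agent):
                    return item["name"], item["brand"]
    return None

ua = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_4_11) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.86 Safari/537.36"
print(lookup(ua))  # ('Macintosh', 'Apple') for the sample pattern above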
Would you recommend some DB engine that could help me with this, or simply an algorithm that can help me do it faster?