Pyramid: force download of a generated file

I need to export a GPX file from a Pyramid application. I've prepared a GPX template in Jinja2 and it works fine, but now I want to offer the user a download instead of displaying the file in the browser.
How do I do that?

I've found a solution:
response = render_to_response(<template>, <data>, request=request)
response.content_type = 'application/gpx+xml'
response.content_disposition = 'attachment; filename="file.gpx"'
return response
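For context, here is a minimal sketch of how this could look as a complete view callable, assuming a route named export_gpx and a Jinja2 renderer configured for the template; the route name, template path, and context data are placeholders, not part of the original answer:

from pyramid.renderers import render_to_response
from pyramid.view import view_config

@view_config(route_name='export_gpx')  # hypothetical route name
def export_gpx(request):
    track_data = {'points': []}  # placeholder template context
    response = render_to_response('templates/track.gpx.jinja2',  # placeholder template path
                                  track_data, request=request)
    response.content_type = 'application/gpx+xml'
    # content_disposition is what makes the browser download instead of display the file
    response.content_disposition = 'attachment; filename="file.gpx"'
    return response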

Related

Google Sheets IMPORTXML fails for ASX data

I am trying to extract the "Forward Dividend & Yield" value from https://finance.yahoo.com/ for multiple companies in different markets into Google Sheets.
This is successful:
=IMPORTXML("https://finance.yahoo.com/quote/WBS", "//*[@id='quote-summary']/div[2]/table/tbody/tr[6]/td[2]")
But this fails with #N/A:
=IMPORTXML("https://finance.yahoo.com/quote/CBA.AX", "//*[@id='quote-summary']/div[2]/table/tbody/tr[6]/td[2]")
I cannot work out what needs to be different for ASX ticker codes. Why does CBA.AX cause a problem?
Huge thanks for any help.
When I tested the formula =IMPORTXML("https://finance.yahoo.com/quote/CBA.AX", "//*"), an error of "Resource at url not found" occurred. I thought that this might be the reason for your issue.
But, fortunately, when I tried to retrieve the HTML from the same URL using Google Apps Script, the HTML could be retrieved. So, in this answer, I would like to propose retrieving the value using a custom function created with Google Apps Script. The sample script is as follows.
Sample script:
Please copy and paste the following script into the script editor of your Google Spreadsheet and save it. Then put the formula =SAMPLE("https://finance.yahoo.com/quote/CBA.AX") in a cell. By this, the value is retrieved.
function SAMPLE(url) {
  // Fetch the page HTML and extract the text that follows the DIVIDEND_AND_YIELD-value marker.
  const res = UrlFetchApp.fetch(url).getContentText().match(/DIVIDEND_AND_YIELD-value.+?>(.+?)</);
  return res && res.length > 1 ? res[1] : "No value";
}
Result:
When the above script is used, the following result is obtained.
Note:
When this script is used, you can also use =SAMPLE("https://finance.yahoo.com/quote/WBS").
In this case, when the HTML structure of the URL is changed, this script might no longer work. I think this situation is the same for IMPORTXML and the xpath. So please be careful about this.
References:
Custom Functions in Google Sheets
Class UrlFetchApp
Another solution is to decode the JSON contained in the source of the web page. Of course you can't use IMPORTXML, since the web page is built on your side by JavaScript and not on the server's side. You can access the data this way and get a lot of information:
var source = UrlFetchApp.fetch(url).getContentText()
var jsonString = source.match(/(?<=root.App.main = ).*(?=}}}})/g) + '}}}}'
For example, for what you are looking for you can use:
function trailingAnnualDividendRate() {
  var url = 'https://finance.yahoo.com/quote/CBA.AX'
  var source = UrlFetchApp.fetch(url).getContentText()
  // The page embeds its data as JSON assigned to root.App.main
  var jsonString = source.match(/(?<=root.App.main = ).*(?=}}}})/g) + '}}}}'
  var data = JSON.parse(jsonString)
  var dividendRate = data.context.dispatcher.stores.QuoteSummaryStore.summaryDetail.trailingAnnualDividendRate.raw
  Logger.log(dividendRate)
}

How to scrape data using only the requests module in Python

I am trying to parse a website using the requests module and extract some text from it.
URL: https://www.icsi.in/student/Members/MemberSearch.aspx
After opening the URL, enter 16803 in the CP Number text field and hit Search; at the bottom you can see some data. I want that data, let's say a name.
I can successfully get the data using Selenium, but I can't get it using the requests module.
I have tried the requests module with parameters, sessions, cookies etc., but nothing worked.
url = "https://www.icsi.in/student/Members/MemberSearch.aspx"
ss = {'dnn$ctr410$MemberSearch$txtCpNumber':'16803',
'__EVENTTARGET':'dnn$ctr410$MemberSearch$btnSearch',
'__VIEWSTATEGENERATOR':'6A295697',
'dnn$ctlHeader$dnnSearch$Search':'SiteRadioButton'}
session = requests.Session()
cookies = session.cookies.get_dict()
for cookie in cookies:
session.cookies.set(cookie['name'], cookie['value'])
response = requests.post(url, data=ss)
print(response)
HTMLTree = html.fromstring(response.content)
name = HTMLTree.xpath('//div[#class="name_head"]//text()')
print(name)
I expect the output to be the name of the person.
Can anyone help me?
If you don't mind using C# code, I would be more than happy to help you; otherwise it's a very lengthy process. If you decide that Python is the only road you're willing to take, then you should try grabbing the encrypted value within C:\Users\[USERNAME]\AppData\Local\Google\Chrome\User Data\Default\Cookies. You can change the file path according to your OS. You can use SQLite to read and modify the encrypted values.
// Pseudocode -- Decrypt and SQLDatabase1 are the answerer's own helpers
var cookie = Decrypt(Encoding.Default.GetBytes(SQLDatabase1.GetValue(i, "encrypted_value")));
if (cookie.Contains(".ASPXANONYMOUS"))
{
    var step1 = cookie + "END";
    var step2 = step1 + ".ASPXANONYMOUS";
}
The code above may help you on your journey.
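As a side note on the requests-only route in Python: pages like this are ASP.NET WebForms, so the usual pattern is to GET the form first, copy its hidden __VIEWSTATE/__EVENTVALIDATION fields into the POST data, and reuse the same session. Below is a rough sketch under that assumption; the form field names are taken from the question's own code, and whether the site requires anything beyond this is not verified.

import requests
from lxml import html

url = "https://www.icsi.in/student/Members/MemberSearch.aspx"
session = requests.Session()

# First request: load the form so the session holds its cookies and we can
# read the hidden ASP.NET state fields from the page.
page = session.get(url)
tree = html.fromstring(page.content)

def hidden(name):
    # Read a hidden input's value; return '' if the field is absent.
    values = tree.xpath('//input[@name="%s"]/@value' % name)
    return values[0] if values else ''

payload = {
    '__EVENTTARGET': 'dnn$ctr410$MemberSearch$btnSearch',
    '__EVENTARGUMENT': '',
    '__VIEWSTATE': hidden('__VIEWSTATE'),
    '__VIEWSTATEGENERATOR': hidden('__VIEWSTATEGENERATOR'),
    '__EVENTVALIDATION': hidden('__EVENTVALIDATION'),
    'dnn$ctr410$MemberSearch$txtCpNumber': '16803',
}

# Second request: post the form back through the same session.
response = session.post(url, data=payload)
result_tree = html.fromstring(response.content)
print(result_tree.xpath('//div[@class="name_head"]//text()'))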

How to use SassDoc as parser for single file without generating full documentation files

The question relates to the http://sassdoc.com package.
I would like to parse each *.scss file in the ./source folder, but instead of generating a sassdoc folder I would like to create a partial HTML file for each parsed file. For example:
parse variables.scss and receive variables.html, without page header or sidebar: pure content, even without html and body tags.
My current code:
var gulp = require('gulp'),
    sassdoc = require('sassdoc');

var paths = {
  scss: [
    'source/**/*.scss'
  ]
};

gulp.task('sassdoc', function () {
  console.log("sassdoc task finished");
  return gulp.src(paths.scss)
    .pipe(sassdoc());
});
It's not possible with SassDoc's default theme, so you'd need to build your own theme to achieve this.
http://sassdoc.com/using-your-own-theme
Each item is given a file key in resulting data, so I would leverage that and do some merging.
That could potentially end up in a sassdoc-extra custom filter.
http://sassdoc.com/extra-tools
EDIT:
Actually your question is quite misleading: you want a variables.html file but with no HTML ...
If all that you want is the raw JSON data from SassDoc, without any kind of theme processing, then the parse method is what you're looking for.
But again, unless you call SassDoc on each file separately, you'll get all files together, meaning you'll have to post-process the data to split them; that's why a custom theme (even with no HTML output) is the way to go.

How to not show extracted links and scraped items?

Newbie here, running Scrapy on Windows. How do I avoid showing the extracted links and crawled items in the command window? I found comments in the "parse" section of this link: http://doc.scrapy.org/en/latest/topics/commands.html, but I'm not sure whether it's relevant and, if so, how to apply it. Here is more detail, with part of the code, starting from my second AJAX request (in the first AJAX request, the callback function is "first_json_response"):
def first_json_response(self, response):
    try:
        data = json.loads(response.body)
        meta = {'results': data['results']}
        yield Request(url=url, callback=self.second_json_response,
                      headers={'x-requested-with': 'XMLHttpRequest'}, meta=meta)

def second_json_response(self, response):
    meta = response.meta
    try:
        data2 = json.loads(response.body)
        ...
The "second_json_response" is to retrieve the response from the requested result in first_json_response, as well as to load the new requested data. "meta" and "data" are then both used to define items that need to be crawled. Currently, the meta and links are shown in the windows terminal where I submitted my code. I guess it is taking up some extra time for computer to show them on the screen, and thus want them to disappear. I hope by running scrapy on a kinda-of batch mode will speed up my lengthy crawling process.
Thanks! I really appreciate your comment and suggestion!
From the Scrapy documentation:
"You can set the log level using the --loglevel/-L command line option, or using the LOG_LEVEL setting."
So append --loglevel=ERROR (or -L ERROR) to your scrapy crawl ... command. That should make all the info disappear from your command line, but I don't think this will speed things up much.
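If you prefer not to pass the flag on every run, the same setting can live in your project's settings.py; a minimal sketch:

# settings.py -- equivalent to passing --loglevel=ERROR on the command line
LOG_LEVEL = 'ERROR'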
In your pipelines.py file, try using something like:
import json

class JsonWriterPipeline(object):
    def __init__(self):
        # open in text mode, since json.dumps returns a str
        self.file = open('items.jl', 'w')

    def process_item(self, item, spider):
        line = json.dumps(dict(item)) + "\n"
        self.file.write(line)
        return item

This way, when you yield an item from your spider class, it will be written to items.jl.
Hope that helps.
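Note that the pipeline only runs if it is enabled in your settings; a minimal sketch, where 'myproject' is a placeholder for your own package name:

# settings.py -- 'myproject' is a placeholder for your actual project package
ITEM_PIPELINES = {
    'myproject.pipelines.JsonWriterPipeline': 300,
}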

blobstore images get_serving_url

I am new to Google App Engine and I am trying to use the Blobstore to store images that I want to display later on.
The image storage works fine. Now I want to dynamically change some images in my HTML code. Therefore I need a way to get the images out of the Blobstore and pass them on. I am using Python. I found the get_serving_url command, which seemed to be the perfect fit. Sadly, it causes an error and I seem to be unable to fix it.
My basic code looks like this:
blob_key = "yu343mQ7kT4344N434ewQ=="
if blob_key:
blob_info = blobstore.get(blob_key)
if blob_info:
img = images.Image(blob_key=blob_key)
url = images.get_serving_url(blob_key)
...
Every time the function gets called, I get the following error in my log console.
File "C:\Program Files
(x86)\Google\google_appengine\google\appengine\ext\remote_api\remote_api_stub.py",
line 234, in _MakeRealSyncCall
raise pickle.loads(response_pb.exception())
AttributeError: 'ImagesNotImplementedServiceStub' object has no
attribute 'THREADSAFE'
I have no idea how to fix it or whether I am doing something terribly wrong.
I am very grateful for your support! Thank you in advance!
Have a nice day.
You probably need an instance of BlobKey, so if you are getting blob_info successfully, try:
img = images.Image(blob_key=blob_info.key())
url = images.get_serving_url(blob_info.key())
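For context, a minimal sketch of how this could sit in a webapp2 handler on the classic App Engine Python runtime; the handler class, the ?key= query parameter, and the inline img tag are placeholders, not part of the original question:

from google.appengine.api import images
from google.appengine.ext import blobstore
import webapp2

class ImageUrlHandler(webapp2.RequestHandler):
    def get(self):
        blob_key = self.request.get('key')  # e.g. passed as ?key=...
        blob_info = blobstore.get(blob_key)
        if blob_info:
            # get_serving_url accepts a BlobKey instance
            url = images.get_serving_url(blob_info.key())
            self.response.write('<img src="%s">' % url)
        else:
            self.error(404)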
