Show different page on a page break in RTF template (BI publisher) - reporting

I am trying to call a different page after every page break, as shown below. The problem is that my conditions template (test2) is only displayed on the same page as the header/total page and does not break onto a new page. I tried using (split-by-page-break) and even page-breaking the section, but it still appears on the same page as the header.
Header/Page1 (Countdown), followed by a Total/Page1 (test)
Page break then Conditions/Page2 (Test2)
Header/Page3 (Countdown), followed by a Total/Page3 (test)
Page break then Conditions/Page4 (Test2)
<?template:countdown?>
Temp_Param
Recursive_Template
<?end template?>
**to call the total section**
<?call-template:test?>
<?end body?>
**this is to assign an after page break variable = 1**
<?xdoxslt:set_variable($_XDOCTX, 'Var', '1')?>
**to calculate the total section**
<?template:test?>
TOTAL
<?format-number:Order_Total_Taxable_ID2;'999G999D99'?>
<?format-number:Order_Total_Tax_ID3;'999G999D99'?>
<?format-number:Order_Total_Gross_Amount_ID7;'999G999D99'?>
<?end template?>
**to call a different page**
<?if:xdoxslt:get_variable($_XDOCTX, 'Var')='1'?>
<?call-template:test2?>
<?end if?>
**different page after every page break**
**need to break into a new page**
<?template:test2?>
CONDITIONS
This Purchase order is subject to the following terms and conditions:
1.PRICE
TEST
2.DELIVERIES
TEST
<?end template?>

Where you want the page break, try using this code:
<?split-by-page-break:?>
For more information, see this Oracle thread: https://community.oracle.com/thread/983330


How to populate select fast and only once

I have over 3,200 rows in a Google Sheet. I need a dropdown with each value in a web app.
I have this in Apps Script:
function doGet(e) {
  var htmlOutput = HtmlService.createTemplateFromFile('CensusWebApp2');
  var streets = getStreets();
  var businessNames = getbusinessNames();
  htmlOutput.message = '';
  htmlOutput.streets = streets;
  htmlOutput.businessNames = businessNames;
  return htmlOutput.evaluate();
}
function getbusinessNames(){
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  var StreetDataSheet = ss.getSheetByName("businessNames");
  var getLastRow = StreetDataSheet.getLastRow();
  var return_array = [];
  return_array = StreetDataSheet.getRange(2, 1, getLastRow - 1, 1).getValues();
  return return_array;
}
This is the HTML code:
<select name="IntestazioneTari" id="IntestazioneTari" class="form-control">
  <option value=""></option>
  <? for (var i = 0; i < businessNames.length; i++) { ?>
    <option value="<?= businessNames[i] ?>"><?= businessNames[i] ?></option>
  <? } ?>
</select><br>
I'm creating an app similar to a survey form, and this dropdown will be the same for every entry.
Is there a way to load this list only once, rather than every time the form is submitted and fetched again for a new survey entry (from the same operator/device)?
I believe your goal is as follows:
You want to use the value of businessNames, retrieved from the Google Spreadsheet, on the HTML side.
The value of businessNames does not change, so you want to load it only once.
In this case, how about declaring the value as a global inside a <script> tag? When this is reflected in your script, it becomes as follows.
Modified script:
In this case, your HTML side is modified.
<select name="IntestazioneTari" id="IntestazioneTari" class="form-control">
  <option value=""></option>
  <? for (var i = 0; i < businessNames.length; i++) { ?>
    <option value="<?= businessNames[i] ?>"><?= businessNames[i] ?></option>
  <? } ?>
</select>
<input type="button" value="ok" onclick="test()">
<script>
  const value = JSON.parse(<?= JSON.stringify(businessNames) ?>); // Here, the value of "businessNames" is retrieved.
  function test() {
    console.log(value);
  }
</script>
With this modification, when the HTML is loaded, the value of businessNames is added to the HTML by the evaluate() method; businessNames is given to both the HTML and the JavaScript. As a result, const value = JSON.parse(<?= JSON.stringify(businessNames) ?>); receives the value of businessNames. To confirm this, click the sample button (<input type="button" value="ok" onclick="test()">) and you will see the value in the console. In this way, you can use the value of businessNames on the JavaScript side after the HTML has loaded.
Reference:
HTML Service: Templated HTML
As the values from the spreadsheet won't change, I created a really long text row with all the options and pasted them directly into the HTML.
I did the same with the other information. Load time decreased enormously.
This is the code I use to generate the values:
return_array= "<option>" + businessNamesSheet.getRange(2,1,getLastRow-1,1).getValues().join("</option><option>")+"</option>";
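As a quick sanity check, the same join trick can be reproduced outside Apps Script (the three business names below are made up): getValues() returns a 2-D array, and each one-cell row stringifies to its cell value when joined.

```javascript
// Stand-in for a getValues() result: a 2-D array, one column wide.
const values = [['Acme Srl'], ['Bar Roma'], ['Pizzeria Blu']];

// Joining the outer array coerces each one-element row to its string,
// producing the </option><option> separators between the names.
const options = '<option>' + values.join('</option><option>') + '</option>';

console.log(options);
// <option>Acme Srl</option><option>Bar Roma</option><option>Pizzeria Blu</option>
```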
We are talking about performance, and there are 3 things you need to do when doing so:
Make measurements
Make measurements again
And make some more measurements
A change in the code could have a negative impact for a reason that you didn't expect (it's hard to keep every single little detail in mind). When making a Google Apps Script web app, you have 3 reported times:
The timings in your browser: how long it really took to load the entire page.
The running time in the Google Apps Script execution log.
Small timings inside your application using console.time (reference), console.timeLog (reference) and console.timeEnd (reference) (collectively called console timers).
Note that the first 2 may change without you changing a thing, probably because of the inner workings of Google.
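For illustration, the console timers work like this in any V8 runtime, including the Apps Script editor (the 3,200-entry array roughly mirrors the sheet size in the question):

```javascript
// Time how long it takes to build a 3200-entry options array,
// roughly the size of the sheet in the question.
function buildOptions(n) {
  const options = [];
  for (let i = 0; i < n; i++) {
    options.push('business ' + i);
  }
  return options;
}

console.time('buildOptions');    // start a named timer
const options = buildOptions(3200);
console.timeLog('buildOptions'); // log an intermediate reading
console.timeEnd('buildOptions'); // log the total and stop the timer
```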
So let's start doing what I said: measuring. I'd measure:
The entire doGet function
The getStreets()
The getbusinessNames()
The template.evaluate()
How much time it takes to load the page (browser)
This will give me a rough idea on what takes most of the time. Knowing that, you can try the following ideas.
Note that I don't have your code, so I can't tell how each change will affect your times; your mileage may vary. Also note that while most of the ideas could be implemented simultaneously, that doesn't mean doing so is a good idea: combining them can even undo what a single idea would have achieved.
Idea 1: Copy the generated options into the template
If you don't need to load the options from somewhere else (which I suppose you currently do), you could evaluate the template once, copy the generated options, and paste them into the HTML. This obviously avoids having to request the list of options and evaluate the template every time, but you lose flexibility.
Idea 2: Having the options in code instead of somewhere else
If the options won't change, or you will be the one changing them, you could add them directly to your code:
const BUSINESS_NAMES = `
business 1
hello world
another one
and another one
`

function getbusinessNames() {
  return BUSINESS_NAMES
    .split('\n')
    .filter(v => !!v) // remove empty strings
}
It's similar to idea 1, but it's easier to change the values when needed, especially with V8's support for multi-line strings (template literals).
Idea 3: Use a Google Apps Script cache
If what's taking time is querying the options, you could use CacheService (see reference). This would allow you to only query the options every X seconds (up to 6 hours) instead of every time.
function doGet(e) {
  // [...]
  const cache = CacheService.getScriptCache()
  let businessNames = cache.get('businessNames')
  if (businessNames == null) {
    // Cache miss: query the sheet and store the result.
    // CacheService only stores strings, so serialize the array.
    businessNames = getbusinessNames()
    cache.put('businessNames', JSON.stringify(businessNames), 6 * 60 * 60)
  } else {
    businessNames = JSON.parse(businessNames)
  }
  // use businessNames
  // [...]
}
In this case I've only done it with businessNames, but the same can be done for streets.
Having a 6-hour cache means that it could take up to 6 hours for a change in the list to propagate. If you update the options manually, you could add a function to force a cache reload:
function forceCacheReload() {
  const cache = CacheService.getScriptCache()
  cache.put('streets', JSON.stringify(getStreets()), 6 * 60 * 60)
  cache.put('businessNames', JSON.stringify(getbusinessNames()), 6 * 60 * 60)
}
Idea 4: Improve how you load the data
It's very common for new Google Apps Script users to iterate over the rows one by one, getting each value separately. It's much more efficient to get the whole range with all the rows and columns and call getValues (reference) once.
Idea 5: do a fetch instead of submitting the form
If what is happening is that it takes time to load after sending the data, it might be a good idea to use google.script.run (reference) instead of making a form and submitting it, since it could prevent reloading the entire page again.
Idea 6: SPA web app
The result of doubling down on the last idea. Same benefits and you could load the necessary data in the background while the user lands on the home page.
Idea 7: Load the options dynamically
Use google.script.run (reference) to load the options once the page has already been loaded. May actually be slower but you can give faster feedback to the user.
Idea 8: Save the options in localStorage
Requires idea #7. Save the dynamically loaded options into localStorage (see reference) so the user only needs to wait once. You may want to refresh them once in a while to make sure they are up to date.
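A minimal sketch of ideas #7 and #8 combined. Here fetchFn is a hypothetical stand-in for the server round trip (in the real page it would be an asynchronous google.script.run call), and a tiny in-memory fallback lets the sketch run outside a browser:

```javascript
// Cache dynamically loaded options in localStorage so each operator's
// device only pays the load cost once per expiration window.
// In-memory fallback so the sketch also runs outside a browser.
const store = (typeof localStorage !== 'undefined')
  ? localStorage
  : (() => {
      const m = {};
      return {
        getItem: k => (k in m ? m[k] : null),
        setItem: (k, v) => { m[k] = String(v); },
      };
    })();

const CACHE_KEY = 'businessNames';
const MAX_AGE_MS = 6 * 60 * 60 * 1000; // refresh every 6 hours

function getCachedOptions(fetchFn) {
  const raw = store.getItem(CACHE_KEY);
  if (raw !== null) {
    const { savedAt, names } = JSON.parse(raw);
    if (Date.now() - savedAt < MAX_AGE_MS) return names; // cache hit
  }
  // Cache miss or stale entry: fetch and store with a timestamp.
  const names = fetchFn();
  store.setItem(CACHE_KEY, JSON.stringify({ savedAt: Date.now(), names }));
  return names;
}
```

In the real page, fetchFn would be replaced by a google.script.run.withSuccessHandler(...) round trip; since that API is callback-based, the real code would be asynchronous, while this sketch keeps it synchronous for clarity.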
References
Console timer (MDN)
CacheService (Google Apps Script reference)
Range.getValues() (Google Apps Script reference)
Class google.script.run (Client-side API) (Google Apps Script reference)
Window.localStorage (MDN)

How to crawl multiple links in Scrapy following an xpath-based rule in the given start page?

I have created a spider that successfully extracts the data I want from a single page, and now I need it to crawl multiple similar pages and do the same.
The start page is going to be this one; it lists many unique items from the game (Araku Tiki, Sidhbreath, etc.), and I want the spider to crawl all of them.
Given that start page, how do I identify which links to follow?
Here are the XPaths for the first 3 links I want it to follow:
//*[@id="mw-content-text"]/div[3]/table/tbody/tr[1]/td[1]/span/span[1]/a[1]
//*[@id="mw-content-text"]/div[3]/table/tbody/tr[2]/td[1]/span/span[1]/a[1]
//*[@id="mw-content-text"]/div[3]/table/tbody/tr[3]/td[1]/span/span[1]/a[1]
As you can see there is an increasing number in the middle, 1, then 2, then 3 and so on. How to crawl those pages?
Here is a snippet of my code working for the first item, Araku Tiki, having its page set as start:
import scrapy
from PoExtractor.items import PoextractorItem
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class RedditSpider(scrapy.Spider):
    name = "arakaali"
    # allowed_domains = ['pathofexile.gamepedia.com']
    start_urls = ['https://pathofexile.gamepedia.com/Araku_Tiki']

    rules = (
        Rule(LinkExtractor(allow=(), restrict_xpaths=()), callback="parse",
             follow=True),
    )

    def parse(self, response):
        item = PoextractorItem()
        item["item_name"] = response.xpath("//*[@id='mw-content-text']/span/span[1]/span[1]/text()[1]").extract()
        item["flavor_text"] = response.xpath("//*[@id='mw-content-text']/span/span[1]/span[2]/span[3]/text()").extract()
        yield item
Please note: I have not been able to make it follow the links on the start page either; my code only works if the start page is the one containing the requested data.
Thanks in advance for every reply.
You can send requests in many ways.
1. Since you are using Scrapy, the following code can be used:

def parse_page1(self, response):
    return scrapy.Request("http://www.example.com/some_page.html",
                          callback=self.parse_page2)

def parse_page2(self, response):
    # this would log http://www.example.com/some_page.html
    self.logger.info("Visited %s", response.url)

parse_page1 sends a request to the URL, and you get the response in the parse_page2 callback.
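Applied to the table in the question: the three hand-written XPaths differ only in the tr[...] index, so they collapse into one pattern, and dropping the index entirely selects the link in every row at once. A small sanity check of that pattern (the div[3] path segment is copied from the question and may need adjusting for the real page):

```python
# The row-specific XPaths from the question differ only in the row index.
ROW_XPATH = ('//*[@id="mw-content-text"]/div[3]/table/tbody/'
             'tr[{row}]/td[1]/span/span[1]/a[1]')

def row_xpath(row):
    """Return the XPath for the item link in a given table row (1-based)."""
    return ROW_XPATH.format(row=row)

# Dropping the index selects the link in *every* row, which is what the
# spider would pass to response.xpath(); /@href extracts the link targets.
ALL_ROWS_XPATH = ('//*[@id="mw-content-text"]/div[3]/table/tbody/'
                  'tr/td[1]/span/span[1]/a[1]/@href')
```

Inside the spider's parse method, the loop would then look roughly like `for href in response.xpath(ALL_ROWS_XPATH).getall(): yield response.follow(href, callback=self.parse_item)`, with the per-item field extraction moved into a separate parse_item callback.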
2. You can also send requests using the Python requests module:

import requests

resp = requests.get("http://www.something.com")
print(resp.text)
Please comment if you have any doubts about this, thank you.

CodeIgniter Captcha value munged by session userdata?

I'm attempting to store a CodeIgniter captcha helper challenge word in a Session variable, as follows:
$cap = create_captcha($val /* array of params */); // Works fine
$this->session->set_userdata('cap-word', $cap['word']);
If I echo $cap['word'] before and after the session set, it is correct (i.e. what was displayed in the browser). If I retrieve the session variable immediately after setting it, it's also correct.
However what gets stored in the session is completely different - it's the right length (character count) but a totally different string. Hence, when I try to retrieve the userdata on the server side (captcha validation) it gets the wrong value.
(I'm configured for 'sess_use_database' and inspecting the user data values in phpMyAdmin after each page load. Cookie encryption is disabled.)
Debug attempts:
I've tried prepending / appending known strings to the captcha challenge word before storing it in the session. The known strings make it into the session user data just fine, but are prepended / appended to an incorrect captcha word.
I've tried replacing the challenge word entirely with a fixed string. This makes the round trip through the session no problem.
I've tried saving the captcha challenge word to a different string variable (rather than passing $cap['word'] directly), with the same result; only the challenge portion gets "munged" on landing in the session.
After debugging, my code looks more like:
$cap = create_captcha($vals);
echo("<br>cap = ");
print_r($cap);
echo("<br>cap['word'] = " . $cap['word']);
$theword = $cap['word'];
echo("<br> Before theword = " . $theword);
$this->session->set_userdata('cap-word', $theword . 'abcdefg');
echo("<br> After theword = " . $theword);
echo("<br> Session output = " . $this->session->userdata('cap-word'));
This produces the following output in the browser:
cap = Array ( [word] => 5CZaDeHm [time] => 1436765602.678 [image] =>
{the image})
cap['word'] = 5CZaDeHm
Before theword = 5CZaDeHm
After theword = 5CZaDeHm
Session output = 5CZaDeHmabcdefg
However, what's stored in the session table userdata fields (and, thus, what pops out when I call $this->session->userdata('cap-word') on submit request) is:
a:2:{s:9:"user_data";s:0:"";s:8:"cap-word";s:15:"3g5hb1I3abcdefg";}
Hence, the substring '5CZaDeHm' within $theword has been seemingly replaced by '3g5hb1I3' during the call to $this->session->set_userdata. I have no idea why, or even how this is possible?!
Update 2015-07-13 07:50 EDT: As usual, Occam's Razor applies. Turns out on each page load, my controller is being called twice, generating 2 captcha images with 2 corresponding challenge words. One of these appeared in the browser, the other in the session's database row. Now to figure out why...
Turns out the CodeIgniter controller was being called twice because I used a relative path in the href parameter of a CSS tag. This resulted in the browser attempting to load the stylesheet relative to the page, which triggered a 2nd load of the page.
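For reference, a sketch of the kind of change that fixes this (the stylesheet path here is made up): use an absolute URL built with the URL helper's base_url() instead of a page-relative href, so the browser's stylesheet request can never be routed back through the controller.

```html
<!-- Before: relative path; the browser resolves it against the current
     page URL, which re-triggered the controller. -->
<link rel="stylesheet" href="assets/css/style.css">

<!-- After: absolute URL via base_url() from CodeIgniter's url helper
     (assumes the helper is loaded; the path is hypothetical). -->
<link rel="stylesheet" href="<?php echo base_url('assets/css/style.css'); ?>">
```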

web2py A() link handling with multiple targets

I need to update multiple targets when a link is clicked.
This example builds a list of links.
When the link is clicked, the callback needs to populate two different parts of the .html file.
The actual application uses bokeh for plotting.
When the user clicks a link, 'linkDetails1' and 'linkDetails2' should hold the script and div returned from calls to bokeh's components() function.
Obviously this naive approach does not work.
How can I make a list of links that when clicked on will populate two separate places in the .html file?
################################
#views/default/test.html:
{{extend 'layout.html'}}
{{=linkDetails1}}
{{=linkDetails2}}
{{=links}}
################################
# controllers/default.py:
def test():
    """
    example action using the internationalization operator T and flash
    rendered by views/default/index.html or views/generic.html
    if you need a simple wiki simply replace the two lines below with:
    return auth.wiki()
    """
    d = dict()
    links = []
    for ii in range(5):
        link = A("click on link %d" % ii, callback=URL('linkHandler/%d' % ii))
        links.append(["Item %d" % ii, link])
    table = TABLE()
    table.append([TR(*rows) for rows in links])
    d["links"] = table
    d["linkDetails1"] = "linkDetails1"
    d["linkDetails2"] = "linkDetails2"
    return d
def linkHandler():
    import os
    # request.url will be linked/N
    ii = int(os.path.split(request.url)[1])
    # want to put some information into linkDetails, some into linkDiv
    # this does not work:
    d = dict()
    d["linkDetails1"] = "linkHandler %d" % ii
    d["linkDetails2"] = "linkHandler %d" % ii
    return d
I must admit that I'm not 100% clear on what you're trying to do here, but if you need to update e.g. 2 div elements in your page in response to a single click, there are a couple of ways to accomplish that.
The easiest, and arguably most web2py-ish way is to contain your targets in an outer div that's a target for the update.
Another alternative, which is very powerful is to use something like Taconite [1], which you can use to update multiple parts of the DOM in a single response.
[1] http://www.malsup.com/jquery/taconite/
In this case, it doesn't look like you need the Ajax call to return content to two separate parts of the DOM. Instead, both elements returned (the script and the div elements) can simply be placed inside a single parent div.
# views/default/test.html:
{{extend 'layout.html'}}
<div id="link_details">
{{=linkDetails1}}
{{=linkDetails2}}
</div>
{{=links}}
# controllers/default.py
def test():
    ...
    for ii in range(5):
        link = A("click on link %d" % ii,
                 callback=URL('default', 'linkHandler', args=ii),
                 target="link_details")
    ...
If you provide a "target" argument to A(), the result of the Ajax call will go into the DOM element with that ID.
def linkHandler():
    ...
    content = CAT(SCRIPT(...), DIV(...))
    return content
In linkHandler, instead of returning a dictionary (which requires a view in order to generate HTML), you can simply return a web2py HTML helper, which will automatically be serialized to HTML and then inserted into the target div. The CAT() helper simply concatenates other elements (in this case, your script and associated div).

How to pass through "multi factor" authentication with mechanize?

I am attempting to log into a site via Mechanize that is proving to be a bit of a challenge. I can get past the first two form pages, but then after submitting an ID and password, I get a third page prior to being able to go into the main content that I'm attempting to scrape.
In my case I have the following Ruby source that gets me to the point where I'm hitting a roadblock:
agent = Mechanize.new
start_url = 'https://sub.domain.tld/action'
# Get first page
page = agent.get(start_url)
# Fill in the first ID field and submit form
login_form = page.forms.first
id_field = login_form.field_with(:name => "ctl00$PageContent$Login1$IdTextBox")
id_field.value = "XXXXXXXXXXX"
# Get the next password page & submit form:
page = agent.submit(login_form, login_form.buttons.first)
login_form = page.forms.first
password_field = login_form.field_with(:name => "ctl00$PageContent$Login1$PasswordTextBox")
password_field.value = "XXXXXXXXXXX"
# Get the next page and...
page = agent.submit(login_form, login_form.buttons.first)
# Try to go into the main content, just to hit a wall. :s
page = agent.click(page.link_with(:text => /Skip to main/))
The contents of the third page as per the mechanize agent output is:
https://gist.github.com/5ed57292c8f6532352fd
As you may note from this, it seems one should be able to simply use agent.click on the first link to get into the main content. Unfortunately, it simply loops back to this page. Each time, I can see a new page being loaded, but it ends up with precisely the same contents. Something is preventing me from getting through this multi-factor login to the main content, but I can't put my finger on what that might be.
Here is the page.content from that third request: http://f.cl.ly/items/252y261c303R0m2P1R0j/source.html
Any thoughts on what may be stopping me from going forward to content here?
