We are using wkhtmltopdf to convert dynamic html pages to pdf.
We need to wait until all the ajax requests are finished.
Is there a way to delay printing until a given condition is met?
You can use the --window-status option, see this post on the mailing list.
If you can change the web page's JavaScript, add this line once you are certain everything has finished loading:
if (your_condition_is_met_here) {
    window.status = 'ready_to_print';
}
Then pass a flag --window-status ready_to_print to wkhtmltopdf. For example:
wkhtmltopdf --window-status ready_to_print map.html map.pdf
See: wkhtmltopdf javascript delay for output of google maps
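Since the goal here is to wait for AJAX specifically, and assuming the page uses jQuery, the readiness condition can be jQuery's ajaxStop event, which fires once all pending requests have completed. A minimal sketch (the helper name is mine):

```javascript
// Sets win.status once every pending jQuery AJAX request has finished,
// so wkhtmltopdf's --window-status check succeeds.
// win: the page's window object; $: the page's jQuery instance.
function markReadyWhenAjaxIdle(win, $) {
    $(win.document).ajaxStop(function () {
        win.status = 'ready_to_print';
    });
}

// In a browser, wire it up to the real objects:
if (typeof window !== 'undefined' && typeof jQuery !== 'undefined') {
    markReadyWhenAjaxIdle(window, jQuery);
}
```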
You can try using the --javascript-delay option.
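That option sets a fixed pause (in milliseconds) before rendering, rather than waiting on a condition; useful when you cannot modify the page's JavaScript. The 5000 ms below is an arbitrary example value; tune it to your page's worst-case load time:

```shell
# Wait 5 seconds for scripts and AJAX to settle, then render.
wkhtmltopdf --javascript-delay 5000 map.html map.pdf
```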
I want to dynamically display a certain server status in an AsciiDoc document (rendered in GitLab).
Something like
:status-server-1: 1
ifeval::[{status-server-1} == 1]
image::green_dot.png[green_dot,20,20]
endif::[]
Now, how do I set the attribute dynamically?
Is there a way to change the attribute with JavaScript or something similar?
Asciidoctor doesn't process JavaScript. It transforms AsciiDoc markup into HTML (or other formats); JavaScript only comes into play when the resulting HTML is loaded into a browser.
It would be possible to run a JavaScript program before you run asciidoctor to determine the server status and set an environment variable that could then be used during asciidoctor processing. For example:
STATUS=`node status.js`; asciidoctor -a server_status="$STATUS" <file.adoc>
A different approach would be to use Docinfo files to add custom JS or CSS. Custom JS would allow you to perform an XHR request to discover the current server status and then adjust the classes/styles/images needed to reflect that status.
So I would go to an Instagram account, say https://www.instagram.com/foodie/, and copy the XPath that gives me the number of posts, followers, and accounts followed.
I would then run this command in a Scrapy shell:
response.xpath('//*[@id="react-root"]/section/main/article/header/section/ul')
to grab the elements in that list, but Scrapy keeps returning an empty list. Any thoughts on what I'm doing wrong here? Thanks in advance!
This site is a Single Page Application (SPA), so the DOM is built by JavaScript that has not yet run at the time Scrapy's downloader fetches the page.
When you use view(response), your browser continues executing the JavaScript the downloader collected, so you see the page with the DOM rendered (though without working access to the site's API). Look at the raw downloaded content via response.text and you'll see the difference!
In this case, you can use Selenium with PhantomJS (or another headless browser) to produce a rendered page for your spider.
Another trick: use a regular expression to select the JSON part of the inline script, parse it into a JSON object, and read the attribute values you need (number of posts, following, ...) directly from it.
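A sketch of that last trick in Python. The variable name window._sharedData is an assumption based on the page structure at the time of writing; inspect the page source to confirm it:

```python
import json
import re

def extract_shared_data(html: str) -> dict:
    """Pull the JSON blob an Instagram-style SPA embeds in a <script> tag."""
    match = re.search(r"window\._sharedData\s*=\s*(\{.*?\});", html, re.DOTALL)
    if match is None:
        raise ValueError("shared data script not found")
    return json.loads(match.group(1))

# Minimal stand-in for the real page source:
sample = '<script>window._sharedData = {"user": {"posts": 42}};</script>'
print(extract_shared_data(sample)["user"]["posts"])  # → 42
```

In a spider you would call this on response.text and drill into the parsed dictionary instead of using XPath on the unrendered DOM.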
I have around 6k posts with images, and I now need to link each of them to its source image (to open them with prettyPhoto and make them "responsive"). There are no custom fields with src links, just HTML code with the src attributes.
Any suggestion in how to do it?
Thanks in advance!
You could use a JavaScript snippet embedded in your template. The code finds all images in the post content and modifies them as needed. With jQuery, something like this:
$("img").attr("src", function() {
    return "/resources/" + this.title;
});
(taken from the jQuery docs)
The downside is that this script has to run every time the post is requested. The alternative is to write a PHP script that rewrites all the links in the database once. That is probably more effort up front, but in my opinion worth it, since it's a one-time job.
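For the one-time, fix-it-in-the-database route, the transformation itself is simple string surgery; sketched here in JavaScript for illustration (a WordPress version would do the same replacement in PHP over the stored post content). A regex is fine for simple post markup, but use a real HTML parser if your markup is messy. The rel value is what prettyPhoto keys on:

```javascript
// Wrap each <img> in a link to its own src so a lightbox like
// prettyPhoto can open the full-size image on click.
function linkImagesToSource(html) {
    return html.replace(
        /<img\b[^>]*\bsrc="([^"]+)"[^>]*>/g,
        (img, src) => `<a href="${src}" rel="prettyPhoto">${img}</a>`
    );
}

console.log(linkImagesToSource('<p><img src="/up/cat.jpg" alt=""></p>'));
// → <p><a href="/up/cat.jpg" rel="prettyPhoto"><img src="/up/cat.jpg" alt=""></a></p>
```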
I have an NSWindow with a WebView.
My program takes in a search query and executes a Google search with it, the results being displayed in the WebView, like a browser.
Instead of displaying the search results in the WebView, I'd like to automatically open the first link and display the contents of that result instead.
As a better example, how do I display the contents of the first result of Google in a WebView?
Is this even possible?
Any help greatly appreciated. Thanks!
You could use the Google Custom Search API directly. That would be more convenient.
https://developers.google.com/custom-search/v1/cse/list?hl=de-DE
You could also try making a Google request like the "I'm Feeling Lucky" button, which redirects you automatically to the first search result.
If you have to parse the HTML, take a look at the structure of the Google results page. Look for specific id and class CSS properties on the div and a tags; once you've found the ones containing the actual results, you can start parsing that content. It would probably also be easier to put together some JavaScript that finds the first result and opens it (easier than parsing the HTML in Objective-C). You can evaluate JavaScript in the WebView using [myWebView stringByEvaluatingJavaScriptFromString:@"put your js code here"].
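A sketch of that JavaScript, assuming the results page wraps its organic links in a container with id "search" (Google's markup changes often, so verify the selector against the live page first):

```javascript
// Returns the href of the first search-result link, or null if none found.
// doc: the page's document object, passed in so the logic is testable.
function firstResultHref(doc) {
    var link = doc.querySelector('#search a');
    return link ? link.href : null;
}

// Inside the WebView you would then navigate with:
//   window.location.href = firstResultHref(document);
```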
Sure it is possible.
The first approach that comes to mind is to parse the HTML response from Google, then load the first link you extracted in the WebView.
Take a look at regular expressions to make this easier.
Is there a way I can load a script at the end of the body tag instead of in the header? I want to load Facebox, and load the JavaScript calls to it after the body has loaded.
Despite what jdog wrote, there are a number of ways to take content just before Joomla echoes it to the browser and edit it. This article gives a good overview: http://www.howtojoomla.net/how-tos/development/how-to-fix-joomla-content-plugins
The specific example turns strings into links, but you can modify that to insert your markup right before the </body> tag.
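The core of that trick is a single string replacement on the rendered page. Sketched here in JavaScript for illustration; a Joomla plugin would do the equivalent in PHP on the response body during the onAfterRender event:

```javascript
// Splice a snippet (e.g. a <script> tag) in immediately before </body>.
function injectBeforeBodyEnd(html, snippet) {
    return html.replace('</body>', snippet + '\n</body>');
}

console.log(injectBeforeBodyEnd(
    '<body><p>content</p></body>',
    '<script src="facebox.js"></script>'
));
```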
No.
I assume you want to do this for page-load-speed reasons. What you could do is look at CSS/JS compression components, such as JFinalizer, and see which of them support deferred loading of JavaScript.