how to make docpad regenerate only the modified pages? - docpad

I have a simple site done in DocPad, but every time I make a small change I have to wait for all the pages to regenerate (around 40 sec). Any idea how to make it regenerate only the modified pages?

While working on your site, you can run the command docpad watch.
»To render standalone files with DocPad programmatically (will output to stdout)«
docpad render filePath
»E.g. To render a markdown file and save the result to an output file, we would use:«
docpad render inputMarkdownFile.html.md > outputMarkdownFile.html
»To just watch your website for changes and re-generate whenever a change is made, use:«
docpad watch
Source: http://docpad.org/docs/cli

Related

How to update scss on host

I have made a WordPress theme with Understrap, uploaded it to a web host, and it works fine.
The issue is that I wanted to make some changes, so I edited my SCSS file locally, but when I uploaded it the changes weren't applied. Am I supposed to upload any other file for it to work?
Thank you in advance
I don't know if this will help anyone, but to update a website's SCSS on the host you simply copy the compiled CSS file (usually in the css folder) and upload it to the same folder on the host.
The SCSS needs to be compiled into - and replace - the existing CSS files enqueued by your theme. To do this you can either install a plugin that does the compilation, or set up CSS pre-processing in your local IDE/environment.
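If you go the local pre-processing route, the step itself is simple: compile the SCSS into CSS on your machine, then upload only the resulting CSS file. As a rough illustration (any Sass compiler does the same job; the sassc gem and the file paths below are assumptions, not something Understrap requires), a tiny Ruby script could look like:

require 'sassc'   # gem install sassc

# Compile the theme's SCSS entry point into the CSS file the theme enqueues.
scss = File.read('sass/theme.scss')            # assumed input path
css  = SassC::Engine.new(scss, style: :compressed, load_paths: ['sass']).render
File.write('css/theme.min.css', css)           # assumed output path

Whichever tool you use, the file you upload (and the one the browser actually loads) is the compiled CSS, not the .scss source.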
Clear your browser cache, close the tab, and try again - it's probably just the browser running with the previously cached CSS.

Laravel refresh page takes up to 4s to load the whole DOM

I'm new to the Laravel framework, so correct me if I'm mistaken.
I have set up some templates with some assets (CSS and JS) in their corresponding folders:
SASS and JS go inside resources/sass and resources/js respectively, which then get compiled into public/css and public/js.
Views go into resources/views, and then I have set up some routes inside routes/web.php.
If you want me to add more information, I can for sure add it.
Image of console network debugging
Please ignore the images' load times; I don't care about those since they load after the DOM.
What I'm asking here is: why is every .css file or .js file taking more than a second to load?
Solution found
It seems like Xdebug was causing heavy load times on my page; commenting it out fixed it.

Vuepress - any way to compile/bulk-validate page markdown contents?

Let's take some markdown
This `<goodtag>` is OK
This <badtag> is not.
In Typora (a Mac markdown editor) and in MacDown, this renders with the problem localized: the rest of the content after <badtag> appears normally.
However, in Vuepress, things don't work at all and the page is all blank except for its title.
Module Error (from ./node_modules/vue-loader/lib/loaders/templateLoader.js):
(Emitted value instead of an instance of Error)
Errors compiling template:
tag <badtag> has no matching end tag.
1 |
2 | <ContentSlotsDistributor :slot-key="$parent.slotKey"><p>This <code><goodtag></code> is OK</p>
3 | <p>This <badtag> is not.</p>
I understand that <badtag> is going to cause a problem with Vuepress, and why. But I am not writing this as an individual page; I am pushing lots of existing markdown content into a Vuepress directory. The overall Vuepress dev build doesn't complain - this only shows up as a problem when I navigate to that particular page, find it all blank, and look at the JS console.
I already have a Python batch that performs transformations on the local markdown (for example, pushing local file-system images into .vuepress/public with the appropriate URLs). Can I catch these errors by having the batch call Vuepress itself to pre-validate the content?
Is there any way for me to run a "node vuepress compile /foo/mymarkdown.md" at an individual page, check its return code and notify me that something needs fixing on that page?
As I wrote this, I realized that maybe an npm run build of the actual distribution would do the trick. Maybe. But I need to isolate it on a page-by-page basis, not bulk-run it on 500+ pages at a time and then pick out all the errors.
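A rough sketch of that per-page idea (untested; it assumes the VuePress CLI is available via npx and simply greps the build output for the compiler message shown above, which may not hold for every VuePress version - the same shell-out could equally be driven from the existing Python batch):

require 'open3'
require 'tmpdir'
require 'fileutils'

# Copy one markdown page into a throwaway docs dir, run the VuePress build on
# just that dir, and look for the template-compiler error in the output.
def page_compiles?(markdown_path)
  Dir.mktmpdir do |dir|
    FileUtils.cp(markdown_path, File.join(dir, 'README.md'))
    out, err, status = Open3.capture3('npx', 'vuepress', 'build', dir)
    log = out + err
    status.success? && !log.include?('has no matching end tag')
  end
end

puts page_compiles?('/foo/mymarkdown.md') ? 'OK' : 'needs fixing'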

Compiled CSS file wouldn't display in my browser

I am new to Laravel Homestead. I am using Windows 10 for my learning and am able to run npm run watch-poll. The app.scss compiles successfully and the app.css file is updated in the public directory, but it doesn't show up in my browser. I have included the CSS link in my blade template. Only when I restart my machine is the CSS displayed. Can anyone help, please?
I do appreciate it.
This is how I solved this problem in my own project:
delete your link tag containing href="app.css"
save file
paste the link tag again
save file
If the above doesn't work, try pressing Ctrl+F5 to do a hard refresh that bypasses the browser cache.
How to disable browser cache on chrome:
https://www.technipages.com/google-chrome-how-to-completely-disable-cache

How to download pdf file in ruby without .pdf in the link

I need to download a PDF from a website that does not provide a link ending in .pdf, using Ruby. Manually, when I click on the link to download the PDF, it takes me to a new page, and the dialog box to save/open the file appears after some time.
Please help me download the file.
The link
You can do this:
require 'open-uri'

# With open-uri required, open() can fetch a URL (use URI.open on newer Rubies);
# write the response body to a local file in binary mode.
File.open('my_file_name.pdf', 'wb') do |file|
  file.write open('http://someurl.com/2013-1-2/somefile/download').read
end
I have been doing this for my projects and it works.
If you just need a simple Ruby script to do it, I'd just run wget, like this: exec 'wget "http://path.to.the.file/and/some/params"'
At that point, though, you might as well just run wget directly.
The other way is to just run a GET against the URL that you know the PDF is at:
require 'net/http'
source = Net::HTTP.get(URI('http://the.website.com/and/some/params'))
There are a number of other HTTP clients that you could use, but as long as you make a GET request to the endpoint the PDF is at, it should give you the raw data. Then you just write that data to a file with a .pdf name, and you'll have the PDF.
In your case, I ran the following commands to get the pdf
wget http://www.lawcommission.gov.np/en/documents/prevailing-laws/constitution/func-download/129/chk,d8c4644b0f086a04d8d363cb86fb1647/no_html,1/
mv index.html thefile.pdf
Then open the PDF. Note that these are Linux commands. If you want to get the file with a Ruby script, you could use something like what I previously mentioned.
Update:
There is an added complication that was not initially stated: the URL to the PDF changes every time the PDF is updated. To make this work, you probably want to do some web scraping; I suggest Nokogiri. This way you can look at the page where the download link is and then perform a GET request on the desired URL. Furthermore, the server that hosts the PDF is misconfigured and breaks Chrome within a few seconds of opening the page.
How to solve this problem: I went to the site and refreshed it, then broke the connection to the server (press the X where there would otherwise be a refresh button). Then right-click next to the download link, select Inspect Element, and browse the DOM to find something that definitively identifies the link (like an id). Thankfully, I found <strong id="telecharger"> Download</strong>. This means you can use something like page.css('strong#telecharger')[0].parent['href']. This should give you a URL; then you can perform a GET request as described above. I don't have time to write the whole script for you (too much work to do), but this should be enough to solve the problem.
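In outline, that scraping approach could look something like this (untested sketch: the page URL is a placeholder, there is no error handling, and only the strong#telecharger selector comes from the inspection described above):

require 'open-uri'
require 'nokogiri'   # gem install nokogiri

page_url = 'http://www.lawcommission.gov.np/PAGE-WITH-THE-DOWNLOAD-LINK'  # placeholder
page     = Nokogiri::HTML(URI.open(page_url))

# Find the identifying element and walk up to the link that wraps it,
# then resolve the (possibly relative) href against the page URL.
href    = page.at_css('strong#telecharger').parent['href']
pdf_url = URI.join(page_url, href).to_s

File.open('thefile.pdf', 'wb') do |file|
  file.write URI.open(pdf_url).read
end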
