I am trying to create an ultimate gulpfile that we can use on one of our big sites (one with multiple themes depending on the section of the site you are in). I'm trying to get it to only run the process it needs to run and not recompile everything.
Let me lay out exactly what I'm trying to achieve:
Folder Structure
src/
    master-theme/
        css/
            style.scss
            partials/
                _a.scss
                _b.scss
        img/
            a.jpg
            b.jpg
    sub-theme/
        css/
            style.scss
            partials/
                _c.scss
                _d.scss
        img/
            c.png
            d.jpg
I want these files to be compressed/compiled and to end up in the destination folder with the same folder structure (just replace src with dest in your mind)
The Problem
At the moment I can get it to do what I want - but the gulpfile compiles and compresses everything. E.g. if I add an image to sub-theme/img it will run the image compression for all the "themes". I am using gulp-changed, but it still means that it is looking at all the images across the site.
The same goes for the Sass - if I update _c.scss, both the master CSS and the sub-theme CSS get compiled, which is obviously not desired.
Current Solution
I don't really have one at the moment. Right now I am using gulp-file-tree to generate a JSON file of the folder structure, then whenever a file is changed, looping through it with a function (which I know is horrible - but it's a solution that currently works):
var tree = require('./build/tree.json');
var children = tree.children;

for (var i = children.length - 1; i >= 0; i--) {
    var child = children[i];
    if (child.isDirectory) {
        task(child);
    }
}
Here task() is a gulp task passed in (e.g. Sass compilation).
The folder structure is not up for discussion - I don't want this to turn into a 'structure your files differently' kind of thing. There are several other factors, not related to this issue, behind why we are set up this way (sorry, I had to say that...).
I'm open to trying anything, as I've stared at this file for days now. The tasks I am trying to run are:
Sass compilation
Sprite generation
SVG sprite to PNG sprite
Image compression
Javascript compression
Thanks in advance for your help. If a solution is found, I'll write a proper post about it so that others will hopefully not feel my pain...
I'm doing pretty much the same thing, and I think I've nailed it.
gulpfile.js:
var gulp = require('gulp'),
debug = require('gulp-debug'),
merge = require('merge-stream'),
sass = require('gulp-sass'),
less = require('gulp-less'),
changed = require('gulp-changed'),
imagemin = require('gulp-imagemin'),
prefix = require('gulp-autoprefixer'),
minifyCSS = require('gulp-minify-css'),
browserSync = require('browser-sync'),
reload = browserSync.reload,
path = require('path'),
glob = require('glob'),
uglify = require('gulp-uglify'); // needed for the js stream below
// Log errors to the console
function errorHandler(error) {
console.log(error.toString());
this.emit('end');
}
function processThemeFolder(src) {
function debugTheme(type) {
return debug({ title: 'theme ' + theme + ' ' + type});
}
var theme = path.basename(src);
var dest = 'public/themes/' + theme;
return merge(
gulp
.src([src + '/sass/**/*.scss'])
.pipe(changed(dest + '/css', { extension: '.css' }))
.pipe(debugTheme('sass'))
.pipe(sass())
.pipe(minifyCSS())
.pipe(gulp.dest(dest + '/css')),
gulp
.src([src + '/less/**/*.less'])
.pipe(changed(dest + '/css', { extension: '.css' }))
.pipe(debugTheme('less'))
.pipe(less())
.pipe(minifyCSS())
.pipe(gulp.dest(dest + '/css')),
gulp
.src([src + '/js/**/*.js'])
.pipe(changed(dest + '/js'))
.pipe(debugTheme('js'))
.pipe(uglify())
.pipe(gulp.dest(dest + '/js')),
gulp
.src([src + '/img/**/*.{png,jpg,gif}'])
.pipe(changed(dest + '/img'))
.pipe(debugTheme('img'))
.pipe(imagemin())
.pipe(gulp.dest(dest + '/img'))
).on('change', reload);
}
gulp.task('themes', function() {
var srcThemes = glob.sync('resources/themes/*');
return merge(srcThemes.map(processThemeFolder));
});
// ...
The key here is to use gulp-changed to only pass through the changed files. The rest is cream on top.
The compilation streams all show a debug line detailing which files are going into the stream. On a change in the stream, browserSync is notified to reload the browsers, using streaming (if possible). Each theme's stream is only completed once all of its compilation streams are done, and the overall themes task is only marked as done when all the themes are done.
The theme's source files are stored in resources/themes/themename, and the output is written to public/themes/themename.
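Not part of the original answer, but for completeness: a watch task could sit on top of this so the themes task re-runs whenever a theme file changes, with gulp-changed keeping the per-file work minimal. A minimal sketch, assuming gulp 3 syntax and the resources/themes path used above:
gulp.task('watch', ['themes'], function() {
    // Re-run the whole 'themes' task on any change; gulp-changed then
    // filters each stream down to only the files that actually changed.
    gulp.watch('resources/themes/**/*', ['themes']);
});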
This is working very well for me, YMMV. :-)
I would use the following plugin to manage a cache of your processed files. It uses that cache to determine what has changed and what has already been processed on a previous run.
https://github.com/wearefractal/gulp-cached
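A minimal sketch of how it could be wired in, assuming the src/dest layout from the question (the cache name 'sass' is arbitrary):
var gulp = require('gulp'),
    cached = require('gulp-cached'),
    sass = require('gulp-sass');

gulp.task('sass', function() {
    return gulp.src('src/**/css/**/*.scss')
        .pipe(cached('sass')) // drops files whose contents haven't changed since the last run
        .pipe(sass())
        .pipe(gulp.dest('dest'));
});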
HTH
You can create a function with parameters that compiles only the changed file, then call another one that combines the results. For example, generate a.css and b.css; when a.scss is updated, only a.css should be recompiled. After each call, trigger a combine function that puts a and b together. Google how to get the path of the changed file - I don't remember which plugin I used.
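A rough sketch of the idea, assuming gulp 3 syntax (the 'combine' task is hypothetical and would concatenate the generated CSS files):
var gulp = require('gulp'),
    sass = require('gulp-sass');

gulp.watch('src/**/*.scss', function(event) {
    // event.path is the file that changed - compile just that one file
    gulp.src(event.path)
        .pipe(sass())
        .pipe(gulp.dest('dest'))
        .on('end', function() {
            gulp.start('combine'); // hypothetical task that puts a.css and b.css together
        });
});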
Related
Maybe this is one of the impossible ones, but here goes.
I have a ton of images of QR codes in a folder. It doesn't matter to me whether it's in Google Drive or in a local folder.
I would like a script that automatically loads all the images into column A and the file names into column B, AND automatically adds new images when they are uploaded to the Google Drive folder.
Example:
Qr1.jpeg will be loaded into cell A1 and cell B1 will be "Qr1"
Qr2.jpeg will be loaded into cell A2, and so on...
It would be nice if the images were scaled to 10x10 cm. :)
Is this even possible?
Hope you guys can help!
Thanks!
Oliver
Not the complete answer... but it's a part of it.
I've been working on getting thumbnails into my spreadsheet ever since you asked this question. I won't go into all the paths I took, but I finally found this GitHub link in the comments of this issue, which resulted in the following code:
function myImageFiles() {
    var ss = SpreadsheetApp.getActive();
    var sh = ss.getSheetByName('MyImages');
    var files = DriveApp.getFiles();
    var cnt = 1;
    while (files.hasNext()) {
        var fi = files.next();
        var type = fi.getMimeType();
        // Only list image files
        if (type == MimeType.GIF || type == MimeType.JPEG || type == MimeType.PNG) {
            // The =IMAGE() formula renders the Drive thumbnail in the last column
            sh.appendRow([cnt++, fi.getName(), type, fi.getUrl(), '=IMAGE("' + getThumbNailLink(fi.getId()) + '",1)']);
        }
    }
}

function getThumbNailLink(fileId) {
    // Requires the Advanced Drive service to be enabled
    var file = Drive.Files.get(fileId);
    return file.thumbnailLink;
}
The result is that I now have a spreadsheet with all of my image filenames and thumbnails. You'll need the Advanced Drive service enabled - take a close look at getThumbNailLink().
I am a novice user of ExtendScript, Basil.js and JavaScript.
After seeing the project http://basiljs.ch/gallery/ending-the-depression-through-amazon/ I wondered if it is possible to simply get Basil.js/ExtendScript/InDesign to grab images from a webpage and insert them directly into an InDesign document.
I am a beginner, so does someone have a complete script - directory, functions, example URL, import, save, place...?
This is in the hope that I could simply put an address into the script and it would harvest all the images on the webpage and put them into an InDesign document.
I tried their example http://basiljs.ch/tutorials/getting-images-from-urls/ but when I copy that and run it in ExtendScript it just searches my computer...
UPDATE:
#includepath "~/Documents/;%USERPROFILE%Documents";
#include "basil/basil.js";
function draw() {
var url = "https://raw.github.com/basiljs/basil.js/master/lib/basil.png";
// download the url to a default location, filename according to url:
// -> "the project folder" + data/download/basil.png
//b.download(url);
// download url to a specific location in the project folder:
// -> "the project folder" + data/download_images_files/basil_logo.png
// b.download(url, "download_images_files/basil_logo.png");
// download url to a specific location e.g. to your desktop
// -> ~/Desktop/basil_logo.png
var newFile = new File("~/Desktop/basil_logo.png");
b.download(url, newFile);
}
b.go();
... It hits an error in Basil's Core.js at:
var runDrawOnce = function() {
app.doScript(function() {
if (typeof glob.draw === 'function') {
glob.draw();
}
}, ScriptLanguage.javascript, undef, UndoModes.ENTIRE_SCRIPT);
};
Console says...
Using basil.js 1.08 ...
sh ~/Documents/basiljs/bundle/lib/download.sh /Users/jamestk/Desktop (Address here, I don't have the sufficient reputation on stack overflow)
Basil.js Error -> b.shellExecute(): Error: sh: /Users/jamestk/Documents/basiljs/bundle/lib/download.sh: No such file or directory
The easiest way would probably be to use the b.shellExecute(cmd) command to run a shell script that downloads your images.
You could use wget from the shell to download all images from a URL; see this question: How do I use Wget to download all Images into a single Folder
You should then make sure to save the images in the document's data directory. After that, you can get all the images by using the Folder object's getFiles() method:
var myImagesArray = Folder('~/path/to/my/folder').getFiles();
Finally, you could use that array in a loop to place all the images into your document.
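A rough sketch of that loop, assuming the downloaded images sit in the folder used above and that b.image() accepts the File objects returned by getFiles():
function draw() {
    var myImagesArray = Folder('~/path/to/my/folder').getFiles();
    for (var i = 0; i < myImagesArray.length; i++) {
        // place each image, stepping down the page (positions here are arbitrary)
        b.image(myImagesArray[i], 0, i * 100);
    }
}
b.go();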
Subjective question time!
I'm coding a website that hosts a large number of files and folders for an open organization that must post all documents online for public scrutiny. I have not yet begun coding the actual viewer, as I'm wondering what the standard, most accessible approach is.
The site must be easy to access and available to all devices from desktops to phones. That said, I don't have to code with older, outdated browsers in mind. The previous site used a static approach with Python and Django. This is my first real Node.js + Express job, and I'm not sure of the performance differences.
At present, I see two ways to accomplish my task:
1. Use Ajax
I know I can shove everyone onto a generic /documents page and allow them to navigate through the folders themselves. However, I want document links to work when shared, so I'll have to change the URL manually as users move around and submit plenty of Ajax requests back to the server.
I like this approach in that it will likely give a nicer user interaction. I don't like the number of Ajax requests, and I fear that on less powerful devices like phones and tablets, all that Ajax and DOM manipulation will slow down or not work. Additionally, I'd have to parse the URL into a resource path on either the back end or the front end for retrieval.
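One way the shareable-URL part could be handled, as a hypothetical sketch (not from the original post; openFolder, loadFolder, renderListing and the /api/documents endpoint are made-up names): update the address bar with the History API while fetching folder contents over Ajax.
// fetch the folder listing and update the DOM (renderListing is hypothetical)
function loadFolder(path) {
    fetch('/api/documents/' + path)
        .then(function(res) { return res.json(); })
        .then(renderListing);
}

// navigate to a folder and keep the URL shareable
function openFolder(path) {
    history.pushState({ path: path }, '', '/documents/' + path);
    loadFolder(path);
}

// handle back/forward navigation
window.onpopstate = function(e) {
    if (e.state) { loadFolder(e.state.path); }
};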
2. Go 'Static'
I'm using node.js and Jade on the back end, so I know I can just break apart a url, find the folder hierarchy, and give a whole new page to the user.
I like this approach because it doesn't require the user's machine to do any computation (and will likely be faster on slower devices), and it means not doing a ton of URL work. I don't like that desktop users will end up waiting for the synchronous operations I'll have to use to prepare the pages, nor the extra server load and reduced responsiveness.
Currently
I'm looking into the static approach right now for what I perceive to be a bit more accessibility (even at the cost of page load times), but I'm here for more information to guide the right choice. I'm looking for answers that explain why one way would be better than the other, and that are impartial or share experience. Thank you in advance for your help!
Right. Since no one else has responded yet, I just went ahead and made the file browser anyway.
I ended up going with the static method. It turned out to be relatively easy, apart from having to manipulate a bunch of strings, and I can only imagine that twice the work would have been necessary for Ajax.
The response times are fairly long: a generic static page that does no computation on my site takes about 40-70ms, while the new documents one takes twice that at ~150ms. Although in practice 150ms isn't anything to get upset over for my needs, in a large scale environment I'm sure my glob functions in the documents folder would just bog down the system.
For anyone wondering, here's what I did
Code
The hierarchy looks like this
|app
|  |controllers
|  |  |-documents.js
|  |views
|  |  |-documents.jade
|public
|  |docs
|  |  |
|  |  |//folders
|  |  |
documents.js
var express = require('express');
var router = express.Router();
var glob = require('glob');
module.exports = function(app) {
app.use('/', router);
};
router.get('/documents*', function serveDocsHome(req, res) {
//this removes %20 from the requested url to match files with spaces
req.originalUrl = req.originalUrl.replace(/%20/g, ' ');
//fun string stuff to make links work
var dir = '/docs' + req.originalUrl.substr(10);
var url = req.originalUrl + '/';
//for moving up a directory
var goUp = false;
var folderName = 'Home';
if (req.originalUrl != '/documents') {
var end = req.originalUrl.lastIndexOf('/');
folderName = req.originalUrl.substr(end + 1);
goUp = true;
}
//get all the folders
var folders = glob.sync('*/', {
cwd : 'public' + dir
});
for (var i = 0; i < folders.length; i++) {
folders[i] = folders[i].substr(0, folders[i].length - 1);
}
//get all the files
var files = glob.sync('*', {
cwd : 'public' + dir,
nodir : true
});
//attach the files and folders
res.locals.folders = folders;
res.locals.files = files;
res.locals.loc = dir + '/';
res.locals.goUp = goUp;
res.locals.url = url;
res.locals.folderName = folderName;
//render the doc
res.render('documents', {
title : 'Documents',
});
});
documents.jade
extends layout
append css
    link(rel='stylesheet', href='/css/docs.css')
append js
    script(src='/js/docs.js')
block content
    .jumbotron(style='background: url(/img/docs.jpg); background-position: center 20%; background-repeat: no-repeat; background-size: cover;')
        .container
            h1= title
            p View minutes, policies, and guiding papers of the [name]
    .container#docs
        .row
            .col-xs-12.col-sm-3.sidebar.sidebar-wrap
                h3= folderName
                ul.no-style.jumplist
                    hr
                    if goUp
                        li#go-up: a.message(href='./') #[img(src='/img/icons/folderOpen.png')] Up One Folder
                    each val in folders
                        li: a(href='#{url + val}') #[img(src='/img/icons/folder.png')] #{val}
            .col-xs-12.col-sm-9
                h3 Files
                ul.no-style
                    if files.length != 0
                        each val in files
                            li: a(href='#{loc + val}')= val
                    else
                        li.message No Files Here
And here's part of the page:
I've optimized my CSS compilation for speed in gulp and essentially queued 3 operations together in one task. It works well, but now I'm a bit stuck on generating proper source maps.
I simplified your sample a bit, but you can try a solution similar to this one:
var gulp = require('gulp'),
filter = require('gulp-filter'),
less = require('gulp-less'),
concat = require('gulp-concat'),
sourcemaps = require('gulp-sourcemaps');
gulp.task('default', function(){
var lessFilter = filter(['**/*.less']);
var cssFilter = filter(['**/*.css']);
return gulp.src('src/**/*')
.pipe(sourcemaps.init())
.pipe(lessFilter)
.pipe(less())
.pipe(lessFilter.restore())
.pipe(cssFilter) // Needed only in case the src folder contains files that are not compiled to css at this stage of the pipeline
.pipe(concat('./final.css'))
.pipe(sourcemaps.write())
.pipe(gulp.dest('dest'));
});
I have a phantomjs script that is stepping through the pages of my site.
For each page, I use page = new WebPage() and then page.close() after finishing with the page. (This is a simplified description of the process, and I'm using PhantomJS version 1.9.7.)
While on each page, I use page.renderBase64('PNG') one or more times, and add the results to an array.
When I'm all done, I build a new page and cycle through the array of images, adding each to the page using <img src="data:image/png;base64,.......image.data.......">.
When done, I use page.render(...) to make a PDF file.
This is all working great... except that the images stop appearing in the PDF after about the 20th image - the rest just show as 4x4 pixel black dots.
For troubleshooting this...
- I've changed the render to output a PNG file, and have the same problem after the 19th or 20th image.
- I've outputted the raw HTML. I can open that in Chrome, and all the images are visible.
Any ideas why the rendering would be failing?
Solved the issue. Turns out that PhantomJS was still preparing the images when the render was executed. Moving the render into the onLoadFinished handler, as illustrated below, solved the issue. Before, the page.render was being called immediately after the page.content = assignment.
For those interested in doing something similar, here's the gist of the process we are doing:
var htmlForAllPages = [];
then, as we load each page in PhantomJS:
var img = page.renderBase64('PNG');
...
htmlForAllPages.push('<img src="data:image/png;base64,' + img + '">');
...
When done, the final PDF is created... We have a template file ready, with all the required HTML and CSS etc. and simply insert our generated HTML into it:
var fs = require('fs');
var template = fs.read('DocumentationTemplate.html');
var finalHtml = template.replace('INSERTBODYHERE', htmlForAllPages.join('\n'));
var pdfPage = new WebPage();
pdfPage.onLoadFinished = function() {
pdfPage.render('Final.pdf');
pdfPage.close();
};
pdfPage.content = finalHtml;