How to compile SCSS faster - sass

Simple topic, simple question: is there a way to compile SCSS faster when you have a MASSIVE folder of partial files like this?
I know that the more partial files you have, the slower the compile is, but I'd like to know if there is a way to compile faster.

In general, Sass is compiled by implementations written in different programming languages. If the compiler you are using is too slow for you, you can use Sass directly, use Dart Sass (https://sass-lang.com/dart-sass), or use a compiler written in a faster programming language, such as Java.
This is a good answer (--link--) 👇
There are three things to think about:
Sass becomes slower as more Sass files are included in the process. Big Sass frameworks tend to use a lot of files, and once you pull in many big modules the build can slow down considerably. Sometimes more modules are included than are actually needed.
Often the default project settings try to do a lot of work at the same time, e.g. writing minified files in the same process simply doubles the time. If that is the case, just produce the minified files at the end of your work. On top of that, additional post-processors such as autoprefixers, linters and maybe PostCSS need extra time, which counts double when minified files are written at the same time.
JS-based Sass compilers are slower overall, so you can save time by running native Sass directly. This may not be as convenient, but in big projects it has helped me a lot. If you want to try it, here is the link with installation instructions: https://sass-lang.com/install (a quick command-line sketch follows below).
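For example, a rough sketch of that workflow with the Dart Sass command line (file names here are placeholders; the flags shown are standard Dart Sass options):

# development build: plain CSS only, no minification
sass src/main.scss dist/main.css

# watch mode: recompile only when a file changes
sass --watch src/main.scss dist/main.css

# release build, run once at the end: minified output
sass --style=compressed src/main.scss dist/main.min.css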

Related

Is it recommended to keep a program's sources (as opposed to lib sources) in a single file?

I am taking my first steps into Go and, obviously, I am still reasoning from what I'm used to in other languages rather than from an understanding of Go's specifics and style.
I've decided to rewrite a Ruby background job I have that takes ages to execute. It iterates over a huge table in my database and processes the data of each row individually, so it's a good candidate for parallelization.
Coming from a Ruby on Rails task and using an ORM, this was meant to be, as I thought of it, a quite simple two-file program: one file containing a struct type and its methods to represent and work with a row, and the main file to run the database query and loop over the rows (maybe a third file to abstract the database access logic if it gets too heavy in my main file). This file separation, as I intended it, was meant for codebase clarity more than having any relevance in the final binary.
I've read and seen several things on the topic, including questions and answers here, and it always tends to resolve into writing code as libraries, installing them and then using them in a single-file (package main) program.
I've read that one may pass multiple files to go build/run, but it complains if there are several package names (so basically, everything should be in main), and it doesn't seem that common.
So, my questions are:
did I get it right, and is having code mostly as a library, with a single-file program importing it, the way to go?
if so, how do you deal with having to build libraries repeatedly? Do you build/install on each change in the library codebase before executing (which is far less convenient than what go run promises to be), or is there something common I don't know of to run a library-dependent program quickly while working on those libraries' code?
No.
Go and the go tool work on packages only (only go run works on files, but that is a different story): when organizing Go code you should not think about files but about packages. A package may be split into several files, but that is used for keeping test code separate, limiting file size, or grouping types, methods, functions, etc.
Your questions:
did I get it right, and is having code mostly as a library with a single-file program importing it the way to go?
No. Sometimes this has advantages, sometimes not. Sometimes a split may be one lib + one short main; in other cases just one large main might be better. Again: it is all about packages and never about files. There is nothing wrong with a single 12-file main package if it is a real standalone program. But maybe extracting some stuff into one or a few other packages would result in more readable code. It all depends.
if so, how do you deal with having to build libraries repeatedly? Do you build/install on each change in the library codebase before executing (which is far less convenient than what go run promises to be), or is there something common I don't know of to run a library-dependent program quickly while working on those libraries' code?
The go tool tracks the dependencies and recompiles whatever is necessary. Say you have a package main in main.go which imports a package foo. If you execute go run main.go it will recompile package foo transparently iff needed. So for quick hacks: No need for a two-step go install foo; go run main. Once you extract code into three packages foo, bar, and waz it might be a bit faster to install foo, bar and waz.
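As a minimal sketch of that layout, assuming Go modules (the module path example.com/app, the package foo and its Greeting function are made up for illustration):

// foo/foo.go
package foo

// Greeting returns a canned message; a stand-in for real library code.
func Greeting() string {
    return "hello from foo"
}

// main.go
package main

import (
    "fmt"

    "example.com/app/foo"
)

func main() {
    // "go run ." rebuilds package foo transparently whenever it changes.
    fmt.Println(foo.Greeting())
}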
No. Look at the Go commands and Go standard packages for exemplars of good programming style.
Go Source Code

How To Structure Large OpenCL Kernels?

I have worked with OpenCL on a couple of projects, but have always written the kernel as one (sometimes rather large) function. Now I am working on a more complex project and would like to share functions across several kernels.
But the examples I can find all show the kernel as a single file (very few even call secondary functions). It seems like it should be possible to use multiple files - clCreateProgramWithSource() accepts multiple strings (and combines them, I assume) - although pyopencl's Program() takes only a single source.
So I would like to hear from anyone with experience doing this:
Are there any problems associated with multiple source files?
Is the best workaround for pyopencl to simply concatenate files?
Is there any way to compile a library of functions (instead of passing in the library source with each kernel, even if not all are used)?
If it's necessary to pass in the library source every time, are unused functions discarded (no overhead)?
Any other best practices/suggestions?
Thanks.
I don't think OpenCL has a concept of multiple source files in a program - a program is one compilation unit. You can, however, use #include and pull in headers or other .cl files at compile time.
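For example, a shared helper pulled in with #include might look like this (file and function names are made up; you would pass an include path such as -I ./cl in the build options so the compiler can find the included file):

// common.cl - helper shared by several kernels
float scale(float x) {
    return 2.0f * x;
}

// kernel_a.cl
#include "common.cl"

__kernel void double_values(__global float *data) {
    size_t i = get_global_id(0);
    data[i] = scale(data[i]);
}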
You can have multiple kernels in an OpenCL program - so, after one compilation, you can invoke any of the set of kernels compiled.
Any code not used - functions, or anything statically known to be unreachable - can be assumed to be eliminated during compilation, at some minor cost to compile time.
In OpenCL 1.2 you can compile translation units separately and link the resulting objects together (clCompileProgram / clLinkProgram).
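A rough host-side sketch of that flow, with error handling trimmed (the helper and kernel sources are made-up examples):

#include <CL/cl.h>
#include <stdio.h>

// Compile each source to an "object", then link them into one executable program.
int main(void) {
    cl_platform_id plat;
    cl_device_id dev;
    cl_int err;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_DEFAULT, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, &err);

    const char *lib_src  = "float scale(float x) { return 2.0f * x; }";
    const char *kern_src =
        "float scale(float x);"                                /* resolved at link time */
        "__kernel void double_values(__global float *d) {"
        "  size_t i = get_global_id(0); d[i] = scale(d[i]); }";

    cl_program lib  = clCreateProgramWithSource(ctx, 1, &lib_src,  NULL, &err);
    cl_program kern = clCreateProgramWithSource(ctx, 1, &kern_src, NULL, &err);
    clCompileProgram(lib,  1, &dev, "", 0, NULL, NULL, NULL, NULL);
    clCompileProgram(kern, 1, &dev, "", 0, NULL, NULL, NULL, NULL);

    cl_program objs[] = { kern, lib };
    cl_program exe = clLinkProgram(ctx, 1, &dev, "", 2, objs, NULL, NULL, &err);
    printf("link %s\n", err == CL_SUCCESS ? "ok" : "failed");
    return 0;
}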

How expensive is it for the compiler to process an include-guarded header?

To speed up the compilation of a large source file, does it make more sense to prune back the sheer number of headers used in a translation unit, or does the cost of compiling code far outweigh the time it takes to process out an include-guarded header?
If the latter is true, the engineering effort would be better spent creating more, lighter-weight headers instead of fewer.
So how long does it take for a modern compiler to handle a header that is effectively include-guarded out? At what point would the inclusion of such headers become a hit on compilation performance?
(related to this question)
I read an FAQ about this the other day... first off, write the correct headers, i.e. include all headers that you use and don't depend on undocumented dependencies (which may and will change).
Second, compilers usually recognize include guards these days, so they're fairly efficient. However, you still need to open a lot of files, which may become a burden in large projects. One suggestion was to do this:
Header file:
// file.hpp
#ifndef H_FILE
#define H_FILE
/* ... */
#endif
Now to use the header in your source file, add an extra #ifndef:
// source.cpp
#ifndef H_FILE
# include <file.hpp>
#endif
It'll be noisier in the source file, and you require predictable include guard names, but you could potentially avoid a lot of include-directives like that.
Assuming C/C++, simple recompilation of header files scales non-linearly for a large system (hundreds of files), so if compilation performance is an issue, it is very likely down to that. At least unless you are trying to compile a million line source file on a 1980s era PC...
Pre-compiled headers are available for most compilers, but they generally take specific configuration and management to work on non-system headers, which not every project does.
See for example:
http://www.cygnus-software.com/papers/precompiledheaders.html
'Build time on my project is now 15% of what it was before!'
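As a rough sketch of what that configuration looks like (pch.h is a placeholder name; the exact flags depend on the toolchain):

// pch.h - one umbrella header for the heavy, rarely-changing includes
#include <map>
#include <string>
#include <vector>

// GCC/Clang: precompile it once, then include it as usual
//   g++ -x c++-header pch.h -o pch.h.gch
// MSVC: create with /Yc"pch.h", consume with /Yu"pch.h"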
Beyond that, you need to look at the techniques in:
http://www.amazon.com/exec/obidos/ASIN/0201633620/qid%3D990165010/002-0139320-7720029
Or split the system into multiple parts with clean, non-header-based interfaces between them, say .NET components.
The answer:
It can be very expensive!
I found an article where someone had done some testing of the very issue addressed here, and I was astonished to find you can cut your compilation time under MSVC by at least an order of magnitude if you write your include guards properly:
http://www.bobarcher.org/software/include/index.html
The most astonishing line from the results is that a test file compiled under MSVC 2008 goes from 5.48s to 0.13s given the right include guard methodology.

Why can't the compiler just compile my code as I type it?

From the user's point of view, it could work as smoothly as syntax colouring does today. If you stop typing for long enough (maybe a couple of seconds) the compilation (not linking) would finish, and code errors would be identified using something like syntax colouring.
It's not like my 3GHz quad core monster computer was really busy doing something else. Why not let it compile all the time?
That's exactly what the VB.NET code editor in Visual Studio does.
The advantage is much more accurate IntelliSense than C#. The disadvantage is that it wastes truly vast amounts of processor time and memory. :-(
It can. Or, to be more useful, the answer to this question depends on
What language
What degree of optimization you require
How annoyed you will be if you temporarily type something dumb, and the compiler compiles and injects the result into the binary you are debugging before you can fix it.
Some really strong optimizations would be very messy to mess with on the fly. On the other hand, a basic compilation, if there's no need to worry about assigning offsets for X86 instructions? Sure.
Some IDEs do compile (or at least check syntax and some semantics) code as it is typed. For example, I think Eclipse does it. I think Visual Basic 6 (and maybe earlier versions) did this.
Not sure what IDE you're using, but that's how VB.NET works.
I'm not well-versed in compilers or the methods by which code is converted to IL and machine language, etc. But even so I can see how altering my program by one flow control statement can completely invalidate the work a compiler has done up to that point. By adding or changing a single line of code, entire portions of a program may become obsolete, unused, or in some other way require re-evaluation.
I think I'd rather save those CPU cycles for distributed.net or SETI@home instead of constantly recompiling my code as I alter it.
That totally depends on the language.
Languages that have a context-independent syntax "could" pre-compile expressions as they are typed. However, compiling projects in such languages is always fast anyway, so why burn the CPU when you can quickly batch the work once the code is ready?
Other languages, most infamously C++, are context-dependent. In most cases the compiler can't understand an expression without having already read all the code that comes before it. It's really, really hard to parse, and that's why we are only now getting error checking before compilation (in VS2010 and other recent IDEs). In this case it seems nearly impossible to implement the feature you're asking for.
That said, I'm not a specialist at all. That's all I know about it.
Even interpreted languages like PHP have support for this in the Komodo editor. I'm sure there are many more editors out there that support this for almost any language.

JavaScript source code analysis (specifically duplication checking)

Partial duplicate of this
Notes:
I already use JSLint extensively via a tool I wrote that scans my current project directory at intervals for recently updated/created .js files. It has drastically improved my productivity, and I doubt there is anything as good as JSLint for the price (it's free).
That said, is there any analysis tool out there that can find repetitive or near-duplicate code blocks, the goal being to make it easier to find opportunities to consolidate large files or small/medium-sized projects?
It may not be exactly what you're after, but Google's JavaScript optimizer is worth a look.
Our CloneDR is a tool for finding exact and near-miss cloned code blocks for a variety of languages. It will find duplicates in the same file or across literally thousands of files if you have them. You don't have to provide it with any guidance; it can find the cloned code by itself. And it won't be fooled by different indentation or line breaks, or even consistent renaming of identifiers.
It does support JavaScript, even if it isn't clear from the website.
You can see sample clone reports for a variety of languages at the website.
You may want to have a look at jsinspect.
jsinspect ./src
It will print a list of code blocks that are either identical or structurally very similar.
And there's also jscpd.
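jscpd is run much the same way, for example (assuming it is installed via npm; options vary by version):

npx jscpd ./src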
