SASS Rendering in Go

I am beginning to use Go for web development, but I am having issues with asset management. I would prefer to have a tool like Rails' Asset Pipeline for managing (and compressing) CSS/JS files, as well as SASS.
While I am able to work with plain CSS and JS files, I am not able to work with SASS. Is there a way to use SASS in a Go project? I am not using a framework.
Thank you!

I'm not familiar with Ruby on Rails, but I assume it gave you some sort of tooling for managing the source-to-distribution client-side asset transition (polyfills, transpiling, minification, compiling SASS/SCSS to CSS, compiling TypeScript/CoffeeScript-style languages to JavaScript, etc.).
While a web development framework might do that to ease developers in quickly (I assume Rails does that, not Ruby), it's not exactly the way Go does things.
Go is a language, not a framework plus a language: just a compiler, a few build tools, and a set of standards for how to write, test, document and indent your code (with the indenting, testing and documenting parts being optional).
A Go server, at least the way I build servers with Go, is somewhat decoupled from the client. It serves static assets when they are needed (e.g. the minified JavaScript, the stylesheets, the HTML, and JSON with data from the database), but it doesn't really care what those assets are; it's a server. The Go toolchain is made for building Go applications (e.g. said server), but it's not made for building client-side web applications (those consisting of JS, CSS and HTML).
Now, you may be able to use a framework similar to Rails written in Go that helps "pack up" CSS, JS and HTML, but I'm not aware of any.
You may use a compiler which turns Go into client-side code (i.e. JavaScript), such as https://github.com/gopherjs/gopherjs, if you enjoy the Go toolchain and want to use it for client-side development. But it doesn't give you Go-like performance AND you are working with a subset of Go. It's really just a different way to write JavaScript.
However, what you most likely need in your case is a build chain for your client side. Here are three tools which (in my opinion) stand out in 2016:
npm
webpack
bower
I could write an essay about using these tools, but here's the summary:
Webpack is used to create a "pipeline" for your code which does things like calling Babel on your JavaScript, compiling SASS to CSS, minifying assets, allowing JS to be written with import syntax, etc. Really, it's a Swiss Army knife in your JS development arsenal and probably matches the functionality of whatever you were using before.
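To make that concrete, here is a minimal sketch of a webpack config that runs Babel and compiles SASS (this assumes a recent webpack version with babel-loader, sass-loader, css-loader and style-loader installed through npm; the entry and output paths are placeholders, not something from the question):

    // webpack.config.js - a minimal sketch, not a drop-in config
    const path = require('path');

    module.exports = {
      entry: './src/app.js',            // hypothetical entry point
      output: {
        path: path.resolve(__dirname, 'dist'),
        filename: 'bundle.js'
      },
      module: {
        rules: [
          {
            test: /\.js$/,              // run Babel on plain JavaScript
            exclude: /node_modules/,
            use: 'babel-loader'
          },
          {
            test: /\.scss$/,            // compile SASS/SCSS to CSS and inject it
            use: ['style-loader', 'css-loader', 'sass-loader']
          }
        ]
      }
    };

The Go server from the original question then only needs to serve whatever ends up in the output directory as static files; it never has to know that the CSS started out as SASS.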
npm is the Node package manager, but it's useful even if you are not using Node for your server. It can keep track of the dependencies you need for building your application (like webpack) and download modules for you. It's also useful for running various scripts and for deployment. It's a bit of overkill to use both npm and webpack, though you will probably have an easier time setting up the webpack environment if you have a package.json (the npm config file) in each of your projects.
Bower is one I actually don't use for small projects, but it's basically a repository for JavaScript libraries (among other things), so you can simply run "bower install jquery" and you've downloaded jQuery for your current project.
Again, there are many other tools out there; these are just some of the ones I like, but check some of them out. They can help you replace your previous pipeline. Don't think of client- and server-side code as being the same: they are decoupled, and keeping a strong separation between them might help you a lot.

Related

Are Node.js and Visual Studio really the only well-maintained options to build TypeScript?

I work in Eclipse on Windows.
I have a workspace containing several projects, one of which contains TypeScript and SASS (SCSS) files. I had a working build chain that produced solid CSS and JS output. However, I created it a while back, and I never really liked the way it was set up to begin with. Now external circumstances force me to rethink this chain, and I want to rebuild it to be more robust.
I previously used webpack via Node.js, triggered from a package.json inside Eclipse.
I don't like this, because I dislike the idea of a build chain that depends on an ecosystem which is difficult to upgrade safely (without a clear strategy for reverting to a stable state in case of failure or incompatibility). This is exactly what happened to me, and why I want to leave this setup.
What I would like to have is:
a fixed version of a (preferably more atomic) transpiler that I can try to upgrade/update manually, always knowing I have the old version to fall back on.
a less "messy" chain, with as few individual pieces as possible.
a still-maintained solution.
What I had in mind was a Maven-based chain, but those approaches always seem to rely on other tools, which in turn use Node.js. I'd rather use a separate build chain for SASS and get a robust TypeScript build chain in exchange.
The official TypeScript compiler is the only TypeScript compiler that provides type checking. It's a Node.js program.
Babel and some similar tools can also transpile TypeScript into JavaScript, but all they do is strip the type annotations (after all, TypeScript syntax is just JavaScript plus some modern ECMAScript features plus type annotations). That is very useful and fast for production builds, but it basically defeats the purpose of using TypeScript in the first place, which is presumably type checking. Besides, all of the tools of this type that I'm aware of are also Node.js programs.
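To illustrate the type-stripping approach, here is a minimal sketch (assuming Babel 7 with @babel/preset-env and @babel/preset-typescript installed; note that no type checking happens at all):

    // babel.config.js - a minimal sketch; Babel removes the type annotations,
    // it does not check them
    module.exports = {
      presets: [
        '@babel/preset-env',        // compile modern ECMAScript features
        '@babel/preset-typescript'  // strip TypeScript type annotations
      ]
    };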
What this means is that you're already going to need Node.js for the compiler itself.
Coming from mainly C/C++ programming, I also disliked the idiosyncrasies of JS build tools and tried hard to avoid them. But it is what it is: you're on your own if you want to use tools like make, Maven, or similar. It is also slower. The TypeScript (or Babel) compiler runs as an in-process plugin in webpack, but it has to be an external command if you use a generic build system. This adds overhead and causes the compilers to do extra work in some setups. Finally, extremely useful webpack features like watch mode and the dev server are not easily replicated with a traditional build system.
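As a sketch of what "in-process" means in practice, here is a minimal webpack config that hands .ts files to the TypeScript compiler via ts-loader (this assumes ts-loader and typescript are installed; the file names are placeholders):

    // webpack.config.js - minimal sketch for in-process TypeScript compilation
    const path = require('path');

    module.exports = {
      entry: './src/main.ts',
      output: {
        path: path.resolve(__dirname, 'dist'),
        filename: 'bundle.js'
      },
      resolve: {
        extensions: ['.ts', '.js']   // resolve imports without file extensions
      },
      module: {
        rules: [
          { test: /\.ts$/, use: 'ts-loader', exclude: /node_modules/ }
        ]
      }
    };

The compiler stays loaded inside the webpack process, which is what makes watch mode and incremental rebuilds fast compared to spawning an external compile command from a generic build tool.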
Besides, I don't think your objections are warranted: It is, in fact, very easy to revert an npm package to a stable state if you use version control (which you should be doing anyway) and a package-lock.json file. Then it's a simple matter of git checkout stable-branch && npm ci. Here, ci stands for "clean install" and it installs all the packages with the version numbers in package-lock.json. You can even install a checkout hook that runs npm ci when there are changes in the package-lock.json (which you should commit to your version control system).
This way, everyone in your team and every build of your application (be it your local development version, your colleagues' development versions, staging servers, or production server, or whatever) will have the exact same npm packages for a given git commit (or equivalent thereof in other version control systems).

Benefit of using the newer Laravel Mix over Elixir

Since Laravel 5.4, the default method for compiling assets is Laravel Mix, instead of Elixir.
I know that "Mix" uses WebPack by default to compile the assets.
What benefits does this method bring?
"The Mix also allows you to compile without WebPack, and this always produces files that are smaller in size and work the same."
This is entirely incorrect. Have you attempted to configure any of the myriad options to optimize your bundle at all?
Start with the webpack-bundle-analyzer plugin. This will give you an idea where your bloat is, what is duplicated, and where to start trimming your application.
Between uglifying, chunking, deduplication, minification, etc., you'll end up with compiled resources that are far from "large".
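As a hedged starting point (this assumes webpack 4 or newer and the webpack-bundle-analyzer package; it's an excerpt, not a complete config), the relevant pieces might look like this:

    // webpack.config.js (excerpt) - sketch of bundle analysis and size optimization
    const { BundleAnalyzerPlugin } = require('webpack-bundle-analyzer');

    module.exports = {
      mode: 'production',               // enables minification out of the box
      optimization: {
        splitChunks: { chunks: 'all' }, // chunking / de-duplication of shared modules
        minimize: true
      },
      plugins: [
        new BundleAnalyzerPlugin()      // opens a report showing where the bloat is
      ]
    };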
Now, I'll grant you it has a steep learning curve compared to other tools; regardless, webpack is an incredibly powerful tool, and you need to take the time to learn its configuration capabilities.
Care to elaborate on what you mean by "work the same"? I run several production applications as well as a number of smaller personal projects, and I never seem to get different results after each build.
But in the end it's just a tool, like any other tool: you use what you feel comfortable with and what fits the job. There's a reason it was selected as the default, though, and it isn't because the maintainers are ignorant by any means.

Can I use Angular2 to build a JavaScript Universal Windows Platform App?

JavaScript newbie here. I'm new to Angular2 and am currently learning about things like module loaders (there are so many!), so bear with me, since my space of "unknown unknowns" is probably quite large.
I'm interested in creating a JavaScript based "Packaged Web App" for windows ("Packaged" in the sense that the JS is included in the Universal Windows Platform app).
One constraint to keep in mind is that I have severe limitations on the size of my packaged app. The smaller, the better.
With that in mind, I have a few specific questions that will hopefully expose the extent of my ignorance:
Without resorting to Electron or Ionic2, is it possible (also, is it a good idea) to create the offline experience in Angular2 and then only manually include the resulting transpiled .js files in my Visual Studio project?
How hard is it to manage the dependencies for these transpiled files? Are they entirely self contained?
Roughly how large would the minimum set of manually imported files end up being? When I use npm to install Angular2, it winds up being ~80 MB; a large portion of this (most?) looks like dev tools, test infrastructure, etc. What's the minimal set of Angular dependencies needed for the client app to work?
Thanks!
Without resorting to Electron or Ionic2, is it possible (also, is it a good idea) to create the offline experience in Angular2 and then only manually include the resulting transpiled .js files in my Visual Studio project?
Yes, it is possible. The TypeScript will be compiled to JavaScript code, which is what your project consumes. So eventually, it is the compiled JS code that your project needs.
But if you are so worried about your project's size, then I suggest using Angular 1, which is plain JS. As for the minimum size of Angular 1 and its dependency jQuery: there are compressed versions of both (angular.min.js: 164 KB, jquery-3.1.1.min.js: 85 KB).
Answering my own question here:
Yes, it's possible. You can copy over the transpiled .js files and then simply point the WebView control at the generated index.html. That said, it's a pretty kludgy dev experience, since you're constantly working around VS.
The dependencies are handled for you: it's all in the minified/uglified JS files.
I haven't investigated tree-shaking yet, but it looks like I can get away with ~0.5 MB for a skeleton project.

How to use webpack for Angular2 / TypeScript development without running a build each time?

I am using ASP.NET Core, Angular2 and TypeScript, connected together with webpack using the tutorial from the Angular2 team here. That all seems to work, but now I need to run a build each time I change a file.
The original tutorial uses SystemJS, which of course loads tons of files, but with it I just use the static file middleware and no build is required during development. That is very convenient, but I cannot figure out how to do the same with webpack. It seems that webpack can only bundle everything together, without an option to just load everything separately, so I need to run the build each time.
Is it possible to do something so that webpack "expands" the bundle in some easy way?
P.S. I would prefer not to use the webpack dev server, auto-build on save, and so on, since the complexity is rather high already. So the ideal solution is to have bundles for production but direct code loading for development, like in the good old days with standard MVC bundling.
Really, the best way would be to use webpack-dev-server. There's really not much setup involved; it's just a different command you run instead of webpack:
npm install webpack-dev-server
webpack-dev-server --config webpack.config.js
Then you just point your script sources to http://localhost:8080/webpack-dev-server/your-bundle-name.js in your application's script tags.
This is by far the best option as you get instant incremental recompile and live-reload.
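If you prefer to keep the settings in the config file rather than on the command line, a minimal sketch looks roughly like this (option names differ slightly between webpack-dev-server versions, so treat the exact keys as an assumption):

    // webpack.config.js (excerpt) - sketch of a dev-server setup
    module.exports = {
      // ...your existing entry/output/module settings...
      devServer: {
        port: 8080,        // matches the URL above
        hot: true,         // hot module replacement / live reload
        static: './dist'   // where non-bundled static files are served from
      }
    };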
While I would strongly encourage you to use webpack-dev-server, you can also just use plain webpack in watch mode:
webpack --watch
There is no way to "expand the bundle" (and really no need to). In all likelihood you are using webpack for more than just bundling, so you'd still need to re-build if you change a TypeScript file, for example. webpack-dev-server and webpack in watch mode do very quick incremental compiles, and most people just leave them running while developing.

What's the point of letting the server compile and minify assets when we can do it manually?

Disclaimer: I know almost nothing about servers. Sorry if this question doesn't make sense in the first place.
I'm building my project in Node.js with CoffeeScript and Stylus and some other compiled stuff. Until now I've made a script to compile my code into regular JavaScript and CSS, then run it. I'm planning to upload the compiled assets to the production server, so there's no trace of CoffeeScript or Stylus anywhere afterward.
But I know that it's possible to directly run server-side CoffeeScript (coffee app.coffee), and that there are middlewares in Node which compile and minify client-side CoffeeScript and Stylus on the fly.
My question is, why let the server do it each time, instead of compiling the code ourselves? Wouldn't the first option add more strain on the server for no reason?
Thanks.
It would not add strain on the server. An educated guess tells me that once it compiles and minifies your CoffeeScript, it caches the result for each subsequent request.
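A rough sketch of that idea, compile on the first request and serve from memory afterwards (compileAsset is a hypothetical stand-in for the CoffeeScript/Stylus compiler, and Express is assumed just to keep the example short):

    // Sketch of compile-on-first-request with an in-memory cache.
    const express = require('express');
    const app = express();

    const cache = new Map();

    // compileAsset() is a hypothetical placeholder for the real compiler call.
    function compileAsset(assetPath) {
      return '/* compiled + minified output for ' + assetPath + ' */';
    }

    app.use('/assets', (req, res) => {
      if (!cache.has(req.path)) {
        // Compile (and minify) only on the first request for this path...
        cache.set(req.path, compileAsset(req.path));
      }
      // ...every later request for the same path is served straight from memory.
      res.send(cache.get(req.path));
    });

    app.listen(3000);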
Manual processes introduce risk and probability of error.
