Are Node.js and Visual Studio really the only well-maintained options to build TypeScript? - windows

I work in Eclipse on Windows.
I have a workspace containing several projects, one of which contains TypeScript and SASS (SCSS) files. I had a working build chain that produced a solid output of CSS and JS files. However, I created this a while back, and I never really liked the way it was set up to begin with. Now external circumstances force me to rethink this chain, and I want to rebuild it to be more robust.
I previously used webpack via Node.js, triggered from a package.json inside Eclipse.
I don't like this, because I dislike the idea of a build chain that depends on an ecosystem that is difficult to upgrade safely (i.e. without a clear strategy for reverting to a stable state in case of failure or incompatibility). That is exactly what happened to me, and it is why I want to leave this setup.
What I would like to have is a:
fixed version of a (preferably more atomic) transpiler that I can try to upgrade/update manually, but always know I have the old version to fall back on.
less 'messy' chain, with as few individual pieces as possible.
still maintained solution.
What I had in mind was a Maven-based chain, but those approaches always seem to rely on other tools, which in turn use Node.js. I'd rather use a separate build chain for SASS and get a robust TypeScript build chain in trade.

The official TypeScript compiler is the only TypeScript compiler that provides type checking. It's a Node.js program.
Babel and some similar tools can also transpile TypeScript into JavaScript, but all they do is strip the type annotations (after all, TypeScript syntax is just JavaScript + some modern ECMAScript features + type annotations). This is very useful and fast for production builds, but it basically defeats the purpose of using TypeScript in the first place, which is presumably type checking. Besides, all the tools of this type that I'm aware of are also Node.js programs.
What this means is that you're already going to need Node.js for the compiler itself.
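For what it's worth, the compiler itself is easy to pin and invoke directly, without any bundler, which gets you the "fixed, more atomic transpiler" part. A minimal sketch (the version number is only an example):
npm install --save-dev typescript@3.9.7
node_modules\.bin\tsc --project tsconfig.json
Because the dependency is local to the project and recorded in package.json and package-lock.json, falling back to the old version is just a matter of reinstalling from the lock file.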
Coming from mainly C/C++ programming, I also disliked the idiosyncrasies of JS build tools and tried hard to avoid them. But it is what it is: you're on your own if you want to use tools like make, Maven, or similar. It is also slower: the TypeScript (or Babel) compiler runs as an in-process plugin under Webpack, but it has to be an external command if you use a generic build system. This adds process startup overhead and causes the compilers to redo some work in some settings. Finally, extremely useful Webpack features like watch mode and the dev server are not easily reproduced with a traditional build system.
Besides, I don't think your objections are warranted: it is, in fact, very easy to revert an npm package to a stable state if you use version control (which you should be doing anyway) and a package-lock.json file. Then it's a simple matter of git checkout stable-branch && npm ci. Here, ci stands for "clean install", and it installs all the packages with exactly the version numbers recorded in package-lock.json. You can even install a git checkout hook that runs npm ci whenever package-lock.json changes (you should commit that file to your version control system).
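A minimal sketch of such a hook, saved as .git/hooks/post-checkout and made executable (this assumes a POSIX shell is available, e.g. Git Bash on Windows):
#!/bin/sh
# post-checkout receives the previous HEAD, the new HEAD, and a branch-checkout flag
prev=$1
new=$2
if git diff --name-only "$prev" "$new" | grep -q '^package-lock\.json$'; then
    echo "package-lock.json changed; running npm ci"
    npm ci
fi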
This way, everyone on your team and every build of your application (be it your local development version, your colleagues' development versions, staging servers, the production server, or whatever) will have the exact same npm packages for a given git commit (or the equivalent thereof in other version control systems).

Related

PHP-library installation via composer

Some PHP libraries can be used after installing them via Composer.
What does this mean?
Is that the only way to use those libraries, or is there a way to use them by copying the code into the correct location and referring to them in the code?
Examples:
mPDF can be used (only?) via composer
https://mpdf.github.io/
PHPMailer can be used by just copying files into the correct location and referring to them
https://github.com/PHPMailer/PHPMailer
The thing with libraries is that they might require other libraries, and those libraries might require other libraries, and so on. Downloading them and putting them in some location manually is tedious, and when two libraries require the same library in different versions you will run into problems. Composer solves this for you by figuring out which libraries are needed to satisfy all requirements, and it either downloads a version that fits all of them or raises an error telling you that the current collection of libraries is incompatible, and which ones clash.
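For example, a minimal composer.json that pulls in mPDF could look like this (the version constraint is illustrative):
{
    "require": {
        "mpdf/mpdf": "^8.0"
    }
}
Running composer install then resolves and downloads the library, together with everything it needs, into the vendor folder.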
The other problem is finding the right location to store those libraries. Since PHP has to figure out where each class is stored, you will either have to add require/include statements to your code and the libraries, which is tedious and will complicate future updates, e.g. when classes are renamed or removed. A way around this is having a shared lib directory, but then you will run into problems when multiple projects require different library versions.
For libraries, Composer is the de facto standard, and you will almost always want to install libraries into your project with it. It takes care of resolving correct versions, autoloading, and updating, which makes it incredibly helpful, especially if you were around when Composer was not a thing. I would use it even if I needed no libraries, just for the autoloading and the ability to add libraries later, when my project grows/changes.
edit: Even PHPMailer provides a composer.json, even though it does not require other libraries. If you install it via Composer, you can make sure that your system fulfills its requirements (PHP version & installed extensions), which you might otherwise miss, leading to a possibly long debugging session figuring out why some feature won't work.
edit: You can use Composer for projects on shared hosting as well. Instead of running the command on the server, you run it on your local machine or a build server on behalf of the actual server. You can then copy your project, including the vendor folder, to your shared host and things should work. The vendor folder contains all the libraries plus the autoload.php and can be copied along with your code.
To do that reliably, you can specify the platform you will run your code on in your composer.json, under config. You should define the correct PHP version at the very least, and ideally the installed extensions as well, to make sure you don't accidentally install libraries that need an extension you don't have. When you run composer install or composer update, it will use these platform details as the basis for picking libraries that match them. This is especially important when you have PHP 7 installed locally, but your host does not have it yet.
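A sketch of such a platform section (the version numbers are illustrative; use whatever your host actually runs):
{
    "config": {
        "platform": {
            "php": "5.6.30",
            "ext-mbstring": "1.0"
        }
    }
}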
When running Composer on a machine other than the one that runs the code, a few options, like --apcu-autoloader, will not work, but you probably don't use them anyway.
If you run composer on your local machine and copy stuff over, you can improve the experience a bit by adding a few options to your composer install:
composer install --no-dev --prefer-dist --classmap-authoritative
You can get details about these options in the documentation. The important things are:
--no-dev, to reduce the libraries downloaded to just the ones needed in production (because we only want to run the project on our server, not develop on it).
--prefer-dist (why is explained in the docs)
--classmap-authoritative or --optimize-autoloader (the first might not work with some projects/libraries), which will improve autoloading, making your application a teeny, tiny bit faster in production
You should not use the first two options in the environment you develop in, as they will not provide all the dependencies needed for development. Maybe set up a second project that is used only for checking out the latest changes from git, running the tests to make sure everything works, then removing vendor & running that command (possibly with some config changes for prod), and finally copying things to your shared hosting environment (a rough sketch follows below). If you use something like GitLab, which provides CI capabilities, you could also do these steps on the CI server and let it copy the files, but that takes some time to set up.
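Such a routine could look roughly like this (a sketch for a POSIX shell; the test runner, host name, and paths are illustrative):
git pull
composer install                  # full install, dev dependencies included
vendor/bin/phpunit                # make sure everything still works
rm -rf vendor
composer install --no-dev --prefer-dist --classmap-authoritative
rsync -a --exclude='.git' ./ user@shared-host:/path/to/app/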

Specifying to contributors what version of go to use in a go project

In Node projects, package.json lets contributors know what version of Node they can use to either consume or contribute to a specific project. Python uses venv to control the version used in collaborative development environments, and many other languages have similar constructs as well.
Does go have a standardized process that allows you to do something similar?
No, but Go has the Go 1 compatibility promise - the Go team tries very hard not to break any extant software built on 1.x, even at the expense of keeping an ugly API or unwanted behaviour (though this is rare). This means you don't really have to worry about specifying which version of Go you use. Go 2 is not even on the horizon, so for the foreseeable future you don't have to worry about this. Each release adds a few new features, but most Go users upgrade (because of the stable upgrade path).
https://golang.org/doc/go1compat
As for which dependencies you have: at present the only solution is to put your dependencies into a vendor folder, but I think you were asking about the language specifically.

SASS Rendering in Go

I am beginning to use Go for web development, but I am having issues with asset management. I would prefer to have a tool like Rails' Asset Pipeline for managing (and compressing) CSS/JS files (as well as SASS), though I am still able to work with plain CSS and JS files.
While I am able to work with CSS and JS, I am not able to work with SASS. Is there a way to use SASS in a Go project? I am not using a framework.
Thank you!
I'm not familiar with Ruby on Rails, but I assume that Rails gave you some sort of tooling for managing the source-to-distribution transition of client-side assets (polyfills, transpiling, minification, compiling SASS/SCSS to CSS, compiling some X-script to JavaScript ... etc.).
While a web development framework might do that to ease developers in quickly (I assume Rails does that, not Ruby itself), it's not exactly the way Go does things.
Go is a language, not a framework + language: just a compiler, a few build tools, and a set of standards for how to write, test, document, and indent code (with the indent, test, and document parts being optional).
A Go server, at least the way I build servers with Go, is somewhat decoupled from the client. It serves static assets when they are needed (e.g. it serves the minified JavaScript, the stylesheets, the HTML, and JSON with info from the databases... etc.), but it doesn't really care what those are; it's a server. The Go toolchain is made for building Go applications (e.g. said server), but it's not made for building client-side web applications (those consisting of JS, CSS, and HTML).
Now, there may be a Rails-like framework written in Go that helps "pack up" CSS, JS, and HTML, but I'm not aware of any.
You may use a compiler that turns Go into client-side code (i.e. JavaScript), https://github.com/gopherjs/gopherjs , if you enjoy the Go toolchain and want to use it for client-side development. But Go-like performance isn't something this gives you, AND you are working with a subset of Go. It's really just a different way to write JavaScript.
However, what you most likely need in your case is a "build chain" for your client side. There are 3 tools which (in my opinion) stand out in 2016:
npm
webpack
bower
I could write an essay about using these tools, but here's the summary:
Webpack is used to create a "pipeline" for your code which does things like calling Babel on JavaScript, compiling SASS to CSS, minifying assets, allowing JS to be written with import syntax... etc. Really, it's a Swiss Army knife in your JS development arsenal and probably matches the functionality of whatever you were using before.
npm is the Node package manager, but it is useful even if you are not using Node for your server: it keeps track of the dependencies needed to build your application (like webpack) and downloads modules for you. It's also useful for running various scripts and for deployment. It's a bit of overkill to use both npm and webpack, though you will probably have an easier time setting up the webpack environment if you have a package.json (the config file for npm) in each of your projects; a minimal sketch follows.
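For instance (package names and version numbers are illustrative, roughly matching the 2016 ecosystem):
{
  "scripts": {
    "build": "webpack -p",
    "watch": "webpack --watch"
  },
  "devDependencies": {
    "webpack": "^1.13.0",
    "babel-loader": "^6.2.0",
    "sass-loader": "^3.2.0",
    "node-sass": "^3.7.0"
  }
}
npm run build then invokes the locally pinned webpack, so the whole toolchain is recorded in a single file.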
Bower is one I actually don't use for small projects. But it's basically a repository for JavaScript libraries (among other things), so you can easily, say, write "bower install jquery" and you've downloaded jQuery for your current project.
Again, there are many other tools out there; these are just some of the ones I like, but check some of them out. They can help you replace your previous pipeline. Don't think of client- and server-side code as being the same: they are decoupled, and keeping a strong separation between them might help you a lot.

Any existing pure PHP "make" tools?

Let me elaborate on the question...
I have a custom CMS (built on codeigniter FTW) that includes many different types of modules.
Every time we have a new project come through the door, it is a variation and amalgamation of a few of the existing modules.
Sometimes a project comes through with requirements that are not satisfied by the existing modules, in that case I will write a new module...
All the modules are separated out into folders and the code is version-controlled using Git. Every module has its own Model, View, Controller, SQL, and JavaScript files. All the dependencies are also separated and foldered nicely...
The next step for me is to create some sort of installer script that will take me through the "scaffolding" process step by step, allowing me to choose from the existing modules. A glorified "makefile", if you will...
Rather than rolling my own, does anyone know of any such thing out in the wild?
I know of Apache Ant (Java); what I need is something in pure PHP with very few or no dependencies...
I would like something as simple as running a git pull and then php make.php
Thanks.
The "Ant-like" alternative I am aware of in PHP land is phing it is written in PHP and it will allow you to perform several tasks for packaging, deploying and testing your web applications. The documentation is a great starting point if you want to hit the ground running.
Phing can also be extended to define new tasks if needed (examples and explanations are provided in the documentation).
Reading through the docs, it appears to be possible to install Phing without PEAR, as documented here; you would have to correctly set up the environment on each machine you wish to use Phing on. I cannot confirm this method, though, as I use PEAR for all my installs.

How can I update Perl on Windows without losing modules?

At work I'm using Perl 5.8.0 on Windows.
When I first put Perl on, I went to CPAN, downloaded all the sources, made a few changes (in the .MAK file(?) to support threads, or things like that), and did nmake / nmake test / nmake install. Then, bit by bit, I've downloaded individual modules from CPAN and done the nmake dance.
So, I'd like to upgrade to a more recent version, but the new one must not break any existing scripts. Notably, all the modules that my scripts "use" and that I've installed must also be present in the new version.
What's the most reliable (and easiest) way to update my current version, ensuring that everything I've done with the nmake dance will still be there after updating?
As others noted, start by installing the new perl in a separate place. I have several perls installed, each completely separate from all of the others.
To do that, you'll have to configure and compile the sources yourself. When you run the configure step, you'll get a chance to specify the installation location. I gave detailed instructions for this in "Compiling My Own Perl" in the Spring 2008 issue of The Perl Review. There's also an Item in Effective Perl Programming that shows you how to do it.
Now, go back to your original distribution and run cpan -a to create an autobundle file. This is a Pod document that lists all of the extra stuff you've installed, and CPAN.pm understands how to use that to reinstall everything.
To install things in the new perl, use that perl's path to start CPAN.pm and install the autobundle file you created. CPAN.pm will get the right installation paths from that perl's configuration.
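In command form, the round trip looks roughly like this (the bundle name is generated with a timestamp, so yours will differ, and the paths are illustrative):
cpan -a
C:\Perl-5.10\bin\perl -MCPAN -e "install Bundle::Snapshot_2009_01_15_00"
The first command runs under the old perl and snapshots everything currently installed; the second runs the new perl and reinstalls from that snapshot.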
Watch the output to make sure things go well. This process won't install the same versions of the modules, but the latest versions.
As for Strawberry Perl, there's a "portable" version you can install somewhere besides the default location. That way you could have the new perl on removable media. You can test it anywhere you like without disturbing the local installation. I don't think that's quite ready for general use though. The Berrybrew tool might help you manage that.
Good luck, :)
I would seriously consider looking at using Strawberry Perl.
You can install a second version of Perl in a different location. You'll have to re-install any non-core modules into the new version. In general, different versions of Perl are not binary compatible, which could be an issue if you have any program-specific libraries that utilize XS components. Pure Perl modules shouldn't be affected.
If you stay within the 5.8 track, all installed modules that contain XS (binary) extensions will continue to work, as binary compatibility is guaranteed within the same 5.8 series. If you moved to 5.10 then you would have to recompile any modules that contain XS components.
All you need to do is ensure that the new build lists the previous include directories in its @INC array (which is used to look for modules).
By the sounds of it, I think you're on Windows, in which case the current @INC paths can be viewed with
perl -le "print for @INC"
Make sure you target your new Perl version at another directory. It will happily coexist with the previous version, and this will allow you to choose which Perl installation gets used; it's just a question of getting your PATH order sorted out. As soon as a Perl interpreter is started up, it knows where to look for the rest of its modules.
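For example, switching installations in a given console session is just a matter of PATH order (the path is illustrative):
set PATH=C:\Perl-5.10\bin;%PATH%
perl -v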
Strawberry Perl is probably the nicest distribution on Windows these days for rolling your own.
I think the answer to this involves virtualisation of some kind:
Set up an exact copy of your current live machine. Upgrade Perl, using the same directory locations and structures as you're using at the moment.
Go through your scripts testing them on the new image.
Once you're happy, flip the switch.
The thinking behind this is that there's probably all sorts of subtle dependencies and assumptions you haven't thought of. While unlikely, the latest version of a particular module (possibly even a core module, although that's even more unlikely) might have a subtle difference compared to the one you were using. Unless you've exhaustively gone through your entire codebase, there's quite possibly a particular module that's required only under certain circumstances.
You can try and spot this by building a list of all your scripts - a list that you should have anyway, by dint of all your code being under version control (you are using version control, e.g. Subversion, yes?) - and iterating through it, running perl -c on each script. e.g. this script. That sort of automated test is invaluable: you can set it running, go away for a coffee or whatever, and come back to check whether everything worked. The first few times you'll probably find an obscure module that you'd forgotten about, which is fine: the whole point of automating this is so that you don't have to do the drudge-work of checking every single script.
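A rough sketch of such a check from a Windows command prompt (the directory and extension are illustrative; inside a batch file, %f becomes %%f):
for /r C:\scripts %f in (*.pl) do @perl -c "%f"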
When I did it I installed the newer one into a separate directory. There's a bit of added confusion running two versions, but it definitely helps make sure everything's working first, and provides a quick way of switching back to the old one in a pinch. I also set up Apache to run two separate services, so I could monkey around with the newer Perl in one service without touching the production one on the old Perl.
It's probably a lot wiser, in hindsight, to install on a separate computer, and do your testing there. Record every configuration change you need to make.
I am not sure about building it yourself; I always just used prepackaged binaries for Windows.
I'm not sure I understand exactly what you're asking. Do you have a list of changes you made to the 5.8 makefile? Or is the question how to obtain such a list? Are you also asking how to find out which packages above the base install you've obtained from CPAN? Are you also asking how to test that your custom changes won't break those packages if you get them from CPAN again?
Why don't you use ActivePerl and its "ppm" tool to (re)install modules?
