I'm trying to use two libraries:
D3 (a JavaScript graphics library)
NVD3 (a graph-specific D3 extension)
I'm also trying to use the Require Optimizer, and that's where things get rough. D3 is AMD-compliant, so if it detects that you're using Require it will define a module and NOT export a global d3 variable. NVD3 has no AMD support, but Require has "shims" for that case.
However, if I "shim" NVD3 and give that shim a dependency of D3, it doesn't work, because NVD3 expects there to be a global d3 variable, which D3 won't make in a Require environment. So, to get around this issue I made a new module (d3Shim), which simply requires D3, then registers it as a global variable:
define(['d3'], function(d3) {
    // expose the AMD-loaded D3 as a global so shimmed code can find it
    return window.d3 = d3;
});
I made NVD3 depend on d3Shim, and then everything worked ... in normal Require-land. When I tried to use the require-optimizer to merge everything together in a single file I found NVD3 started breaking again, because of a lack of a d3 variable.
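For reference, a minimal sketch of that configuration, with hypothetical paths and the nv global that NVD3 creates (both are assumptions, not my actual setup):

require.config({
    paths: {
        d3: 'lib/d3',         // assumed location
        d3Shim: 'lib/d3Shim', // the wrapper module above
        nvd3: 'lib/nv.d3'     // assumed location
    },
    shim: {
        nvd3: {
            deps: ['d3Shim'], // make the global-exporting wrapper load first
            exports: 'nv'
        }
    }
});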
It turns out the optimizer does this for shims:
*shimmed code here*
define(*shim path*, [], function(){});
So it doesn't matter what dependencies I declare: the NVD3 code will run before the d3Shim module can do its magic.
My question is, is there any way I can use these two libraries with the Require Optimizer, without having to alter either file? If necessary I can always edit NVD3's code to make it an AMD module, but for obvious reasons it's bad practice to edit 3rd party libraries, so I'd prefer a non-editing solution.
Is there a function that works similarly to debug.getupvalues / debug.getupvalue in the Lua library that I could use? I won't be able to use either soon, and I depend on them slightly to keep parts of my code working.
Also, if I could get the function code for debug.getupvalue it would be a great help, as I could just use that as a function instead of relying on the debug library, although I doubt it is written in Lua.
And before you say it: yes, I know the debug library is the most undependable library in all of Lua, but it made my code work and I would like to find a way to stop using it before it's too late.
The debug library is not meant to be used in production code (as opposed to tests and unusual debugging situations). There are three possible solutions: two of them require changes to the code where the closures are defined, and the other requires you to know C:
Add more closures in the same scope as the upvalues that will give you the access that you need.
Use tables instead of closures (see the sketch after this list).
Write a C library that makes use of lua_getupvalue.
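As a rough illustration of the second option, here is a minimal sketch (the counter example is made up) of keeping state in a table instead of an upvalue, so it stays reachable without debug.getupvalue:

-- State hidden in an upvalue is only reachable via debug.getupvalue:
local function make_counter_closure()
  local count = 0
  return function() count = count + 1; return count end
end

-- Keeping the state in a table leaves it accessible without the debug library:
local function make_counter_table()
  local counter = { count = 0 }
  function counter.increment()
    counter.count = counter.count + 1
    return counter.count
  end
  return counter
end

local c = make_counter_table()
c.increment()
print(c.count)  --> 1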
To see the source code of debug.getupvalue, download Lua 5.3.5 and look at src/ldblib.c, line 260. lua_getupvalue is in src/lapi.c, line 1222.
There are a few webpack bundle analysis scripts that show a list of included modules along with their sizes. However, Scala.js emits one big module for all the Scala code, so those tools can't look into it.
As both a library author and an end user of other Scala.js libraries I want to know how much various Scala packages / classes contribute to my bundle size.
I understand that Scala.js optimization makes it impossible to say that a given library weighs exactly X KB, so I'm interested in a solution that looks at a specific bundle.
Currently the best I can do is search for "com.package.name" in the generated JS file and judge the density of result indicators on the scroll bar, but that's obviously tremendously suboptimal.
So my question is: are there any tools, or even half-baked scripts, that could improve on what I'm doing?
Finally found a good solution – with the right sbt config, the source-map-explorer npm package does exactly what I need, based on source maps (of course!). It shows the direct impact of individual Scala classes on bundle size.
Source: Scala.js issue #3556
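A minimal usage sketch (the file names below are assumptions, and the exact sbt source-map settings are described in the linked issue):

npm install --save-dev source-map-explorer
npx source-map-explorer target/scala-2.12/myproject-opt.js target/scala-2.12/myproject-opt.js.map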
This might not be as precise as you want, but to approximate the JavaScript size of each class you could inspect the binaries (the .sjsir files in target/scala-2.12/classes after you run a task like fastOptJS) emitted by the Scala.js compiler and compile each of them to JavaScript.
To compile a binary to JavaScript you can use the code provided in this answer.
I am trying to use an SCSS component (from CodePen: https://codepen.io/lbebber/pen/LELBEo) in my Angular CLI project.
When I build the following SCSS declaration, transform:translate3d(cos(0.1)*115px,sin(0.1)*115px,0);
I get the following build error:
Module build failed:
transform:translate3d(cos(0.1)*115px, sin(0.1)*115px, 0);
^
Undefined operation: "cos(0.1) times 115px".
I read up on Sass/SCSS numeric conversions, but found this is not the issue: I tried replicating it in the CodePen, and his code works just fine, with no flawed conversion logic.
I can only suspect this is an issue with my Angular CLI configuration: something isn't registering correctly, and the cosine is being interpreted as a string instead of being evaluated numerically. When I hardcode the numbers for cosine/sine, I get a valid build and see the UI functioning as expected.
Do I need to configure the Angular CLI project in a way that lets the SCSS processor evaluate the cosine/sine values numerically before they end up in the expression as a string? If so, how?
Much appreciation for anyone that has the Angular-fu to figure out how to get this to work.
As we found out, it's because your CodePen example uses the Compass math helper functions, which you don't have/use.
You could instead import, for example, mathsass. It should cover the same functionality.
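A minimal sketch of what that could look like (the import path depends on how mathsass is installed in your project, so treat it as an assumption):

// Pull in a Sass math library so cos()/sin() become real Sass functions that return numbers.
@import "node_modules/mathsass/dist/math";

.dot {
  // multiplying a number by 115px is now a valid Sass operation
  transform: translate3d(cos(0.1) * 115px, sin(0.1) * 115px, 0);
}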
I was trying to use stemkoski's particle engine in my own project (his example of using the particle engine can be found here). I received the error three.min.js:474 THREE.ShaderMaterial: attributes should now be defined in THREE.BufferGeometry instead. After tracking down the source, I noticed that the version of three.js he used in his library is different from mine. Whereas I used http://threejs.org/build/three.min.js, he used an older version (looking into the file, I believe it's r60).
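For context, the warning refers to custom shader attributes having moved from ShaderMaterial onto the geometry. A rough sketch of the change (the attribute name and particle count are made up):

// three.js ~r60, which the particle engine targets, put custom attributes on the material:
//   new THREE.ShaderMaterial({ attributes: { customVisible: { type: 'f', value: [] } }, ... });

// Newer builds expect them on a BufferGeometry instead:
var particleCount = 100;
var geometry = new THREE.BufferGeometry();
geometry.addAttribute('customVisible',
    new THREE.BufferAttribute(new Float32Array(particleCount), 1));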
What I have tried so far:
Used the three.min.js from threejs.org/build but changed the following in ParticleEngine.js:
this.particleGeometry = new THREE.Geometry();
to
this.particleGeometry = new THREE.BufferGeometry();
This pretty much gave me the same errors.
Used his version of three.js instead, but then renderer.setPixelRatio and geometry.scale are reported as not being functions (because I use those two functions in my own project).
I solved this by using squarefeet/ShaderParticleEngine instead of stemkoski's and it works well. I gave another piece of information in a previous answer, but it was deleted by Brad as being considered not helpful. I believe it was helpful, since it pointed out that updating three.js to r79 was not a solution.
In the Singularity demo, there are a few samples using a push and pull mixin. When used, they adjust content placement across the gutter. Looking high and low, I could not find any documented reference to these mixins. Are they part of the long-term feature set or a leftover? If not, what is the recommended way to do this sort of gutter shift?
Found the answer here
Push/Pull are currently non-documented features that are being deprecated in Singularity 1.2.0. They make sense when using the float output method and symmetric grids, but don't make sense for any other combination we can think of. Instead, we recommend utilizing the isolation output style (or, if you're feeling adventurous, the new calc output style in Singularity Extras).
The other option is to write your own mixin with the use of the grid-span, column-span, and gutter-span functions.
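As a rough illustration of the isolation approach (the grid, gutter, and span values here are made up, not taken from the demo):

@include add-grid(12);
@include add-gutter(1/4);
$output: 'isolation';

.shifted-item {
  // span 3 columns starting at column 10, instead of floating and pushing
  @include grid-span(3, 10);
}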
Having read about the deprecation of push/pull in version 1.2.0, I have just swapped my pull spans for isolation spans. It was so easy, I can't imagine why I hadn't used that method before. A lot of Singularitygs has to be learnt by trial and error, but perhaps the next set of docs will be better. I have also seen that they are removing grid-overlay and grid-toggle in version 1.2.0. That's a shame; I got them working beautifully despite the lack of documentation. Still, background-grid works well.