Bower and Modernizr

I'm aware that I can create a custom build of Modernizr to detect the features I care about. But is there a way to use Modernizr installed with Bower to detect a specific feature or set of features (such as SVG support) without including the entire library?
Basically when I do
bower install modernizr
I will get the entire library, which is more than I need.

That's not the responsibility of Bower.
You can use grunt-modernizr to detect which tests you need and build a custom Modernizr version.
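A minimal Gruntfile sketch for that workflow (untested; the option names follow the grunt-modernizr README and may differ between versions, so treat them as assumptions):

module.exports = function (grunt) {
  grunt.initConfig({
    modernizr: {
      dist: {
        dest: 'build/modernizr-custom.js', // where the custom build is written
        tests: ['svg'],                    // only the detects you actually need
        crawl: false,                      // skip scanning source files for Modernizr references
        uglify: true                       // minify the output
      }
    }
  });

  grunt.loadNpmTasks('grunt-modernizr');
  grunt.registerTask('default', ['modernizr']);
};

Running grunt modernizr then produces a build containing only the SVG detect, which you can reference instead of the full bower_components copy.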

Related

How to use Mockery v0.0.0-dev in Golang?

I am trying to generate mocks in Golang using mockery, and the repo requires v0.0.0-dev.
I ran brew install mockery, but that only installs v2.15.0, and thus I cannot generate mocks with v0.0.0-dev. How do I use/install mockery v0.0.0-dev? There is not much info online about this.
v0.0.0-dev is the "_defaultSemVer" used by mockery when debug.ReadBuildInfo has no embedded build information.
In your case, the binary installed does include said build information, hence the 2.15.0, which is the latest release, as expected.
You should change the dependency to use an actual version (or, if you have to, @latest), not v0.0.0-dev, which depends on how mockery was built.
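For example, with the Go toolchain you can pin an actual release when installing the binary (the version number below is just an example):

# install a specific released version of mockery
go install github.com/vektra/mockery/v2@v2.15.0

# or, if you really want the newest release
go install github.com/vektra/mockery/v2@latest

Either way you end up with a binary whose reported version matches a real tag rather than the v0.0.0-dev fallback.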

How to install an extra software package in a buildpack? [duplicate]

I'm currently developing a Spring Native application; it is built using a Paketo buildpack and generates a Docker image.
I was wondering if it's possible to customize the generated Docker image by adding third-party tools (like a Datadog agent, for example).
Also, for now the generated container image is only stored locally; is it possible to push it directly to another Docker registry?
I'm currently developing a Spring Native application; it is built using a Paketo buildpack and generates a Docker image. I was wondering if it's possible to customize the generated Docker image by adding third-party tools (like a Datadog agent, for example).
This applies to Spring Boot apps, but really to any other app you can build with buildpacks as well.
There are a couple of options:
1. You can customize the base image that you use (called a stack).
2. You can add additional buildpacks, which will perform more customizations during the build.
#2 is obviously easier if there is a buildpack that provides the functionality you require. Regarding Datadog specifically, Paketo now has a Datadog buildpack you can use with Java and Node.js apps.
It's more work, but you can also create a buildpack if you are looking to add specific functionality. I wouldn't recommend this if you have one application that needs the functionality, but if you have lots of applications it can be worth the effort.
A colleague of mine put this basic sample buildpack together, which installs and configures a fictitious APM agent. It is a pretty concise example of this scenario.
#1 is also possible. You can create your own base image and stack. The process isn't that hard, especially if you base it on a well-known and trusted image that gets regular updates. The Paketo team also has the jam create-stack command, which you can use to streamline the process.
What's more difficult with both options is that you need to keep them up to date. That requires some CI to watch for software updates and publish new versions of your buildpack or stack. If you cannot commit to this, then both are a bad idea, because your customization will go out of date and potentially cause security problems down the road.
UPDATE
You can bundle dependencies with your application. This option works well if you have static binaries you need to include, perhaps a CLI you call from your application.
In this case, you'd just create a folder in your project called binaries/ (or whatever you want) and place the static binaries in there (make sure to download versions compatible with the container image you're using; Paketo images are Ubuntu Bionic-based at the time I write this). Then, when you call the CLI from your application, simply use the full path to it, as in the sketch below. That would be /workspace/binaries or /workspace/<path to binaries in your project>.
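A quick sketch of what that call can look like from a JVM app (the binary name and folder are placeholders I made up for illustration):

// Kotlin: invoke a CLI that was shipped in the project's binaries/ folder.
// Once the app is built into the image, it ends up under /workspace/binaries/.
val process = ProcessBuilder("/workspace/binaries/mycli", "--version")
    .redirectErrorStream(true)   // merge stderr into stdout
    .start()
val output = process.inputStream.bufferedReader().readText()
process.waitFor()
println(output)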
You can use the apt buildpack to install packages with apt. This is a generic buildpack to which you give a list of apt packages, and it installs them.
This can work in some cases, but the main drawback is that buildpacks don't run as root, so this buildpack cannot install packages into their standard locations. It attempts to work around this by setting environment variables like PATH, LD_LIBRARY_PATH, etc. to help other applications find the packages that have been installed.
This works OK most of the time, but you may encounter situations where an application is not able to locate something you installed with the apt buildpack. It's worth noting in case you see problems when trying this approach.
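As an illustration, apt buildpacks generally follow the Heroku convention of reading a package list from a file named Aptfile at the root of your project; check the README of the specific buildpack you pick, since the file name and supported syntax here are assumptions:

# Aptfile -- one apt package per line
ffmpeg
libpq-dev

You then add the apt buildpack ahead of your language buildpack (for example via pack's --buildpack flag) so the packages are present when the rest of the build runs.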
END OF UPDATE
For what it's worth, this is a common scenario that is a bit painful to work through. Fortunately, there is an RFC that should make the process easier in the future.
Also, for now the generated container image is only stored locally; is it possible to push it directly to another Docker registry?
You can docker push it or you can add the --publish flag to pack build and it will send the image to whatever registry you tell it to use.
https://stackoverflow.com/a/28349540/1585136
The --publish flag works the same way: you need to name your image [REGISTRYHOST/][USERNAME/]NAME[:TAG].
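For example, with a placeholder registry, image name, and builder:

# log in to the target registry first so pack can push to it
docker login registry.example.com

pack build registry.example.com/myteam/myapp:1.0.0 \
    --builder paketobuildpacks/builder-jammy-base \
    --publish

Because the image name includes the registry host, --publish sends the finished image straight to that registry instead of loading it into the local Docker daemon.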
For me, what worked was adding this to my build.gradle file (I'm using Kotlin):
bootBuildImage {
    val ecrRepository: String? by project
    buildpacks = listOf("urn:cnb:builder:paketo-buildpacks/java", "urn:cnb:builder:paketo-buildpacks/datadog")
    imageName = "$ecrRepository:${project.version}"
    environment = mapOf("BP_JVM_VERSION" to "17.*", "BP_DATADOG_ENABLED" to "true")
    isPublish = true
    docker {
        val ecrPassword: String? by project
        publishRegistry {
            url = ecrRepository
            username = "AWS"
            password = ecrPassword
        }
    }
}
Notice the buildpacks part, where I added first the base default OCI buildpack and then the Datadog one. I also set BP_DATADOG_ENABLED to true in the environment, so that it adds the agent.
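To pass those two project properties from the command line (the repository URL and region below are placeholders), the build can be invoked like this:

# the -P flags populate the "by project" delegates in the script above
./gradlew bootBuildImage \
    -PecrRepository=123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-app \
    -PecrPassword=$(aws ecr get-login-password --region eu-west-1)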

Installing an Electron app is too slow because of native dependencies that need installing on the end user's PC

I have an Electron app with two package.json files.
The root/package.json has all the devDependencies, and the root/app/package.json has all the dependencies that are necessary for running the app.
So I package the app folder using electron-packager, then build an installer for Windows using Inno Setup.
But when I install the app, because node_modules in app has so many dependencies, the installer is very slow extracting all the contents of node_modules.
Other apps take 3-10 seconds to install, but mine takes 25-35 seconds.
So what should I do about this? Maybe I can bundle the JS using webpack before packaging?
Thanks.
You should absolutely use something like webpack (or an equivalent) to bundle your application. Webpack does an excellent job of tree-shaking your dependencies and keeping only the modules that are actually needed.
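A minimal webpack sketch for an Electron main process (the paths and the externals entry are assumptions about your project layout):

// webpack.config.js
const path = require('path');

module.exports = {
  mode: 'production',
  target: 'electron-main',               // bundle for Electron's main process
  entry: './app/main.js',                // your app entry point (adjust to your layout)
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'main.bundle.js',
  },
  externals: {
    // keep native modules out of the bundle; ship and install them separately
    // e.g. sqlite3: 'commonjs sqlite3',
  },
};

With the bundle as the packaged entry point, app/node_modules no longer needs to be shipped wholesale, which is what makes the installer so slow.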
I have already posted a possible solution for Electron projects, including a build-process approach that leads up to building the installer. My particular recommendation leaned on using WiX for MSI deployment, but the build-process steps (1-6) are still applicable for anyone wanting to understand a possible process, even if you use another installer. Hope this helps:
https://stackoverflow.com/a/46474978/3946706
Are you packaging a web app into Electron? The slowness is probably caused by bundling the web app's node modules into the Electron app, which is not necessary.
https://medium.com/@hellobharadwaj/electron-plus-angular-react-why-use-2-different-package-json-files-361ae47d07f3

Does CKEditor have tools to automatically download plugins and dependencies

When building CKEditor I specify plugins in build-config.js, but I have to manually download the plugins and their dependencies from their add-on pages and put them into the /plugins directory before running the build.sh script. Does CKEditor have any tool that can do this automatically, like npm for example?
CKEditor 4 - no; it predates most of the packaging tools on the market, or at least the time when they got really popular.
You can, however, generate a custom build online through CKBuilder, either choosing plugins from the list of "Available Plugins" or uploading your build-config.js there (there's a button in the top-right corner).
I'm aware this is not the same level of build automation that npm offers, but I hope it helps. CKEditor 5 will be much better aligned with modern build tools.

Loading dependencies at runtime with bundler

I have an application with many optional components, all with their own complex dependencies. For example, some deployments might want to use LDAP functionality and will need to load ldap-related gems. But many will not, and those that don't should not have to install ldap-related gems.
How can I use Bundler to load these dependencies depending on which components users (deployers) have enabled?
I don't want to force deployers to manually edit their Gemfiles. It has to be possible to enable/disable components via the application's UI.
Just including every possible dependency in the Gemfile is not ideal. Some of the rarely used components require a lot of complicated native compilation. Another solution might be to have the application edit its own Gemfile. But this is kind of awkward and would likely require a restart every time components are changed.
Is there a way in Bundler to dynamically load gems at runtime? If not, are there alternatives that provide something like Bundler's sandboxing but allow for dynamic loading?
You could provide multiple Gemfiles and use bundle install --gemfile to pick the specific Gemfile and install only the gems you need for that deployment.
In your application you could then use Bundler.setup with the appropriate groups of the previously installed Gemfile to load just the gems you need.
Sure, that's not a nice and easy way, but it should give you the functionality you want.
See
Bundler Setup
bundle install
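A minimal sketch of that approach, with a made-up Gemfile.ldap and :ldap group for the optional LDAP component:

# Gemfile.ldap -- Gemfile for deployments that enable the LDAP component
source "https://rubygems.org"

gem "rails"            # gems every deployment needs

group :ldap do
  gem "net-ldap"       # only needed when LDAP is enabled
end

Install against that file, then load only the groups the deployment uses at boot:

$ bundle install --gemfile Gemfile.ldap
$ export BUNDLE_GEMFILE=Gemfile.ldap   # so Bundler finds the same file at runtime

# at application boot
require "bundler"
Bundler.setup(:default, :ldap)   # leave out :ldap for deployments without it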
