We use Composer to install third-party libraries. Dependencies can have their own requirements (other libraries, but also enabled PHP extensions). When we run Composer to install dependencies on the server, it reports errors that particular extensions are not enabled.
Is it possible to get a list of requirements in advance, like a list of extensions that need to be enabled for a given project?
As you said: Dependencies can have their own requirements (other libraries, but also enabled extensions).
The list of extensions you would need depends on the packages your project requires. That is why there is no such predefined list, but the Composer installer will warn you about incompatibilities.
Maybe the clue/graph-composer package is what you are looking for. It shows all dependencies from your project's composer.json.
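You can also ask Composer directly which platform requirements (PHP version and extensions) the resolved dependencies declare. A minimal sketch, assuming Composer 1.7 or later:

```shell
# List every PHP version and extension requirement of the project and its
# dependencies, and whether the current PHP installation satisfies each one.
composer check-platform-reqs

# Show what Composer detects about the local platform (PHP and extensions).
composer show --platform
```

Running `check-platform-reqs` before deploying gives you the list up front instead of discovering missing extensions during install.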
Can Go modules be built as an executable program? Or, are they meant to be published as libraries for code reuse?
Building an executable and publishing a library are not mutually exclusive (note that modules are not compiled, packages are).
A module is a collection of related Go packages that are versioned together as a single unit.
Modules record precise dependency requirements and create reproducible builds.
https://github.com/golang/go/wiki/Modules#modules
Whether these packages contain a main package or not is irrelevant.
They're intended to work as packages, like something you would install from npm for a JavaScript project, or with pip in a Python project.
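To make the point concrete, here is a sketch of a module whose root package is `main` and therefore builds straight to an executable (module path and directory names are illustrative):

```shell
mkdir hello && cd hello
go mod init example.com/hello   # creates go.mod; this directory is now a module

cat > main.go <<'EOF'
package main

import "fmt"

func main() {
	fmt.Println("hello")
}
EOF

go build   # produces the ./hello executable from the main package
```

If the module's packages contained no `package main`, the same module would simply be importable as a library instead.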
In the pyproject.toml configuration file of Poetry you can specify both dependencies and dev-dependencies, but the documentation does not state clearly what the difference is.
I guess from the name that dev-dependencies will not be installed by a release build, but I didn't notice any difference. How do you use these fields correctly, for example to exclude the dev-dependencies in a build?
Your assumption is right. The best use case for dev-dependencies is when you are creating a library with dependencies that are only needed during development. For instance, you are developing an ORM that should work with MySQL, PostgreSQL, etc. You have to test that your code works with all of these RDBMSs, so you put the drivers into dev-dependencies. But for someone who installs your library, these dependencies are optional and will not be installed automatically.
Commonly, all libraries that are used for testing or building your application are listed in dev-dependencies.
How do you use these fields correctly, for example exclude the dev-dependencies in a build?
poetry install has a --no-dev flag for exactly that scenario.
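As a sketch, using the Poetry 1.x command-line syntax (newer Poetry versions use dependency groups instead, e.g. `poetry add --group dev`):

```shell
# Add a dependency that is only needed during development;
# it goes under dev-dependencies in pyproject.toml.
poetry add --dev pytest

# Install the project without the dev-dependencies,
# e.g. for a release or production build.
poetry install --no-dev
```

A plain `poetry install` during development installs both groups, which is why you may not have noticed a difference.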
My project depends on two packages, A and B, and they both depend on some-library, unfortunately in incompatible versions:
A depends on lib # 1.0
B depends on lib # 2.0
This is unresolvable by Composer because PHP can only load a single version of a class / interface at runtime.
What are my options? I am fine with "ugly" workarounds as long as they are automated. Doing fragile and manual work like forking A and upgrading its usage of some-library is something I'd like to avoid at all costs.
There are no workarounds. By design, Composer assumes that your project uses a consistent set of dependencies, with a single version of each package.
As mentioned in the comments, you can prefix the dependencies to isolate the part of the project that needs a particular library version. In this way, the prefixed code is consistent in Composer terms and you can continue developing with the latest versions.
These are a couple of prefixers that I can recommend:
humbug/php-scoper is a well-known tool that can prefix code based on Search & Replace steps (Finders and Patchers).
PHP-Prefixer is an automated online service to apply prefixes to Composer dependencies, based on composer.json definitions. You define the new namespace and prefix for your project.
Disclaimer: I'm the lead PHP-Prefixer developer.
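As a rough sketch of the php-scoper route (the prefix name and output directory here are illustrative; the details are usually tuned in a scoper.inc.php configuration file):

```shell
# Install php-scoper (a PHAR download is also available).
composer global require humbug/php-scoper

# Rewrite the code under the given namespace prefix into build/,
# isolating its version of some-library from the rest of the project.
php-scoper add-prefix --prefix=MyVendorPrefix --output-dir=build
```

After prefixing, you typically dump a fresh autoloader for the prefixed output so both library versions can coexist at runtime.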
This link https://docs.ckeditor.com/#!/guide/dev_example_setups strongly encourages use of the CKEditor builder in order to manage plugins, and strongly discourages installing plugins manually.
Manual Download and Installation of Additional Plugins (Not Recommended)
Although at a first glance it looks like the simplest way
of adding plugins to CKEditor, it is not only inefficient but also may
result in a headache when trying to add plugin A, that requires plugin
B, that requires plugin C (...and so on).
In a brief summary it involves the following steps:
- Downloading the predefined package (Basic/Standard/Full) from the Download page.
- Downloading additional plugins manually from the Add-ons Repository.
- Downloading plugins required by additional plugins manually.
- Enabling additional plugins manually through CKEDITOR.config.extraPlugins.
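The manual route from the quoted steps looks roughly like this (the plugin name here is illustrative):

```shell
# Unpack a plugin downloaded from the Add-ons Repository into the
# editor's plugins/ directory; the folder name must match the plugin name.
unzip wordcount.zip -d ckeditor/plugins/

# Then enable it in ckeditor/config.js:
#   config.extraPlugins = 'wordcount';
```

Any plugins the add-on itself requires have to be fetched and unpacked the same way, which is the dependency chase the documentation warns about.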
Then it touts its Builder:
Using Builder (Recommended)
Using Builder to build a bundle with all required plugins is highly recommended in case of using customized
packages, especially those with additional third-party plugins.
Refer to the Installing Plugins – Online Builder Installation article
for information about building a custom editor package.
But in visiting the page for Builder, unless I'm missing something, it only builds a completely brand new CKEditor from scratch. I can't find any way to keep my current configuration and add plugins.
If there is literally no way to add existing plugins except by doing it manually, I find the language in the docs strongly discouraging this very strange (since 99.9% of the time I'll want to add plugins on the fly and not at the very beginning).
So is manually adding plugins the way to go if I already have other stuff set up?
The most important reason to use the builder is keeping your CKEditor up to date. Imagine the pain of updating all plugins manually.
Just build your base CKEditor, with CKEditor itself and the plugins you are certain you will use in your project.
But nothing stops you from adding your own or community plugins afterwards. You just need to configure them in config.js and keep that file in your ckeditor folder, so your extra plugins remain configured after each CKEditor update.
I'm in need of a dependency manager that is not tied to a particular language or build system. I've looked into several excellent tools (Gradle, Bazel, Hunter, Biicode, Conan, etc.), but none satisfy my requirements (see below). I've also used Git Submodules and Mercurial Subrepos.
My needs are well described in a presentation by Daniel Pfeifer at Meeting C++ 2014. To summarize the goals of this dependency tool (discussed at 18:55 in the linked video):
Not just a package manager
Supports pre-built or source dependencies
Can download or find locally - no unnecessary downloads
Fetches using a variety of methods (i.e. download, or VCS clones, etc.)
Integrated with the system installer - can check if lib is installed
No need to adapt source code in any way
No need to adapt the build system
Cross-platform
Further requirements or clarifications I would add:
Suitable for third-party and/or versioned dependencies, but also capable of specifying non-versioned and/or co-developed dependencies (probably specified by a git/mercurial hash or tag).
Provides a mechanism to override the specified fetching behavior to use some alternate dependency version of my choosing.
No need to manually set up a dependency store. I'm not opposed to a central dependency location as a way to avoid redundant or circular dependencies. However, we need the simplicity of cloning a repo and executing some top-level build script that invokes the dependency manager and builds everything.
Despite the requirement that I should not have to modify my build system, obviously some top-level build must wield the dependency manager and then feed those dependencies to the individual builds. The requirement means that the individual builds should not be aware of the dependency manager. For example, if using CMake for a C++ package, I should not need to modify its CMakeLists.txt to make special function calls to locate dependencies. Rather, the top-level build manager should invoke the dependency manager to retrieve the dependencies and then provide arguments CMake can consume in traditional ways (e.g. find_package or add_subdirectory). In other words, I should always have the option of manually doing the work of the top-level build and dependency manager, and the individual build should not know the difference.
Nice-to-have:
A way to interrogate the dependency manager after-the-fact to find where a dependency was placed. This would allow me to create VCS hooks to automatically update the hash in dependency metadata of co-developed source repo dependencies. (Like submodules or subrepos do).
After thoroughly searching the available technologies, comparing against package managers in various languages (i.e. npm), and even having a run at my own dependency manager tool, I have settled on Conan. After diving deep into Conan, I find that it satisfies most of my requirements out of the box and is readily extensible.
Prior to looking into Conan, I saw BitBake as the model of what I was looking for. However, it is Linux-only and heavily geared toward embedded Linux distros. Conan has essentially the same recipe features as BitBake and is truly cross-platform.
Here are my requirements and what I found with Conan:
Not just a package manager
Supports pre-built or source dependencies
Conan supports classic release or dev dependencies and also allows you to package source. If binaries with particular configurations/settings do not exist in the registry (or "repository", in Conan parlance), a binary will be built from source.
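In Conan 1.x terms, that behavior is a single command run from the directory containing the conanfile:

```shell
# Resolve the dependency graph declared in conanfile.txt or conanfile.py.
# Prebuilt binaries matching the current settings (OS, compiler, build
# type, etc.) are downloaded; anything without a matching binary is
# built from its packaged source.
conan install . --build=missing
```

Without `--build=missing`, Conan errors out when no prebuilt binary matches, which makes the fallback-to-source behavior explicit and opt-in.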
Can download or find locally - no unnecessary downloads
Integrated with the system installer - can check if lib is installed
Conan maintains a local registry as a cache. So independent projects that happen to share dependencies don't need to redo expensive downloads and builds.
Conan does not prevent you from finding system packages instead of the declared dependencies. If you write your build script to be passed prefix paths, you can change the path of individual dependencies on the fly.
Fetches using a variety of methods (i.e. download, or VCS clones, etc.)
Implementing the source function of the recipe gives full control over how a dependency is fetched. Conan supports recipes that download or clone the source themselves, or recipes that "snapshot" the source, packaging it with the recipe itself.
No need to adapt source code in any way
No need to adapt the build system
Conan supports a variety of generators to make dependencies consumable by your chosen build system. The agnosticism from a particular build system is Conan's real win and ultimately what makes dependency management from the likes of Bazel, Buckaroo, etc. cumbersome.
Cross-platform
Python. Check.
Suitable for third-party and/or versioned dependencies, but also capable of specifying non-versioned and/or co-developed dependencies (probably specified by a git/mercurial hash or tag).
Built with semver in mind, but can use any string identifier as version. Additionally has user and channel to act as namespaces for package versions.
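A sketch of how those namespaces appear in Conan 1.x package references (the package and namespace names here are illustrative):

```shell
# An official reference: name/version, with no user/channel.
conan install boost/1.72.0@

# The same recipe published under a user/channel namespace,
# e.g. a company's patched fork tracked on a "stable" channel.
conan install mylib/1.2.0@mycompany/stable
```

Because the version field is just a string, a git tag or commit hash can stand in for a semver number when pinning a co-developed dependency.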
Provides a mechanism to override the specified fetching behavior to use some alternate dependency version of my choosing.
You can prevent the fetch of a particular dependency by not including it in the install command. Or you can modify or override the generated prefix info to point to a different location on disk.
No need to manually set up a dependency store. I'm not opposed to a central dependency location as a way to avoid redundant or circular dependencies. However, we need the simplicity of cloning a repo and executing some top-level build script that invokes the dependency manager and builds everything.
Despite the requirement that I should not have to modify my build system, obviously some top-level build must wield the dependency manager and then feed those dependencies to the individual builds. The requirement means that the individual builds should not be aware of the dependency manager. For example, if using CMake for a C++ package, I should not need to modify its CMakeLists.txt to make special functional calls to locate dependencies. Rather, the top-level build manager should invoke the dependency manager to retrieve the dependencies and then provide arguments CMake can consume in traditional ways (i.e find_package or add_subdirectory). In other words, I should always have the option of manually doing the work of the top-level build and dependency manager and the individual build should not know the difference.
Conan caches dependencies in a local registry. This is seamless. The canonical pattern you'll see in Conan's documentation is to add some Conan-specific calls in your build scripts, but this can be avoided. Once again, if you write your build scripts to consume prefix paths and/or input arguments, you can pass the info in and not use Conan at all. I think the Conan CMake generators could use a little work to make this more elegant. As a fallback, Conan lets me write my own generator.
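One way to keep the CMakeLists.txt itself Conan-agnostic is the Conan 1.x `cmake_paths` generator, sketched here:

```shell
# Generates conan_paths.cmake, which only sets CMAKE_PREFIX_PATH and
# CMAKE_MODULE_PATH to point at the dependencies in the local cache.
conan install . -g cmake_paths

# The project keeps using plain find_package(); Conan is visible only
# through the toolchain file passed on the command line.
cmake .. -DCMAKE_TOOLCHAIN_FILE=conan_paths.cmake
```

Skipping the `conan install` step and setting CMAKE_PREFIX_PATH by hand produces the same build, which is exactly the "individual build should not know the difference" requirement.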
A way to interrogate the dependency manager after-the-fact to find where a dependency was placed. This would allow me to create VCS hooks to automatically update the hash in dependency metadata of co-developed source repo dependencies. (Like submodules or subrepos do).
The generators point to these locations. And with the full capability of Python, you can customize this to your heart's content.
Currently, co-developing dependent projects is the biggest question mark for me. I don't know whether Conan has something out of the box to make tracking commits easy, but I'm confident the hooks are there to add this customization.
Other things I found in Conan:
Conan provides the ability to download or build toolchains that I need during development. It uses Python virtualenv to make enabling/disabling these custom environments easy without polluting my system installations.