What is the difference between deps and dev-deps in Poetry?

In the pyproject.toml configuration file of Poetry you can specify both dependencies and dev-dependencies, but the documentation does not state clearly what the difference is.
I guess from the name that dev-dependencies will not be installed by a release build, but I didn't notice any difference. How do you use these fields correctly, for example to exclude the dev-dependencies in a build?

Your assumption is right. The best use case for dev-dependencies is when you are creating a library with optional dependencies. For instance, you are developing an ORM which should work with MySQL, PostgreSQL, etc. You have to test that your code works with all of these RDBMSs, so you put them into dev-dependencies. But for someone who installs your library, these dependencies are optional and won't be installed automatically.
Commonly, all libraries that are used for testing or building your application are listed under dev-dependencies.
How do you use these fields correctly, for example exclude the dev-dependencies in a build?
poetry install has a --no-dev flag for exactly that scenario.
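For illustration, here is a minimal pyproject.toml sketch (the package names are just examples):

[tool.poetry.dependencies]
python = "^3.8"
requests = "^2.28"

[tool.poetry.dev-dependencies]
pytest = "^7.0"

Running poetry install on a development machine installs everything; poetry install --no-dev on a deployment target skips pytest.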

Related

What are the workarounds when Composer dependencies conflict in versions?

My project depends on two packages, A and B, and they both depend on some-library, unfortunately in incompatible versions:
A depends on some-library 1.0
B depends on some-library 2.0
This is unresolvable by Composer because PHP can only load a single version of a class / interface at runtime.
What are my options? I am fine with "ugly" workarounds as long as they are automated. Doing fragile and manual work like forking A and upgrading its usage of some-library is something I'd like to avoid at all costs.
There are no workarounds. By design, Composer assumes that your project uses a consistent set of dependencies and that the whole installation is kept valid.
As mentioned in the comments, you can prefix the dependencies to isolate the part of the project that needs a particular library version. In this way, the prefixed code is consistent in Composer terms and you can continue developing with the latest versions.
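To illustrate the idea (names purely hypothetical): a prefixer takes a vendored copy of some-library 1.0 and rewrites every namespace reference in it, so it no longer collides with the some-library 2.0 that Composer installs normally:

namespace Some\Library;    becomes    namespace MyPrefix\Some\Library;
use Some\Library\Client;   becomes    use MyPrefix\Some\Library\Client;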
These are a couple of prefixers that I can recommend:
humbug/php-scoper is a well-known tool that can prefix code based on Search & Replace steps (Finders and Patchers).
PHP-Prefixer is an automated online service to apply prefixes to Composer dependencies, based on composer.json definitions. You define the new namespace and prefix for your project.
Disclaimer: I'm the lead PHP-Prefixer developer.

How is the MANIFEST file generated in my AEM OSGi bundle?

I created a Maven project with aem-project-archetype version 13.
After installing the bundle to AEM, I am getting an error in the Felix console that 3 of the imported packages cannot be resolved.
I am trying to find out where these are being pulled into my manifest file (inside the target/MANIFEST folder), so that I can modify the versions of the respective packages.
This is the error I am getting in the Felix console; my bundle is in the installed state, not active:
org.apache.sling.api.resource,version=[2.10,3) -- Cannot be resolved
org.apache.sling.api.servlets,version=[2.2,3) -- Cannot be resolved
org.apache.sling.models.annotations,version=[1.4,2) -- Cannot be resolved
When you're developing AEM applications, the OSGi bundle (and its manifest) is typically generated via the Felix maven-bundle-plugin.
The plugin writes your package imports based on the Java packages you import across all of your Java code. If you are importing from a Maven dependency, say Sling, the version for that import will be the package version from Sling.
The issue you are having here could be one of two things:
1. The package you are importing does not exist in OSGi (the AEM instance).
2. There is a mismatch between the version in your Maven dependencies and the version in OSGi (the AEM instance), so OSGi cannot resolve the version you are importing.
2 is likely the case here, because Sling is always bundled with AEM.
What can you do to debug/fix?
You can go to http://localhost:4502/system/console/depfinder and try your packages there to see what version is actually exported in OSGi.
Check what versions of your dependencies you have in your pom.xml and make sure the OSGi version is within the range specified by the manifest imports. You can use the Maven dependency tree to list all your dependencies and their versions.
This one is specific to AEM: if you are using the uber-jar, make sure you are using the correct version for the AEM instance you're running.
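For example, to narrow the tree down to the Sling artifacts (the includes pattern is illustrative):

mvn dependency:tree -Dincludes=org.apache.sling

The resolved versions shown there can then be compared against what the Depfinder reports as exported.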
Note that in manifest imports the range [2.10,3) means it accepts all versions from 2.10.0 up to, but NOT including, 3.0.0. In my experience, the maven-bundle-plugin will always write a range where the min is your Maven dependency's package version and the max is the next major version.
Changing the imports manually:
This is not recommended and has very specific use cases, but you can manually tell the bundle plugin what version to add to the imports. See the Import-Package instruction in the bundle plugin docs.
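As a sketch, overriding one import range in the pom.xml while keeping the computed defaults for everything else (the range shown is illustrative):

<plugin>
  <groupId>org.apache.felix</groupId>
  <artifactId>maven-bundle-plugin</artifactId>
  <configuration>
    <instructions>
      <!-- pin an explicit range for one package; * keeps the computed defaults -->
      <Import-Package>
        org.apache.sling.api.resource;version="[2.9,3)",
        *
      </Import-Package>
    </instructions>
  </configuration>
</plugin>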
These imports are based on the code you compiled against. They are not some nasty little tidbits put there to pester you. Their purpose is to verify that what you run against is compatible with what you compiled against. I assume that the runtime has a lower version than you require. This implies that your compile path as set up in Maven has later versions than your runtime. If you could run your code you would likely run into ClassNotFoundException or NoSuchMethodError errors.
And maybe not. But then you might be in the worse situation where things are not correct (promises made during compilation are not fulfilled) and problems surface much later, after damage has been done.
This stuff is there for a very good reason. These version ranges are like the earth pin on a plug: they protect you.
How to fix this? Take a look at your dependencies. You must ensure you compile against a version that is lower than or equal to what is present in your runtime. First look at the versions in your POM. If these versions are not there, then look at the compile path Maven uses.
Replacing the numbers in the manifest is like sawing off the earth pin on a plug because otherwise it won't fit in the wall ... bad idea.

Removing extra jar dependencies from a Java project

I am working on migrating a multi-module Java project to Maven. By now, most of the modules are migrated.
I am aware that my project has a lot of unnecessary jars included, and I want to clean them up.
I know Maven has a plugin goal for this, mvn dependency:analyze, which works very well.
dependency:analyze analyzes the dependencies of this project and determines which are: used and declared; used and undeclared; unused and declared, based on static code analysis.
Now my question is: how can I remove the reported "unused and declared" dependencies for cleanup purposes? It is possible that some of those jars are used at runtime, so my code would compile perfectly fine after removing them but blow up at runtime.
An example: my code compiles against the open-source library antisamy.jar, but that library requires batik.jar at runtime. And mvn dependency:analyze reports batik.jar as safe to remove.
Is my understanding correct, or do I need expert input here?
Your understanding seems to be correct.
But I'm not sure why you'd think that there is a tool that could cover all the bases.
Yes, if you use stuff by reflection, no tool can reliably detect the fact that you depend on this class or the other.
For example consider this snippet:
// the class name is assembled at runtime, so static analysis
// cannot see the dependency on com.example.SomeClass
String myClassName = "com." + "example." + "SomeClass";
Class.forName(myClassName);
I don't think you can build a tool that can crawl through the code and extract all such references.
I'd use a try-and-fail approach instead, which would consist of:
remove all dependencies that dependency:analyze says are superfluous
whenever you find one that was actually used, you just add it back
This could work well because I expect that the number of dependencies that are actually used by reflection to be extremely small.
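As a hedged follow-up: once you have confirmed that a jar really is a runtime-only dependency, you can keep it declared and tell dependency:analyze to stop flagging it (the batik coordinates below are illustrative):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-dependency-plugin</artifactId>
  <configuration>
    <ignoredUnusedDeclaredDependencies>
      <!-- known runtime-only dependency of antisamy -->
      <ignoredUnusedDeclaredDependency>batik:batik</ignoredUnusedDeclaredDependency>
    </ignoredUnusedDeclaredDependencies>
  </configuration>
</plugin>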

How can I handle split packages in automatic modules?

I am currently experimenting with migrating an existing application to Jigsaw modules. One of my modules uses ElasticSearch along with its Groovy plugin.
org.elasticsearch:elasticsearch
org.elasticsearch.module:lang-groovy
Unfortunately, they share a split package, so mvn install gives me:
x reads package org.elasticsearch.script.groovy from both lang.groovy and elasticsearch
once for each required module in the descriptor, where x is the name of each module.
I assume that a newer elasticsearch version will have eliminated the split package by the time Java 9 is final, but is there generally a way to handle split packages in legacy dependencies?
I was hoping to be able to have those on the classpath instead of the module path, but after reading this conversation on the mailing list it seems that there is no way to tell the Maven compiler to do so.
maven 3.3.9
maven-compiler-plugin 3.6.0
jdk9-ea+149
elasticsearch 2.3.3
After some more testing, I think there are a few options which should tackle many (but definitely not all) 3rd party split package situations.
Clean up dependencies - maybe a dependency isn't actually needed or can be replaced by a newer (or more distinct) JAR.
Restructure your own module into two modules, each reading the package from one of the two 3rd-party modules (if possible/reasonable).
Wrap one of the 3rd-party modules (or both) in a simple module which does nothing but explicitly export only the package(s) that are actually needed by your module (a sketch follows below).
Depending on the situation, one of these options might be a good fit to resolve the split package problem. But none of them can handle situations in which a coherent piece of code actually needs to access classes from both parts of the split package.
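To illustrate the third option, a minimal sketch under the assumption that you repackage the offending jar(s) into one wrapper jar with an explicit module descriptor compiled in (module and package names are made up):

// module-info.java, compiled into the repackaged wrapper jar
module elasticsearch.groovy.wrapped {
    // export only the package(s) our own module actually consumes,
    // so the module system sees the split package in one place only
    exports org.elasticsearch.script.groovy;
}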

Language/Platform/Build-Independent Dependency Manager

I'm in need of a dependency manager that is not tied to a particular language or build system. I've looked into several excellent tools (Gradle, Bazel, Hunter, Biicode, Conan, etc.), but none satisfy my requirements (see below). I've also used Git Submodules and Mercurial Subrepos.
My needs are well described in a presentation by Daniel Pfeifer at Meeting C++ 2014. To summarize the goals of this dependency tool (discussed at 18:55 of the linked video):
Not just a package manager
Supports pre-built or source dependencies
Can download or find locally - no unnecessary downloads
Fetches using a variety of methods (i.e. download, or VCS clones, etc.)
Integrated with the system installer - can check if lib is installed
No need to adapt source code in any way
No need to adapt the build system
Cross-platform
Further requirements or clarifications I would add:
Suitable for third-party and/or versioned dependencies, but also capable of specifying non-versioned and/or co-developed dependencies (probably specified by a git/mercurial hash or tag).
Provides a mechanism to override the specified fetching behavior to use some alternate dependency version of my choosing.
No need to manually set up a dependency store. I'm not opposed to a central dependency location as a way to avoid redundant or circular dependencies. However, we need the simplicity of cloning a repo and executing some top-level build script that invokes the dependency manager and builds everything.
Despite the requirement that I should not have to modify my build system, obviously some top-level build must wield the dependency manager and then feed those dependencies to the individual builds. The requirement means that the individual builds should not be aware of the dependency manager. For example, if using CMake for a C++ package, I should not need to modify its CMakeLists.txt to make special function calls to locate dependencies. Rather, the top-level build manager should invoke the dependency manager to retrieve the dependencies and then provide arguments CMake can consume in traditional ways (i.e. find_package or add_subdirectory). In other words, I should always have the option of manually doing the work of the top-level build and dependency manager and the individual build should not know the difference.
Nice-to-have:
A way to interrogate the dependency manager after-the-fact to find where a dependency was placed. This would allow me to create VCS hooks to automatically update the hash in dependency metadata of co-developed source repo dependencies. (Like submodules or subrepos do).
After thoroughly searching the available technologies, comparing against package managers in various languages (i.e. npm), and even having a run at my own dependency manager tool, I have settled on Conan. After diving deep into Conan, I find that it satisfies most of my requirements out of the box and is readily extensible.
Prior to looking into Conan, I saw BitBake as the model of what I was looking for. However, it is Linux-only and heavily geared toward embedded Linux distros. Conan has essentially the same recipe features as BitBake and is truly cross-platform.
Here are my requirements and what I found with Conan:
Not just a package manager
Supports pre-built or source dependencies
Conan supports classic release or dev dependencies and also allows you to package source. If binaries with particular configurations/settings do not exist in the registry (or "repository", in Conan parlance), a binary will be built from source.
Can download or find locally - no unnecessary downloads
Integrated with the system installer - can check if lib is installed
Conan maintains a local registry as a cache. So independent projects that happen to share dependencies don't need to redo expensive downloads and builds.
Conan does not prevent you from finding system packages instead of the declared dependencies. If you write your build script to be passed prefix paths, you can change the path of individual dependencies on the fly.
Fetches using a variety of methods (i.e. download, or VCS clones, etc.)
Implementing the source function of the recipe gives full control over how a dependency is fetched. Conan supports recipes that download/clone the source, or the source can be "snapshotted" and packaged with the recipe itself.
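A hedged sketch of that hook, using the Conan 1.x recipe API (the name, version, and URL are made up):

# conanfile.py
from conans import ConanFile

class MyLibConan(ConanFile):
    name = "mylib"
    version = "1.2.0"

    def source(self):
        # any fetch strategy works here: plain download, VCS clone, ...
        self.run("git clone https://example.com/mylib.git .")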
No need to adapt source code in any way
No need to adapt the build system
Conan supports a variety of generators to make dependencies consumable by your chosen build system. This agnosticism toward any particular build system is Conan's real win, and ultimately what makes dependency management with the likes of Bazel, Buckaroo, etc. cumbersome by comparison.
Cross-platform
Python. Check.
Suitable for third-party and/or versioned dependencies, but also capable of specifying non-versioned and/or co-developed dependencies (probably specified by a git/mercurial hash or tag).
Built with semver in mind, but can use any string identifier as version. Additionally has user and channel to act as namespaces for package versions.
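For reference, a full Conan 1.x package reference looks like this (names illustrative), with user and channel forming the namespace:

mylib/1.2.0@mycompany/stable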
Provides a mechanism to override the specified fetching behavior to use some alternate dependency version of my choosing.
You can prevent the fetch of a particular dependency by not including it in the install command. Or you can modify or override the generated prefix info to point to a different location on disk.
No need to manually set up a dependency store. I'm not opposed to a central dependency location as a way to avoid redundant or circular dependencies. However, we need the simplicity of cloning a repo and executing some top-level build script that invokes the dependency manager and builds everything.
Despite the requirement that I should not have to modify my build system, obviously some top-level build must wield the dependency manager and then feed those dependencies to the individual builds. The requirement means that the individual builds should not be aware of the dependency manager. For example, if using CMake for a C++ package, I should not need to modify its CMakeLists.txt to make special function calls to locate dependencies. Rather, the top-level build manager should invoke the dependency manager to retrieve the dependencies and then provide arguments CMake can consume in traditional ways (i.e. find_package or add_subdirectory). In other words, I should always have the option of manually doing the work of the top-level build and dependency manager and the individual build should not know the difference.
Conan caches dependencies in a local registry. This is seamless. The canonical pattern you'll see in Conan's documentation is to add some Conan-specific calls in your build scripts, but this can be avoided. Once again, if you write your build scripts to consume prefix paths and/or input arguments, you can pass the info in and not use Conan at all. I think the Conan CMake generators could use a little work to make this more elegant. As a fallback, Conan lets me write my own generator.
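One way this plays out with CMake, using the cmake_paths generator from Conan 1.x (paths are illustrative), so the CMakeLists.txt itself stays Conan-free:

conan install . -g cmake_paths
cmake .. -DCMAKE_TOOLCHAIN_FILE=conan_paths.cmake

find_package then resolves the cached Conan packages through the injected prefix paths.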
A way to interrogate the dependency manager after-the-fact to find where a dependency was placed. This would allow me to create VCS hooks to automatically update the hash in dependency metadata of co-developed source repo dependencies. (Like submodules or subrepos do).
The generators point to these locations. And with the full capability of Python, you can customize this to your heart's content.
Currently co-developing dependent projects is the biggest question mark for me. Meaning, I don't know if Conan has something out of the box to make tracking commits easy, but I'm confident the hooks are in there to add this customization.
Other things I found in Conan:
Conan provides the ability to download or build toolchains that I need during development. It uses Python virtualenv to make enabling/disabling these custom environments easy without polluting my system installations.
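A sketch with the Conan 1.x virtualenv generator (the package reference is illustrative):

conan install cmake/3.16.3@ -g virtualenv
source activate.sh    # puts the tool on PATH; deactivate.sh restores the shell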
