Listing external wheel as platform specific dependency in pyproject.toml - pip

I'm trying to list lxml as a dependency in my Python package. Specifically, another package requires it, but lxml is a pain in the ass to install on Windows, while it can be installed easily on other platforms. The workaround I arrived at involves downloading a custom wheel file as described here before configuring the rest, but I don't want this dependency to be checked on other platforms, as it's Windows-specific.
I've configured the dependencies section of my pyproject.toml as follows:
dependencies = [
    'lxml @ https://download.lfd.uci.edu/pythonlibs/archived/lxml-4.9.0-cp311-cp311-win_amd64.whl#egg=lxml-4.9.0',
    'some-other-package',
]
and I'm able to build it with setuptools with no problems. However, when I modify that line to include the Windows conditional:
'lxml @ https://download.lfd.uci.edu/pythonlibs/archived/lxml-4.9.0-cp311-cp311-win_amd64.whl#egg=lxml-4.9.0; sys_platform == "win32"',
it fails with the following error:
DESCRIPTION:
Project dependency specification according to PEP 508
GIVEN VALUE:
"lxml # https://download.lfd.uci.edu/pythonlibs/archived/lxml-4.9.0-cp311-cp311-win_amd64.whl#egg=lxml-4.9.0; sys_platform == \"win32\""
OFFENDING RULE: 'format'
DEFINITION:
{
    "$id": "#/definitions/dependency",
    "title": "Dependency",
    "type": "string",
    "format": "pep508"
}
...
ValueError: invalid pyproject.toml config: `project.dependencies[{data__dependencies_x}]`.
configuration error: `project.dependencies[{data__dependencies_x}]` must be pep508
Having read PEP 508 (and the accompanying PEP 631), the environment marker appears to be valid, so what am I doing wrong?

Based on a similar setuptools bug report, it seems that for a dependency specification containing a direct reference URL to conform to PEP 508, there must be whitespace on each side of the semicolon ;.
Also note that, as far as I know, the #egg=lxml part does not belong in a PEP 508 dependency specification.
So in your case, I guess it should be:
lxml @ https://download.lfd.uci.edu/pythonlibs/archived/lxml-4.9.0-cp311-cp311-win_amd64.whl ; sys_platform == "win32"
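Applied to the dependencies table from the question, that would look roughly like this (an untested sketch, simply combining the original entry with the fix above):
dependencies = [
    'lxml @ https://download.lfd.uci.edu/pythonlibs/archived/lxml-4.9.0-cp311-cp311-win_amd64.whl ; sys_platform == "win32"',
    'some-other-package',
]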

Related

Bazel & protobuf: how to choose a specific protoc version

I am working on a project that uses some proto sources that were already compiled with a specific protoc version. I also need to compile some custom protos that live in the same project, so the protoc I use needs to match the one that was used to generate the pre-existing ones.
I can see in the pre-generated ones:
#if PROTOBUF_VERSION < 3009000
#if 3009002 < PROTOBUF_MIN_PROTOC_VERSION
In mine:
#if PROTOBUF_VERSION < 3017000
#if 3017000 < PROTOBUF_MIN_PROTOC_VERSION
I don't quite understand which protoc is being used; the one installed on my system is 3.19.4.
Also, this is my WORKSPACE:
http_archive(
    name = "rules_proto",
    sha256 = "66bfdf8782796239d3875d37e7de19b1d94301e8972b3cbd2446b332429b4df1",
    strip_prefix = "rules_proto-4.0.0",
    urls = [
        "https://mirror.bazel.build/github.com/bazelbuild/rules_proto/archive/refs/tags/4.0.0.tar.gz",
        "https://github.com/bazelbuild/rules_proto/archive/refs/tags/4.0.0.tar.gz",
    ],
)
load("@rules_proto//proto:repositories.bzl", "rules_proto_dependencies", "rules_proto_toolchains")
rules_proto_dependencies()
rules_proto_toolchains()
http_archive(
    name = "com_github_grpc_grpc",
    urls = [
        "https://github.com/grpc/grpc/archive/refs/tags/v1.44.0.tar.gz",
    ],
    sha256 = "8c05641b9f91cbc92f51cc4a5b3a226788d7a63f20af4ca7aaca50d92cc94a0d",
    strip_prefix = "grpc-1.44.0",
)
load("@com_github_grpc_grpc//bazel:grpc_deps.bzl", "grpc_deps")
grpc_deps()
load("@com_github_grpc_grpc//bazel:grpc_extra_deps.bzl", "grpc_extra_deps")
grpc_extra_deps()
The error I am currently getting is:
In file included from cc/tensorflow/plugin_primeclient/grappler/grappler.cc:7:
bazel-out/aarch64-fastbuild/bin/cc/tensorflow/plugin/protos/graph.pb.h:12:2: error: #error This file was generated by a newer version of protoc which is
12 | #error This file was generated by a newer version of protoc which is
| ^~~~~
bazel-out/aarch64-fastbuild/bin/cc/tensorflow/plugin/protos/graph.pb.h:13:2: error: #error incompatible with your Protocol Buffer headers. Please update
13 | #error incompatible with your Protocol Buffer headers. Please update
| ^~~~~
bazel-out/aarch64-fastbuild/bin/cc/tensorflow/plugin/protos/graph.pb.h:14:2: error: #error your headers.
14 | #error your headers.
| ^~~~~
I'll try and describe the general process I take when tracking dependency problems down in Bazel, as it seems to be a regular problem that you'll probably run into again.
Before Bazel does anything to do with the build, it's going to look in your WORKSPACE file to see if it needs to fetch any dependencies. It might not seem like an important detail, but Bazel handles WORKSPACE dependencies from top to bottom. We can use this behaviour to override the protobuf version used. Check out the maybe macro if you're interested in how this works.
So in your WORKSPACE file, the first dependency that you have is rules_proto at version 4.0.0, declared with http_archive. Then you are loading two macros from rules_proto here;
load("#rules_proto//proto:repositories.bzl", "rules_proto_dependencies", "rules_proto_toolchains")
rules_proto_dependencies()
rules_proto_toolchains()
So, first things first, let's head over to the rules_proto releases page and find your specific release. Then click on the little hash on that page (circled in red).
Then click "browse files" in the top right;
This will allow you to browse the state of that repository at that specific version. Now, as you loaded "repositories.bzl", you'll want to open that up and inspect it (Ctrl-F search for protobuf). You'll find that it calls another private macro, i.e.
protobuf_workspace(name = "com_google_protobuf")
If we look for where that macro was loaded, you'll see that you need to follow it through to 'proto/private/dependencies.bzl';
load("//proto/private:dependencies.bzl", "dependencies", "maven_dependencies", "protobuf_workspace")
After opening that up and searching again for protobuf you'll find the line that specifies the protobuf version;
"com_github_protocolbuffers_protobuf": {
#...
},
So by the looks of it, you are using an older version of protobuf with Bazel than what is installed on your system. So in order to override the protobuf version in Bazel, you simply need to add it as a dependency in your WORKSPACE before the rules_proto repository, e.g.
# file: //:WORKSPACE
http_archive(
    name = "com_github_protocolbuffers_protobuf",
    # TODO: Leave this empty and Bazel will tell you what to put here when you build.
    sha256 = "",
    # Note: Same version as your system deps.
    strip_prefix = "protobuf-3.19.4",
    urls = [
        # Note: Same version as your system deps.
        "https://github.com/protocolbuffers/protobuf/releases/download/v3.19.4/protobuf-all-3.19.4.tar.gz",
    ],
)
http_archive(
    name = "rules_proto",
    # The rest of the WORKSPACE...

Requesting specific tagged version for locally developed composer package

I am developing a package for a Laravel project on my local machine. I have also spun up a Laravel app so I can manually test the package. My package is located at /home/me/packages/me/my-package and a commit (git) has been tagged with '0.1'.
I want to be able to switch between tagged versions and use specific versions in different projects, but I am having issues.
In my main app's composer file, I am requiring the package like so:
...
"require" : {
    "me/my-package" : "0.1"
}
...
"repositories" : [
    {
        "type": "path",
        "url": "/home/me/packages/me/my-package"
    }
]
This results in an error:
Problem 1
- Root composer.json requires me/my-package 0.1, found me/my-package[dev-main] but it does not match the constraint.
I have also tried:
"require" : {
"me/my-package" : "dev-main#0.1"
}
(This was an idea taken from How to use a specific tag/version with composer and a private git repository?). This goes through without any errors but:
$ composer show | grep me/my-package
me/my-package dev-main My Package
What is the correct way to install a specific version of a package when developing it locally?
Probably the only reason you hit this message is that you have "type": "path" and not "type": "vcs".
With a path repository, Composer will only ever pick up one version, and that version is dev-main. The reason is:
If the package [path repository] is a local VCS repository, the version may be inferred by the branch or tag that is currently checked out. (ref)
You have the main branch checked out at /home/me/packages/me/my-package (the content of /home/me/packages/me/my-package/.git/HEAD is ref: refs/heads/main, and /home/me/packages/me/my-package/.git/refs/heads/main points to the git revision), and Composer will only take that one.
You should have no problem making that change from path to vcs, given:
You already have a (git) repository at /home/me/packages/me/my-package (looks so by your question)
You know the absolute path on your local system to that repository (again, looks so by your question: /home/me/packages/me/my-package).
Given these two points, Composer is able to obtain the tagged VCS versions from that path. So basically the only change needed is the "type":
"repositories" : [
{
"type": "vcs",
"url": "/home/me/packages/me/my-package"
}
]
Just take care that "url" contains the absolute path (and that there is a git repository at that location). Likely all already set up in your case, just saying.
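Putting it together with the require constraint from the question, the relevant parts of the app's composer.json would then look roughly like this (just a sketch reusing the path and the 0.1 tag from the question):
"require" : {
    "me/my-package" : "0.1"
},
"repositories" : [
    {
        "type": "vcs",
        "url": "/home/me/packages/me/my-package"
    }
]
After that, composer update me/my-package should resolve to the 0.1 tag instead of dev-main.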
Git is very prominent, which is why I mentioned it here; for other types of VCS, Composer also has options at hand. The details (also for git etc.) are available here:
VCS - Repositories (getcomposer.org)

How to load a YAML file inside an Ansible custom module

I have an Ansible custom module that has a configuration file in YAML format.
Now the question is: how should I load that YAML file inside the module?
NOTE: as I understand it, I can't simply use something like PyYAML, since Ansible will run my module on the node that it is configuring, and maybe that system does not have PyYAML installed.
NOTE: also, while Ansible itself has ansible.parsing.utils.yaml.from_yaml, it is not usable by modules.
So, funny as it may sound, I don't know how to load a YAML file in a custom Ansible module. Please help.
It's a great question. It does sound funny and you'd expect a simple answer, but as far as I can see these are the facts.
The latest development branch of Ansible has /lib/ansible/module_utils/common/yaml.py, which can be used by modules because it is under module_utils. See here.
If you look at the source code, all it's doing is import yaml as _yaml, which you could do yourself inside your custom module. My understanding is this is using PyYAML, which is documented here. (Someone correct me if I'm wrong! I don't fully understand the comment in that file stating "preferring the YAML compiled C extensions...")
Anyway, if your target machine does not have PyYAML, you can always add a task to ensure it's there, e.g.
- name: Install PyYAML python package
  pip:
    name: pyyaml
and then use it in your own module with:
from yaml import load, dump
try:
    from yaml import CLoader as Loader, CDumper as Dumper
except ImportError:
    from yaml import Loader, Dumper
# ...
data = load(stream, Loader=Loader)
# ...
output = dump(data, Dumper=Dumper)
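Putting the two together, a minimal custom module could look roughly like this (just a sketch; the config_path option and the failure message are made up for illustration, not anything Ansible itself provides):
from ansible.module_utils.basic import AnsibleModule

try:
    import yaml  # PyYAML, assumed to be present on the target (see the pip task above)
    HAS_YAML = True
except ImportError:
    HAS_YAML = False

def main():
    module = AnsibleModule(
        argument_spec=dict(
            config_path=dict(type='path', required=True),
        ),
        supports_check_mode=True,
    )
    if not HAS_YAML:
        module.fail_json(msg="PyYAML is required on the target host")
    # Read and parse the module's YAML configuration file.
    with open(module.params['config_path']) as f:
        config = yaml.safe_load(f)
    module.exit_json(changed=False, config=config)

if __name__ == '__main__':
    main()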

sphinx autodoc creates blank page on readthedocs, but correctly includes module docstring locally

I'm getting different results from autodoc when I run sphinx locally (versions 1.6.6 or 2.0.1 on Anaconda Python 3.6.8 for Mac) than when I run it on readthedocs.org (according to their log it's Sphinx version 1.8.5, and probably Python 2.7 since it's launched with python rather than python3).
The difference is in the results from the following file, Shady.Text.rst, which contains no more than:
Shady.Text Sub-module
=====================
.. automodule:: Shady.Text
Now, this sub-module happens to contain only a module-level docstring and no member docstrings—that's as intended, so the corresponding html page should contain the module docstring and no more. And this is exactly what happens when I run make html locally. However the result at https://shady.readthedocs.io/en/latest/source/Shady.Text.html is content-free (header only, no module docstring).
FWIW my autodoc-related entries in conf.py are:
autoclass_content = 'both'
autodoc_member_order = 'groupwise'
What am I doing wrong?
Thanks @StevePiercy for drawing my attention to the crucial lines in the raw log file:
WARNING: autodoc: failed to import module u'Text' from module u'Shady'; the module executes module level statement and it might call sys.exit().
WARNING: autodoc: failed to import module u'Video' from module u'Shady'; the module executes module level statement and it might call sys.exit().
(I had searched the 9000-line log file for .Text, because Text on its own creates too many hits, but it hadn't occurred to me to search it for 'Text' in quotes.)
To me, the message is misleading: the problem is not that "the module executes module level statements", because that per se is allowed. (I wasted some time after noting that some module-level statements seemed to be allowed in other sub-modules, and tried to bundle the offending module-level statements into a class decorator, thinking maybe sphinx's mysterious module-level-statement-detector would miss them then...)
No, the problem is not the fact that the module-level statements exist and might call sys.exit(), but the fact that they did indirectly call sys.exit() during sphinx's compilation procedure. This was a quirk of the way I handle missing dependencies, which should probably be re-thought, but I could work around it for now by avoiding my sys.exit() call when os.environ.get('READTHEDOCS') is truthy.
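For what it's worth, the workaround boils down to something like this (a rough sketch; handle_missing_dependency is a made-up stand-in for my own dependency-handling code):
import os
import sys

def handle_missing_dependency(name):
    # Hypothetical handler for a missing optional dependency.
    if os.environ.get('READTHEDOCS'):
        # Building on readthedocs: let sphinx import the module anyway.
        return
    sys.exit('missing required dependency: %s' % name)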

Yocto parallel configuration packages files conflict

I have a base package that provides my functionality (wireguard-tools, taken from the internet).
This package includes no configuration files for the network interfaces (as it should).
Then I created a few packages with these configuration files, which are only to be deployed one per respective image (e.g. image-1 includes wireguard-1-conf while image-2 includes wireguard-2-conf).
I would like to set up SystemD, but I can only do this when I have an interface configured, and that will only happen when the *-conf package is installed.
Unfortunately, the SystemD service file ("wg-quick@.service") is deployed by the wireguard-tools package, and my dependent package, the *-conf one, cannot see it:
ERROR: Function failed: SYSTEMD_SERVICE_wireguard-1-conf value wg-quick@wg0.service does not exist
I managed to do a dirty workaround, but I feel dirty doing this in my *-conf recipe:
do_install_append () {
    touch ${D}${systemd_system_unitdir}/wg-quick@wg0.service
}
pkg_postinst_${PN} () {
    rm -f $D/${systemd_system_unitdir}/wg-quick@wg0.service
}
How should I proceed to make it work "the right way"?
Is there an elegant way of making "wg-quick@.service" from wireguard-tools accessible to *-conf?
Thanks in advance.
Additional Info
My *-conf recipes inherit systemd and include wireguard-tools dependency:
inherit systemd
...
DEPENDS_${PN} = "wireguard-tools"
RDEPENDS_${PN} = "wireguard-tools"
I see nothing else worth mentioning in my recipes.
