Poetry clears all [metadata.files] during "add" command - python-poetry

I already have poetry.lock file, which contains [metadata.files] section filled with packages, files and hashes:
[metadata.files]
amqp = [
{file = "amqp-5.1.1-py3-none-any.whl", hash = "sha256:6f0956d2c23d8fa6e7691934d8c3930eadb44972cbbd1a7ae3a520f735d43359"},
{file = "amqp-5.1.1.tar.gz", hash = "sha256:2c1b13fecc0893e946c65cbd5f36427861cffa4ea2201d8f6fca22e2a373b5e2"},
]
anyascii = [
{file = "anyascii-0.3.1-py3-none-any.whl", hash = "sha256:8707d3185017435933360462a65e2c70a4818490745804f38a5ca55e59eb56a0"},
{file = "anyascii-0.3.1.tar.gz", hash = "sha256:dedf57728206e286c91eed7c759505a5e45c8cd01367dd40c2f7248bb15c11f6"},
]
...
When I add a dependency:
poetry add Unidecode@^1.3.4
the package is installed and the lock file is rewritten.
But the section becomes "cleared":
[metadata.files]
amqp = []
anyascii = []
...and so on
Is that OK?


Can I generate multiple plugins in a single config file for Telegraf using dj-wasabi/ansible-telegraf role?

There is a telegraf_plugins_extra property I want to use to generate nested plugins. With my example below I am able to generate the inputs.snmp plugin. What do I write in the variables to generate [[inputs.snmp.field]]?
For example, I am trying to write a playbook that generates the network config file 50-swh.conf:
[[inputs.snmp]]
  agents = [ "ip address1", "ip_address2" ]
  version = 2
  sec_name = "coolview"
  auth_protocol = "SHA"
  sec_level = "public"
  auth_password = "abcd"
  priv_protocol = "AES"
  priv_password = "abcd"

  [[inputs.snmp.field]]
    name = "hostname"
    oid = "RFC1213-MIB::sysName.0"
    is_tag = true

  [[inputs.snmp.field]]
    name = "uptime"
    # ....... more variables

  [[inputs.snmp.table]]
    [[inputs.snmp.table.field]]
I managed to write mine inside the host inventory:
telegraf_plugins_extra:
  50-swh:
    plugin: snmp
    config:
      - agents = [ "ip address1", "ip_address2" ]
      - version = "2"
      - sec_name = "coolview"
      - auth_protocol = "AES"
      - sec_level = "public"
      - auth_password = "abcd"
      - priv_protocol = "AES"
      - priv_password = "abcd"
I can declare another config file variable like 51-swh with plugin: snmp.field, but I want to be able to put everything into a single file.
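One approach that may work, assuming the role writes every entry of the config list verbatim into the generated file, is to pass the nested section headers themselves as raw lines in config. This is only a sketch of that idea (the uptime field is left incomplete, as in the question), so check the generated 50-swh.conf afterwards:

telegraf_plugins_extra:
  50-swh:
    plugin: snmp
    config:
      - agents = [ "ip address1", "ip_address2" ]
      - version = 2
      - sec_name = "coolview"
      - auth_protocol = "SHA"
      - sec_level = "public"
      - auth_password = "abcd"
      - priv_protocol = "AES"
      - priv_password = "abcd"
      # nested sections passed through as raw lines
      - "[[inputs.snmp.field]]"
      - name = "hostname"
      - oid = "RFC1213-MIB::sysName.0"
      - is_tag = true
      - "[[inputs.snmp.field]]"
      - name = "uptime"
      # ... remaining fields and [[inputs.snmp.table]] sections the same way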

bazel workspace for non-bazel packages

I am using the graphviz package as part of a service, and to use it I begin the Bazel WORKSPACE file like this:
new_local_repository(
    name = "graphviz",
    path = "/usr/local/Cellar/graphviz/2.49.1",
    build_file_content = """
package(default_visibility = ["//visibility:public"])

cc_library(
    name = "headers",
    srcs = glob(["**/*.dylib"]),
    hdrs = glob(["**/*.h"]),
)
""",
)
...
The problem with this is that it depends on graphviz being downloaded, pre-installed, and present at the path /usr/local/Cellar/graphviz/2.49.1. Is there a way to make it part of the Bazel build process, so that if it's not present it will be fetched and put in the right place?
You can use http_archive to download one of graphviz's release archives:
https://docs.bazel.build/versions/main/repo/http.html#http_archive
From https://graphviz.org/download/source/ the 2.49.1 release is available at https://gitlab.com/api/v4/projects/4207231/packages/generic/graphviz-releases/2.49.1/graphviz-2.49.1.tar.gz
load("#bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
http_archive(
name = "graphviz",
url = "https://gitlab.com/api/v4/projects/4207231/packages/generic/graphviz-releases/2.49.1/graphviz-2.49.1.tar.gz",
strip_prefix = "graphviz-2.49.1",
sha256 = "ba1aa7a209025cb3fc5aca1f2c0114e18ea3ad29c481d75e4d445ad44e0fb0f7",
build_file_content = """
package(default_visibility = ["//visibility:public"])
cc_library(
name = "headers",
srcs = glob(["**/*.dylib"]),
hdrs = glob(["**/*.h"])
)
""",
)
To answer the "if it's not present" part of the question, I'm not aware of a straightforward way to automatically switch between something that's locally installed and downloading something. http_archive will always download the archive, and new_local_repository will always use something local.
There's the --override_repository flag, which replaces a repository with a local one, e.g. --override_repository=graphviz=/usr/local/Cellar/graphviz/2.49.1 would effectively replace the http_archive with a local_repository pointing to that path. However, Bazel would then expect a WORKSPACE file and a BUILD file to already exist at that location (i.e., there's no way to specify build_file_content).
You could specify both repository rules in the WORKSPACE file, and then use some indirection, a Starlark flag, and a select() to switch between the repositories using a command-line flag. It's a little involved though, and also not automatic. Something like this:
WORKSPACE:
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    name = "graphviz-download",
    ...,
)

new_local_repository(
    name = "graphviz-installed",
    ...,
)

http_archive(
    name = "bazel_skylib",
    urls = [
        "https://github.com/bazelbuild/bazel-skylib/releases/download/1.1.1/bazel-skylib-1.1.1.tar.gz",
        "https://mirror.bazel.build/github.com/bazelbuild/bazel-skylib/releases/download/1.1.1/bazel-skylib-1.1.1.tar.gz",
    ],
    sha256 = "c6966ec828da198c5d9adbaa94c05e3a1c7f21bd012a0b29ba8ddbccb2c93b0d",
)

load("@bazel_skylib//:workspace.bzl", "bazel_skylib_workspace")

bazel_skylib_workspace()
BUILD (e.g. in //third_party/graphviz):
load("#bazel_skylib//rules:common_settings.bzl", "bool_flag")
bool_flag(
name = "use-installed-graphviz",
build_setting_default = False,
)
config_setting(
name = "installed",
flag_values = {
":use-installed-graphviz": "True",
}
)
alias(
name = "headers",
actual = select({
":installed": "#graphviz-installed//:headers",
"//conditions:default": "#graphviz-download//:headers",
})
)
Then your code depends on //third_party/graphviz:headers; by default the alias points to the downloaded version, and the flag --//third_party/graphviz:use-installed-graphviz switches it to the installed version:
$ bazel cquery --output build //third_party/graphviz:headers
alias(
  name = "headers",
  actual = "@graphviz-download//:headers",
)

$ bazel cquery --output build //third_party/graphviz:headers --//third_party/graphviz:use-installed-graphviz
alias(
  name = "headers",
  actual = "@graphviz-installed//:headers",
)
Another option is to write (or find) a custom repository rule that combines the functionality of http_archive and local_repository, but that would probably be a fair bit of work.
Generally I think most people just use an http_archive and download the dependencies, and if there are specific needs for being offline or caching, there's --distdir for using already-downloaded artifacts for remote repository rules: https://docs.bazel.build/versions/main/guide.html#distribution-files-directories
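For example, a rough sketch of that workflow (the cache directory here is just a placeholder): download the release archive once, then point Bazel at the directory containing it, and Bazel will use the local file (matched by file name and checksum) instead of fetching it:

$ mkdir -p ~/bazel-distdir
$ curl -L -o ~/bazel-distdir/graphviz-2.49.1.tar.gz https://gitlab.com/api/v4/projects/4207231/packages/generic/graphviz-releases/2.49.1/graphviz-2.49.1.tar.gz
$ bazel build //third_party/graphviz:headers --distdir=$HOME/bazel-distdir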
Edit: An example of using rules_foreign_cc with graphviz:
WORKSPACE:
load("#bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
http_archive(
name = "rules_foreign_cc",
sha256 = "69023642d5781c68911beda769f91fcbc8ca48711db935a75da7f6536b65047f",
strip_prefix = "rules_foreign_cc-0.6.0",
url = "https://github.com/bazelbuild/rules_foreign_cc/archive/0.6.0.tar.gz",
)
load("#rules_foreign_cc//foreign_cc:repositories.bzl", "rules_foreign_cc_dependencies")
# This sets up some common toolchains for building targets. For more details, please see
# https://bazelbuild.github.io/rules_foreign_cc/0.6.0/flatten.html#rules_foreign_cc_dependencies
rules_foreign_cc_dependencies()
http_archive(
name = "graphviz",
url = "https://gitlab.com/api/v4/projects/4207231/packages/generic/graphviz-releases/2.49.1/graphviz-2.49.1.tar.gz",
strip_prefix = "graphviz-2.49.1",
sha256 = "ba1aa7a209025cb3fc5aca1f2c0114e18ea3ad29c481d75e4d445ad44e0fb0f7",
build_file_content = """\
filegroup(
name = "all_srcs",
srcs = glob(["**"]),
visibility = ["//visibility:public"],
)
""",
)
BUILD:
load("#rules_foreign_cc//foreign_cc:defs.bzl", "configure_make")
# see https://bazelbuild.github.io/rules_foreign_cc/0.6.0/configure_make.html
configure_make(
name = "graphviz",
lib_source = "#graphviz//:all_srcs",
out_shared_libs = ["libcgraph.so"], # or other graphviz libs
)
cc_binary(
name = "foo",
srcs = ["foo.c"],
deps = [":graphviz"],
)
foo.c:
#include "graphviz/cgraph.h"

int main() {
    Agraph_t *g;
    g = agopen("G", Agdirected, NULL);
    agclose(g);
    return 0;
}
Usage:
$ bazel build foo
INFO: Analyzed target //:foo (0 packages loaded, 2 targets configured).
INFO: Found 1 target...
Target //:foo up-to-date:
bazel-bin/foo
INFO: Elapsed time: 0.229s, Critical Path: 0.06s
INFO: 7 processes: 5 internal, 2 linux-sandbox.
INFO: Build completed successfully, 7 total actions

Trying to write a custom rule

I'm trying to write a custom rule for gqlgen. The idea is to run it to generate Go code from a GraphQL schema.
My intended usage is:
gqlgen(
    name = "gql-gen-foo",
    schemas = ["schemas/schema.graphql"],
    visibility = ["//visibility:public"],
)
"name" is the name of the rule, on which I want other rules to depend; "schemas" is the set of input files.
So far I have:
load(
    "@io_bazel_rules_go//go:def.bzl",
    _go_context = "go_context",
    _go_rule = "go_rule",
)

def _gqlgen_impl(ctx):
    go = _go_context(ctx)
    args = ["run github.com/99designs/gqlgen --config"] + [ctx.attr.config]
    ctx.actions.run(
        inputs = ctx.attr.schemas,
        outputs = [ctx.actions.declare_file(ctx.attr.name)],
        arguments = args,
        progress_message = "Generating GraphQL models and runtime from %s" % ctx.attr.config,
        executable = go.go,
    )

_gqlgen = _go_rule(
    implementation = _gqlgen_impl,
    attrs = {
        "config": attr.string(
            default = "gqlgen.yml",
            doc = "The gqlgen filename",
        ),
        "schemas": attr.label_list(
            allow_files = [".graphql"],
            doc = "The schema file location",
        ),
    },
    executable = True,
)

def gqlgen(**kwargs):
    tags = kwargs.get("tags", [])
    if "manual" not in tags:
        tags.append("manual")
    kwargs["tags"] = tags
    _gqlgen(**kwargs)
My immediate issue is that Bazel complains that the schemas are not Files:
expected type 'File' for 'inputs' element but got type 'Target' instead
What's the right approach to specify the input files?
Is this the right approach to generate a rule that executes a command?
Finally, is it okay to have the output file not exist in the filesystem, but rather be a label on which other rules can depend?
Instead of:
ctx.actions.run(
    inputs = ctx.attr.schemas,
Use:
ctx.actions.run(
    inputs = ctx.files.schemas,
ctx.attr.schemas gives you the Targets of the label_list, while ctx.files.schemas flattens them into the list of File objects that the action expects.
Is this the right approach to generate a rule that executes a command?
This looks right, as long as gqlgen actually creates a file with exactly the declared output name (outputs = [ctx.actions.declare_file(ctx.attr.name)]). It's probably clearer to declare the output with a .go extension and pass its path to the tool explicitly:
generated_go_file = ctx.actions.declare_file(ctx.attr.name + ".go")

# ..

ctx.actions.run(
    outputs = [generated_go_file],
    arguments = ["run", "...", "--output", generated_go_file.path],
    # ..
)
Finally, is it okay to have the output file not exist in the filesystem, but rather be a label on which other rules can depend?
The output file needs to be created, and as long as it's returned at the end of the rule implementation in a DefaultInfo provider, other rules will be able to depend on the file label (e.g. //my/package:foo-gqlgen.go).
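Putting those pieces together, a minimal sketch of the implementation returning the declared output in DefaultInfo (the gqlgen arguments are illustrative only, not a verified invocation):

def _gqlgen_impl(ctx):
    go = _go_context(ctx)
    generated_go_file = ctx.actions.declare_file(ctx.attr.name + ".go")
    ctx.actions.run(
        inputs = ctx.files.schemas,
        outputs = [generated_go_file],
        # Illustrative arguments; the real gqlgen flags depend on your setup.
        arguments = ["run", "github.com/99designs/gqlgen", "--config", ctx.attr.config, "--output", generated_go_file.path],
        progress_message = "Generating GraphQL models and runtime from %s" % ctx.attr.config,
        executable = go.go,
    )
    # Returning the file in DefaultInfo is what lets other rules depend on this label.
    return [DefaultInfo(files = depset([generated_go_file]))]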

How to add TXT record using dnsruby

I am trying to add a TXT record using the dnsruby library. What would be the best implementation?
Below is the sample code I am trying:
require 'dnsruby'
key = "mydns key"
resolver = Dnsruby::Resolver.new("dnsserverip")
update = Dnsruby::Update.new("zonename")
resolver.tsig = "update.zone", key, 'hmac-sha512'
update.add("domain.com", "TXT", 300, "This my txt content")
response = resolver.send_message(update)
puts response.answer
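As a sanity check (a sketch only, assuming the update above is accepted by the server), the record can be queried back with the same library; "dnsserverip" and "domain.com" are the placeholders from the question:

# Query the TXT record back to confirm the dynamic update took effect.
verify = Dnsruby::Resolver.new("dnsserverip")
reply = verify.query("domain.com", "TXT")
reply.answer.each { |rr| puts rr }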

Why does the file get removed when committing a second time via rugged?

I want to store text files in a Git repo. I am using the Ruby rugged gem 0.19.0 for this. The problem is that adding a second file f2 seems to automatically delete the first one f1. I have isolated the code to reproduce this (basically code straight from the rugged gem docs):
require 'rugged'

def commit(file_name, message, content)
  user = { email: 'email', name: 'name', time: Time.now }
  repo = Rugged::Repository.new('repo')
  oid = repo.write(content, :blob)
  index = repo.index
  index.add(path: file_name, oid: oid, mode: 0100644)
  options = {}
  options[:tree] = index.write_tree(repo)
  options[:author] = user
  options[:committer] = user
  options[:message] = message
  options[:parents] = repo.empty? ? [] : [ repo.head.target ].compact
  options[:update_ref] = 'HEAD'
  Rugged::Commit.create(repo, options)
end

Rugged::Repository.init_at('repo', :bare)
commit('f1', 'create f1', 'f1 content')
commit('f2', 'create f2', 'f2 content')
After running the above code and cloning the bare repo it creates, git log --name-status shows that the second commit removes the f1 file.
How do I fix this to not mess with files stored previously in the repo?
From the Rugged README:
oid = repo.write("This is a blob.", :blob)
index = repo.index
index.read_tree(repo.head.target.tree) # note: read the current HEAD tree into the index first
index.add(:path => "README.md", :oid => oid, :mode => 0100644)
But repo.head.target is a string (the commit OID) rather than a Commit object in rugged 0.19.0, so you have to look the commit up yourself:
oid = repo.write(content, :blob)
index = repo.index
begin
  # Read the tree of the current HEAD commit into the index so that
  # previously committed files are carried over into the new tree.
  commit = repo.lookup(repo.head.target)
  tree = commit.tree
  index.read_tree(tree)
rescue
  # First commit: HEAD does not exist yet, so start from an empty index.
end
index.add(path: file_name, oid: oid, mode: 0100644)
It works
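For completeness, here is a sketch of the asker's commit helper with that fix folded in (only the read_tree block is new relative to the original function):

require 'rugged'

def commit(file_name, message, content)
  user = { email: 'email', name: 'name', time: Time.now }
  repo = Rugged::Repository.new('repo')
  oid = repo.write(content, :blob)
  index = repo.index

  begin
    # Keep the files from the previous commit in the new tree.
    index.read_tree(repo.lookup(repo.head.target).tree)
  rescue
    # First commit: there is no HEAD yet.
  end

  index.add(path: file_name, oid: oid, mode: 0100644)

  options = {
    tree: index.write_tree(repo),
    author: user,
    committer: user,
    message: message,
    parents: repo.empty? ? [] : [repo.head.target].compact,
    update_ref: 'HEAD',
  }
  Rugged::Commit.create(repo, options)
end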
