Trying to write a custom rule - Go

I'm trying to write a custom rule for gqlgen. The idea is to run it to generate Go code from a GraphQL schema.
My intended usage is:
gqlgen(
    name = "gql-gen-foo",
    schemas = ["schemas/schema.graphql"],
    visibility = ["//visibility:public"],
)
"name" is the name of the rule, on which I want other rules to depend; "schemas" is the set of input files.
So far I have:
load(
    "@io_bazel_rules_go//go:def.bzl",
    _go_context = "go_context",
    _go_rule = "go_rule",
)
def _gqlgen_impl(ctx):
    go = _go_context(ctx)
    args = ["run github.com/99designs/gqlgen --config"] + [ctx.attr.config]
    ctx.actions.run(
        inputs = ctx.attr.schemas,
        outputs = [ctx.actions.declare_file(ctx.attr.name)],
        arguments = args,
        progress_message = "Generating GraphQL models and runtime from %s" % ctx.attr.config,
        executable = go.go,
    )
_gqlgen = _go_rule(
    implementation = _gqlgen_impl,
    attrs = {
        "config": attr.string(
            default = "gqlgen.yml",
            doc = "The gqlgen filename",
        ),
        "schemas": attr.label_list(
            allow_files = [".graphql"],
            doc = "The schema file location",
        ),
    },
    executable = True,
)
def gqlgen(**kwargs):
    tags = kwargs.get("tags", [])
    if "manual" not in tags:
        tags.append("manual")
    kwargs["tags"] = tags
    _gqlgen(**kwargs)
My immediate issue is that Bazel complains that the schemas are not Files:
expected type 'File' for 'inputs' element but got type 'Target' instead
What's the right approach to specify the input files?
Is this the right approach to generate a rule that executes a command?
Finally, is it okay to have the output file not exist in the filesystem, but rather be a label on which other rules can depend?

Instead of:
ctx.actions.run(
    inputs = ctx.attr.schemas,
Use:
ctx.actions.run(
    inputs = ctx.files.schemas,
Is this the right approach to generate a rule that executes a command?
This looks right, as long as gqlgen creates the file with the correct output name (outputs = [ctx.actions.declare_file(ctx.attr.name)]).
generated_go_file = ctx.actions.declare_file(ctx.attr.name + ".go")
# ..
ctx.actions.run(
    outputs = [generated_go_file],
    arguments = ["run", "...", "--output", generated_go_file.short_path],
    # ..
)
Finally, is it okay to have the output file not exist in the filesystem, but rather be a label on which other rules can depend?
The output file needs to be created, and as long as it's returned at the end of the rule implementation in a DefaultInfo provider, other rules will be able to depend on the file label (e.g. //my/package:foo-gqlgen.go).
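For illustration, a condensed sketch of how the implementation from the question could declare the output and return it; it reuses the go_context and attributes defined above, and the exact gqlgen invocation (arguments, output flag) is an assumption that may need adjusting:
def _gqlgen_impl(ctx):
    go = _go_context(ctx)
    generated_go_file = ctx.actions.declare_file(ctx.attr.name + ".go")
    ctx.actions.run(
        inputs = ctx.files.schemas,
        outputs = [generated_go_file],
        executable = go.go,
        # Hypothetical invocation; split into separate arguments rather than one string.
        arguments = ["run", "github.com/99designs/gqlgen", "--config", ctx.attr.config],
        progress_message = "Generating GraphQL models and runtime from %s" % ctx.attr.config,
    )
    # Returning the generated file in DefaultInfo is what makes the label
    # (e.g. //my/package:gql-gen-foo) usable as an input to other rules.
    return [DefaultInfo(files = depset([generated_go_file]))]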

Related

Bazel workspace for non-Bazel packages

I am using the graphviz package as part of a service, and to use it I begin the Bazel WORKSPACE file like this:
new_local_repository(
    name = "graphviz",
    path = "/usr/local/Cellar/graphviz/2.49.1",
    build_file_content = """
package(default_visibility = ["//visibility:public"])
cc_library(
    name = "headers",
    srcs = glob(["**/*.dylib"]),
    hdrs = glob(["**/*.h"]),
)
""",
)
...
The problem with it is that it depends on graphviz being downloaded, pre-installed, and present at the path /usr/local/Cellar/graphviz/2.49.1. Is there a way to make it part of the Bazel build process, such that if it's not present it will be fetched and put in the right place?
You can use http_archive to download one of graphviz's release archives:
https://docs.bazel.build/versions/main/repo/http.html#http_archive
From https://graphviz.org/download/source/ the 2.49.1 release is available at https://gitlab.com/api/v4/projects/4207231/packages/generic/graphviz-releases/2.49.1/graphviz-2.49.1.tar.gz
load("#bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
http_archive(
name = "graphviz",
url = "https://gitlab.com/api/v4/projects/4207231/packages/generic/graphviz-releases/2.49.1/graphviz-2.49.1.tar.gz",
strip_prefix = "graphviz-2.49.1",
sha256 = "ba1aa7a209025cb3fc5aca1f2c0114e18ea3ad29c481d75e4d445ad44e0fb0f7",
build_file_content = """
package(default_visibility = ["//visibility:public"])
cc_library(
name = "headers",
srcs = glob(["**/*.dylib"]),
hdrs = glob(["**/*.h"])
)
""",
)
To answer the "if its not present" part of the question, I'm not aware of a straight forward way to accomplish automatically switching between something that's locally installed and downloading something. http_archive will always download the archive, and new_local_repository will always use something local.
There's the --override_repository flag, which replaces a repository with a local one, e.g. --override_repository=graphviz=/usr/local/Cellar/graphviz/2.49.1 would effectively replace the http_archive with a local_repository pointing to that path. However, bazel would then expect there to be a WORKSPACE file and BUILD file already at that location (i.e., there's no way to specify build_file_content)
You could specify both repository rules in the WORKSPACE file, and then use some indirection, a Starlark flag, and a select() to switch between the repositories using a command-line flag. It's a little involved though, and also not automatic. Something like this:
WORKSPACE:
http_archive(
    name = "graphviz-download",
    ...,
)

new_local_repository(
    name = "graphviz-installed",
    ...,
)

load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    name = "bazel_skylib",
    urls = [
        "https://github.com/bazelbuild/bazel-skylib/releases/download/1.1.1/bazel-skylib-1.1.1.tar.gz",
        "https://mirror.bazel.build/github.com/bazelbuild/bazel-skylib/releases/download/1.1.1/bazel-skylib-1.1.1.tar.gz",
    ],
    sha256 = "c6966ec828da198c5d9adbaa94c05e3a1c7f21bd012a0b29ba8ddbccb2c93b0d",
)

load("@bazel_skylib//:workspace.bzl", "bazel_skylib_workspace")

bazel_skylib_workspace()
BUILD (e.g. in //third_party/graphviz):
load("#bazel_skylib//rules:common_settings.bzl", "bool_flag")
bool_flag(
name = "use-installed-graphviz",
build_setting_default = False,
)
config_setting(
name = "installed",
flag_values = {
":use-installed-graphviz": "True",
}
)
alias(
name = "headers",
actual = select({
":installed": "#graphviz-installed//:headers",
"//conditions:default": "#graphviz-download//:headers",
})
)
Then your code depends on //third_party/graphviz:headers, by default the alias will point to the downloaded version, and the flag --//third_party/graphviz:use-installed-graphviz will switch it to the installed version:
$ bazel cquery --output build //third_party/graphviz:headers
alias(
    name = "headers",
    actual = "@graphviz-download//:headers",
)

$ bazel cquery --output build //third_party/graphviz:headers --//third_party/graphviz:use-installed-graphviz
alias(
    name = "headers",
    actual = "@graphviz-installed//:headers",
)
Another option is to write (or find) a custom repository rule that combines the functionality of http_archive and local_repository, but that would probably be a fair bit of work.
Generally I think most people just use an http_archive and download the dependencies, and if there are specific needs for being offline or caching, there's --distdir for using already-downloaded artifacts for remote repository rules: https://docs.bazel.build/versions/main/guide.html#distribution-files-directories
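For example, assuming the archives have already been fetched into a local directory (the path and target below are made up), the build can then run without hitting the network:
$ bazel build //my/package:target --distdir=/opt/bazel-distdir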
Edit: An example of using rules_foreign_cc with graphviz:
WORKSPACE:
load("#bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
http_archive(
name = "rules_foreign_cc",
sha256 = "69023642d5781c68911beda769f91fcbc8ca48711db935a75da7f6536b65047f",
strip_prefix = "rules_foreign_cc-0.6.0",
url = "https://github.com/bazelbuild/rules_foreign_cc/archive/0.6.0.tar.gz",
)
load("#rules_foreign_cc//foreign_cc:repositories.bzl", "rules_foreign_cc_dependencies")
# This sets up some common toolchains for building targets. For more details, please see
# https://bazelbuild.github.io/rules_foreign_cc/0.6.0/flatten.html#rules_foreign_cc_dependencies
rules_foreign_cc_dependencies()
http_archive(
name = "graphviz",
url = "https://gitlab.com/api/v4/projects/4207231/packages/generic/graphviz-releases/2.49.1/graphviz-2.49.1.tar.gz",
strip_prefix = "graphviz-2.49.1",
sha256 = "ba1aa7a209025cb3fc5aca1f2c0114e18ea3ad29c481d75e4d445ad44e0fb0f7",
build_file_content = """\
filegroup(
name = "all_srcs",
srcs = glob(["**"]),
visibility = ["//visibility:public"],
)
""",
)
BUILD:
load("#rules_foreign_cc//foreign_cc:defs.bzl", "configure_make")
# see https://bazelbuild.github.io/rules_foreign_cc/0.6.0/configure_make.html
configure_make(
name = "graphviz",
lib_source = "#graphviz//:all_srcs",
out_shared_libs = ["libcgraph.so"], # or other graphviz libs
)
cc_binary(
name = "foo",
srcs = ["foo.c"],
deps = [":graphviz"],
)
foo.c:
#include "graphviz/cgraph.h"
int main() {
Agraph_t *g;
g = agopen("G", Agdirected, NULL);
agclose(g);
return 0;
}
Usage:
$ bazel build foo
INFO: Analyzed target //:foo (0 packages loaded, 2 targets configured).
INFO: Found 1 target...
Target //:foo up-to-date:
bazel-bin/foo
INFO: Elapsed time: 0.229s, Critical Path: 0.06s
INFO: 7 processes: 5 internal, 2 linux-sandbox.
INFO: Build completed successfully, 7 total actions

Precision gets lost for big numbers in Telegraf

Precision gets lost for a big number.
I am using the tail input plugin to read a file, and the data inside the file is in JSON format.
Below is the configuration:
[[inputs.tail]]
  files = ["E:/Telegraph/MSTCIVRRequestLog_*.json"]
  from_beginning = true
  name_override = "tcivrrequest"
  data_format = "json"
  json_strict = true

[[outputs.file]]
  files = ["E:/Telegraph/output.json"]
  data_format = "json"
Input file contains
{"RequestId":959011990586458245}
Expected Output
{"fields":{"RequestId":959011990586458245},"name":"tcivrrequest","tags":{},"timestamp":1632994599}
Actual Output
{"fields":{"RequestId":959011990586458200},"name":"tcivrrequest","tags":{},"timestamp":1632994599}
The number 959011990586458245 is converted into 959011990586458200 (note the last few digits).
I have already tried the following, but it did not work:
json_string_fields = ["RequestId"]

[[processors.converter]]
  [processors.converter.fields]
    string = ["RequestId"]

precision = "1s"
json_int64_fields = ["RequestId"]
character_encoding = "utf-8"
json_strict = true
I was able to reproduce this with the json parser as well: that parser stores numeric values as float64, which can only represent integers exactly up to 2^53, so IDs this large lose precision. My suggestion would be to move to the json_v2 parser with a config like the following:
[[inputs.file]]
  files = ["metrics.json"]
  data_format = "json_v2"

  [[inputs.file.json_v2]]
    [[inputs.file.json_v2.field]]
      path = "RequestId"
      type = "int"
I was able to get a result as follows:
file RequestId=959011990586458245i 1651181595000000000
The newer parser is generally more accurate and flexible for simple cases like the one you provided.
Thanks!

Use of current toolchain in a Bazel rule

I've used the Bazel rule below to build static libraries with Bazel:
def _cc_static_library_impl(ctx):
    cc_deps = [dep[CcInfo] for dep in ctx.attr.deps]
    libraries = []
    for cc_dep in cc_deps:
        for link_input in cc_dep.linking_context.linker_inputs.to_list():
            for library in link_input.libraries:
                libraries += library.pic_objects
    args = ["r", ctx.outputs.out.path] + [f.path for f in libraries]
    ctx.actions.run(
        inputs = libraries,
        outputs = [ctx.outputs.out],
        executable = "/usr/bin/ar",
        arguments = args,
    )
    return [DefaultInfo()]

cc_static_library = rule(
    implementation = _cc_static_library_impl,
    attrs = {
        "deps": attr.label_list(providers = [CcInfo]),
    },
    outputs = {"out": "lib%{name}.a"},
)
How can I extract the command to use from the current toolchain instead of using the hardcoded /usr/bin/ar? I've based the rule on what I've found on the internet and I have very limited knowledge about this. This example seems to do something related:
https://github.com/bazelbuild/rules_cc/blob/main/examples/my_c_archive/my_c_archive.bzl
This is the relevant part of my_c_archive:
archiver_path = cc_common.get_tool_for_action(
    feature_configuration = feature_configuration,
    action_name = CPP_LINK_STATIC_LIBRARY_ACTION_NAME,
)
That gets you the path, and then you need to add cc_toolchain.all_files to your ctx.actions.run call, so it ends up looking like this:
ctx.actions.run(
    inputs = depset(
        direct = libraries,
        transitive = [
            cc_toolchain.all_files,
        ],
    ),
    outputs = [ctx.outputs.out],
    executable = archiver_path,
    arguments = args,
)
However, you'll also notice that my_c_archive builds up a command line and environment variables to call the archiver with. A simple toolchain won't have anything to pass in either of those, but a more complex one may not work correctly without adding them (for example, passing -m32 or setting PATH).
Part of the Starlark implementation of cc_import, in _create_archive_action in cc_import.bzl, is a good place to start for handling all the intricacies. It creates an action to produce a static library from a set of object files, with the flexibility to work with many toolchains.
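As a rough sketch of how those pieces fit together (following the my_c_archive example; the load paths assume rules_cc is available, and the trailing step is elided):
load("@rules_cc//cc:action_names.bzl", "CPP_LINK_STATIC_LIBRARY_ACTION_NAME")
load("@rules_cc//cc:find_cc_toolchain.bzl", "find_cc_toolchain")

def _cc_static_library_impl(ctx):
    cc_toolchain = find_cc_toolchain(ctx)
    feature_configuration = cc_common.configure_features(
        ctx = ctx,
        cc_toolchain = cc_toolchain,
        requested_features = ctx.features,
        unsupported_features = ctx.disabled_features,
    )

    # Tool, flags, and environment for the static-archiving action all come
    # from the configured toolchain instead of a hardcoded /usr/bin/ar.
    archiver_path = cc_common.get_tool_for_action(
        feature_configuration = feature_configuration,
        action_name = CPP_LINK_STATIC_LIBRARY_ACTION_NAME,
    )
    archiver_variables = cc_common.create_link_variables(
        feature_configuration = feature_configuration,
        cc_toolchain = cc_toolchain,
        output_file = ctx.outputs.out.path,
        is_using_linker = False,
    )
    command_line = cc_common.get_memory_inefficient_command_line(
        feature_configuration = feature_configuration,
        action_name = CPP_LINK_STATIC_LIBRARY_ACTION_NAME,
        variables = archiver_variables,
    )
    env = cc_common.get_environment_variables(
        feature_configuration = feature_configuration,
        action_name = CPP_LINK_STATIC_LIBRARY_ACTION_NAME,
        variables = archiver_variables,
    )
    # ctx.actions.run would then use archiver_path, command_line plus the
    # object file paths, and env; see my_c_archive.bzl for the full wiring,
    # including the rule's fragments, toolchains, and _cc_toolchain attribute.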

Bazel docker_image copy extra folder

I am currently using Bazel for my Go app.
container_image(
    name = "my-golang-app",
    base = "@ubuntu_base//image",
    cmd = ["/bin/my-golang-app"],
    directory = "/bin/",
    files = [":my-golang-app"],
    tags = [
        "manual",
        VERSION,
    ],
    visibility = ["//visibility:public"],
)
Besides the my-golang-app folder, I need to copy, while building this image, another folder named my-new-folder and everything in it.
How does one do that with Bazel? I can't seem to find the solution in the Bazel docs.
dockerfile {
    run 'mkdir -p /usr/local/bin'
    // add the lines below
    add {
        from 'docker/my-new-folder/'
        into '/'
    }
}
Use pkg_tar like the container_image docs reference. Something like this:
pkg_tar(
    name = "my-files",
    srcs = glob(["my-new-folder/**"]),
    strip_prefix = ".",
)

container_image(
    name = "my-golang-app",
    files = [":my-golang-app"],
    <everything else you already have>,
    tars = [":my-files"],
)
pkg_tar has various options to control where your files end up, if you want something beyond just taking an entire directory. For more complicated arrangements, I find multiple pkg_tar rules for various directories linked together via deps a helpful pattern.
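For instance, a sketch of that layered pattern (the second directory and its destination are made up):
pkg_tar(
    name = "config-files",
    srcs = glob(["config/**"]),  # hypothetical second directory
    package_dir = "/etc/my-golang-app",
)

# Link the per-directory tars together; ":app-files" then goes in container_image's tars.
pkg_tar(
    name = "app-files",
    deps = [
        ":my-files",
        ":config-files",
    ],
)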

Terraform creating lambda before zip is ready

I'm trying to create a Terraform module that will build my JS lambdas, zip them, and deploy them. This, however, proves to be problematic:
resource "null_resource" "build_lambda" {
count = length(var.lambdas)
provisioner "local-exec" {
command = "mkdir tmp"
working_dir = path.root
}
provisioner "local-exec" {
command = var.lambdas[count.index].code.build_command
working_dir = var.lambdas[count.index].code.working_dir
}
}
data "archive_file" "lambda_zip" {
count = length(var.lambdas)
type = "zip"
source_dir = var.lambdas[count.index].code.working_dir
output_path = "${path.root}/tmp/${count.index}.zip"
depends_on = [
null_resource.build_lambda
]
}
/*******************************************************
 * Lambda definition
 *******************************************************/
resource "aws_lambda_function" "lambda" {
  count            = length(var.lambdas)
  filename         = data.archive_file.lambda_zip[count.index].output_path
  source_code_hash = filebase64sha256(data.archive_file.lambda_zip[count.index].output_path)
  function_name    = "${var.application_name}-${var.lambdas[count.index].name}"
  description      = var.lambdas[count.index].description
  handler          = var.lambdas[count.index].handler
  runtime          = var.lambdas[count.index].runtime
  role             = aws_iam_role.iam_for_lambda.arn
  memory_size      = var.lambdas[count.index].memory_size

  depends_on = [aws_iam_role_policy_attachment.lambda_logs, aws_cloudwatch_log_group.log_group, data.archive_file.lambda_zip]
}
The property source_code_hash = filebase64sha256(data.archive_file.lambda_zip[count.index].output_path), although technically not obligatory, is necessary; otherwise the existing lambda will never get overridden, as Terraform will think it is still the same version of the lambda and will skip the deployment altogether. Unfortunately, it looks like the filebase64sha256 function is evaluated before the creation of any resource. This means that there is no zip for the hash calculation, and so I get the error:
Error: Error in function call
on modules\api-gateway-lambda\main.tf line 35, in resource "aws_lambda_function" "lambda":
35: source_code_hash = filebase64sha256(data.archive_file.lambda_zip[count.index].output_path)
|----------------
| count.index is 0
| data.archive_file.lambda_zip is tuple with 1 element
Call to function "filebase64sha256" failed: no file exists at tmp\0.zip.
If I manually place a zip in the right location, I can see that the whole thing starts working and the zip eventually gets overridden by a new one, but the hash in this case must come from the previous zip.
What is the right way to execute the whole thing in the right order?
The archive_file data source has its own output_base64sha256 attribute which can give you that same result without asking Terraform to read a file that doesn't exist yet:
source_code_hash = data.archive_file.lambda_zip[count.index].output_base64sha256
The data source will populate this at the same time it creates the file, and because your lambda function depends on the data source it will therefore always be available before the lambda function configuration is evaluated.
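Applied to the resource from the question, only the source_code_hash argument changes; everything else stays as it was:
resource "aws_lambda_function" "lambda" {
  count            = length(var.lambdas)
  filename         = data.archive_file.lambda_zip[count.index].output_path
  source_code_hash = data.archive_file.lambda_zip[count.index].output_base64sha256
  # ... remaining arguments unchanged ...
}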
