AsyncAPI: Only generate payload - go

Is it possible to skip generation of specific files using asyncapi-generator?
I am using the Go generator, but I only need payloads.go. Right now it always generates all files:
handlers.go, payloads.go, publishers.go, router.go, server.go, subscribers.go
The command I am using is:
$ docker run --rm -it \
-v ${PWD}/asyncapi.yaml:/app/asyncapi.yml \
-v ${PWD}/output:/app/output \
asyncapi/generator -o /app/output /app/asyncapi.yml @asyncapi/go-watermill-template --force-write

You cannot selectively generate only some of the files yet. I encourage you to join the related discussion on GitHub.
From what I understand, you are only interested in generating the models. In that case you could use the Modelina tool directly, which is what the go-watermill-template uses under the hood.
Modelina is already integrated with the AsyncAPI CLI, so you can run asyncapi generate models golang asyncapi.yml
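For example, with the AsyncAPI CLI installed, something along these lines should do it (a rough sketch; the output flag may differ between CLI versions, so check asyncapi generate models --help):
$ npm install -g @asyncapi/cli
$ asyncapi generate models golang ./asyncapi.yaml -o ./output
That gives you only the Go model/payload structs, without handlers.go, publishers.go, router.go, server.go, or subscribers.go.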

Related

The import line openapiclient "github.com/GIT_USER_ID/GIT_REPO_ID" added by openapi-generator prompts a login

I am using OpenAPI Generator to generate my REST API client. It generates the line
openapiclient "github.com/GIT_USER_ID/GIT_REPO_ID"
in my imports, but I can't for the life of me understand why. Running go mod vendor prompts me to sign in while this line is in place. What is it trying to import? I'm on an enterprise GitHub, which complicates things. The example README says to add this line but provides no explanation of what it does: https://github.com/OpenAPITools/openapi-generator/blob/master/samples/openapi3/client/petstore/go/go-petstore/README.md#:~:text=%22github.com/GIT_USER_ID/GIT_REPO_ID%22
You can pass those to OpenAPI Generator as parameters:
openapi-generator-cli generate \
-i openapi.yaml \
-g go \
-p packageName=mypackage \
-o /src \
--git-repo-id my-go-lib/v1 --git-user-id user1
That will result in
openapiclient "github.com/user1/my-go-lib/v1"

ZAP (Docker) API scan against GraphQL: specifying queries to include or exclude

Thank you in advance for your time on this.
Is there a way to tell the ZAP API scan (run via docker run -i owasp/zap2docker-stable zap-api-scan.py) which queries and/or mutations from my GraphQL schema to hit during the scan and which to exclude, or do I need to set up my schema file to only include what I want scanned?
My problem is that the schema I am trying to scan is massive. I only want to scan about 15 mutations out of roughly 200...
Something like:
docker run -i owasp/zap2docker-stable zap-api-scan.py \
-t https://mytarget.com \
-f graphql \
-schema schema-file.graphql \
--include-mutations file-with-list-of-mutations-to-include
The packaged scans are quite flexible, and do allow you to specify exactly which scan rules to run and which 'strength' to use for each rule.
However, there are limits to what you can easily achieve, so you might want to look at the Automation Framework, which is much more flexible.
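For the packaged scan itself, rule tuning goes through a rule-configuration file rather than the command line. A rough sketch of that workflow, assuming the usual -g/-c options of the packaged scan scripts (check zap-api-scan.py -h in your image version):
# write a default rule-configuration file into the mounted work dir
docker run --rm -v $(pwd):/zap/wrk/:rw -t owasp/zap2docker-stable zap-api-scan.py \
-t https://mytarget.com -f graphql -g rules.conf
# edit rules.conf (one line per rule: "<rule id> IGNORE|WARN|FAIL"), then re-run with it
docker run --rm -v $(pwd):/zap/wrk/:rw -t owasp/zap2docker-stable zap-api-scan.py \
-t https://mytarget.com -f graphql -c rules.conf
Picking individual queries or mutations is not something that file covers, though, which is why trimming the schema or moving to the Automation Framework is the more practical route.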

How to set variables using terragrunt before_hook

I need to use some gcloud commands in order to create a Redis instance on GCP, as Terraform does not support some options that I need.
I'm trying this:
terraform {
# Before apply, run script.
before_hook "create_redis_script" {
commands = ["apply"]
execute = ["REDIS_REGION=${local.module_vars.redis_region}", "REDIS_PROJECT=${local.module_vars.redis_project}", "REDIS_VPC=${local.module_vars.redis_vpc}", "REDIS_PREFIX_LENGHT=${local.module_vars.redis_prefix_lenght}", "REDIS_RESERVED_RANGE_NAME=${local.module_vars.redis_reserved_range_name}", "REDIS_RANGE_DESCRIPTION=${local.module_vars.redis_range_description}", "REDIS_NAME=${local.module_vars.redis_name}", "REDIS_SIZE=${local.module_vars.redis_size}", "REDIS_ZONE=${local.module_vars.redis_zone}", "REDIS_ALT_ZONE=${local.module_vars.redis_alt_zone}", "REDIS_VERSION=${local.module_vars.redis_version}", "bash", "../../../scripts/create-redis-instance.sh"]
}
The script is like this:
echo "[+]Creating IP Allocation Automatically using <$REDIS_VPC-network\/$REDIS_PREFIX_LENGHT>"
gcloud compute addresses create $REDIS_RESERVED_RANGE_NAME \
--global \
--purpose=VPC_PEERING \
--prefix-length=$REDIS_PREFIX_LENGHT \
--description=$REDIS_RANGE_DESCRIPTION \
--network=$REDIS_VPC
The error I get is:
terragrunt apply
5b35d0bf15d0a0d61b303ed32556b85417e2317f
5b35d0bf15d0a0d61b303ed32556b85417e2317f
5b35d0bf15d0a0d61b303ed32556b85417e2317f
ERRO[0002] Hit multiple errors:
Hit multiple errors:
exec: "REDIS_REGION=us-east1": executable file not found in $PATH
ERRO[0002] Unable to determine underlying exit code, so Terragrunt will exit with error code 1
I encountered the same issue and resigned myself to passing the values as script arguments instead of environment variables. The hook's execute list is run directly rather than through a shell, so a leading "REDIS_REGION=us-east1" entry is treated as the executable itself, which is exactly what the "executable file not found in $PATH" error is complaining about.
It involves modifying the script and the declaration is less clear, but it works :|
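A minimal sketch of that workaround (the argument order and which variables are passed are my own choices; adapt them to your module variables):
#!/usr/bin/env bash
# create-redis-instance.sh -- values now arrive as positional arguments instead of env vars
REDIS_RESERVED_RANGE_NAME="$1"
REDIS_PREFIX_LENGTH="$2"
REDIS_RANGE_DESCRIPTION="$3"
REDIS_VPC="$4"

echo "[+] Creating IP allocation automatically using <${REDIS_VPC}-network/${REDIS_PREFIX_LENGTH}>"
gcloud compute addresses create "$REDIS_RESERVED_RANGE_NAME" \
--global \
--purpose=VPC_PEERING \
--prefix-length="$REDIS_PREFIX_LENGTH" \
--description="$REDIS_RANGE_DESCRIPTION" \
--network="$REDIS_VPC"
The before_hook then starts with the real executable, e.g. execute = ["bash", "../../../scripts/create-redis-instance.sh", local.module_vars.redis_reserved_range_name, local.module_vars.redis_prefix_lenght, ...], so nothing in front of "bash" gets mistaken for a binary.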

Run ansible-lint through subdirectories within a GitLab-hosted Ansible role

I am trying to add a validation step to a GitLab repo holding a single Ansible role (with no playbook).
The structure of the role looks like:
.gitlab-ci.yml
tasks/
templates/
files/
vars/
handlers/
With the .gitlab-ci.yml looking like:
stages:
  - lint

job-lint:
  image:
    name: cytopia/ansible-lint:latest
    entrypoint: ["/bin/sh", "-c"]
  stage: lint
  script:
    - ansible-lint --version
    - ansible-lint . -x 106 tasks/*.yml
I need to skip the naming rule, hence ignoring rule 106.
Beyond that, I would like all files in the repo root to be checked. Since there is no playbook, lint has to be given the files that need to be checked... or at least, that is what I understood; I may have this point wrong. In any case, if I give it no name, lint returns OK but actually performs no check.
My problem is that I don't know how to tell it to check all the YAML files recursively, or even within a subdirectory. The above code returns an error:
ansible-lint: error: unrecognized arguments: tasks/deploy.yml tasks/localhost.yml tasks/main.yml tasks/managedata.yml tasks/psqlconf.yml
Any idea on how to check all the files from a subdirectory or through the whole role?
PS: I am using the cytopia image for ansible-lint, but I have no problem using another one, provided it's hosted on Docker Hub.
You should certainly be able to pass multiple YAML files as arguments to ansible-lint. I have version 4.1.1a0, and I'm able to use it like this, for example:
ansible-lint -x 106 roles/*/tasks/*.yml
I notice that you seem to have placed a . before your -x 106; that looks like an error. It doesn't look like ansible-lint will accept a directory name as an argument (it doesn't cause it to fail; it just doesn't accomplish anything).
I've tried this both with a locally installed ansible-lint and using the cytopia/ansible-lint image, which appears to perform identically:
docker run --rm -v $PWD:/src -w /src cytopia/ansible-lint -x 106 roles/*/tasks/*.yml
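Applied to your .gitlab-ci.yml, where the role sits at the repo root rather than under roles/, the script step would then be something like this sketch:
ansible-lint --version
ansible-lint -x 106 tasks/*.yml handlers/*.yml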
If you want to check all the YAML files, you can use find and pipe the results to xargs, something like this:
find ./ -not -name ".gitlab-ci.yml" -name "*.yml" | xargs ansible-lint -x 106
However, ansible-lint -x 106 ./ should work; are you sure that your role really has errors? I've tested it both on ansible-galaxy init-generated roles (with meta and all that stuff) and on roles containing only a tasks directory, and it worked every time.
EDIT: I tried creating an error in an existing role by replacing "present" with "latest" in a package install task:
$ ansible-galaxy install geerlingguy.nfs
$ cd ~/.ansible/roles/geerlingguy.nfs
$ sed -i "s/present/latest/g" tasks/setup-RedHat.yml
$ ansible-lint ./
Examining tasks/main.yml of type tasks
Examining tasks/setup-Debian.yml of type tasks
Examining tasks/setup-RedHat.yml of type tasks
Examining handlers/main.yml of type handlers
Examining meta/main.yml of type meta
[403] Package installs should not use latest
tasks/setup-RedHat.yml:2
Task/Handler: Ensure NFS utilities are installed.
and it actually worked, so you may want to run with verbose output to check whether the lint actually does anything; maybe the rules applied to individual YAML files differ from those applied to whole roles.
When I ran my find-based check I also got a lot of extra "[204] Lines should be no longer than 160 chars" warnings.

Apache Thrift 0.9.1: generating Go code, the -r param doesn't work

I use
`thrift-0.9.1 -r -gen go aaa.thrift`
to generate Go code
(note: aaa.thrift includes bbb.thrift, which defines the "Body" struct).
The -r param doesn't seem to work: I can't find the "Body" struct in ttypes.go.
But when I use
`thrift-0.9.1 -r -gen java aaa.thrift`
I do get a "Body.java".
How can I generate Go code that also covers the included files?
(note: Thrift is from https://github.com/apache/thrift)
I found the reason: the `namespace go service.demo` declaration led to the problem.
$ cd thrift
$ cd trunk
$ cd tutorial
$ thrift -r -gen go tutorial.thrift
works perfectly for me.
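To see where the included definitions end up, a quick check along these lines helps (a sketch; the package directory names depend on each file's namespace go declaration):
# -r resolves includes recursively; each thrift file's Go namespace becomes its own package
$ thrift -r -gen go aaa.thrift
# Body from bbb.thrift is generated into the package declared in bbb.thrift,
# e.g. gen-go/<bbb namespace path>/ttypes.go, not into aaa's ttypes.go
$ ls gen-go/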
