Import protobuf file from GitHub repository - go

I currently have two protobuf repos: api and timestamp:
timestamp Repo:
- README.md
- timestamp.proto
- timestamp.pb.go
- go.mod
- go.sum
api Repo:
- README.md
- protos/
  - dto1.proto
  - dto2.proto
Currently, timestamp contains a reference to a timestamp object that I want to use in api, but I'm not sure how the import should work or how I should modify the compilation process to handle this. Complicating this process is the fact that the api repo is compiled to a separate, downstream repo for Go called api-go.
For example, consider dto1.proto:
syntax = "proto3";
package api.data;
import "<WHAT GOES HERE?>";
option go_package = "github.com/my-user/api/data"; // golang
message DTO1 {
  string id = 1;
  Timestamp timestamp = 2;
}
And my compilation command is this:
find $GEN_PROTO_DIR -type f -name "*.proto" -exec protoc \
  --go_out=$GEN_OUT_DIR --go_opt=module=github.com/my-user/api-go \
  --go-grpc_out=$GEN_OUT_DIR --go-grpc_opt=module=github.com/my-user/api-go \
  --grpc-gateway_out=$GEN_OUT_DIR --grpc-gateway_opt logtostderr=true \
  --grpc-gateway_opt paths=source_relative \
  --grpc-gateway_opt generate_unbound_methods=true \
  {} \;
Assuming I have a definition in timestamp for each of the programming languages I want to compile api into, how would I import this into the .proto file and what should I do to ensure that the import doesn't break in my downstream repo?

There is no native notion of remote import paths with protobuf. So the import path has to be relative to some indicated local filesystem base path (specified via -I / --proto_path).
Option 1
Generally it is easiest to just have a single repository with protobuf definitions for your organisation - e.g. a repository named acme-contract
.
└── protos
    └── acme
        ├── api
        │   └── data
        │       ├── dto1.proto
        │       └── dto2.proto
        └── timestamp
            └── timestamp.proto
Your dto1.proto will look something like:
syntax = "proto3";
package acme.api.data;
import "acme/timestamp/timestamp.proto";
message DTO1 {
  string id = 1;
  acme.timestamp.Timestamp timestamp = 2;
}
As long as you generate code relative to the protos/ dir of this repository, there shouldn't be an issue.
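For illustration, here is a minimal consumer sketch. The module name github.com/acme/acme-contract-go is hypothetical (the snippet above omits go_package), but it shows the payoff of Option 1: the DTO and the timestamp types always come from the same module, so their imports can't drift apart.

// Hypothetical consumer of the monorepo's generated code; the module and
// package paths are illustrative, assuming generation relative to protos/.
package main

import (
	"fmt"

	data "github.com/acme/acme-contract-go/acme/api/data"
	timestamp "github.com/acme/acme-contract-go/acme/timestamp"
)

func main() {
	dto := &data.DTO1{
		Id:        "dto-1",
		Timestamp: &timestamp.Timestamp{Unix: 1665000000},
	}
	fmt.Println(dto.GetId(), dto.GetTimestamp().GetUnix())
}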
Option 2
There are various alternatives whereby you continue to have definitions split over various repositories, but you can't really escape the fact that imports are filesystem relative.
Historically that could be handled by manually cloning the various repositories and arranging directories such that the paths are relative, or by using -I to point to various locations that might intentionally or incidentally contain the proto files (e.g. in $GOPATH). Those strategies tend to end up being fairly messy and difficult to maintain.
buf makes things somewhat easier now. If you were to have your timestamp repo:
.
├── buf.gen.yaml
├── buf.work.yaml
├── gen
│   └── acme
│       └── timestamp
│           └── timestamp.pb.go
├── go.mod
├── go.sum
└── protos
    ├── acme
    │   └── timestamp
    │       └── timestamp.proto
    ├── buf.lock
    └── buf.yaml
timestamp.proto looking like:
syntax = "proto3";
package acme.timestamp;
option go_package = "github.com/your-user/timestamp/gen/acme/timestamp";
message Timestamp {
  int64 unix = 1;
}
buf.gen.yaml looking like:
version: v1
plugins:
  - name: go
    out: gen
    opt: paths=source_relative
  - name: go-grpc
    out: gen
    opt:
      - paths=source_relative
      - require_unimplemented_servers=false
  - name: grpc-gateway
    out: gen
    opt:
      - paths=source_relative
      - generate_unbound_methods=true
... and everything under gen/ has been generated via buf generate.
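As a quick sanity check, the generated package can already be consumed like any other Go module (a sketch; the getter comes from the standard protoc-gen-go output for the message above):

// Hypothetical consumer of the timestamp module on its own.
package main

import (
	"fmt"

	timestamp "github.com/your-user/timestamp/gen/acme/timestamp"
)

func main() {
	ts := &timestamp.Timestamp{Unix: 1665000000}
	fmt.Println(ts.GetUnix())
}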
Then in your api repository:
.
├── buf.gen.yaml
├── buf.work.yaml
├── gen
│   └── acme
│       └── api
│           └── data
│               ├── dto1.pb.go
│               └── dto2.pb.go
└── protos
    ├── acme
    │   └── api
    │       └── data
    │           ├── dto1.proto
    │           └── dto2.proto
    ├── buf.lock
    └── buf.yaml
With buf.yaml looking like:
version: v1
name: buf.build/your-user/api
deps:
  - buf.build/your-user/timestamp
breaking:
  use:
    - FILE
lint:
  use:
    - DEFAULT
dto1.proto looking like:
syntax = "proto3";
package acme.api.data;
import "acme/timestamp/timestamp.proto";
option go_package = "github.com/your-user/api/gen/acme/api/data";
message DTO1 {
  string id = 1;
  acme.timestamp.Timestamp timestamp = 2;
}
and buf.gen.yaml the same as in the timestamp repo.
The code generated via buf generate will depend on the timestamp repository via Go modules:
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.28.1
// protoc (unknown)
// source: acme/api/data/dto1.proto
package data
import (
	timestamp "github.com/your-user/timestamp/gen/acme/timestamp"
	protoreflect "google.golang.org/protobuf/reflect/protoreflect"
	protoimpl "google.golang.org/protobuf/runtime/protoimpl"
	reflect "reflect"
	sync "sync"
)
// <snip>
Note that if changes are made to dependencies you'll need to ensure that both buf and Go modules are kept relatively in sync.
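To make that coupling concrete, a downstream consumer of the api module ends up importing types from both Go modules (a sketch, assuming the go_package values above):

// Hypothetical consumer: DTO1 comes from the api module, its Timestamp field
// from the separate timestamp module, so go.mod must track both.
package main

import (
	"fmt"

	data "github.com/your-user/api/gen/acme/api/data"
	timestamp "github.com/your-user/timestamp/gen/acme/timestamp"
)

func main() {
	dto := &data.DTO1{
		Id:        "dto-1",
		Timestamp: &timestamp.Timestamp{Unix: 1665000000},
	}
	fmt.Println(dto.GetId(), dto.GetTimestamp().GetUnix())
}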
Option 3
If you prefer not to leverage Go modules for importing generated pb code, you could also look to have a similar setup to Option 2, but instead generate all code into a separate repository (similar to what you're doing now, by the sounds of it). This is most easily achieved by using buf managed mode, which essentially makes go_package directives unnecessary and ignores any that are present.
In api-go:
.
├── buf.gen.yaml
├── go.mod
└── go.sum
With buf.gen.yaml containing:
version: v1
managed:
  enabled: true
  go_package_prefix:
    default: github.com/your-user/api-go/gen
plugins:
  - name: go
    out: gen
    opt: paths=source_relative
  - name: go-grpc
    out: gen
    opt:
      - paths=source_relative
      - require_unimplemented_servers=false
  - name: grpc-gateway
    out: gen
    opt:
      - paths=source_relative
      - generate_unbound_methods=true
You'd then need to generate code for each respective repo (pushed to the BSR):
$ buf generate buf.build/your-user/api
$ buf generate buf.build/your-user/timestamp
After which you should have some generated code for both:
.
├── buf.gen.yaml
├── gen
│   └── acme
│       ├── api
│       │   └── data
│       │       ├── dto1.pb.go
│       │       └── dto2.pb.go
│       └── timestamp
│           └── timestamp.pb.go
├── go.mod
└── go.sum
And the imports will be relative to the current module:
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.28.1
// protoc (unknown)
// source: acme/api/data/dto1.proto
package data
import (
	timestamp "github.com/your-user/api-go/gen/acme/timestamp"
	protoreflect "google.golang.org/protobuf/reflect/protoreflect"
	protoimpl "google.golang.org/protobuf/runtime/protoimpl"
	reflect "reflect"
	sync "sync"
)
// <snip>
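Under this option the consumer sketch from Option 2 collapses to a single dependency (again hypothetical, assuming the managed-mode prefix above):

// Both generated packages now live in the one api-go module.
package main

import (
	"fmt"

	data "github.com/your-user/api-go/gen/acme/api/data"
	timestamp "github.com/your-user/api-go/gen/acme/timestamp"
)

func main() {
	dto := &data.DTO1{Timestamp: &timestamp.Timestamp{Unix: 1665000000}}
	fmt.Println(dto.GetTimestamp().GetUnix())
}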
All in all, I'd recommend Option 1 - consolidating your protobuf definitions into a single repository (including vendoring 3rd party definitions) - unless there is a particularly strong reason not to.

Related

Package files into specific folder of application bundle when deploying to AWS Lambda via Serverless Framework

Context
I am using the aws-node-typescript example of the Serverless Framework. My goal is to integrate Prisma into it.
So far, I have:
Created the project locally using serverless create
Set up a PostgreSQL database on Railway
Installed prisma, ran prisma init, created a basic User model and ran prisma migrate dev successfully
Created a second users function by copying the existing hello function
Deployed the function using serverless deploy
Now in my function, when I instantiate PrismaClient, I get an internal server error and the function logs this error: "ENOENT: no such file or directory, open '/var/task/src/functions/users/schema.prisma'"
My project structure looks as follows:
.
├── README.md
├── package-lock.json
├── package.json
├── prisma
│   ├── migrations
│   │   ├── 20221006113352_init
│   │   │   └── migration.sql
│   │   └── migration_lock.toml
│   └── schema.prisma
├── serverless.ts
├── src
│   ├── functions
│   │   ├── hello
│   │   │   ├── handler.ts
│   │   │   ├── index.ts
│   │   │   ├── mock.json
│   │   │   └── schema.ts
│   │   ├── index.ts
│   │   └── users
│   │       ├── handler.ts
│   │       └── index.ts
│   └── libs
│       ├── api-gateway.ts
│       ├── handler-resolver.ts
│       └── lambda.ts
├── tsconfig.json
└── tsconfig.paths.json
Also, here's the handler for the users function:
import { formatJSONResponse } from '@libs/api-gateway';
import { middyfy } from '@libs/lambda';
import { PrismaClient } from '@prisma/client';

const users = async (event) => {
  console.log(`Instantiating PrismaClient inside handler ...`);
  const prisma = new PrismaClient();
  return formatJSONResponse({
    message: `Hello, ${event.queryStringParameters.name || 'there'} welcome to the exciting Serverless world!`,
    event,
  });
};

export const main = middyfy(users);
The problem arises because in order to instantiate PrismaClient, the schema.prisma file needs to be part of the application bundle. Specifically, it needs to be in /var/task/src/functions/users/ as indicated by the error message.
I already adjusted the package.patterns option in my serverless.ts file to look as follows:
package: { individually: true, patterns: ["**/*.prisma"] },
This way, the bundle that's uploaded to AWS Lambda includes the prisma directory in its root. Here's the .serverless folder after I ran sls package (I've unzipped users.zip here so that you can see its contents):
.
├── cloudformation-template-create-stack.json
├── cloudformation-template-update-stack.json
├── hello.zip
├── serverless-state.json
├── users
│   ├── prisma
│   │   └── schema.prisma
│   └── src
│       └── functions
│           └── users
│               ├── handler.js
│               └── handler.js.map
└── users.zip
I can also confirm that the deployed version of my AWS Lambda has the same folder structure.
Question
How can I move the users/prisma/schema.prisma file into users/src/functions/users using the patterns in my serverless.ts file?
I found a (pretty ugly) solution. If anyone can think of a more elegant one, I'm still very open to it and happy to give you the points for a correct answer.
Solving the "ENOENT: no such file or directory, open '/var/task/src/functions/users/schema.prisma'" error
To solve this, I just took a very naive approach and manually copied the schema.prisma file from the prisma directory into src/functions/users. Here's the file structure I now had:
.
├── README.md
├── package-lock.json
├── package.json
├── prisma
│   ├── migrations
│   │   ├── 20221006113352_init
│   │   │   └── migration.sql
│   │   └── migration_lock.toml
│   └── schema.prisma
├── serverless.ts
├── src
│   ├── functions
│   │   ├── hello
│   │   │   ├── handler.ts
│   │   │   ├── index.ts
│   │   │   ├── mock.json
│   │   │   └── schema.ts
│   │   ├── index.ts
│   │   └── users
│   │       ├── schema.prisma
│   │       ├── handler.ts
│   │       └── index.ts
│   └── libs
│       ├── api-gateway.ts
│       ├── handler-resolver.ts
│       └── lambda.ts
├── tsconfig.json
└── tsconfig.paths.json
This is obviously a horrible way to solve this, because I now have two Prisma schema files in different locations and have to make sure I always update the one in src/functions/users/schema.prisma after changing the original one in prisma/schema.prisma to keep them in sync.
Once I copied this file and redeployed, the schema.prisma file was in place in the right location in the AWS Lambda and the error went away and PrismaClient could be instantiated.
I then added a simple Prisma Client query into the handler:
const users = async (event) => {
  console.log(`Instantiating PrismaClient inside handler ...`);
  const prisma = new PrismaClient();
  const userCount = await prisma.user.count();
  console.log(`There are ${userCount} users in the database`);
  return formatJSONResponse({
    message: `Hello, ${event.queryStringParameters.name || 'there'} welcome to the exciting Serverless world!`,
    event,
  });
};

export const main = middyfy(users);
... and encountered a new error, this time about the query engine:
Invalid `prisma.user.count()` invocation:
Query engine library for current platform \"rhel-openssl-1.0.x\" could not be found.
You incorrectly pinned it to rhel-openssl-1.0.x
This probably happens, because you built Prisma Client on a different platform.
(Prisma Client looked in \"/var/task/src/functions/users/libquery_engine-rhel-openssl-1.0.x.so.node\")
Searched Locations:
/var/task/.prisma/client
/Users/nikolasburk/prisma/talks/2022/serverless-conf-berlin/aws-node-typescript/node_modules/@prisma/client
/var/task/src/functions
/var/task/src/functions/users
/var/task/prisma
/tmp/prisma-engines
/var/task/src/functions/users
To solve this problem, add the platform \"rhel-openssl-1.0.x\" to the \"binaryTargets\" attribute in the \"generator\" block in the \"schema.prisma\" file:
generator client {
  provider = \"prisma-client-js\"
  binaryTargets = [\"native\"]
}
Then run \"prisma generate\" for your changes to take effect.
Read more about deploying Prisma Client: https://pris.ly/d/client-generator
Solving the `Query engine library for current platform "rhel-openssl-1.0.x" could not be found` error
I'm familiar enough with Prisma to know that Prisma Client depends on a query engine binary that has to be built specifically for the platform Prisma Client will be running on. This can be configured via the binaryTargets field on the generator block in my Prisma schema. The target for AWS Lambda is rhel-openssl-1.0.x.
So I adjusted the schema.prisma file (in both locations) accordingly:
generator client {
  provider      = "prisma-client-js"
  binaryTargets = ["native", "rhel-openssl-1.0.x"]
}
After that, I ran npx prisma generate to update the generated Prisma Client in node_modules.
However, this didn't resolve the error yet; the problem remained that Prisma Client couldn't find the query engine binary.
So I followed the same approach as for the schema.prisma file when it was missing:
I manually copied it into src/functions/users (this time from its location inside node_modules/.prisma/client/libquery_engine-rhel-openssl-1.0.x.so.node)
I added the new path to the package.patterns property in my serverless.ts:
package: {
  individually: true,
  patterns: ["**/*.prisma", "**/libquery_engine-rhel-openssl-1.0.x.so.node"],
},
After I redeployed and tested the function, another error occurred:
Invalid `prisma.user.count()` invocation:
error: Environment variable not found: DATABASE_URL.
  -->  schema.prisma:11
   |
10 |   provider = \"postgresql\"
11 |   url      = env(\"DATABASE_URL\")
   |
Validation Error Count: 1
Solving the `Environment variable not found: DATABASE_URL.` error
This time, it was pretty straightforward: I went into the AWS Console at https://us-east-1.console.aws.amazon.com/lambda/home?region=us-east-1#/functions/aws-node-typescript-dev-users?tab=configure and added a DATABASE_URL env var via the Console UI, pointing to my Postgres instance on Railway.
I usually lurk, but the answer above led me to come up with a reasonably elegant solution I felt I should share for the next poor sap who comes along and tries to integrate Serverless and Prisma in TypeScript (though I bet this solution and process would work in other build systems).
I was using the example aws-nodejs-typescript template, which is plagued by a bug that required me to apply the fix here by patching the local node_modules serverless package.
I then had to essentially walk through @nburk's answer to get myself up and running, which is, as stated, inelegant.
In my travels trying to understand Prisma's behavior, its requirement of a platform-specific binary, and how to fix it, I figured that if I could manually side-load the binary into the build folder post-compile, I could get the serverless bundler to zip it up.
I came across the 'serverless-plugin-scripts' plugin, which allows us to do exactly this via serverless lifecycle hooks.
I put this in my serverless.ts:
plugins: ['serverless-esbuild', 'serverless-plugin-scripts'],
I put the following in package.json:
"scripts": {
  "test": "echo 'Error: no test specified' && exit 1",
  "postbuild": "yarn fix-scrape-scheduler",
  "fix-scrape-scheduler": "cp ../../node_modules/.prisma/client/schema.prisma .esbuild/.build/src/functions/schedule-scrapes/. && cp ../../node_modules/.prisma/client/libquery_engine-rhel-openssl-1.0.x.so.node .esbuild/.build/src/functions/schedule-scrapes/."
},
and this also in my serverless.ts:
scripts: {
  hooks: {
    'before:package:createDeploymentArtifacts': 'yarn run postbuild'
  }
}
This causes the 'serverless-plugin-scripts' plugin to call my post-build yarn script and fix up the .build folder that esbuild creates. I imagine that if your build system (such as webpack) creates the build dir under a different name (such as lib), this process could be modified accordingly.
I will have to create a yarn script to do this for each function that is individually packaged; however, this is dynamic, removes the need to keep multiple copies of schema.prisma in source, and copies the files from the dynamically generated .prisma folder in node_modules.
Note, I am using yarn workspaces here, so the location of your node_modules folder will vary based on your repo setup.
Also, I did run into a "Please make sure your database server is running at ..." error, which was remedied by making sure the proper security groups were whitelisted outbound for the Lambda function and inbound for RDS. Also make sure to check your subnet ACLs and route tables.
We ran into the same issue recently. But our context is slightly different: the path to the schema.prisma file was /var/task/node_modules/.prisma/client/schema.prisma.
We solved this issue by using Serverless Package Configuration.
serverless.yml
service: 'your-service-name'
plugins:
  - serverless-esbuild
provider:
  # ...
package:
  include:
    - 'node_modules/.prisma/client/schema.prisma' # <-------- this line
    - 'node_modules/.prisma/client/libquery_engine-rhel-*'
This way only the src folder containing the lambda functions and the node_modules folder containing these two Prisma files were packaged and uploaded to AWS.
Although the use of serverless.package.include and serverless.package.exclude is deprecated in favor of serverless.package.patterns, this was the only way to get it to work.
An option is to use webpack with the copy-webpack-plugin and change the structure of your application, putting all handlers inside the handlers folder.
Folder structure:
.
├── handlers/
│   ├── hello.ts
│   └── ...
└── services/
    ├── hello.ts
    └── ...
webpack.config.js:
/* eslint-disable @typescript-eslint/no-var-requires */
const path = require("path");
// const nodeExternals = require("webpack-node-externals");
const CopyPlugin = require("copy-webpack-plugin");
const slsw = require("serverless-webpack");

const { isLocal } = slsw.lib.webpack;

module.exports = {
  target: "node",
  stats: "normal",
  entry: slsw.lib.entries,
  // externals: [nodeExternals()],
  mode: isLocal ? "development" : "production",
  optimization: { concatenateModules: false },
  resolve: { extensions: [".js", ".ts"] },
  output: {
    libraryTarget: "commonjs",
    filename: "[name].js",
    path: path.resolve(__dirname, ".webpack"),
  },
  module: {
    rules: [
      {
        test: /\.tsx?$/,
        loader: "ts-loader",
        exclude: /node_modules/,
      },
    ],
  },
  plugins: [
    new CopyPlugin({
      patterns: [
        {
          from: "./prisma/schema.prisma",
          to: "handlers/schema.prisma",
        },
        {
          from: "./node_modules/.prisma/client/libquery_engine-rhel-openssl-1.0.x.so.node",
          to: "handlers/libquery_engine-rhel-openssl-1.0.x.so.node",
        },
      ],
    }),
  ],
};
If you need to run npx prisma generate before assembling the package, you can use the serverless-scriptable-plugin (put it before webpack):
plugins:
  - serverless-scriptable-plugin
  - serverless-webpack

custom:
  scriptable:
    hooks:
      before:package:createDeploymentArtifacts: npx prisma generate
  webpack:
    includeModules: false
Dependencies:
npm install -D webpack serverless-webpack webpack-node-externals copy-webpack-plugin serverless-scriptable-plugin

Golang proto file management and importing

I have 2 gRPC services (service1 and service2) that interact with each other, and in some cases the RPC response of service1 will contain a struct defined in service2. After running into several situations where duplication was inevitable, I figured that as the services grow these will be hard to manage, so I restructured the proto files into something like this for now:
.
├── app
...
├── proto
│   ├── service1
│   │   ├── service1.access.proto
│   │   └── service1.proto
│   ├── service2
│   │   ├── service2.access.proto
│   │   └── service2.proto
│   └── model
│       ├── model.service1.proto
│       └── model.service2.proto
└── proto-gen // the protoc generated files
    ├── service1
    │   ├── service1.access.pb.go
    │   └── service1.pb.go
    ├── service2
    │   ├── service2.access.pb.go
    │   └── service2.pb.go
    └── model
        ├── model.service1.pb.go
        └── model.service2.pb.go
service1 needs to import the model definition in model/model.service2.proto, so I am importing it like this:
import "model/model.service2.proto";
option go_package = "proto-gen/service1";
and I generate the .pb.go files using this protoc command:
ls proto | awk '{print "protoc --proto_path=proto proto/"$1"/*.proto --go_out=plugins=grpc:."}' | sh
The command generates the .pb.go files just fine, but the code in service1.access.pb.go doesn't seem to import the model correctly, and I don't know if it's related or not, but when I run the app it throws this error:
cannot load model: malformed module path "model": missing dot in first path element
I've spent a few hours now googling how to properly import another proto file, but I can't seem to find any solution.
The reason you got that error about model is that the generated files use the go_package of the imported file, and model is not a valid import path. You have to convince the generated file to use the full import path of the package.
This is how I did it for my source tree: I have a similar tree of proto files importing each other. If your module is named, say, github.com/myapp, then run protoc with --proto_path=<directory containing github.com>, import other proto files using the full path, that is github.com/myapp/proto/service1/service1.proto, and in service1.proto define go_package = "service1". This setup writes the import paths correctly in my case.
Before settling on this solution, I was using go_package=<full path to proto>, so you might give that a try as well.
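A sketch of what the fix buys you (all paths here are illustrative, not from the answer): once the imported file's go_package carries a full, dot-containing path, the generated service1 code references it as a normal module import instead of the bare "model" that triggered the error.

// Excerpt-style sketch: the corrected import path in service1.access.pb.go
// (shown as a blank import only to keep this fragment self-contained).
package service1

import (
	_ "github.com/myapp/proto-gen/model" // previously the bare, invalid "model"
)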
Building on Burak Serdar's answer, I want to provide my implementation.
Set the go_package option in the proto file you want to import, similar to this, where the location is your full path. My path is generally github.com/AllenKaplan/[project]/[package]/proto/
option go_package = [path];
In the file you wish to import into, add an import. My path is generally [package]/proto/[package].proto
import "[path from protoc proto path]";
The last part is the protoc command, where you must define the proto path in a way that connects the import path and the option go_package path.
If executing from the github.com/AllenKaplan/[project] directory, I would call:
protoc -I. --go_out=./[package]/proto [package]/proto/[package].proto
-I. is equivalent to --proto_path=.; it sets the proto path to the entire project.
One note: when calling protoc on the .proto files that you are importing, you will want to add paths=source_relative to the output options, which ensures the output is generated from the root with a set package.
My implementation of protoc for the imported package, when called from github.com/AllenKaplan/[project]/[package]:
protoc -I./proto --go_out=paths=source_relative:./proto [package].proto
I was also facing a similar issue while importing. I changed the go_package option in the .proto file to the following:
option go_package = "./;proto-gen/service1";
The first parameter is the relative path where the generated code should go. The path is relative to the --go_out you set in your command.

Error with imports after packaging using setuptools

I am trying to package my project as a library in Python using setuptools. This being my first time, I arranged my project in the following directory structure. Importing the package from its parent directory does not cause any errors and works fine, but when I install it using setuptools and try to import it from a general directory, I get an ImportError for imports of files inside the package.
The directory structure is as follows:
driver_scoring
├── driver_scoring
│   ├── config
│   │   └── config.py
│   ├── processing
│   │   ├── data_management.py
│   │   ├── scoring_components.py
│   │   └── validation.py
│   ├── __init__.py
│   ├── pipeline.py
│   ├── predict.py
│   ├── train_pipeline.py
│   └── VERSION
├── MANIFEST.in
├── requirements.txt
└── setup.py
And in my __init__.py file, I have included the following code:
import os

from driver_scoring.config import config

with open(os.path.join(config.PACKAGE_ROOT, 'VERSION')) as version_file:
    __version__ = version_file.read().strip()
And I am getting the following error:
      1 import os
      2
----> 3 from driver_scoring.config import config
      4
      5 with open(os.path.join(config.PACKAGE_ROOT, 'VERSION')) as version_file:

ModuleNotFoundError: No module named 'driver_scoring.config'
And my setup.py is as follows:
import io
import os
from pathlib import Path

from setuptools import find_packages, setup

NAME = 'driver_scoring'
DESCRIPTION = ''
URL = ''
EMAIL = ''
AUTHOR = 'Nevin Baiju'
REQUIRES_PYTHON = '>=3.6.0'


def list_reqs(fname='requirements.txt'):
    with open(fname) as fd:
        return fd.read().splitlines()


here = os.path.abspath(os.path.dirname(__file__))

try:
    with io.open(os.path.join(here, 'README.md'), encoding='utf-8') as f:
        long_description = '\n' + f.read()
except FileNotFoundError:
    long_description = DESCRIPTION

ROOT_DIR = Path(__file__).resolve().parent
PACKAGE_DIR = ROOT_DIR / NAME

about = {}
with open(PACKAGE_DIR / 'VERSION') as f:
    _version = f.read().strip()
    about['__version__'] = _version

setup(
    name=NAME,
    version=about['__version__'],
    description=DESCRIPTION,
    long_description=long_description,
    long_description_content_type='text/markdown',
    author=AUTHOR,
    author_email=EMAIL,
    python_requires=REQUIRES_PYTHON,
    url=URL,
    packages=find_packages(exclude=('tests',)),
    package_data={'driver_scoring': ['VERSION']},
    install_requires=list_reqs(),
    extras_require={},
    include_package_data=True,
    license='MIT',
    classifiers=[
        'License :: OSI Approved :: MIT License',
        'Programming Language :: Python',
        'Programming Language :: Python :: 3',
        'Programming Language :: Python :: 3.6',
        'Programming Language :: Python :: Implementation :: CPython',
        'Programming Language :: Python :: Implementation :: PyPy',
    ],
)
I am not getting any errors when I try to import it from the project's parent directory and I am able to run the project as desired. However, when I try to install and import the project from the installed directory, I am getting the import error.
I solved the issue myself. I had forgotten to add an __init__.py file in the config folder, and that is what caused the import errors.

How to use "internal" packages?

I am trying to understand how to organize Go code using "internal" packages. Let me show the structure I have:
project/
    internal/
        foo/
            foo.go  # package foo
        bar/
            bar.go  # package bar
    main.go
// here is the code from main.go
package main

import (
	"project/internal/foo"
	"project/internal/bar"
)
project/ is outside the GOPATH tree. Whatever path I try to import from main.go, nothing works; the only case that works is import "./internal/foo|bar". I think I'm doing something wrong, or I've got the "internal" package idea wrong in general. Could anybody make things clearer, please?
UPDATE
The example above is correct; the only thing I needed was to place the project/ folder under $GOPATH/src. So an import path like project/internal/foo|bar works as long as it is imported from within the project/ subtree and not from outside of it.
With the introduction of modules in Go 1.11 and above, you don't have to keep your project under $GOPATH/src.
You need to tell Go where each module is located by creating a go.mod file. Please refer to the go help mod documentation.
Here is an example of how to do it:
project
|   go.mod
|   main.go
|
\---internal
    +---bar
    |       bar.go
    |       go.mod
    |
    \---foo
            foo.go
            go.mod
project/internal/bar/go.mod
module bar

go 1.14
project/internal/bar/bar.go
package bar

import "fmt"

// Bar prints "Hello from Bar"
func Bar() {
	fmt.Println("Hello from Bar")
}
project/internal/foo/go.mod
module foo

go 1.14
project/internal/foo/foo.go
package foo

import "fmt"

// Foo prints "Hello from Foo"
func Foo() {
	fmt.Println("Hello from Foo")
}
project/main.go
package main

import (
	"internal/bar"
	"internal/foo"
)

func main() {
	bar.Bar()
	foo.Foo()
}
Now, the most important module:
project/go.mod
module project

go 1.14

require internal/bar v1.0.0

replace internal/bar => ./internal/bar

require internal/foo v1.0.0

replace internal/foo => ./internal/foo
A couple of things here:
You can have any name in require; you can have project/internal/bar if you wish. Go thinks it is a URL address for the package, so it will try to pull it from the web and give you an error:
go: internal/bar@v1.0.0: malformed module path "internal/bar": missing dot in first path element
That is the reason you need replace, where you tell Go where to find it, and that is the key!
replace internal/bar => ./internal/bar
The version doesn't matter in this case. You can have v0.0.0 and it will work.
Now, when you execute your code you will get:
Hello from Bar
Hello from Foo
Here is GitHub link for this code example
The packages have to be located in your $GOPATH in order to be imported. The example you gave with import "./internal/foo|bar" works because it does a local import. internal only makes it so that code which doesn't share a common root directory with the parent of your internal directory can't import the packages within internal.
If you put all this in your GOPATH and then tried to import from a different location, like OuterFolder/project2/main.go where OuterFolder contains both project and project2, then import "../../project/internal/foo" would fail. It would also fail as import "foo" or any other way you tried, due to not satisfying this condition:
An import of a path containing the element “internal” is disallowed if
the importing code is outside the tree rooted at the parent of the
“internal” directory.
Now if you had the path $GOPATH/src/project then you could do import "project/internal/foo" and import "project/internal/bar" from within $GOPATH/src/project/main.go and the imports would succeed. Things that are not contained underneath project, however, would not be able to import foo or bar.
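A minimal sketch of that rule under the classic GOPATH layout (foo.Hello is a hypothetical exported function, used only for illustration):

// $GOPATH/src/project/main.go — allowed: the importer lives inside the tree
// rooted at the parent of the "internal" directory (project/).
package main

import "project/internal/foo"

func main() {
	foo.Hello() // hypothetical function exported by package foo
}

// By contrast, $GOPATH/src/project2/main.go with the same import fails to build:
//   use of internal package project/internal/foo not allowed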
The layout below is more scalable, especially when you plan to build multiple binaries:
github.com/servi-io/api
├── cmd/
│   ├── servi/
│   │   ├── cmdupdate/
│   │   ├── cmdquery/
│   │   └── main.go
│   └── servid/
│       ├── routes/
│       │   └── handlers/
│       ├── tests/
│       └── main.go
├── internal/
│   ├── attachments/
│   ├── locations/
│   ├── orders/
│   │   ├── customers/
│   │   ├── items/
│   │   ├── tags/
│   │   └── orders.go
│   ├── registrations/
│   └── platform/
│       ├── crypto/
│       ├── mongo/
│       └── json/
The folders inside cmd/ represent the binaries you want to build, each one a main package that wires up the shared internal packages, as sketched below.
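For example (a sketch only; the internal/orders API shown is hypothetical):

// cmd/servid/main.go — each binary under cmd/ is a thin main package.
package main

import (
	"fmt"

	"github.com/servi-io/api/internal/orders" // shared by servi and servid alike
)

func main() {
	fmt.Println(orders.List()) // hypothetical function exported by the orders package
}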
Also, something to check: when you are using your externally imported object types, make sure you are prefixing them with the namespace they're in. As a Golang newbie, I didn't know I had to do that, and was wondering why VS Code just kept removing my import (for not being used) when I saved. It's because I had to prefix the imported object with the namespace name:
Example:
import (
	"myInternalPackageName" // works fine as long as you follow all instructions in this thread
)

// Usage in code:
myInternalPackageName.TheStructName // prefix it, or it won't work.
If you don't put the namespace prefix before the object/struct name, VS Code just removes your import for being unused, and then you still have the error "Can't find TheStructName"... That was very confusing, and I had to do a build without VS Code through the command line to understand it.
The point is: I had to specify the package prefix when actually using the struct from that internal package.
If you don't want to use the qualifier prefix when using imported objects, use:
import . "thePath" // Can use contents without prefixing.
Reference: What does the '.' (dot or period) in a Go import statement do?
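For completeness, a tiny self-contained example of the dot-import form in use (fmt is used purely for illustration):

package main

import . "fmt" // dot import: fmt's exported names join this file's scope

func main() {
	Println("no fmt. prefix needed") // resolves to fmt.Println
}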

Qmake configuration using Buildroot

I’ve tried to add a package to Buildroot that uses Qt and Boost. The package uses qmake to generate a Makefile; this part seems to be working. However, I get an error when I build, saying:
Could not find qmake configuration file qws/linux-arm-g++.
Error processing project file: MsgDisplay.pro
The contents of my package are laid out like this:
DummyPgm
├── main.cpp
├── MsgDisplay.pri
├── MsgDisplay.pro
├── MsgDisplay.pro.user
├── MsgHandler.cpp
├── MsgHandler.h
├── MsgServer.cpp
├── MsgServer.h
├── Tcp
│   ├── TcpAddrPort.cpp
│   ├── TcpAddrPort.h
│   ├── TcpServer.cpp
│   ├── TcpServer.h
│   ├── TcpSocket.cpp
│   └── TcpSocket.h
└── Tools
    ├── Banner.cpp
    ├── Banner.h
    ├── IoExt.h
    ├── SeparateArgumentList.cpp
    ├── SeparateArgumentList.h
    └── SysTypes.h

2 directories, 20 files
I have added a package directory, dummypgm, which contains Config.in and dummypgm.mk files. The contents of the files are:
Config.in:
config BR2_PACKAGE_DUMMYPGM
	bool "dummypgm"
	help
	  Foo Software.

	  http://www.foo.com
http://www.foo.com
dummypgm.mk:
DUMMYPGM_VERSION = 0.1.0
DUMMYPGM_SOURCE = DummyPgm-$(DUMMYPGM_VERSION).tar.gz

define DUMMYPGM_CONFIGURE_CMDS
	(cd $(@D); $(QT_QMAKE) MsgDisplay.pro)
endef

define DUMMYPGM_BUILD_CMDS
	$(MAKE) -C $(@D)
endef

$(eval $(generic-package))
Since the package is hosted locally, I’ve simply put the DummyPgm-0.1.0.tar.gz in the dl directory.
I’ve also added the following to package/Config.in:
source "package/dummypgm/Config.in"
I’m a little lost as to why this doesn’t work, if anyone could help me I would be very grateful. Also, is there any way to call $(eval $(qmake-package)) or something?
Are you using Qt4 or Qt5? Your package/dummypgm/Config.in should have a depends on for one of them, and your dummypgm.mk should have DUMMYPGM_DEPENDENCIES = qt or DUMMYPGM_DEPENDENCIES = qt5base.
My intuition is that you are using Qt5. In this case, you shouldn't call $(QT_QMAKE), but $(QT5_QMAKE).
Have a look at http://git.buildroot.net/buildroot/tree/package/qextserialport/qextserialport.mk for an example. Note that this example supports both Qt4 and Qt5, probably in your case you only need one of the two.
Also, you should really subscribe to the Buildroot mailing list, you would get a lot more answers than here.
