While I grasped the general idea, I'm having trouble seeing the best practice for managing configuration environments and the DB connection.
Meaning:
If I have a repository (for PostgreSQL, for example), should I pass the DB configuration to the NewRepository function? Won't that somehow adversely affect the architecture's principles (maintainability, testability, etc.)?
How do we handle things like defer db.Close()?
I mean, we'd obviously want it to defer in relation to the scope of the main function, so it's problematic to move that code into the Repository "class" (unless there's a way to do that with Context?).
On the other hand, calling NewRepository in main scope but then having the db handle the connection outside of it feels kind of strange.
Most of the examples I've found used the main function, so it was easy. The question is how you do that correctly when employing DDD (clean/hexagonal) architecture, especially so that all the pieces stay "pluggable" without having to change the code around them.
Here is an example I put together. Does it violate any principles of the DDD pattern, or is this actually how these things are done?
1. Shouldn't I handle the defer db.Close() inside the repository itself? Maybe with Context I can defer it in relation to the main function's scope, but from inside the repository itself?
2. Should I really pass the config into NewRepository?
pkg/main.go:
func main() {
    // get configuration structs via .env file
    configuration, err := config.NewConfig()
    if err != nil {
        panic(err)
    }
    postgresRepo, err := postgres.NewRepository(configuration.Database)
    if err != nil {
        panic(err)
    }
    defer postgresRepo.DB.Close()
    myService := autocomplete.NewService(postgresRepo)
    handler := rest.NewHandler(myService)
    ...
    ...
    ...
}
pkg/config/config.go:
// Config is a struct that contains configuration variables
type Config struct {
    Environment string
    Port        string
    Database    *Database
}

// Database is a struct that contains DB's configuration variables
type Database struct {
    Host     string
    Port     string
    User     string
    DB       string
    Password string
}

// NewConfig creates a new Config struct
func NewConfig() (*Config, error) {
    env.CheckDotEnv()
    port := env.MustGet("PORT")
    // set default PORT if missing
    if port == "" {
        port = "3000"
    }
    return &Config{
        Environment: env.MustGet("ENV"),
        Port:        port,
        Database: &Database{
            Host:     env.MustGet("DATABASE_HOST"),
            Port:     env.MustGet("DATABASE_PORT"),
            User:     env.MustGet("DATABASE_USER"),
            DB:       env.MustGet("DATABASE_DB"),
            Password: env.MustGet("DATABASE_PASSWORD"),
        },
    }, nil
}
Instead of passing the database configuration into your repository, try passing the database connection. For example:
func main() {
    db, err := sql.Open("postgres", "...")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    repo := postgres.NewAutocompleteRepo(db)
    svc := autocomplete.NewService(repo)
    handler := autocomplete.NewHTTPHandler(svc)
    // ... register handler and start the HTTP server ...
}
This will leave the responsibility of connecting to the database outside of the repository and allow for easier testing.
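To make that concrete, here is a minimal sketch of the repository side, reusing the names from the example above; the table and query are assumptions for illustration:
package postgres

import (
    "context"
    "database/sql"
)

// AutocompleteRepo wraps an already-open *sql.DB. The caller (main)
// owns the connection's lifecycle, including Close().
type AutocompleteRepo struct {
    db *sql.DB
}

func NewAutocompleteRepo(db *sql.DB) *AutocompleteRepo {
    return &AutocompleteRepo{db: db}
}

// Suggestions is an illustrative query method (table name assumed).
func (r *AutocompleteRepo) Suggestions(ctx context.Context, prefix string) ([]string, error) {
    rows, err := r.db.QueryContext(ctx,
        "SELECT term FROM suggestions WHERE term LIKE $1 || '%'", prefix)
    if err != nil {
        return nil, err
    }
    defer rows.Close()
    var terms []string
    for rows.Next() {
        var t string
        if err := rows.Scan(&t); err != nil {
            return nil, err
        }
        terms = append(terms, t)
    }
    return terms, rows.Err()
}
In tests, the same constructor can receive a *sql.DB backed by a mock driver or a throwaway database, so nothing about configuration leaks into the repository.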
I have built a schema in OpenAPI format:
type Test_manifest struct {
    metav1.TypeMeta
    metav1.ObjectMeta
    Spec spec
}

type spec struct {
    Policies  []string
    Resources resources
    Results   []results
    Variables variables
}
This is not the complete schema, just part of it. And here is the actual YAML file:
apiVersion: cli.kyverno.io/v1beta1
kind: kyvernotest
metadata:
  name: test-check-nvidia-gpus
  labels:
    foolabel: foovalue
  annotations:
    fookey: foovalue
I'm trying to validate this incoming YAML file from the user. I can convert the YAML to JSON and then validate the values of the fields, but I don't see how to validate the field itself: if the user writes name1 rather than name, how do I show an error on it? Basically, how do I validate the key?
Here's what I've implemented for value validation:
test := "cmd/cli/kubectl-kyverno/test/test.yaml"
yamlFile, err := ioutil.ReadFile(test)
if err != nil {
    fmt.Printf("Error: failed to read file %v", err)
}
policyBytes, err1 := yaml.ToJSON(yamlFile)
if err1 != nil {
    fmt.Printf("failed to convert to JSON")
}
tests := &kyvernov1.Test_manifest{}
if err := json.Unmarshal(policyBytes, tests); err != nil {
    fmt.Printf("failed to decode yaml")
}
if tests.TypeMeta.APIVersion == "" {
    fmt.Printf("skipping file as tests.TypeMeta.APIVersion not found")
}
if tests.TypeMeta.Kind == "" {
    fmt.Printf("skipping file as tests.TypeMeta.Kind not found")
} else if tests.TypeMeta.Kind != "KyvernoTest" {
    fmt.Printf("skipping file as tests.TypeMeta.Kind is not `KyvernoTest`")
}
Also, we want this validation to happen outside the cluster.
Two things come to my mind:
I notice that you are trying to build a Kubernetes API extension manually, which is a lot of rework; I suggest you use a framework that handles this for you. This is the recommended best practice and is used very frequently, as doing it manually is too complicated. Here are some resources: Kubebuilder and Operator SDK. These solutions are OpenAPI-based as well; they let you define your schema in a simple template and generate all of the validation, API, and controller code for you.
If you want more validation and sanity testing, this is typically done with the help of an admission controller in your cluster. It intercepts the incoming request and performs actions on it before it is processed by the API server (for verification, compliance, policy enforcement, authentication, etc.). You can read more about admission controllers here.
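That said, if the immediate goal is just to reject unknown keys outside the cluster, a strict unmarshal into your manifest struct may already be enough. A minimal sketch, assuming the sigs.k8s.io/yaml package:
import (
    "fmt"
    "os"

    "sigs.k8s.io/yaml"
)

func validateManifest(path string, out interface{}) error {
    data, err := os.ReadFile(path)
    if err != nil {
        return fmt.Errorf("failed to read file: %w", err)
    }
    // UnmarshalStrict fails on any key in the YAML that has no
    // corresponding field in the target struct, so a typo such as
    // `name1` instead of `name` is reported as an error.
    if err := yaml.UnmarshalStrict(data, out); err != nil {
        return fmt.Errorf("invalid manifest: %w", err)
    }
    return nil
}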
I'm using the following code, which works as expected: from the CLI I run gcloud auth application-default login and enter my credentials, and I was able to run the code successfully from my MacBook.
Now I need to run this code in my CI, and we need to use a different approach. What should the approach be for getting the client_secret and client_id, or a service account / some ENV variable? What is the way to do it via Go code?
import "google.golang.org/api/compute/v1"

project := "my-project"
region := "my-region"
ctx := context.Background()

c, err := google.DefaultClient(ctx, compute.CloudPlatformScope)
if err != nil {
    log.Fatal(err)
}

computeService, err := compute.New(c)
if err != nil {
    log.Fatal(err)
}

req := computeService.Routers.List(project, region)
if err := req.Pages(ctx, func(page *compute.RouterList) error {
    for _, router := range page.Items {
        // process each `router` resource:
        fmt.Printf("%#v\n", router)
        // NAT Gateways are found in router.nats
    }
    return nil
}); err != nil {
    log.Fatal(err)
}
Since you're using Jenkins, you probably want to start with how to create a service account. It guides you through creating a service account and exporting a key to be set as a variable in another CI/CD system.
Then refer to the docs from the client library on how to create a new client with a source credential.
e.g.
client, err := storage.NewClient(ctx, option.WithCredentialsFile("path/to/keyfile.json"))
If you provide no source, it will attempt to read the credentials locally and act as the service account running the operation (not applicable in your use case).
Many CIs support exporting specific environment variables, or your script/config can do it too.
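For example, here is a minimal sketch that builds the compute client from a key exported by the CI; the GCP_SA_KEY variable name is hypothetical:
import (
    "context"
    "log"
    "os"

    compute "google.golang.org/api/compute/v1"
    "google.golang.org/api/option"
)

func newComputeService(ctx context.Context) *compute.Service {
    // The CI injects the service-account key JSON as an environment variable.
    keyJSON := os.Getenv("GCP_SA_KEY")
    svc, err := compute.NewService(ctx, option.WithCredentialsJSON([]byte(keyJSON)))
    if err != nil {
        log.Fatal(err)
    }
    return svc
}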
But if you want to run this in a CI, why do you need such configuration? Integration tests?
Some services can be used locally for unit/smoke testing. With Pub/Sub, for instance, there is a way to run a fake/local Pub/Sub server to perform some tests, as sketched below.
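A minimal sketch of that approach, assuming the cloud.google.com/go/pubsub/pstest package:
import (
    "context"
    "log"

    "cloud.google.com/go/pubsub"
    "cloud.google.com/go/pubsub/pstest"
    "google.golang.org/api/option"
    "google.golang.org/grpc"
)

func newFakePubSubClient(ctx context.Context) (*pubsub.Client, func()) {
    // Start an in-process fake Pub/Sub server and connect a client to it.
    srv := pstest.NewServer()
    conn, err := grpc.Dial(srv.Addr, grpc.WithInsecure())
    if err != nil {
        log.Fatal(err)
    }
    client, err := pubsub.NewClient(ctx, "test-project", option.WithGRPCConn(conn))
    if err != nil {
        log.Fatal(err)
    }
    cleanup := func() {
        client.Close()
        conn.Close()
        srv.Close()
    }
    return client, cleanup
}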
Or perhaps I did not understand your question; in that case, can you provide an example?
I'd like to create signed URLs to Google Cloud Storage resources from an app deployed on Cloud Run.
I set up Cloud Run with a custom service account with the GCS role, following this guide.
My intent was to use V4 signing to create signed URLs from Cloud Run. There is a guide for this use case in which a service_account.json file is used to generate the JWT config. This works for me on localhost when I download the file from Google's IAM. I'd like to avoid having this file committed in the repository and instead use the one that I provided in the Cloud Run UI.
I was hoping that Cloud Run injects this service account file into the app container and makes it accessible in the GOOGLE_APPLICATION_CREDENTIALS variable, but that's not the case.
Do you have a recommendation on how to do this? Thank you.
As you say, the Go Storage client libraries require a service account JSON file to sign URLs.
There is currently a feature request open on GitHub for this, but you should be able to work around it with this sample that I found here:
import (
    "context"
    "fmt"
    "time"

    credentials "cloud.google.com/go/iam/credentials/apiv1"
    "cloud.google.com/go/storage"
    credentialspb "google.golang.org/genproto/googleapis/iam/credentials/v1"
)

const (
    bucketName     = "bucket-name"
    objectName     = "object"
    serviceAccount = "[PROJECTNUMBER]-compute@developer.gserviceaccount.com"
)

func main() {
    ctx := context.Background()
    c, err := credentials.NewIamCredentialsClient(ctx)
    if err != nil {
        panic(err)
    }
    opts := &storage.SignedURLOptions{
        Method:         "GET",
        GoogleAccessID: serviceAccount,
        SignBytes: func(b []byte) ([]byte, error) {
            // The IAM Credentials API expects the account in resource-name form.
            req := &credentialspb.SignBlobRequest{
                Payload: b,
                Name:    "projects/-/serviceAccounts/" + serviceAccount,
            }
            resp, err := c.SignBlob(ctx, req)
            if err != nil {
                panic(err)
            }
            return resp.SignedBlob, err
        },
        Expires: time.Now().Add(15 * time.Minute),
    }
    u, err := storage.SignedURL(bucketName, objectName, opts)
    if err != nil {
        panic(err)
    }
    fmt.Printf("\"%v\"", u)
}
Cloud Run (and other compute platforms) does not inject a service account key file. Instead, it makes access tokens available on the instance metadata service. You can then exchange this access token for a JWT.
However, Google's client libraries and gcloud often work on GCP's compute platforms without explicitly needing to authenticate. So if you follow the instructions on the page you linked (gcloud or the code samples), it should work out of the box.
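For reference, a minimal sketch of reading that token directly from the metadata server (the endpoint and header are the documented GCP metadata conventions):
import (
    "io"
    "net/http"
)

// fetchAccessToken reads the default service account's access token from
// the instance metadata server (available on Cloud Run, GCE, etc.).
func fetchAccessToken() (string, error) {
    req, err := http.NewRequest("GET",
        "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token", nil)
    if err != nil {
        return "", err
    }
    req.Header.Set("Metadata-Flavor", "Google")
    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        return "", err
    }
    defer resp.Body.Close()
    // The response body is JSON: {"access_token":"...","expires_in":...,"token_type":"Bearer"}.
    body, err := io.ReadAll(resp.Body)
    return string(body), err
}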
I am trying to upload a custom schema to all users of a company in GSuite. This custom schema contains their GitHub usernames, which I extracted with the GitHub API.
The problem is that after running the code, the custom schema is not added to the account in GSuite.
Relevant code (a connection to GSuite with admin authentication is established, and the map has all user entries; if you still want more code, I can provide it, I'm just trying to keep it simple):
for _, u := range allUsers.Users {
    if u.CustomSchemas != nil {
        log.Printf("%v", string(u.CustomSchemas["User_Names"]))
    } else {
        u.CustomSchemas = map[string]googleapi.RawMessage{}
    }

    nameFromGsuite := u.Name.FullName
    if githubLogin, ok := gitHubAccs[nameFromGsuite]; ok {
        userSchemaForGithub := GithubAcc{GitHub: githubLogin}
        jsonRaw, err := json.Marshal(userSchemaForGithub)
        if err != nil {
            log.Fatalf("Something went wrong logging: %v", err)
        }
        u.CustomSchemas["User_Names"] = jsonRaw
        adminService.Users.Update(u.Id, u)
    } else {
        log.Printf("User not found for %v\n", nameFromGsuite)
    }
}
This is the struct for the JSON encoding:
type GithubAcc struct {
    GitHub string `json:"GitHub"`
}
For anyone stumbling upon this.
Everything in the code snippet is correct. From the way the method is written, I expected that adminService.Users.Update() actually updates the user. Instead, it returns a UserUpdatesCall.
You need to execute that update by calling .Do().
From the API:
Do executes the "directory.users.update" call.
So the solution is to change adminService.Users.Update(u.Id, u)
into adminService.Users.Update(u.Id, u).Do()
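In the context of the question's loop, with the returned error checked:
// Execute the update call and surface any API error.
if _, err := adminService.Users.Update(u.Id, u).Do(); err != nil {
    log.Printf("failed to update user %v: %v", u.Id, err)
}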
In one of my services, which happens to be my load balancer, I am getting the following error when calling a server method in one of my deployed services:
rpc error: code = Unimplemented desc = unknown service
fooService.FooService
I have a few other services set up with gRPC, and they work fine. It just seems to be this one, and I am wondering if that is because it is the load balancer?
func GetResult(w http.ResponseWriter, r *http.Request) {
    conn, errOne := grpc.Dial("redis-gateway:10006", grpc.WithInsecure())
    defer conn.Close()

    rClient := rs.NewRedisGatewayClient(conn)
    // Note: grpc.WithInsecure() is a DialOption, not a CallOption,
    // so it does not belong in the GetData call.
    result, errTwo := rClient.GetData(context.Background(), &rs.KeyRequest{Key: "trump", Value: "trumpVal"})

    fmt.Fprintf(w, "print result: %s \n", result)    // prints nil
    fmt.Fprintf(w, "print error one: %v \n", errOne) // prints nil
    fmt.Fprintf(w, "print error two: %s \n", errTwo) // prints error
}
The error says there is no service called fooService.FooService, which is true because the DNS name of the service I am calling is foo-service. However, it is the exact same setup as my other services that use gRPC and work fine. Also, my proto files are correctly configured, so that is not an issue.
The server I am calling:
func main() {
    lis, err := net.Listen("tcp", ":10006")
    if err != nil {
        log.Fatalf("failed to listen: %v", err)
    }

    grpcServer := grpc.NewServer()
    newServer := &RedisGatewayServer{}
    rs.RegisterRedisGatewayServer(grpcServer, newServer)

    if err := grpcServer.Serve(lis); err != nil {
        log.Fatalf("failed to serve: %v", err)
    }
}
The function I am trying to access from the client:
func (s *RedisGatewayServer) GetData(ctx context.Context, in *rs.KeyRequest) (*rs.KeyRequest, error) {
    return in, nil
}
My Docker and YAML files are also all correct, with the right naming and ports.
I had this exact problem, and it was because of a very simple mistake: I had put the call to the service registration after the server start. The code looked like this:
err = s.Serve(listener)
if err != nil {
    log.Fatalf("[x] serve: %v", err)
}
primepb.RegisterPrimeServiceServer(s, &server{})
Obviously, the registration should have been called before the server was run:
primepb.RegisterPrimeServiceServer(s, &server{})
err = s.Serve(listener)
if err != nil {
    log.Fatalf("[x] serve: %v", err)
}
Thanks to #dolan for the comment; it solved the problem.
Basically, we have to make sure the method value is the same on both the server and the client (you can even copy the method name from the pb.go file generated on the server side):
func (cc *ClientConn) Invoke(ctx context.Context, method string, args, reply interface{}, opts ...CallOption) error {
This Invoke function is used inside all of the client methods generated for your gRPC service, as in the sketch below.
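For illustration, here is a sketch of what a protoc-generated client method typically looks like, with names assumed from the question's RedisGateway service; the method string passed to Invoke must match what the server registered:
func (c *redisGatewayClient) GetData(ctx context.Context, in *KeyRequest, opts ...grpc.CallOption) (*KeyRequest, error) {
    out := new(KeyRequest)
    // "/rs.RedisGateway/GetData" is the fully-qualified method name the
    // server must have registered; a mismatch yields codes.Unimplemented.
    err := c.cc.Invoke(ctx, "/rs.RedisGateway/GetData", in, out, opts...)
    if err != nil {
        return nil, err
    }
    return out, nil
}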
For me, my client was connecting to the wrong server port. My server listens on localhost:50052, while my client was connecting to localhost:50051, and hence I saw this error.
I had the same problem: I was able to perform an RPC call from Postman to the service, and the API gateway was able to connect to the service, but on a method call it gave error code 12, unknown service. The reason was that in my proto files the client was under package X whereas the server was under package Y.
Quite silly, but yes, making the package the same for both protos, under the API gateway and the service, solved my problem.
There are many scenarios in which this can happen. The underlying issue seems to be the same: the gRPC server is reachable but cannot service client requests.
The two scenarios I faced that are not documented in previous answers are:
1. Client and server are not running the same contract version
A breaking change introduced in the contract can cause this error.
In my case, the server was running the older version of the contract while the client was running the latest one.
A breaking change meant that the server could not resolve the service my client was asking for, thus returning the unimplemented error.
2. The client is connecting to the wrong gRPC server
The client reached the incorrect server that doesn't implement the contract.
Consider this scenario if you're running multiple different gRPC services. You might be mistakenly dialing the wrong one.