Writing new key to configuration file with Viper - go

First easy project with Go here.
Based on user input, I need to add new keys to my existing configuration file.
I've managed to read it correctly with Viper and use it throughout the application, but WriteConfig doesn't seem to work.
Here's a snippet:
oldConfig := viper.AllSettings()
fmt.Printf("All settings #1 %+v\n\n", oldConfig)
viper.Set("setting1", chosenSetting1)
viper.Set("setting2", chosenSetting2)
newConfig := viper.AllSettings()
fmt.Printf("All settings #2 %+v\n\n", newConfig)
err := viper.WriteConfig()
if err != nil {
	log.Fatalln(err)
}
newConfig includes new settings as expected, but WriteConfig doesn't apply changes to the config file.
I've read in the Viper repo that the writing functions are somewhat controversial and a bit buggy in how they treat existing and non-existing files, but I'd expect them to work in simple cases like this.
I also tried other functions (e.g. SafeWriteConfig) with no success.
I'm using Go 1.16.2 and Viper 1.7.1.
What am I doing wrong?

viper.WriteConfig() // writes current config to the predefined path set by 'viper.AddConfigPath()' and 'viper.SetConfigName()'
You first need to specify the path to the config file, or try the method below:
viper.SafeWriteConfigAs("/path/to/my/.config") // will error since it has already been written

Try WriteConfigAs(filename); it lets you name the file to write to.
If WriteConfig returns no error, the changes are probably being written to a different file than the one you expect.
viper.ConfigFileUsed() should return the path used by default.
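For completeness, here is a minimal sketch of the whole flow; the file name, key, and value are placeholder assumptions:
package main

import (
	"log"

	"github.com/spf13/viper"
)

func main() {
	// Point Viper at an existing file; WriteConfig later writes back to this exact path.
	viper.SetConfigFile("config.yaml") // placeholder path
	if err := viper.ReadInConfig(); err != nil {
		log.Fatalln(err)
	}
	viper.Set("setting1", "value1") // placeholder key/value
	// WriteConfig overwrites the file found by ReadInConfig;
	// WriteConfigAs("other.yaml") would write to an explicit path instead.
	if err := viper.WriteConfig(); err != nil {
		log.Fatalln(err)
	}
}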

Related

Configuring OTLP exporter through environment variables

Currently I am trying to configure my OTLP exporter using environment variables. This is supposed to be possible as per the official docs.
In particular, I want to focus on the OTEL_EXPORTER_OTLP_ENDPOINT one, which is allowed for the OTLPtrace exporter. According to the comments in their code, the environment variable takes precedence over any other value set in the code.
I wrote a very basic HTTP application in Go, which is instrumented with OpenTelemetry. When I specify the exporter endpoint explicitly in the code like:
exporter, err := otlptrace.New(
	context.Background(),
	otlptracegrpc.NewClient(
		otlptracegrpc.WithInsecure(),
		otlptracegrpc.WithEndpoint("My Endpoint"),
	),
)
The instrumentation works just fine like that. However, if I remove the otlptracegrpc.NewClient configuration from the code, it does not pick up the values set in the environment, which are set like:
OTEL_EXPORTER_OTLP_ENDPOINT="my endpoint"
So when I run this application in my debugger, I can see that the exporter client has an empty value as the endpoint, yet I can read the variable within my program with:
exporterEndpoint := os.Getenv("OTEL_EXPORTER_OTLP_ENDPOINT")
I take this to mean that the variable is actually present at the time the code executes, which was my main fear.
Why is this? Am I missing something here? Should I populate the environment variable differently (I see there are "options" for the environment variable in the official docs, but no examples)?
From what I see in your code, you're contacting the OTLP endpoint through a gRPC call. In their documentation, they wrote this at line 71:
This option has no effect if WithGRPCConn is used.
This means that you can completely avoid passing this variable at all to the otlptracegrpc.NewClient function. I instantiate a gRPC client with this code and it works:
func newOtlpExporter(ctx context.Context) (trace.SpanExporter, error) {
	// No WithEndpoint option here: the endpoint is resolved from the
	// OTEL_EXPORTER_OTLP_ENDPOINT environment variable, or the default.
	client := otlptracegrpc.NewClient(otlptracegrpc.WithInsecure(), otlptracegrpc.WithDialOption(grpc.WithBlock()))
	exporter, err := otlptrace.New(ctx, client)
	if err != nil {
		return nil, err
	}
	return exporter, nil
}
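A hypothetical invocation, with the endpoint supplied purely through the environment (collector:4317 is a made-up address):
// Assumes OTEL_EXPORTER_OTLP_ENDPOINT is set before the process starts,
// e.g. OTEL_EXPORTER_OTLP_ENDPOINT=collector:4317
exporter, err := newOtlpExporter(context.Background())
if err != nil {
	log.Fatalln(err)
}
defer exporter.Shutdown(context.Background()) // flush and close on exit (error ignored here)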
Back to your question: your guess is right, but only if you're sending metrics, traces, and so on through HTTP(S) calls.
Let me know if this helps to solve the issue or if anything else is needed!
Edit 1
I overlooked this. The comment you linked in the question is taken from the wrong file. The correct line is this: https://github.com/open-telemetry/opentelemetry-go/blob/48a05478e238698e02b4025ac95a11ecd6bcc5ad/exporters/otlp/otlptrace/otlptracegrpc/options.go#L71
As you can see, the comment is clearer and you have only two options:
Provide your own endpoint address
Use the default one, which is localhost:0.0.0.0:4317
Let me know if it helps!

How do I access custom fields in an error?

Objective
Add a command to Dropbox's CLI tool to get the shared link for the given path (file or folder).
The changes are here: github fork.
Background
The dropbox-go-sdk has a function that takes a path, and returns a new shared link, or returns an error containing the existing shared link.
I don't know how to use the error to extract the existing shared link.
Code
on github, and snippet here:
dbx := sharing.New(config)
res, err := dbx.CreateSharedLinkWithSettings(arg)
if err != nil {
	switch e := err.(type) {
	case sharing.CreateSharedLinkWithSettingsAPIError:
		fmt.Printf("%v", e.EndpointError)
	default:
		return err
	}
}
This prints the following:
&{{shared_link_already_exists} <nil> <nil>}found unknown shared link typeError: shared_link_already_exists/...
tracing:
CreateSharedLinkWithSettings --> CreateSharedLinkWithSettingsAPIError --> CreateSharedLinkWithSettingsError --> SharedLinkAlreadyExistsMetadata --> IsSharedLinkMetadata
IsSharedLinkMetadata contains the Url that I'm looking for.
More Info
The API docs point to CreateSharedLinkWithSettings, which should pass back the information in the error including the existing Url.
I struggle to understand how to deal with the error and extract the url from it.
The dbxcli has some code doing a similar operation, but again, I don't understand how it works well enough to apply it to the code I'm working on. Is it a struct? A map? I don't know what this thing is called. There's some weird magic err.(type) stuff happening in the code. How do I access the data?
dbx := sharing.New(config)
res, err := dbx.CreateSharedLinkWithSettings(arg)
if err != nil {
	switch e := err.(type) {
	case sharing.CreateSharedLinkWithSettingsAPIError:
		fmt.Printf("%v", e.EndpointError)
		// Within this case, e already has the concrete error type
		// (the switch did the cast), so access the field you want directly.
		fmt.Println(e.EndpointError.SharedLinkAlreadyExists.Metadata.Url)
	default:
		return err
	}
}
The question was answered in the comments by @jimb. The answer is that you access the fields like any other Go data structure; nothing special.
The errors I got when trying to access the fields occurred because the fields were not there.
The problem with the code was a dependency issue: the code depends on an older version of the go-sdk, while I was referencing the latest version.
This question serves as a good explanation for how real golang programmers handle errors in their code with examples. I wasn't able to find this online, so I won't close the question.
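For readers who land here: the same field-access pattern is often written with errors.As, shown below with a custom error type invented purely for illustration:
package main

import (
	"errors"
	"fmt"
)

// QuotaError is a made-up error type carrying an extra field.
type QuotaError struct {
	Limit int
}

func (e QuotaError) Error() string { return "quota exceeded" }

func handle(err error) {
	var qe QuotaError
	// errors.As finds the concrete type even through wrapped errors.
	if errors.As(err, &qe) {
		fmt.Println("limit was:", qe.Limit)
	}
}

func main() {
	handle(fmt.Errorf("request failed: %w", QuotaError{Limit: 10}))
}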

Using client-go to `kubectl apply` against the Kubernetes API directly with multiple types in a single YAML file

I'm using https://github.com/kubernetes/client-go and all works well.
I have a manifest (YAML) for the official Kubernetes Dashboard: https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml
I want to mimic kubectl apply of this manifest in Go code, using client-go.
I understand that I need to do some (un)marshalling of the YAML bytes into the correct API types defined in package: https://github.com/kubernetes/api
I have successfully Created single API types in my cluster, but how do I do this for a manifest that contains a list of types that are not the same? Is there a resource kind: List* that supports these different types?
My current workaround is to split the YAML file using csplit with --- as the delimiter
csplit /path/to/recommended.yaml /---/ '{*}' --prefix='dashboard.' --suffix-format='%03d.yaml'
Next, I loop over the new (14) parts that were created, read their bytes, switch on the type of the object returned by the UniversalDeserializer's decoder and call the correct API methods using my k8s clientset.
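For illustration, the decode-and-dispatch step looks roughly like this; a sketch only, where the helper name, the handled kinds, and the recent client-go Create signatures are my assumptions:
import (
	"context"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
)

// decodeAndCreate decodes one single-document YAML chunk and dispatches
// on its concrete type (hypothetical helper).
func decodeAndCreate(clientset kubernetes.Interface, chunk []byte) error {
	obj, gvk, err := scheme.Codecs.UniversalDeserializer().Decode(chunk, nil, nil)
	if err != nil {
		return err
	}
	switch o := obj.(type) {
	case *appsv1.Deployment:
		_, err = clientset.AppsV1().Deployments(o.Namespace).Create(context.TODO(), o, metav1.CreateOptions{})
	case *corev1.Service:
		_, err = clientset.CoreV1().Services(o.Namespace).Create(context.TODO(), o, metav1.CreateOptions{})
	default:
		return fmt.Errorf("unhandled kind %s", gvk.Kind)
	}
	return err
}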
I would like to do this programmatically to apply updates of any new versions of the dashboard to my cluster. I will also need to do this for the Metrics Server and many other resources. The alternative (maybe simpler) method is to ship kubectl in the container image and directly call kubectl apply -f -; but that means I also need to write the kube config to disk, or maybe pass it inline, so that kubectl can use it.
I found this issue to be helpful: https://github.com/kubernetes/client-go/issues/193
The decoder lives here: https://github.com/kubernetes/apimachinery/tree/master/pkg/runtime/serializer
It's exposed in client-go here: https://github.com/kubernetes/client-go/blob/master/kubernetes/scheme/register.go#L69
I've also taken a look at the RunConvert method that is used by kubectl: https://github.com/kubernetes/kubernetes/blob/master/pkg/kubectl/cmd/convert/convert.go#L139 and assume that I can provide my own genericclioptions.IOStreams to get the output?
It looks like RunConvert is on a deprecation path
I've also looked at other questions tagged [client-go] but most use old examples or use a YAML file with a single kind defined, and the API has changed since.
Edit: Because I need to do this for more than one cluster and am creating clusters programmatically (AWS EKS API + CloudFormation/eksctl), I would like to minimize the overhead of creating ServiceAccounts across many cluster contexts, across many AWS accounts. Ideally, the only authentication step involved in creating my clientset is using aws-iam-authenticator to get a token using cluster data (name, region, CA cert, etc.). There hasn't been a release of aws-iam-authenticator for a while, but the contents of master allow for a cross-account role and external ID to be passed. IMO, this is cleaner than using a ServiceAccount (and IRSA) because there are other AWS services the application (the backend API which creates and applies add-ons to these clusters) needs to interact with.
Edit: I have recently found https://github.com/ericchiang/k8s. It's definitely simpler to use than client-go, at a high-level, but doesn't support this behavior.
It sounds like you've figured out how to deserialize YAML files into Kubernetes runtime.Objects, but the problem is dynamically deploying a runtime.Object without writing special code for each Kind.
kubectl achieves this by interacting with the REST API directly. Specifically, via resource.Helper.
In my code, I have something like:
import (
	meta "k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/cli-runtime/pkg/resource"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/restmapper"
)
func createObject(kubeClientset kubernetes.Interface, restConfig rest.Config, obj runtime.Object) error {
	// Create a REST mapper that tracks information about the available resources in the cluster.
	groupResources, err := restmapper.GetAPIGroupResources(kubeClientset.Discovery())
	if err != nil {
		return err
	}
	rm := restmapper.NewDiscoveryRESTMapper(groupResources)

	// Get some metadata needed to make the REST request.
	gvk := obj.GetObjectKind().GroupVersionKind()
	gk := schema.GroupKind{Group: gvk.Group, Kind: gvk.Kind}
	mapping, err := rm.RESTMapping(gk, gvk.Version)
	if err != nil {
		return err
	}

	name, err := meta.NewAccessor().Name(obj)
	if err != nil {
		return err
	}
	_ = name // not needed for Create, but handy for logging or update calls

	// Create a client specifically for creating the object.
	restClient, err := newRestClient(restConfig, mapping.GroupVersionKind.GroupVersion())
	if err != nil {
		return err
	}

	// Use the REST helper to create the object in the "default" namespace.
	restHelper := resource.NewHelper(restClient, mapping)
	_, err = restHelper.Create("default", false, obj, &metav1.CreateOptions{})
	return err
}
func newRestClient(restConfig rest.Config, gv schema.GroupVersion) (rest.Interface, error) {
	restConfig.ContentConfig = resource.UnstructuredPlusDefaultContentConfig()
	restConfig.GroupVersion = &gv
	if len(gv.Group) == 0 {
		restConfig.APIPath = "/api"
	} else {
		restConfig.APIPath = "/apis"
	}
	return rest.RESTClientFor(&restConfig)
}
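Hypothetical wiring back to the decode step from the question; each decoded object goes straight to createObject, with no per-kind switch needed (chunk, clientset, and restConfig are assumed to exist in the surrounding code):
// For each single-document chunk of the manifest:
obj, _, err := scheme.Codecs.UniversalDeserializer().Decode(chunk, nil, nil)
if err != nil {
	return err
}
if err := createObject(clientset, *restConfig, obj); err != nil {
	return err
}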
I was able to get this working in one of my projects. I had to use much of the source code from kubectl's apply command to get it working correctly.
https://github.com/billiford/go-clouddriver/blob/master/pkg/kubernetes/client.go#L63

Use package file to write to Cloud Storage?

Golang provides the file package to access Cloud Storage.
The package's Create function requires the io.WriteCloser interface. However, I have not found a single sample or documentation showing how to actually save a file to Cloud Storage.
Can anybody help? Is there a higher level implementation of io.WriteCloser that would allow us to store files in Cloud Storage? Any sample code?
We've obviously tried to Google it ourselves but found nothing and now hope for the community to help.
It's perhaps true that the behavior is not well defined in the documentation.
If you check the code: https://code.google.com/p/appengine-go/source/browse/appengine/file/write.go#133
In each call to Write, the data is sent to the cloud (line 139). So you don't need to save. (You should close the file when you're done, though.)
Anyway, I'm confused by your wording: "The package's Create function requires the io.WriteCloser interface." That's not true. The package's Create function returns an io.WriteCloser, that is, a thingy you can write to and close.
yourFile, _, err := file.Create(ctx, "filename", nil) // Create from the appengine file package
// Check err != nil here.
defer func() {
	err := yourFile.Close()
	// Check err != nil here.
}()
yourFile.Write([]byte("This will be sent to the file immediately."))
fmt.Fprintln(yourFile, "This too.")
io.Copy(yourFile, someReader)
This is how interfaces work in Go. They just provide you with a set of methods you can call, hiding the actual implementation from you; and, when you depend on an interface instead of a particular implementation, you can combine implementations in multiple ways, as fmt.Fprintln and io.Copy do.

How to embed file for later parsing execution use

I am essentially trying to walk through a folder of HTML files. I want to embed them into the binary and be able to parse them upon request for template execution purposes. (Please excuse me if I'm not wording this properly.)
Any ideas, tips, tricks or better way of accomplishing this is much appreciated.
// Template files, keyed by path.
type TempFiles struct {
	Files map[string]string
}

var temps = TempFiles{Files: make(map[string]string)} // the map must be initialized before use

// Loop through view files and load them.
func LoadTempFiles() {
	filepath.Walk("application/views", func(path string, info os.FileInfo, err error) error {
		if !info.IsDir() {
			content, _ := ioutil.ReadFile(path)
			temps.Files[path] = string(content)
		}
		return nil
	})
}

func ViewTemp(w http.ResponseWriter, path string) {
	temp, err := template.New(path).Parse(temps.Files[path]) // template.New requires a name
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
	} else {
		temp.Execute(w, nil)
	}
}
I do this with most of my Go web apps. I use go-bindata to auto-generate Go source code from all the files I want to embed and then compile them into the binary.
All this is done automatically during build.
One downside is that the current go build tools do not offer a way to hook into the build process, so I use a Makefile for this purpose. When the makefile is invoked, it runs go-bindata to generate the sources for all necessary files, then usually performs some additional code generation bits and bobs (notably, creating a Go source file which lists all the embedded files in a map; a table of contents, if you will). It then proceeds to compile the actual program.
This can become a little messy, but you only have to set it all up once.
Another downside, is that the use of a Makefile means the software is not compatible with the go get command. But since most of my web apps are not meant to be shared anyway, this has not been a problem so far.
When it comes to debugging/developing such an application, there is another issue that arises from embedding the static web content: I can't just edit an HTML or CSS file and refresh the browser to see its effects. I would have to stop the server, rebuild it and restart it with every edit. This is obviously not ideal, so I split the Makefile up into a debug and a release mode. The release mode does what I described above. The debug mode, however, will not actually embed the static files. It does generate source files for each of them, but instead of having them contain the actual file data, it contains a stub which simply loads the data from the filesystem.
As far as the server code is concerned, there is no difference in the generated code. All it does is call a function to fetch the contents of a given static file. It does not care whether that content is actually embedded in the binary, or if it's loaded from an external source. So the two build modes are freely interchangeable.
For example, the same generated function to fetch static file content in release and debug mode would look as follows:
Release mode:
func index_html() []byte {
	return []byte{
		....
	}
}
Debug mode:
func index_html() []byte {
	data, err := ioutil.ReadFile("index.html")
	...
	return data
}
The interface in both cases is identical. This allows for easy and care-free development and debugging.
Another recent good tool to consider is esc: Embedding Static Assets in Go (GitHub repo),
a program that:
can take some directories and recursively embed all files in them in a way that is compatible with http.FileSystem
can optionally be disabled for use with the local file system for local development
will not change the output file on subsequent runs
has reasonable-sized diffs when files changed
is vendoring-friendly
Vendoring-friendly means that when I run godep or party, the static embed file will not change.
This means it must not have any third-party imports (since their import path will be rewritten during goimports, and thus different than what the tool itself produces), or a specifiable location for the needed third-party imports.
It generates nice, gzipped strings, one per file.
There is a simple flag to enable local development mode, which is smart enough to not strip directory prefixes off of filenames (an option in esc that is sometimes needed).
The output includes all needed code, and does not depend on any third-party libraries for compatibility with http.FileSystem.
I made a package that makes switching between debug and production easier. It also provides an http.FileSystem implementation, making it easy to serve the files. And it has several ways of adding the files to the binary (generate Go code, or append as zip).
https://github.com/GeertJohan/go.rice
Go now has built-in support for this:
package main

import (
	"embed"
	"os"
)

//go:embed *.html
var content embed.FS

func main() {
	b, e := content.ReadFile("index.html")
	if e != nil {
		panic(e)
	}
	os.Stdout.Write(b)
}
https://golang.org/pkg/embed
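And for the template use case in the question, html/template can parse straight from the embedded filesystem (Go 1.16+). A minimal sketch, assuming the views live under application/views:
package main

import (
	"embed"
	"html/template"
	"net/http"
)

//go:embed application/views/*.html
var views embed.FS

// Parsed once at startup; templates are addressed by base filename.
var temps = template.Must(template.ParseFS(views, "application/views/*.html"))

func ViewTemp(w http.ResponseWriter, name string) {
	if err := temps.ExecuteTemplate(w, name, nil); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
	}
}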
