Set ObjectMeta on Istio resource with go-client

I'm trying to work with Istio from Go, and am using the Kubernetes and Istio go-client code.
The problem I'm having is that I can't specify ObjectMeta or TypeMeta in my Istio ServiceRole object. I can only specify rules, which live inside the spec.
Below you can see what I got working:
import (
    v1alpha1 "istio.io/api/rbac/v1alpha1"
)

func getDefaultServiceRole(app nais.Application) *v1alpha1.ServiceRole {
    return &v1alpha1.ServiceRole{
        Rules: []*v1alpha1.AccessRule{
            {
                Ports: []int32{2},
            },
        },
    }
}
What I would like to do is have this code work:
func getDefaultServiceRole(app *nais.Application) *v1alpha1.ServiceRole {
    return &v1alpha1.ServiceRole{
        TypeMeta: metav1.TypeMeta{
            Kind:       "ServiceRole",
            APIVersion: "v1alpha1",
        },
        ObjectMeta: metav1.ObjectMeta{
            Name:      app.Name,
            Namespace: app.Namespace,
        },
        Spec: v1alpha1.ServiceRole{
            Rules: []*v1alpha1.AccessRule{
                {
                    Ports: []int32{2},
                },
            },
        },
    }
}
Can anyone point me in the right direction?

Ah - this is a pretty painful point: Istio requires the Kubernetes CRD wrapper metadata (primarily the name and namespace fields), but those fields are not part of the API objects themselves, nor are they represented in the protos. (This is changing: the new MCP API for configuring components - which Galley uses - does encode these fields as protobufs, but that doesn't help for your use case.) Instead, you should use the types in istio.io/istio/pilot/pkg/config/kube/crd, which implement the K8s CRD interface.
The easiest way to work with Istio objects in Go is to use Pilot's libraries, particularly the istio.io/istio/pilot/pkg/model and istio.io/istio/pilot/pkg/config/kube/crd packages, along with the model.Config struct. You can either pass around the full model.Config (not great, because Spec has type proto.Message, so you need type assertions to extract the data you care about), or pass around the inner object and wrap it in a model.Config just before you push it. You can use the model.ProtoSchema type to help with conversion to and from YAML and JSON. While Pilot only defines ProtoSchema objects for the networking API, the type is public and you can create them for arbitrary types.
So, using your example code I might try something like:
import (
    v1alpha1 "istio.io/api/rbac/v1alpha1"
    "istio.io/istio/pilot/pkg/model"
)

func getDefaultServiceRole() *v1alpha1.ServiceRole {
    return &v1alpha1.ServiceRole{
        Rules: []*v1alpha1.AccessRule{
            {
                Ports: []int32{2},
            },
        },
    }
}
func toConfig(app *nais.Application, role *v1alpha1.ServiceRole) model.Config {
    return model.Config{
        ConfigMeta: model.ConfigMeta{
            Name:      app.Name,
            Namespace: app.Namespace,
        },
        // The role itself goes in Spec; name and namespace live in ConfigMeta.
        Spec: role,
    }
}

type Client model.ConfigStore

func (c Client) CreateRoleFor(app *nais.Application, role *v1alpha1.ServiceRole) error {
    cfg := toConfig(app, role)
    _, err := c.Create(cfg)
    return err
}
As a more complete example, we built the Istio CloudMap operator in this style. Here's the core of it, which pushes config to K8s with the Pilot libraries, and here's the incantation that creates the instance of model.ConfigStore used to create objects. Finally, I want to call out explicitly what is only implicit in the example: when you call Create on the model.ConfigStore, the store relies on the metadata in the ProtoSchema objects used to create it, so be sure to initialize the store with ProtoSchema objects for all of the types you'll be working with.
You can achieve the same using just the K8s client libraries and the istio.io/istio/pilot/pkg/config/kube/crd package, but I have not done it firsthand and don't have examples handy.

Istio now supports this directly via the istio.io/client-go package:
import (
    istiov1alpha3 "istio.io/api/networking/v1alpha3"
    istiogov1alpha3 "istio.io/client-go/pkg/apis/networking/v1alpha3"
)

virtualService := istiogov1alpha3.VirtualService{
    TypeMeta: metav1.TypeMeta{
        Kind:       "VirtualService",
        APIVersion: "networking.istio.io/v1alpha3",
    },
    ObjectMeta: metav1.ObjectMeta{
        Name: "my-name",
    },
    Spec: istiov1alpha3.VirtualService{},
}
where istiov1alpha3.VirtualService{} is the raw Istio API object that forms the spec.
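To actually create that object in the cluster, you can use the typed clientset from istio.io/client-go. Below is a minimal sketch; the in-cluster config and the "default" namespace are assumptions, and the exact Create signature varies across client-go releases (recent ones take a context and options):
import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/rest"

    istiogov1alpha3 "istio.io/client-go/pkg/apis/networking/v1alpha3"
    versionedclient "istio.io/client-go/pkg/clientset/versioned"
)

func createVirtualService(vs *istiogov1alpha3.VirtualService) error {
    // Assumes the code runs inside the cluster; use clientcmd to build
    // the config from a kubeconfig file when running outside.
    cfg, err := rest.InClusterConfig()
    if err != nil {
        return err
    }
    ic, err := versionedclient.NewForConfig(cfg)
    if err != nil {
        return err
    }
    // Create the VirtualService in the "default" namespace.
    _, err = ic.NetworkingV1alpha3().VirtualServices("default").Create(
        context.TODO(), vs, metav1.CreateOptions{})
    return err
}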

Related

How to show a warning/error when running 'terraform plan'?

I'm building a Terraform plugin/provider (link) which will help users manage their cloud resources, e.g. cloud instances, Kubernetes clusters, etc., on a cloud platform.
The cloud platform at this moment does not support changing a Kubernetes cluster's node size after it is created. If a user wants to change the node size, they need to create a new node pool with the new node size.
So I'm adding this block in my plugin code, specifically in the Kubernetes cluster update method (link):
if d.HasChange("target_nodes_size") {
    errMsg := []string{
        "[ERR] Unable to update 'target_nodes_size' after creation.",
        "Please create a new node pool with the new node size.",
    }
    return fmt.Errorf(strings.Join(errMsg, " "))
}
The problem is, the error only appears when I run terraform apply. I want it to show when the user runs terraform plan, so they know early that it's not possible to change the node size without creating a new node pool.
How do I make the target_nodes_size field immutable and show the error early in the terraform plan output?
The correct thing to do here is to tell Terraform that changes to the resource cannot be made in place and instead require recreating the resource (normally destroy followed by create, but you can reverse that with lifecycle.create_before_destroy).
When creating a provider you can do this with the ForceNew parameter on a schema's attribute.
As an example, the aws_launch_configuration resource is considered immutable on the AWS API side, so every non-computed attribute in the schema is marked with ForceNew: true:
func resourceAwsLaunchConfiguration() *schema.Resource {
    return &schema.Resource{
        Create: resourceAwsLaunchConfigurationCreate,
        Read:   resourceAwsLaunchConfigurationRead,
        Delete: resourceAwsLaunchConfigurationDelete,
        Importer: &schema.ResourceImporter{
            State: schema.ImportStatePassthrough,
        },
        Schema: map[string]*schema.Schema{
            "arn": {
                Type:     schema.TypeString,
                Computed: true,
            },
            "name": {
                Type:          schema.TypeString,
                Optional:      true,
                Computed:      true,
                ForceNew:      true,
                ConflictsWith: []string{"name_prefix"},
                ValidateFunc:  validation.StringLenBetween(1, 255),
            },
            // ...
If you then attempt to modify any of the ForceNew: true fields, Terraform's plan will show that it needs to replace the resource, and at apply time it will do so automatically as long as the user accepts the plan.
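Applied to your provider, a minimal sketch might look like the following (assuming target_nodes_size is a string attribute; adjust Type and Required to match your actual schema):
"target_nodes_size": {
    Type:     schema.TypeString,
    Required: true,
    // The platform cannot resize nodes in place, so force Terraform to
    // plan a destroy-and-recreate of the node pool instead.
    ForceNew: true,
},
With this in place, terraform plan shows the replacement up front instead of failing at apply time.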
For a more complicated example, the aws_elasticsearch_domain resource allows in-place version changes, but only along specific version upgrade paths (so you can't, e.g., go from 5.4 to 7.8 directly and instead have to go 5.4 -> 5.6 -> 6.8 -> 7.8). This is done with the CustomizeDiff attribute on the schema, which lets you run logic at plan time to produce a different result than would be derived from the static configuration alone.
The CustomizeDiff for the aws_elasticsearch_domain elasticsearch_version attribute looks like this:
func resourceAwsElasticSearchDomain() *schema.Resource {
    return &schema.Resource{
        Create: resourceAwsElasticSearchDomainCreate,
        Read:   resourceAwsElasticSearchDomainRead,
        Update: resourceAwsElasticSearchDomainUpdate,
        Delete: resourceAwsElasticSearchDomainDelete,
        Importer: &schema.ResourceImporter{
            State: resourceAwsElasticSearchDomainImport,
        },
        Timeouts: &schema.ResourceTimeout{
            Update: schema.DefaultTimeout(60 * time.Minute),
        },
        CustomizeDiff: customdiff.Sequence(
            customdiff.ForceNewIf("elasticsearch_version", func(_ context.Context, d *schema.ResourceDiff, meta interface{}) bool {
                newVersion := d.Get("elasticsearch_version").(string)
                domainName := d.Get("domain_name").(string)
                conn := meta.(*AWSClient).esconn
                resp, err := conn.GetCompatibleElasticsearchVersions(&elasticsearch.GetCompatibleElasticsearchVersionsInput{
                    DomainName: aws.String(domainName),
                })
                if err != nil {
                    log.Printf("[ERROR] Failed to get compatible ElasticSearch versions %s", domainName)
                    return false
                }
                if len(resp.CompatibleElasticsearchVersions) != 1 {
                    return true
                }
                for _, targetVersion := range resp.CompatibleElasticsearchVersions[0].TargetVersions {
                    if aws.StringValue(targetVersion) == newVersion {
                        return false
                    }
                }
                return true
            }),
            SetTagsDiff,
        ),
Attempting to upgrade an aws_elasticsearch_domain's elasticsearch_version along an accepted upgrade path (e.g. 7.4 -> 7.8) will show up as an in-place upgrade in the plan and be performed at apply time. On the other hand, if you try an upgrade path that isn't allowed directly (e.g. 5.4 -> 7.8), Terraform's plan will show that it needs to destroy the existing Elasticsearch domain and create a new one.
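If you would rather surface an explicit error at plan time instead of planning a replacement, the same CustomizeDiff hook can return one. Here is a hedged sketch using customdiff.ValidateChange, assuming plugin SDK v2 signatures to match the example above:
CustomizeDiff: customdiff.ValidateChange("target_nodes_size",
    func(_ context.Context, old, new, meta interface{}) error {
        // Allow the initial create (old value is empty); reject any later change.
        if old.(string) != "" && old.(string) != new.(string) {
            return fmt.Errorf("'target_nodes_size' cannot be updated after creation; please create a new node pool with the new node size")
        }
        return nil
    },
),
This makes the error appear during terraform plan rather than terraform apply.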

Missing permissions for test user

I use a watch list, which is supported by the official golang Kubernetes library, to get notifications about created, updated, and removed services inside a Kubernetes namespace. Here is the snippet:
func (kc *KubernetesCollector) streamEvents(ctx context.Context) {
    kc.debugChannel <- fmt.Sprintf("Start streaming events from kubernetes API")

    watchList := cache.NewListWatchFromClient(kc.k8sClient.RESTClient(), "services", kc.k8sNamespace, fields.Everything())

    notificationCallbackToAddService := func(svc interface{}) {
        service := svc.(*v1.Service)
        kc.serviceNotificationChannel <- &serviceNotification{service, "add"}
    }
    notificationCallbackToDeleteService := func(svc interface{}) {
        service := svc.(*v1.Service)
        kc.serviceNotificationChannel <- &serviceNotification{service, "remove"}
    }

    callbacks := cache.ResourceEventHandlerFuncs{
        AddFunc:    notificationCallbackToAddService,
        DeleteFunc: notificationCallbackToDeleteService,
    }

    _, controller := cache.NewInformer(watchList, &v1.Service{}, time.Second*0, callbacks)
    go controller.Run(ctx.Done())
}
In my test I configure kc.k8sClient against the public API address, which is defined in the k8sAPI variable. Additionally, I set the bearer token to authenticate against the cluster and skip verification of the insecure SSL certificate.
func TestK8sWatchList(t *testing.T) {
    require := require.New(t)
    ...
    k8sConfig, err := clientcmd.BuildConfigFromFlags(k8sAPI, "")
    require.NoError(err)
    k8sConfig.BearerToken = "<bearerToken>"
    k8sConfig.Transport = &http.Transport{
        TLSClientConfig: &tls.Config{
            InsecureSkipVerify: true,
        },
    }
    k8sClient, err := kubernetes.NewForConfig(k8sConfig)
    k8sCollector := NewK8sCollector(k8sClient, k8sNamespace)
    ...
}
When I execute the test, I receive the following error messages:
go test -v -timeout 500s <replaced>/t1k/pkg/collector -run TestK8sWatchList
=== RUN TestK8sWatchList
11.02.2020 16:55:55 DEBUG: Start streaming events from kubernetes API
E0211 16:55:51.706530 121803 reflector.go:153] pkg/mod/k8s.io/client-go#v0.0.0-20200106225816-7985654fe8ee/tools/cache/reflector.go:105: Failed to list *v1.Service: forbidden: User "system:serviceaccount:t1k:t1k-test-serviceaccount" cannot get path "/namespaces/t1k/services"
E0211 16:55:52.707520 121803 reflector.go:153] pkg/mod/k8s.io/client-go#v0.0.0-20200106225816-7985654fe8ee/tools/cache/reflector.go:105: Failed to list *v1.Service: forbidden: User "system:serviceaccount:t1k:t1k-test-serviceaccount" cannot get path "/namespaces/t1k/services"
E0211 16:55:53.705539 121803 reflector.go:153] pkg/mod/k8s.io/client-go#v0.0.0-20200106225816-7985654fe8ee/tools/cache/reflector.go:105: Failed to list *v1.Service: forbidden: User "system:serviceaccount:t1k:t1k-test-serviceaccount" cannot get path "/namespaces/t1k/services"
I don't understand why I get this error, because in my opinion the service account "t1k-test-serviceaccount" has all the required permissions. Here are the service account, role, and role binding defined for the test user:
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: t1k
  name: t1k-test-serviceaccount
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: t1k
  name: t1k-test-role
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["*"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: t1k
  name: t1k-test-rolebinding
subjects:
- name: t1k-test-serviceaccount
  kind: ServiceAccount
  apiGroup: ""
roleRef:
  name: t1k-test-role
  kind: Role
  apiGroup: rbac.authorization.k8s.io
Additional information:
kubeadm version 1.15.9
kubectl version 1.17.2
golib versions
k8s.io/api v0.17.2
k8s.io/apimachinery v0.17.2
k8s.io/client-go v0.0.0-20200106225816-7985654fe8ee
k8s.io/utils v0.0.0-20200117235808-5f6fbceb4c31 // indirect
You can check the permissions of the service account using the command below:
kubectl auth can-i list services --namespace t1k --as=system:serviceaccount:t1k:t1k-test-serviceaccount
You don't need to set the token manually... you can use InClusterConfig as in this example. Client-go uses the service account token mounted inside the pod at /var/run/secrets/kubernetes.io/serviceaccount when rest.InClusterConfig() is used.
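A minimal sketch of that approach, for code running inside the cluster (newInClusterClient is an illustrative name):
import (
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
)

func newInClusterClient() (*kubernetes.Clientset, error) {
    // Reads the mounted service account token and cluster CA automatically,
    // so no bearer token or TLS settings have to be configured by hand.
    cfg, err := rest.InClusterConfig()
    if err != nil {
        return nil, err
    }
    return kubernetes.NewForConfig(cfg)
}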
I found the solution. The k8sClient attribute of the KubernetesCollector struct was a pointer, and the reflector in pkg/mod/k8s.io/client-go#v0.0.0-20200106225816-7985654fe8ee cannot handle pointer objects.
type KubernetesCollector struct {
    ...
    k8sClient *kubernetes.Clientset
    namespace string
    ...
}
I replaced the k8sClient with the CoreV1Interface from k8s.io/client-go/kubernetes/typed/core/v1 and changed the call for the ListWatch accordingly:
type KubernetesCollector struct {
    ...
    iface     corev1.CoreV1Interface
    namespace string
    ...
}

func (kc *KubernetesCollector) start(ctx context.Context) {
    watchList := cache.NewListWatchFromClient(kc.iface.RESTClient(), "services", kc.namespace, fields.Everything())
    ...
}

gRPC connectivity problem: How to figure out if it is the server or the client?

I am reading a Golang book named "Go Blueprints". One of the chapters is about implementing a microservice whose communication can be HTTP or gRPC. I think I did everything right; however, I can't get the gRPC communication to work. When I ask the server from the client, I get this error:
rpc error: code = Unimplemented desc = unknown service Vault
My question is: how do I start debugging this? How can I check whether the problem is in the server or in the client?
In your implementation, the service name was wrong when you initialised the endpoints for Hash and Validate. It should be pb.Vault instead of Vault, so the New method should look like this:
func New(conn *grpc.ClientConn) vault.Service {
    var hashEndpoint = grpctransport.NewClient(
        conn, "pb.Vault", "Hash",
        vault.EncodeGRPCHashRequest,
        vault.DecodeGRPCHashResponse,
        pb.HashResponse{},
    ).Endpoint()
    var validateEndpoint = grpctransport.NewClient(
        conn, "pb.Vault", "Validate",
        vault.EncodeGRPCValidateRequest,
        vault.DecodeGRPCValidateResponse,
        pb.ValidateResponse{},
    ).Endpoint()
    return vault.Endpoints{
        HashEndpoint:     hashEndpoint,
        ValidateEndpoint: validateEndpoint,
    }
}
In general, you should consult the generated .pb.go file of the matching proto to see how things are named. As you can see, it is not straightforward, and it probably depends on the implementation of the proto generators.
In your case, here is what it looks like:
grpc.ServiceDesc{
    ServiceName: "pb.Vault",
    HandlerType: (*VaultServer)(nil),
    Methods: []grpc.MethodDesc{
        {
            MethodName: "Hash",
            Handler:    _Vault_Hash_Handler,
        },
        {
            MethodName: "Validate",
            Handler:    _Vault_Validate_Handler,
        },
    },
    Streams:  []grpc.StreamDesc{},
    Metadata: "vault.proto",
}
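One way to answer the "server or client?" question directly is to ask the running gRPC server which service names it has actually registered, and compare them with the name the client dials. A small sketch (printRegisteredServices and srv are illustrative names, not from the book):
import (
    "fmt"

    "google.golang.org/grpc"
)

func printRegisteredServices(srv *grpc.Server) {
    // GetServiceInfo returns a map keyed by fully-qualified service name,
    // e.g. "pb.Vault" - this is the name the client must use.
    for name := range srv.GetServiceInfo() {
        fmt.Println(name)
    }
}
If the names printed here differ from what the client uses, the fix belongs on the client side.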

Go structs to OpenAPI to JSONSchema generation automatically

I have a Go struct for which I want to generate an OpenAPI schema automatically. Once I have an OpenAPI definition of that struct, I want to generate a JSONSchema from it, so that I can validate the input data that comes in and is going to be parsed into those structs.
The struct looks like the following:
// mySpec: io.myapp.MinimalPod
type MinimalPod struct {
Name string `json:"name"`
// k8s: io.k8s.kubernetes.pkg.api.v1.PodSpec
v1.PodSpec
}
The above struct is clearly an augmentation of the Kubernetes PodSpec.
The approach I have used is to generate the definition part for my struct MinimalPod myself, while the definition for PodSpec comes from the upstream OpenAPI spec of Kubernetes: PodSpec has the key io.k8s.kubernetes.pkg.api.v1.PodSpec there, and its definition is injected from there into my Properties. In my code that parses the above struct, I have templates for what to do when a struct field is a string.
If a field has a comment that starts with k8s: ..., the next part is the Kubernetes object's OpenAPI definition key - in our case io.k8s.kubernetes.pkg.api.v1.PodSpec. So I retrieve that field's definition from the upstream OpenAPI definition and embed it into the definition of my struct.
The OpenAPI definition generated for this struct is then injected into the Kubernetes OpenAPI schema's definitions under the key io.myapp.MinimalPod. From there I can use the tool openapi2jsonschema to generate a JSONSchema, which produces a file named MinimalPod.json.
The jsonschema tool and the file MinimalPod.json can then be used to validate input given to my tool's parser, to check that all fields were given correctly.
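For example, a validation run with the jsonschema CLI might look something like this (the input file name is illustrative):
jsonschema -i input.json MinimalPod.json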
Is this the right approach, or is there a tool/library that takes Go structs and gives me an OpenAPI schema? It would be fine if it could not identify where to inject the Kubernetes OpenAPI schema from; even automatic parsing of Go structs into an OpenAPI definition would be much appreciated.
Update 1
After following @mehdy's instructions, this is what I have tried:
I used the import path github.com/kedgeproject/kedge/vendor/k8s.io/client-go/pkg/api/v1 to import the PodSpec definition instead of k8s.io/api/core/v1, and the code looks like this:
package foomodel

import "github.com/kedgeproject/kedge/vendor/k8s.io/client-go/pkg/api/v1"

// MinimalPod is a minimal pod.
// +k8s:openapi-gen=true
type MinimalPod struct {
    Name string `json:"name"`

    v1.PodSpec
}
Now when I generate the same thing with the -i flag changed from k8s.io/api/core/v1 to github.com/kedgeproject/kedge/vendor/k8s.io/client-go/pkg/api/v1:
$ go run example/openapi-gen/main.go -i k8s.io/kube-openapi/example/model,github.com/kedgeproject/kedge/vendor/k8s.io/client-go/pkg/api/v1 -h example/foomodel/header.txt -p k8s.io/kube-openapi/example/foomodel
This is what is generated:
$ cat openapi_generated.go
// +build !ignore_autogenerated

/*
======
Some random text
======
*/

// This file was autogenerated by openapi-gen. Do not edit it manually!

package foomodel

import (
    spec "github.com/go-openapi/spec"
    common "k8s.io/kube-openapi/pkg/common"
)

func GetOpenAPIDefinitions(ref common.ReferenceCallback) map[string]common.OpenAPIDefinition {
    return map[string]common.OpenAPIDefinition{
        "k8s.io/kube-openapi/example/model.Container": {
            Schema: spec.Schema{
                SchemaProps: spec.SchemaProps{
                    Description: "Container defines a single application container that you want to run within a pod.",
                    Properties: map[string]spec.Schema{
                        "health": {
                            SchemaProps: spec.SchemaProps{
                                Description: "One common definitions for 'livenessProbe' and 'readinessProbe' this allows to have only one place to define both probes (if they are the same) Periodic probe of container liveness and readiness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes",
                                Ref:         ref("k8s.io/client-go/pkg/api/v1.Probe"),
                            },
                        },
                        "Container": {
                            SchemaProps: spec.SchemaProps{
                                Ref: ref("k8s.io/client-go/pkg/api/v1.Container"),
                            },
                        },
                    },
                    Required: []string{"Container"},
                },
            },
            Dependencies: []string{
                "k8s.io/client-go/pkg/api/v1.Container", "k8s.io/client-go/pkg/api/v1.Probe"},
        },
    }
}
Only this much of the configuration is generated, whereas when I switch back to k8s.io/api/core/v1 I get more than 8k lines of auto-generated config code. What am I missing here?
Here the definitions of k8s.io/client-go/pkg/api/v1.Container and k8s.io/client-go/pkg/api/v1.Probe are missing, while when I use k8s.io/api/core/v1 as the import everything is generated.
Note: to reproduce the above steps, git clone https://github.com/kedgeproject/kedge into your GOPATH.
You can use the kube-openapi package for this. I am going to add a sample to the repo, but I've tested this simple model:
// Car is a simple car model.
// +k8s:openapi-gen=true
type Car struct {
    Color    string
    Capacity int

    // +k8s:openapi-gen=false
    HiddenFeature string
}
Assuming you created this file in the k8s.io/kube-openapi/example/model package, run:
go run example/openapi-gen/main.go -h example/model/header.txt -i k8s.io/kube-openapi/example/model -p k8s.io/kube-openapi/example/model
(you also need to add a header.txt file). You should see a new file created in the example/model folder called openapi_generated.go. This is an intermediate generated file that has your OpenAPI model in it:
func GetOpenAPIDefinitions(ref common.ReferenceCallback) map[string]common.OpenAPIDefinition {
    return map[string]common.OpenAPIDefinition{
        "k8s.io/kube-openapi/example/model.Car": {
            Schema: spec.Schema{
                SchemaProps: spec.SchemaProps{
                    Description: "Car is a simple car model.",
                    Properties: map[string]spec.Schema{
                        "Color": {
                            SchemaProps: spec.SchemaProps{
                                Type:   []string{"string"},
                                Format: "",
                            },
                        },
                        "Capacity": {
                            SchemaProps: spec.SchemaProps{
                                Type:   []string{"integer"},
                                Format: "int32",
                            },
                        },
                    },
                    Required: []string{"Color", "Capacity"},
                },
            },
            Dependencies: []string{},
        },
    }
}
From there you should be able to call the generated method, get the model for your type, and get its Schema.
With some go get magic and a small change to the command line, I was able to generate the model for your type as well. Here is what you should change in your code:
package model

import "k8s.io/api/core/v1"

// MinimalPod is a minimal pod.
// +k8s:openapi-gen=true
type MinimalPod struct {
    Name string `json:"name"`

    v1.PodSpec
}
and then change the run command a little to include PodSpec in the generation:
go run example/openapi-gen/main.go -h example/model/header.txt -i k8s.io/kube-openapi/example/model,k8s.io/api/core/v1 -p k8s.io/kube-openapi/example/model
Here is what I got: https://gist.github.com/mbohlool/e399ac2458d12e48cc13081289efc55a

Unable to declare Kind type for Kubernetes API Type Declarations

I'm relatively new to golang and need some help pointing me in the right direction.
I'm trying to declare a new Deployment type.
My imports look like:
import (
    "encoding/json"
    "fmt"
    "io/ioutil"

    yaml "gopkg.in/yaml.v2"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/kubernetes/pkg/api/v1"
    "k8s.io/kubernetes/pkg/apis/extensions/v1beta1"
)
When I try to create a Deployment object like:
test := v1beta1.Deployment{
    Spec: v1beta1.DeploymentSpec{
        Template: v1.PodTemplateSpec{
            Spec: v1.PodSpec{
                Containers: []v1.Container{{
                    Name:  "test",
                    Image: "image_url",
                }},
            },
        },
    },
}
It works, but the Deployment object returned doesn't have a Kind, which is necessary to identify the object.
According to https://github.com/kubernetes/kubernetes/blob/master/pkg/apis/extensions/types.go#L162,
there's an embedded metav1.TypeMeta which has the Kind field that I need. (For reference: https://github.com/kubernetes/apimachinery/blob/master/pkg/apis/meta/v1/types.go#L38)
I tried declaring metav1.TypeMeta in the struct literal like:
test := v1beta1.Deployment{
    metav1.TypeMeta: metav1.TypeMeta{Kind: "Deployment"},
    Spec: v1beta1.DeploymentSpec{
        Template: v1.PodTemplateSpec{
            Spec: v1.PodSpec{
                Containers: []v1.Container{{
                    Name:  "test",
                    Image: "image_url",
                }},
            },
        },
    },
}
But I get:
unknown field '"k8s.io/apimachinery/pkg/apis/meta/v1".TypeMeta' in struct literal of type v1beta1.Deployment
I suspect it is because the metav1.TypeMeta declaration in the Deployment struct is an unexported field.
How should I declare Kind?
When using an embedded struct, the key is simply the type name without the package qualifier. You can declare your TypeMeta like this:
test := v1beta1.Deployment{
    TypeMeta: metav1.TypeMeta{
        APIVersion: "apps/v1beta1",
        Kind:       "Deployment",
    },
}
However, manually setting the TypeMeta on any Kubernetes API object is usually only necessary if you plan to persist these objects yourself (for example, to generate YAML files).
When using the Kubernetes client API (for example, using the k8s.io/client-go package) to talk to an API server, you will not need the TypeMeta property, since all API operations are strongly typed anyway and all metadata can safely be inferred. After all, the API version and kind of a v1beta1.Deployment struct should be (and are, to the client library) obvious.
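As an illustration of that last point, here is a hedged sketch using current client-go, where the Deployment type lives in k8s.io/api/apps/v1 and Create takes a context and options (older releases differ); note that no TypeMeta is set anywhere:
import (
    "context"

    appsv1 "k8s.io/api/apps/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// createDeployment pushes an apps/v1 Deployment to the "default" namespace.
// TypeMeta can be left empty: the typed client knows the group/version/kind.
func createDeployment(clientset *kubernetes.Clientset, d *appsv1.Deployment) error {
    _, err := clientset.AppsV1().Deployments("default").Create(
        context.TODO(), d, metav1.CreateOptions{})
    return err
}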
