How to create custom objects in Kubernetes? [duplicate] - go

This question already has an answer here:
Create/Get a custom kubernetes resource
(1 answer)
Closed 1 year ago.
I am using Velero to create backups and restores. Velero has controllers which get triggered when I create the custom objects.
import veleroApi "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"

restoreObj := veleroApi.Restore{
    TypeMeta: metav1.TypeMeta{},
    ObjectMeta: metav1.ObjectMeta{
        DeletionGracePeriodSeconds: &gracePeriodSeconds,
    },
    Spec: veleroApi.RestoreSpec{
        BackupName: "backup-name-20211101",
        RestorePVs: &restorePV,
    },
    Status: veleroApi.RestoreStatus{},
}
But how can I submit this custom object to the Kube API server?
I used the API client to apply the changes:
apiClient.CoreV1().RESTClient().Patch(types.ApplyPatchType).Body(restoreObj).Do(context)
But I am getting:
unknown type used for body: {TypeMeta:{Kind:Restore APIVersion:velero.io/v1} ObjectMeta:{Name: GenerateName: Namespace:velero SelfLink: UID: ResourceVersion: Generation:0 CreationTimestamp:0001-01-01 00:00:00 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:0xc000256018 Labels:map[] Annotations:map[] OwnerReferences:[] Finalizers:[] ClusterName: ManagedFields:[]} Spec:{BackupName:backup-name-20211101 ScheduleName: IncludedNamespaces:[] ExcludedNamespaces:[] IncludedResources:[] ExcludedResources:[] NamespaceMapping:map[] LabelSelector:nil RestorePVs:0xc0007a9088 PreserveNodePorts:<nil> IncludeClusterResources:<nil> Hooks:{Resources:[]}} Status:{Phase: ValidationErrors:[] Warnings:0 Errors:0 FailureReason: StartTimestamp:<nil> CompletionTimestamp:<nil> Progress:<nil>}}

If you would like to create a client for a custom object, follow these steps:
Describe the custom resource definition for which you would like to create a REST client:
kubectl describe CustomResourceDefinition <custom resource definition name>
Note down the API group, the version, and the Kind; as an example, it would look like:
API Version: apiextensions.k8s.io/v1
Kind: CustomResourceDefinition
Here, apiextensions.k8s.io is the API group and v1 is the version.
Check that the API version you got from step 1 appears in the list of APIs:
kubectl get --raw "/"
Create the client:
import (
    "github.com/golang/glog"

    "k8s.io/apimachinery/pkg/runtime"
    "k8s.io/apimachinery/pkg/runtime/schema"
    "k8s.io/apimachinery/pkg/runtime/serializer"
    "k8s.io/client-go/rest"
)

// getClusterConfig returns the in-cluster config, using the pod's service account.
func getClusterConfig() *rest.Config {
    config, err := rest.InClusterConfig()
    if err != nil {
        glog.Fatal(err.Error())
    }
    return config
}

// getRestClient builds a REST client scoped to the custom resource's group and version.
func getRestClient() *rest.RESTClient {
    cfg := getClusterConfig()
    gv := schema.GroupVersion{Group: "<API>", Version: "<version>"}
    cfg.GroupVersion = &gv
    cfg.APIPath = "/apis" // you can verify the path from step 2
    scheme := runtime.NewScheme()
    // Register your custom resource's Go types on this scheme (e.g. via the
    // API package's generated AddToScheme) so the client can encode/decode them.
    codecs := serializer.NewCodecFactory(scheme)
    cfg.NegotiatedSerializer = codecs.WithoutConversion()
    restClient, err := rest.RESTClientFor(cfg)
    if err != nil {
        panic(err.Error())
    }
    return restClient
}
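Applied to the question's Restore object, a minimal usage sketch might look like this. It assumes the Velero group/version velero.io/v1 and that the Velero v1 API package exposes the standard generated AddToScheme (verify this against your Velero version); without registering the types on the scheme above, the client cannot serialize the object:
import (
    "context"

    veleroApi "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
)

func createRestore(restoreObj *veleroApi.Restore) error {
    // getRestClient from above, built with Group: "velero.io", Version: "v1",
    // and with veleroApi.AddToScheme called on its scheme.
    restClient := getRestClient()
    return restClient.Post().
        Namespace("velero").
        Resource("restores").
        Body(restoreObj).
        Do(context.TODO()).
        Error()
}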
Alternatively, check the answer from kozmo here.
For Velero, you can reuse the client the project already provides.
As an example, take a look at this code:
restore, err := o.client.VeleroV1().Restores(restore.Namespace).Create(context.TODO(), restore, metav1.CreateOptions{})
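For completeness, a hedged sketch of how such a client could be constructed; the import path follows Velero's standard generated-clientset layout, so verify it against the Velero version you use:
import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

    veleroApi "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
    veleroClient "github.com/vmware-tanzu/velero/pkg/generated/clientset/versioned"
)

func createRestoreWithClientset(restoreObj *veleroApi.Restore) (*veleroApi.Restore, error) {
    clientset, err := veleroClient.NewForConfig(getClusterConfig())
    if err != nil {
        return nil, err
    }
    return clientset.VeleroV1().
        Restores(restoreObj.Namespace).
        Create(context.TODO(), restoreObj, metav1.CreateOptions{})
}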

Related

Is there a way to add collaborators to a Google Sheet using the Golang Client Library or REST API?

I am able to create a new spreadsheet with the gsheets client library, and my next step is to add editors to that newly created sheet so that it can be accessed by the users of the application.
Here is the code for creating the sheet:
ctx := context.Background()
srv, err := gsheets.NewService(ctx)
if err != nil {
    log.Printf("Unable to retrieve Sheets client: %v", err)
}
sp := &gsheets.Spreadsheet{
    Properties: &gsheets.SpreadsheetProperties{
        Title: groupName,
    },
}
spreadsheet, err := srv.Spreadsheets.Create(sp).Do()
if err != nil {
    return nil, err
}
I have searched through the golang client library docs and the REST API docs, and I am unable to find anything related to adding collaborators.
I was expecting there to be some request object that allows me to add a collaborator using their email and role:
req := gsheets.Request{
    AddCollaborator: &gsheets.AddCollaboratorRequest{
        Email: "person@gmail.com",
        Role:  "editor",
    },
}
busr := &gsheets.BatchUpdateSpreadsheetRequest{
    Requests: []*gsheets.Request{&req},
}
res, err := srv.Spreadsheets.BatchUpdate(spreadsheetId, busr).Do()
or at the very least I was expecting there to be an API endpoint where I can achieve the same result.
I am also curious whether there is a way to create this new spreadsheet as read-only to the public? That would at least allow me to continue developing.
It is possible to add editors with the google.golang.org/api/sheets/v4 library.
You can simply create a spreadsheet with:
func (r *SpreadsheetsService) Create(spreadsheet *Spreadsheet) *SpreadsheetsCreateCall
and add editors with the Editors type:
type Editors struct {
    ...
    // Users: The email addresses of users with edit access to the protected
    // range.
    Users []string `json:"users,omitempty"`
    ...
}
Check library docs for more details.
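As the doc comment notes, Editors applies to a protected range, so one way to grant edit access to a range is an AddProtectedRange request in a batch update. A minimal sketch, reusing the question's srv and created spreadsheet, with a hypothetical collaborator address:
req := &gsheets.Request{
    AddProtectedRange: &gsheets.AddProtectedRangeRequest{
        ProtectedRange: &gsheets.ProtectedRange{
            // Protect the first sheet; narrow the GridRange as needed.
            Range: &gsheets.GridRange{SheetId: 0},
            Editors: &gsheets.Editors{
                Users: []string{"person@example.com"}, // hypothetical address
            },
        },
    },
}
busr := &gsheets.BatchUpdateSpreadsheetRequest{
    Requests: []*gsheets.Request{req},
}
if _, err := srv.Spreadsheets.BatchUpdate(spreadsheet.SpreadsheetId, busr).Do(); err != nil {
    return nil, err
}
Note that sharing the spreadsheet file itself (adding collaborators who can open it) is handled by the Drive API's permissions methods rather than the Sheets API.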

how to do open api format yaml validation, without using clusters?

I have built a schema in OpenAPI format:
type Test_manifest struct {
    metav1.TypeMeta
    metav1.ObjectMeta
    Spec spec
}

type spec struct {
    Policies  []string
    Resources resources
    Results   []results
    Variables variables
}
This is not the complete schema, just a part of it.
And here is the actual YAML file:
apiVersion: cli.kyverno.io/v1beta1
kind: kyvernotest
metadata:
  name: test-check-nvidia-gpus
  labels:
    foolabel: foovalue
  annotations:
    fookey: foovalue
I'm trying to validate this incoming YAML file from the user. I can convert the YAML to JSON and then validate the values of the fields, but I don't see how to validate the field itself; I mean, if the user writes name1 rather than name, how do I show an error for it? Basically, how do I validate the key?
Here's what I've implemented for value validation:
test := "cmd/cli/kubectl-kyverno/test/test.yaml"
yamlFile, err := ioutil.ReadFile(test)
if err != nil {
    fmt.Printf("Error: failed to read file %v", err)
}
policyBytes, err1 := yaml.ToJSON(yamlFile)
if err1 != nil {
    fmt.Printf("failed to convert to JSON")
}
tests := &kyvernov1.Test_manifest{}
if err := json.Unmarshal(policyBytes, tests); err != nil {
    fmt.Printf("failed to decode yaml")
}
if tests.TypeMeta.APIVersion == "" {
    fmt.Printf("skipping file as tests.TypeMeta.APIVersion not found")
}
if tests.TypeMeta.Kind == "" {
    fmt.Printf("skipping file as tests.TypeMeta.Kind not found")
} else if tests.TypeMeta.Kind != "KyvernoTest" {
    fmt.Printf("skipping file as tests.TypeMeta.Kind is not `KyvernoTest`")
}
Also, we want this validation to happen outside the cluster.
Two things come to mind:
I notice that you are trying to build a Kubernetes API extension manually, which is a lot of rework. I suggest you use a framework that handles this for you; that is the recommended best practice and is used very frequently, since doing it manually is too complicated. Here are some resources: kubebuilder, operator-sdk. These solutions are OpenAPI-based as well: they let you define your schema in a simple template and generate all the validation and API + controller code for you.
If you want more validation and sanity testing, this is typically done with an admission controller in your cluster. It intercepts the incoming request and, before it is processed by the API server, performs actions on it (verification, compliance, policy enforcement, authentication, etc.). You can read more about admission controllers here.
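That said, for the narrow case in the question (rejecting an unknown key such as name1 outside a cluster), strict decoding is a lightweight option. This is a sketch using sigs.k8s.io/yaml's UnmarshalStrict, not something from the answer above; it reuses the question's kyvernov1 import and assumes the struct fields carry json tags matching the manifest keys (e.g. json:"metadata,omitempty" on the embedded ObjectMeta):
import (
    "fmt"
    "io/ioutil"

    "sigs.k8s.io/yaml"
)

func validateManifest(path string) error {
    data, err := ioutil.ReadFile(path)
    if err != nil {
        return err
    }
    tests := &kyvernov1.Test_manifest{}
    // UnmarshalStrict fails on keys that do not map to a struct field,
    // so a misspelled key such as "name1" surfaces as an error.
    if err := yaml.UnmarshalStrict(data, tests); err != nil {
        return fmt.Errorf("invalid manifest %s: %w", path, err)
    }
    return nil
}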

GO GCP SDK auth code to connect gcp project

I'm using the following code, which works as expected: from the CLI I run gcloud auth application-default login, enter my credentials, and I was able to run the code successfully from my MacBook.
Now I need to run this code in my CI, where we need a different approach. What should that approach be to obtain the client_secret and client_id, or a service account / some env variable? What is the way to do it in Go code?
import "google.golang.org/api/compute/v1"
project := "my-project"
region := "my-region"
ctx := context.Background()
c, err := google.DefaultClient(ctx, compute.CloudPlatformScope)
if err != nil {
log.Fatal(err)
}
computeService, err := compute.New(c)
if err != nil {
log.Fatal(err)
}
req := computeService.Routers.List(project, region)
if err := req.Pages(ctx, func(page *compute.RouterList) error {
for _, router := range page.Items {
// process each `router` resource:
fmt.Printf("%#v\n", router)
// NAT Gateways are found in router.nats
}
return nil
}); err != nil {
log.Fatal(err)
}
Since you're using Jenkins, you probably want to start with how to create a service account. That guide walks you through creating a service account and exporting a key to be set as a variable in another CI/CD system.
Then refer to the docs from the client library on how to create a new client with a source credential.
e.g.
client, err := storage.NewClient(ctx, option.WithCredentialsFile("path/to/keyfile.json"))
If you provided no source, it would attempt to read the credentials locally and act as the service account running the operation (not applicable in your use case).
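Applied to your compute snippet, a minimal sketch might look like this; the env var is the conventional one, but any path your CI exposes works (the exact setup is an assumption about your environment):
import (
    "context"
    "log"
    "os"

    compute "google.golang.org/api/compute/v1"
    "google.golang.org/api/option"
)

func newComputeService(ctx context.Context) *compute.Service {
    // Path to the service account key your CI exports; adjust to however
    // your CI provides the file (assumption).
    keyFile := os.Getenv("GOOGLE_APPLICATION_CREDENTIALS")
    svc, err := compute.NewService(ctx, option.WithCredentialsFile(keyFile))
    if err != nil {
        log.Fatal(err)
    }
    return svc
}
Note that if GOOGLE_APPLICATION_CREDENTIALS is set, your existing google.DefaultClient call already picks the key up through Application Default Credentials with no code change.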
Many CIs support exporting specific env vars, or your script/config can do it too.
But if you want to run this in a CI, why do you need such configuration? Integration tests?
Some services can be used locally for unit/smoke testing. Pub/Sub, for example, can be run as a fake/local server to perform some tests; see the sketch below.
Or perhaps I did not understand your question; in that case, can you provide an example?
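For reference, a sketch of that local Pub/Sub approach using the pstest package (the project ID and topic name are placeholders):
import (
    "context"
    "log"

    "cloud.google.com/go/pubsub"
    "cloud.google.com/go/pubsub/pstest"
    "google.golang.org/api/option"
    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"
)

func main() {
    ctx := context.Background()
    // In-memory fake Pub/Sub server; no GCP project or credentials needed.
    srv := pstest.NewServer()
    defer srv.Close()

    conn, err := grpc.Dial(srv.Addr, grpc.WithTransportCredentials(insecure.NewCredentials()))
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    client, err := pubsub.NewClient(ctx, "test-project", option.WithGRPCConn(conn))
    if err != nil {
        log.Fatal(err)
    }
    if _, err := client.CreateTopic(ctx, "test-topic"); err != nil {
        log.Fatal(err)
    }
}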

Problem with export open telemetry data to newRelic

I have an issue with exporting and sending OpenTelemetry data to New Relic's gRPC endpoint.
This is my code snippet that tries to connect to the New Relic endpoint:
var headers = map[string]string{
    "api-key": "my newRelic API key",
}
var clientOpts = []otlptracegrpc.Option{
    otlptracegrpc.WithEndpoint("https://otlp.nr-data.net:4317"),
    otlptracegrpc.WithInsecure(),
    otlptracegrpc.WithReconnectionPeriod(2 * time.Second),
    otlptracegrpc.WithDialOption(grpc.WithBlock()),
    otlptracegrpc.WithTimeout(30 * time.Second),
    otlptracegrpc.WithHeaders(headers),
    otlptracegrpc.WithCompressor("gzip"),
}
otlpExporter, err := otlptrace.New(ctx, otlptracegrpc.NewClient(clientOpts...))
if err != nil {
    return nil, fmt.Errorf("creating OTLP trace exporter: %w", err)
}
resource, _ := g.Config.resource(ctx)
tracerProvider := trace.NewTracerProvider(
    trace.WithSampler(trace.AlwaysSample()),
    trace.WithBatcher(otlpExporter),
    trace.WithResource(resource),
)
otel.SetTracerProvider(tracerProvider)
It gets stuck at the otlptrace.New step.
I'm new to OpenTelemetry. I read the OpenTelemetry documentation and I can print OTel data to the console, but when I want to export it to New Relic it does not work.
I read New Relic's OTel documentation too: they had an exporter SDK, but it was discontinued, and they now provide this gRPC endpoint instead, which is not well documented and lacks examples.
Do you have any idea?
I found the problem. The issue was TLS.
I replaced this line:
otlptracegrpc.WithInsecure(),
with this line:
otlptracegrpc.WithTLSCredentials(credentials.NewClientTLSFromCert(nil, "")),
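Putting it together, the working options look roughly like the sketch below. One extra caveat worth noting: otlptracegrpc.WithEndpoint expects host:port without a scheme, so the https:// prefix from the question should be dropped (TLS comes from the credentials option, not the URL). The endpoint and API-key header are taken from the question:
import (
    "go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
    "google.golang.org/grpc/credentials"
)

var clientOpts = []otlptracegrpc.Option{
    otlptracegrpc.WithEndpoint("otlp.nr-data.net:4317"), // host:port, no scheme
    otlptracegrpc.WithTLSCredentials(credentials.NewClientTLSFromCert(nil, "")),
    otlptracegrpc.WithHeaders(map[string]string{"api-key": "my New Relic API key"}),
    otlptracegrpc.WithCompressor("gzip"),
}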

How to Assume Cross-Account Role?

AWS' Golang SDK says that I should use stscreds.AssumeRoleProvider to assume a cross-account role (in this case, for querying another account's DynamoDb table from a web server). This code works:
var sess *session.Session

func init() {
    sess = session.Must(session.NewSession(&aws.Config{
        Region: aws.String("us-west-2"),
    }))
}

func getDynamoDbClient() *dynamodb.DynamoDB {
    crossAccountRoleArn := "arn:...:my-cross-account-role-ARN"
    creds := stscreds.NewCredentials(sess, crossAccountRoleArn, func(arp *stscreds.AssumeRoleProvider) {
        arp.RoleSessionName = "my-role-session-name"
        arp.Duration = 60 * time.Minute
        arp.ExpiryWindow = 30 * time.Second
    })
    dynamoDbClient := dynamodb.New(sess, aws.NewConfig().WithCredentials(creds))
    return dynamoDbClient
}
According to the documentation, the returned client is thread-safe:
DynamoDB methods are safe to use concurrently.
The question is, since the credentials are auto-renewed via stscreds.AssumeRoleProvider, do I:
1. Need to new up a new client on each request (to ensure that I've got unexpired credentials), or
2. Can I new up a DynamoDb client when the web server starts up and reuse it for the life of the web server?
Edited To Note:
I dug into the source code for the Golang AWS SDK, and it looks like the credentials returned by stscreds.NewCredentials() are nothing more than a wrapper around a reference to the stscreds.AssumeRoleProvider. So it seems likely to me that the client will magically get auto-renewed credentials.
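A quick way to confirm that from code (a sketch using the v1 SDK's documented credentials.Credentials methods; creds is the value returned by stscreds.NewCredentials above):
// Each SDK request calls Get(), which re-invokes AssumeRole only when the
// cached credentials are expired (or within ExpiryWindow of expiring).
v, err := creds.Get()
if err != nil {
    log.Fatal(err)
}
log.Printf("provider=%s expired=%t", v.ProviderName, creds.IsExpired())
So reusing one DynamoDB client for the life of the web server is consistent with how the provider is designed.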
AWS' documentation leaves something to be desired.
roleArn := "arn:aws:iam::1234567890:role/my-role"
awsSession, _ := session.NewSession(&aws.Config{
    Region: aws.String("us-west-2"),
})
stsClient := sts.New(awsSession)
stsRequest := sts.AssumeRoleInput{
    RoleArn:         aws.String(roleArn),
    RoleSessionName: aws.String("my-role-test"),
    DurationSeconds: aws.Int64(900), // min allowed
}
stsResponse, err := stsClient.AssumeRole(&stsRequest)
if err != nil {
    log.Fatal("an exception occurred when attempting to assume my role. error=" + err.Error())
}
os.Setenv("AWS_ACCESS_KEY_ID", *stsResponse.Credentials.AccessKeyId)
os.Setenv("AWS_SECRET_ACCESS_KEY", *stsResponse.Credentials.SecretAccessKey)
os.Setenv("AWS_SESSION_TOKEN", *stsResponse.Credentials.SessionToken)
os.Setenv("AWS_SECURITY_TOKEN", *stsResponse.Credentials.SessionToken)
os.Setenv("ASSUMED_ROLE", roleArn)
