I would like to randomly shut down pods in a Kubernetes cluster with Go. I have already written code that lets me log in to the server and run commands.
Now I need to read all the available pods in the cluster, choose some of them at random, and terminate them. (I am new to Go.)
Could you please help me with this?
This is how I am running commands on the cluster/server:
cli.ExecuteCmd("kubectl get pods")
// Use one connection per command.
// Catch in the client when required.
func (cli *SSHClient) ExecuteCmd(command string) {
	conn, err := ssh.Dial("tcp", cli.Hostname+":22", cli.Config)
	if err != nil {
		logrus.Infof("%s#%s", cli.Config.User, cli.Hostname)
		logrus.Info("Hint: Add your key to the ssh agent: 'ssh-add ~/.ssh/id_rsa'")
		logrus.Fatal(err)
	}
	defer conn.Close()
	session, err := conn.NewSession()
	if err != nil {
		logrus.Fatalf("NewSession failed: %v", err)
	}
	defer session.Close()
	var stdoutBuf bytes.Buffer
	session.Stdout = &stdoutBuf
	if err := session.Run(command); err != nil {
		logrus.Fatalf("Run failed: %v", err)
	}
	logrus.Infof("> %s", stdoutBuf.Bytes())
}
Use the k8s.io/client-go package (GitHub link) to list the Kubernetes pods and then delete some of them at random.
Use the client.CoreV1().Pods() methods to list and delete pods.
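For illustration, here is a minimal sketch of that approach. It is not the poster's code: it assumes in-cluster config and the "default" namespace, and it uses current client-go method signatures (List/Delete take a context); adjust both to your setup.
package main

import (
	"context"
	"fmt"
	"math/rand"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Build a client from the in-cluster service account
	// (use clientcmd.BuildConfigFromFlags for a kubeconfig file instead).
	config, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	ctx := context.Background()

	// List all pods in the "default" namespace.
	pods, err := clientset.CoreV1().Pods("default").List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	if len(pods.Items) == 0 {
		fmt.Println("no pods found")
		return
	}

	// Pick one pod at random and delete it.
	victim := pods.Items[rand.Intn(len(pods.Items))]
	if err := clientset.CoreV1().Pods(victim.Namespace).Delete(ctx, victim.Name, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
	fmt.Printf("deleted pod %s/%s\n", victim.Namespace, victim.Name)
}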
When I was using go111, I had traces of all my Datastore calls (similar to the image below). But as soon as I upgraded to go115 and started using cloud.google.com/go/datastore, I lost this information completely. I tried to set up telemetry by adding the following in my main:
projectID := os.Getenv("GOOGLE_CLOUD_PROJECT")
exporter, err := texporter.NewExporter(texporter.WithProjectID(projectID))
if err != nil {
log.Fatalf("texporter.NewExporter of '%v': %v", projectID, err)
}
tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exporter))
defer tp.ForceFlush(bgCtx)
otel.SetTracerProvider(tp)
But this didn't work. Am I missing anything to tell the datastore library to export those calls?
Thank you!
I finally found https://github.com/GoogleCloudPlatform/golang-samples/blob/master/trace/trace_quickstart/main.go
and realized I was missing the following:
trace.RegisterExporter(exporter)
This solved my problem. Then I also added the following on localhost
trace.ApplyConfig(trace.Config{DefaultSampler: trace.AlwaysSample()})
To make sure all requests are traced:
httpHandler := &ochttp.Handler{
	// Use the Google Cloud propagation format.
	Propagation: &propagation.HTTPFormat{},
}
if err := http.ListenAndServe(":"+port, httpHandler); err != nil {
	log.Fatal(err)
}
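For completeness, here is a rough consolidation of those pieces, modeled on the linked quickstart. It uses the OpenCensus Stackdriver exporter (contrib.go.opencensus.io/exporter/stackdriver) rather than the OpenTelemetry texporter from the question, so treat it as a sketch rather than a drop-in replacement:
package main

import (
	"log"
	"net/http"
	"os"

	"contrib.go.opencensus.io/exporter/stackdriver"
	"contrib.go.opencensus.io/exporter/stackdriver/propagation"
	"go.opencensus.io/plugin/ochttp"
	"go.opencensus.io/trace"
)

func main() {
	// Export traces to Cloud Trace for the current project.
	exporter, err := stackdriver.NewExporter(stackdriver.Options{
		ProjectID: os.Getenv("GOOGLE_CLOUD_PROJECT"),
	})
	if err != nil {
		log.Fatal(err)
	}
	trace.RegisterExporter(exporter)

	// Trace every request; handy locally, usually too aggressive for production.
	trace.ApplyConfig(trace.Config{DefaultSampler: trace.AlwaysSample()})

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})

	// ochttp.Handler wraps http.DefaultServeMux when Handler is nil and honors
	// the Google Cloud propagation format on incoming trace headers.
	httpHandler := &ochttp.Handler{Propagation: &propagation.HTTPFormat{}}

	port := os.Getenv("PORT")
	if port == "" {
		port = "8080"
	}
	log.Fatal(http.ListenAndServe(":"+port, httpHandler))
}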
I am trying to have some consumers process messages from Kafka, and I would like to scale a Kubernetes deployment for elastic message-processing capacity.
I found this code from sarama official guide https://pkg.go.dev/github.com/Shopify/sarama#NewConsumerGroup:
package main

import (
	"context"
	"fmt"

	"github.com/Shopify/sarama"
)

type exampleConsumerGroupHandler struct{}

func (exampleConsumerGroupHandler) Setup(_ sarama.ConsumerGroupSession) error   { return nil }
func (exampleConsumerGroupHandler) Cleanup(_ sarama.ConsumerGroupSession) error { return nil }
func (h exampleConsumerGroupHandler) ConsumeClaim(sess sarama.ConsumerGroupSession, claim sarama.ConsumerGroupClaim) error {
	for msg := range claim.Messages() {
		fmt.Printf("Message topic:%q partition:%d offset:%d\n", msg.Topic, msg.Partition, msg.Offset)
		sess.MarkMessage(msg, "")
	}
	return nil
}

func main() {
	config := sarama.NewConfig()
	config.Version = sarama.V2_0_0_0 // specify appropriate version
	config.Consumer.Return.Errors = true

	group, err := sarama.NewConsumerGroup([]string{"localhost:9092"}, "my-group", config)
	if err != nil {
		panic(err)
	}
	defer func() { _ = group.Close() }()

	// Track errors
	go func() {
		for err := range group.Errors() {
			fmt.Println("ERROR", err)
		}
	}()

	// Iterate over consumer sessions.
	ctx := context.Background()
	for {
		topics := []string{"my-topic"}
		handler := exampleConsumerGroupHandler{}

		// `Consume` should be called inside an infinite loop; when a
		// server-side rebalance happens, the consumer session will need to be
		// recreated to get the new claims.
		err := group.Consume(ctx, topics, handler)
		if err != nil {
			panic(err)
		}
	}
}
I have some questions:
How do I set the number of consumers in a consumer group?
If I deploy this program in a Pod, can I scale it safely? I mean, assuming one instance is running and I scale the replicas from 1 to 2, will another NewConsumerGroup call with the same group ID work without conflict?
Thank you in advance.
NOTE: I am using Kafka 2.8, and I heard that the sarama-cluster package is DEPRECATED.
Reminder that a consumer group cannot scale beyond the topic's partition count.
Scaling the pods is the correct way to use consumer groups, and using the same group name is correct. However, I'd recommend extracting the group name and the broker addresses into environment variables so they can easily be changed at deploy time (see the sketch below).
As-is, the containerized code would not be able to use localhost as the Kafka connection string, since that would resolve to the pod itself.
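A minimal sketch of that suggestion (the KAFKA_BROKERS and KAFKA_GROUP_ID variable names are only illustrative):
package main

import (
	"fmt"
	"os"
	"strings"

	"github.com/Shopify/sarama"
)

// newGroupFromEnv builds the consumer group from environment variables so the
// broker list and group ID can be changed at deploy time instead of being
// hard-coded to localhost.
func newGroupFromEnv() (sarama.ConsumerGroup, error) {
	// e.g. KAFKA_BROKERS="kafka-0.kafka:9092,kafka-1.kafka:9092"
	brokers := strings.Split(os.Getenv("KAFKA_BROKERS"), ",")
	// Every replica uses the same group ID, e.g. KAFKA_GROUP_ID="my-group".
	groupID := os.Getenv("KAFKA_GROUP_ID")

	config := sarama.NewConfig()
	config.Version = sarama.V2_8_0_0 // matches the Kafka 2.8 brokers mentioned above
	config.Consumer.Return.Errors = true
	return sarama.NewConsumerGroup(brokers, groupID, config)
}

func main() {
	group, err := newGroupFromEnv()
	if err != nil {
		panic(err)
	}
	defer group.Close()
	fmt.Println("consumer group created; call group.Consume in a loop as in the question's example")
}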
I'm creating some trivial apps to learn Firestore.
I started the local Firestore emulator with:
$ gcloud beta emulators firestore start
After starting the emulator, I ran tests with "go test".
I populated Firestore with data and created a function that queried some of the records/documents I had added.
I deleted some of the documents from my app, but they continue to show up in queries.
I tried:
stopping it with Ctrl-C and Ctrl-D
$ gcloud beta emulators firestore stop
restarting my MacBook, but the documents persist.
I don't understand how the data persists after restarting the computer; I'm guessing it is stored in a JSON file or something like that.
I searched but was not able to find any documentation on the emulator.
Am I supposed to start the emulator and then run tests against the emulated Firestore?
How do I flush the Firestore?
The emulator supports an endpoint to clear the database (docs):
curl -v -X DELETE "http://localhost:PORT/emulator/v1/projects/PROJECT_NAME/databases/(default)/documents"
Fill in the PORT and PROJECT_NAME.
Since you're using Go, here's a little test helper I implemented that helps with starting the emulator, waiting for it to come up, purging existing data, initializing a client, and shutting down the emulator after the test completes.
It uses the technique in Juan's answer (which you should mark as the answer).
To use this utility, you just need to write:
client := firestoretestutil.StartEmulator(t, context.Background())
Source code:
// Copyright 2021 Ahmet Alp Balkan
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package firestoretestutil contains test utilities for starting a firestore
// emulator locally for unit tests.
package firestoretestutil
import (
"bytes"
"context"
"fmt"
"net"
"net/http"
"os"
"os/exec"
"sync"
"testing"
"time"
firestore "cloud.google.com/go/firestore"
)
const firestoreEmulatorProj = "dummy-emulator-firestore-project"
// cBuffer is a buffer safe for concurrent use.
type cBuffer struct {
b bytes.Buffer
sync.Mutex
}
func (c *cBuffer) Write(p []byte) (n int, err error) {
c.Lock()
defer c.Unlock()
return c.b.Write(p)
}
func StartEmulator(t *testing.T, ctx context.Context) *firestore.Client {
t.Helper()
port := "8010"
addr := "localhost:" + port
ctx, cancel := context.WithCancel(ctx)
t.Cleanup(func() {
t.Log("shutting down firestore operator")
cancel()
})
// TODO investigate why there are still java processes hanging around
// even though we kill the exec'd command; suspecting the /bin/bash wrapper that gcloud
// applies around the java process.
cmd := exec.CommandContext(ctx, "gcloud", "beta", "emulators", "firestore", "start", "--host-port="+addr)
out := &cBuffer{b: bytes.Buffer{}}
cmd.Stderr, cmd.Stdout = out, out
if err := cmd.Start(); err != nil {
t.Fatalf("failed to start firestore emulator: %v -- out:%s", err, out.b.String())
}
dialCtx, clean := context.WithTimeout(ctx, time.Second*10)
defer clean()
var connected bool
for !connected {
select {
case <-dialCtx.Done():
t.Fatalf("emulator did not come up timely: %v -- output: %s", dialCtx.Err(), out.b.String())
default:
c, err := (&net.Dialer{Timeout: time.Millisecond * 200}).DialContext(ctx, "tcp", addr)
if err == nil {
c.Close()
t.Log("firestore emulator started")
connected = true
break
}
time.Sleep(time.Millisecond * 200) //before retrying
}
}
os.Setenv("FIRESTORE_EMULATOR_HOST", addr)
cl, err := firestore.NewClient(ctx, firestoreEmulatorProj)
if err != nil {
t.Fatal(err)
}
os.Unsetenv("FIRESTORE_EMULATOR_HOST")
truncateDB(t, addr)
return cl
}
func truncateDB(t *testing.T, addr string) {
t.Helper()
// technique adopted from https://stackoverflow.com/a/58866194/54929
req, _ := http.NewRequest(http.MethodDelete, fmt.Sprintf("http://%s/emulator/v1/projects/%s/databases/(default)/documents",
addr, firestoreEmulatorProj), nil)
resp, err := http.DefaultClient.Do(req)
if err != nil {
t.Fatal(err)
}
if resp.StatusCode != http.StatusOK {
t.Fatalf("failed to clear db: %v", resp.Status)
}
}
You can use this:
module.exports.teardown = async () => {
  await Promise.all(firebase.apps().map(app => app.delete()));
};
Now, each time you call teardown, you will delete all data from the Firestore emulator.
The Kubernetes Go client has tons of methods, and I can't find how to get the current CPU & RAM usage of a specific pod (or of all pods).
Can someone tell me what methods I need to call to get the current usage for pods & nodes?
My NodeList:
nodes, err := clientset.CoreV1().Nodes().List(metav1.ListOptions{})
Kubernetes Go Client: https://github.com/kubernetes/client-go
Metrics package: https://github.com/kubernetes/kubernetes/tree/master/staging/src/k8s.io/metrics
As far as I can tell, the metrics server implements the Kubernetes metrics package in order to fetch resource usage from pods and nodes, but I couldn't figure out where and how it does it: https://github.com/kubernetes-incubator/metrics-server
It is correct that client-go does not have support for the metrics types, but the metrics package provides a pregenerated client that can be used to fetch metrics objects and assign them right away to the appropriate structures. The only thing you need to do first is build a config and pass it to the metrics client. So a simple client for metrics would look like this:
package main
import (
"k8s.io/client-go/tools/clientcmd"
metrics "k8s.io/metrics/pkg/client/clientset/versioned"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
func main() {
var kubeconfig, master string //empty, assuming inClusterConfig
config, err := clientcmd.BuildConfigFromFlags(master, kubeconfig)
if err != nil{
panic(err)
}
mc, err := metrics.NewForConfig(config)
if err != nil {
panic(err)
}
mc.MetricsV1beta1().NodeMetricses().Get("your node name", metav1.GetOptions{})
mc.MetricsV1beta1().NodeMetricses().List(metav1.ListOptions{})
mc.MetricsV1beta1().PodMetricses(metav1.NamespaceAll).List(metav1.ListOptions{})
mc.MetricsV1beta1().PodMetricses(metav1.NamespaceAll).Get("your pod name", metav1.GetOptions{})
}
Each of the above methods from the metrics client returns an appropriate structure (you can check those here) and an error (if any), which you should handle according to your requirements.
Here is an example.
package main
import (
"fmt"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/tools/clientcmd"
metrics "k8s.io/metrics/pkg/client/clientset/versioned"
)
func main() {
var kubeconfig, master string //empty, assuming inClusterConfig
config, err := clientcmd.BuildConfigFromFlags(master, kubeconfig)
if err != nil {
panic(err)
}
mc, err := metrics.NewForConfig(config)
if err != nil {
panic(err)
}
podMetrics, err := mc.MetricsV1beta1().PodMetricses(metav1.NamespaceAll).List(metav1.ListOptions{})
if err != nil {
fmt.Println("Error:", err)
return
}
for _, podMetric := range podMetrics.Items {
podContainers := podMetric.Containers
for _, container := range podContainers {
cpuQuantity, okCPU := container.Usage.Cpu().AsInt64()
memQuantity, okMem := container.Usage.Memory().AsInt64()
if !okCPU || !okMem {
continue
}
msg := fmt.Sprintf("Container Name: %s \n CPU usage: %d \n Memory usage: %d", container.Name, cpuQuantity, memQuantity)
fmt.Println(msg)
}
}
}
The API you're looking for in new versions of Kubernetes (tested on mine as of 1.10.7) is the metrics.k8s.io/v1beta1 API route.
You can see it locally if you run a kubectl proxy and check http://localhost:8001/apis/metrics.k8s.io/v1beta1/pods and /nodes on your localhost.
I see where your confusion is, though. At the time of writing, metrics/v1beta1 does not look like it has a generated typed package (https://godoc.org/k8s.io/client-go/kubernetes/typed), and it doesn't appear in the kubernetes.Clientset object.
You can hit all available endpoints directly through the rest.RESTClient object and just specify metrics/v1beta1 as the versionedAPIPath. That will be more work and less convenient than the nicely wrapped Clientset, but I'm not sure how long it will take before that API shows up in that interface; a rough sketch follows below.
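For illustration, a rough sketch of that raw-path approach, assuming a kubeconfig at the default location (note that DoRaw takes a context in recent client-go versions):
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the default kubeconfig (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Same path you would see via `kubectl proxy` at
	// http://localhost:8001/apis/metrics.k8s.io/v1beta1/pods
	raw, err := clientset.CoreV1().RESTClient().
		Get().
		AbsPath("/apis/metrics.k8s.io/v1beta1/pods").
		DoRaw(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Println(string(raw))
}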
I want to create a service on Kubernetes which manages Helm charts on the cluster. It installs charts from a private chart repository. Since I didn't find any documentation on how to use the Helm client API, I was looking for some samples or guidelines for creating a service on top of the Helm client.
FOR HELM3
As other answers have pointed out, with Helm 2 you need to talk to Tiller, which complicates things.
It is much cleaner with Helm 3, since Tiller was removed and the Helm client communicates directly with the Kubernetes API server.
Here is example code to install a helm chart programmatically with Helm 3:
package main
import (
"fmt"
"os"
"helm.sh/helm/v3/pkg/action"
"helm.sh/helm/v3/pkg/chart/loader"
"helm.sh/helm/v3/pkg/kube"
_ "k8s.io/client-go/plugin/pkg/client/auth"
)
func main() {
chartPath := "/tmp/my-chart-0.1.0.tgz"
chart, err := loader.Load(chartPath)
if err != nil {
panic(err)
}
kubeconfigPath := "/tmp/my-kubeconfig"
releaseName := "my-release"
releaseNamespace := "default"
actionConfig := new(action.Configuration)
if err := actionConfig.Init(kube.GetConfig(kubeconfigPath, "", releaseNamespace), releaseNamespace, os.Getenv("HELM_DRIVER"), func(format string, v ...interface{}) {
fmt.Printf(format+"\n", v...)
}); err != nil {
panic(err)
}
iCli := action.NewInstall(actionConfig)
iCli.Namespace = releaseNamespace
iCli.ReleaseName = releaseName
rel, err := iCli.Run(chart, nil)
if err != nil {
panic(err)
}
fmt.Println("Successfully installed release: ", rel.Name)
}
Since it took me some time to get this working, here is a minimal example (no error handling, details of getting the kube config left out, ...) for listing release names:
package main
import (
"fmt"

"k8s.io/client-go/kubernetes"
"k8s.io/helm/pkg/helm"
"k8s.io/helm/pkg/helm/portforwarder"
)
func main() {
// omit getting kubeConfig, see: https://github.com/kubernetes/client-go/tree/master/examples
// get kubernetes client
client, _ := kubernetes.NewForConfig(kubeConfig)
// port forward tiller
tillerTunnel, _ := portforwarder.New("kube-system", client, kubeConfig)
// new helm client talking to the local end of the tunnel
host := fmt.Sprintf("127.0.0.1:%d", tillerTunnel.Local)
helmClient := helm.NewClient(helm.Host(host))
// list/print releases
resp, _ := helmClient.ListReleases()
for _, release := range resp.Releases {
fmt.Println(release.GetName())
}
}
I spent a long time trying to set up a Helm installation with --set values, and I found that the best places to look for the currently available functionality are the official Helm documentation examples and the official Go docs for the client.
This only pertains to Helm 3.
Here's an example I managed to get working by using the resources linked above.
I haven't found a more elegant way to define values than nesting map[string]interface{} recursively, so if anyone knows a better way, please let me know.
Values should be roughly equivalent to:
helm install myrelease /mypath --set redis.sentinel.masterName=BigMaster,redis.sentinel.pass="random" ... etc
Notice the use of settings.RESTClientGetter() rather than kube.GetConfig, as in other answers. I found kube.GetConfig to be causing nasty conflicts with k8s clients.
package main
import (
"log"
"os"

"helm.sh/helm/v3/pkg/action"
"helm.sh/helm/v3/pkg/chart/loader"
"helm.sh/helm/v3/pkg/cli"
)
func main() {
chartPath := "/mypath"
namespace := "default"
releaseName := "myrelease"
settings := cli.New()
actionConfig := new(action.Configuration)
// Init the action configuration against the target namespace; the
// HELM_DRIVER env var selects the release storage backend.
if err := actionConfig.Init(settings.RESTClientGetter(), namespace,
os.Getenv("HELM_DRIVER"), log.Printf); err != nil {
log.Printf("%+v", err)
os.Exit(1)
}
// define values
vals := map[string]interface{}{
"redis": map[string]interface{}{
"sentinel": map[string]interface{}{
"masterName": "BigMaster",
"pass": "random",
"addr": "localhost",
"port": "26379",
},
},
}
// load chart from the path
chart, err := loader.Load(chartPath)
if err != nil {
panic(err)
}
client := action.NewInstall(actionConfig)
client.Namespace = namespace
client.ReleaseName = releaseName
// client.DryRun = true - very handy!
// install the chart here
rel, err := client.Run(chart, vals)
if err != nil {
panic(err)
}
log.Printf("Installed Chart from path: %s in namespace: %s\n", rel.Name, rel.Namespace)
// this will confirm the values set during installation
log.Println(rel.Config)
}
I was looking for the same answer; since I know the solution now, I'm sharing it here.
What you are looking for is to write a wrapper around the Helm library.
First you need a client which talks to the Tiller in your cluster. For that you need to create a tunnel to Tiller from your localhost. Use this (it's the same link as kiran shared).
Set up the Helm environment variables; look here.
Use this next. It will return a Helm client. (You might need to write a wrapper around it to work with your setup of clusters.)
After you get the *helm.Client handle, you can use Helm's client API given here. You just have to use the one you need with the appropriate values.
You might need some utility functions defined here, like loading a chart from a folder/archive/file.
If you want to do something more, you pretty much locate the method in the docs and call it using the client.