I'm trying to deal with a problem (most probably a design one) regarding the use of channels and how to handle them properly.
I'm using Knative Eventing/CloudEvents to create an eventing pipeline.
I want to be able to handle different channels in order to receive events originating from different sources/methods.
In order to do so, I have the implementation that follows (code has been trimmed in order to describe the issue concisely).
I have a file1.go which defines an EventHandler struct, its associated methods, and a couple of exported methods (CreatePreview() and SaveAndPublish()) that are the "normal" behaviour of the app and that actually receive/deal with whatever value comes in on the Channel:
type EventHandler struct {
Channel chan string
}
func (ev *EventHandler) Handle(event cloudevents.Event) {
if event.Data == nil {
(...)
}
var data string
if err := event.DataAs(&data); err != nil {
(...)
}
ev.Channel <- data
defer close(ev.Channel)
}
func (ev *EventHandler) Create(param *Element) (error) {
(...) //Unimportant code
}
func (repo *Repository) CreatePreview(param1 string, param2 string, eventHandler *EventHandler) (*pb.PreviewResponse, error) {
(...)
err := eventHandler.Create(&document)
(...)
preview := <- eventHandler.Channel
(...)
}
func (repo *Repository) SaveAndPublish(param1 string, param2 bool, eventHandler *EventHandler) (*pb.PublishResponse, error) {
(...)
err := eventHandler.Create(&documentToUpdate)
(...)
published := <- eventHandler.Channel
(...)
return repo.SomeOtherMethod(published.ID)
}
Now, in my main.go file, I have the "regular" start of a service, including a gRPC listener, an HTTP listener and the handling of events. This is done via cmux. So here's a sample of the code (again, simplified):
func HandlerWrapper(event cloudevents.Event) {
//TODO: HOW DO I HANDLE THIS???
}
// This approach seems to cause issues since it's always the same instance
// var (
// eventHandler = &rep.EventHandler{Channel: make(chan string)}
// )
func (service *RPCService) Start() (err error) {
port, err := strconv.Atoi(os.Getenv("LISTEN_PORT"))
if err != nil {
log.Fatalf("failed to listen: %v", err)
}
lis, err := net.Listen("tcp", fmt.Sprintf(":%d", port))
if err != nil {
log.Fatalf("failed to listen: %v", err)
}
// Create multiplexer and listener types
mux := cmux.New(lis)
grpcLis := mux.Match(cmux.HTTP2())
httpLis := mux.Match(cmux.HTTP1())
// *************
// gRPC
// *************
service.server = grpc.NewServer()
reflection.Register(service.server)
pb.RegisterStoryServiceServer(service.server, service)
// *************
// HTTP
// *************
// Declare new CloudEvents Receiver
c, err := kncloudevents.NewDefaultClient(httpLis)
if err != nil {
log.Fatal("Failed to create client, ", err)
}
// *************
// Start Listeners
// *************
// start gRPC server
go func() {
if err := service.server.Serve(grpcLis); err != nil {
log.Fatalf("failed to gRPC serve: %s", err)
}
}()
// start HTTP server
go func() {
// With the line below, I'd have to create a Receiver per eventHandler. Not cool
// log.Fatal(c.StartReceiver(context.Background(), eventHandler.Handle))
// Here we use a wrapper to deal with the event handling and have a single listener
log.Fatal(c.StartReceiver(context.Background(), HandlerWrapper))
}()
if err := mux.Serve(); err != nil {
log.Fatalf("failed to Mux serve: %s", err)
}
return
}
//CreatePreview is used to save a preview for a story
func (service *RPCService) CreatePreview(ctx context.Context, input *pb.PreviewRequest) (*pb.PreviewResponse, error){
eventHandler := &rep.EventHandler{Channel: make(chan string)}
story, err := service.repo.CreatePreview("param1", "param2", eventHandler)
if err != nil {
return nil, err
}
// Return matching the `CreatePreviewResponse` message we created in our
// protobuf definition.
return &pb.PreviewResponse{Story: story}, nil
}
// SaveAndPublish is used to save a story and publish it, returning the story saved.
func (service *RPCService) SaveAndPublish(ctx context.Context, input *pb.PublishRequest) (*pb.PublishResponse, error){
eventHandler := &rep.EventHandler{Channel: make(chan string)}
story, err := service.repo.SaveAndPublish("param1", true, eventHandler)
if err != nil {
return nil, err
}
// Return matching the `SaveAndPublishResponse` message we created in our
// protobuf definition.
return &pb.PublishResponse{Story: story}, nil
}
Now, I know that instead of instantiating a single, global eventHandler just so I can pass the eventHandler.Handle method to c.StartReceiver() in main.go, I can define a wrapper that would, maybe, contain a list of eventHandlers (the HandlerWrapper() function in main.go).
However, I do not know how I could identify which instance of an EventHandler is which, nor how to properly handle and route these operations, and that is my question:
How do I go about this case where I want to create a wrapper (a single function to pass into c.StartReceiver()) and then let the event be handled by the correct instance of Handle()?
I hope the question is clear. I've been trying to get my head around this for a couple days already and can't figure out how to do it.
Presumably, you should be able to differentiate events by using the different sources/methods coming from that event. A quick look at the event spec shows you could split into channels based on source, for example.
The main thing I see that isn't being utilized here is the context object. It seems you could glean the source from that context. This can be seen in their hello world example (check out the receive function).
For your example:
// these are all the handlers for the different sources.
type EventHandlers map[string]CloudEventHandler
var _eventHandlerKey = "cloudEventHandlers"
func HandlerWrapper(ctx context.Context, event cloudevents.Event) {
	// Get the event source from the event context, ex: src = "/foo".
	src := event.Context.GetSource()
	// Then get the appropriate handler for that source (attached to the context).
	handler := ctx.Value(_eventHandlerKey).(EventHandlers)[src]
	handler.SendToChannel(event)
}
func main() {
	eventHandlers := make(EventHandlers)
	// create all the handlers/channels we need and register them by source.
	for _, source := range sourceTypes { // foo bar baz
		eventHandlers[source] = NewHandler(source)
	}
	// start HTTP server
	go func() {
		// Add the handlers to the context so HandlerWrapper can look them up.
		ctx := context.WithValue(context.Background(), _eventHandlerKey, eventHandlers)
		log.Fatal(c.StartReceiver(ctx, HandlerWrapper))
	}()
}
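As a design note, if you would rather not pass the handler map through context.Value, a closure over the same map works as well. This is only a sketch under the same assumed names (EventHandlers, NewHandler, the cloudevents client c):
// NewHandlerWrapper builds the receiver function as a closure over the handler map
// instead of stashing the map in the context.
func NewHandlerWrapper(handlers EventHandlers) func(context.Context, cloudevents.Event) {
	return func(ctx context.Context, event cloudevents.Event) {
		if handler, ok := handlers[event.Context.GetSource()]; ok {
			handler.SendToChannel(event)
		}
	}
}
// usage: log.Fatal(c.StartReceiver(context.Background(), NewHandlerWrapper(eventHandlers)))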
If there are, say, 3 different sources to be supported, you can use the factory pattern to instantiate the different handlers/channels, plus an interface that all of them implement.
// CloudEventHandler Handles sending cloud events to the proper channel for processing.
type CloudEventHandler interface {
SendToChannel(cloudevents.Event)
}
type fooHandler struct {channel chan string}
type barHandler struct {channel chan int}
type bazHandler struct {channel chan bool}
func NewHandler(source string) CloudEventHandler {
	switch source {
	case "/foo":
		return &fooHandler{channel: make(chan string, 2)} // buffered channel
	case "/bar":
		return &barHandler{channel: make(chan int, 2)}
	case "/baz":
		return &bazHandler{channel: make(chan bool, 2)}
	default:
		return nil // unknown source; could also return an error here
	}
}
func (fh *fooHandler) SendToChannel(event cloudevents.Event) {
var data string
if err := event.DataAs(&data); err != nil {
// (...)
}
go func() {
fh.channel <- data
}()
}
func (bh *barHandler) SendToChannel(event cloudevents.Event) {
var data int
if err := event.DataAs(&data); err != nil {
// (...)
}
go func() {
bh.channel <- data
}()
}
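On the consuming side (your CreatePreview/SaveAndPublish), each concrete handler can expose its typed channel through an accessor so callers can block on the value produced by SendToChannel. The accessor name below is hypothetical, not part of any SDK:
// Data exposes fooHandler's typed channel to callers (hypothetical helper).
func (fh *fooHandler) Data() <-chan string {
	return fh.channel
}

// e.g. in the repository method, assuming the "/foo" handler was injected:
// preview := <-fooHandler.Data()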
Related
I have a websocket client that receives multiple data types. A function unmarshals the JSON received from the server into different structs depending on the data received. The struct is then returned as an interface through a channel to my main file. Since I receive multiple data types from the server, I am not able to specify the exact return value of my parsing function.
With the data in my main file, I would like to have a way to go through the different values in the data. Since I am returning an interface, this seems impossible to do. Whenever I try to index the interface, I receive an error saying c.VALUE undefined (type interface{} has no field or method VALUE).
I feel like I'm not doing something right here. The 2 solutions I've thought about so far are:
having my channel value be a generic and my listen & JSON decoder funcs (these are all put below) all return a generic or
create an interface with methods. My channel would be of this type and again, my listen & JSON decoder funcs would return this interface.
I'm not sure if either of these ways would actually solve my issue, though. I also don't know if there is one way that would be more performant compared to other ways.
Here is my code to better understand the issue
func main() {
// check if in production or testing mode
var testing bool = true // default to testing
args := os.Args
isTesting(args, &testing, &stored_data.Base_currency)
// go routine handler
comms := make(chan os.Signal, 1)
signal.Notify(comms, os.Interrupt, syscall.SIGTERM)
ctx := context.Background()
ctx, cancel := context.WithCancel(ctx)
var wg sync.WaitGroup
// set ohlc interval and pairs
OHLCinterval := 5
pairs := []string{"BTC/" + stored_data.Base_currency, "EOS/" + stored_data.Base_currency}
// create ws connections
pubSocket, err := ws_client.ConnectToServer("public", testing)
if err != nil {
fmt.Println(err)
os.Exit(1)
}
// create websocket channels
pubCh := make(chan interface{})
defer close(pubCh)
// listen to websocket connections
wg.Add(1)
go pubSocket.PubListen(ctx, &wg, pubCh, testing)
// connect to data streams
pubSocket.SubscribeToOHLC(pairs, OHLCinterval)
// listen to public socket
go func() {
for c := range pubCh {
fmt.Println(c) // This is where I would like to be able to go through my data more thoroughly
}
}()
<-comms
cancel()
wg.Wait()
}
Here is what happens in the PubListen function and my JSON decoding function
func (socket *Socket) PubListen(ctx context.Context, wg *sync.WaitGroup, ch chan interface{}, testing bool) {
defer wg.Done()
defer socket.Close()
var res interface{}
socket.OnTextMessage = func(message string, socket Socket) {
res = pubJsonDecoder(message, testing)
ch <- res
}
<-ctx.Done()
log.Println("closing public socket")
return
}
func pubJsonDecoder(response string, testing bool) interface{} {
var resp interface{}
byteResponse := []byte(response)
resp, err := ohlcResponseDecoder(byteResponse, testing)
if err != nil {
resp, err = heartBeatResponseDecoder(byteResponse, testing)
if err != nil {
resp, err = serverConnectionStatusResponseDecoder(byteResponse, testing)
if err != nil {
resp, err = ohlcSubscriptionResponseDecoder(byteResponse, testing)
}
}
}
return resp
}
Thanks for any help you may have
Since you seem to control the complete list of types which can be unserialized, you can use a type switch:
switch v := c.(type) {
case *ohlcResponse:
	// in this block, v is an *ohlcResponse
case *heartBeatResponse:
	// in this block, v is a *heartBeatResponse
case *serverConnectionStatusResponse:
	// in this block, v is a *serverConnectionStatusResponse
case *ohlcSubscriptionResponse:
	// in this block, v is an *ohlcSubscriptionResponse
default:
	// choose some way to report unhandled types:
	log.Fatalf("unhandled response type: %T", c)
}
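For instance, the switch could live directly in the goroutine that drains pubCh in main. This is only a sketch: it assumes your decoder functions return pointers to those structs, and the fields mentioned in the comments are placeholders for whatever your types actually contain.
go func() {
	for c := range pubCh {
		switch v := c.(type) {
		case *ohlcResponse:
			fmt.Println("ohlc update:", v) // e.g. v.Pair, v.Close (hypothetical fields)
		case *heartBeatResponse:
			// heartbeats can usually be ignored
		case *serverConnectionStatusResponse:
			fmt.Println("connection status:", v)
		case *ohlcSubscriptionResponse:
			fmt.Println("subscription ack:", v)
		default:
			fmt.Printf("unhandled response type: %T\n", c)
		}
	}
}()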
I use the following function, and I need to raise its test coverage (if possible to 100%). The problem is that I would typically use an interface to handle such cases in Go, but this specific case is a bit more tricky and I'm not sure how to do it. Any idea?
The package I use, https://pkg.go.dev/google.golang.org/genproto/googleapis/cloud/compute/v1, doesn't expose an interface, so I'm not sure how I can mock it.
import (
"context"
"errors"
"fmt"
"os"
compute "cloud.google.com/go/compute/apiv1"
"google.golang.org/api/iterator"
"google.golang.org/api/option"
computev1 "google.golang.org/genproto/googleapis/cloud/compute/v1"
)
func Res(ctx context.Context, project string, region string,vpc string,secret string) error {
c, err := compute.NewAddressesRESTClient(ctx, option.WithCredentialsFile(secret))
if err != nil {
return err
}
defer c.Close()
addrReq := &computev1.ListAddressesRequest{
Project: project,
Region: region,
}
it := c.List(ctx, addrReq)
for {
resp, err := it.Next()
if err == iterator.Done {
break
}
if err != nil {
return err
}
if *(resp.Status) != "IN_USE" {
return ipConverter(*resp.Name, vpc)
}
}
return nil
}
Whenever I find myself in this scenario, I found that the easiest solution is to create missing interfaces myself. I limit these interfaces to the types and functions that I am using, instead of writing interfaces for the entire library. Then, in my code, instead of accepting third-party concrete types, I accept my interfaces for those types. Then I use gomock to generate mocks for these interfaces as usual.
The following is a descriptive example inspired by your code.
type RestClient interface {
List(context.Context, *computev1.ListAddressesRequest) (ListResult, error) // assuming List returns ListResult type.
Close() error
}
func newRestClient(ctx context.Context, secret string) (RestClient, error) {
return compute.NewAddressesRESTClient(ctx, option.WithCredentialsFile(secret))
}
func Res(ctx context.Context, project string, region string, vpc string, secret string) error {
c, err := newRestClient(ctx, secret)
if err != nil {
return err
}
defer c.Close()
return res(ctx, project, region, vpc, c)
}
func res(ctx context.Context, project string, region string, vpc string, c RestClient) error {
addrReq := &computev1.ListAddressesRequest{
Project: project,
Region: region,
}
it, err := c.List(ctx, addrReq)
if err != nil {
return err
}
for {
resp, err := it.Next()
if err == iterator.Done {
break
}
if err != nil {
return err
}
if *(resp.Status) != "IN_USE" {
return ipConverter(*resp.Name, vpc)
}
}
return nil
}
Now you can test the important bits of the Res function by injecting a mock RestClient to the internal res function.
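As a rough illustration of that last step, here is a hand-written fake instead of a gomock-generated one. It assumes ListResult is (or can be declared as) a small interface with a Next() (*computev1.Address, error) method, since that is all res needs from it:
// fakeList yields the configured addresses, then iterator.Done.
type fakeList struct {
	addrs []*computev1.Address
	i     int
}

func (f *fakeList) Next() (*computev1.Address, error) {
	if f.i >= len(f.addrs) {
		return nil, iterator.Done
	}
	a := f.addrs[f.i]
	f.i++
	return a, nil
}

// fakeRestClient satisfies the RestClient interface sketched above.
type fakeRestClient struct{ list *fakeList }

func (f *fakeRestClient) List(context.Context, *computev1.ListAddressesRequest) (ListResult, error) {
	return f.list, nil
}

func (f *fakeRestClient) Close() error { return nil }

func TestResAllAddressesInUse(t *testing.T) {
	inUse := "IN_USE"
	c := &fakeRestClient{list: &fakeList{addrs: []*computev1.Address{{Status: &inUse}}}}
	if err := res(context.Background(), "my-project", "us-east1", "my-vpc", c); err != nil {
		t.Fatalf("res returned an unexpected error: %v", err)
	}
}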
One obstacle to testability here is that you instantiate a client inside your Res function rather than injecting it. Because
the secret doesn't change during the lifetime of the programme,
the methods of *compute.AddressesClient (other than Close) are concurrency-safe,
you could create one client and reuse it for each invocation of Res. To inject it into Res, you can declare some Compute type and turn Res into a method on that type:
type Compute struct {
Lister Lister // some appropriate interface type
}
func (cp *Compute) Res(ctx context.Context, project, region, vpc string) error {
addrReq := &computev1.ListAddressesRequest{
Project: project,
Region: region,
}
it := cp.Lister.List(ctx, addrReq)
for {
resp, err := it.Next()
if err == iterator.Done {
break
}
if err != nil {
return err
}
if *(resp.Status) != "IN_USE" {
return ipConverter(*resp.Name, vpc)
}
}
return nil
}
One question remains: how should you declare Lister? One possibility is
type Lister interface {
List(ctx context.Context, req *computev1.ListAddressesRequest, opts ...gax.CallOption) *compute.AddressIterator
}
However, because compute.AddressIterator is a struct type with some unexported fields and for which package compute provides no factory function, you can't easily control how the iterator returned from List behaves in your tests. One way out is to declare an additional interface,
type Iterator interface {
Next() (*computev1.Address, error)
}
and change the result type of List from *compute.AddressIterator to Iterator:
type Lister interface {
List(ctx context.Context, req *computev1.ListAddressesRequest, opts ...gax.CallOption) Iterator
}
Then you can declare another struct type for the real Lister and use that on the production side:
type RealLister struct {
Client *compute.AddressesClient
}
func (rl *RealLister) List(ctx context.Context, req *computev1.ListAddressesRequest, opts ...gax.CallOption) Iterator {
return rl.Client.List(ctx, req, opts...)
}
func main() {
secret := "don't hardcode me"
ctx, cancel := context.WithCancel(context.Background()) // for instance
defer cancel()
c, err := compute.NewAddressesRESTClient(ctx, option.WithCredentialsFile(secret))
if err != nil {
log.Fatal(err) // or deal with the error in some way
}
defer c.Close()
cp := Compute{Lister: &RealLister{Client: c}}
if err := cp.Res(ctx, "my-project", "us-east-1", "my-vpc"); err != nil {
log.Fatal(err) // or deal with the error in some way
}
}
For your tests, you can declare another struct type that will act as a configurable test double:
type FakeLister func(ctx context.Context, req *computev1.ListAddressesRequest, opts ...gax.CallOption) Iterator
func (fl FakeLister) List(ctx context.Context, req *computev1.ListAddressesRequest, opts ...gax.CallOption) Iterator {
return fl(ctx, req, opts...)
}
To control the behaviour of the Iterator in your test, you can declare another configurable concrete type:
type FakeIterator struct{
Err error
Status string
}
func (fi *FakeIterator) Next() (*computev1.Address, error) {
addr := computev1.Address{Status: &fi.Status}
return &addr, fi.Err
}
A test function may look like this:
func TestResStatusInUse(t *testing.T) {
// Arrange
l := func(_ context.Context, _ *computev1.ListAddressesRequest, _ ...gax.CallOption) Iterator {
return &FakeIterator{
Status: "IN_USE",
Err: nil,
}
}
cp := Compute{Lister: FakeLister(l)}
dummyCtx := context.Background()
// Act
err := cp.Res(dummyCtx, "my-project", "us-east-1", "my-vpc")
// Assert
if err != nil {
// ...
}
}
I am trying to learn Go; I have a Java background. What I am trying to accomplish is to mock Pub/Sub so that, when a message is received, a channel sends a message.
What I am finding hard is the fact that the method has no params. I have to mock the pubsub Receive and, in that method, send a mock message over a channel.
when I run
go test ./... --cover
I currently get 0%
method to test
func (s *Service) Consume() error {
subscription := os.Getenv("OUTO_PUBSUB_SUBSCRIPTION")
sub := s.Client.Subscription(subscription)
ctx := context.Background()
err := sub.Receive(ctx, func(ctx context.Context, msg *pubsub.Message) {
msg.Ack()
fmt.Println(msg.Data)
s.messageChannel <- string(msg.Data)
})
if err != nil {
fmt.Println(err)
return fmt.Errorf("sub.Receive: %v", err)
}
return nil
}
pubsub test
func TestConsume(t *testing.T) {
// create channels
testChan := make(chan string, 100)
service := Service{messageChannel: testChan}
sending := service.messageChannel
sending <- "Test message"
t.Run("Test when consume runs a message is sent via channel", func(t *testing.T) {
got := <-sending
want := "Test message"
if want != got {
t.Fatalf("wanted %v, got %v", want, got)
}
})
}
I am able to get the channel test to work, but I need to call the Consume method so that the coverage tooling knows I am actually testing it.
Any advice?
An answer was provided, but I ran into an issue mocking the Pub/Sub client. When implementing the mock client the test just hangs and never finishes, and when using the MockClient struct the IDE complains that it is not of type *pubsub.Client.
type MockClient struct {
}
type MockSubscription struct {
}
func (mc MockClient) Subscription() MockSubscription {
return MockSubscription{}
}
func (ms MockSubscription) Receive(ctx context.Context, f func(context.Context, *pubsub.Message)) error {
/* create the message you're mocking receiving here */
/* you might have to mock the message struct and its interface if you want to validate that it has been ACKed */
msg := new(pubsub.Message)
f(ctx, msg)
// return an error or not i'll be returning nil for now
return nil
}
func TestConsume(t *testing.T) {
conn, err := grpc.Dial("myMockServerAddress", grpc.WithInsecure())
if err != nil {
// TODO: Handle the error
}
ctx := context.Background()
// Now create the pubsub client with the grpc conn
client, err := pubsub.NewClient(ctx, "mockProject", option.WithGRPCConn(conn))
// create channels
testChan := make(chan string, 100)
//when MockClient{} is here, an error occurs: Cannot use 'MockClient{}' (type MockClient) as the type *pubsub.Client
service := Service{messageChannel: testChan, Client: client}
sending := service.messageChannel
sending <- "Test message"
t.Run("Test when consume runs a message is sent via channel", func(t *testing.T) {
service.Consume() // run the function so you can test it
got := <-sending
want := "Test message"
if want != got {
t.Fatalf("wanted %v, got %v", want, got)
}
})
}
It's a bit tedious, but you'd have to create a mock of your Service struct and its associated interface.
Below is an example, you may have to change it to match the interface properly.
type MockClient struct {
}
type MockSubscription struct {
}
func (mc MockClient) Subscription(sub string) MockSubscription {
return MockSubscription{}
}
func (ms MockSubscription) Receive(ctx context.Context,f func(context.Context, *pubsub.Message)) error {
/* create the message you're mocking receiving here */
/* you might have to mock the message struct and its interface if you want to validate that it has been ACKed */
msg := new(pubsub.Message)
f(ctx, msg)
// return an error or not i'll be returning nil for now
return nil
}
func TestConsume(t *testing.T) {
// create channels
testChan := make(chan string, 100)
service := Service{messageChannel: testChan, Client: MockClient{}}
sending := service.messageChannel
sending <- "Test message"
t.Run("Test when consume runs a message is sent via channel", func(t *testing.T) {
service.Consume() // run the function so you can test it
got := <-sending
want := "Test message"
if want != got {
t.Fatalf("wanted %v, got %v", want, got)
}
})
}
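Regarding the follow-up error ("Cannot use MockClient (type MockClient) as the type *pubsub.Client"): that happens because Service depends on the concrete *pubsub.Client. One possible way around it, sketched below with hypothetical names and assuming you can change the Service struct, is to have Service depend on narrow interfaces that both a thin wrapper around the real client and the mocks satisfy:
// Subscriber is the only behaviour Consume needs from a subscription.
// *pubsub.Subscription already has a Receive method with this shape.
type Subscriber interface {
	Receive(ctx context.Context, f func(context.Context, *pubsub.Message)) error
}

// SubscriptionGetter is the only behaviour Consume needs from the client.
type SubscriptionGetter interface {
	Subscription(name string) Subscriber
}

// realClient wraps *pubsub.Client so it satisfies SubscriptionGetter.
type realClient struct{ c *pubsub.Client }

func (r realClient) Subscription(name string) Subscriber {
	return r.c.Subscription(name)
}

// Service now holds the interface instead of *pubsub.Client. For the test,
// MockClient's Subscription method would likewise need to return a Subscriber.
type Service struct {
	Client         SubscriptionGetter
	messageChannel chan string
}
The hang you saw is most likely the real client's sub.Receive blocking against an unreachable mock server; with a mock Subscriber like the one above, Receive invokes the callback once and returns.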
I am new to Golang and Kubernetes. I tried to create a custom controller in Golang using the client-go library. The controller connects to the K8s API server, brings the pod details into the cache and sends them to the workqueue, where I perform some actions on the pods. However, I want the process to be fast, and for that I need to create multiple workers. How do I create multiple workers that act upon the same workqueue and speed up the code?
Below is the sample of my controller:
package main
import (
"context"
"flag"
"fmt"
"log"
"time"
"github.com/golang/glog"
"k8s.io/apimachinery/pkg/watch"
v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
rs "k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/util/runtime"
"k8s.io/apimachinery/pkg/util/wait"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/cache"
"k8s.io/client-go/tools/clientcmd"
"k8s.io/client-go/util/workqueue"
)
type Controller struct {
clientset kubernetes.Interface
queue workqueue.RateLimitingInterface
informer cache.SharedIndexInformer
}
var (
//used the config file
kubeconfig = flag.String("kubeconfig", "location", "absolute path to the kubeconfig file")
)
// Creating the SharedIndexInformer to bring the details into the cache
func CreateSharedIndexInformer() {
flag.Parse()
//creating config using the kubeconfig file
configuration, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
	if err != nil {
		fmt.Println("Unable to find the file")
		panic(err.Error())
	}
	cs, err := kubernetes.NewForConfig(configuration)
	if err != nil {
		panic(err.Error())
	}
	//Creating the queue
	queue := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())
//Creating the SharedIndexInformer
informer := cache.NewSharedIndexInformer(
&cache.ListWatch{
ListFunc: func(options metav1.ListOptions) (rs.Object, error) {
return cs.CoreV1().Pods("").List(context.TODO(), options)
},
WatchFunc: func(options metav1.ListOptions) (watch.Interface, error) {
return cs.CoreV1().Pods("").Watch(context.TODO(), options)
},
},
&v1.Pod{},
time.Second*10, //Skip resync
cache.Indexers{},
)
informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
key, err := cache.MetaNamespaceKeyFunc(obj)
if err == nil {
queue.Add(key)
}
		},
DeleteFunc: func(obj interface{}) {
key, err := cache.DeletionHandlingMetaNamespaceKeyFunc(obj)
if err == nil {
queue.Add(key)
}
},
})
controller := &Controller{
clientset: cs,
queue: queue,
informer: informer,
}
stop := make(chan struct{})
go controller.Run(stop)
// Wait forever
select {}
}
func (c *Controller) Run(stopCh chan struct{}) {
// don't let panics crash the process
defer runtime.HandleCrash()
// make sure the work queue is shutdown which will trigger workers to end
defer c.queue.ShutDown()
//c.logger.Info("Starting kubewatch controller")
go c.informer.Run(stopCh)
// wait for the caches to synchronize before starting the worker
if !cache.WaitForCacheSync(stopCh, c.informer.HasSynced) {
runtime.HandleError(fmt.Errorf("Timed out waiting for caches to sync"))
return
}
//c.logger.Info("Kubewatch controller synced and ready")
// runWorker will loop until "something bad" happens. The .Until will
// then rekick the worker after one second
go wait.Until(c.runWorker, time.Second, stopCh)
<-stopCh
}
func (c *Controller) runWorker() {
// processNextWorkItem will automatically wait until there's work available
for c.processNextItem() {
// continue looping
}
}
// processNextWorkItem deals with one key off the queue. It returns false
// when it's time to quit.
func (c *Controller) processNextItem() bool {
// pull the next work item from queue. It should be a key we use to lookup
// something in a cache
key, quit := c.queue.Get()
if quit {
return false
}
// you always have to indicate to the queue that you've completed a piece of
// work
defer c.queue.Done(key)
var obj string
var ok bool
if obj, ok = key.(string); !ok {
// As the item in the workqueue is actually invalid, we call
// Forget here else we'd go into a loop of attempting to
// process a work item that is invalid.
		c.queue.Forget(key)
		runtime.HandleError(fmt.Errorf("expected string in workqueue but got %#v", obj))
		return true
	}
// do your work on the key.
err := c.processBusinessLogic(key.(string))
if err == nil {
// No error, tell the queue to stop tracking history
c.queue.Forget(key)
} else if c.queue.NumRequeues(key) < 10 {
//c.logger.Errorf("Error processing %s (will retry): %v", key, err)
// requeue the item to work on later
c.queue.AddRateLimited(key)
} else {
// err != nil and too many retries
//c.logger.Errorf("Error processing %s (giving up): %v", key, err)
c.queue.Forget(key)
runtime.HandleError(err)
}
return true
}
func (c *Controller) processBusinessLogic(key string) error {
obj, exists, err := c.informer.GetIndexer().GetByKey(key)
if err != nil {
glog.Errorf("Fetching object with key %s from store failed with %v", key, err)
return err
}
if !exists {
// Below we will warm up our cache with a Pod, so that we will see a delete for one pod
fmt.Printf("Pod %s does not exist anymore\n", key)
} else {
//Perform some business logic over the pods or Deployment
// Note that you also have to check the uid if you have a local controlled resource, which
// is dependent on the actual instance, to detect that a Pod was recreated with the same name
fmt.Printf("Add event for Pod %s\n", obj.(*v1.Pod).GetName())
	}
	return nil
}
func (c *Controller) handleErr(err error, key interface{}) {
glog.Infof("Dropping pod %q out of the queue: %v", key, err)
}
func main() {
CreateSharedIndexInformer()
}
You can just add more workers in your Run function, as follows:
func (c *Controller) Run(stopCh chan struct{}) {
...
// runWorker will loop until "something bad" happens. The .Until will
// then rekick the worker after one second
for i := 0; i < 5; i++ {
go wait.Until(c.runWorker, time.Second, stopCh)
}
<-stopCh
}
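The workqueue is designed for concurrent consumers: Get hands a given key to only one worker at a time, so multiple workers will not process the same key simultaneously. If you want the worker count to be configurable, a small variation of Run could look like this (the parameter name is illustrative):
func (c *Controller) Run(workers int, stopCh chan struct{}) {
	defer runtime.HandleCrash()
	defer c.queue.ShutDown()

	go c.informer.Run(stopCh)
	if !cache.WaitForCacheSync(stopCh, c.informer.HasSynced) {
		runtime.HandleError(fmt.Errorf("timed out waiting for caches to sync"))
		return
	}

	// Start the requested number of workers, all consuming the same queue.
	for i := 0; i < workers; i++ {
		go wait.Until(c.runWorker, time.Second, stopCh)
	}
	<-stopCh
}

// e.g. go controller.Run(5, stop)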
I'm trying to get the logs from multiple docker containers at once (order doesn't matter). This works as expected if types.ContainerLogsOptions.Follow is set to false.
If types.ContainerLogsOptions.Follow is set to true, sometimes the log output gets stuck after a few logs and no follow-up logs are printed to stdout.
If the output doesn't get stuck it works as expected.
Additionally if I restart one or all of the containers the command doesn't exit like docker logs -f containerName does.
func (w *Whatever) Logs(options LogOptions) {
readers := []io.Reader{}
for _, container := range options.Containers {
responseBody, err := w.Docker.Client.ContainerLogs(context.Background(), container, types.ContainerLogsOptions{
ShowStdout: true,
ShowStderr: true,
Follow: options.Follow,
})
		if err != nil {
			log.Fatal(err)
		}
		defer responseBody.Close()
readers = append(readers, responseBody)
}
// concatenate all readers to one
multiReader := io.MultiReader(readers...)
_, err := stdcopy.StdCopy(os.Stdout, os.Stderr, multiReader)
if err != nil && err != io.EOF {
log.Fatal(err)
}
}
Basically there is no great difference between my implementation and that of docker logs (https://github.com/docker/docker/blob/master/cli/command/container/logs.go), hence I'm wondering what causes these issues.
As JimB commented, that method won't work due to the operation of io.MultiReader. What you need to do is read from each response individually and combine the output. Since you're dealing with logs, it makes sense to break up the reads on newlines. bufio.Scanner does this for a single io.Reader. So one option would be to create a new type that scans multiple readers concurrently.
You could use it like this:
scanner := NewConcurrentScanner(readers...)
for scanner.Scan() {
fmt.Println(scanner.Text())
}
if err := scanner.Err(); err != nil {
log.Fatalln(err)
}
Example implementation of a concurrent scanner:
// ConcurrentScanner works like bufio.Scanner, but with multiple io.Readers
type ConcurrentScanner struct {
scans chan []byte // Scanned data from readers
errors chan error // Errors from readers
done chan struct{} // Signal that all readers have completed
cancel func() // Cancel all readers (stop on first error)
data []byte // Last scanned value
err error
}
// NewConcurrentScanner starts scanning each reader in a separate goroutine
// and returns a *ConcurrentScanner.
func NewConcurrentScanner(readers ...io.Reader) *ConcurrentScanner {
ctx, cancel := context.WithCancel(context.Background())
s := &ConcurrentScanner{
scans: make(chan []byte),
errors: make(chan error),
done: make(chan struct{}),
cancel: cancel,
}
var wg sync.WaitGroup
wg.Add(len(readers))
for _, reader := range readers {
// Start a scanner for each reader in its own goroutine.
go func(reader io.Reader) {
defer wg.Done()
scanner := bufio.NewScanner(reader)
for scanner.Scan() {
select {
case s.scans <- scanner.Bytes():
// While there is data, send it to s.scans,
// this will block until Scan() is called.
case <-ctx.Done():
// This fires when context is cancelled,
// indicating that we should exit now.
return
}
}
if err := scanner.Err(); err != nil {
select {
case s.errors <- err:
// Report that we got an error
case <-ctx.Done():
// Exit now if the context was cancelled; otherwise sending
// the error would block and this goroutine would never exit.
return
}
}
}(reader)
}
go func() {
// Signal that all scanners have completed
wg.Wait()
close(s.done)
}()
return s
}
func (s *ConcurrentScanner) Scan() bool {
select {
case s.data = <-s.scans:
// Got data from a scanner
return true
case <-s.done:
// All scanners are done, nothing to do.
case s.err = <-s.errors:
// One of the scanners errored; we're done.
}
s.cancel() // Cancel context regardless of how we exited.
return false
}
func (s *ConcurrentScanner) Bytes() []byte {
return s.data
}
func (s *ConcurrentScanner) Text() string {
return string(s.data)
}
func (s *ConcurrentScanner) Err() error {
return s.err
}
Here's an example of it working in the Go Playground: https://play.golang.org/p/EUB0K2V7iT
You can see that the concurrent scanner output is interleaved, rather than reading all of one reader and then moving on to the next, as happens with io.MultiReader.