My goal is to retrieve Azure DevOps users, with their license and project entitlements, in Go.
I'm using the Microsoft SDK.
Our Azure DevOps organization has more than 1500 users, so when I request each user's entitlements I hit the Azure DevOps rate limit and get an error: 443: read: connection reset by peer
Limiting top to 100 or 200 works, of course, but that only sidesteps the problem.
For a real solution, I thought about dropping the SDK and making direct REST API calls with a custom HTTP handler that supports rate limiting. Or maybe using heimdall.
What would you advise for a good design?
Thanks.
Here is the code:
package main
import (
"context"
"fmt"
"github.com/microsoft/azure-devops-go-api/azuredevops"
"github.com/microsoft/azure-devops-go-api/azuredevops/memberentitlementmanagement"
"log"
"runtime"
"sync"
"time"
)
var organizationUrl = "https://dev.azure.com/xxx"
var personalAccessToken = "xxx"
type User struct {
DisplayName string
MailAddress string
PrincipalName string
LicenseDisplayName string
Status string
GroupAssignments string
ProjectEntitlements []string
LastAccessedDate azuredevops.Time
DateCreated azuredevops.Time
}
func init() {
runtime.GOMAXPROCS(runtime.NumCPU()) // Try to use all available CPUs.
}
func main() {
// Time measure
defer timeTrack(time.Now(), "Fetching Azure DevOps Users License and Projects")
// Compute context
fmt.Println("Version", runtime.Version())
fmt.Println("NumCPU", runtime.NumCPU())
fmt.Println("GOMAXPROCS", runtime.GOMAXPROCS(0))
fmt.Println("Starting concurrent calls...")
// Create a connection to your organization
connection := azuredevops.NewPatConnection(organizationUrl, personalAccessToken)
// New context
ctx := context.Background()
// Create a member client
memberClient, err := memberentitlementmanagement.NewClient(ctx, connection)
if err != nil {
log.Fatal(err)
}
// Request all users
top := 10000
skip := 0
filter := "Id"
response, err := memberClient.GetUserEntitlements(ctx, memberentitlementmanagement.GetUserEntitlementsArgs{
Top: &top,
Skip: &skip,
Filter: &filter,
SortOption: nil,
})
if err != nil {
    log.Fatal(err)
}
usersLen := len(*response.Members)
allUsers := make(chan User, usersLen)
var wg sync.WaitGroup
wg.Add(usersLen)
for _, user := range *response.Members {
go func(user memberentitlementmanagement.UserEntitlement) {
defer wg.Done()
var userEntitlement = memberentitlementmanagement.GetUserEntitlementArgs{UserId: user.Id}
account, err := memberClient.GetUserEntitlement(ctx, userEntitlement)
if err != nil {
log.Fatal(err)
}
// Collect every group name instead of overwriting it on each iteration.
var groupNames []string
var ProjectEntitlements []string
for _, assignment := range *account.GroupAssignments {
    groupNames = append(groupNames, *assignment.Group.DisplayName)
}
for _, userProject := range *account.ProjectEntitlements {
    ProjectEntitlements = append(ProjectEntitlements, *userProject.ProjectRef.Name)
}
allUsers <- User{
    DisplayName:         *account.User.DisplayName,
    MailAddress:         *account.User.MailAddress,
    PrincipalName:       *account.User.PrincipalName,
    LicenseDisplayName:  *account.AccessLevel.LicenseDisplayName,
    DateCreated:         *account.DateCreated,
    LastAccessedDate:    *account.LastAccessedDate,
    GroupAssignments:    strings.Join(groupNames, ", "),
    ProjectEntitlements: ProjectEntitlements,
}
}(user)
}
wg.Wait()
close(allUsers)
for eachUser := range allUsers {
fmt.Println(eachUser)
}
}
func timeTrack(start time.Time, name string) {
elapsed := time.Since(start)
log.Printf("%s took %s", name, elapsed)
}
You can write a custom version of the GetUserEntitlement function:
https://github.com/microsoft/azure-devops-go-api/blob/dev/azuredevops/memberentitlementmanagement/client.go#L297-L314
It does not use any private members.
After getting the http.Response you can check the Retry-After header and delay the next loop iteration if it is present:
https://github.com/microsoft/azure-devops-go-api/blob/dev/azuredevops/memberentitlementmanagement/client.go#L306
P.S. Concurrency in your code is redundant and can be removed.
Update - explaining concurrency issue:
You cannot easily implement rate limiting in concurrent code. It will be much simpler if you execute all requests sequentially and check the Retry-After header in every response before moving on to the next one.
With parallel execution: 1) you cannot rely on the Retry-After header value, because another request executing at the same time may return a different value; 2) you cannot apply the delay to the other requests, because some of them are already in progress.
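To make that concrete, here is a minimal sketch of the sequential approach with plain net/http. The URL template, api-version, and helper name are my assumptions (check the Member Entitlement Management REST reference; these endpoints may live on a different host such as vsaex.dev.azure.com), and organizationUrl is the variable from the question:

import (
    "context"
    "fmt"
    "io"
    "net/http"
    "strconv"
    "time"
)

// fetchEntitlementWithBackoff is a hypothetical helper: it requests one
// user's entitlement and, when the response carries a Retry-After header,
// sleeps for the requested duration and retries the same user.
func fetchEntitlementWithBackoff(ctx context.Context, hc *http.Client, pat, userID string) ([]byte, error) {
    url := fmt.Sprintf("%s/_apis/memberentitlementmanagement/userentitlements/%s?api-version=6.0",
        organizationUrl, userID)
    for {
        req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
        if err != nil {
            return nil, err
        }
        req.SetBasicAuth("", pat) // a PAT is sent as the basic-auth password
        resp, err := hc.Do(req)
        if err != nil {
            return nil, err
        }
        if ra := resp.Header.Get("Retry-After"); ra != "" {
            // The server asked us to back off: honor it, then retry.
            resp.Body.Close()
            if secs, convErr := strconv.Atoi(ra); convErr == nil {
                time.Sleep(time.Duration(secs) * time.Second)
            }
            continue
        }
        body, err := io.ReadAll(resp.Body)
        resp.Body.Close()
        return body, err
    }
}

Calling this in a plain loop over the user IDs, one request at a time, keeps the Retry-After bookkeeping trivial.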
For a real solution, I thought about not using the SDK anymore and using direct REST API calls with a custom HTTP handler which would support rate limiting. Or maybe using heimdall.
Do you mean you want to avoid the rate limit by calling the REST API directly?
If so, that idea will not work.
The SDK is just a client library on top of the REST API, so whether you go through the SDK or call the REST API directly, you hit exactly the same rate-limited endpoints.
Since the rate limit is applied per user, I suggest splitting the work across multiple users' credentials (provided you don't send so many requests that the server blocks your IP).
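If you go that route, a rough sketch using the SDK from your question could look like this (the environment variable names are made up, each PAT must belong to a different user, and ctx and response are reused from your code):

// Build one client per identity and round-robin requests across them, so
// each user's rate-limit budget is consumed separately.
pats := []string{os.Getenv("AZDO_PAT_USER1"), os.Getenv("AZDO_PAT_USER2")}
var clients []memberentitlementmanagement.Client
for _, pat := range pats {
    conn := azuredevops.NewPatConnection(organizationUrl, pat)
    c, err := memberentitlementmanagement.NewClient(ctx, conn)
    if err != nil {
        log.Fatal(err)
    }
    clients = append(clients, c)
}
for i, user := range *response.Members {
    c := clients[i%len(clients)] // alternate identities per request
    account, err := c.GetUserEntitlement(ctx,
        memberentitlementmanagement.GetUserEntitlementArgs{UserId: user.Id})
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(*account.User.DisplayName)
}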
I am new to programming and have no idea how to use the token-generation client API function below from my client-side Golang program. Looking for some advice. Thank you so much.
Source code package: https://pkg.go.dev/github.com/gravitational/teleport/api/client#Client.UpsertToken
Function Source Code:
func (c *Client) UpsertToken(ctx context.Context, token types.ProvisionToken) error {
tokenV2, ok := token.(*types.ProvisionTokenV2)
if !ok {
return trace.BadParameter("invalid type %T", token)
}
_, err := c.grpc.UpsertToken(ctx, tokenV2, c.callOpts...)
return trail.FromGRPC(err)
}
My code:
package main
import (
"context"
"crypto/tls"
"fmt"
"log"
"os"
"strings"
"time"
"github.com/gravitational/teleport/api/client"
"github.com/gravitational/teleport/api/client/proto"
"google.golang.org/grpc"
)
// Client is a gRPC Client that connects to a Teleport Auth server either
// locally or over ssh through a Teleport web proxy or tunnel proxy.
//
// This client can be used to cover a variety of Teleport use cases,
// such as programmatically handling access requests, integrating
// with external tools, or dynamically configuring Teleport.
type Client struct {
// c contains configuration values for the client.
//c Config
// tlsConfig is the *tls.Config for a successfully connected client.
tlsConfig *tls.Config
// dialer is the ContextDialer for a successfully connected client.
//dialer ContextDialer
// conn is a grpc connection to the auth server.
conn *grpc.ClientConn
// grpc is the gRPC client specification for the auth server.
grpc proto.AuthServiceClient
// closedFlag is set to indicate that the connection is closed.
// It's a pointer to allow the Client struct to be copied.
closedFlag *int32
// callOpts configure calls made by this client.
callOpts []grpc.CallOption
}
/*
type ProvisionToken interface {
Resource
// SetMetadata sets resource metatada
SetMetadata(meta Metadata)
// GetRoles returns a list of teleport roles
// that will be granted to the user of the token
// in the crendentials
GetRoles() SystemRoles
// SetRoles sets teleport roles
SetRoles(SystemRoles)
// GetAllowRules returns the list of allow rules
GetAllowRules() []*TokenRule
// GetAWSIIDTTL returns the TTL of EC2 IIDs
GetAWSIIDTTL() Duration
// V1 returns V1 version of the resource
V2() *ProvisionTokenSpecV2
// String returns user friendly representation of the resource
String() string
}
type ProvisionTokenSpecV2 struct {
// Roles is a list of roles associated with the token,
// that will be converted to metadata in the SSH and X509
// certificates issued to the user of the token
Roles []SystemRole `protobuf:"bytes,1,rep,name=Roles,proto3,casttype=SystemRole" json:"roles"`
Allow []*TokenRule `protobuf:"bytes,2,rep,name=allow,proto3" json:"allow,omitempty"`
AWSIIDTTL Duration `protobuf:"varint,3,opt,name=AWSIIDTTL,proto3,casttype=Duration" json:"aws_iid_ttl,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
*/
func main() {
ctx := context.Background()
args := os.Args[1:]
nodeType := ""
if len(args) > 0 {
nodeType = args[0]
}
proxyAddress := os.Getenv("TELEPORT_PROXY")
if len(proxyAddress) <= 0 {
proxyAddress = "proxy.teleport.example.local:443"
}
clt, err := client.New(ctx, client.Config{
Addrs: []string{
"proxy.teleport.example.local:443",
"proxy.teleport.example.local:3025",
"proxy.teleport.example.local:3024",
"proxy.teleport.example.local:3080",
},
Credentials: []client.Credentials{
client.LoadProfile("", ""),
},
})
if err != nil {
log.Fatalf("failed to create client: %v", err)
}
defer clt.Close()
ctx, err, token, err2 := clt.UpsertToken(ctx, token)
if err || err2 != nil {
log.Fatalf("failed to get tokens: %v", err)
}
now := time.Now()
t := 0
fmt.Printf("{\"tokens\": [")
for a, b := range token {
if strings.Contains(b.GetRoles(), b.Allow().String(), b.GetAWSIIDTTL(), nodeType) {
if t >= 1 {
fmt.Printf(",")
} else {
panic(err)
}
expiry := "never" //time.Now().Add(time.Hour * 8).Unix()
_ = expiry
if b.Expiry().Unix() > 0 {
exptime := b.Expiry().Format(time.RFC822)
expdur := b.Expiry().Sub(now).Round(time.Second)
expiry = fmt.Sprintf("%s (%s)", exptime, expdur.String())
}
fmt.Printf("\"count\": \"%1d\",", a)
fmt.Printf(b.Roles(), b.GetAllowRules(), b.GetAWSIIDTTL(), b.GetMetadata().Labels)
}
}
}
Output:
Syntax error instead of creating a token
It seems your code has many mistakes, so it's no surprise you are getting syntax errors. The console output should include the line numbers where those syntax errors occurred.
Please review Go's syntax, in particular how to call functions and how many parameters each one takes.
There are a few mistakes I'd like to point out after reviewing your code:
// It shouldn't be like this:
ctx, err, token, err2 := clt.UpsertToken(ctx, token)
// Instead it should be like this:
err := clt.UpsertToken(ctx, token)
// The return type of UpsertToken() is error, so use a single variable to receive it.
strings.Contains() takes two arguments, but you are passing four.
Refer to this document for strings.Contains().
You assign t := 0 and check it in an if condition inside the for loop, but you never increment it.
Refer to this document for fmt.Printf().
Refer to this one for functions.
Remove all the syntax errors first; only then will your code run. Also cross-check your logic.
If you want to see an example of the syntax errors, check here: https://go.dev/play/p/Hhu48UqlPRF
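Putting those fixes together, a corrected sketch of that part of your code could look like this (it assumes token is a types.ProvisionToken you have constructed; fmt.Sprint is used here only to render the roles as text for matching, so adapt as needed):

// UpsertToken returns only an error, so receive it in a single variable.
if err := clt.UpsertToken(ctx, token); err != nil {
    log.Fatalf("failed to upsert token: %v", err)
}
// strings.Contains takes exactly two arguments: the string and the substring.
if strings.Contains(fmt.Sprint(token.GetRoles()), nodeType) {
    t++ // increment the counter so the comma logic can ever trigger
}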
I am trying to have some consumers process messages from Kafka, and I would like to use Kubernetes deployment scaling for elastic message-processing capacity.
I found this code in the sarama official guide: https://pkg.go.dev/github.com/Shopify/sarama#NewConsumerGroup
package main
import (
    "context"
    "fmt"

    "github.com/Shopify/sarama"
)

type exampleConsumerGroupHandler struct{}

func (exampleConsumerGroupHandler) Setup(_ sarama.ConsumerGroupSession) error   { return nil }
func (exampleConsumerGroupHandler) Cleanup(_ sarama.ConsumerGroupSession) error { return nil }
func (h exampleConsumerGroupHandler) ConsumeClaim(sess sarama.ConsumerGroupSession, claim sarama.ConsumerGroupClaim) error {
for msg := range claim.Messages() {
fmt.Printf("Message topic:%q partition:%d offset:%d\n", msg.Topic, msg.Partition, msg.Offset)
sess.MarkMessage(msg, "")
}
return nil
}
func main() {
// sarama.NewConfig() is the public constructor; the godoc example's
// NewTestConfig() is internal to the package's own tests.
config := sarama.NewConfig()
config.Version = sarama.V2_0_0_0 // specify appropriate version
config.Consumer.Return.Errors = true
group, err := sarama.NewConsumerGroup([]string{"localhost:9092"}, "my-group", config)
if err != nil {
panic(err)
}
defer func() { _ = group.Close() }()
// Track errors
go func() {
for err := range group.Errors() {
fmt.Println("ERROR", err)
}
}()
// Iterate over consumer sessions.
ctx := context.Background()
for {
topics := []string{"my-topic"}
handler := exampleConsumerGroupHandler{}
// `Consume` should be called inside an infinite loop, when a
// server-side rebalance happens, the consumer session will need to be
// recreated to get the new claims
err := group.Consume(ctx, topics, handler)
if err != nil {
panic(err)
}
}
}
I have some questions:
How do I set the number of consumers in a consumer group?
If I deploy this program in a Pod, can I scale it safely? I mean, if one instance is running and I scale the replicas from 1 to 2, will another NewConsumerGroup call with the same group ID work without conflict?
Thank you in advance.
NOTE: I am using Kafka 2.8, and I heard that the sarama_cluster package is DEPRECATED.
Reminder that consumer groups cannot scale beyond the topic's partition count.
Scaling the pods is the correct way to use consumer groups, and using the same group name is correct. However, I'd recommend extracting the group name and the broker address into environment variables so they can easily be changed at deploy time, as sketched below.
As-is, the containerized code would be unable to use localhost as the Kafka connection string, since inside the container that resolves to the pod itself.
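For example (a sketch; the names KAFKA_BROKERS and KAFKA_GROUP_ID are just my conventions, and it assumes imports of os and strings):

// Read the broker list and group ID from the environment so they can be
// changed per deployment, e.g. in the Kubernetes pod spec.
brokers := strings.Split(os.Getenv("KAFKA_BROKERS"), ",") // e.g. "kafka-0:9092,kafka-1:9092"
groupID := os.Getenv("KAFKA_GROUP_ID")                    // e.g. "my-group"
group, err := sarama.NewConsumerGroup(brokers, groupID, config)
if err != nil {
    panic(err)
}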
I've got a module that relies on populating a cache with a call to an external service like so:
func (provider *Cache) GetItem(productId string, skuId string, itemType string) (*Item, error) {
// First, create the key we'll use to uniquely identify the item
key := fmt.Sprintf("%s:%s", productId, skuId)
// Now, attempt to get the concurrency control associated with the item key
// If we couldn't find it then create one and add it to the map
var once *sync.Once
if entry, ok := provider.lockMap.Load(key); ok {
once = entry.(*sync.Once)
} else {
once = &sync.Once{}
provider.lockMap.Store(key, once)
}
// Now, use the concurrency control to attempt to request the item
// but only once. Channel any errors that occur
cErr := make(chan error, 1)
once.Do(func() {
// We didn't find the item in the cache so we'll have to get it from the partner-center
item, err := provider.client.GetItem(productId, skuId)
if err != nil {
cErr <- err
return
}
// Add the item to the cache
provider.cache.Store(key, &item)
})
// Attempt to read an error from the channel; if we get one then return it
// Otherwise, pull the item out of the cache. We have to use the select here because this is
// the only way to attempt to read from a channel without it blocking
var item interface{}
select {
case err, ok := <-cErr:
if ok {
return nil, err
}
default:
item, _ = provider.cache.Load(key)
}
// Now, pull out a reference to the item and return it
return item.(*Item), nil
}
This method works as I expect it to. My problem is testing; specifically testing to ensure that the GetItem method is called only once for a given value of key. My test code is below:
var _ = Describe("Item Tests", func() {
It("GetItem - Not cached, two concurrent requests - Client called once", func() {
// setup cache
// Setup a wait group so we can ensure both processes finish
var wg sync.WaitGroup
wg.Add(2)
// Fire off two concurrent requests for the same SKU
go runRequest(&wg, cache)
go runRequest(&wg, cache)
wg.Wait()
// Check the cache; it should have one value
_, ok := cache.cache.Load("PID:SKUID")
Expect(ok).Should(BeTrue())
// The client should have only been requested once
Expect(client.RequestCount).Should(Equal(1)) // FAILS HERE
})
})
// Used for testing concurrency
func runRequest(wg *sync.WaitGroup, cache *SkuCache) {
defer wg.Done()
_, err := cache.GetItem("PID", "SKUID", "fakeitem")
Expect(err).ShouldNot(HaveOccurred())
}
type mockClient struct {
RequestFails bool
RequestCount int
lock sync.Mutex
}
func NewMockClient(requestFails bool) *mockClient {
return &mockClient{
RequestFails: requestFails,
RequestCount: 0,
lock: sync.Mutex{},
}
}
func (client *mockClient) GetItem(productId string, skuId string) (item Item, err error) {
defer GinkgoRecover()
// If we want to simulate client failure then return an error here
if client.RequestFails {
err = fmt.Errorf("GetItem failed")
return
}
// Sleep for 100ms so we can more accurately simulate the request latency
time.Sleep(100 * time.Millisecond)
// Update the request count
client.lock.Lock()
client.RequestCount++
client.lock.Unlock()
item = Item{
Id: skuId,
ProductId: productId,
}
return
}
The problem I've been having is that occasionally this test fails because the request count is 2 when 1 was expected, at the commented line. This failure is not consistent and I'm not quite sure how to debug it. Any help would be greatly appreciated.
I think your tests sometimes fail because your cache does not guarantee that it fetches each item only once, and you're lucky the tests caught this.
If an item is not yet cached and 2 concurrent goroutines call Cache.GetItem() at the same time, it may happen that lockMap.Load() reports in both that the key is not in the map; both goroutines then create a sync.Once, and both store their own instance in the map (obviously only one, the latter, will remain in the map, but your cache does not check this).
Then both goroutines will call client.GetItem(), because 2 separate sync.Once values provide no synchronization. Only if the same sync.Once instance is used is there a guarantee that the function passed to Once.Do() executes only once.
I think a sync.Mutex would be easier and more appropriate here, avoiding the creation and use of 2 sync.Once values.
Or, since you're already using sync.Map, you may use the Map.LoadOrStore() method: create a sync.Once and pass it to Map.LoadOrStore(). If the key is already in the map, use the returned sync.Once. If the key is not in the map, your sync.Once will be stored and you can use it. This ensures that concurrent goroutines cannot store multiple sync.Once instances for the same key.
Something like this:
once := &sync.Once{}
if entry, loaded := provider.lockMap.LoadOrStore(key, once); loaded {
    // The key was already in the map; use the Once that was stored first.
    once = entry.(*sync.Once)
}
This solution is still not perfect: if 2 goroutines call Cache.GetItem() at the same time, only one will attempt to fetch the item from the client, but if that fetch fails, only this goroutine will report the error; the other goroutine will not try to fetch the item from the client but will load it from the map, and you don't check whether that load succeeds. You should, and if the item is not in the map, that means the concurrent attempt failed to get it, so you should report an error (and clear the sync.Once).
As you can see, it's getting more complicated. I stand by my earlier advice: using a sync.Mutex would be easier here.
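For completeness, here is a minimal sketch of the mutex-based version. The field names and the Client interface are assumptions (the interface mirrors your mockClient's GetItem signature):

type Client interface {
    GetItem(productId string, skuId string) (Item, error)
}

type Cache struct {
    mu     sync.Mutex
    items  map[string]*Item
    client Client
}

func (provider *Cache) GetItem(productId, skuId, itemType string) (*Item, error) {
    key := fmt.Sprintf("%s:%s", productId, skuId)
    provider.mu.Lock()
    defer provider.mu.Unlock()
    // Fast path: already cached.
    if item, ok := provider.items[key]; ok {
        return item, nil
    }
    // Slow path: fetch and cache. Holding the lock serializes all fetches;
    // switch to a per-key lock if that ever becomes a bottleneck.
    item, err := provider.client.GetItem(productId, skuId)
    if err != nil {
        // Nothing is stored on failure, so the next caller simply retries.
        return nil, err
    }
    provider.items[key] = &item
    return &item, nil
}

Unlike the sync.Once version, a failed fetch leaves no state behind, so error handling and retries come for free.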
I'm currently writing a Prometheus exporter for a telemetry network application.
I've read the doc here, Writing Exporters, and while I understand the use case for implementing a custom collector to avoid race conditions, I'm not sure whether my use case fits direct instrumentation.
Basically, the network metrics are streamed via gRPC by the network devices so my exporter just receives them and doesn't have to effectively scrape them.
I've used direct instrumentation with below code:
I declare my metrics using the promauto package to keep the code compact:
package metrics
import (
"github.com/lucabrasi83/prom-high-obs/proto/telemetry"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
)
var (
cpu5Sec = promauto.NewGaugeVec(
prometheus.GaugeOpts{
Name: "cisco_iosxe_iosd_cpu_busy_5_sec_percentage",
Help: "The IOSd daemon CPU busy percentage over the last 5 seconds",
},
[]string{"node"},
    )
)
Below is how I set the metric value from the decoded gRPC protocol buffer message:
cpu5Sec.WithLabelValues(msg.GetNodeIdStr()).Set(float64(val))
Finally, here is my main loop which basically handles the telemetry gRPC streams for metrics I'm interested in:
for {
req, err := stream.Recv()
if err == io.EOF {
return nil
}
if err != nil {
logging.PeppaMonLog(
"error",
fmt.Sprintf("Error while reading client %v stream: %v", clientIPSocket, err))
return err
}
data := req.GetData()
msg := &telemetry.Telemetry{}
err = proto.Unmarshal(data, msg)
if err != nil {
log.Fatalln(err)
}
if !logFlag {
logging.PeppaMonLog(
"info",
fmt.Sprintf(
"Telemetry Subscription Request Received - Client %v - Node %v - YANG Model Path %v",
clientIPSocket, msg.GetNodeIdStr(), msg.GetEncodingPath(),
),
)
}
logFlag = true
// Flag to determine whether the Telemetry device streams accepted YANG Node path
yangPathSupported := false
for _, m := range metrics.CiscoMetricRegistrar {
if msg.EncodingPath == m.EncodingPath {
yangPathSupported = true
go m.RecordMetricFunc(msg)
}
}
}
For each metric I'm interested in, I register a record-metric function (m.RecordMetricFunc) that takes the protocol buffer message as an argument, as shown below.
package metrics
import "github.com/lucabrasi83/prom-high-obs/proto/telemetry"
var CiscoMetricRegistrar []CiscoTelemetryMetric
type CiscoTelemetryMetric struct {
EncodingPath string
RecordMetricFunc func(msg *telemetry.Telemetry)
}
I then use an init function for the actual registration:
func init() {
CiscoMetricRegistrar = append(CiscoMetricRegistrar, CiscoTelemetryMetric{
EncodingPath: CpuYANGEncodingPath,
RecordMetricFunc: ParsePBMsgCpuBusyPercent,
})
}
I'm using Grafana as the frontend, and so far I haven't seen any particular discrepancy when correlating the Prometheus-exposed metrics with the metrics shown directly on the device.
So I would like to understand whether this follows Prometheus best practices, or whether I should still go the custom collector route.
Thanks in advance.
You are not following best practices because you are using the global metrics that the article you linked to cautions against. With your current implementation your dashboard will forever show some arbitrary and constant value for the CPU metric after a device disconnects (or, more precisely, until your exporter is restarted).
Instead, the RPC method should maintain a set of local metrics and remove them once the method returns. That way the device's metrics vanish from the scrape output when it disconnects.
Here is one approach to do this. It uses a map that contains currently active metrics. Each map element is the set of metrics for one particular stream (which I understand corresponds to one device). Once the stream ends, that entry is removed.
package main
import (
"sync"
"github.com/prometheus/client_golang/prometheus"
)
// Exporter is a prometheus.Collector implementation.
type Exporter struct {
// We need some way to map gRPC streams to their metrics. Using the stream
// itself as a map key is simple enough, but anything works as long as we
// can remove metrics once the stream ends.
sync.Mutex
Metrics map[StreamServer]*DeviceMetrics
}
type DeviceMetrics struct {
sync.Mutex
CPU prometheus.Metric
}
// Globally defined descriptions are fine.
var cpu5SecDesc = prometheus.NewDesc(
"cisco_iosxe_iosd_cpu_busy_5_sec_percentage",
"The IOSd daemon CPU busy percentage over the last 5 seconds",
[]string{"node"},
nil, // constant labels
)
// Collect implements prometheus.Collector.
func (e *Exporter) Collect(ch chan<- prometheus.Metric) {
// Copy current metrics so we don't lock for very long if ch's consumer is
// slow.
var metrics []prometheus.Metric
e.Lock()
for _, deviceMetrics := range e.Metrics {
deviceMetrics.Lock()
metrics = append(metrics,
deviceMetrics.CPU,
)
deviceMetrics.Unlock()
}
e.Unlock()
for _, m := range metrics {
if m != nil {
ch <- m
}
}
}
// Describe implements prometheus.Collector.
func (e *Exporter) Describe(ch chan<- *prometheus.Desc) {
ch <- cpu5SecDesc
}
// Service is the gRPC service implementation.
type Service struct {
exp *Exporter
}
func (s *Service) RPCMethod(stream StreamServer) (*Response, error) {
deviceMetrics := new(DeviceMetrics)
s.exp.Lock()
s.exp.Metrics[stream] = deviceMetrics
s.exp.Unlock()
defer func() {
// Stop emitting metrics for this stream.
s.exp.Lock()
delete(s.exp.Metrics, stream)
s.exp.Unlock()
}()
for {
req, err := stream.Recv()
// TODO: handle error
var msg *Telemetry = parseRequest(req) // Your existing code that unmarshals the nested message.
var (
metricField *prometheus.Metric
metric prometheus.Metric
)
switch msg.GetEncodingPath() {
case CpuYANGEncodingPath:
metricField = &deviceMetrics.CPU
metric = prometheus.MustNewConstMetric(
cpu5SecDesc,
prometheus.GaugeValue,
ParsePBMsgCpuBusyPercent(msg), // func(*Telemetry) float64
"node", msg.GetNodeIdStr(),
)
default:
continue
}
deviceMetrics.Lock()
*metricField = metric
deviceMetrics.Unlock()
}
return &Response{}, nil
}
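Wiring this up is the usual client_golang registration (a sketch only: the listen address is arbitrary, StreamServer is the placeholder type from above, and promhttp is github.com/prometheus/client_golang/prometheus/promhttp):

// Register the collector and expose the scrape endpoint.
exp := &Exporter{Metrics: make(map[StreamServer]*DeviceMetrics)}
prometheus.MustRegister(exp)
http.Handle("/metrics", promhttp.Handler())
log.Fatal(http.ListenAndServe(":2112", nil))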
I'm trying to use "golang.org/x/time/rate" to build a function which blocks until a token is free. Is this the correct way to use the library to rate-limit blocks of code to 40 requests per second, with a bucket size of 2?
type Client struct {
limiter *rate.Limiter
ctx context.Context
}
func NewClient() *Client {
c := Client{}
c.limiter = rate.NewLimiter(40, 2)
c.ctx = context.Background()
return &c
}
func (client *Client) RateLimitFunc() {
err := client.limiter.Wait(client.ctx)
if err != nil {
fmt.Printf("rate limit error: %v", err)
}
}
To rate limit a block of code I call
RateLimitFunc()
I don't want to use a ticker as I want the rate limiter to take into account the length of time the calling code runs for.
Reading the documentation here (link), you can see that the first parameter to NewLimiter is of type rate.Limit.
If you want 40 requests per second, that translates to a rate of 1 request every 25 ms. (Your original rate.NewLimiter(40, 2) already expresses this, since the untyped constant 40 converts to rate.Limit(40), i.e. 40 events per second; rate.Every just states the interval explicitly.)
You can create that by doing:
limiter := rate.NewLimiter(rate.Every(25 * time.Millisecond), 2)
Side note:
In general, a context (ctx) should not be stored on a struct; it should be passed per request. It would appear that Client will be reused, so you could pass a context to RateLimitFunc(), or wherever appropriate, instead of storing a single context on the Client struct:
func (client *Client) RateLimitFunc(ctx context.Context) {
    // Reuse the limiter stored on the long-lived Client; only the context
    // is per call. A limiter constructed inside this function would be
    // recreated on every call and never actually limit anything.
    err := client.limiter.Wait(ctx)
    if err != nil {
        // Log the error and return
        return
    }
    // Do the actual work here
}
As Zak said, do not store a Context inside a struct type; see the Go documentation for the context package.