Rate limited with the response MY_CONTACTS_OVERFLOW_COUNT - go

I am prototyping an application using Google's new People API. During my testing I have added and deleted contacts in batches to see how many can be added per minute and per day in total.
I understand the documentation says how many can be added per minute, but from my testing I don't seem to get anywhere close to this. Even when reviewing my metrics, my request rate is far below the supposed limits per minute and per day.
My main question is that after a couple of attempts across a service account on 3 of my Gmail accounts, I am now getting back googleapi: Error 429: MY_CONTACTS_OVERFLOW_COUNT, rateLimitExceeded. I can't find any mention of MY_CONTACTS_OVERFLOW_COUNT online. I assumed from the error that it meant I have too many contacts, but when running a delete script it appears I don't have any at all.
This response has been returned for all 3 accounts on my development machine for longer than 24 hours now, which makes me believe I have possibly been blocked rather than rate limited?
Client code for running the test:
package main

import (
	"bufio"
	"context"
	"log"
	"os"
	"time"

	"google.golang.org/api/people/v1"
	//"github.com/davecgh/go-spew/spew"
)

// chunks splits xs into slices of at most chunkSize elements each.
func chunks(xs []string, chunkSize int) [][]string {
	if len(xs) == 0 {
		return nil
	}
	divided := make([][]string, (len(xs)+chunkSize-1)/chunkSize)
	prev := 0
	i := 0
	till := len(xs) - chunkSize
	for prev < till {
		next := prev + chunkSize
		divided[i] = xs[prev:next]
		prev = next
		i++
	}
	divided[i] = xs[prev:]
	return divided
}

func main() {
	ctx := context.Background()
	srv, err := people.NewService(ctx)
	if err != nil {
		log.Fatalf("Unable to create people Client %v", err)
	}
	file, err := os.Open("test125k.txt")
	if err != nil {
		log.Fatalf("failed opening file: %s", err)
	}
	scanner := bufio.NewScanner(file)
	scanner.Split(bufio.ScanLines)
	var txtlines []string
	for scanner.Scan() {
		txtlines = append(txtlines, scanner.Text())
	}
	chunkEmails := chunks(txtlines, 200)
	count := 0
	var validPeopleResources []string
	log.Printf("Started")
	for i, chunk := range chunkEmails {
		// Build a batch-create request holding one contact per email address in this chunk.
		var contacts people.BatchCreateContactsRequest
		contacts.ReadMask = "emailAddresses,photos"
		for _, chunkEmail := range chunk {
			var contact people.ContactToCreate
			var person people.Person
			var personEmails people.EmailAddress
			personEmails.Value = chunkEmail
			person.EmailAddresses = []*people.EmailAddress{&personEmails}
			contact.ContactPerson = &person
			contacts.Contacts = append(contacts.Contacts, &contact)
		}
		r, err := srv.People.BatchCreateContacts(&contacts).Do()
		if err != nil {
			log.Printf("Unable to create contacts")
			log.Printf(err.Error())
			log.Fatalf("")
		}
		// Remember the resource names of the created contacts so they can be deleted later.
		for _, validPeople := range r.CreatedPeople {
			validPeopleResources = append(validPeopleResources, validPeople.Person.ResourceName)
		}
		count = count + 1
		// After every second batch, delete everything created so far.
		if count == 2 {
			var contactToDelete people.BatchDeleteContactsRequest
			contactToDelete.ResourceNames = validPeopleResources
			_, err = srv.People.BatchDeleteContacts(&contactToDelete).Do()
			if err != nil {
				log.Printf("Unable to delete contacts")
				log.Printf(err.Error())
				log.Fatalf("")
			}
			validPeopleResources = nil
			count = 0
			log.Printf("performed delete")
		}
		log.Printf("%d completed", i)
		time.Sleep(10 * time.Second)
	}
}

"MY_CONTACTS_OVERFLOW_COUNT" happens when you try to insert new contacts to the Google account, but they already have the maximum number of contacts.
The max limit is 25,000, since 2011: https://workspaceupdates.googleblog.com/2011/05/need-more-contacts-in-gmail-contacts.html
Note that Google also counts deleted contacts toward this limit. You can find the deleted contacts in the Contacts trash; they are cleared automatically after 30 days.
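If you want to tell this overflow condition apart from an ordinary per-minute rate limit in code, the returned googleapi error can be inspected. Below is a minimal sketch, assuming the overflow reason only shows up in the error message text (the People API does not appear to expose it as a structured field); the helper name and the example error in main are made up for illustration.
package main

import (
	"errors"
	"log"
	"strings"

	"google.golang.org/api/googleapi"
)

// classifyPeopleAPIError reports whether a People API failure looks like the
// contact-limit overflow (MY_CONTACTS_OVERFLOW_COUNT) rather than an ordinary
// per-minute rate limit. Matching on the message text is an assumption.
func classifyPeopleAPIError(err error) string {
	var gerr *googleapi.Error
	if !errors.As(err, &gerr) {
		return "not a googleapi error"
	}
	switch {
	case gerr.Code == 429 && strings.Contains(gerr.Message, "MY_CONTACTS_OVERFLOW_COUNT"):
		return "contact limit reached (contacts in the trash also count)"
	case gerr.Code == 429:
		return "rate limited; retry with backoff"
	default:
		return "other API error"
	}
}

func main() {
	// Hypothetical example of the error a failing BatchCreateContacts call returns.
	err := &googleapi.Error{Code: 429, Message: "MY_CONTACTS_OVERFLOW_COUNT, rateLimitExceeded"}
	log.Println(classifyPeopleAPIError(err))
}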

Related

How do I keep Scan from reading all characters from a user input in GO?

I am currently trying to validate user input for a random team generator. The user inputs the number of team members per team and the program breaks up the list of names into teams. If I put in a string it outputs an error message for each character in the string. I want it to output just one error message even if I type several characters.
//Dan Bell
//Random team generator
package main

import (
	"bufio"
	"fmt"
	"log"
	"math/rand"
	"os"
	"strings"
	"time"
)

func main() {
	var my_slice []string
	// open the names.txt file contained in the same directory
	f, err := os.Open("names.txt")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	//scan the contents of the .txt file into a slice
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		my_slice = append(my_slice, scanner.Text())
	}
	if err := scanner.Err(); err != nil {
		log.Fatal(err)
	}
	//shuffle the slice so that the groups are random each time
	rand.Seed(time.Now().UnixNano())
	rand.Shuffle(len(my_slice), func(i, j int) { my_slice[i], my_slice[j] = my_slice[j], my_slice[i] })
	fmt.Println("\nWelcome to our team generator" + "\n\n\n" + "How big should the teams be?")
	var teamSize int
	//put the user input into the teamSize variable
	//validate input
	for {
		_, err := fmt.Scan(&teamSize)
		if err != nil {
			fmt.Println("\nEnter a valid number")
		} else {
			break
		}
	}
	// fmt.Scanln(&teamSize)
	var divided [][]string
	//chop the slice into chunks sized using the teamSize entered by a user
	for i := 0; i < len(my_slice); i += teamSize {
		end := i + teamSize
		if end > len(my_slice) {
			end = len(my_slice)
		}
		divided = append(divided, my_slice[i:end])
	}
	//print each group
	for i := 0; i < len(divided); i++ {
		fmt.Println("\nGroup " + fmt.Sprint(i+1) + "\n" + strings.Join(divided[i], ", "))
	}
}
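One way this could be handled (a sketch, not part of the original question; the readTeamSize helper is made up for illustration) is to consume the whole input line before parsing it, so a non-numeric entry such as "abc" yields a single error message instead of one per leftover token:
package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

// readTeamSize sketches line-based validation: it reads a full line from
// stdin, then converts it once, so bad input produces exactly one error.
func readTeamSize() int {
	sc := bufio.NewScanner(os.Stdin)
	for {
		fmt.Print("How big should the teams be? ")
		if !sc.Scan() {
			return 0 // input closed; the caller decides how to handle this
		}
		n, err := strconv.Atoi(strings.TrimSpace(sc.Text()))
		if err != nil || n <= 0 {
			fmt.Println("Enter a valid number")
			continue
		}
		return n
	}
}

func main() {
	fmt.Println("team size:", readTeamSize())
}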

Time golang Telegram Bot

I have a Telegram bot that helps users notify me if they are late for work. They write something like "I will come at 12:00", I get the string, and I have to check whether the time they wrote is permitted.
For example: if they write "I will come at 12:00" but time.Now() is 11:00, I give them an error.
The problem is how to compare their local time with the time they sent via the bot, given that they are located around the world and my bot server is running in another country.
How do I get their local time?
I have 2 solutions:
Get the user's location via IP or GPS
Use if/case statements
Do you have any other ideas? I would be grateful for your help.
Code snippet to compare with my local time:
func getArrivalTime(text string, status string) (time.Time, error) {
	var arrivalTime time.Time
	rule := regexp.MustCompile(`([0-9]|1[0-9]|2[0-3])((\:)|(\.))[0-5][0-9]`)
	timeStr := rule.FindStringSubmatch(text)
	if len(timeStr) == 0 {
		log.Println("ERROR OCCURED")
		return time.Time{}, errors.New("Wrong time format or omitted")
	}
	timeFormatted := strings.Replace(timeStr[0], ".", ":", 1)
	date := fmt.Sprintf("%vT%v:00+00:00", time.Now().Format("2006-01-02"), timeFormatted)
	arrivalTime, err := time.Parse(time.RFC3339, date)
	if strings.Contains(text, "/завтра") && status == "late" {
		arrivalTime = arrivalTime.Add(time.Hour * 24)
	}
	arrivalTimeUtc, _ := ConvertToUTC(arrivalTime)
	nowUtc, _ := ConvertToUTC(time.Now())
	nowUtc = nowUtc.Add(time.Hour * 5)
	if arrivalTimeUtc.After(nowUtc) == false && status != "lateButInOffice" {
		log.Println("ERROR OCCURED")
		return time.Time{}, errors.New("Wrong time format or omitted")
	}
	if err != nil {
		log.Println(err)
	}
	return arrivalTime, err
}
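Another option besides IP or GPS lookup (a sketch, not from the original bot; the helper name and the "Asia/Tashkent" zone are assumptions for illustration) is to ask each user once for their IANA time zone name, store it, and interpret the parsed clock time in that zone with time.LoadLocation before comparing it to the server's time.Now():
package main

import (
	"fmt"
	"time"
)

// arrivalInUserZone interprets a wall-clock time such as "12:00" in the
// user's own time zone (an IANA name stored per user) and returns it as UTC
// so it can be compared directly with the server's time.Now().
func arrivalInUserZone(hhmm, tzName string) (time.Time, error) {
	loc, err := time.LoadLocation(tzName)
	if err != nil {
		return time.Time{}, err
	}
	t, err := time.Parse("15:04", hhmm)
	if err != nil {
		return time.Time{}, err
	}
	now := time.Now().In(loc)
	arrival := time.Date(now.Year(), now.Month(), now.Day(), t.Hour(), t.Minute(), 0, 0, loc)
	return arrival.UTC(), nil
}

func main() {
	// Hypothetical example: this user registered with the zone "Asia/Tashkent".
	arrival, err := arrivalInUserZone("12:00", "Asia/Tashkent")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	if arrival.Before(time.Now().UTC()) {
		fmt.Println("arrival time is already in the past")
	} else {
		fmt.Println("arrival accepted:", arrival)
	}
}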

Bulk insert speed gap when multiple machines are used

I have a Go utility that reads documents from flat files and bulk loads them into Couchbase. The application is able to insert data at a speed of up to 21K writes per second when 2 writer threads are executed on a single machine (the destination is a remote Couchbase cluster with one server within the network).
But when 2 writer threads are executed from 2 different machines (1 thread each), the insertion speed is reduced by half (10K writes/sec).
Since both machines use their own RAM and CPU for insertion, and the utility has shown speeds of up to 25K writes per second, the network doesn't seem to be the issue (I checked network utilization as well, and it is below 50 percent when multiple machines are used).
Note: All machines used have an i7 3.40GHz quad-core processor and 8GB RAM. The total amount of data being inserted is up to 500MB.
Bucket Configuration: 5.00GB RAM, Bucket disk I/O priority: High
I need to know what’s causing this speed gap. Please help…
Here is the Code:
package main

import (
	"bufio"
	"encoding/csv"
	"fmt"
	"io"
	"log"
	"os"
	"runtime"
	"strconv"
	"sync"

	"gopkg.in/couchbase/gocb.v1"
)

var (
	bucket  *gocb.Bucket
	CB_Host string
)

func main() {
	var wg sync.WaitGroup
	CB_Host = "<IP Address of Couchbase Server>"
	runtime.GOMAXPROCS(runtime.NumCPU())
	cluster, err := gocb.Connect("couchbase://" + CB_Host) //..........Establish Couchbase Connection
	if err != nil {
		fmt.Println("ERROR CONNECTING COUCHBASE:", err)
	}
	bucket, err = cluster.OpenBucket("BUCKET", "*******")
	if err != nil {
		fmt.Println("ERROR OPENING BUCKET:", err)
	}
	Path := "E:\\Data\\File_" // Path of the text file that contains data
	for i := 1; i <= 2; i++ {
		wg.Add(1)
		go InsertDataFromFile(Path+strconv.Itoa(i)+".txt", i, &wg)
	}
	wg.Wait()
	err = bucket.Close() //.............. Close Couchbase Connection
	if err != nil {
		fmt.Println("ERROR CLOSING COUCHBASE CONNECTION:", err)
	}
}

/*-- Main function Ends Here --*/

func InsertDataFromFile(Path string, i int, wg *sync.WaitGroup) {
	var (
		ID       string
		JSONData string
		items    []gocb.BulkOp
	)
	csvFile, _ := os.Open(Path) //...............Open flat file containing data
	reader := csv.NewReader(bufio.NewReader(csvFile))
	reader.Comma = '$'
	reader.LazyQuotes = true
	counter := 1
	fmt.Println("Starting Insertion of File " + strconv.Itoa(i) + "...")
	for {
		line, err := reader.Read()
		if err == io.EOF {
			break
		} else if err != nil { //...............Parse data and append it into items[] array
			log.Fatal(err)
		}
		ID = line[0]
		JSONData = line[1]
		items = append(items, &gocb.UpsertOp{Key: ID, Value: JSONData})
		if counter%500 == 0 {
			BulkInsert(&items) //................Bulk Insert Next 500 Documents Data into couchbase
			items = nil
		}
		counter = counter + 1
	}
	BulkInsert(&items) //................Insert remaining documents
	items = nil
	fmt.Println("Insertion of File " + strconv.Itoa(i) + " Completed...")
	wg.Done()
}

func BulkInsert(item *[]gocb.BulkOp) {
	err := bucket.Do(*item)
	if err != nil {
		fmt.Println("ERROR PERFORMING BULK INSERT:", err)
	}
}
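One way to narrow down where the time goes when the second machine joins is to time each bulk call. Below is a hypothetical variant of BulkInsert (not part of the original utility; it reuses the package-level bucket variable from the code above and additionally needs the "time" import) that logs the duration of every batch, so per-batch latency can be compared between the one-machine and two-machine runs:
// timedBulkInsert is a hypothetical instrumentation wrapper around the same
// bucket.Do call: it measures how long each 500-document batch takes so the
// server-side latency can be compared when one vs. two machines are writing.
func timedBulkInsert(items *[]gocb.BulkOp, file int) {
	start := time.Now()
	err := bucket.Do(*items)
	elapsed := time.Since(start)
	if err != nil {
		fmt.Println("ERROR PERFORMING BULK INSERT:", err)
	}
	fmt.Printf("file %d: batch of %d ops took %v\n", file, len(*items), elapsed)
}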

Write/Save data in CSV using GO language

I am trying to write student marks to a CSV file in Go.
It is printing the desired 10 results per page with Println, but it is saving only the last value (not all 10) in the CSV.
This is what I am doing:
Visitor visits studentmarks.com/page=1
Marks for 10 students are displayed and also saved in the CSV
Visitor clicks next page and is navigated to studentmarks.com/page=2
Marks for another 10 students are displayed and also saved in subsequent rows of the CSV
and so on
fmt.Fprintf(w, KeyTemplate, key.fname, key.marks, key.lname) works fine and displays all 10 results per page, but I am unable to save all 10 results in the CSV (with my current code, only the last result is saved).
Here is the snippet of my code that is responsible for printing and saving the results.
func PageRequest(w http.ResponseWriter, r *http.Request) {
	// Default page number is 1
	if len(r.URL.Path) <= 1 {
		r.URL.Path = "/1"
	}
	// Page number is not negative or 0
	page.Abs(page)
	if page.Cmp(one) == -1 {
		page.SetInt64(1)
	}
	// Page header
	fmt.Fprintf(w, PageHeader, pages, previous, next)
	// Marks for UID
	UID, length := compute(start)
	for i := 0; i < length; i++ {
		key := UID[i]
		fmt.Fprintf(w, key.fname, key.marks, key.lname, key.remarks)
		// Save in csv
		csvfile, err := os.Create("marks.csv")
		if err != nil {
			fmt.Println("Error:", err)
			return
		}
		defer csvfile.Close()
		records := [][]string{{key.fname, key.marks, key.lname, key.remarks}}
		writer := csv.NewWriter(csvfile)
		for _, record := range records {
			err := writer.Write(record)
			if err != nil {
				fmt.Println("Error:", err)
				return
			}
		}
		writer.Flush()
	}
	// Page Footer
	fmt.Fprintf(w, PageFooter, previous, next)
}
How can I print and save (in the CSV) all 10 results using Go?
The basic problem is that you are calling os.Create. The documentation for os.Create says
Create creates the named file with mode 0666 (before umask), truncating it if it already exists. If successful, methods on the returned File can be used for I/O; the associated file descriptor has mode O_RDWR. If there is an error, it will be of type *PathError.
So each call to os.Create will remove all content from the file you passed. Instead, what you want is probably os.OpenFile with the os.O_CREATE, os.O_WRONLY and os.O_APPEND flags. This will make sure that the file is created if it doesn't exist, but is not truncated if it does.
But there is another problem in your code. You are calling defer csvfile.Close() inside the loop. A deferred function is only executed once the function returns, not after each loop iteration. This can lead to problems, especially since you are opening the same file over and over again.
Instead you should open the file once before the loop so that you only need to close it once. Like this:
package main

import (
	"encoding/csv"
	"fmt"
	"net/http"
	"os"
)

func PageRequest(w http.ResponseWriter, r *http.Request) {
	// Default page number is 1
	if len(r.URL.Path) <= 1 {
		r.URL.Path = "/1"
	}
	// Page number is not negative or 0
	page.Abs(page)
	if page.Cmp(one) == -1 {
		page.SetInt64(1)
	}
	// Page header
	fmt.Fprintf(w, PageHeader, pages, previous, next)
	// Save in csv
	csvfile, err := os.OpenFile("marks.csv", os.O_WRONLY|os.O_CREATE|os.O_APPEND, 0666)
	if err != nil {
		fmt.Println("Error:", err)
		return
	}
	defer csvfile.Close()
	writer := csv.NewWriter(csvfile)
	defer writer.Flush()
	// Marks for UID
	UID, length := compute(start)
	for i := 0; i < length; i++ {
		key := UID[i]
		fmt.Fprintf(w, key.fname, key.marks, key.lname, key.remarks)
		records := [][]string{{key.fname, key.marks, key.lname, key.remarks}}
		for _, record := range records {
			err := writer.Write(record)
			if err != nil {
				fmt.Println("Error:", err)
				return
			}
		}
	}
	// Page Footer
	fmt.Fprintf(w, PageFooter, previous, next)
}

Download from multiple sources in parallel using a goroutine

First I would like to say I've already viewed Golang download multiple files in parallel using goroutines and Example for sync.WaitGroup correct?, and I've used them as a guide in my code. However, I'm not certain that it's working for me. I'm trying to download files from multiple buckets on AWS. This is what I have (some lines are blank for security reasons).
package main

import (
	"fmt"
	"os"
	"os/user"
	"path/filepath"
	"sync"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
	"github.com/aws/aws-sdk-go/service/s3/s3manager"
)

var (
	//Bucket         = ""       // Download from this bucket
	Prefix         = ""       // Using this key prefix
	LocalDirectory = "s3logs" // Into this directory
)

// create a single session to be used
var sess = session.New()

// used to control concurrency
var wg sync.WaitGroup

func main() {
	start := time.Now()
	// map of buckets to region (entries removed for security)
	regBuckets := map[string]string{}
	// download the files for each bucket
	for region, bucket := range regBuckets {
		fmt.Println(region)
		wg.Add(1)
		go getLogs(region, bucket, LocalDirectory, &wg)
	}
	wg.Wait()
	elapsed := time.Since(start)
	fmt.Printf("\nTime took %s\n", elapsed)
}

// function to get data from buckets
func getLogs(region string, bucket string, directory string, wg *sync.WaitGroup) {
	client := s3.New(sess, &aws.Config{Region: aws.String(region)})
	params := &s3.ListObjectsInput{Bucket: &bucket, Prefix: &Prefix}
	manager := s3manager.NewDownloaderWithClient(client, func(d *s3manager.Downloader) {
		d.PartSize = 6 * 1024 * 1024 // 6MB per part
		d.Concurrency = 5
	})
	d := downloader{bucket: bucket, dir: directory, Downloader: manager}
	client.ListObjectsPages(params, d.eachPage)
	wg.Done()
}

// downloader object and methods
type downloader struct {
	*s3manager.Downloader
	bucket, dir string
}

func (d *downloader) eachPage(page *s3.ListObjectsOutput, more bool) bool {
	for _, obj := range page.Contents {
		d.downloadToFile(*obj.Key)
	}
	return true
}

func (d *downloader) downloadToFile(key string) {
	// Create the directories in the path (under the user's desktop)
	usr, errs := user.Current()
	if errs != nil {
		panic(errs)
	}
	homedir := usr.HomeDir
	desktop := homedir + "/Desktop/" + d.dir
	file := filepath.Join(desktop, key)
	if err := os.MkdirAll(filepath.Dir(file), 0775); err != nil {
		panic(err)
	}
	// Setup the local file
	fd, err := os.Create(file)
	if err != nil {
		panic(err)
	}
	defer fd.Close()
	// Download the file using the AWS SDK
	fmt.Printf("Downloading s3://%s/%s to %s...\n", d.bucket, key, file)
	params := &s3.GetObjectInput{Bucket: &d.bucket, Key: &key}
	if _, e := d.Download(fd, params); e != nil {
		panic(e)
	}
}
In the regBuckets map I place the bucket names and regions, and in the for loop below it I print the bucket name. If I have two buckets, I want to download the items from both buckets at the same time. I was testing this with a print statement: I expected to see the name of the first bucket and, soon after, the name of the second bucket. However, it seems that instead of downloading the files from multiple buckets in parallel, it downloads them in order, e.g. bucket 1, and only when bucket 1 is done does the loop continue to bucket 2, etc. I need help making sure I'm downloading in parallel, because I have roughly 10 buckets and speed is important. I also wonder if it's because I'm using a single session. Any idea?
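For what it's worth, the per-bucket goroutines above are started concurrently, but inside each bucket eachPage downloads objects one at a time, so a single large bucket can dominate the total runtime. Here is a sketch (not the original code; it reuses the downloader type and downloadToFile method from the question, and the method name is made up) of downloading the objects on each listing page concurrently, bounded by a small semaphore:
// eachPageConcurrent is a sketch of a page handler that downloads the objects
// of a listing page in parallel, limiting the number of in-flight downloads.
func (d *downloader) eachPageConcurrent(page *s3.ListObjectsOutput, more bool) bool {
	var pageWG sync.WaitGroup
	sem := make(chan struct{}, 5) // at most 5 objects in flight per page
	for _, obj := range page.Contents {
		key := *obj.Key // copy before launching the goroutine
		pageWG.Add(1)
		sem <- struct{}{}
		go func() {
			defer pageWG.Done()
			defer func() { <-sem }()
			d.downloadToFile(key)
		}()
	}
	pageWG.Wait()
	return true
}
It would be passed to client.ListObjectsPages in place of d.eachPage.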
