I want to send a Slack notification with an attached file. This is my current code:
package message

import (
    "fmt"
    "os"

    "github.com/ashwanthkumar/slack-go-webhook"
)

func Message(message string, channel string, filename string, attach bool) bool {
    f, err := os.Open(filename)
    if err != nil {
        return false
    }
    defer f.Close()
    _ = f // the file is opened but never used; attaching it is the open question
    fullName := "myServer"
    webhookUrl := "https://hooks.slack.com/services/......."
    attachment1 := slack.Attachment{}
    //attachment1.AddField(slack.Field{Title: "easySmtp", Value: "EasySmtp"}).AddField(slack.Field{Title: "Status", Value: "Completed"})
    if attach {
        attachment1.AddField(slack.Field{Title: "easySmtp", Value: fullName})
    }
    payload := slack.Payload{
        Text:        message,
        Username:    "worker",
        Channel:     channel,
        IconEmoji:   ":grin:",
        Attachments: []slack.Attachment{attachment1},
    }
    if errs := slack.Send(webhookUrl, "", payload); len(errs) > 0 {
        fmt.Printf("error: %s\n", errs)
        return false
    }
    return true
}
My code works, but I don't know how to add an attached file to the message. How can I do this?
You cannot attach a file to a message sent through a Slack webhook; that functionality does not exist in Slack.
If it's just text, you can add the content as part of the message or as another attachment (up to a limit of currently 500,000 characters, which will soon be reduced to 40,000 - see here for reference).
Or you can directly upload a file to a channel with the API method files.upload.
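For illustration, here is a minimal sketch of calling files.upload from Go with a multipart form. The token, channel ID, and filename are placeholders to adapt, and the token needs permission to upload files; real code should also check the "ok" field in the JSON reply:

package main

import (
    "bytes"
    "fmt"
    "io"
    "mime/multipart"
    "net/http"
    "os"
)

// uploadToSlack posts a local file to a channel via the files.upload Web API method.
func uploadToSlack(token, channel, filename string) error {
    f, err := os.Open(filename)
    if err != nil {
        return err
    }
    defer f.Close()

    // Build the multipart form: the file itself plus the target channel.
    var body bytes.Buffer
    w := multipart.NewWriter(&body)
    part, err := w.CreateFormFile("file", filename)
    if err != nil {
        return err
    }
    if _, err := io.Copy(part, f); err != nil {
        return err
    }
    if err := w.WriteField("channels", channel); err != nil {
        return err
    }
    if err := w.Close(); err != nil {
        return err
    }

    req, err := http.NewRequest("POST", "https://slack.com/api/files.upload", &body)
    if err != nil {
        return err
    }
    req.Header.Set("Content-Type", w.FormDataContentType())
    req.Header.Set("Authorization", "Bearer "+token)

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        return err
    }
    defer resp.Body.Close()
    // The reply is JSON; inspect its "ok" field in real code.
    reply, _ := io.ReadAll(resp.Body)
    fmt.Println(string(reply))
    return nil
}

func main() {
    // The token and channel ID here are placeholders.
    if err := uploadToSlack("xoxb-your-token", "C0123456789", "report.txt"); err != nil {
        fmt.Println("upload failed:", err)
    }
}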
We have a Github repository that stores image files.
https://github.com/rollthecloudinc/ipe-objects/tree/dev/media
We would like to serve those image files via golang. The golang API runs on AWS API Gateway as a lambda function. The function in its current state, which produces a blank screen, is below.
func GetMediaFile(req *events.APIGatewayProxyRequest, ac *ActionContext) (events.APIGatewayProxyResponse, error) {
    res := events.APIGatewayProxyResponse{StatusCode: 500}
    pathPieces := strings.Split(req.Path, "/")
    siteName := pathPieces[1]
    file, _ := url.QueryUnescape(pathPieces[3]) // pathPieces[2]
    log.Printf("requested media site: " + siteName)
    log.Printf("requested media file: " + file)
    // buf := aws.NewWriteAtBuffer([]byte{})
    // downloader := s3manager.NewDownloader(ac.Session)
    /*_, err := downloader.Download(buf, &s3.GetObjectInput{
        Bucket: aws.String(ac.BucketName),
        Key:    aws.String("media/" + file),
    })
    if err != nil {
        return res, err
    }*/
    ext := strings.Split(pathPieces[len(pathPieces)-1], ".")
    contentType := mime.TypeByExtension(ext[len(ext)-1])
    if ext[len(ext)-1] == "md" {
        contentType = "text/markdown"
    }
    suffix := ""
    if os.Getenv("GITHUB_BRANCH") == "master" {
        suffix = "-prod"
    }
    var q struct {
        Repository struct {
            Object struct {
                ObjectFragment struct {
                    Text     string
                    IsBinary bool
                    ByteSize int
                } `graphql:"... on Blob"`
            } `graphql:"object(expression: $exp)"`
        } `graphql:"repository(owner: $owner, name: $name)"`
    }
    qVars := map[string]interface{}{
        "exp":   githubv4.String(os.Getenv("GITHUB_BRANCH") + ":media/" + file),
        "owner": githubv4.String("rollthecloudinc"),
        "name":  githubv4.String(siteName + suffix),
    }
    err := ac.GithubV4Client.Query(context.Background(), &q, qVars)
    if err != nil {
        log.Print("Github latest file failure.")
        log.Panic(err)
    }
    // log.Printf(q.Repository.Object.ObjectFragment.Text)
    // json.Unmarshal([]byte(q.Repository.Object.ObjectFragment.Text), &obj)
    // log.Printf("END GithubFileUploadAdaptor::LOAD %s", id)
    log.Print("content")
    log.Print(q.Repository.Object.ObjectFragment.Text)
    res.StatusCode = 200
    res.Headers = map[string]string{
        "Content-Type": contentType,
    }
    res.Body = q.Repository.Object.ObjectFragment.Text //base64.StdEncoding.EncodeToString([]byte(q.Repository.Object.ObjectFragment.Text))
    res.IsBase64Encoded = true
    return res, nil
}
The full API file can be viewed below, but it excludes the changes above for the migration to Github. This API has been running fine using S3. However, we are now trying to migrate to Github for object storage instead. We have successfully implemented writing, but are having the difficulties described above when reading the file from our lambda.
https://github.com/rollthecloudinc/verti-go/blob/master/api/media/main.go
We would appreciate help figuring out how to serve image files from our Github repo using the golang lambda on AWS; the endpoint below currently renders as a blank screen.
https://81j44yaaab.execute-api.us-east-1.amazonaws.com/ipe/media/Screen%20Shot%202022-02-02%20at%202.00.29%20PM.png
However, this repo is also a Github Pages site, which serves the same image just fine.
https://rollthecloudinc.github.io/ipe-objects/media/Screen%20Shot%202022-02-02%20at%202.00.29%20PM.png
Thanks
On further debugging, the Text property appears to be empty in the log.
The IsBinary property value being false led us to the discovery of a typo: the name input for the GraphQL invocation was missing -objects. Once the typo was corrected, IsBinary started showing up as true. However, the Text property value is still empty.
We managed to find some similar issues, though for uploading, where many suggested that GraphQL isn't the right tool for moving binary data to begin with. Therefore, rather than chase our tails, we have decided to try the Github REST v3 API instead, specifically via the go-github package for golang.
https://github.com/google/go-github
Perhaps using the REST API instead will lead to successful results.
An additional step was necessary: fetching the blob contents of the object queried via the GraphQL API. The GraphQL query now only resolves the blob's object id (oid), and the go-github blob API is used to fetch the blob's base64 contents from Github. Once this was in place, the media file was served successfully.
https://81j44yaaab.execute-api.us-east-1.amazonaws.com/ipe/media/Screen%20Shot%202022-02-02%20at%202.00.29%20PM.png
GetMediaFile lambda
func GetMediaFile(req *events.APIGatewayProxyRequest, ac *ActionContext) (events.APIGatewayProxyResponse, error) {
    res := events.APIGatewayProxyResponse{StatusCode: 500}
    pathPieces := strings.Split(req.Path, "/")
    siteName := pathPieces[1]
    file, _ := url.QueryUnescape(pathPieces[3]) // pathPieces[2]
    log.Print("requested media site: " + siteName)
    log.Print("requested media file: " + file)
    // buf := aws.NewWriteAtBuffer([]byte{})
    // downloader := s3manager.NewDownloader(ac.Session)
    /*_, err := downloader.Download(buf, &s3.GetObjectInput{
        Bucket: aws.String(ac.BucketName),
        Key:    aws.String("media/" + file),
    })
    if err != nil {
        return res, err
    }*/
    ext := strings.Split(pathPieces[len(pathPieces)-1], ".")
    // Note: mime.TypeByExtension expects a leading dot, as in ".png".
    contentType := mime.TypeByExtension("." + ext[len(ext)-1])
    if ext[len(ext)-1] == "md" {
        contentType = "text/markdown"
    }
    suffix := ""
    if os.Getenv("GITHUB_BRANCH") == "master" {
        suffix = "-prod"
    }
    owner := "rollthecloudinc"
    repo := siteName + "-objects" + suffix
    // The GraphQL query now only resolves the blob's object id (oid).
    var q struct {
        Repository struct {
            Object struct {
                ObjectFragment struct {
                    Oid githubv4.GitObjectID
                } `graphql:"... on Blob"`
            } `graphql:"object(expression: $exp)"`
        } `graphql:"repository(owner: $owner, name: $name)"`
    }
    qVars := map[string]interface{}{
        "exp":   githubv4.String(os.Getenv("GITHUB_BRANCH") + ":media/" + file),
        "owner": githubv4.String(owner),
        "name":  githubv4.String(repo),
    }
    err := ac.GithubV4Client.Query(context.Background(), &q, qVars)
    if err != nil {
        log.Print("Github latest file failure.")
        log.Panic(err)
    }
    oid := q.Repository.Object.ObjectFragment.Oid
    log.Print("Github file object id " + oid)
    // Fetch the base64 blob contents via the REST v3 blob API.
    blob, _, err := ac.GithubRestClient.Git.GetBlob(context.Background(), owner, repo, string(oid))
    if err != nil {
        log.Print("Github get blob failure.")
        log.Panic(err)
    }
    res.StatusCode = 200
    res.Headers = map[string]string{
        "Content-Type": contentType,
    }
    res.Body = blob.GetContent()
    res.IsBase64Encoded = true
    return res, nil
}
Full Source: https://github.com/rollthecloudinc/verti-go/blob/master/api/media/main.go
I have something like a data pipeline.
An API response with ~10k rows as JSON.
=> Sanitize some of the data into a new structure
=> Create a CSV File
I can currently do that by getting the full response and doing that step by step.
I was wondering if there's a simpler way to stream the response, converting it to CSV right away and writing to the file as the response is being read.
Current code:
I will have JSON like { "name": "Full Name", ... (20 columns) }, and that object repeats about 10-20k times with different values.
For request
var res *http.Response
if res, err = client.Do(request); err != nil {
    return errors.Wrap(err, "failed to perform request")
}
For Unmarshal
var record []RecordStruct
if err = json.NewDecoder(res.Body).Decode(&record); err != nil {
    return err
}
For CSV
var row []byte
if row, err = csvutil.Marshal(record); err != nil {
    return err
}
To stream an array of JSON objects you have to decode the nested objects instead of the root object. To do this you need to read the data using tokens (check out the Token method). According to the documentation:
Token returns the next JSON token in the input stream. At the end of the input stream, Token returns nil, io.EOF.
Token guarantees that the delimiters [ ] { } it returns are properly nested and matched: if Token encounters an unexpected delimiter in the input, it will return an error.
The input stream consists of basic JSON values—bool, string, number, and null—along with delimiters [ ] { } of type Delim to mark the start and end of arrays and objects. Commas and colons are elided.
That means you can decode the document part by part. An official example of how to do it can be found here.
Here is a code snippet that shows how you can combine the JSON streaming technique with writing the result to a CSV file:
package main

import (
    "encoding/csv"
    "encoding/json"
    "log"
    "os"
    "strings"
)

type RecordStruct struct {
    Name string `json:"name"`
    Info string `json:"info"`
    // ... any field you want
}

func (rs *RecordStruct) CSVRecord() []string {
    // Here we form the data for the CSV writer
    return []string{rs.Name, rs.Info}
}

const jsonData = `[
    { "name": "Full Name", "info": "..."},
    { "name": "Full Name", "info": "..."},
    { "name": "Full Name", "info": "..."},
    { "name": "Full Name", "info": "..."},
    { "name": "Full Name", "info": "..."}
]`

func main() {
    // Create a file for storing our result
    file, err := os.Create("result.csv")
    if err != nil {
        log.Fatalln(err)
    }
    defer file.Close()
    // Create a CSV writer using the standard "encoding/csv" package
    var w = csv.NewWriter(file)
    // Put your reader here. In this case I use strings.Reader.
    // If you are getting the data over HTTP it will be resp.Body.
    var jsonReader = strings.NewReader(jsonData)
    // Create a JSON decoder using the "encoding/json" package
    decoder := json.NewDecoder(jsonReader)
    // Token returns the next JSON token in the input stream.
    // At the end of the input stream, Token returns nil, io.EOF.
    // In this case our first token is '[', i.e. the array start
    _, err = decoder.Token()
    if err != nil {
        log.Fatalln(err)
    }
    // More reports whether there is another element in the
    // current array or object being parsed.
    for decoder.More() {
        var record RecordStruct
        // Decode a single item from our array
        if err := decoder.Decode(&record); err != nil {
            log.Fatalln(err)
        }
        // Convert and write our record to the CSV file
        if err := writeToCSV(w, record.CSVRecord()); err != nil {
            log.Fatalln(err)
        }
    }
    // Our last token is ']', i.e. the array end
    _, err = decoder.Token()
    if err != nil {
        log.Fatalln(err)
    }
}

func writeToCSV(w *csv.Writer, record []string) error {
    if err := w.Write(record); err != nil {
        return err
    }
    w.Flush()
    return nil
}
You can also use third-party packages like github.com/bcicen/jstream.
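For example, a minimal sketch with jstream's documented NewDecoder/Stream API; an emitDepth of 1 makes the decoder emit each top-level array element:

package main

import (
    "fmt"
    "strings"

    "github.com/bcicen/jstream"
)

func main() {
    jsonData := `[{"name": "Full Name", "info": "..."}, {"name": "Another Name", "info": "..."}]`
    // emitDepth of 1 emits each element of the top-level array
    decoder := jstream.NewDecoder(strings.NewReader(jsonData), 1)
    for mv := range decoder.Stream() {
        // Each mv.Value arrives as a map[string]interface{} here; convert it
        // to a CSV record the same way as in the example above.
        fmt.Println(mv.Value)
    }
}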
Requirements
Two services:
Server - for writing blog posts to MongoDB
Client - sends request to the first service
The blog post has a title of type string, and content of a dynamic type - it can be any JSON value.
Protobuf
syntax = "proto3";
package blog;
option go_package = "blogpb";
import "google/protobuf/struct.proto";
message Blog {
string id = 1;
string title = 2;
google.protobuf.Value content = 3;
}
message CreateBlogRequest {
Blog blog = 1;
}
message CreateBlogResponse {
Blog blog = 1;
}
service BlogService {
rpc CreateBlog (CreateBlogRequest) returns (CreateBlogResponse);
}
Let's start with the protobuf message, which meets the requirements - a string for the title and any JSON value (google.protobuf.Value) for the content.
Client
package main

import (...)

func main() {
    cc, _ := grpc.Dial("localhost:50051", grpc.WithInsecure())
    defer cc.Close()
    c := blogpb.NewBlogServiceClient(cc)
    var blog blogpb.Blog
    json.Unmarshal([]byte(`{"title": "First example", "content": "string"}`), &blog)
    c.CreateBlog(context.Background(), &blogpb.CreateBlogRequest{Blog: &blog})
    json.Unmarshal([]byte(`{"title": "Second example", "content": {"foo": "bar"}}`), &blog)
    c.CreateBlog(context.Background(), &blogpb.CreateBlogRequest{Blog: &blog})
}
The client sends two requests to the server - one with content of string type, and the other with an object. No errors here.
Server
package main

import (...)

var collection *mongo.Collection

type server struct {
}

type blogItem struct {
    ID      primitive.ObjectID `bson:"_id,omitempty"`
    Title   string             `bson:"title"`
    Content *_struct.Value     `bson:"content"`
}

func (*server) CreateBlog(ctx context.Context, req *blogpb.CreateBlogRequest) (*blogpb.CreateBlogResponse, error) {
    blog := req.GetBlog()
    data := blogItem{
        Title:   blog.GetTitle(),
        Content: blog.GetContent(),
    }
    // TODO: convert "data" or "data.Content" to something that could be BSON encoded..
    res, err := collection.InsertOne(context.Background(), data)
    if err != nil {
        log.Fatal(err)
    }
    oid, _ := res.InsertedID.(primitive.ObjectID)
    return &blogpb.CreateBlogResponse{
        Blog: &blogpb.Blog{
            Id:      oid.Hex(),
            Title:   blog.GetTitle(),
            Content: blog.GetContent(),
        },
    }, nil
}

func main() {
    client, _ := mongo.NewClient(options.Client().ApplyURI("mongodb://localhost:27017"))
    client.Connect(context.TODO())
    collection = client.Database("mydb").Collection("blog")
    lis, _ := net.Listen("tcp", "0.0.0.0:50051")
    s := grpc.NewServer([]grpc.ServerOption{}...)
    blogpb.RegisterBlogServiceServer(s, &server{})
    reflection.Register(s)
    go func() { s.Serve(lis) }()
    ch := make(chan os.Signal, 1)
    signal.Notify(ch, os.Interrupt)
    <-ch
    client.Disconnect(context.TODO())
    lis.Close()
    s.Stop()
}
And here I get:
cannot transform type main.blogItem to a BSON Document: no encoder found for structpb.isValue_Kind
What do I expect? To see the exact value of content in MongoDB, something like this:
{ "_id" : ObjectId("5e5f6f75d2679d058eb9ef79"), "title" : "Second example", "content": "string" }
{ "_id" : ObjectId("5e5f6f75d2679d058eb9ef78"), "title" : "First example", "content": { "foo": "bar" } }
I guess I need to transform data.Content in the line where I added TODO:...
I can create github repo with this example if that could help.
So, as suggested by @nguyenhoai890 in the comments, I managed to fix it using the jsonpb lib - first MarshalToString to convert from structpb to a string (JSON), and then json.Unmarshal to convert from the string (JSON) to an interface{}, which is supported by BSON. Also, I had to fix the client to correctly unmarshal from string to protobuf.
Client
func main() {
    cc, _ := grpc.Dial("localhost:50051", grpc.WithInsecure())
    defer cc.Close()
    c := blogpb.NewBlogServiceClient(cc)
    var blog blogpb.Blog
    jsonpb.UnmarshalString(`{"title": "Second example", "content": {"foo": "bar"}}`, &blog)
    c.CreateBlog(context.Background(), &blogpb.CreateBlogRequest{Blog: &blog})
    jsonpb.UnmarshalString(`{"title": "Second example", "content": "stirngas"}`, &blog)
    c.CreateBlog(context.Background(), &blogpb.CreateBlogRequest{Blog: &blog})
}
Server
type blogItem struct {
    ID      primitive.ObjectID `bson:"_id,omitempty"`
    Title   string             `bson:"title"`
    Content interface{}        `bson:"content"`
}

func (*server) CreateBlog(ctx context.Context, req *blogpb.CreateBlogRequest) (*blogpb.CreateBlogResponse, error) {
    blog := req.GetBlog()
    // structpb.Value -> JSON string
    contentString, err := new(jsonpb.Marshaler).MarshalToString(blog.GetContent())
    if err != nil {
        log.Fatal(err)
    }
    // JSON string -> interface{}, which the BSON encoder can handle
    var contentInterface interface{}
    json.Unmarshal([]byte(contentString), &contentInterface)
    data := blogItem{
        Title:   blog.GetTitle(),
        Content: contentInterface,
    }
    res, err := collection.InsertOne(context.Background(), data)
    if err != nil {
        log.Fatal(err)
    }
    oid, _ := res.InsertedID.(primitive.ObjectID)
    return &blogpb.CreateBlogResponse{
        Blog: &blogpb.Blog{
            Id:      oid.Hex(),
            Title:   blog.GetTitle(),
            Content: blog.GetContent(),
        },
    }, nil
}
EDIT: Adrian's suggestion makes sense, so I moved my code into a function and called the function from my cobra block:
package cmd

import (
    "fmt"
    "io"
    "log"
    "os"

    "github.com/spf13/cobra"
    "github.com/spf13/viper"
    input "github.com/tcnksm/go-input"
)

var configureCmd = &cobra.Command{
    Use:   "configure",
    Short: "Configure your TFE credentials",
    Long: `Prompts for your TFE API credentials, then writes them to
a configuration file (defaults to ~/.tgc.yaml)`,
    Run: func(cmd *cobra.Command, args []string) {
        CreateConfigFileFromPrompts(os.Stdin, os.Stdout)
    },
}

func CreateConfigFileFromPrompts(stdin io.Reader, stdout io.Writer) {
    ui := &input.UI{
        Writer: stdout,
        Reader: stdin,
    }
    tfeURL, err := ui.Ask("TFE URL:", &input.Options{
        Default:  "https://app.terraform.io",
        Required: true,
        Loop:     true,
    })
    if err != nil {
        log.Fatal(err)
    }
    viper.Set("tfe_url", tfeURL)
    tfeAPIToken, err := ui.Ask(fmt.Sprintf("TFE API Token (Create one at %s/app/settings/tokens)", tfeURL), &input.Options{
        Default:     "",
        Required:    true,
        Loop:        true,
        Mask:        true,
        MaskDefault: true,
    })
    if err != nil {
        log.Fatal(err)
    }
    viper.Set("tfe_api_token", tfeAPIToken)
    configPath := ConfigPath()
    viper.SetConfigFile(configPath)
    err = viper.WriteConfig()
    if err != nil {
        log.Fatal("Failed to write to: ", configPath, " Error was: ", err)
    }
    fmt.Println("Saved to", configPath)
}
So what can I pass to this method to test that the output is as expected?
package cmd

import (
    "strings"
    "testing"
)

func TestCreateConfigFileFromPrompts(t *testing.T) {
    // How do I pass the stdin and out to the method?
    // Then how do I test their contents?
    // CreateConfigFileFromPrompts()
}
// (assumes "bytes", "io/ioutil" and "testing" are imported)
func TestCreateConfigFileFromPrompts(t *testing.T) {
    var in bytes.Buffer
    var gotOut, wantOut bytes.Buffer
    // The reader should read to the \n each of two times.
    in.Write([]byte("example-url.com\nexampletoken\n"))
    // wantOut could just be []byte, but for symmetry's sake I've used another buffer
    wantOut.Write([]byte("TFE URL:TFE API Token (Create one at example-url.com/app/settings/tokens)"))
    // I don't know enough about Viper to manage ConfigPath(),
    // but it seems you'll have to do it here somehow.
    configFilePath := "test/file/location"
    CreateConfigFileFromPrompts(&in, &gotOut)
    // verify that the correct prompts were sent to the writer
    if !bytes.Equal(gotOut.Bytes(), wantOut.Bytes()) {
        t.Errorf("Prompts = %s, want %s", gotOut.Bytes(), wantOut.Bytes())
    }
    // May not need/want to test viper's writing of the config file here, or at all, but if so:
    var fileGot, fileWant []byte
    fileWant = []byte("Correct Config file contents:\n URL:example-url.com\nTOKEN:exampletoken")
    fileGot, err := ioutil.ReadFile(configFilePath)
    if err != nil {
        t.Errorf("Error reading config file %s", configFilePath)
    }
    if !bytes.Equal(fileGot, fileWant) {
        t.Errorf("ConfigFile: %s not created correctly got = %s, want %s", configFilePath, fileGot, fileWant)
    }
}
As highlighted by @zdebra in the comments on his answer, the go-input package is panicking with the error: Reader must be a file (masking apparently requires the reader to be a real *os.File so the terminal echo can be suppressed). If you are married to using that package, you can avoid the problem by disabling the masking option on ui.Ask for your second input:
tfeAPIToken, err := ui.Ask(fmt.Sprintf("TFE API Token (Create one at %s/app/settings/tokens)", tfeURL), &input.Options{
    Default:  "",
    Required: true,
    Loop:     true,
    //Mask:        true, // if this is set to true, the input must be a file for some reason
    //MaskDefault: true,
})
The reader and the writer need to be set up before the tested function is called. After it is called, the result written into the writer should be verified.
package cmd

import (
    "strings"
    "testing"
)

func TestCreateConfigFileFromPrompts(t *testing.T) {
    in := strings.NewReader("<your input>") // you can use anything that satisfies the io.Reader interface here
    out := new(strings.Builder)             // you can use anything that satisfies the io.Writer interface here, like bytes.Buffer
    CreateConfigFileFromPrompts(in, out)
    // here you verify the output written into out
    expectedOutput := "<your expected output>"
    if out.String() != expectedOutput {
        t.Errorf("expected %s to be equal to %s", out.String(), expectedOutput)
    }
}
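To run just this test from the module root (assuming the package lives under cmd/):

go test ./cmd -run TestCreateConfigFileFromPrompts -v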
So I wrote a small script that takes text files as input, reads every line, and tries to validate it as an email. If it passes, the script writes the line into a new ('clean') file; if it doesn't pass, it strips the line of spaces and tries to validate it again. If it passes this time, the script writes the line into the new file; if it fails again, it ignores the line.
Thing is, as it stands, my script may write duplicate emails into the output files. How should I go about checking for duplicates already present in the output file before writing?
Here's the relevant code:
// create reading and writing buffers
scanner := bufio.NewScanner(r)
writer := bufio.NewWriter(w)

for scanner.Scan() {
    email := scanner.Text()
    // validate each email
    if !correctEmail.MatchString(email) {
        // if validation didn't pass, strip the email of spaces and store it
        email = strings.Replace(email, " ", "", -1)
        // validate the email again after cleaning
        if !correctEmail.MatchString(email) {
            // if validation didn't pass, ignore this email
            continue
        } else {
            // if validation passed, write the clean email into the file
            _, err = writer.WriteString(email + "\r\n")
            if err != nil {
                return err
            }
        }
    } else {
        // if validation passed, write the email into the file
        _, err = writer.WriteString(email + "\r\n")
        if err != nil {
            return err
        }
    }
}
err = writer.Flush()
if err != nil {
    return err
}
Create a type that implements Writer, then give it a custom WriteString method.
Inside WriteString, check against the emails already stored in the output file and save only the new ones.
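A minimal sketch of that suggestion, using an in-memory set rather than re-reading the output file on every write (the type and names here are illustrative, not from any library):

package main

import (
    "bufio"
    "io"
    "os"
)

// DedupWriter writes each unique string only once.
type DedupWriter struct {
    w    *bufio.Writer
    seen map[string]bool
}

func NewDedupWriter(w io.Writer) *DedupWriter {
    return &DedupWriter{w: bufio.NewWriter(w), seen: make(map[string]bool)}
}

// WriteString writes s only if it has not been written before.
func (d *DedupWriter) WriteString(s string) (int, error) {
    if d.seen[s] {
        return 0, nil // already written; skip
    }
    d.seen[s] = true
    return d.w.WriteString(s)
}

// Flush flushes the underlying buffered writer.
func (d *DedupWriter) Flush() error { return d.w.Flush() }

func main() {
    w := NewDedupWriter(os.Stdout)
    w.WriteString("a@golang.org\r\n")
    w.WriteString("a@golang.org\r\n") // skipped as a duplicate
    w.Flush()                         // prints a@golang.org once
}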
You may use a Go built-in map as a set like this:
package main

import (
    "fmt"
)

var emailSet map[string]bool = make(map[string]bool)

func emailExists(email string) bool {
    _, ok := emailSet[email]
    return ok
}

func addEmail(email string) {
    emailSet[email] = true
}

func main() {
    emails := []string{
        "duplicated@golang.org",
        "abc@golang.org",
        "stackoverflow@golang.org",
        "duplicated@golang.org", // <- Duplicated!
    }
    for _, email := range emails {
        if !emailExists(email) {
            fmt.Println(email)
            addEmail(email)
        }
    }
}
Here is the output:
duplicated@golang.org
abc@golang.org
stackoverflow@golang.org
You may try the same code at The Go Playground.
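Applied to the scanner loop from the question, the set collapses the two write branches into one. A sketch, assuming correctEmail, r, and w are the variables from the question (stripping spaces up front is safe because a valid email never contains them):

seen := make(map[string]bool)
scanner := bufio.NewScanner(r)
writer := bufio.NewWriter(w)
for scanner.Scan() {
    email := strings.Replace(scanner.Text(), " ", "", -1)
    // skip invalid emails and emails we have already written
    if !correctEmail.MatchString(email) || seen[email] {
        continue
    }
    seen[email] = true
    if _, err := writer.WriteString(email + "\r\n"); err != nil {
        return err
    }
}
if err := writer.Flush(); err != nil {
    return err
}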