Route53: filter ListResourceRecordSets by RecordType - go

I can't get the AWS Route53 service's ListResourceRecordSets to filter by StartRecordType. Even with the StartRecordType filter set, it returns all records (CNAME and A) instead of only the type I select.
I also noticed I get a validation error if StartRecordName is not included, so it seems that if StartRecordType is used, StartRecordName is required.
The code below returns all records, but does not filter as I expect.
AWSLogin(instance)
svc := route53.New(instance.AWSSession)
listParams := &route53.ListResourceRecordSetsInput{
    HostedZoneId:    aws.String("Z2798GPJN9CUFJ"), // Required
    StartRecordName: aws.String("subdomain.subdomain.mydomain.com"),
    StartRecordType: aws.String(route53.RRTypeA),
    // StartRecordType: aws.String(route53.RRTypeCname),
}
respList, err := svc.ListResourceRecordSets(listParams)
if err != nil {
    fmt.Println(err.Error())
    return
}
fmt.Println(`All Type "A" records:`)
fmt.Println(respList)

I think you've misunderstood what StartRecordName and StartRecordType do. They do not filter the list; they only specify where the listing begins.
From the service documentation:
If you specify both Name and Type: The results begin with the first resource record set in the list whose name is greater than or equal to Name, and whose type is greater than or equal to Type.
So from your example I would expect all of your records to be returned (up to the limit of 100), but the first record will be the A record for subdomain.subdomain.mydomain.com.
The listing then proceeds (and wraps) in alphabetical order by name and type.
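If the goal is to end up with only the A records, one workable approach is to filter the response client-side after the call returns. A minimal sketch, continuing from the svc and listParams in the question (it only examines the single page returned, up to the 100-record limit):

respList, err := svc.ListResourceRecordSets(listParams)
if err != nil {
    fmt.Println(err.Error())
    return
}

// Keep only the A records from the returned page.
var aRecords []*route53.ResourceRecordSet
for _, rrset := range respList.ResourceRecordSets {
    if aws.StringValue(rrset.Type) == route53.RRTypeA {
        aRecords = append(aRecords, rrset)
    }
}
fmt.Println("A records:", aRecords)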

Related

How to use golang bleve search results?

I am new to Go and Bleve (sorry if I'm asking trivial things…). This search engine seems to be really nice, but I am getting stuck when it comes to dealing with my search results.
Let's say we have a struct:
type Person struct {
    ID   int    `json:"id"`
    Name string `json:"name"`
    Bio  string `json:"bio"`
}
Now, we extract data from the database (using sqlx lib):
rows := []Person{}
db.Select(&rows, "SELECT * FROM person")
...and index it:
index.Index, err = bleve.Open("index.bleve")
batch := index.Index.NewBatch()
i := 0
for _, row := range rows {
    rowId := fmt.Sprintf("%T_%d", row, row.ID)
    batch.Index(rowId, row)
    i++
    if i > 100 {
        index.Index.Batch(batch)       // flush this batch to the index...
        batch = index.Index.NewBatch() // ...and start a new one
        i = 0
    }
}
index.Index.Batch(batch) // flush whatever is left in the final batch
Now we have our index created. It works great.
Using the bleve command line utility, it returns data correctly:
bleve query index.bleve doe
3 matches, showing 1 through 3, took 27.767838ms
1. Person_68402 (0.252219)
Name
Doe
Bio
My name is John Doe!
2. ...
Here we see that bleve has stored Name and Bio fields.
Now I want to access this data from my code!
query := bleve.NewMatchAllQuery()
searchRequest := bleve.NewSearchRequest(query)
searchResults, _ := index.Index.Search(searchRequest)
fmt.Println(searchResults[0].ID) // <- This works
But I don't want to take only the ID and then query the database for the rest of the data. To avoid hitting the database, I would like to be able to do something like:
fmt.Println(searchResults[0].Bio) // <- This doesn't work :(
Could you please help?
Every hit in the search result is a DocumentMatch. You can see in the documentation that DocumentMatch has Fields which is a map[string]interface{} and can be accessed as follows:
searchResults.Hits[0].Fields["Bio"].(string)
Bleve doesn't include the document's fields in the results by default. You must assign the list of fields you'd like returned to SearchRequest.Fields (on the request you pass to index.Search). Alternatively, you can set
searchRequest.Fields = []string{"*"}
to return all fields.
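Putting both pieces together, a minimal sketch (assuming the index.Index handle from the question and that the indexed field names are Name and Bio):

query := bleve.NewMatchAllQuery()
searchRequest := bleve.NewSearchRequest(query)
// Ask bleve to return stored fields with each hit ([]string{"*"} returns everything).
searchRequest.Fields = []string{"Name", "Bio"}
searchResults, err := index.Index.Search(searchRequest)
if err != nil {
    log.Fatal(err)
}
for _, hit := range searchResults.Hits {
    // Fields is a map[string]interface{}, so type-assert each value as needed.
    bio, _ := hit.Fields["Bio"].(string)
    fmt.Printf("%s: %s\n", hit.ID, bio)
}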

Unique email in Google Datastore

I have a User entity containing an Email field. The User entity's ID is a ULID rather than the email itself, because I want to allow users to change their email addresses; however, I want to ensure that the email address is unique on both a CREATE and an UPDATE.
I am using Datastore transactions. This is a code fragment:
ctx := context.Background()
k := datastore.NameKey("User", user.ID, nil)
_, err := client.RunInTransaction(ctx, func(t *datastore.Transaction) error {
    // other stuff that needs to be in the transaction
    _, err := t.Put(k, user)
    return err
})
return err
The Email field is indexed. Is there any way to search the User entity for the current user's email address as part of the transaction?
*datastore.Transaction does not have a GetAll method, so I cannot run a query like this:
datastore.NewQuery("User").Filter("Email =", user.Email)
I'm afraid that using
client.GetAll(ctx, q, nil)
will not guarantee isolation within the transaction.
The short answer is no, you cannot use a query as part of a transaction unless you are querying a specific entity group. Global queries are always eventually consistent. However, putting everything in a single entity group would likely limit write throughput too much.
A workaround is to have another Kind whose entities map email addresses to users. Then, in a transaction, you can check the email entity, and if it doesn't exist (or points to a stale mapping), set the email entity and the user entity together as a single transaction.
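A minimal sketch of that idea, using a hypothetical UserEmail kind keyed by the email address (the kind and field names are illustrative, not from the question):

// UserEmail maps an email address (the key name) to the ID of the user who owns it.
type UserEmail struct {
    UserID string
}

func saveUser(ctx context.Context, client *datastore.Client, user *User) error {
    userKey := datastore.NameKey("User", user.ID, nil)
    emailKey := datastore.NameKey("UserEmail", user.Email, nil)

    _, err := client.RunInTransaction(ctx, func(t *datastore.Transaction) error {
        // Key lookups inside a transaction are strongly consistent.
        var existing UserEmail
        err := t.Get(emailKey, &existing)
        if err == nil && existing.UserID != user.ID {
            return fmt.Errorf("email %q is already taken", user.Email)
        }
        if err != nil && err != datastore.ErrNoSuchEntity {
            return err
        }
        if _, err := t.Put(emailKey, &UserEmail{UserID: user.ID}); err != nil {
            return err
        }
        _, err = t.Put(userKey, user)
        return err
    })
    return err
}

If the user is changing an existing address, you would also delete the old UserEmail entity in the same transaction.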

How to search for a specific value in Firebase using Golang?

I am using Golang and Firego to connect to Firebase. I am trying to search for an admin with Email: john@gmail.com. My database structure nests each admin's details under CompanyAdmins/{id}/Info, with the Email field inside Info.
For this I have tried:
dB.Child("CompanyAdmins").Child("Info").OrderBy("Email").EqualTo("john@gmail.com").Value(&result)
but it does not produce the expected result. How can I do this?
While @dev.bmax has identified the problem correctly, the solution is simpler. You can specify the path of a property to order on:
dB.Child("CompanyAdmins")
.OrderBy("Info/Email")
.EqualTo("john#gmail.com")
.Value(&result)
Update (2017-02-10):
Full code I just tried:
f := firego.New("https://stackoverflow.firebaseio.com", nil)
var result map[string]interface{}
if err := f.Child("42134844/CompanyAdmins").OrderBy("Info/Email").EqualTo("john@gmail.com").Value(&result); err != nil {
    log.Fatal(err)
}
fmt.Printf("%s\n", result)
This prints:
map[-K111111:map[Info:map[Email:john@gmail.com]]]
Which is the exact place where I put the data.
Update (2017-02-13):
This is the index I have defined:
"CompanyAdmins": {
".indexOn": "Info/Email"
}
If this doesn't work for you, please provide a similarly complete snippet that I can test.
Can you put the Info data directly into the CompanyAdmins structure? That way, your query will work.
CompanyAdmins
  -id
    -Email: "johndon@gmail.com"
    -Settings:
      - fields
The problem with your query is that Info is not a direct child of CompanyAdmins.
You could use the email as the key instead of an auto-generated one when you insert values. That way, you can access the admin directly:
dB.Child("CompanyAdmins").Child("john@gmail.com").Child("Info")
Otherwise, you need to restructure the database. Your order-by field (email) should be one level higher, as Rodrigo Vinicius suggests. Then, your query will change to:
dB.Child("CompanyAdmins").OrderBy("Email").EqualTo("john@gmail.com")

golang gorp insert multiple records

Using gorp, how can one insert multiple records efficiently? That is, instead of inserting one at a time, is there a batch insert?
type User struct {
    Name  string
    Email string
    Phone string
}

var users []User
users = buildUsers()
dbMap.Insert(users...) // this fails compilation
// I am forced to loop over users and insert one user at a time. Error handling omitted for brevity.
Is there a better mechanism with gorp? Driver is MySQL.
As I found out elsewhere, the reason this doesn't work is that interface{} and User do not have the same layout in memory, so their slices are not compatible types. The suggested solution was to convert []User into []interface{} in a for loop, as shown here: https://golang.org/doc/faq#convert_slice_of_interface
There is still one caveat: you need to pass pointers to the DbMap.Insert() function.
Here's how I solved it:
s := make([]interface{}, len(users))
for i := range users {
    s[i] = &users[i]
}
err := dbMap.Insert(s...)
Note that the pointers are important; otherwise Insert will complain about non-pointers. Taking &users[i] (rather than the address of the range variable) also ensures each element points at a distinct user.
It doesn't look like gorp provides a wrapper for either raw SQL or multi-value inserts (which are always SQL-dialect dependent).
Are you worried about speed or transactions? If not, I would just do the inserts in a for loop.
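A minimal sketch of the loop approach wrapped in a single gorp transaction (assuming the dbMap and users from the question); it is not a true multi-row INSERT, but it avoids committing each row separately:

tx, err := dbMap.Begin()
if err != nil {
    return err
}
for i := range users {
    // Insert requires pointers, so index into the slice rather than copying elements.
    if err := tx.Insert(&users[i]); err != nil {
        tx.Rollback()
        return err
    }
}
return tx.Commit()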

How to handle null values in gorp Select

I'm trying to get the users from a DB as follows:
var users []User
_, err := dbMap.Select(&users, "select id,username,acctstarttime,acctlastupdatedtime,acctstoptime from accounting order by id")
Here I'm using gorp. When there are null values present, this fails with the error:
Select failed sql: Scan error on column index 3: unsupported driver -> Scan pair: <nil> -> *string
How can I solve this issue? I used gorp here because of how easily it maps the output to a slice of structs.
Make whatever acctstarttime maps to a pointer to the type instead of a value of the type. If the column is NULL, the pointer will be nil.
Alternatively, you can use the sql.NullXXX types, but I usually don't like those since they make everything else awkward to work with.
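A minimal sketch of the pointer approach (the struct and tag names are guesses based on the query in the question):

type User struct {
    ID                  int64   `db:"id"`
    Username            string  `db:"username"`
    AcctStartTime       *string `db:"acctstarttime"`
    AcctLastUpdatedTime *string `db:"acctlastupdatedtime"`
    AcctStopTime        *string `db:"acctstoptime"`
}

var users []User
_, err := dbMap.Select(&users,
    "select id, username, acctstarttime, acctlastupdatedtime, acctstoptime from accounting order by id")
if err != nil {
    log.Fatal(err)
}
for _, u := range users {
    if u.AcctStopTime == nil {
        fmt.Printf("%s: acctstoptime is NULL\n", u.Username)
    }
}

Pointer fields such as *time.Time work the same way if your driver is configured to parse time columns.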
