Get a single document from a Firestore query - go

I'm trying to get a single document from Firebase Firestore using Golang. I know that it is easy if you have an ID; you can write something like this:
database.Collection("profiles").Doc(userId).Get(ctx)
In my case I need to find a specific document using a Where condition, and this is where I get stuck. So far I've only been able to come up with the following:
database.Collection("users").Where("name", "==", "Mark").Limit(1).Documents(ctx).GetAll()
And it is obviously not the best solution, since I am looking for only one (basically the first) document that matches the condition, and using GetAll() seems really weird. What would be the best approach?

The application can call Next and Stop to get a single document:
import (
    "context"
    "fmt"

    "cloud.google.com/go/firestore"
    "google.golang.org/api/iterator"
)

func getOne(ctx context.Context, q firestore.Query) (*firestore.DocumentSnapshot, error) {
    it := q.Limit(1).Documents(ctx)
    defer it.Stop()
    snap, err := it.Next()
    // iterator.Done is returned when no document matches the query.
    if err == iterator.Done {
        err = fmt.Errorf("no matching documents")
    }
    return snap, err
}

Related

Multiple queries to Postgres within the same function

I'm new to Go, so sorry for the silly question in advance!
I'm using Gin framework and want to make multiple queries to the database within the same handler (database/sql + lib/pq)
userIds := []int{}
bookIds := []int{}
var id int

/* Handling first query here */
rows, err := pgClient.Query(getUserIdsQuery)
defer rows.Close()
if err != nil {
    return
}
for rows.Next() {
    err := rows.Scan(&id)
    if err != nil {
        return
    }
    userIds = append(userIds, id)
}

/* Handling second query here */
rows, err = pgClient.Query(getBookIdsQuery)
defer rows.Close()
if err != nil {
    return
}
for rows.Next() {
    err := rows.Scan(&id)
    if err != nil {
        return
    }
    bookIds = append(bookIds, id)
}
I have a couple of questions regarding this code (any improvements and best practices would be appreciated):
Does Go properly handle defer rows.Close() in such a case? I mean, rows is reassigned later in the code, so will the compiler track both values and properly close them at the end of the function?
Is it OK to reuse the shared id variable, or should I redeclare it inside the rows.Next() loop?
What's the better approach for having even more queries within one handler? Should I have some kind of Writer that accepts a query and a slice and populates the slice with the retrieved ids?
Thanks.
I've never worked with the go-pg library, and my answer mostly focuses on the other stuff, which is generic and not specific to Go or go-pg.
On the defer question: each defer rows.Close() captures the value rows holds at the moment the defer statement runs, so both result sets do get closed even though the variable is reassigned. Still, defining two variables, like userRows and bookRows, is cleaner. Also note that the defer should come after the error check: if Query returns an error, rows is nil, and the deferred Close on a nil *sql.Rows will panic when the function returns.
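Whether the reassigned rows gets closed comes down to when defer evaluates its receiver: Go evaluates it at the defer statement, not at function exit. A runnable sketch of that semantics, using a hypothetical closer type standing in for *sql.Rows:

```go
package main

import "fmt"

// closer stands in for *sql.Rows: Close records which value it ran on.
type closer struct {
	name string
	log  *[]string
}

func (c *closer) Close() { *c.log = append(*c.log, c.name) }

// demo reassigns c between two defers, the same shape as reassigning
// rows between two queries in the question.
func demo(log *[]string) {
	c := &closer{name: "first", log: log}
	defer c.Close() // receiver evaluated here: will close "first"
	c = &closer{name: "second", log: log}
	defer c.Close() // receiver evaluated here: will close "second"
}

func main() {
	var log []string
	demo(&log)
	fmt.Println(log) // [second first]: both values closed, in LIFO order
}
```

Both distinct values are closed, so the reassignment itself is safe; the readability argument for separate variables still stands.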
Although I already said I haven't worked with go-pg, I believe you won't need to iterate through the rows and scan the id manually; the library seems to provide an API like this (based on a quick look at the documentation):
userIds := []int{}
_, err := pgClient.Query(&userIds, "select id from users where ...", args...)
Regarding your second question, it depends on what you mean by "ok". Since you're doing synchronous iteration, I don't think it would result in bugs, but when it comes to coding style, personally, I wouldn't do it.
I think that the best thing to do in your case is this:
// repo layer
func getUserIds(args whatever) ([]int, error) {...}
// these can be exposed, based on your packaging logic
func getBookIds(args whatever) ([]int, error) {...}

// service layer, or wherever you want to aggregate both queries
func getUserAndBookIds() ([]int, []int, error) {
    userIds, err := getUserIds(...)
    if err != nil {
        return nil, nil, err
    }
    bookIds, err := getBookIds(...)
    if err != nil {
        return nil, nil, err
    }
    return userIds, bookIds, nil
}
I think this code is easier to read and maintain, and you won't face the variable-reassignment and other issues.
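A runnable sketch of this layering, with the repo functions stubbed out (the stubs, their lack of arguments, and the data they return are made up purely for illustration):

```go
package main

import "fmt"

// Stubbed repo layer: in real code these would run the two SQL
// queries; the returned ids here are invented for the example.
func getUserIds() ([]int, error) { return []int{1, 2, 3}, nil }
func getBookIds() ([]int, error) { return []int{10, 20}, nil }

// Service layer: aggregate both queries, stopping at the first error
// and wrapping it with context about which query failed.
func getUserAndBookIds() ([]int, []int, error) {
	userIds, err := getUserIds()
	if err != nil {
		return nil, nil, fmt.Errorf("fetching user ids: %w", err)
	}
	bookIds, err := getBookIds()
	if err != nil {
		return nil, nil, fmt.Errorf("fetching book ids: %w", err)
	}
	return userIds, bookIds, nil
}

func main() {
	users, books, err := getUserAndBookIds()
	fmt.Println(users, books, err) // [1 2 3] [10 20] <nil>
}
```

Each query lives in its own function with its own rows variable, so the defer-after-reassignment question disappears entirely.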
You can take a look at the go-pg documentation for more details on how to improve your queries.

Running SQL queries in Tarantool from Go is silent (no errors)

I'm trying to run SQL queries from a Golang application using the official Tarantool client. The only way I know how to do it is by using conn.Eval, like below. But I don't receive any errors: I can drop non-existent tables and insert rows with duplicate keys, and I will never find out that something went wrong.
resp, err := conn.Eval("box.execute([[TRUNCATE TABLE not_exists;]])", []interface{}{})
// err is always nil
// resp.Error is always empty
Can you point out the way to get errors, or the right way to run SQL queries?
Thanks for the question!
I have talked to the team and we have two options for you. Here is the first one:
resp, err := conn.Eval("return box.execute([[TRUNCATE TABLE \"not_exists\";]])", []interface{}{})
if err != nil {
    log.Fatalln(err)
}
if len(resp.Tuples()) > 1 {
    fmt.Println("Error", resp.Tuples()[1])
} else {
    fmt.Println("Result", resp.Tuples()[0])
}
And here is the second one:
r, err := tnt.Eval("local data, err = box.execute(...) return data or box.error(err)", []interface{}{
    `TRUNCATE table "not_exists";`,
})
if err != nil {
log.Fatalln(err)
}
I hope that helps! And if it doesn't - let me know and we will look into this one more time.

How to implement multiple delete with bulkDelete

I have an array of profile ids (uid) and need to delete all these profiles with one request.
Here is my code.
func MultipleDeleteFromElastic(index string, inType string, uid string, ct interface{}) error {
    client, err := GetElasticCon()
    if err != nil {
        ElasticConnectError.DeveloperMessage = err.Error()
        return ElasticConnectError
    }
    deleteReq := elastic.NewBulkDeleteRequest().Index(index).Type(inType).Id(uid)
    _, err = client.Bulk().Add(deleteReq).Do(context.Background())
    if err != nil {
        ElasticConnectError.DeveloperMessage = err.Error()
        return ElasticConnectError
    }
    return nil
}
What does BulkDelete need? How can I pass an array to BulkDelete?
I have no idea if I'm doing this right (obviously I am not).
Have you tried using Delete By Query with Go?
The delete-by-query API from ES lets you delete all the documents that satisfy a certain query.
Just be careful: if you don't provide any query, all the documents in the index will be deleted. You know, like the DELETE without a WHERE joke :P
Hope this is helpful :D
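To pass a whole array with the bulk approach instead, you can build one delete request per id in a loop and Add them all before calling Do. The body the Bulk API ultimately receives is NDJSON, one action line per document; here is a minimal standard-library sketch of that payload (the index and type names come from the question, while the helper name and the ids are made up):

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// bulkDeleteBody builds the NDJSON body that the Elasticsearch Bulk API
// expects for deletions: one {"delete": ...} action line per document id.
func bulkDeleteBody(index, docType string, ids []string) (string, error) {
	var b strings.Builder
	for _, id := range ids {
		action := map[string]map[string]string{
			"delete": {"_index": index, "_type": docType, "_id": id},
		}
		line, err := json.Marshal(action)
		if err != nil {
			return "", err
		}
		b.Write(line)
		b.WriteByte('\n') // NDJSON: a newline terminates every action line
	}
	return b.String(), nil
}

func main() {
	body, err := bulkDeleteBody("profiles", "profile", []string{"a1", "b2"})
	if err != nil {
		panic(err)
	}
	fmt.Print(body)
}
```

With the client from the question, the equivalent would be a loop that calls Add with a NewBulkDeleteRequest per uid and then a single Do, so all deletions go out in one request.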

Elastic Go cannot find document

Sorry if this is a dumb question. I'm using a service built with an Elasticsearch client for Go. I run the service, and now it seems like the Elasticsearch server has the index of the data. However, when I tried to query the data with http://129.94.14.234:9200/chromosomes/chromosome/1, I got {"_index":"chromosomes","_type":"chromosome","_id":"1","_version":1,"found":true,"_source":{"id":"1","length":249250621}}. I checked that the SQL query against the database returns those data. Now the question is: how do I check that my Elasticsearch index has those data? Or if anyone can tell me what might be wrong in the code, that would be great as well.
Here's the code that I assume adding the documents to chromosomes index.
func (c ChromosomeIndexer) AddDocuments(db *sql.DB, client *elastic.Client, coordID int) {
    sqlQuery := fmt.Sprintf("SELECT seq_region.name, seq_region.length FROM seq_region WHERE seq_region.`name` REGEXP '^[[:digit:]]{1,2}$|^[xXyY]$|(?i)^mt$' AND seq_region.`coord_system_id` = %d", coordID)
    stmtOut, err := db.Prepare(sqlQuery)
    check(err)
    defer stmtOut.Close()
    rows, err := stmtOut.Query()
    check(err)
    defer rows.Close()
    chromoFn := func(rows *sql.Rows, bulkRequest *elastic.BulkService) {
        var name string
        var length int
        err = rows.Scan(&name, &length)
        check(err)
        chromo := Chromosome{ID: name, Length: length}
        fmt.Printf("chromoID: %s\n", chromo.ID)
        req := elastic.NewBulkIndexRequest().
            OpType("index").
            Index("chromosomes").
            Type("chromosome").
            Id(chromo.ID).
            Doc(chromo)
        bulkRequest.Add(req)
    }
    elasticutil.IterateSQL(rows, client, chromoFn)
}
This service has other indexes which I can query with no problem; I only have a problem when querying the chromosomes data.
Please let me know if I need to include more code to give more context on the problem. I just started with Go and Elasticsearch, and I tried reading the documentation, but it just leads to more confusion.

Spawning go routines in a loop with closure

I have a list of strings which can contain anywhere from 1 to 100,000 elements. I want to verify each string and see if it is stored in a database, which requires a network call.
In order to maximize efficiency, I want to spawn a goroutine for each element.
The goal is to return false if any of the verifications inside the goroutines returns an error, and true if none do. So if we find at least one error we can stop, since we already know the result will be false.
This is the basic idea, and the function below is the structure I've been thinking about so far. I'd like to know if there is a better way (perhaps using a channel?).
for _, id := range userIdList {
    go func(id string) {
        user, err := verifyId(id)
        if err != nil {
            return err
        }
        // ...
        // few more calls to other APIs for verifications
        if err != nil {
            return err
        }
    }(id)
}
I've written a small function that might be helpful for you.
Please take a look at limited parallel operations.
