How to select a specific column in GORM?

I've tried to write a column-specific Select, but it always fails with the error message: Scan error on column index 0, name "genre_name": unsupported Scan, storing driver.Value type string into type *models.MovieGenre. How can I solve it?
func (MovieRepositoryImpl *MovieRepositoryImpl) GetMovieById(id int) (*Movie, error) {
	var movie Movie
	err := MovieRepositoryImpl.DB.First(&movie, id).Error
	if err != nil {
		return nil, err
	}
	var movie_genres []*MovieGenre
	err = MovieRepositoryImpl.DB.Select("genres.genre_name").
		Joins("JOIN genres ON genres.id = movie_genres.genre_id").
		Preload("Genre").Where("movie_id = ?", id).
		Find(&movie_genres).Error
	if err != nil {
		return nil, err
	}
	movie.MovieGenre = append(movie.MovieGenre, movie_genres...)
	return &movie, nil
}
Without the Select the code works, but I want to select only the genre name for the API.
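The Select makes the query return a single string column, which GORM cannot scan into the *MovieGenre elements of the slice, hence the error. If only the names are needed for the API, one common fix, sketched here under the assumption that the models are as above, is to Pluck the column into a []string:
var genreNames []string
err = MovieRepositoryImpl.DB.Model(&MovieGenre{}).
	Joins("JOIN genres ON genres.id = movie_genres.genre_id").
	Where("movie_id = ?", id).
	// Pluck scans a single selected column into a slice of a primitive type.
	Pluck("genres.genre_name", &genreNames).Error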

sql: Scan error on column index 0, name "genre_name": unsupported Scan, storing driver.Value type string into type *models.MovieGenre

I want to select a specific column using GORM, but when I try it like this:
err = MovieRepositoryImpl.DB.Joins("JOIN genres ON genres.id = movie_genres.genre_id").
	Where("movie_id = ?", id).Select("genres.genre_name").Find(&movie_genres).Error
I get the error: sql: Scan error on column index 0, name "genre_name": unsupported Scan, storing driver.Value type string into type *models.MovieGenre.
This is the complete code:
func (MovieRepositoryImpl *MovieRepositoryImpl) GetMovieById(id int) (*Movie, error) {
	var movie Movie
	err := MovieRepositoryImpl.DB.First(&movie, id).Error
	if err != nil {
		return nil, err
	}
	var movie_genres []*MovieGenre
	err = MovieRepositoryImpl.DB.Joins("JOIN genres ON genres.id = movie_genres.genre_id").
		Where("movie_id = ?", id).Select("genres.genre_name").Find(&movie_genres).Error
	if err != nil {
		return nil, err
	}
	err = MovieRepositoryImpl.DB.First(&movie_genres, id).Error
	movie.MovieGenre = append(movie.MovieGenre, movie_genres...)
	return &movie, nil
}
Without the Select, the code works. How can I solve this?
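As in the previous question, Select("genres.genre_name") yields a single string column that cannot be scanned into *MovieGenre. Another option is to scan the selected column into a small result struct whose field matches it; a sketch of that approach (genreRow is a hypothetical name, not from the original code):
// genreRow holds only the column selected for the API response.
type genreRow struct {
	GenreName string
}

var rows []genreRow
err = MovieRepositoryImpl.DB.Model(&MovieGenre{}).
	Joins("JOIN genres ON genres.id = movie_genres.genre_id").
	Where("movie_id = ?", id).
	Select("genres.genre_name").
	// Scan maps the genre_name column onto genreRow.GenreName.
	Scan(&rows).Error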

How to list all the items in a table with pagination

I'm trying to list all the items in a DynamoDB table with pagination, and below is my attempt:
const tableName = "RecordingTable"

type Recording struct {
	ID        string `dynamodbav:"id"`
	CreatedAt string `dynamodbav:"createdAt"`
	UpdatedAt string `dynamodbav:"updatedAt"`
	Duration  int    `dynamodbav:"duration"`
}

type RecordingRepository struct {
	ctx context.Context
	svc *dynamodb.Client
}

func NewRecordingRepository(ctx context.Context) (*RecordingRepository, error) {
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		return nil, err
	}
	return &RecordingRepository{ctx, dynamodb.NewFromConfig(cfg)}, nil
}

func (r *RecordingRepository) List(page int, size int) ([]Recording, error) {
	size32 := int32(size)
	queryInput := &dynamodb.QueryInput{
		TableName: aws.String(tableName),
		Limit:     &size32,
	}
	recordings := []Recording{}
	queryPaginator := dynamodb.NewQueryPaginator(r.svc, queryInput)
	for i := 0; queryPaginator.HasMorePages(); i++ {
		result, err := queryPaginator.NextPage(r.ctx)
		if err != nil {
			return nil, err
		}
		if i == page {
			if result.Count > 0 {
				for _, v := range result.Items {
					recording := Recording{}
					if err := attributevalue.UnmarshalMap(v, &recording); err != nil {
						return nil, err
					}
					recordings = append(recordings, recording)
				}
			}
			break
		}
	}
	return recordings, nil
}
When I run the code above, I get the following error message:
api error ValidationException: Either the KeyConditions or KeyConditionExpression parameter must be specified in the request.
But why should I specify a KeyConditionExpression when I want to get all the items? Is there another way to go, or a workaround for this?
Query does need your keys: it is meant to find specific items in your DynamoDB table. To get all the items, you need the Scan operation instead.
This is easily fixed in your code: use ScanInput instead of QueryInput, and NewScanPaginator instead of NewQueryPaginator.
Just replaced QueryInput with ScanInput and NewQueryPaginator with NewScanPaginator, and it works.
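Applied to the List method above, that change looks roughly like this (an untested sketch reusing the question's Recording and RecordingRepository types):
func (r *RecordingRepository) List(page int, size int) ([]Recording, error) {
	scanInput := &dynamodb.ScanInput{
		TableName: aws.String(tableName),
		Limit:     aws.Int32(int32(size)),
	}
	recordings := []Recording{}
	// ScanPaginator walks the table page by page, no key condition required.
	paginator := dynamodb.NewScanPaginator(r.svc, scanInput)
	for i := 0; paginator.HasMorePages(); i++ {
		result, err := paginator.NextPage(r.ctx)
		if err != nil {
			return nil, err
		}
		if i == page {
			for _, v := range result.Items {
				recording := Recording{}
				if err := attributevalue.UnmarshalMap(v, &recording); err != nil {
					return nil, err
				}
				recordings = append(recordings, recording)
			}
			break
		}
	}
	return recordings, nil
}
Note that this still reads every page up to the requested one on each call; DynamoDB paginates with LastEvaluatedKey/ExclusiveStartKey cursors rather than numeric offsets.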

Update a MongoDB collection to create a new field with unique values, without impacting existing data, using the mongo go driver

I am new to Mongo and the mongo go driver. I need to add a new field "uri" to a collection with existing data, using the mongo go driver. The new field needs to be populated with unique values so that a unique index can be created on it. The collection has an _id field as well; if there is a way to populate the new field based on _id, that will work too.
I am trying the code below, but I'm not sure how to populate unique values.
// Step 1: update all documents to add the new field with unique values
_, err := myColl.UpdateMany(
	ctx,
	bson.D{}, // select all docs in collection
	bson.D{
		{"$set", bson.D{{"uri", GenerateRandomUniqueString()}}},
	},
)
if err != nil {
	return err
}

// then the next step is to create an index on this field:
key := bson.D{{"uri", 1}}
opt := options.Index().SetName("uri-index").SetUnique(true)
model := mongo.IndexModel{Keys: key, Options: opt}
_, err = myColl.Indexes().CreateOne(ctx, model)
if err != nil {
	return err
}
Once the index is set up, old records will be marked read-only, but we cannot delete them. New data will have a unique 'uri' string value.
Any help is much appreciated.
The above code fails during unique index creation, because GenerateRandomUniqueString() is evaluated once and the same value is used to backfill every document.
I tried this as well:
func BackFillUri(db *mongo.Database) error {
	myColl := db.Collection("myColl")
	ctx := context.Background()
	cursor, err := myColl.Find(ctx, bson.M{})
	if err != nil {
		return err
	}
	defer cursor.Close(ctx)
	for cursor.Next(ctx) {
		var ds bson.M
		if err = cursor.Decode(&ds); err != nil {
			return err
		}
		_, err1 := myColl.UpdateOne(
			ctx,
			bson.D{"_id": ds.ObjectId},
			bson.D{
				{"$set", bson.D{{"uri", rand.Float64()}}},
			},
		)
		if err1 != nil {
			return err1
		}
	}
	return nil
}
But I am getting quite a few errors, and I'm not sure whether any of the above logic is correct.
I finally used the code below; hope it helps someone who's new like me :-)
const charset = "abcdefghijklmnopqrstuvwxyz" +
	"ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"

var seededRand *rand.Rand = rand.New(
	rand.NewSource(time.Now().UnixNano()))

func RandomUniqueString(length int) string {
	return StringWithCharset(length, charset)
}

func StringWithCharset(length int, charset string) string {
	b := make([]byte, length)
	for i := range b {
		b[i] = charset[seededRand.Intn(len(charset))]
	}
	return string(b)
}
// AddUriIndex backfills uri and adds a unique index on it.
func AddUriIndex(db *mongo.Database) error {
	mycoll := db.Collection("mycoll")
	ctx := context.Background()

	// backfill code starts
	type attribute struct {
		Key   string      `bson:"key"`
		Value interface{} `bson:"value"`
	}
	type item struct {
		ID                 primitive.ObjectID `bson:"_id"`
		ResourceAttributes []attribute        `bson:"resourceAttributes,omitempty"`
	}
	cursor, err := mycoll.Find(ctx, primitive.M{})
	if err != nil {
		return err
	}
	defer cursor.Close(ctx)
	for cursor.Next(ctx) {
		var result item
		if err = cursor.Decode(&result); err != nil {
			return err
		}
		//fmt.Println("Found() result:", result)
		filter := primitive.M{"_id": result.ID}
		update := primitive.M{"$set": primitive.M{"uri": RandomUniqueString(32)}}
		if _, err := mycoll.UpdateOne(ctx, filter, update); err != nil {
			return err
		}
	}

	// add uri-index starts
	key := bson.D{{"uri", 1}}
	opt := options.Index().
		SetName("uri-index").
		SetUnique(true)
	model := mongo.IndexModel{Keys: key, Options: opt}
	_, err = mycoll.Indexes().CreateOne(ctx, model)
	if err != nil {
		return err
	}
	return nil
}
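Since _id is already unique, the backfill can also derive uri from the ObjectID itself, which removes any collision risk from the random strings. A sketch of that variant (same loop as above, different update):
for cursor.Next(ctx) {
	var result item
	if err = cursor.Decode(&result); err != nil {
		return err
	}
	filter := primitive.M{"_id": result.ID}
	// Hex() renders the 12-byte ObjectID as a 24-character string that is
	// unique per document, so the unique index cannot collide.
	update := primitive.M{"$set": primitive.M{"uri": result.ID.Hex()}}
	if _, err := mycoll.UpdateOne(ctx, filter, update); err != nil {
		return err
	}
}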

Is there something like sql.NullJson akin to sql.NullString in golang?

I am querying Postgres using Go; one of the fields contains JSON that can sometimes be NULL.
Like this:
row := db.QueryRow(
	"select my_string, my_json from my_table where my_string = $1",
	my_id)
var my_string sql.NullString
var myjson MyJsonStruct
err := row.Scan(&my_string, &myjson)
But I am getting:
sql: Scan error on column index 2, name "my_json": unsupported Scan, storing driver.Value type <nil> into type *main.MyJsonStruct
I checked https://godoc.org/database/sql but didn't find sql.NullJson. What is the Go way of dealing with this situation?
No, there is no sql.NullJson. I think the best way of dealing with a JSON column in the database is to implement the sql.Scanner and driver.Valuer interfaces, so something like this:
// Scan implements the database/sql.Scanner interface.
func (m *MyJsonStruct) Scan(src interface{}) error {
	if src == nil {
		return nil
	}
	data, ok := src.([]byte)
	if !ok {
		return errors.New("type assertion to []byte failed")
	}
	var myJsonStruct MyJsonStruct
	if err := json.Unmarshal(data, &myJsonStruct); err != nil {
		return fmt.Errorf("unmarshal myJsonStruct: %w", err)
	}
	*m = myJsonStruct
	return nil
}

// Value implements the database/sql/driver.Valuer interface.
func (m MyJsonStruct) Value() (driver.Value, error) {
	data, err := json.Marshal(m)
	if err != nil {
		return nil, fmt.Errorf("marshal myJsonStruct: %w", err)
	}
	return data, nil
}
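With those two methods in place, a *MyJsonStruct can be passed straight to row.Scan; a NULL column simply leaves the struct at its zero value (a short sketch based on the question's query):
var my_string sql.NullString
var myjson MyJsonStruct // stays the zero value when my_json is NULL
err := row.Scan(&my_string, &myjson)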

Reading BigQuery in Golang: not all expected results are returned. What to do?

The SQL runs perfectly in the Query Editor, yet after assigning the results to a struct, the data seems to have different values. Why is that?
var RunQuery = func(req *http.Request, query string) (*bigquery.RowIterator, error) {
	ctx := appengine.NewContext(req)
	ctxWithDeadline, _ := context.WithTimeout(ctx, 30*time.Minute)
	bqClient, bqErr := bigquery.NewClient(ctxWithDeadline, project, option.WithCredentialsFile(serviceAccount))
	if bqErr != nil {
		log.Errorf(ctx, "%v", bqErr)
		return nil, bqErr
	}
	q := bqClient.Query(query)
	job, err := q.Run(ctx)
	if err != nil {
		log.Errorf(ctx, "%v", err)
		return nil, err
	}
	status, err := job.Wait(ctx)
	if err != nil {
		log.Errorf(ctx, "%v", err)
		return nil, err
	}
	if err := status.Err(); err != nil {
		log.Errorf(ctx, "%v", err)
		return nil, err
	}
	it, err := job.Read(ctx)
	if err != nil {
		log.Errorf(ctx, "%v", err)
		return nil, err
	}
	log.Infof(ctx, "Total Rows: %v", it.TotalRows)
	return it, nil
}

type Customers struct {
	CustomerName string `bigquery:"customer_name"`
	CustomerAge  int    `bigquery:"customer_age"`
}
var rowsRead int

func main() {
	query := `SELECT
		name as customer_name,
		age as customer_age
	FROM customer_table
	WHERE customerStatus = '0'`
	customerInformation, customerInfoErr := RunQuery(req, query)
	if customerInfoErr != nil {
		log.Errorf(ctx, "Fetching customer information error :: %v", customerInfoErr)
		return
	}
	for {
		var row Customers
		err := customerInformation.Next(&row)
		log.Infof(ctx, "row %v", row)
		if err == iterator.Done {
			log.Infof(ctx, "ITERATION COMPLETE. Rows read %v", rowsRead)
			break
		}
		rowsRead++
	}
}
Let's say I have query results of:
customer_name | customer_age
cat           | 2
dog           | 3
horse         | 10
But after assigning it to a struct, the results were:
customer_name | customer_age
""            | 2
dog           | ""
""            | ""
Why is it like this? I even tested it in chunks with the limit set to 1000, with the same results. But the query results in the Query Editor are what I expect.
Solved it using the value loader bigquery.Value: instead of mapping the query results to the expected struct, I used a map[string]bigquery.Value. I still don't know why mapping the query results to the expected struct wasn't working. Here is my solution:
for {
	row := make(map[string]bigquery.Value)
	err := customerInformation.Next(&row)
	log.Infof(ctx, "row %v", row)
	if err == iterator.Done {
		log.Infof(ctx, "ITERATION COMPLETE. Rows read %v", rowsRead)
		break
	}
	rowsRead++
}
From the documentation:
If dst is a pointer to a struct, each column in the schema will be matched with an exported field of the struct that has the same name, ignoring the case. Unmatched schema columns and struct fields will be ignored.
cloud.google.com/go/bigquery
Here you are trying to resolve customer_age to a struct field named CustomerAge. If you rename the field to Customer_Age it should work.
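For illustration, a sketch of the struct with the fields renamed so that the case-insensitive name matching quoted above succeeds (the fields must stay exported so the package can set them):
type Customers struct {
	Customer_Name string // matched with column customer_name, ignoring case
	Customer_Age  int    // matched with column customer_age, ignoring case
}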
