Using GORM to retrieve table names from PostgreSQL - go

I'm looking to retrieve table names from my PostgreSQL database. I know that in Go you can use the database/sql package and the pq driver, but I'm using GORM for the queries in my REST API.
The type of table_name in PostgreSQL is "information_schema.sql_identifier". This is what I was trying to do, but that type isn't string.
var tables []string
if err := db.Table("information_schema.tables").Select("table_name").Where("table_schema = ?", "public").Find(&tables).Error; err != nil {
    panic(err)
}

TL;DR
To select the values of a single column into a slice using Gorm, you can use the db.Pluck helper:
var tables []string
if err := db.Table("information_schema.tables").Where("table_schema = ?", "public").Pluck("table_name", &tables).Error; err != nil {
    panic(err)
}
TS;WM
Consider this: a SELECT statement returns a set of rows with one or more columns. In order to map those to Go code, we need some sort of struct so that Gorm can understand which column is mapped to which field of the struct. Even when you select only a single column, it's still a struct, just with a single field.
type Table struct {
    TableName string
    // more fields if needed...
}
So your output variable should be []*Table:
var tables []*Table
if err := db.Table("information_schema.tables").Select("table_name").Where("table_schema = ?", "public").Find(&tables).Error; err != nil {
    panic(err)
}
Note: it could be []Table as well if you don't need to modify the elements inside the slice.
If you don't want to define the struct, you can use the db.Pluck function, which is essentially a helper for this sort of code:
rows, err := db.Table("information_schema.tables").Select("table_name").Where("table_schema = ?", "public").Rows()
if err != nil {
    panic(err)
}
defer rows.Close()
var tables []string
var name string
for rows.Next() {
    rows.Scan(&name)
    tables = append(tables, name)
}
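As a side note (an assumption on my part, in case you're on GORM v2 / gorm.io/gorm): the Migrator interface exposes a GetTables helper that returns the current database's table names directly, so you can skip the information_schema query entirely:
// Sketch, assuming GORM v2: Migrator().GetTables() returns all table
// names visible to the current connection.
tables, err := db.Migrator().GetTables()
if err != nil {
    panic(err)
}
fmt.Println(tables)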

Related

GORM return list of lists of results or map of results with group by id

Essentially, using GORM, my current code looks something like this:
res := []*modelExample{}
DB.Model(&modelExample{}).
    Order("task_id").
    Find(&res)
What I do with res is manually loop through it and append the models with the same task_id into one list, and then append this list to be worked on. The reason I need to do this is that there are some specific operations I need to do on specific columns, which I need to extract, and which I can't do in GORM.
However, is there a way to do this more efficiently, where I return something like a list of lists, which I can then loop over and do my operation on each list element?
You should be able to achieve your needs with the following code snippet:
package main

import (
    "fmt"

    "gorm.io/driver/postgres"
    "gorm.io/gorm"
)

type modelExample struct {
    TaskId int
    Name   string
}

func main() {
    dsn := "host=localhost user=postgres password=postgres dbname=postgres port=5432 sslmode=disable"
    db, err := gorm.Open(postgres.Open(dsn), &gorm.Config{})
    if err != nil {
        panic(err)
    }
    db.AutoMigrate(&modelExample{})
    // here you should populate the database with some data

    // querying
    res := make(map[int][]modelExample)
    rows, err := db.Table("model_examples").Select("task_id, name").Rows()
    if err != nil {
        panic(err)
    }
    defer rows.Close()

    // scanning
    for rows.Next() {
        var taskId int
        var name string
        if err := rows.Scan(&taskId, &name); err != nil {
            panic(err)
        }
        if _, isFound := res[taskId]; !isFound {
            res[taskId] = []modelExample{{taskId, name}}
            continue
        }
        res[taskId] = append(res[taskId], modelExample{taskId, name})
    }
    // always a good idea to check for errors after scanning
    if err = rows.Err(); err != nil {
        panic(err)
    }

    for _, v := range res {
        fmt.Println(v)
    }
}
After the initial setup, let's take a closer look at the querying section.
First, you're going to get all the records from the table. The records you get are stored in the rows variable.
In the for loop, you scan all of the records. Each record will be either added as a new map entry or appended to an existing one (if the taskId is already present in the map).
This is the easiest way to create different lists based on a specific column (e.g. the TaskId). Actually, from what I understood, you need to split the records rather than grouping them with an aggregation function (e.g. COUNT, SUM, and so on).
The other code I added was just put in for clarity.
Let me know if this solves your issue or if you need something else, thanks!
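As a smaller variation (a sketch on my side, reusing the modelExample type from above), you can also let GORM scan everything into a slice with Find and do the grouping in plain Go, which avoids the manual Rows/Scan loop:
// Sketch: fetch all rows via Find, then group by TaskId in plain Go.
var all []modelExample
if err := db.Order("task_id").Find(&all).Error; err != nil {
    panic(err)
}
grouped := make(map[int][]modelExample)
for _, m := range all {
    grouped[m.TaskId] = append(grouped[m.TaskId], m)
}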

How to avoid sql connection close if thousands of Goroutines try to access the database

I have a chapters table with about 2,000,000 rows, and I want to update each row under some specific conditions:
func main() {
    rows, err := db.Query("SELECT id FROM chapters where title = 'custom_type'")
    if err != nil {
        panic(err)
    }
    for rows.Next() {
        var id int
        _ = rows.Scan(&id)
        fmt.Println(id)
        go updateRowForSomeReason(id)
    }
}

func updateRowForSomeReason(id int) {
    rows, err := db.Query(fmt.Sprintf("SELECT id FROM chapters where parent_id = %v", id))
    if err != nil {
        panic(err) // <----- here is where the panic occurs
    }
    for rows.Next() {
        // update code omitted for simplicity
    }
}
Inside updateRowForSomeReason, I execute an update statement for each row.
It works for a few seconds, after which the error is printed:
323005
323057
323125
323244
323282
323342
323459
323498
323556
323618
323693
325343
325424
325468
325624
325816
326001
326045
326082
326226
326297
panic: sql: database is closed
This doesn't seem to be a Go problem as such, more a question of how to optimally structure your SQL within your code. You're taking a result set from executing a query on 2,000,000 rows:
rows, err := db.Query("SELECT id FROM chapters where title = 'custom_type'")
then executing another query for each row in this result set:
row, err := db.Query(fmt.Sprintf("SELECT id FROM chapters where parent_id = %v", id))
and then executing some more code for each of those, apparently one by one:
for rows.Next() {
    // update code omitted for simplicity
}
This is effectively two levels of nesting of statements, which is a very inefficient way to load all these results into program memory and then execute independent UPDATE statements:
SELECT
+---->SELECT
+---->UPDATE
Instead, you could be doing all the work in the database itself, which would be much more efficient. You don't show what the UPDATE statement is, but this is the key part. Let's say you want to set a publish flag. You could do something like this:
UPDATE chapters
SET publish=true
WHERE parent_id in
(SELECT id FROM chapters
WHERE title='custom_type')
RETURNING id;
By using a nested query, you can combine all three separate queries into one single query. The database has all the information it needs to optimise the operation and build the most efficient query plan, and you are only executing a single db.Query operation. The RETURNING clause lets you retrieve the list of ids that ended up being updated in the operation. So the code would be as simple as:
func main() {
    rows, err := db.Query("UPDATE chapters SET publish=true WHERE parent_id in " +
        "(SELECT id FROM chapters WHERE title='custom_type') " +
        "RETURNING id;")
    if err != nil {
        panic(err)
    }
    defer rows.Close()
    for rows.Next() {
        var id int
        _ = rows.Scan(&id)
        fmt.Println(id)
    }
}

Gorm Update and Get the Updated Rows in a single operation?

Is there any way to get the rows that have been updated by an update command in Gorm, in a single operation?
I know this is like a million years old, but for the sake of completeness, here's the Gorm way of doing it - clauses.
result := r.Gdb.Model(&User{}).Clauses(clause.Returning{}).Where("id = ?", "000-000-000").Updates(content)
Ref: Gorm Returning Data From Modified Rows
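Spelled out a bit more (a sketch, assuming GORM v2 on a dialect that supports RETURNING, such as Postgres; the User model and column values are the placeholders from above): the updated rows are written back into the value you pass to Model:
import "gorm.io/gorm/clause"

// Sketch: clause.Returning{} makes Updates write the affected rows
// back into the slice passed to Model.
var users []User
err := r.Gdb.Model(&users).
    Clauses(clause.Returning{}).
    Where("id = ?", "000-000-000").
    Updates(content).
    Error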
It's not pretty, but since you are using Postgres you can do:
realDB := db.DB()
rows, err := realDB.Query("UPDATE some_table SET name = 'a' WHERE name = 'b' RETURNING id, name")
// you could probably do db.Raw but I'm not sure
if err != nil {
    log.Fatal(err)
}
defer rows.Close()
for rows.Next() {
    var id int
    var name string
    err := rows.Scan(&id, &name)
    if err != nil {
        log.Fatal(err)
    }
    log.Println(id, name)
}
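On the db.Raw question in the comment above: Gorm can indeed run the same statement itself and scan the returned rows into a struct slice, along these lines (a sketch; the result struct is mine):
// Sketch: the same UPDATE ... RETURNING through Gorm's Raw/Scan.
type updatedRow struct {
    ID   int
    Name string
}
var updated []updatedRow
if err := db.Raw("UPDATE some_table SET name = 'a' WHERE name = 'b' RETURNING id, name").Scan(&updated).Error; err != nil {
    log.Fatal(err)
}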
This is a decent solution if you know the number of rows you're updating is relatively small (< 1000):
var ids []int
var results []YourModel
// Get the list of rows that will be affected
db.Where("YOUR CONDITION HERE").Table("your_table").Select("id").Find(&ids)
query := db.Where("id IN (?)", ids)
// Do the update
query.Model(&YourModel{}).Updates(YourModel{field: "value"})
// Get the updated rows
query.Find(&results)
This is safe against race conditions since it uses the list of IDs to do the update instead of repeating the WHERE clause. As you can imagine this becomes a bit less practical when you start talking about thousands of rows.
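If the window between the SELECT and the UPDATE bothers you, the same three steps can be wrapped in a transaction (a sketch, assuming a Gorm version that provides the db.Transaction helper; YourModel and the conditions are the placeholders from above):
// Sketch: run select-update-select inside one transaction.
var results []YourModel
err := db.Transaction(func(tx *gorm.DB) error {
    var ids []int
    if err := tx.Table("your_table").Where("YOUR CONDITION HERE").Select("id").Find(&ids).Error; err != nil {
        return err
    }
    if err := tx.Model(&YourModel{}).Where("id IN (?)", ids).Updates(YourModel{field: "value"}).Error; err != nil {
        return err
    }
    return tx.Where("id IN (?)", ids).Find(&results).Error
})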

How to update a BigQuery row in golang

I have a go program connected to a bigquery table. This is the table's schema:
name STRING NULLABLE
age INTEGER NULLABLE
amount INTEGER NULLABLE
I have succeeded in querying the data of this table and printing all rows on the console with this code:
ctx := context.Background()
client, err := bigquery.NewClient(ctx, projectID)
if err != nil {
    log.Fatal(err)
}
q := client.Query("SELECT * FROM test.test_user LIMIT 1000")
it, err := q.Read(ctx)
if err != nil {
    log.Fatal(err)
}
for {
    var values []bigquery.Value
    err := it.Next(&values)
    if err == iterator.Done {
        break
    }
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(values)
}
And I have also succeeded in inserting data into the table from a struct using this code:
type test struct {
    Name   string
    Age    int
    Amount int
}

u := client.Dataset("testDS").Table("test_user").Uploader()
savers := []*bigquery.StructSaver{
    {Struct: test{Name: "Jack", Age: 23, Amount: 123}, InsertID: "id1"},
}
if err := u.Put(ctx, savers); err != nil {
    log.Fatal(err)
}
fmt.Println("rows inserted!!")
Now, what I am failing to do is update rows. What I want to do is select all the rows and update all of them with an operation (for example: amount = amount * 2).
How can I achieve this using golang?
Updating rows is not specific to Go, or any other client library. If you want to update data in BigQuery, you need to use DML (Data Manipulation Language) via SQL. So, essentially you already have the main part working (running a query) - you just need to change this SQL to use DML.
But, a word of caution: BigQuery is an OLAP service. Don't use it for OLTP. Also, there are quotas on using DML. Make sure you familiarise yourself with them.
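For instance, the amount = amount * 2 example from the question is a single DML statement, and running it from Go reuses the same client.Query call you already have (a sketch, with minimal job handling):
// Sketch: run a DML UPDATE as a BigQuery job and wait for it to finish.
q := client.Query("UPDATE test.test_user SET amount = amount * 2 WHERE true")
job, err := q.Run(ctx)
if err != nil {
    log.Fatal(err)
}
status, err := job.Wait(ctx)
if err != nil {
    log.Fatal(err)
}
if status.Err() != nil {
    log.Fatal(status.Err())
}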

Is there a simple way to convert database rows to JSON in Golang

From what I've seen so far, converting database rows to JSON or to []map[string]interface{} is not simple: I have to create two slices and then loop through the columns and create keys every time.
...Some code
tableData := make([]map[string]interface{}, 0)
values := make([]interface{}, count)
valuePtrs := make([]interface{}, count)
for rows.Next() {
    for i := 0; i < count; i++ {
        valuePtrs[i] = &values[i]
    }
    rows.Scan(valuePtrs...)
    entry := make(map[string]interface{})
    for i, col := range columns {
        var v interface{}
        val := values[i]
        b, ok := val.([]byte)
        if ok {
            v = string(b)
        } else {
            v = val
        }
        entry[col] = v
    }
    tableData = append(tableData, entry)
}
Is there any package for this? Or am I missing some basics here?
I'm dealing with the same issue; as far as my investigation goes, it looks like there is no other way.
All the packages that I have seen use basically the same method.
A few things you should know that will hopefully save you time:
the database/sql package converts all the data to the appropriate types;
if you are using the MySQL driver (go-sql-driver/mysql), you need to add a parameter to your db string for it to return type time.Time instead of a string (use ?parseTime=true, the default is false) - see the DSN sketch right after this list.
You can use tools written by the community to offload the overhead:
sqlx, a minimalistic wrapper around database/sql, uses a similar method internally with reflection.
If you need more functionality, try using an "orm": gorp, gorm.
If you're interested in diving deeper, check out:
the use of reflection in the sqlx package, sqlx.go line 560
data type conversion in the database/sql package, convert.go line 86
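For reference, the parseTime flag mentioned in the list above goes directly into the DSN (placeholder credentials, of course):
// Placeholder DSN for go-sql-driver/mysql (imported as _ "github.com/go-sql-driver/mysql");
// parseTime=true makes the driver return time.Time instead of strings for DATE/DATETIME.
db, err := sql.Open("mysql", "user:password@tcp(localhost:3306)/dbname?parseTime=true")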
One thing you could do is create a struct that models your data.
Note: I am using MS SQL Server.
So let's say you want to get a user:
type User struct {
    ID       int    `json:"id,omitempty"`
    UserName string `json:"user_name,omitempty"`
    ...
}
Then you can do this:
func GetUser(w http.ResponseWriter, req *http.Request) {
    var u User
    params := mux.Vars(req)
    db, err := sql.Open("mssql", "server=ServerName")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()
    err = db.QueryRow("select Id, UserName from [Your Database].dbo.Users where Id = ?", params["id"]).Scan(&u.ID, &u.UserName)
    if err != nil {
        log.Fatal(err)
    }
    if err := json.NewEncoder(w).Encode(&u); err != nil {
        log.Fatal(err)
    }
}
Here are the imports I used:
import (
    "database/sql"
    "encoding/json"
    "log"
    "net/http"

    _ "github.com/denisenkom/go-mssqldb"
    "github.com/gorilla/mux"
)
This allowed me to get data from the database and get it into JSON.
This takes a while to code, but it works really well.
Not in the Go distribution itself, but there is the wonderful jmoiron/sqlx:
import "github.com/jmoiron/sqlx"
tableData := make([]map[string]interface{}, 0)
for rows.Next() {
entry := make(map[string]interface{})
err := rows.MapScan(entry)
if err != nil {
log.Fatal("SQL error: " + err.Error())
}
tableData = append(tableData, entry)
}
If you know the data type that you are reading, then you can read into that data type without using a generic interface.
Otherwise, there is no solution regardless of the language used, due to the nature of JSON itself.
JSON has no description of composite data structures. In other words, JSON is a generic key-value structure. When the parser encounters what is supposed to be a specific structure, there is no identification of that structure in JSON itself. For example, if you have a structure User, the parser would not know how a set of key-value pairs maps to your structure User.
The problem of type recognition is usually addressed with a document schema (a.k.a. XSD in the XML world) or explicitly through the passed expected data type.
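To make that concrete, here's a minimal sketch (reusing the User struct from the earlier answer): the same JSON document can be decoded into a specific struct or into a generic map, and only the Go-side target type tells the parser which structure to use:
// Sketch: identical JSON, two different Go targets.
data := []byte(`{"id": 1, "user_name": "jack"}`)

var u User // a specific, known-in-advance structure
_ = json.Unmarshal(data, &u)

var generic map[string]interface{} // no schema: generic key-value pairs
_ = json.Unmarshal(data, &generic)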
One quick way to get an arbitrary, generic []map[string]interface{} from these query libraries is to populate an array of interface pointers with the same size as the number of columns in the query, and then pass that as a parameter to the scan function:
// For example, for the go-mssqldb lib:
queryResponse, err := d.pool.Query(query)
if err != nil {
    return nil, err
}
defer queryResponse.Close()

// Holds all the end results
results := []map[string]interface{}{}

// Getting details about all the fields from the query
fieldNames, err := queryResponse.Columns()
if err != nil {
    return nil, err
}

// Creating interface-type pointers within an array of the same size
// as the number of columns we have, so that we can properly pass this
// to the "Scan" function and get all the query results back :)
var scanResults []interface{}
for range fieldNames {
    var v interface{}
    scanResults = append(scanResults, &v)
}

// Parsing the query results into the result map
for queryResponse.Next() {
    // This variable will hold the values for all the columns, keyed by column name
    rowValues := map[string]interface{}{}

    // Cleaning up old values just in case
    for _, column := range scanResults {
        *(column.(*interface{})) = nil
    }

    // Scan into the array of pointers
    err := queryResponse.Scan(scanResults...)
    if err != nil {
        return nil, err
    }

    // Map the pointers back to their values and the associated column names
    for index, column := range scanResults {
        rowValues[fieldNames[index]] = *(column.(*interface{}))
    }

    results = append(results, rowValues)
}

return results, nil
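Since the question was ultimately about JSON: the resulting []map[string]interface{} marshals directly with encoding/json. For example (queryToMaps is a hypothetical wrapper around the snippet above):
// Sketch: the generic row maps marshal straight to JSON.
rows, err := queryToMaps("SELECT * FROM users") // hypothetical helper wrapping the code above
if err != nil {
    log.Fatal(err)
}
out, err := json.Marshal(rows)
if err != nil {
    log.Fatal(err)
}
fmt.Println(string(out))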
