How to write incrementally to an HTML template in Go

I'm building a web-based data browser called Mavgo Flight.
I want large tables from SQLite to print continuously instead of the default behavior of only printing when all the data is available. I tried executing the template once per row of data, which fails.
func renderHTMLTable(w http.ResponseWriter, result *sqlx.Rows) {
	cols, err := result.Columns()
	if err != nil {
		log.Println(err, "renderHTMLTable")
		return
	}
	tmpl, err := template.ParseFiles("./templates/2d.html")
	if err != nil {
		log.Println("template failed", err)
		return
	}
	data := HTMLTable{}
	data.Cols = cols
	for result.Next() {
		values, err := result.SliceScan()
		if err != nil {
			log.Println(err)
			break
		}
		s := make([]string, len(values))
		for i, v := range values {
			s[i] = fmt.Sprint(v)
		}
		// Executing the whole-page template once per row is what fails.
		tmpl.Execute(w, s)
	}
}

I gave up on being clever and did exactly what Cerise suggested.
The function that writes rows incrementally:
func renderHTMLTable(w http.ResponseWriter, result *sqlx.Rows) {
	cols, err := result.Columns()
	if err != nil {
		log.Println(err, "renderHTMLTable")
		return
	}
	head, err := template.ParseFiles("./templates/head.html")
	if err != nil {
		log.Println("template failed", err)
		return
	}
	row, err := template.ParseFiles("./templates/row.html")
	if err != nil {
		log.Println("template failed", err)
		return
	}
	foot := ` </tbody>
</table>
</div>
</body>
</html>`
	// Write the opening of the page and the table header once.
	head.Execute(w, cols)
	s := make([]string, len(cols))
	for result.Next() {
		values, err := result.SliceScan()
		if err != nil {
			log.Println(err)
			break
		}
		for i, v := range values {
			s[i] = fmt.Sprint(v)
		}
		// Write one table row per scanned database row.
		row.Execute(w, s)
	}
	// Close the tags opened by the head template.
	fmt.Fprint(w, foot)
}
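For reference, here is a minimal sketch of what the two templates might contain (assumed contents, not the project's actual files): head.html emits the start of the page plus the table header built from the column names, and row.html emits one <tr> from the []string passed to Execute.

head.html (assumed):
<!DOCTYPE html>
<html>
<body>
<div>
<table>
<thead>
<tr>{{range .}}<th>{{.}}</th>{{end}}</tr>
</thead>
<tbody>

row.html (assumed):
<tr>{{range .}}<td>{{.}}</td>{{end}}</tr>

Note that the foot string in the handler closes exactly the tags head.html opens. Whether the browser shows rows as they are written also depends on buffering; if the output only appears when the handler returns, flushing after each row can help when the ResponseWriter supports it:

if f, ok := w.(http.Flusher); ok {
	f.Flush() // push buffered output to the client after each row
}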

Related

Managing Oracle stored procedures in Go

I am trying to load 300 million records from a stored procedure cursor (godror + database/sql):
"database/sql" + "github.com/godror/godror"
Creating the sql.Rows:
func (o *OracleDb) OpenCursorTX(tx *sql.Tx, sqlStmt string) (*sql.Rows, error) {
	var cur driver.Rows
	ctx := context.Background()
	stmt, err := tx.PrepareContext(ctx, sqlStmt)
	if err != nil {
		logrus.Fatalf("error parsing cursor %s: %s", sqlStmt, err.Error())
		return nil, err
	}
	if _, err := stmt.ExecContext(ctx, sql.Out{Dest: &cur}, godror.PrefetchCount(100000), godror.FetchArraySize(100000)); err != nil {
		logrus.Fatalf("error exec cursor %s: %s", sqlStmt, err.Error())
		return nil, err
	}
	rows, err := godror.WrapRows(ctx, tx, cur)
	if err != nil {
		logrus.Fatalf("error cursor wrap rows %s: %s", sqlStmt, err.Error())
		return nil, err
	}
	if err := stmt.Close(); err != nil {
		return nil, err
	}
	return rows, nil
}
Reading rows in code
rows, err := l.oracleDb.OpenCursorTX(tx, "begin drs_router_read.get_rate_b_groups(po_rate_b_groups => :po_rate_b_groups); end;")
if err != nil {
	return err
}
var rbg domain.RmsGroupHist
var gwgrId, direction int
var dialCode, key, rbgDBegin, rbgDEnd string
rbgMap := make(map[string][]domain.RmsGroupHist)
i := 0
for rows.Next() {
	if err := rows.Scan(&gwgrId, &direction, &dialCode, &rbg.RmsgId, &rbgDBegin, &rbgDEnd); err != nil {
		return fmt.Errorf("error scan db rows loadRateBGroups %v", err)
	}
	rbg.DBegin = util.StrToInt64(rbgDBegin)
	rbg.DEnd = util.StrToInt64(rbgDEnd)
	key = domain.RBObjectKey + ":" + strconv.Itoa(gwgrId) + ":" + strconv.Itoa(direction) + ":" + dialCode
	rbgMap[key] = append(rbgMap[key], rbg)
	i++
	if i%100000 == 0 {
		logrus.Infof("rows %d\n", i)
	}
}
rows.Next() hangs for 17 minutes, and after that len(rbgMap) == 0.
I think I'm missing something; the same code works fine on 9 million records.

Atomically Execute commands across Redis Data Structures

I want to execute some Redis commands atomically (HDel, SAdd, HSet, etc.). I see the Watch feature in go-redis for implementing transactions; however, since I am not going to modify the value of a key (i.e. use SET, GET, etc.), does it make sense to use Watch to execute this as a transaction, or would just wrapping the commands in a TxPipeline be good enough?
Approach 1: Using Watch
func sampleTransaction() error {
	transactionFunc := func(tx *redis.Tx) error {
		// Queue the commands on the transactional pipeline; they are sent to the
		// server in a single MULTI/EXEC when this function returns, so their
		// results are only available after EXEC runs.
		_, err := tx.TxPipelined(context.Background(), func(pipe redis.Pipeliner) error {
			pipe.SAdd(context.Background(), "redis-set-key", "value1")
			pipe.HDel(context.Background(), "redis-hash-key", "value1")
			return nil
		})
		return err
	}
	retries := 10
	// Retry if one of the watched keys has been changed.
	for i := 0; i < retries; i++ {
		fmt.Println("tries", i)
		// Watch takes the function and the keys to watch; the original code called
		// transactionFunc() here, which passes its error instead of the function.
		err := redisClient.Watch(context.Background(), transactionFunc, "redis-set-key", "redis-hash-key")
		if err == nil {
			// Success.
			return nil
		}
		if err == redis.TxFailedErr {
			continue
		}
		return err
	}
	return fmt.Errorf("transaction failed after %d retries", retries)
}
Approach 2: Just wrapping in TxPipelined
func sampleTransaction() error {
	// No Watch here: the commands are queued and sent in one MULTI/EXEC.
	// (The original snippet used an undefined tx; the client itself exposes TxPipelined.)
	_, err := redisClient.TxPipelined(context.Background(), func(pipe redis.Pipeliner) error {
		pipe.SAdd(context.Background(), "redis-set-key", "value1")
		pipe.HDel(context.Background(), "redis-hash-key", "value1")
		return nil
	})
	return err
}
As far as I know, pipelines do not guarantee atomicity. If you need atomicity, use Lua:
https://pkg.go.dev/github.com/mediocregopher/radix.v3#NewEvalScript
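For reference, a minimal sketch of the Lua route using go-redis's script support (key names and values are just the ones from the example above; the linked radix package exposes the same idea through NewEvalScript):

script := redis.NewScript(`
redis.call("SADD", KEYS[1], ARGV[1])
redis.call("HDEL", KEYS[2], ARGV[1])
return 1
`)
// The whole script runs atomically on the Redis server.
if err := script.Run(context.Background(), redisClient,
	[]string{"redis-set-key", "redis-hash-key"}, "value1").Err(); err != nil {
	// handle error
}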

database/sql rows.scan hangs after 350K rows

I have a task to pull data from an Oracle database: a huge data set, more than 6 million records with 100 columns, for processing.
I need to convert the data to a map. I was able to process 350K records in less than 35 seconds; after that the server hangs and does not proceed further.
Is there a way I can batch these based on the row size, or batch them to free up memory?
func FetchUsingGenericResult(ctx context.Context, dsConnection *string, sqlStatement string) (*entity.GenericResultCollector, error) {
	columnTypes := make(map[string]string)
	var resultCollection entity.GenericResultCollector
	db, err := sql.Open("godror", *dsConnection)
	if err != nil {
		return &resultCollection, errors.Wrap(err, "error connecting to Oracle")
	}
	log := logger.FromContext(ctx).Sugar()
	log.Infof("start querying from Oracle at :%v", time.Now())
	rows, err := db.Query(sqlStatement, godror.FetchRowCount(defaultFetchCount))
	if err != nil {
		return &resultCollection, errors.Wrap(err, "error querying")
	}
	objects, err := rows2Strings(ctx, rows)
	log.Infof("total Rows converted are :%v by %v", len(*objects), time.Now())
	resultCollection = entity.GenericResultCollector{
		Columns: columnTypes,
		Rows:    objects,
	}
	return &resultCollection, nil
}

func rows2Strings(ctx context.Context, rows *sql.Rows) (*[]map[string]string, error) {
	result := make(map[string]string)
	resultsSlice := []map[string]string{}
	fields, err := rows.Columns()
	if err != nil {
		return nil, err
	}
	log := logger.FromContext(ctx).Sugar()
	waitGroup, ctx := errgroup.WithContext(ctx)
	counter := 0
	for rows.Next() {
		counter++
		if counter%defaultFetchCount == 0 {
			log.Infof("finished converting %v rows by %v", counter, time.Now())
		}
		waitGroup.Go(func() error {
			result, err = row2mapStr(rows, fields)
			if err != nil {
				return err
			}
			resultsSlice = append(resultsSlice, result)
			return nil
		})
		if err := waitGroup.Wait(); err != nil {
			return nil, err
		}
	}
	return &resultsSlice, nil
}

func row2mapStr(rows *sql.Rows, fields []string) (resultsMap map[string]string, err error) {
	result := make(map[string]string)
	scanResultContainers := make([]interface{}, len(fields))
	for i := 0; i < len(fields); i++ {
		var scanResultContainer interface{}
		scanResultContainers[i] = &scanResultContainer
	}
	if err := rows.Scan(scanResultContainers...); err != nil {
		return nil, err
	}
	for ii, key := range fields {
		rawValue := reflect.Indirect(reflect.ValueOf(scanResultContainers[ii]))
		// if row is null then as empty string
		if rawValue.Interface() == nil {
			result[key] = ""
			continue
		}
		if data, err := value2String(&rawValue); err == nil {
			result[key] = data
		} else {
			return nil, err
		}
	}
	return result, nil
}

func value2String(rawValue *reflect.Value) (str string, err error) {
	aa := reflect.TypeOf((*rawValue).Interface())
	vv := reflect.ValueOf((*rawValue).Interface())
	switch aa.Kind() {
	case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
		str = strconv.FormatInt(vv.Int(), 10)
	case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
		str = strconv.FormatUint(vv.Uint(), 10)
	case reflect.Float32, reflect.Float64:
		str = strconv.FormatFloat(vv.Float(), 'f', -1, 64)
	case reflect.String:
		str = vv.String()
	case reflect.Array, reflect.Slice:
		switch aa.Elem().Kind() {
		case reflect.Uint8:
			data := rawValue.Interface().([]byte)
			str = string(data)
			if str == "\x00" {
				str = "0"
			}
		default:
			err = fmt.Errorf("Unsupported struct type %v", vv.Type().Name())
		}
	// time type
	case reflect.Struct:
		if aa.ConvertibleTo(timeType) {
			str = vv.Convert(timeType).Interface().(time.Time).Format(time.RFC3339Nano)
		} else {
			err = fmt.Errorf("Unsupported struct type %v", vv.Type().Name())
		}
	case reflect.Bool:
		str = strconv.FormatBool(vv.Bool())
	case reflect.Complex128, reflect.Complex64:
		str = fmt.Sprintf("%v", vv.Complex())
	default:
		err = fmt.Errorf("Unsupported struct type %v", vv.Type().Name())
	}
	return
}
Has anyone encountered a similar problem?
Modified the code as below:
func FetchUsingGenericResult(ctx context.Context, dsConnection *string, sqlStatement string) (*entity.GenericResultCollector, error) {
	columnTypes := make(map[string]string)
	var resultCollection entity.GenericResultCollector
	db, err := sql.Open("godror", *dsConnection)
	if err != nil {
		return &resultCollection, errors.Wrap(err, "error connecting to Oracle")
	}
	log := logger.FromContext(ctx).Sugar()
	log.Infof("start querying from Oracle at :%v", time.Now())
	rows, err := db.Query(sqlStatement, godror.FetchRowCount(defaultFetchCount))
	if err != nil {
		return &resultCollection, errors.Wrap(err, "error querying")
	}
	objects, err := rows2Strings(ctx, rows)
	log.Infof("total Rows converted are :%v by %v", len(*objects), time.Now())
	resultCollection = entity.GenericResultCollector{
		Columns: columnTypes,
		Rows:    objects,
	}
	return &resultCollection, nil
}

func rows2Strings(ctx context.Context, rows *sql.Rows) (*[]map[string]string, error) {
	result := make(map[string]string)
	resultsSlice := []map[string]string{}
	fields, err := rows.Columns()
	if err != nil {
		return nil, err
	}
	log := logger.FromContext(ctx).Sugar()
	counter := 0
	for rows.Next() {
		counter++
		if counter%defaultFetchCount == 0 {
			log.Infof("finished converting %v rows by %v", counter, time.Now())
		}
		result, err = row2mapStr(rows, fields)
		if err != nil {
			return nil, err
		}
		resultsSlice = append(resultsSlice, result)
	}
	return &resultsSlice, nil
}
(row2mapStr and value2String are unchanged from the version shown above.)
As the server memory was limited, building the whole array in memory got stuck without proceeding further.
I have started processing the data as it gets scanned, and this solved my problem.
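What "process the data as it gets scanned" can look like in practice: a minimal sketch (illustrative names, not the poster's actual code) that hands each scanned row to a caller-supplied callback instead of accumulating every row in one slice, reusing the row2mapStr helper from above.

func streamRows(ctx context.Context, rows *sql.Rows, handle func(map[string]string) error) error {
	fields, err := rows.Columns()
	if err != nil {
		return err
	}
	for rows.Next() {
		// Convert one row at a time using the existing helper.
		record, err := row2mapStr(rows, fields)
		if err != nil {
			return err
		}
		// Hand the row off immediately (write to a file, a channel, a downstream
		// service, ...) so memory stays bounded.
		if err := handle(record); err != nil {
			return err
		}
	}
	return rows.Err()
}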

Trouble getting content type of file in Go

I have a function that takes in a base64 string and gets its content (PDF or JPEG).
I read in the base64 content, convert it to bytes, and decode it into the file that it is.
I then create a file where I will output the decoded file (JPEG or PDF).
Then I write the bytes to it.
Then I call my GetFileContentType on it, and it returns an empty string.
If I run the functions separately, that is, I first run the function that creates the decoded file and let it finish, and then call the second function to get the content type, it works and returns JPEG or PDF.
What am I doing wrong here?
And is there a better way to do this?
func ConvertToJPEGBase64(
	src string,
	dst string,
) error {
	b, err := ioutil.ReadFile(src)
	if err != nil {
		return err
	}
	str := string(b)
	byteArray, err := base64.StdEncoding.DecodeString(str)
	if err != nil {
		return err
	}
	f, err := os.Create(dst)
	if err != nil {
		return err
	}
	if _, err := f.Write(byteArray); err != nil {
		return err
	}
	f.Sync()
	filetype, err := client.GetFileContentType(f)
	if err != nil {
		return err
	}
	if strings.Contains(filetype, "jpeg") {
		// do something
	} else {
		// do something else
	}
	return nil
}

// GetFileContentType tells us the type of file
func GetFileContentType(out *os.File) (string, error) {
	// Only the first 512 bytes are used to sniff the content type.
	buffer := make([]byte, 512)
	_, err := out.Read(buffer)
	if err != nil {
		return "", err
	}
	contentType := http.DetectContentType(buffer)
	return contentType, nil
}
The problem is that GetFileContentType reads from the end of the file. Fix this by seeking back to the beginning of the file before calling GetFileContentType:
if _, err := f.Seek(0, io.SeekStart); err != nil {
	return err
}
A better fix is to use the file data that's already in memory. This simplifies the code to the point where there's no need for the GetFileContentType function.
func ConvertToJPEGBase64(
	src string,
	dst string,
) error {
	b, err := ioutil.ReadFile(src)
	if err != nil {
		return err
	}
	str := string(b)
	byteArray, err := base64.StdEncoding.DecodeString(str)
	if err != nil {
		return err
	}
	f, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer f.Close() // <-- Close the file on return.
	if _, err := f.Write(byteArray); err != nil {
		return err
	}
	fileType := http.DetectContentType(byteArray) // <-- use data in memory
	if strings.Contains(fileType, "jpeg") {
		// do something
	} else {
		// do something else
	}
	return nil
}
More code can be eliminated by using ioutil.WriteFile:
func ConvertToJPEGBase64(src, dst string) error {
	b, err := ioutil.ReadFile(src)
	if err != nil {
		return err
	}
	byteArray, err := base64.StdEncoding.DecodeString(string(b))
	if err != nil {
		return err
	}
	if err := ioutil.WriteFile(dst, byteArray, 0666); err != nil {
		return err
	}
	fileType := http.DetectContentType(byteArray)
	if strings.Contains(fileType, "jpeg") {
		// do something
	} else {
		// do something else
	}
	return nil
}
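As a side note (an addition, not part of the original answer): on Go 1.16 and later the same helpers live in the os package, so the ioutil import can be dropped:

// Go 1.16+ equivalents of the ioutil calls used above:
b, err := os.ReadFile(src)
if err != nil {
	return err
}
// ... decode as before ...
if err := os.WriteFile(dst, byteArray, 0666); err != nil {
	return err
}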

Golang: fetching data from one CSV file to another

I am new to Go, and I am trying to copy data from one CSV file into another new CSV file, but I need only 2 records from the old CSV file.
How would you fetch only the first two records of that file?
Here is what I have tried so far (also on play.golang.org):
package main

import (
	"encoding/csv"
	"fmt"
	"io"
	"os"
)

func main() {
	// SELECTING THE FILE TO EXTRACT.......
	csvfile1, err := os.Open("data/sample.csv")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer csvfile1.Close()
	reader := csv.NewReader(csvfile1)
	for i := 0; i < 3; i++ {
		record, err := reader.Read()
		if err == io.EOF {
			break
		} else if err != nil {
			fmt.Println(err)
			return
		}
		csvfile2, err := os.Create("data/SingleColomReading.csv")
		if err != nil {
			fmt.Println(err)
			return
		}
		defer csvfile2.Close()
		records := []string{
			record,
		}
		writer := csv.NewWriter(csvfile2)
		//fmt.Println(writer)
		for _, single := range records {
			er := writer.Write(single)
			if er != nil {
				fmt.Println("error", er)
				return
			}
			fmt.Println(single)
			writer.Flush()
			//fmt.Println(records)
			//a:=strconv.Itoa(single)
			n, er2 := csvfile2.WriteString(single)
			if er2 != nil {
				fmt.Println(n, er2)
			}
		}
	}
}
Fixing your program,
package main

import (
	"encoding/csv"
	"fmt"
	"io"
	"os"
)

func main() {
	csvfile1, err := os.Open("data/sample.csv")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer csvfile1.Close()
	reader := csv.NewReader(csvfile1)
	csvfile2, err := os.Create("data/SingleColomReading.csv")
	if err != nil {
		fmt.Println(err)
		return
	}
	writer := csv.NewWriter(csvfile2)
	for i := 0; i < 2; i++ {
		record, err := reader.Read()
		if err != nil {
			if err == io.EOF {
				break
			}
			fmt.Println(err)
			return
		}
		err = writer.Write(record)
		if err != nil {
			fmt.Println(err)
			return
		}
	}
	writer.Flush()
	err = csvfile2.Close()
	if err != nil {
		fmt.Println(err)
		return
	}
}
However, since you are only interested in copying records (lines) as a whole and not individual fields of a record, you could use bufio.Scanner, as #VonC suggested. For example,
package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	csvfile1, err := os.Open("data/sample.csv")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer csvfile1.Close()
	scanner := bufio.NewScanner(csvfile1)
	csvfile2, err := os.Create("data/SingleColomReading.csv")
	if err != nil {
		fmt.Println(err)
		return
	}
	writer := bufio.NewWriter(csvfile2)
	nRecords := 0
	for scanner.Scan() {
		n, err := writer.Write(scanner.Bytes())
		if err != nil {
			fmt.Println(n, err)
			return
		}
		err = writer.WriteByte('\n')
		if err != nil {
			fmt.Println(err)
			return
		}
		if nRecords++; nRecords >= 2 {
			break
		}
	}
	if err := scanner.Err(); err != nil {
		fmt.Println(err)
		return
	}
	err = writer.Flush()
	if err != nil {
		fmt.Println(err)
		return
	}
	err = csvfile2.Close()
	if err != nil {
		fmt.Println(err)
		return
	}
}
It would be easier to:
read your CSV file into a string slice (one line per element), for the first two lines only:
var lines []string
scanner := bufio.NewScanner(file)
nblines := 0
for scanner.Scan() {
	lines = append(lines, scanner.Text())
	if nblines++; nblines >= 2 {
		break
	}
}
Then you can range over lines to write those two lines to the destination file, as sketched below.
lines includes at most 2 elements.
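A minimal sketch of that writing step (the destination file name is assumed to match the one used above):

out, err := os.Create("data/SingleColomReading.csv")
if err != nil {
	fmt.Println(err)
	return
}
defer out.Close()
writer := bufio.NewWriter(out)
for _, line := range lines {
	// Each element of lines is one complete CSV record, so it can be written verbatim.
	fmt.Fprintln(writer, line)
}
if err := writer.Flush(); err != nil {
	fmt.Println(err)
	return
}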
