rows.Next() halts after some number of rows - oracle

I'm a newbie in Golang, so this may be simple for professionals, but I got stuck with no idea what to do next.
I'm writing a migration app that extracts data from an Oracle DB and, after some conversion, inserts it into Postgres row by row.
Running the query natively in the DB console returns about 400k rows and takes about 13 seconds to complete.
Extracting the data from Oracle with rows.Next() shows some strange behavior:
the first 25 rows are extracted fast enough, then there is a pause of a few seconds, then another 25 rows, and so on until it pauses "forever".
Here is the function:
func GetHrTicketsFromOra() (*sql.Rows, error) {
	rows, err := oraDB.Query("select id,STATE_ID,REMEDY_ID,HEADER,CREATE_DATE,TEXT,SOLUTION,SOLUTION_USER_LOGIN,LAST_SOLUTION_DATE from TICKET where SOLUTION_GROUP_ID = 5549")
	if err != nil {
		println("Error while getting rows from Ora")
		return nil, err
	}
	log.Println("Finished legacy tickets export")
	return rows, err
}
And here is where I export the data:
func ConvertRows(rows *sql.Rows, c chan util.ArchTicket, m chan int) error {
	log.Println("Conversion start")
	defer func(rows *sql.Rows) {
		err := rows.Close()
		if err != nil {
			log.Println("ORA connection closed", err)
			return
		}
	}(rows)
	for rows.Next() {
		log.Println("Reading the ticket")
		ot := util.OraTicket{}
		at := util.ArchTicket{}
		err := rows.Scan(&ot.ID, &ot.StateId, &ot.RemedyId, &ot.Header, &ot.CreateDate, &ot.Text, &ot.Solution, &ot.SolutionUserLogin, &ot.LastSolutionDate)
		if err != nil {
			log.Println("Error while reading row", err)
			return err
		}
		at = convertLegTOArch(ot)
		c <- at
	}
	if err := rows.Err(); err != nil {
		log.Println("Error while reading row", err)
		return err
	}
	m <- 1
	return nil
}
UPD. I use the "github.com/sijms/go-ora/v2" driver.
UPD2. The root cause of the problem seems to be the TEXT and SOLUTION fields of the result rows. They are varchar and can be quite big. Removing them from the query changes the execution time from 13 sec to 258 ms. But I still have no idea what to do with that.
UPD3.
Minimal reproducible example
package main

import (
	"database/sql"
	"log"

	_ "github.com/sijms/go-ora/v2"
)

var oraDB *sql.DB
var con = "oracle://login:password@ora_db:1521/database"

func InitOraDB(dataSourceName string) error {
	var err error
	oraDB, err = sql.Open("oracle", dataSourceName)
	if err != nil {
		return err
	}
	return oraDB.Ping()
}

func GetHrTicketsFromOra() {
	var ot string
	rows, err := oraDB.Query("select TEXT from TICKET where SOLUTION_GROUP_ID = 5549")
	if err != nil {
		println("Error while getting rows from Ora")
	}
	for rows.Next() {
		log.Println("Reading the ticket")
		err := rows.Scan(&ot)
		if err != nil {
			log.Println("Reading failed", err)
		}
		log.Println("Read:")
	}
	log.Println("Finished legacy tickets export")
}

func main() {
	err := InitOraDB(con)
	if err != nil {
		log.Println("Error connection Ora")
	}
	GetHrTicketsFromOra()
}
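A note on the 25-row bursts: they match a small default row prefetch in the driver, so each pause is likely a network round trip fetching the next batch, and the big TEXT/SOLUTION varchars make every batch expensive. A hedged sketch, assuming go-ora's PREFETCH_ROWS url option (verify the option name against the driver version you use):

package main

import (
	"database/sql"
	"log"

	go_ora "github.com/sijms/go-ora/v2"
)

func main() {
	// Raise the number of rows fetched per round trip; the exact
	// option name (PREFETCH_ROWS) is an assumption to check against
	// go-ora's documentation for your version.
	con := go_ora.BuildUrl("ora_db", 1521, "database", "login", "password",
		map[string]string{"PREFETCH_ROWS": "500"})
	db, err := sql.Open("oracle", con)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	// run the TICKET query as in the example above
}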

Related

There is already an object named 'company_names' in the database

I use GORM for Golang like this:
func Connect() (*gorm.DB, error) {
	var (
		err error
		db  *gorm.DB
	)
	dsn := "sqlserver://User:12345@127.0.0.1:1433?database=gorm"
	db, err = gorm.Open(sqlserver.Open(dsn), &gorm.Config{})
	if err != nil {
		return nil, err
	}
	err = db.Debug().AutoMigrate(&models.CompanyName{}, &models.CarModel{}, &models.CreateYear{}, &models.Diversity{})
	if err != nil {
		return nil, err
	}
	return db, nil
}
and in main.go
func main() {
	db, err := Connect()
	if err != nil {
		panic("database connection failed")
	}
	ech := echo.New()
}
OK, the first time we run the code GORM creates the tables in the database, but the second time I get this error:
mssql: There is already an object named 'table_names' in the database.
Do I have to delete db.Debug().AutoMigrate()?
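AutoMigrate is meant to be safe to run repeatedly, so the error suggests the sqlserver driver is failing to detect the existing tables rather than something wrong with calling it twice. A hedged workaround sketch, gating the migration on GORM's public Migrator API (HasTable/CreateTable exist in GORM v2; this sidesteps rather than fixes the detection issue):

// Only migrate models whose tables don't exist yet.
for _, model := range []interface{}{
	&models.CompanyName{}, &models.CarModel{},
	&models.CreateYear{}, &models.Diversity{},
} {
	if !db.Migrator().HasTable(model) {
		if err := db.Migrator().CreateTable(model); err != nil {
			return nil, err
		}
	}
}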

Ethereum error {"code":-32000,"message":"unknown account"}

I am trying to send a raw transaction with eth.sendTransaction, but I am getting an error that says {"code":-32000,"message":"unknown account"}. I am not sure what is causing this, and I can't seem to find an answer on the internet. Can anyone help me figure it out? Here is my code:
func ExecuteSignedTransaction(rawTransaction string) {
	var hash web3.Hash
	data := make(map[string]interface{})
	data["data"] = rawTransaction
	err := Web3HTTPClient.Call("eth_sendTransaction", &hash, data)
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	fmt.Println("Sent tx hash:", hash)
}
So, what I might do here:
import (
	"context"
	"crypto/ecdsa"
	"math/big"
	"strings"

	"github.com/ethereum/go-ethereum/accounts/abi/bind"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/crypto"
	"github.com/ethereum/go-ethereum/ethclient"
)

var chainId = big.NewInt(1) // chain id for the ethereum mainnet, change according to needs

func ecdsaPrivateKeyFromHex(privKeyHex string) *ecdsa.PrivateKey {
	// crypto.HexToECDSA expects the hex without a "0x" prefix
	ecdsaKey, err := crypto.HexToECDSA(strings.TrimPrefix(privKeyHex, "0x"))
	if err != nil {
		panic(err)
	}
	return ecdsaKey
}

func newTransactOpts(privKey *ecdsa.PrivateKey) *bind.TransactOpts {
	transactOpts, err := bind.NewKeyedTransactorWithChainID(privKey, chainId)
	if err != nil {
		panic(err)
	}
	return transactOpts
}

func newRpcClient() *ethclient.Client {
	c, err := ethclient.Dial("insert rpc url here")
	if err != nil {
		panic(err)
	}
	return c
}

// note: constructing the *types.Transaction object left as
// an exercise to the reader
func ExecuteTransaction(rawTransaction *types.Transaction) {
	privKeyHex := "0xblahblahblahblahblah" // use your own account's private key
	transactOpts := newTransactOpts(ecdsaPrivateKeyFromHex(privKeyHex))
	signedTxn, err := transactOpts.Signer(transactOpts.From, rawTransaction)
	if err != nil {
		panic(err)
	}
	rpcClient := newRpcClient()
	if err := rpcClient.SendTransaction(context.Background(), signedTxn); err != nil {
		panic(err)
	}
	// do whatever
}
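For completeness, a hedged sketch of building the *types.Transaction that the comment above leaves as an exercise, using go-ethereum's LegacyTx (all field values are placeholders; also import "github.com/ethereum/go-ethereum/common"):

// Placeholder values: in practice fetch the nonce and gas price from
// the node (PendingNonceAt / SuggestGasPrice on the ethclient).
to := common.HexToAddress("0x0000000000000000000000000000000000000000")
rawTransaction := types.NewTx(&types.LegacyTx{
	Nonce:    0,
	To:       &to,
	Value:    big.NewInt(0),             // amount in wei
	Gas:      21000,                     // gas limit for a plain transfer
	GasPrice: big.NewInt(1_000_000_000), // 1 gwei, placeholder
})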

Loading CSV file into bigquery after os.Create() doesn't load data

I'm trying to run the following flow:
Get data from somewhere
Create new local CSV file, write the data into that file
Upload the CSV to Bigquery
Delete the local file
But it seems to load empty data.
This is the code:
func (c *Client) Do(ctx context.Context) error {
	bqClient, err := bigquerypkg.NewBigQueryUtil(ctx, "projectID", "datasetID")
	if err != nil {
		return err
	}
	data, err := c.GetSomeData(ctx)
	if err != nil {
		return err
	}
	file, err := os.Create("example.csv")
	if err != nil {
		return err
	}
	defer file.Close()
	// the file also needs to be deleted afterwards
	writer := csv.NewWriter(file)
	defer writer.Flush()
	timestamp := time.Now().UTC().Format("2006-01-02 03:04:05.000000000")
	for _, d := range data {
		csvRow := []string{
			d.ID,
			d.Name,
			timestamp,
		}
		err = writer.Write(csvRow)
		if err != nil {
			log.Printf("error writing data to CSV: %v\n", err)
		}
	}
	source := bigquery.NewReaderSource(file)
	source.Schema = bigquery.Schema{
		{Name: "id", Type: bigquery.StringFieldType},
		{Name: "name", Type: bigquery.StringFieldType},
		{Name: "createdAt", Type: bigquery.TimestampFieldType},
	}
	if _, err = bqClient.LoadCsv(ctx, "tableID", source); err != nil {
		return err
	}
	return nil
}
LoadCsv() looks like this:
func (c *Client) LoadCsv(ctx context.Context, tableID string, src bigquery.LoadSource) (string, error) {
	loader := c.bigQueryClient.Dataset(c.datasetID).Table(tableID).LoaderFrom(src)
	loader.WriteDisposition = bigquery.WriteTruncate
	job, err := loader.Run(ctx)
	if err != nil {
		return "", err
	}
	status, err := job.Wait(ctx)
	if err != nil {
		return job.ID(), err
	}
	if status.Err() != nil {
		return job.ID(), fmt.Errorf("job completed with error: %v", status.Err())
	}
	return job.ID(), nil
}
After running this, BigQuery does create the schema, but with no data.
If I change os.Create() to os.Open() and the file already exists, everything works. It's as if, when loading the CSV, the file data is not yet written (?)
What's the reason?
The problem I see here is that you don't rewind the file handle's cursor to the beginning of the file. Thus, the next read will be at the end of the file and will be a 0-byte read. That explains why it seems like there's no content in the file.
https://pkg.go.dev/os#File.Seek can handle this for you.
Actually, the Flush is not relevant, because you're using the same file handle to read the file as you did to write it, so you'll see your own written bytes even without a flush. This would not be the case if the file was opened by a different process or was reopened.
Edit: OP claims this flush was necessary in their case and I cannot provide evidence to disagree. Flush will not hurt things either.
Demonstration:
package main

import (
	"fmt"
	"io"
	"os"
)

func main() {
	f, err := os.CreateTemp("", "data.csv")
	if err != nil {
		panic(err)
	} else {
		defer f.Close()
		defer os.Remove(f.Name())
	}
	fmt.Fprintf(f, "hello, world")
	fmt.Fprintln(os.Stderr, "Before rewind: ")
	if _, err := io.Copy(os.Stderr, f); err != nil {
		panic(err)
	}
	f.Seek(0, io.SeekStart)
	fmt.Fprintln(os.Stderr, "\nAfter rewind: ")
	if _, err := io.Copy(os.Stderr, f); err != nil {
		panic(err)
	}
	fmt.Fprintln(os.Stderr, "\n")
}
% go run t.go
Before rewind:
After rewind:
hello, world
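Applied to the code in the question, a minimal sketch of the fix: flush the csv.Writer explicitly, then rewind the handle before constructing the reader source (this replaces the source := ... line in Do(); the names come from the question's code):

// Make sure every buffered row is actually in the file, then rewind
// so BigQuery reads from the start instead of from EOF.
writer.Flush()
if err := writer.Error(); err != nil {
	return err
}
if _, err := file.Seek(0, io.SeekStart); err != nil {
	return err
}
source := bigquery.NewReaderSource(file)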

Can I have nested bucket under a nested bucket in boltdb?

This is what I have to create nested buckets. It does not return any error, but it fails at creating a nested bucket under another nested bucket.
func CreateNestedBuckets(buckets []string) error {
	err := db.Update(func(tx *bolt.Tx) error {
		var bkt *bolt.Bucket
		var err error
		first := true
		for _, bucket := range buckets {
			log.Error(bucket)
			if first == true {
				bkt, err = tx.CreateBucketIfNotExists([]byte(bucket))
				first = false
			} else {
				bkt, err = bkt.CreateBucketIfNotExists([]byte(bucket))
			}
			if err != nil {
				log.Error("error creating nested bucket")
				return err
			}
		}
		return nil
	})
	if err != nil {
		log.Error("error creating nested bucket!!!")
		return err
	}
	return nil
}
Short answer: yes! You can have nested buckets: https://twitter.com/boltdb/status/454730212010254336
Long answer: your code works fine! Here are some things to check, though:
Are you checking the correct bolt database file? The bolt db file will be created in the directory you run your code from, unless you've specified an absolute path.
Does your input actually contain enough elements to create a nested structure?
I've run your code with the following setup (a couple of small changes, but nothing major) and it works fine:
package main

import (
	"log"
	"os"
	"time"

	"github.com/boltdb/bolt"
)

var dbname = "test.bdb"
var dbperms os.FileMode = 0770
var options = &bolt.Options{Timeout: 1 * time.Second}

func main() {
	var names []string
	names = append(names, "bucketOne")
	names = append(names, "bucketTwo")
	names = append(names, "bucketThree")
	if err := CreateNestedBuckets(names); err != nil {
		log.Fatal(err)
	}
}

// CreateNestedBuckets - Function to create
// nested buckets from an array of Strings
func CreateNestedBuckets(buckets []string) error {
	db, dberr := bolt.Open(dbname, dbperms, options)
	if dberr != nil {
		log.Fatal(dberr)
	}
	defer db.Close()
	err := db.Update(func(tx *bolt.Tx) error {
		var bkt *bolt.Bucket
		var err error
		first := true
		for _, bucket := range buckets {
			log.Println(bucket)
			if first == true {
				bkt, err = tx.CreateBucketIfNotExists([]byte(bucket))
				first = false
			} else {
				bkt, err = bkt.CreateBucketIfNotExists([]byte(bucket))
			}
			if err != nil {
				log.Println("error creating nested bucket")
				return err
			}
		}
		return nil
	})
	if err != nil {
		log.Println("error creating nested bucket!!!")
		return err
	}
	return nil
}
To test you can cat the file through the strings command:
cat test.bdb | strings
bucketThree
bucketTwo
bucketOne
If you're on Windows, I'm not sure what the equivalent command is, but you can open the file with Notepad and inspect it manually. It won't be pretty, but you should still see the name of your buckets in there.
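A portable alternative to the strings command is to verify the nesting programmatically with a read-only transaction (a sketch using the db handle and bucket names from the example above; add an "errors" import):

// Bucket() returns nil when the (sub)bucket does not exist.
err := db.View(func(tx *bolt.Tx) error {
	one := tx.Bucket([]byte("bucketOne"))
	if one == nil {
		return errors.New("bucketOne missing")
	}
	two := one.Bucket([]byte("bucketTwo"))
	if two == nil {
		return errors.New("bucketTwo missing")
	}
	if two.Bucket([]byte("bucketThree")) == nil {
		return errors.New("bucketThree missing")
	}
	return nil
})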
On another note, your error handling is going to result in very similar messages being printed in succession. Here's a slightly cleaner solution you can use:
// CreateNestedBucketsNew - function to create
// nested buckets from an array of Strings - my implementation
func CreateNestedBucketsNew(buckets []string) (err error) {
	err = db.Update(func(tx *bolt.Tx) (err error) {
		var bkt *bolt.Bucket
		for index, bucket := range buckets {
			if index == 0 {
				bkt, err = tx.CreateBucketIfNotExists([]byte(bucket))
			} else {
				bkt, err = bkt.CreateBucketIfNotExists([]byte(bucket))
			}
			if err != nil {
				return fmt.Errorf("Error creating nested bucket [%s]: %v", bucket, err)
			}
		}
		return err
	})
	return err
}
fste89's demo above has a bug; this is the corrected version:
package main

import (
	"fmt"
	"time"

	"github.com/boltdb/bolt"
)

func CreateNestedBuckets(fatherTable string, sonTabls []string) error {
	db, dberr := bolt.Open("your file path", 0600, &bolt.Options{Timeout: 1 * time.Second})
	if dberr != nil {
		fmt.Println(dberr)
	}
	defer db.Close()
	err := db.Update(func(tx *bolt.Tx) error {
		bkFather, err := tx.CreateBucketIfNotExists([]byte(fatherTable))
		if err != nil {
			return err
		}
		for _, ta := range sonTabls {
			fmt.Println(ta)
			_, err = bkFather.CreateBucketIfNotExists([]byte(ta))
			if err != nil {
				fmt.Println("error creating nested bucket")
				return err
			}
		}
		return nil
	})
	if err != nil {
		fmt.Println("error creating nested bucket!!!")
		return err
	}
	return nil
}

func main() {
	t := []string{"cc", "1", "2", "3"}
	fmt.Println(CreateNestedBuckets("sb", t))
}
Output:
cc
1
2
3
<nil>
The nested buckets are visible in the database file (screenshot omitted). The full working version with a concrete path:
func CreateNestedBuckets(fatherTable string, sonTables []string) error {
	db, dberr := bolt.Open("E:\\OneDrive\\code\\go\\project\\transmission\\static\\localstorage.db", 0600, &bolt.Options{Timeout: 1 * time.Second})
	if dberr != nil {
		fmt.Println(dberr)
	}
	defer db.Close()
	err := db.Update(func(tx *bolt.Tx) error {
		bkFather, err := tx.CreateBucketIfNotExists([]byte(fatherTable))
		if err != nil {
			return err
		}
		for _, ta := range sonTables {
			fmt.Println(ta)
			_, err = bkFather.CreateBucketIfNotExists([]byte(ta))
			if err != nil {
				fmt.Println("error creating nested bucket")
				return err
			}
		}
		return nil
	})
	if err != nil {
		fmt.Println("error creating nested bucket!!!")
		return err
	}
	return nil
}

Need faster way to list all datasets/tables in project

I am creating a utility that needs to be aware of all the datasets/tables that exist in my BigQuery project. My current code for getting this information is as follows (using Go API):
func populateExistingTableMap(service *bigquery.Service, cloudCtx context.Context, projectId string) (map[string]map[string]bool, error) {
	tableMap := map[string]map[string]bool{}
	call := service.Datasets.List(projectId)
	//call.Fields("datasets/datasetReference")
	if err := call.Pages(cloudCtx, func(page *bigquery.DatasetList) error {
		for _, v := range page.Datasets {
			if tableMap[v.DatasetReference.DatasetId] == nil {
				tableMap[v.DatasetReference.DatasetId] = map[string]bool{}
			}
			table_call := service.Tables.List(projectId, v.DatasetReference.DatasetId)
			//table_call.Fields("tables/tableReference")
			if err := table_call.Pages(cloudCtx, func(page *bigquery.TableList) error {
				for _, t := range page.Tables {
					tableMap[v.DatasetReference.DatasetId][t.TableReference.TableId] = true
				}
				return nil
			}); err != nil {
				return errors.New("Error Parsing Table")
			}
		}
		return nil
	}); err != nil {
		return tableMap, err
	}
	return tableMap, nil
}
For a project with about 5000 datasets, each with up to 10 tables, this code takes almost 15 minutes to return. Is there a faster way to iterate through the names of all existing datasets/tables? I have tried using the Fields method to return only the fields I need (you can see those lines commented out above), but that results in only 50 (exactly 50) of my datasets being returned.
Any ideas?
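One hedged explanation for the "exactly 50" symptom: with the google-api-go-client, Pages() keeps iterating only while the response carries a NextPageToken, so a Fields() projection that omits nextPageToken truncates the listing to the first page (50 appears to be the default page size). If that is what is happening, requesting the token alongside the references should restore pagination:

// Keep nextPageToken in the partial response so Pages() can continue
// past the first page.
call := service.Datasets.List(projectId)
call.Fields("datasets/datasetReference", "nextPageToken")

// The same applies to the per-dataset table listing:
table_call := service.Tables.List(projectId, datasetID) // datasetID as in the loop
table_call.Fields("tables/tableReference", "nextPageToken")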
Here is an updated version of my code, with concurrency, that reduced the processing time from about 15 minutes to 3 minutes.
func populateExistingTableMap(service *bigquery.Service, cloudCtx context.Context, projectId string) (map[string]map[string]bool, error) {
	tableMap := map[string]map[string]bool{}
	call := service.Datasets.List(projectId)
	//call.Fields("datasets/datasetReference")
	if err := call.Pages(cloudCtx, func(page *bigquery.DatasetList) error {
		var wg sync.WaitGroup
		wg.Add(len(page.Datasets))
		for _, v := range page.Datasets {
			if tableMap[v.DatasetReference.DatasetId] == nil {
				tableMap[v.DatasetReference.DatasetId] = map[string]bool{}
			}
			go func(service *bigquery.Service, datasetID string, projectId string) {
				defer wg.Done()
				table_call := service.Tables.List(projectId, datasetID)
				//table_call.Fields("tables/tableReference")
				if err := table_call.Pages(cloudCtx, func(page *bigquery.TableList) error {
					for _, t := range page.Tables {
						tableMap[datasetID][t.TableReference.TableId] = true
					}
					return nil // NOTE: returning a non-nil error stops pagination.
				}); err != nil {
					// TODO: Handle error.
					fmt.Println(err)
				}
			}(service, v.DatasetReference.DatasetId, projectId)
		}
		wg.Wait()
		return nil // NOTE: returning a non-nil error stops pagination.
	}); err != nil {
		return tableMap, err
		// TODO: Handle error.
	}
	return tableMap, nil
}
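One caveat on the concurrent version: the goroutines write into tableMap while the enclosing loop may still be inserting keys into the outer map, which the race detector will flag. A hedged hardening sketch is to guard all map access with a sync.Mutex declared next to the WaitGroup (or to pre-create every inner map before spawning any goroutine):

var mu sync.Mutex // shared by the loop and the goroutines

// ...inside the goroutine's Pages callback, guard the map write:
mu.Lock()
tableMap[datasetID][t.TableReference.TableId] = true
mu.Unlock()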
