How to read an image from Oracle (LONG RAW format) in GoLang

I am trying to read images (stored in a LONG RAW column) from an external Oracle database using Go code.
When sql's rows.Next() is called, the following error is returned:
ORA-01406: fetched column value was truncated
rows.Next() works fine for reading BLOB images from an MSSQL database.
Example code:
db, err := sql.Open("oci8", getDSN()) // function to get connection details
if err != nil {
    fmt.Println(err)
    return
}
defer db.Close()

rows, err := db.Query("SELECT id, image FROM sysadm.all_images")
if err != nil {
    fmt.Println(err)
    return
}
defer rows.Close()

for rows.Next() {
    var id string
    var data []byte
    if err := rows.Scan(&id, &data); err != nil {
        fmt.Println(err)
        return
    }
}
fmt.Println("Total errors", rows.Err())
I hope someone can help me fix this issue or pinpoint the problem area.

I assume you are using go-oci8 as the driver.
Based on this issue https://github.com/mattn/go-oci8/pull/71, someone hit the same error and managed to fix it by modifying the driver code.
As per that commit, the problem was solved by increasing the value of oci8cols[i].size in the file $GOPATH/src/github.com/mattn/go-oci8/oci8.go. I think in your case the image data is larger, which is why that revision is still not enough.
case C.SQLT_NUM:
    oci8cols[i].kind = C.SQLT_CHR
    oci8cols[i].size = int(lp * 4) // <==== THIS VALUE
    oci8cols[i].pbuf = C.malloc(C.size_t(oci8cols[i].size) + 1)
So, try to increase the multiplier, like:
oci8cols[i].size = int(lp * 12) // <==== OR GREATER
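Once the driver can fetch the full column, scanning the image into a []byte works like any other query. Below is a minimal sketch, not the driver author's code: the table and column names (sysadm.all_images, id, image) are taken from your example, and writing each image to a local file is only for illustration.

package main

import (
    "database/sql"
    "fmt"
    "os"

    _ "github.com/mattn/go-oci8"
)

// dumpImages reads every LONG RAW image and writes it to a file named after its id.
func dumpImages(db *sql.DB) error {
    rows, err := db.Query("SELECT id, image FROM sysadm.all_images")
    if err != nil {
        return err
    }
    defer rows.Close()
    for rows.Next() {
        var id string
        var data []byte
        if err := rows.Scan(&id, &data); err != nil {
            return err
        }
        if err := os.WriteFile(fmt.Sprintf("%s.img", id), data, 0o644); err != nil {
            return err
        }
    }
    return rows.Err()
}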

Related

BigQuery Golang maxResults for Queries

I would like to be able to specify maxResults when using the Golang BigQuery library. It isn't clear how to do this, though. I don't see it as an option in the documentation, and I have browsed the source to try to find it, but I only see some sporadic usage in functionality that does not seem related to queries. Is there a way to circumvent this issue?
I think there is no implemented method in the SDK for that, but after looking around a bit I found this one: request
You could try to execute an HTTP GET specifying the parameters (you can find an example of the use of parameters here: query_parameters).
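One way to pass maxResults without hand-rolling HTTP is the generated low-level client, which exposes it on the jobs.query request body. A rough sketch under that assumption (this uses google.golang.org/api/bigquery/v2, not the cloud.google.com/go/bigquery package from the question, and the query text is just a placeholder):

import (
    "context"
    "fmt"
    "log"

    bq "google.golang.org/api/bigquery/v2"
)

func firstPage(ctx context.Context, projectID string) {
    svc, err := bq.NewService(ctx)
    if err != nil {
        log.Fatal(err)
    }
    req := &bq.QueryRequest{
        Query:           "SELECT d FROM UNNEST(GENERATE_DATE_ARRAY('2022-10-01','2022-10-31')) AS d",
        MaxResults:      5, // maxResults: at most 5 rows per response page
        UseLegacySql:    false,
        ForceSendFields: []string{"UseLegacySql"}, // ensure the false value is actually sent
    }
    resp, err := svc.Jobs.Query(projectID, req).Do()
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("rows in first page:", len(resp.Rows), "page token:", resp.PageToken)
}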
By default the google API iterators manage page size for you. The RowIterator returns a single row by default, backed internally by fetched pages that rely on the backend to select an appropriate size.
If however you want to specify a fixed max page size, you can use the google.golang.org/api/iterator package to iterate by pages while specifying a specific size. The size, in this case, corresponds to maxResults for BigQuery's query APIs.
See https://github.com/googleapis/google-cloud-go/wiki/Iterator-Guidelines for more general information about advanced iterator usage.
Here's a quick test to demonstrate with the RowIterator in bigquery. It executes a query that returns a row for each day in October, and iterates by page:
func TestQueryPager(t *testing.T) {
    ctx := context.Background()
    pageSize := 5
    client, err := bigquery.NewClient(ctx, "your-project-id here")
    if err != nil {
        t.Fatal(err)
    }
    defer client.Close()

    q := client.Query("SELECT * FROM UNNEST(GENERATE_DATE_ARRAY('2022-10-01','2022-10-31', INTERVAL 1 DAY)) as d")
    it, err := q.Read(ctx)
    if err != nil {
        t.Fatalf("query failure: %v", err)
    }

    pager := iterator.NewPager(it, pageSize, "")
    var fetchedPages int
    for {
        var rows [][]bigquery.Value
        nextToken, err := pager.NextPage(&rows)
        if err != nil {
            t.Fatalf("NextPage: %v", err)
        }
        fetchedPages++
        if len(rows) > pageSize {
            t.Errorf("page size exceeded, got %d want %d", len(rows), pageSize)
        }
        t.Logf("(next token %s) page size: %d", nextToken, len(rows))
        if nextToken == "" {
            break
        }
    }

    wantPages := 7
    if fetchedPages != wantPages {
        t.Fatalf("fetched %d pages, wanted %d pages", fetchedPages, wantPages)
    }
}

How to handle postgres query error with pgx driver in golang?

I read this official guide about error handling and applied it:
err := db.connection.QueryRow("INSERT INTO articles(uri) VALUES ($1)", article.URI).Scan()
if err != nil {
    var pgErr *pgconn.PgError
    if errors.As(err, &pgErr) {
        fmt.Println(pgErr.Message) // => syntax error at end of input
        fmt.Println(pgErr.Code)    // => 42601
    }
}
The code doesn't work; my app doesn't print anything. But the postgres log has ERROR: duplicate key value violates unique constraint "articles_uri_key"
OK, I can use the standard Go method:
err := db.connection.QueryRow("INSERT INTO articles(uri) VALUES ($1)", article.URI).Scan()
if err != nil {
    fmt.Println(err)
}
One problem: it prints no rows in result set even when there are no errors in the postgres log.
I tried replacing
if err != nil with if err != errors.New("no rows in result set"),
but it still prints no rows in result set.
Use pgx.ErrNoRows
if err != pgx.ErrNoRows {
    fmt.Println(err)
}
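With Go 1.13+ error wrapping, the same check can also be written with errors.Is, which survives wrapped errors. A minimal sketch, assuming pgx is being called directly so pgx.ErrNoRows is the sentinel that comes back:

err := db.connection.QueryRow("INSERT INTO articles(uri) VALUES ($1)", article.URI).Scan()
if err != nil && !errors.Is(err, pgx.ErrNoRows) {
    fmt.Println(err)
}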
Please modify your question to make it clearer.
A duplicate key value is a valid error. If you want to get rid of it, either avoid the duplicate entry or remove the unique constraint.
When using pgx with database/sql, pgx is simply acting as the driver. The sql.ErrNoRows error is returned from the database/sql library; pgx.ErrNoRows is only returned when calling a pgx function directly, since database/sql bubbles up some errors from the driver.
sqlStatement := `
    INSERT INTO articles (uri)
    VALUES ($1)
    RETURNING id`
id := 0
// make sure the variable you pass into Scan() matches the type of the value you want back
err = db.QueryRow(sqlStatement, article.URI).Scan(&id)
if err != nil {
    if err == sql.ErrNoRows { // pgx.ErrNoRows when calling pgx directly
        // there were no rows, but otherwise no error occurred
    } else {
        log.Fatal(err)
    }
}
fmt.Println("New record ID is:", id)
For better understanding, or for multiple rows, please refer to this link: How to get row value(s) back after db insert?
I know it's an old question, but I just found the solution:
do not use
var pgErr *pgconn.PgError
try
var pgErr pgx.PgError
instead
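For the duplicate-key case from the question, errors.As with *pgconn.PgError does work when pgx v4+ is used directly (no database/sql in between), and the unique-violation code can be checked explicitly. A minimal sketch under that assumption, using pgx v4 module paths and a hypothetical insertArticle helper:

import (
    "context"
    "errors"
    "fmt"
    "log"

    "github.com/jackc/pgconn"
    "github.com/jackc/pgx/v4"
)

func insertArticle(ctx context.Context, conn *pgx.Conn, uri string) {
    var id int
    err := conn.QueryRow(ctx, "INSERT INTO articles(uri) VALUES ($1) RETURNING id", uri).Scan(&id)
    if err != nil {
        var pgErr *pgconn.PgError
        if errors.As(err, &pgErr) && pgErr.Code == "23505" { // unique_violation
            fmt.Println("article already exists:", pgErr.ConstraintName)
            return
        }
        log.Fatal(err)
    }
    fmt.Println("new article id:", id)
}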

Running SQL queries in Tarantool from Go is silent (no errors)

I'm trying to run SQL queries from a Golang application using the official Tarantool client. The only way I know how to do it is by using conn.Eval, like below, but I never receive any errors: I can drop non-existent tables and insert rows with duplicate keys, and I will never find out that something went wrong.
resp, err := conn.Eval("box.execute([[TRUNCATE TABLE not_exists;]])", []interface{}{})
// err is always nil
// resp.Error is always empty
Can you point out a way to get the errors, or the right way to run SQL queries?
Thanks for the question!
I have talked to the team and we have two options for you. Here is the first one:
resp, err := conn.Eval("return box.execute([[TRUNCATE TABLE \"not_exists\";]])", []interface{}{})
if err != nil {
    log.Fatalln(err)
}
if len(resp.Tuples()) > 1 {
    fmt.Println("Error", resp.Tuples()[1])
} else {
    fmt.Println("Result", resp.Tuples()[0])
}
And here is the second one:
r, err := tnt.Eval("local data, err = box.execute(...) return data or box.error(err)", []interface{}{
    `TRUNCATE table "not_exists";`,
})
if err != nil {
    log.Fatalln(err)
}
I hope that helps! And if it doesn't - let me know and we will look into this one more time.
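If it helps, the second pattern can be wrapped in a small helper so every SQL statement goes through the same error path. A sketch under the assumption that the go-tarantool v1 client (github.com/tarantool/go-tarantool) from the examples above is in use; execSQL is a hypothetical name:

import (
    "log"

    "github.com/tarantool/go-tarantool"
)

// execSQL runs a single SQL statement and surfaces box.execute() failures as Go errors.
func execSQL(conn *tarantool.Connection, stmt string) (*tarantool.Response, error) {
    return conn.Eval(
        "local data, err = box.execute(...) return data or box.error(err)",
        []interface{}{stmt},
    )
}

// usage:
if _, err := execSQL(conn, `TRUNCATE TABLE "not_exists";`); err != nil {
    log.Fatalln(err)
}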

Upload data to table without waiting for streaming buffer to flush

I have a Go program which downloads data from a table (T1), formats it, and uploads it to a new temporary table (T2). Once the data has been uploaded (30s or so), the data should be copied to a third table (T3).
After uploading the formatted data to T2, querying the table returns results ok. However, when copying the table - the job completes almost instantly and the destination table (T3) is empty.
I'm copying the table as suggested here - but the result is the same when performing the action in the UI.
In the table metadata section it shows as 0 B, 0 rows, but there are about 100k rows and 18 MB of data in there - or at least that's what comes back from a query.
Edit: I did not spot that this data was still stuck in the streaming buffer - see my answer.
The comments on my question led me to see that the issue was the streaming buffer. This was taking a long time to flush - it's not possible to flush it manually.
I ended up reading this issue and comment on GitHub here, which suggested using a load job instead.
After some research, I realised it was possible to read from an io.Reader as well as a Google Cloud Storage Reference by configuring the Loader's ReaderSource.
My original implementation that used the streaming buffer looked like this:
var vss []*bigquery.ValuesSaver
// for each row (with loop index i):
vss = append(vss, &bigquery.ValuesSaver{
    Schema:   schema,
    InsertID: fmt.Sprintf("%d", i), // insert ID built from the loop index
    Row: []bigquery.Value{
        "data",
    },
})

err := uploader.Put(ctx, vss)
if err != nil {
    if pmErr, ok := err.(bigquery.PutMultiError); ok {
        for _, rowInsertionError := range pmErr {
            log.Println(rowInsertionError.Errors)
        }
    }
    return fmt.Errorf("failed to insert data: %v", err)
}
I was able to change this to a load job with code that looked like this:
var lines []string
for _, v := range rows {
    json, err := json.Marshal(v)
    if err != nil {
        return fmt.Errorf("failed to generate json %v, %+v", err, v)
    }
    lines = append(lines, string(json))
}
dataString := strings.Join(lines, "\n")

rs := bigquery.NewReaderSource(strings.NewReader(dataString))
rs.FileConfig.SourceFormat = bigquery.JSON
rs.FileConfig.Schema = schema

loader := dataset.Table(t2Name).LoaderFrom(rs)
loader.CreateDisposition = bigquery.CreateIfNeeded
loader.WriteDisposition = bigquery.WriteTruncate

job, err := loader.Run(ctx)
if err != nil {
    return fmt.Errorf("failed to start load job %v", err)
}
_, err = job.Wait(ctx)
if err != nil {
    return fmt.Errorf("load job failed %v", err)
}
Now the data is available in the table 'immediately' - I no longer need to wait for the streaming buffer.
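For completeness, the copy from T2 to T3 that originally produced an empty table needs no changes once the data arrives via a load job. A minimal sketch of that step, assuming the same dataset handle as above and hypothetical table-name variables t2Name and t3Name:

copier := dataset.Table(t3Name).CopierFrom(dataset.Table(t2Name))
copier.WriteDisposition = bigquery.WriteTruncate
copyJob, err := copier.Run(ctx)
if err != nil {
    return fmt.Errorf("failed to start copy job %v", err)
}
status, err := copyJob.Wait(ctx)
if err != nil {
    return fmt.Errorf("copy job failed %v", err)
}
if status.Err() != nil {
    return fmt.Errorf("copy job completed with error: %v", status.Err())
}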

How to update a bigquery row in golang

I have a go program connected to a bigquery table. This is the table's schema:
name STRING NULLABLE
age INTEGER NULLABLE
amount INTEGER NULLABLE
I have succeeded at querying the data of this table and printing all rows to the console with this code:
ctx := context.Background()
client, err := bigquery.NewClient(ctx, projectID)
if err != nil {
    log.Fatal(err)
}
q := client.Query("SELECT * FROM test.test_user LIMIT 1000")
it, err := q.Read(ctx)
if err != nil {
    log.Fatal(err)
}
for {
    var values []bigquery.Value
    err := it.Next(&values)
    if err == iterator.Done {
        break
    }
    if err != nil {
        // TODO: Handle error.
    }
    fmt.Println(values)
}
And I have also succeeded at inserting data into the table from a struct using this code:
type test struct {
    Name   string
    Age    int
    Amount int
}

u := client.Dataset("testDS").Table("test_user").Uploader()
savers := []*bigquery.StructSaver{
    {Struct: test{Name: "Jack", Age: 23, Amount: 123}, InsertID: "id1"},
}
if err := u.Put(ctx, savers); err != nil {
    log.Fatal(err)
}
fmt.Printf("rows inserted!!")
Now, what I am failing to do is update rows. I want to select all the rows and update all of them with an operation (for example: amount = amount * 2).
How can I achieve this using golang?
Updating rows is not specific to Go, or any other client library. If you want to update data in BigQuery, you need to use DML (Data Manipulation Language) via SQL. So, essentially you already have the main part working (running a query) - you just need to change this SQL to use DML.
But a word of caution: BigQuery is an OLAP service. Don't use it for OLTP. Also, there are quotas on DML usage. Make sure you familiarise yourself with them.
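As an illustration of the DML route, here is a minimal sketch that doubles amount for every row, reusing the client and ctx from your query code. The table name test.test_user is taken from your SELECT, and WHERE true is there because BigQuery requires a WHERE clause on UPDATE statements:

q := client.Query("UPDATE test.test_user SET amount = amount * 2 WHERE true")
job, err := q.Run(ctx)
if err != nil {
    log.Fatal(err)
}
status, err := job.Wait(ctx)
if err != nil {
    log.Fatal(err)
}
if err := status.Err(); err != nil {
    log.Fatal(err)
}
fmt.Println("all rows updated")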
