I'm trying to run SQL queries from a Go application using the official Tarantool client. The only way I know how to do it is with conn.Eval, like below. But I never receive any errors: I can drop non-existent tables and insert rows with duplicate keys, and I never find out that something went wrong.
resp, err := conn.Eval("box.execute([[TRUNCATE TABLE not_exists;]])", []interface{}{})
// err is always nil
// resp.Error is always empty
Can you point out a way to get the errors, or the right way to run SQL queries?
Thanks for the question!
I have talked to the team and we have two options for you. Here is the first one:
resp, err := conn.Eval("return box.execute([[TRUNCATE TABLE \"not_exists\";]])", []interface{}{})
if err != nil {
    log.Fatal(err)
}
if len(resp.Tuples()) > 1 {
    fmt.Println("Error", resp.Tuples()[1])
} else {
    fmt.Println("Result", resp.Tuples()[0])
}
And here is the second one:
_, err := conn.Eval("local data, err = box.execute(...) return data or box.error(err)", []interface{}{
    `TRUNCATE TABLE "not_exists";`,
})
if err != nil {
    log.Fatalln(err)
}
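To avoid repeating the Lua snippet everywhere, you could wrap it in a small helper; this is just a sketch, and execSQL is my own name, not part of the client:
// execSQL runs one SQL statement via Eval; box.error(err) in the Lua
// expression turns SQL failures into a non-nil err on the Go side.
func execSQL(conn *tarantool.Connection, stmt string) (*tarantool.Response, error) {
    return conn.Eval(
        "local data, err = box.execute(...) return data or box.error(err)",
        []interface{}{stmt},
    )
}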
I hope that helps! And if it doesn't - let me know and we will look into this one more time.
c, clerr := objectstorage.NewObjectStorageClientWithConfigurationProvider(common.NewRawConfigurationProvider(
    "ocid1.tenancy.oc1..aaaaaaa5jo3pz1alm1o45rzx1ucaab4njxbwaqqbc7ld3l6biayjaert5la",
    "ocid1.user.oc1..aaaaaaaauax5bo2gg3az46h53467u57ue86rk9h2wax8w7zzamxgwvsi34ja",
    "ap-seoul-1",
    "98:bc:6b:13:c1:64:ds:8b:9c:15:11:d2:8d:e5:92:db",
))
I'm trying to use Oracle Object Storage. I checked the official manual, but there is something I don't understand. As above, I need the privateKey and privateKeyPassphrase arguments, but I don't know where to get them. Is there a detailed explanation or example?
What I want is to upload a file to storage.
Where in the Oracle console can I get the keys I need? Please give me some advice.
config, err := common.ConfigurationProviderFromFile("./config", "")
if err != nil {
    t.Error(err.Error())
}
c, err := objectstorage.NewObjectStorageClientWithConfigurationProvider(config)
if err != nil {
    t.Error(err.Error())
}
https://cloud.oracle.com/identity/domains/my-profile/api-keys
I generated a key on this page, put it in my project, and with the above code I was able to get started without any problems.
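For reference, the ./config file that ConfigurationProviderFromFile reads uses the standard OCI layout; a sketch with placeholder values (the console also shows a ready-made preview of this file when you generate the API key):
[DEFAULT]
user=ocid1.user.oc1..<your user OCID>
fingerprint=<the fingerprint shown next to the API key>
tenancy=ocid1.tenancy.oc1..<your tenancy OCID>
region=ap-seoul-1
key_file=<path to the private key .pem you downloaded>
pass_phrase=<only needed if the key is encrypted>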
I read the official guide about error handling and applied it:
err := db.connection.QueryRow("INSERT INTO articles(uri) VALUES ($1)", article.URI).Scan()
if err != nil {
    var pgErr *pgconn.PgError
    if errors.As(err, &pgErr) {
        fmt.Println(pgErr.Message) // => syntax error at end of input
        fmt.Println(pgErr.Code)    // => 42601
    }
}
The code doesn't work; my app doesn't print anything. But the Postgres log has ERROR: duplicate key value violates unique constraint "articles_uri_key"
OK, I can use the standard Go method:
err := db.connection.QueryRow("INSERT INTO articles(uri) VALUES ($1)", article.URI).Scan()
if err != nil {
    fmt.Println(err)
}
One problem: it prints no rows in result set even when there are no errors in the Postgres log.
I tried replacing
if err != nil with if err != errors.New("no rows in result set"),
and it still prints no rows in result set.
Use pgx.ErrNoRows. Comparing against errors.New(...) can never match, because errors.New returns a distinct error value every time it is called:
if err != pgx.ErrNoRows {
    fmt.Println(err)
}
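On Go 1.13 and later it's a bit safer to compare with errors.Is instead of !=, in case the error gets wrapped somewhere along the way; a minimal sketch:
if err != nil && !errors.Is(err, pgx.ErrNoRows) {
    fmt.Println(err)
}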
Please modify your question and make it appropriate.
A duplicate key value is a valid error. If you want to remove the error, you should either avoid the duplicate entry or remove the unique constraint.
When using pgx with database/sql, pgx is simply acting as a driver. The sql.ErrNoRows error is being returned from the database/sql library; pgx.ErrNoRows is only returned when calling a pgx function directly, although database/sql will bubble up some errors from the driver.
sqlStatement := `
    INSERT INTO articles (uri)
    VALUES ($1)
    RETURNING id`
id := 0
// pass a destination into Scan() for each value the query returns
err := db.QueryRow(sqlStatement, article.URI).Scan(&id)
if err != nil {
    if err == sql.ErrNoRows { // pgx.ErrNoRows when using pgx directly
        // there were no rows, but otherwise no error occurred
    } else {
        log.Fatal(err)
    }
}
fmt.Println("New record ID is:", id)
For a better understanding, or for multiple rows, please refer to this link: How to get row value(s) back after db insert?
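Putting the pieces together: the duplicate-key failure from the Postgres log can also be caught explicitly, since SQLSTATE 23505 is unique_violation. A sketch, assuming pgx v4+ where the driver surfaces *pgconn.PgError:
err := db.QueryRow(sqlStatement, article.URI).Scan(&id)
if err != nil {
    var pgErr *pgconn.PgError
    if errors.As(err, &pgErr) && pgErr.Code == "23505" { // unique_violation
        fmt.Println("duplicate uri:", pgErr.Message)
    } else if err != sql.ErrNoRows {
        log.Fatal(err)
    }
}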
I know it's an old question, but I just found the solution.
Do not use
var pgErr *pgconn.PgError
Try
var pgErr pgx.PgError
instead (this applies on the older pgx v3, where PgError is defined in the pgx package itself).
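In context, that older style looks like this; a sketch, assuming pgx v3 where PgError is a struct value rather than a pointer:
var pgErr pgx.PgError
if errors.As(err, &pgErr) {
    fmt.Println(pgErr.Message)
    fmt.Println(pgErr.Code)
}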
I am getting started with RethinkDB; I have never used it before. I'm giving it a try together with GoRethink, following this tutorial.
To sum up this tutorial, there are two programs:
The first one updates entries infinitely.
for {
    var scoreentry ScoreEntry
    pl := rand.Intn(1000)
    sc := rand.Intn(6) - 2
    res, err := r.Table("scores").Get(strconv.Itoa(pl)).Run(session)
    if err != nil {
        log.Fatal(err)
    }
    err = res.One(&scoreentry)
    scoreentry.Score = scoreentry.Score + sc
    _, err = r.Table("scores").Update(scoreentry).RunWrite(session)
}
And the second one receives these changes and logs them.
res, err := r.Table("scores").Changes().Run(session)
if err != nil {
    log.Fatalln(err)
}
var value interface{}
for res.Next(&value) {
    fmt.Println(value)
}
In the statistics that RethinkDB shows, I can see that there are 1.5K reads and writes per second. But in the console of the second program, I see only 1 or 2 changes per second.
Why does this occur? Am I missing something?
This code:
r.Table("scores").Update(scoreentry).RunWrite(session)
Probably doesn't do what you think it does. This attempts to update every document in the table by merging scoreentry into it. This is why the RethinkDB console is showing so many writes per second: every time you run that query it's resulting in thousands of writes.
Usually you want to update documents inside of ReQL, like so:
r.Table("scores").Get(strconv.Itoa(pl)).Update(func(row r.Term) interface{} {
    return map[string]interface{}{"Score": row.Field("Score").Add(sc)}
}).RunWrite(session)
If you need to do the update in Go code, though, you can replace just that one document like so:
r.Table("scores").Get(strconv.Itoa(pl)).Replace(scoreentry).RunWrite(session)
I'm not sure why it is quite that slow; it could be because, by default, each query blocks until the write has been completely flushed. I would first add some kind of instrumentation to see which operation is so slow. There are also a couple of ways you can improve the performance:
Set the Durability of the write using UpdateOpts
_, err = r.Table("scores").Update(scoreentry, r.UpdateOpts{
    Durability: "soft",
}).RunWrite(session)
Execute each query in a goroutine to allow your code to execute multiple queries in parallel (you may need to use a pool of goroutines instead, but this code is just a simplified example; see the sketch after this loop)
for {
    go func() {
        var scoreentry ScoreEntry
        pl := rand.Intn(1000)
        sc := rand.Intn(6) - 2
        res, err := r.Table("scores").Get(strconv.Itoa(pl)).Run(session)
        if err != nil {
            log.Fatal(err)
        }
        err = res.One(&scoreentry)
        scoreentry.Score = scoreentry.Score + sc
        _, err = r.Table("scores").Update(scoreentry).RunWrite(session)
    }()
}
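As a sketch of the pooled variant mentioned above, a buffered channel can act as a semaphore so that only a bounded number of queries are in flight; the limit of 32 is an arbitrary choice of mine, and it uses the in-database ReQL update from earlier, which avoids the read-modify-write round trip:
sem := make(chan struct{}, 32) // at most 32 concurrent queries
for {
    sem <- struct{}{} // blocks while 32 queries are already in flight
    go func() {
        defer func() { <-sem }()
        pl := rand.Intn(1000)
        sc := rand.Intn(6) - 2
        _, err := r.Table("scores").Get(strconv.Itoa(pl)).Update(map[string]interface{}{
            "Score": r.Row.Field("Score").Add(sc),
        }).RunWrite(session)
        if err != nil {
            log.Println(err)
        }
    }()
}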
I'm trying to insert a map value into my Cassandra database. I'm using Go to write my client. Currently it's throwing the error "can not marshal string into map(varchar, varchar)". I understand what the error is, but I can't resolve it. Here is the code that I've written.
if err := session.Query(`INSERT INTO emergency_records (mapColumn) VALUES (?)`,
    "{'key' : 'value'}").Exec(); err != nil {
    log.Fatal(err)
}
What I don't get is that I've written one query as a whole unbroken string and it works fine without throwing this error, yet when I break it down with the question-mark placeholder it throws the error. I know this is something simple that I'm overlooking and couldn't find in the documentation, but any help would be great, thanks.
I haven't used the Go Cassandra client before, but I guess passing the map as a map instead of a string should work:
mapValue := map[string]string{"key": "value"}
if err := session.Query("INSERT INTO emergency_records (mapColumn) VALUES (?)", mapValue).Exec(); err != nil {
    log.Fatal(err)
}
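If that works, reading the map back should be symmetrical; a sketch, assuming the table also has a key column id to select by (the column name is hypothetical):
var stored map[string]string
// id stands in for whatever the row's primary key actually is
if err := session.Query("SELECT mapColumn FROM emergency_records WHERE id = ?", id).Scan(&stored); err != nil {
    log.Fatal(err)
}
fmt.Println(stored["key"]) // => value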
I think I made a silly mistake somewhere, but I could not figure out where for a long time :( The code is rough, I'm just testing things.
It deletes, but for some reason not all documents. I rewrote it to delete them all one by one, and that went OK.
I use the official package for Couchbase: http://github.com/couchbase/gocb
Here is code:
var items []gocb.BulkOp
myQuery := gocb.NewN1qlQuery([Selecting ~ 283k documents from 1.5mln])
rows, err := myBucket.ExecuteN1qlQuery(myQuery, nil)
checkErr(err)
var idToDelete map[string]interface{}
for rows.Next(&idToDelete) {
    items = append(items, &gocb.RemoveOp{Key: idToDelete["id"].(string)})
}
if err := rows.Close(); err != nil {
    fmt.Println(err.Error())
}
if err := myBucket.Do(items); err != nil {
    fmt.Println(err.Error())
}
This way it deleted ~70k documents; when I ran it again, it deleted 43k more...
Then I just let it delete one by one, and it worked fine:
//var items []gocb.BulkOp
myQuery := gocb.NewN1qlQuery([Selecting ~ 180k documents from ~1.3mln])
rows, err := myBucket.ExecuteN1qlQuery(myQuery, nil)
checkErr(err)
var idToDelete map[string]interface{}
for rows.Next(&idToDelete) {
    //items = append(items, &gocb.RemoveOp{Key: idToDelete["id"].(string)})
    _, err := myBucket.Remove(idToDelete["id"].(string), 0)
    checkErr(err)
}
if err := rows.Close(); err != nil {
    fmt.Println(err.Error())
}
//err = myBucket.Do(items)
By default, queries against N1QL use a consistency level called 'not bounded'. Thus, your second run of the program queried whatever index state was valid at the time, rather than waiting for the index to catch up with all of your previous mutations. You can read more about this in Couchbase's Developer Guide; it looks like you'll want to request RequestPlus consistency for your myQuery through the consistency method on the query.
This kind of eventually consistent secondary indexing is flexible and pretty powerful, because it gives you, as a developer, the ability to decide what level of consistency you want to pay for, since index recalculations have a cost.
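With the gocb 1.x API used in the question, that would look roughly like this (a sketch; the statement placeholder is yours to fill in):
myQuery := gocb.NewN1qlQuery([your DELETE/SELECT statement])
myQuery = myQuery.Consistency(gocb.RequestPlus) // wait until prior mutations are indexed
rows, err := myBucket.ExecuteN1qlQuery(myQuery, nil)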