Is there no migration file at all in GORM?

If I use GORM, is there no migration file? From what I have googled so far, my understanding is that GORM does not generate migration files, and if I want to generate one I have to use a CLI tool. The reason GORM doesn't generate a migration file is that "It WON'T delete unused columns to protect your data." (https://gorm.io/docs/migration.html#Auto-Migration)
How do we keep track of the changes, then? In Django, a migration file is generated, and we can track the changes whenever we migrate.
Apologies in advance if I have misunderstood something... I only started learning Go and databases a few days ago.

I believe GORM doesn't have the solution you want. There are some CLIs from the GORM team and from other enthusiasts, but they don't really do what we actually want (yes, I needed the same tool as well). At the end of the day, Git is your only friend when using GORM.
P.S. I found a good solution in Facebook's Ent (the entity framework for Go), which is a significantly better option for interacting with databases in Go. It has a built-in solution for your need: the WriteTo function, which writes the schema changes to a given io.Writer instead of running them against the database.
package main

import (
    "context"
    "log"
    "os"
    // plus the ent package generated for your project, e.g. "your/module/ent"
)

func main() {
    client, err := ent.Open("mysql", "root:pass@tcp(localhost:3306)/test")
    if err != nil {
        log.Fatalf("failed connecting to mysql: %v", err)
    }
    defer client.Close()
    ctx := context.Background()
    // Dump migration changes to an SQL script.
    f, err := os.Create("migrate.sql")
    if err != nil {
        log.Fatalf("create migrate file: %v", err)
    }
    defer f.Close()
    if err := client.Schema.WriteTo(ctx, f); err != nil {
        log.Fatalf("failed printing schema changes: %v", err)
    }
}
Or you can simply print the changes to the terminal by setting os.Stdout as a target output location for WriteTo.
client.Schema.WriteTo(ctx, os.Stdout)
Reference: Database Migration - Offline Mode
I hope this helps you find a better option next time in Ent, which was created, open-sourced, and is maintained by Facebook for its own needs and scale. You might also be interested in Ariel Mashraki's post, Introducing ent.


How do I access custom fields in an error?

Objective
Add a command to Dropbox's CLI tool that gets the shared link for a given path (file or folder).
The changes are here: github fork.
Background
The dropbox-go-sdk has a function that takes a path, and returns a new shared link, or returns an error containing the existing shared link.
I don't know how to use the error to extract the existing shared link.
Code
The code is on GitHub; a snippet is here:
dbx := sharing.New(config)
res, err := dbx.CreateSharedLinkWithSettings(arg)
if err != nil {
    switch e := err.(type) {
    case sharing.CreateSharedLinkWithSettingsAPIError:
        fmt.Printf("%v", e.EndpointError)
    default:
        return err
    }
}
This prints the following:
&{{shared_link_already_exists} <nil> <nil>}found unknown shared link typeError: shared_link_already_exists/...
tracing:
CreateSharedLinkWithSettings --> CreateSharedLinkWithSettingsAPIError --> CreateSharedLinkWithSettingsError --> SharedLinkAlreadyExistsMetadata --> IsSharedLinkMetadata
IsSharedLinkMetadata contains the Url that I'm looking for.
More Info
The API docs point to CreateSharedLinkWithSettings, which should pass back the information in the error including the existing Url.
I struggle to understand how to deal with the error and extract the url from it.
The dbxcli has some code doing a similar operation, but again, I don't understand how it works well enough to apply it to the code I'm working on. Is it a struct? A map? I don't know what this thing is called. There's some weird magic err.(type) stuff happening in the code. How do I access the data?
dbx := sharing.New(config)
res, err := dbx.CreateSharedLinkWithSettings(arg)
if err != nil {
    switch e := err.(type) {
    case sharing.CreateSharedLinkWithSettingsAPIError:
        // e already has the concrete error type here, so you can
        // access the field you want directly.
        fmt.Printf("%v", e.EndpointError)
        fmt.Println(e.EndpointError.SharedLinkAlreadyExists.Metadata.Url)
    default:
        return err
    }
}
The question was answered in the comments by @jimb. The answer is that you access the fields like any other Go data structure; nothing special.
The errors I got when trying to access the fields occurred because the fields were not there.
The problem with the code was a dependency issue: the code depends on an older version of the go-sdk, while I referenced the latest version.
This question serves as a good explanation of how real Go programmers handle errors in their code, with examples. I wasn't able to find this online, so I won't close the question.

How to get a list of all files in directory with Google Drive API(v3)

I'm stuck on a function that should return a list of all the files in a directory (in this case the "root" directory). When I call it, it returns only the files that I added with my program (which can also upload files to Google Drive), not all files. It also shows files that I have deleted. What am I doing wrong?
I copied this function from the Google Drive API Quickstart:
service, err := getService()
if err != nil {
    log.Fatalf("Unable to retrieve Drive client: %v", err)
}
r, err := service.Files.List().Q("'root' in parents").Do()
if err != nil {
    log.Fatalf("Unable to retrieve files: %v", err)
}
fmt.Println("Files:")
if len(r.Files) == 0 {
    fmt.Println("No files found.")
} else {
    for _, i := range r.Files {
        fmt.Printf("%v (%v)\n", i.Name, i.Id)
    }
}
You want to retrieve all files just under the root folder.
You want to achieve this using google-api-go-client with golang.
You have already been able to get and put values for Google Drive using the Drive API.
If my understanding is correct, how about this answer? Please think of this as just one of several possible answers.
Issue and workaround:
From the situation of "When I call this function, it returns only the files I added with my program, not all files", I thought that your scopes might include https://www.googleapis.com/auth/drive.file. When https://www.googleapis.com/auth/drive.file is used as the scope, only the files created by the application are retrieved.
In order to retrieve all files just under the root folder, please use the following scopes.
https://www.googleapis.com/auth/drive
https://www.googleapis.com/auth/drive.readonly
https://www.googleapis.com/auth/drive.metadata.readonly
https://www.googleapis.com/auth/drive.metadata
If you only want to retrieve the file list, the scopes ending in .readonly can be used.
Modified script:
From your question, I could see that you are using google-api-go-client with golang and the Go Quickstart. In this case, how about the following modification?
If drive.DriveFileScope is included in the scopes, please modify as follows.
From:
config, err := google.ConfigFromJSON(b, drive.DriveFileScope)
To:
config, err := google.ConfigFromJSON(b, drive.DriveMetadataScope)
or
config, err := google.ConfigFromJSON(b, drive.DriveReadonlyScope)
If you want to also upload the file, please use drive.DriveScope.
Note:
When you modify the scopes, please delete the token.json file (tokFile := "token.json"), then run the script and authorize again. This way the modified scopes are reflected in the access token and refresh token. Please be careful about this.
References:
google-api-go-client
Go Quickstart
Files: list
If I misunderstood your question and this was not the direction you want, I apologize.

Logged in user, Windows, in Golang

I need to get the currently logged in user(s) on the local Windows machine, using Golang. I'm not looking for the user currently running the application, which can be got from the built-in function user.Current().
I can call query user from cmd and this gives me the list (string manipulation required, but that is not a problem) of users I need.
The code I have tried is:
out, err := exec.Command("query", "user").Output()
if err != nil {
    panic(err)
}
// ...do something with 'out'
This produces the error panic: exit status 1. The same occurs if I do:
out, err := exec.Command("cmd", "/C", "query", "user").Output()
...
As is usual with this kind of question, the solution is to proceed like this:
Research (using MSDN and other sources) how to achieve the stated goal using the Win32 API.
Use the built-in syscall package (or, if available/desirable, helper 3rd-party packages) to make those calls from Go.
The first step can be this
which yields the solution
which basically is "use WTS".
The way to go is to
Connect to the WTS subsystem¹.
Enumerate the currently active sessions.
Query each one for the information about the
identity of the user associated with it.
The second step is trickier but basically you'd need to research
how others do that.
See this
and this
and this for a few examples.
You might also look at files named _windows*.go in
the Go source of the syscall package.
¹ Note that even on a single-user machine, everything
related to seat/session management comes through WTS
(however castrated it is depending on a particular flavor of Windows). This is true since at least XP/W2k3.
package main

import (
    "fmt"
    "os/exec"
)

func main() {
    out, err := exec.Command("cmd", "/C", "query user").Output()
    if err != nil {
        fmt.Println("Error: ", err)
    }
    fmt.Println(string(out))
}
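Since query user returns plain text, the string manipulation the question mentions might look like this sketch (the column layout and the ">" marker on the current session are assumptions based on typical query user output; verify against your machine and locale):

```go
package main

import (
	"fmt"
	"strings"
)

// parseQueryUser extracts usernames from `query user` output.
// It assumes the first line is a header row and that the current
// session's username is prefixed with ">" -- typical, but not
// guaranteed across Windows versions and locales.
func parseQueryUser(out string) []string {
	var users []string
	for i, line := range strings.Split(out, "\n") {
		fields := strings.Fields(line)
		if i == 0 || len(fields) == 0 {
			continue // skip the header row and blank lines
		}
		users = append(users, strings.TrimPrefix(fields[0], ">"))
	}
	return users
}

func main() {
	sample := " USERNAME              SESSIONNAME        ID  STATE\n" +
		">alice                 console             1  Active\n" +
		" bob                                       2  Disc\n"
	fmt.Println(parseQueryUser(sample))
}
```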

Go: Run test from multiple package with DB initialization

I have a Go project with this structure (multiple pairs of files like these in each package).
- api
- userHandler.go
- userHandler_test.go
- database
- user.go
- user_test.go
Inside user.go I have the User struct and the functions to create/get/update a User (I'm using GORM, but that is not the issue); user_test.go contains the corresponding tests.
I'd like to have the DB cleaned (with all data removed, or in a known state) for each test file, so I've tried to create one suite (using Testify) per file and use the SetupSuite function, but the behaviour seems non-deterministic, and I'm probably doing something wrong.
So my questions are:
What is the best way to share a DB connection? Is a global variable the best option?
What is the best way to create the tables in the DB once, and then initialise the DB with custom data before each file_test.go is run?
Right now I'm also having a strange bug: running
go test path/package1
go test path/package2
Everything works fine, but if I run (to test all the packages)
cd path && go test ./...
I get errors that seem non-deterministic, which is why I'm guessing that the DB connection is not handled properly.
If your api package depends on your database package (which it appears to), then your api package should accept a database connection pool (e.g. a *sql.DB) from its caller.
In your tests for the api package, you can then just pass in an initialised pool (perhaps with the test schema/fixtures pre-populated). This can either be a global you initialise in init() for the api package, or a setup() and defer teardown() pattern in each test function.
Here's the former (simplest) approach, where you just create a shared database and schema for your tests to use.
package database

import (
    "database/sql"
    "log"
)

var testDB *sql.DB

// This gets run before your actual test functions do.
func init() {
    var err error
    testDB, err = sql.Open(...)
    if err != nil {
        log.Fatalf("test init failed: %s", err)
    }
    if _, err = testDB.Exec(`CREATE TABLE ....`); err != nil {
        log.Fatalf("test schema creation failed: %s", err)
    }
}
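On the api side, accepting the pool explicitly keeps the handlers testable. A minimal sketch (UserHandler and its fields are illustrative names, not taken from the question's code):

```go
package main

import (
	"database/sql"
	"fmt"
	"net/http"
)

// UserHandler depends on a *sql.DB that the caller provides, so
// tests can inject a pool wired to a test database instead of the
// production one.
type UserHandler struct {
	DB *sql.DB
}

func NewUserHandler(db *sql.DB) *UserHandler {
	return &UserHandler{DB: db}
}

func (h *UserHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	// ...query h.DB and write the response...
}

func main() {
	// In production you'd pass a real pool, e.g. from sql.Open;
	// in tests, pass one pointed at your test schema. nil stands
	// in for a real *sql.DB in this sketch.
	h := NewUserHandler(nil)
	fmt.Println(h.DB == nil)
}
```

Because the dependency flows in from outside, go test ./... can give each package its own initialised pool rather than racing on shared global state.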
Some tips:
You can also have a setup() function create a table with a random suffix and insert your test data, so that your tests don't share the same test table (and therefore don't risk conflicting with or depending on each other). Capture that table name and drop it in your deferred teardown() function.
https://medium.com/@benbjohnson/structuring-applications-in-go-3b04be4ff091 is worth reading for some additional perspective.
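The random-suffix idea from the first tip can be sketched like this (the prefix and naming scheme are assumptions; the actual CREATE/DROP statements depend on your schema):

```go
package main

import (
	"fmt"
	"math/rand"
)

// testTableName returns a uniquely suffixed table name so that
// concurrently running tests don't stomp on each other's fixtures.
func testTableName(prefix string) string {
	return fmt.Sprintf("%s_%08x", prefix, rand.Uint32())
}

func main() {
	name := testTableName("users_test")
	fmt.Println(name)
	// In a real test you would then run, against your test pool:
	//   db.Exec("CREATE TABLE " + name + " (...)")
	// and in the deferred teardown():
	//   db.Exec("DROP TABLE " + name)
}
```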

Use package file to write to Cloud Storage?

Golang provides the file package to access Cloud Storage.
The package's Create function requires the io.WriteCloser interface. However, I have not found a single sample or documentation showing how to actually save a file to Cloud Storage.
Can anybody help? Is there a higher level implementation of io.WriteCloser that would allow us to store files in Cloud Storage? Any sample code?
We've obviously tried to Google it ourselves but found nothing and now hope for the community to help.
It's perhaps true that the behavior is not well defined in the documentation.
If you check the code: https://code.google.com/p/appengine-go/source/browse/appengine/file/write.go#133
In each call to Write, the data is sent to the cloud (line 139), so you don't need to save anything explicitly. (You should close the file when you're done, though.)
Anyway, I'm confused by your wording: "The package's Create function requires the io.WriteCloser interface." That's not true. The package's Create function returns an io.WriteCloser, that is, a thingy you can write to and close.
yourFile, _, err := Create(ctx, "filename", nil)
// Check err != nil here.
defer func() {
    err := yourFile.Close()
    // Check err != nil here.
}()
yourFile.Write([]byte("This will be sent to the file immediately."))
fmt.Fprintln(yourFile, "This too.")
io.Copy(yourFile, someReader)
This is how interfaces work in Go. They provide you with a set of methods you can call while hiding the actual implementation from you; and when you depend on an interface instead of a particular implementation, you can combine implementations in many ways, as fmt.Fprintln and io.Copy do.
