Build a query conditionally, keeping the common part and executing it differently - Go

I'm trying to figure out how I can build a query conditionally using GORM, save the common query part, and execute it in different ways.
Suppose I am querying a list of courses like below.
common_query_part := app.GetDB(). // <= this returns *gorm.DB
    Model(&models.Course{}).
    Where("status = ?", models.CourseStatusPublished) // models.CourseStatusPublished is a constant
Now I would like to get the list of published courses and also get their count. So I tried it like so.
published_course_count := *common_query_part
and then for the course list
result := common_query_part.
    Offset(offset).
    Limit(limit).
    Find(&courses)
if result.Error != nil {
    // handle error
}
and for the count
result = published_course_count.Count(&total)
if result.Error != nil {
    // handle error
}
The first part works perfectly, but the second part of the query does not work, nor does it generate any error. What should I do in this case? The common query part can be huge and complex, so rewriting it just to get the count would be error prone. Is there a way to keep the common query part and execute it sometimes for the published courses and sometimes for their count?

I don't believe GORM intends you to reuse your query that way. It is hard to say definitively, but it most likely doesn't work because some state is still shared.
This: published_course_count := *common_query_part does a copy by value of the gorm.DB struct, but if that struct contains any pointers (it does) then those are copied as well, resulting in two separate structs with pointers to the same objects and thus still being "linked". You need a dedicated clone function, which does exist in gorm but is not exposed.
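To illustrate the shallow-copy problem in isolation, here is a generic Go sketch (Builder and state are made-up names; this is not GORM's actual internal layout):
package main

import "fmt"

// state stands in for the internal data that gorm.DB keeps behind a pointer.
type state struct{ clauses []string }

// Builder stands in for gorm.DB: copying the struct copies the pointer, not the state.
type Builder struct{ s *state }

func main() {
    a := Builder{s: &state{}}
    b := a // copy by value, but b.s and a.s still point to the same state

    a.s.clauses = append(a.s.clauses, "status = 'published'")
    fmt.Println(len(b.s.clauses)) // 1 -- the "copy" observes a's mutation
}
GORM's DB struct holds similar pointer-backed state, which is why the dereferenced copy stays linked to the original.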
I would advise you to put the generation of the common part inside a function and call it twice; that way you don't have to copy-paste the same query.
func common_query_part() *gorm.DB {
    return app.GetDB().
        Model(&models.Course{}).
        Where("status = ?", models.CourseStatusPublished)
}
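For illustration, a minimal sketch of calling that function twice, reusing the names from the question (total is assumed to be an int64, which is what GORM v2's Count expects; older versions accept other integer types):
// one page of published courses
result := common_query_part().
    Offset(offset).
    Limit(limit).
    Find(&courses)
if result.Error != nil {
    // handle error
}

// a fresh query from the same function, used only for the count
var total int64
if err := common_query_part().Count(&total).Error; err != nil {
    // handle error
}
Each call builds a fresh *gorm.DB chain, so the pagination clauses never leak into the count query.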

Related

How to access unconverted driver.Value slice of sql.Rows

My goal is to get to the raw driver.Value values as deserialized by a SQL driver in its implementation of driver.Rows.Next(). I want to handle the conversion from the values returned by the driver to the needed target types myself, instead of relying on the automatic conversions built into Rows.Scan. Note this question does not ask your opinion on whether Rows.Scan "should" be used. I don't want to use it, and I am asking if there is any way to avoid it.
A meaningful answer does not use Rows.Scan at all. The dynamic approach illustrated in Working with Unknown Columns is awful: it invokes all the overhead of Scan and destroys the type information of the source columns, instead shredding the actual driver.Values into sql.RawBytes.
The following hack works, but relies on the internal implementation detail that sql.Rows.Next() populates the internal field lastcols with exactly the unconverted values which I want:
vpRows := reflect.ValueOf(rows)                    // rows is a *sql.Rows
vRows := reflect.Indirect(vpRows)                  // now we have the sql.Rows struct
mem := vRows.FieldByName("lastcols")               // unexported field lastcols
unsafeLastCols := unsafe.Pointer(mem.UnsafeAddr()) // Evil
plastCols := (*[]driver.Value)(unsafeLastCols)     // But effective
for rows.Next() {
    rowVals := *plastCols
    fmt.Println(rowVals)
}
The normal solution is to implement your own sql.Scanner. But this does use rows.Scan, so it violates your mysterious requirement not to use rows.Scan.
If you truly must avoid rows.Scan, you'll need to write your own driver implementation (possibly wrapping an existing driver) which provides access to the driver.Value values without rows.Scan.
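For completeness, here is a rough sketch of the sql.Scanner approach; it still goes through rows.Scan, and RawValue/readRaw are hypothetical names introduced only for illustration:
package main

import "database/sql"

// RawValue records whatever value rows.Scan hands to it. For a Scanner
// destination that is one of the driver's base types (int64, float64,
// bool, []byte, string, time.Time or nil).
type RawValue struct {
    V interface{}
}

func (r *RawValue) Scan(src interface{}) error {
    // Note: []byte values may be reused by the driver; copy them if you
    // retain them beyond the next call to Scan.
    r.V = src
    return nil
}

func readRaw(rows *sql.Rows) ([][]interface{}, error) {
    cols, err := rows.Columns()
    if err != nil {
        return nil, err
    }
    var out [][]interface{}
    for rows.Next() {
        raw := make([]RawValue, len(cols))
        dest := make([]interface{}, len(cols))
        for i := range raw {
            dest[i] = &raw[i]
        }
        if err := rows.Scan(dest...); err != nil {
            return nil, err
        }
        row := make([]interface{}, len(cols))
        for i := range raw {
            row[i] = raw[i].V
        }
        out = append(out, row)
    }
    return out, rows.Err()
}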

Subquery, for lack of a better term, when using an API written in GraphQL

I'm relatively new to GraphQL so please bear with me ...
That said, I'm writing an app in node.js to push/pull data from two disparate systems, one of which has an API written in GraphQL.
For the GraphQL system, I have something like the following types defined for me:
Time {
    TimeId: Int
    TaskId: Int
    ProjectId: Int
    Project: [Project]
    TimeInSeconds: Int
    Timestamp: Date
}
and
Task {
    TaskId: Int
    TaskName: String
    TaskDescription: String
}
Where Project is another type whose definition isn't important, only that it is included in the type definition as a field...
What I would like to know is if there is a way to write a query for Time such that I can include the Task type's values in my results, in a similar way to how the Project type's values are included in the definition?
I am using someone else's API and do not have the ability to define my own custom types. I can write my own limited queries, but I don't know if the limits are set by the devs that wrote the API or my limited ability with GraphQL.
My suspicion is that I cannot and that I will have to query both separately and combine them after the fact, but I wanted to check here just in case.
Unfortunately, unless the Time type exposes some kind of field to fetch the relevant Task, you won't be able to query for it within the same request. You can include multiple queries within a single GraphQL request; however, they are run in parallel, which means you won't be able to use the TaskId value returned by one query as a variable used in another query.
This sort of problem is best solved by modifying the schema, but if that's not an option then unfortunately the only other option is to make each request sequentially and then combine the results client-side.

graphql- same query with different arguments

Can the below be achieved with GraphQL:
We have getusers() / getusers(id=3) / getusers(name='John'). Can we use the same query to accept different parameters (arguments)?
I assume you mean something like:
type Query {
    getusers: [User]!
    getusers(id: ID!): User
    getusers(name: String!): User
}
IMHO the first thing to do is try. You should get an error saying that Query.getusers can only be defined once, which would answer your question right away.
Here's the actual spec saying that such a thing is not valid: http://facebook.github.io/graphql/June2018/#example-5e409
Quote:
Each named operation definition must be unique within a document when
referred to by its name.
Solution
From what I've seen, the most GraphQL'y way to create such an API is to define a filter input type, something like this:
input UserFilter {
    ids: [ID]
    names: [String]
}
and then:
type Query {
    users(filter: UserFilter): [User]!
}
The resolver would check what filters were passed (if any) and query the data accordingly.
This is very simple and yet really powerful as it allows the client to query for an arbitrary number of users using an arbitrary filter. As a back-end developer you may add more options to UserFilter later on, including some pagination options and other cool things, while keeping the old API intact. And, of course, it is up to you how flexible you want this API to be.
But why is it like that?
Warning! I am assuming some things here and there, and might be wrong.
GraphQL is only a logical API layer, which is supposed to be server-agnostic. However, I believe that the original implementation was in JavaScript (citation needed). If you then consider the technical aspects of implementing a GraphQL API in JS, you might get an idea about why it is the way it is.
Each query points to a resolver function. In JS resolvers are simple functions stored inside plain objects at paths specified by the query/mutation/subscription name. As you may know, JS objects can't have more than one path with the same name. This means that you could only define a single resolver for a given query name, thus all three getusers would map to the same function Query.getusers(obj, args, ctx, info) anyway.
So even if GraphQL allowed for fields with the same name, the resolver would have to explicitly check for whatever arguments were passed, i.e. if (args.id) { ... } else if (args.name) { ... }, etc., thus partially defeating the point of having separate endpoints. On the other hand, there is an overall better (particularly from the client's perspective) way to define such an API, as demonstrated above.
Final note
GraphQL is conceptually different from REST, so it doesn't make sense to think in terms of three endpoints (/users, /users/:id and /users/:name), which is what I guess you were doing. A paradigm shift is required in order to unveil the full potential of the language.
A request of this type works (using aliases, so the same field can be requested more than once with different arguments):
query {
    first: getusers
    second: getusers(id: 3)
    third: getusers(name: "John")
}

What is the point of naming queries and mutations in GraphQL?

Pardon the naive question, but I've looked all over for the answer and all I've found is either vague or makes no sense to me. Take this example from the GraphQL spec:
query getZuckProfile($devicePicSize: Int) {
    user(id: 4) {
        id
        name
        profilePic(size: $devicePicSize)
    }
}
What is the point of naming this query getZuckProfile? I've seen something about GraphQL documents containing multiple operations. Does naming queries affect the returned data somehow? I'd test this out myself, but I don't have a server and dataset I can easily play with to experiment. But it would be good if something in some document somewhere could clarify this--thus far all of the examples are super simple single queries, or are queries that are named but that don't explain why they are (other than "here's a cool thing you can do.") What benefits do I get from naming queries that I don't have when I send a single, anonymous query per request?
Also, regarding mutations, I see in the spec:
mutation setName {
    setName(name: "Zuck") {
        newName
    }
}
In this case, you're specifying setName twice. Why? I get that one of these is the field name of the mutation and is needed to match it to the back-end schema, but why not:
mutation {
    setName(name: "Zuck") {
        ...
What benefit do I get specifying the same name twice? I get that the first is likely arbitrary, but why isn't it noise? I have to be missing something obvious, but nothing I've found thus far has cleared it up for me.
The query name doesn't have any meaning on the server whatsoever. It's only used for clients to identify the responses (since you can send multiple queries/mutations in a single request).
In fact, you can send just an anonymous query object if that's the only thing in the GraphQL request (and doesn't have any parameters):
{
    user(id: 4) {
        id
        name
        profilePic(size: 200)
    }
}
This only works for a query, not mutation.
EDIT:
As @orta notes below, the name could also be used by the server to identify a persistent query. However, this is not part of the GraphQL spec, it's just a custom implementation on top.
We use named queries so that they can be monitored consistently, and so that we can do persistent storage of a query. The duplication is there for query variables to fill the gaps.
As an example:
query getArtwork($id: String!) {
    artwork(id: $id) {
        title
    }
}
You can run it against the Artsy GraphQL API here
The advantage is that you send the same query string each time, not a different one, because the query variables are the bit that differs. This means you can build tools on top of those queries because you can treat them as immutable.

Adding an element to a slice in a HandlerFunc and returning it as a whole

I'm writing a service to learn Go. My main function can be found below. It starts by reading an XML file and storing the items in a slice. I have a /rss endpoint which outputs an RSS feed from the items stored in the "database". This is working fine. I also have an endpoint (/add/{base64}) which is used to add a new item to that slice. Unfortunately I don't know how to do this. For some reason I need to return the new database with the added record, so it becomes available to /rss. But how?
My concrete problem is:
I know how to add a record to the database.
But I don't know how to return the full database (including the added record) so the /rss endpoint is able to use it. So I want to let rest.AddArticle return the new database so the /rss endpoint knows about the added item.
func main() {
    defer glog.Flush()
    // read database
    database := model.ReadFileIntoSlice()
    // initialise mux router
    r := mux.NewRouter()
    // http handles
    r.HandleFunc("/add/{base64url}", rest.AddArticle(database))
    r.HandleFunc("/rss", rest.GenerateRSS(database))
    // start server
    http.Handle("/", r)
    glog.Infof("running on port %d", *port)
    http.ListenAndServe(":"+strconv.Itoa(*port), nil)
}
Or is there some other solution which does the job? I just want database to be available through all packages.
From what I can tell, the problem is that you're writing to your db but reading from the cached version, so the response of /rss just doesn't reflect the model at the time the request is made. If you look at this code;
database := model.ReadFileIntoSlice()
// initialise mux router
r := mux.NewRouter()
// http handles
r.HandleFunc("/add/{base64url}", rest.AddArticle(database))
you somehow need to modify the value of database. There are many ways you could do this. A few options would be:
1) Define it on some object or at a package level, and then directly modify it from wherever AddArticle is defined.
2) Refresh your in-memory version: before you return the results, read from the db again so you're assured to have the latest (with obvious performance implications).
3) Don't pass database by value; make the argument a pointer instead. AddArticle is getting a copy of database rather than the address of the version you're reading from in your rss call. You could instead pass a pointer into that method so that the original copy is modified (it also performs substantially better as your model gets larger).
Based on the simplicity of your program I'd probably do 3. Realistically, 2 is a more robust solution, and serious enterprise software would probably require something more along those lines (your model doesn't work if your app is load balanced or something like that).
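For what it's worth, here is a minimal sketch of option 3; the signatures of rest.AddArticle and rest.GenerateRSS, the model.Article type, and the import path are assumptions, since the question doesn't show them:
package rest

import (
    "net/http"
    "sync"

    "example.com/app/model" // hypothetical import path for the question's model package
)

var mu sync.Mutex // handlers run concurrently, so guard the shared slice

// AddArticle receives a pointer to the slice, so appends are visible to every handler.
func AddArticle(db *[]model.Article) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        article := model.Article{} // ... decode the {base64url} path variable here ...
        mu.Lock()
        *db = append(*db, article) // modifies the caller's slice, not a copy
        mu.Unlock()
        w.WriteHeader(http.StatusCreated)
    }
}

// GenerateRSS reads through the same pointer and therefore sees added items.
func GenerateRSS(db *[]model.Article) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        mu.Lock()
        items := make([]model.Article, len(*db))
        copy(items, *db)
        mu.Unlock()
        // ... render items as RSS ...
    }
}
In main, the handlers would then be wired up with rest.AddArticle(&database) and rest.GenerateRSS(&database) instead of passing database by value.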

Resources