Please take a moment to review my approach and share any valuable advice you may have [closed] - go

Hello fellow Gophers,
I'm new to Go, and I'm looking to develop a REST API using it. The API is pretty simple and only has one endpoint, which is a GET request. The purpose of the API is to allow a third-party application (known domain) to access data that's being stored in our mobile app backend, which was written in Node.js and Express (not by me).
I want to authenticate the endpoint with an API key and allow access only from a known domain. Since the endpoint does not require registration or login, I assume an API key plus a domain allowlist will suffice (correct me if I am wrong :)). My authentication code looks like this:
apiKey := r.Header.Get("Authorization")
domain := r.Header.Get("Origin")

if apiKey != "YOUR_API_KEY" {
    http.Error(w, "Invalid API key", http.StatusUnauthorized)
    return
}

if !isDomainAllowed(domain) {
    http.Error(w, "Domain not allowed", http.StatusUnauthorized)
    return
}
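isDomainAllowed is not shown above; a minimal sketch of it, assuming a hard-coded allowlist (the entry is a placeholder, not the real domain), could be:

var allowedDomains = map[string]bool{
    "https://partner.example.com": true, // placeholder for the known third-party domain
}

func isDomainAllowed(origin string) bool {
    return allowedDomains[origin]
}

Note that the Origin header is only sent by browsers and is trivial to spoof from a server-side client, so the domain check is best treated as a convenience on top of the API key rather than a security boundary.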
The response JSON object will look like this:
{
    "username": "",
    "email": "",
    ...
    "data_a": [
        {
            "D_a": "..."
        }
    ],
    "data_b": [
        {
            "d_b": "..."
        },
        {
            "d_c": "..."
        },
        ...
    ]
}
To generate this JSON response, I created a function in the MySQL database that converts a big chunk of rows into a JSON object. The database returns the data as a JSON object, which is parsed into a Go struct, marshaled, and sent back to the client. Is this approach good?
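To make the flow concrete, here is a minimal sketch of what that handler could look like with sqlx. The stored function name (get_user_json), the query parameter, and the struct fields are assumptions based on the JSON shape above, not the actual schema:

package controllers

import (
    "encoding/json"
    "net/http"

    "github.com/jmoiron/sqlx"
)

type UserController struct {
    DB *sqlx.DB
}

// User mirrors the JSON object the MySQL function returns.
type User struct {
    Username string          `json:"username"`
    Email    string          `json:"email"`
    DataA    json.RawMessage `json:"data_a"`
    DataB    json.RawMessage `json:"data_b"`
}

// GetUser reads the pre-built JSON from MySQL, validates it by unmarshalling
// it into the struct, and writes it back to the client.
func (c *UserController) GetUser(w http.ResponseWriter, r *http.Request) {
    var raw []byte
    // get_user_json is a hypothetical stand-in for the MySQL function that aggregates the rows.
    if err := c.DB.Get(&raw, "SELECT get_user_json(?)", r.URL.Query().Get("id")); err != nil {
        http.Error(w, "failed to load user", http.StatusInternalServerError)
        return
    }

    var u User
    if err := json.Unmarshal(raw, &u); err != nil {
        http.Error(w, "malformed user data", http.StatusInternalServerError)
        return
    }

    w.Header().Set("Content-Type", "application/json")
    json.NewEncoder(w).Encode(u)
}

Since MySQL already returns well-formed JSON, you could also skip the struct round-trip and write raw straight to the response; unmarshalling first just gives you a place to validate or reshape the data.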
Prospective project structure
I want to keep the project structure minimal, so I'm planning to have a "controllers" folder with a user_controller.go file that contains the user struct, the routes/handlers, and the methods that fetch data from the DB. A "db_client" folder will hold a get_connection.go file for DB connection initialization. Finally, there is main.go.
- controllers
    - user_controller.go   (user struct, routes/handlers, db operations)
- db_client
    - get_connection.go    (db connection initialization)
- main.go
As I am more comfortable writing raw queries, I do not want to use an ORM, and I want to keep the stack as minimal as possible. I will use the sqlx and net/http packages.
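For completeness, here is a rough sketch of how get_connection.go and main.go could wire this together with just sqlx and net/http. The module path, DSN, route, and port are placeholders, and withAuth stands in for the API-key check shown earlier:

package main

import (
    "log"
    "net/http"

    _ "github.com/go-sql-driver/mysql"
    "github.com/jmoiron/sqlx"

    "example.com/myapi/controllers" // hypothetical module path for the controllers folder
)

// getConnection would live in db_client/get_connection.go.
func getConnection(dsn string) (*sqlx.DB, error) {
    return sqlx.Connect("mysql", dsn)
}

// withAuth wraps a handler with the API-key check from earlier.
func withAuth(next http.HandlerFunc) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        if r.Header.Get("Authorization") != "YOUR_API_KEY" {
            http.Error(w, "Invalid API key", http.StatusUnauthorized)
            return
        }
        next(w, r)
    }
}

func main() {
    db, err := getConnection("user:pass@tcp(localhost:3306)/appdb") // placeholder DSN
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    uc := &controllers.UserController{DB: db}

    mux := http.NewServeMux()
    mux.HandleFunc("/user", withAuth(uc.GetUser))
    log.Fatal(http.ListenAndServe(":8080", mux))
}

Keeping the auth check in a small middleware like this keeps the handler itself focused on the DB work.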
What do you think of this approach? Any feedback or suggestions are welcome.

Related

How can I obtain password hash from ParseUser?

I am currently in the process of migrating all user accounts of my parse-server backend to a 3rd-party SSO provider. The provider allows me to import users with pre-hashed passwords, which lets me do the transition without needing the users to sign in to finish the migration process.
I have been having issues trying to obtain the hashed password from the ParseUser object. I can see it in MongoDB (the _hashed_password field); however, I have been unable to extract the password field from the queried object.
I obtain the ParseUser object via the following query (simplified, async/await removed):
const query = new Parse.Query(Parse.User);
query.find({ useMasterKey: true }).then(users => {
    users.forEach(user => {
        // obtain password here + do migration
    });
});
I have attempted to get the password via
user.getPassword()
user.get("password")
user.get("_hashed_password")
query.select(["_hashed_password", "password"]).find({useMasterKey: true}).then(...)
The getPassword() function does not exist, but I wanted to try it anyway. The get("password") and get("_hashed_password") calls return undefined.
The query.select(...) returns the entire user (except the password), even though I thought it would return either the password or an empty object.
My question is: how can I programmatically get the hashed password of a user on the Parse platform?
Currently, for debugging purposes, I am developing this migration as a cloud function. Once I have it working I plan to move it to a job. I believe this should have no effect on the way the code works, but I am leaving this note here just in case.
Thanks in advance for your help.
Thanks to Davi Macêdo, I figured it out.
One has to use an aggregate query; however, the _hashed_password field gets filtered out by Parse even in aggregate queries, so we need to project an additional field based on _hashed_password. The following code works:
new Parse.Query(Parse.User).aggregate({
    "project": {
        "myPassword": "$_hashed_password"
    }
})

How to structure Shopify data into a Firestore collection that can be queried efficiently

The Background
In an attempt to build some back-end services for my e-commerce (Shopify-based) site, I have set up a Cloud Function that writes order details to Firestore for every new order created; it is invoked by the webhook POST that Shopify provides (the orders/create webhook).
My current cloud function:
exports.saveOrderDetails = functions.https.onRequest((req, res) => {
    var docRef = db.collection('orders').doc(req.body.name);
    const details = req.body;
    var setData = docRef.set(req.body).then(a => {
        res.status(200).send();
    });
});
This captures the data from the webhook and stores it in a document named after the order number (the "name" field) within my "orders" collection, as shown in the Firestore screenshot.
My question is: with the help of body-parser (which already parses out "name", represented as #9999 in my screenshot, to set my document name), how could I improve my cloud function so this webhook POST is stored in a better data structure for Firestore and can be queried later?
After reviewing the comments on this question, I moved it over to Firebase-Talk, and it appears the feature I am attempting here is close to what is known as "collection group queries". I was informed that I should adjust my data-model approach, since this feature is currently still on the roadmap, and perhaps look into the Firestore REST API as suggested by @jason-berryman.
Besides the REST API, @frank-van-puffelen made a great suggestion to look into working with arrays, lists, and sets for Firebase/Firestore.
Another approach that could mitigate this in my scenario is to have my HTTP Firestore cloud function take multiple parsing arguments that create more top-level documents; however, this could become a point of scaling failure or increase cost, due to putting more parsing logic in my cloud function and adding latency...
I will mark my question as answered for the time being to hopefully help others to understand how to work with documents in a single collection in Firestore and not attempt to query groups of collections before they get too far into modelling and need to restructure their app.

What is the idiomatic way to handle api versions [closed]

I'm creating a server in Go intended for a mobile app. I need to be able to support multiple versions of the API for cases where users don't update the app. The main concern with the versioning is to return data in the correct format for the version of the mobile app.
I've seen that there are three basic ways to do this.
A. One way is to have one route handler on "/" and let that function parse the URL for versioning.
Example:
func main() {
    http.HandleFunc("/", routes.ParseFullURI)
}
B. Use a library such as gorilla/mux to handle patterns within the router, but I saw some warnings that this can be too slow.
Example:
func main() {
    mux.HandleFunc("{version:}/", routes.ParseVersionForHome)
    mux.HandleFunc("{version:}/getData", routes.ParseVersionForGetData)
    mux.HandleFunc("{version:}/otherCall", routes.ParseVersionForOtherCall)
}
C. Have individual urls that don't change, but based on the header, split into different versions.
Example:
func main() {
    http.HandleFunc("/", routes.ParseHeaderForVersionForHome)
    http.HandleFunc("/getData", routes.ParseHeaderForVersionForGetData)
    http.HandleFunc("/otherCall", routes.ParseHeaderForVersionForOtherCall)
}
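To make option C concrete, here is a rough sketch of how one of those handlers could branch on a version header; the header name (X-API-Version) and the version values are illustrative only:

package main

import (
    "fmt"
    "net/http"
)

func getDataV1(w http.ResponseWriter, r *http.Request) { fmt.Fprint(w, `{"data":"v1 shape"}`) }
func getDataV2(w http.ResponseWriter, r *http.Request) { fmt.Fprint(w, `{"data":"v2 shape"}`) }

// parseHeaderForVersionForGetData picks the response format based on a version header.
func parseHeaderForVersionForGetData(w http.ResponseWriter, r *http.Request) {
    switch r.Header.Get("X-API-Version") {
    case "", "1": // old clients that never send the header fall back to v1
        getDataV1(w, r)
    case "2":
        getDataV2(w, r)
    default:
        http.Error(w, "unsupported API version", http.StatusBadRequest)
    }
}

func main() {
    http.HandleFunc("/getData", parseHeaderForVersionForGetData)
    http.ListenAndServe(":8080", nil)
}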
I'm concerned that option A will be too messy code-wise, that option B will be too slow performance-wise, and that option C will be difficult for the client to handle, or will get confusing since the version isn't clearly labeled.
Which method is the most idiomatic for Go, and will result in the greatest performance for a mobile app which will be polling often?
There are many routing frameworks that allow for grouping; for instance, with echo (a very good framework if you want speed):
package main

import "github.com/labstack/echo"

func ping(c *echo.Context) {
    c.String(200, "pong")
}

func main() {
    e := echo.New()

    v1 := e.Group("/v1")
    v1.Get("/ping", ping)

    v2 := e.Group("/v2")
    v2.Get("/ping", ping)

    e.Run(":4444")
}
I think this is quite clean.
I am sure many other frameworks allow for this. I know for a fact martini does, but that is not an idiomatic framework...

Document HAL "_links" from Spring Hateoas (with swagger)? [closed]

I have a REST service that I want to document for my client development team.
So I added some links from Spring HATEOAS to my resource API, and plugged swagger-springmvc @Api... annotations into it to document everything and produce a good API reference so my Angular team can understand the REST service.
The problem is that Swagger is unable to discover which links are possible, and just gives me a big array of Links without saying what their possible values are.
Here is a (simple) example. Swagger detects:

Model Schema

CollectionListResource {
    collections (array[CollectionResource]): All available collections,
    links (array[Link]): Relations for next actions
}
CollectionResource {
    collectionId (string): Collection Unique Id,
    name (string): Human readable collection name,
    links (array[Link]): Relations for next actions
}
Link {
    rel (string, optional),
    templated (boolean, optional),
    href (string, optional)
}
And what I in fact get in HAL is:
{"collections":
[{"collectionId":"5370a206b399c65f05a7c59e",
"name":"default",
"_links":{ [
"self":{
"href":"http://localhost:9080/collections/5370a206b399c65f05a7c59e"
},
"delete":{
"href":"http://localhost:9080/collections/5370a206b399c65f05a7c59e"
}
]}
}, ...]}
I've tried to extend Link and ResourceSupport to have annotated versions of them, but that led me nowhere.
Is there a way/tool I could use to generate a good API doc that states that the self relation retrieves the content and the delete relation deletes the collection?
I liked Swagger for its good UI, but I don't mind changing my documentation tool if it helps make the documentation really complete.
I could eventually think of changing spring-hateoas for another link generator, but I'm not sure there is a better tool available right now.
Swagger-UI as such is not hypermedia-aware; or at least it's limited in that it can ONLY navigate from top-level APIs to API listings. That has not changed much in v2.0 of the spec either, with the notable addition of linking to external documents for out-of-band documentation.
What you need is a hybrid of the HAL browser and the swagger-ui. As you rightly pointed out, there is a semantic gap between the word "delete" and what it means in the context of a collection resource. HAL uses a combination of curies and optionally a profile document (ALPS) to bridge this gap. Curies are nothing but namespaced versions of link relations so that they don't collide with other domains. RESTful Web APIs is a great resource to learn more about these ideas and how to design media types. The spring data rest project also has a great example of how to achieve that.
One of the ways that I think would work is to adjust the swagger specification to support an operations-oriented view rather than an api-listing-oriented view, which is not really possible in the timeframe that you're probably working with.
Use existing RFCs like RFC 5023 to have a shared understanding of what "edit"ing a resource means.
Lastly if none of the standard link relationships express the intent of the action, define your own application specific semantics that provide documentation for these application specific link relationships. That way clients of your service will have a shared understanding of these relations within the context of your application.
Below is an example that demonstrates and combines these approaches.
{"collections":
[{"collectionId":"5370a206b399c65f05a7c59e",
"name":"default",
"curies": [
{
"name": "sw",
"href": "http://swagger.io/rels/{rel}",
"templated": true
},
{
"name": "iana",
"href": "https://www.rfc-editor.org/rfc/rfc5023",
"templated": false
},
{
"name": "col",
"href": "http://localhost:9080/collections/{rel}",
"templated": false
}
],
"_links":{ [
"self":{
"href":"http://localhost:9080/collections/5370a206b399c65f05a7c59e"
},
"sw:operation":{
"href":"http://localhost:9080/api-docs/collections#delete"
},
"sw:operation":{
"href":"http://localhost:9080/api-docs/collections#search"
},
"sw:operation":{
"href":"http://localhost:9080/api-docs/collections#update"
},
"iana:edit":{
"href":"http://localhost:9080/collections/5370a206b399c65f05a7c59e"
},
"col:delete": {
"href":"http://localhost:9080/collections/5370a206b399c65f05a7c59e"
}
]}
}, ...]}
So, from most generic to most specific, the solution (in that order) is:
The iana-qualified links have a specific meaning; in this case "edit" has a very specific meaning that every RESTful client can implement. This is a generalized link type.
The sw-qualified link relations have a special meaning as well: they imply that the href deep-links to operations in the swagger API documentation.
The col qualified links are application specific links that only your application knows about.
I know this doesn't answer your question precisely, and the tools in this space are still evolving. Hope this helps.
DISCLAIMER: I'm one of the core maintainers of springfox which is a spring integration solution that makes it easy to provide swagger service descriptions. So any feedback on how you'd like it to solve this problem is very welcome!

Connection String vs Data Service call

I have 2 separate MVC 3 websites (A & B), both with their own SQL Azure databases, which may or may not be on the same server. Both are using Code First Entity Framework and will be deployed to Windows Azure.
Website A is considered the master website and database. This holds data of the clients using our software along with usernames and passwords. I want website B to connect to website A's database when user logs in or registers. Website B will also need to hit website A's database in order to get some of the client's data after the user is logged in.
Right now I just have this one website B hitting website A's database, but in the future I will have more of these websites like website B hitting the main database for the same reasons.
My question is what is the best way to send and receive data between these smaller websites and the master database?
At first I was just using two connection strings in website B with two different contexts (one for each DB). I liked this because the object types all flowed together and there wasn't any converting to do. However, I wasn't sure if this was the best and most secure way to go.
Another option I have been looking at is OData services. I do like the idea of having everything separated and just calling the service when needing data from the master database. The issue I am having, though, is transferring the data from the service into my model's objects. I am having to do nasty things like this foreach statement:
public ActionResult GetMovies()
{
    var ctx = new MovieODataService.MovieContext(new Uri("http://localhost:54274/MovieService.svc/"));
    DataServiceQuery<MovieODataService.Movie> query = ctx.Movies;
    var response = query.Execute() as QueryOperationResponse<MovieODataService.Movie>;

    var model = new MovieModel();
    model.Movies = new List<Movie>();

    foreach (var item in response)
    {
        model.Movies.Add(new Movie
        {
            Title = item.Title,
            ReleaseDate = item.ReleaseDate
        });
    }

    return View(model);
}
I am also open to any other suggestions. Thanks in advance!
OData is great for a lot of things, but where it really shines is in exposing rich queryability over (preferably) schematized information. That doesn't really feel like a great fit for authentication calls.
Have you looked at the more traditional authentication protocols, such as OAuth? That seems like a much better fit for what you're trying to achieve.
