API Blueprint - Use Data Structures in response, without Attributes

I'm using Apiary to mock out a new API.
I'm trying to avoid having to write out all the JSON responses over and over again. If I do that using + Attributes (user), it auto-generates a bunch of attributes blocks in the machine panel, which is super confusing in my mind (especially when you have multiple responses).
The resulting documentation looks way better if you write out the JSON request/response blocks manually.
Is there a way to store Request/Response objects as a Data Structure? Or maybe as a Model?
I'd love to be able to do something like this:
## Users [/auth]

A user object contains these attributes.

+ Attributes (user) <!-- I like this here -->

### Refresh a token for a user [POST /auth/refresh]

+ Request (application/json)

    + Headers

            Authorization: Bearer jsonWebToken

+ Response 200 (application/json)

    + Body

            {
                "data": [
                    (user) <!-- I wish this was the user data structure as JSON -->
                ],
                "meta": {
                    "access_token": "jsonWebToken",
                    "token_type": "Bearer",
                    "expires_in": 3600
                }
            }
# Data Structures

## user (object)

+ id: 123 (number)
+ email: drew@funkhaus.us

Note: The user object is 30 attributes long in real life.

Unfortunately that's not a supported scenario: you can't put data structures into your JSON payloads.
So if I understand correctly, using Attributes is fine, but you would like to hide them in the documentation. Could you confirm that?
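For reference, the supported way to reuse the user structure in a response is to describe the payload with MSON attributes instead of a hand-written JSON body. A minimal sketch of what that could look like (the meta values are written inline as samples, mirroring the question's example):

+ Response 200 (application/json)

    + Attributes
        + data (array[user])
        + meta
            + access_token: jsonWebToken
            + token_type: Bearer
            + expires_in: 3600 (number)

The trade-off is exactly the one discussed above: the docs then show a generated attributes block rather than your hand-written JSON.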

Related

GraphQL Request: Determine requested resource directly out of request

Unlike REST, GraphQL has only one endpoint, usually called /graphql.
I have had good experiences with REST by outsourcing the authorisation to a separate upstream service (e.g. to a proxy like Nginx / Envoy in combination with Open Policy Agent) and using the path and the HTTP verb for the decision. For example, the GET /billing route could only be used by a user with the JWT roles claim "accountant".
Now I am looking for a way to adapt this with GraphQL.
The only possibility I have found is to interpret the query in the request body, e.g.:
body: {
    query: 'query {\r\n cats {\r\n id,\r\n name\r\n }\r\n}\r\n'
}
However, this seems quite complex and error-prone, as a lot of knowledge and logic would have to be pushed into the proxy layer, especially since the proxies (or OPA and other authorisation solutions) don't necessarily have any GraphQL capabilities.
Is there a better way to reliably identify which resolver / query / mutation / entity is being requested in a GraphQL request? Headers and other enrichments set by the client are not suitable here, right?
I would highly appreciate any approach!
That does indeed look error prone. The GraphQL docs recommend moving authorization checks to the business logic layer. Quoting their example here for completeness:
// Authorization logic lives inside postRepository
var postRepository = require('postRepository');

var postType = new GraphQLObjectType({
  name: 'Post',
  fields: {
    body: {
      type: GraphQLString,
      resolve: (post, args, context, { rootValue }) => {
        // Delegate the authorization decision to the business logic layer
        return postRepository.getBody(context.user, post);
      }
    }
  }
});
So rather than trying to parse the query, the authz check is done in the resolver. Some discussion on using OPA with GraphQL can be found in this issue from the OPA contrib repo.
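That said, if you do want a gateway-level signal about what is being requested, one option is to parse the query with the graphql reference library and extract the top-level field names before handing them to the policy engine. A sketch, assuming Node.js with the graphql package available (the policy hand-off itself is left out):

const { parse } = require('graphql');

// Returns the top-level field names (e.g. ['cats']) requested by a query.
// Both anonymous shorthand queries and named operations parse into
// OperationDefinition nodes with a selectionSet.
function topLevelFields(queryString) {
  const document = parse(queryString);
  return document.definitions
    .filter((def) => def.kind === 'OperationDefinition')
    .flatMap((def) => def.selectionSet.selections)
    .filter((sel) => sel.kind === 'Field')
    .map((sel) => sel.name.value);
}

console.log(topLevelFields('query { cats { id, name } }')); // ['cats']

This only tells you which root fields are requested, not which nested resolvers will run, which is why the resolver-level check above remains the more robust place for authorization.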

Querydict not recognizing json array in django

Django and Django REST Framework are not recognizing the array in the following JSON object:
{
    "datum": [
        {'proposed': '20/sep/2018', "pk": "475"},
        {'proposed': '20/sep/2018', "pk": "517"}
    ]
}
When I do a print(request.data) this is the output:
<QueryDict: {'{"datum":[{"proposed_submission_date":"20/Sep/2018","pk":"475"},{"proposed_submission_date":"20/Sep/2018","pk":"512"}]}': ['']}>
and when I do a print(request.data.keys()) I get:
{"datum":[{"proposed_submission_date":"20/Sep/2018","pk":"475"},{"proposed_submission_date":"20/Sep/2018","pk":"512"}]}
You can see that it's taking the JSON string as the key, and not assigning "datum" as the key.
Do I need to do something else with the JSON string?
I'm doing an AJAX PUT to the Django rest framework backend.
The fact that you see a QueryDict rather than just a dict is a sign that you sent your data as application/x-www-form-urlencoded or multipart/form-data.
Ensure you send the request with an application/json content type and it should be just fine.
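For instance, with jQuery (the endpoint URL here is hypothetical), the fix is to set contentType explicitly and serialize the payload yourself:

$.ajax({
  url: '/api/proposals/',          // hypothetical endpoint
  method: 'PUT',
  contentType: 'application/json', // makes DRF pick the JSON parser
  data: JSON.stringify({
    datum: [
      { proposed: '20/sep/2018', pk: '475' },
      { proposed: '20/sep/2018', pk: '517' }
    ]
  }),
  success: function (response) {
    console.log(response);
  }
});

Without contentType, jQuery defaults to application/x-www-form-urlencoded, which produces exactly the QueryDict behaviour shown above.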

How to test and automate APIs implemented in GraphQL

In our company, we are creating an application by implementing GraphQL.
I want to test and automate these APIs for CI/CD.
I have tried REST-assured, but since GraphQL queries are different from JSON,
REST-assured doesn't have proper support for them, as discussed here.
How can we send a GraphQL query using REST-assured?
Please suggest the best approach to test and automate GraphQL APIs,
and tools which can be used for testing and automation.
So I had the same issue, and I was able to make it work in a very simple way.
I'd been struggling for a while trying to make this GraphQL request with REST-assured in order to validate the response (amazing how scarce the info about this is), and since yesterday I was able to make it work, I thought sharing here might help someone else.
What was wrong? Purely copying and pasting my GraphQL request (which is not JSON) into the request body was not working. I kept getting the error "Unexpected token t in JSON at position". So I thought it was because GraphQL is not JSON, or some validation done by REST-assured. With that in mind I tried to convert the request to JSON, imported libraries, and a lot of other things, but none of them worked.
My GraphQL query request:
String reqString = "{ trade { orders { ticker } }}\n";
How did I fix it? By using Postman to format my request. Yes, I just pasted it into the QUERY window of Postman and then clicked the code button on the right side (fig. 1). That allowed me to see my request in a different format, a format that works in REST-assured (fig. 2). PS: Just remember to configure Postman, which I've pointed out with red arrows.
My GraphQL query request, FORMATTED:
String reqString = "{\"query\":\"{ trade { orders { ticker } }}\\r\\n\",\"variables\":{}}";
Fig. 1: the Postman QUERY window. Fig. 2: the generated code snippet.
Hope it helps you out, take care!
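The underlying point is that a GraphQL HTTP request is an ordinary JSON document with a query key (and optionally variables), so any JSON serializer can build it for you instead of hand-escaping quotes. A quick sketch in JavaScript to illustrate the envelope (the endpoint URL is made up):

// A GraphQL HTTP request is just a JSON POST with {query, variables}.
const payload = JSON.stringify({
  query: '{ trade { orders { ticker } } }',
  variables: {}
});

fetch('https://example.com/graphql', {   // hypothetical endpoint
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: payload
})
  .then((res) => res.json())
  .then((data) => console.log(data));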
You can test it with apitest:
{
  vars: { #describe("share variables") #client("echo")
    req: {
      v1: 10,
    }
  },
  test1: { #describe("test graphql")
    req: {
      url: "https://api.spacex.land/graphql/",
      body: {
        query: `\`query {
          launchesPast(limit: ${vars.req.v1}) {
            mission_name
            launch_date_local
            launch_site {
              site_name_long
            }
          }
        }\`` #eval
      }
    },
    res: {
      body: {
        data: {
          launchesPast: [ #partial
            {
              "mission_name": "", #type
              "launch_date_local": "", #type
              "launch_site": {
                "site_name_long": "", #type
              }
            }
          ]
        }
      }
    }
  }
}
Apitest is a declarative API testing tool with a JSON-like DSL.
See https://github.com/sigoden/apitest

AWS AppSync resolver cache layer?

What is a common pattern to have AWS AppSync resolvers cache their output?
I'm writing an API that fronts data that will not change at all over time. The API returns the contents of books (title, author, chapters, etc.).
My initial idea was to have the resolver request some JSON payload from CloudFront. If the requested document is not in CloudFront, CloudFront would trigger a Lambda function, which would know how to fetch the JSON document (from a database), then put the payload in CloudFront. This seems weird conceptually, but it would solve the caching problem.
Example
const bookQuery = `{
  bookById(bookID: "468c95") {
    bookID
    title
    author
    chapters {
      title
      text
    }
  }
}`;

const book = query(bookQuery);
// book => {
//   bookID: "468c95",
//   title: "AppSync for Normal People",
//   author: null,
//   chapters: [
//     {
//       title: "Chapter 1: Dawn of Men",
//       text: [
//         "It was the best of times, it was the worst of times.",
//         "..."
//       ]
//     },
//     { ... }
//   ]
// }
In other words, calling the fictitious query method will trigger some resolver in AppSync. That resolver will absolutely always return the same data. So why not have the data that the resolver works with (I guess you can view that as the input to the resolver) be cached in CloudFront, so it can be served from memory instead of having to hit some backend storage (like a database) or trigger a Lambda?!
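To make the proposed flow concrete, here is a minimal sketch of the Lambda side of that idea, assuming the function sits behind CloudFront with a REST-style path like /books/{bookID} (all names here, including fetchBookFromDatabase, are hypothetical):

// Hypothetical origin Lambda for the caching idea described above:
// CloudFront serves /books/{bookID} from its cache; on a miss the
// request reaches this handler, and the response gets cached.
const fetchBookFromDatabase = require('./bookStore'); // hypothetical module

exports.handler = async (event) => {
  const bookID = event.pathParameters.bookID;
  const book = await fetchBookFromDatabase(bookID);

  return {
    statusCode: 200,
    headers: {
      'Content-Type': 'application/json',
      // The data never changes, so let CloudFront cache it aggressively.
      'Cache-Control': 'public, max-age=31536000'
    },
    body: JSON.stringify(book)
  };
};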

Twitter typeahead.js remote and search on client

As I understand it, typeahead.js has three ways of fetching data:
Local: hardcoded data
Prefetch: load data from a local JSON file or a URL
Remote: send a query to the backend, which responds with matching results
I want to fetch all data from the backend once and then process it on the client.
The data my server responds with has the following structure:
[
    {"id": 2, "courseCode": "IDA530", "courseName": "Software Testing", "university": "Lund University"},
    {"id": 1, "courseCode": "IDA321", "courseName": "Computer Security", "university": "Uppsala University"},
    ...
]
I want it to search on all fields in each entry (id, courseCode, courseName, university).
I want to do more on the client while still fetching only once per user (instead of on every keystroke). I have probably misunderstood something here, so please correct me.
You should re-read the docs. Basically there are two things you need:
Use the prefetch object to bring all the data from the backend to the client only once (that's what you are looking for, if I understand correctly).
Use a filter function to transform those results into datums. The returned datums can have a tokens field, which is what typeahead searches by, and it can be built from all of your data.
Something along the lines of:
$('input.twitter-search').typeahead([{
  name: 'courses',
  prefetch: {
    url: '/url-path-to-server-ajax-that-returns-data',
    filter: function(data) {
      var retval = [];
      for (var i = 0; i < data.length; i++) {
        retval.push({
          value: data[i].courseCode,
          // tokens are what typeahead matches the user's input against
          tokens: [String(data[i].id), data[i].courseCode, data[i].courseName, data[i].university],
          courseCode: data[i].courseCode,
          courseName: data[i].courseName,
          template: '<p>{{courseCode}} - {{courseName}}</p>'
        });
      }
      return retval;
    }
  }
}]);
