Referencing a property inside a defined data structure in API Blueprint/MSON

I've been working with the API Blueprint format for designing an API and have had a lot of fun with it. While declaring my data structures, I ran into a code-reuse (or model-reuse) issue.
According to the documentation, a Response has to be declared like this:
+ Response 200 (application/json)
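In context, the response sits under an action; a minimal sketch (the action and body here are just for illustration):

# GET /status
+ Response 200 (application/json)

        { "status": "ok" }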
However, when multiple people work on the document, I don't want to have to tell them which return code to use: we have the codes defined, but they're numbers, so people forget them. To avoid going back and forth, I instead declared this, with the idea that I could reference one of its properties:
# Data Structures
## HttpCode (object)
+ success: 200 - Request processed successfully
+ not_found: 404 - Content requested not found
+ forbidden: 403 - Access to content is forbidden
I would reference it as such:
+ Response (HttpCode.success) (application/json)
...
It obviously does not work, and I cannot find anything in the docs that pertains to what I want to do. Maybe I've missed it.
So how do you do it? Is it possible?
Thanks!

This is not possible. Their representative on GitHub closed my issue:
https://github.com/apiaryio/api-blueprint/issues/411

Related

What is the difference between request()->json() and request()->input()

What is the difference between request()->json() and request()->input() in Laravel? Is there any difference in functionality?
They are almost the same, with a slight difference. $request->input() is smart enough to pull user data from GET, POST, or JSON, yet Laravel still offers $request->json(). There are two reasons you might prefer $request->json():
1) You might want to be more explicit to other programmers on your project about where you're expecting the data to come from.
2) If the POST doesn't have the correct application/json headers, $request->input() won't pick it up as JSON, but $request->json() will (see the sketch below).
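A minimal sketch of the difference in a hypothetical controller (the class, route, and field names are made up for illustration):

use Illuminate\Http\Request;

class DemoController extends Controller
{
    // Assume a POST request whose raw body is {"name": "Ada"}
    public function store(Request $request)
    {
        // input() merges query string and form data, and only reads the JSON
        // body when the request carries a JSON Content-Type:
        $fromInput = $request->input('name');

        // json() decodes the raw body as JSON regardless of the Content-Type:
        $fromJson = $request->json('name');
    }
}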
Basically they have the same functionality; the only difference is the naming. When you see json() you know you are expecting JSON data, while with input() you may be expecting JSON data but also an HTTP GET or POST request.

Google Api Ruby Client to return the actual HTTP response, not the helper object

Is there an easy way to ask the Google API Ruby client to just give you back the stock HTTP response, rather than perform the lovely, but slightly limiting, translation into one of their Ruby representable objects?
e.g.
response = Gmail.client.get_user_message("me", id)
=> #<Google::Apis::GmailV1::Message
response = Gmail.client.list_user_messages("me")
=> #<Google::Apis::GmailV1::ListMessagesResponse
but
response = Gmail.client.delete_user_message("me", id)
=> nil # successfully deleted
Now that's all fine and dandy, except that sometimes I just want to know what sort of response is going to come back, i.e. an HTTP response with maybe some JSON in the body. And then I'll worry about what I do with it...
I can take the response and use response.to_json to get the JSON body that would have come back (though I still won't have the response code, and I need to KNOW that it's one of those objects first).
The client library is definitely getting that response; it's just converting it into these objects before it lets me see it. And if I don't know that it's a Google object (and not nil), I can't call to_json consistently...
Any ideas other than second guess what google is going to send me back?
(I should note that this has come about while trying to move a library from their 0.8 API to their 0.9 API, so call me a cynic if you must, but my faith that Google won't make breaking changes to those returned objects is at a low ebb...)
As far as I know, it is possible to ask the server to send only the fields you really need and get a partial response instead of the default full response as mentioned in Performance Tips.
However, I suggest that you please check the documentation for the specific API you are using to see if the field you're looking for is currently supported. For the Gmail API, you may go through Working with partial resources.
Here are the two types of partial requests that you can use:
Partial response: A request where you specify which fields to include in the response (use the fields request parameter).
Patch: An update request where you send only the fields you want to change (use the PATCH HTTP verb).
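For illustration, a partial-response request with the Ruby client might look like the sketch below. This assumes the generated Gmail methods accept the fields: keyword (please verify against the docs for your client version); credentials and message_id are placeholders:

require "google/apis/gmail_v1"

gmail = Google::Apis::GmailV1::GmailService.new
gmail.authorization = credentials # auth setup omitted

# Ask the server to return only the id and snippet fields of the message
message = gmail.get_user_message("me", message_id, fields: "id,snippet")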
Hope that helps!

Setting noindex on Amazon S3 objects

We have some publicly shared S3 files that we want to make sure won't be indexed by Google. I can't seem to find any documentation on how to do this. Is there a way to set a "noindex" x-robots-tag response header on individual S3 objects?
(We're using the Ruby AWS client)
There does not appear to be a way to do this.
Only certain headers from an S3 PUT object request are documented as being returned when the object is fetched.
http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html
Anything else you send appears to be simply disregarded, as long as it doesn't actually invalidate the request.
Actually, that's what I thought before researching this, and it's almost true.
The documentation there seems incomplete, and elsewhere it suggests that the following request headers, if sent with the upload, will appear in the download:
Cache-Control
Content-Disposition
Content-Encoding
Content-Type
x-amz-meta-*
Other headers are listed at the latter link, but some of them, like Expect, wouldn't make sense on a GET request, so they logically wouldn't appear.
So far, this is all consistent with my experience with S3.
If you send a random but not-invalid header with your request, it's ignored. Example:
X-Foo: bar
S3 seems to accept this on upload, but discards it (presumably doesn't store it)... downloading the object does not return the X-Foo header.
But X-Robots-Tag appears to be an undocumented exception to this.
Uploading a file with X-Robots-Tag: noindex (for example) does indeed result in the same header and value being returned with the object when you GET it.
Unless somebody can cite the documentation that explains why this works, we're operating in distinctly undocumented territory.
But, if you're interested in going there, the simple answer appears to be that you just add this header to the HTTP PUT request you send to the REST API when uploading the object.
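A sketch of what that raw REST upload could look like (the bucket, key, and authorization line are placeholders):

PUT /hello.txt HTTP/1.1
Host: my-bucket.s3.amazonaws.com
Content-Type: text/plain
X-Robots-Tag: noindex
Authorization: AWS ...

hello world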
"Not so fast," you say, "I'm using the Ruby SDK." Indeed. The AWS Ruby client seems to be too "helpful" to let you get away with this, at least, not easily. The docs there show how to add "metadata" --
:metadata (Hash) — A hash of metadata to be included with the object. These will be sent to S3 as headers prefixed with x-amz-meta. Each name, value pair must conform to US-ASCII.
Well, that's not going to work, because you'd get x-amz-meta-x-robots-tag.
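For instance, with a v2-style Ruby client, the metadata option would come out prefixed; a sketch (bucket and key are placeholders):

require "aws-sdk"

s3 = Aws::S3::Client.new
# The SDK prefixes every metadata key with x-amz-meta-, so this is stored and
# returned as "x-amz-meta-x-robots-tag", which crawlers ignore:
s3.put_object(
  bucket: "my-bucket",
  key: "hello.txt",
  body: "hello world",
  metadata: { "x-robots-tag" => "noindex" }
)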
How do you set other headers in the upload? Every other header you'd normally set is an element of the options hash, like :cache_control, which turns into Cache-Control: in the upload request. Unless they're blindly applying the keys from that hash to the upload transaction (which would be terrible design combined with excellent luck), you may not have a straightforward way to get there from here. I can't be much more specific, because the only thing I really know about Ruby is the same thing I know about Java -- from what I've seen of it, I don't like it. :)
But X-Robots-Tag does appear to be a custom header S3 supports, to some extent, without clear documentation of that fact. It's, at least, accepted by the REST API.
Failing the above, you can manually add this header to the metadata in the S3 console after uploading the object. (Note, X-Foo: Bar doesn't work from the S3 console, either -- it's silently discarded, with no error -- but X-Robots-Tag: works fine).
You can also, of course, put a publicly-readable robots.txt file (with the appropriate directives in it) in the root of the bucket. Depending on your content mix, path hierarchy, and other factors, that isn't (perhaps) as simple as selectively setting headers, but if the entire bucket is comprised of information you don't want indexed, it should easily accomplish what you want, since content should not be indexed if disallowed in robots.txt, even when a search spider follows a link to it from another site -- every domain's (and subdomain's) robots.txt file stands alone.
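For example, a minimal robots.txt at the bucket root that blocks all crawlers from the entire bucket:

User-agent: *
Disallow: /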
@Michael - sqlbot is correct. The SDKs don't support it by default and it won't show in the AWS Console, but if you set it directly with the REST API it works. For those who don't want to figure out the REST API and its authentication method, I was able to modify the node.js aws-sdk to support this feature.
Amazon stores the method params configuration and validation in a large JSON file: apis/s3-2006-03-01.min.json. I guess that the other SDKs may implement their validation in the same way.
Go to the "PutObject" command and, under "input.members", add a new parameter "XRobotsTag". Set its "location" to "header" and its "locationName" to "X-Robots-Tag":
"XRobotsTag": {
"location": "header",
"locationName": "X-Robots-Tag"
}
Your local aws-sdk is now configured to support X-Robots-Tag on your putObject requests. In node.js this would look like this:
s3.putObject({
  ACL: "public-read",
  Body: "hello world",
  Bucket: "my-bucket",
  CacheControl: "public, max-age=31536000",
  ContentType: "text/plain",
  Key: "hello.txt",
  XRobotsTag: "noindex, nofollow"
}, function(err, resp){});

Right way for blog URLs

I'm building a blog application, and I want the main page to have this URL: http://localhost/blog, and posts to have URLs like this: http://localhost/blog/post-slug-name. Now I'm trying to understand what the other actions should look like. Should it be something like this?
http://localhost/blog/post-slug-name/edit (GET/POST)
http://localhost/blog/post-slug-name (DELETE)
http://localhost/blog/create_new (GET/POST)
But I don't like having the "special case" create_new (its pattern is the same as a regular post slug). What is the common way to do this?
If you have full control over how your server maps HTTP requests, you could use a POST to http://localhost/blog/post-slug-name/create to create a post using that slug name, returning a 201 Created status on success and a 409 Conflict if the page already exists. The advantage of using a single create method like this is that it can handle conflict avoidance transparently and obviously.
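A sketch of that exchange (slug and host are placeholders):

POST /blog/my-first-post/create HTTP/1.1
Host: localhost

HTTP/1.1 201 Created

and, if a post with that slug already exists:

HTTP/1.1 409 Conflict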

GET vs POST in AJAX?

Why are there GET and POST requests in AJAX as it does not affect page URL anyway? What difference does it make by passing sensitive data over GET in AJAX as the data is not getting reflected to page URL?
You should use the proper HTTP verb according to what you require from your web service.
When dealing with a Collection URI like: http://example.com/resources/
GET: List the members of the collection, complete with their member URIs for further navigation. For example, list all the cars for sale.
PUT: Meaning defined as "replace the entire collection with another collection".
POST: Create a new entry in the collection where the ID is assigned automatically by the collection. The ID created is usually included as part of the data returned by this operation.
DELETE: Meaning defined as "delete the entire collection".
When dealing with a Member URI like: http://example.com/resources/7HOU57Y
GET: Retrieve a representation of the addressed member of the collection expressed in an appropriate MIME type.
PUT: Update the addressed member of the collection or create it with the specified ID.
POST: Treats the addressed member as a collection in its own right and creates a new subordinate of it.
DELETE: Delete the addressed member of the collection.
Source: Wikipedia
Well, as for GET, you still have the URL length limitation. Other than that, it is quite conceivable that the server treats POST and GET requests differently; thus the need to be able to specify which kind of request you're making.
Another difference between GET and POST is the way caching is handled in browsers. A POST response is never cached; a GET response may or may not be cached, based on the caching rules specified in your response headers.
Two primary reasons for having them:
GET requests have some pretty restrictive limitations on size; POST requests are typically capable of carrying much more information.
The backend may be expecting GET or POST, depending on how it's designed. We need the flexibility of doing a GET if the backend expects one, or a POST if that's what it's expecting.
It's simply down to respecting the rules of the HTTP protocol.
GET calls must be idempotent. This means that if you call one multiple times you will get the same result; it is not intended to change the underlying data. You might use it for a search box, etc.
POST calls are NOT idempotent. They are allowed to change the underlying data, so they might be used in a create method. If you call one multiple times, you will create multiple entries.
You normally send parameters to the AJAX script, and it returns data based on those parameters. It works just like a form with method="get" or method="post". When using the GET method, the parameters are passed in the query string; when using the POST method, the parameters are sent in the post body.
Generally, if your parameters have very few characters and do not contain sensitive information, you send them via the GET method. Sensitive data (e.g. a password) or long text (e.g. an 8000-character biography of a person) are better sent via the POST method.
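A minimal sketch of both styles with XMLHttpRequest (the /search endpoint and q parameter are made up for illustration):

var query = encodeURIComponent("tv sets");

// GET: the parameter travels in the query string
var get = new XMLHttpRequest();
get.open("GET", "/search?q=" + query);
get.send();

// POST: the parameter travels in the request body
var post = new XMLHttpRequest();
post.open("POST", "/search");
post.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
post.send("q=" + query);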
I mainly use the GET method with Ajax, and I haven't had any problems until now, except for the following:
Internet Explorer (unlike Firefox and Google Chrome) caches GET calls when the same GET values are used.
So polling with Ajax GET can keep showing the same results unless you change the URL with an irrelevant random number for each Ajax GET.
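A minimal cache-busting sketch (the /api/status endpoint is a placeholder; the parameter name is arbitrary):

// Append a throwaway timestamp parameter so IE treats each poll as a new URL
var xhr = new XMLHttpRequest();
xhr.open("GET", "/api/status?_=" + new Date().getTime());
xhr.send();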
Others have covered the main points (context/idempotency and size), but I'll add another: encryption. If you are using SSL and want to encrypt your input arguments, you need to use POST.
When you use the GET method in Ajax, only the value of each field is sent, not the format the content is in. For example, content in a textarea is just added to the URL in the GET case (without newline characters). That is not the case with the POST method.
