I want to consume data from a GraphQL API.
How can I achieve that? I am new to web APIs and any help is appreciated.
Assuming the API you want to consume uses HTTP, you should be able to use curl, wget, Charles, Postman or even just the URL bar in a browser to make a request.
To write your first query, you can start with the following:
query theNameOfMyQuery {
}
Now that you have a named query, you can start populating it with whatever fields your GraphQL server is exposing. For a blog example, you might have something like this:
query theNameOfMyQuery {
  posts {
    title
    author
  }
}
Now, to turn that into something you can request, all you need to do is URL-encode it and add it to your URL. A typical URL looks like this:
https://www.someserver.com/?query=...&variables=...
So for the above example, the full request URL would be:
https://www.someserver.com/?query=query%20theNameOfMyQuery%20%7B%0D%0A%20%20posts%20%7B%0D%0A%20%20%20%20title%0D%0A%20%20%20%20author%0D%0A%20%20%7D%0D%0A%7D
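For example, here is a minimal Ruby sketch of that request using only the standard library. The endpoint is the placeholder from above; swap in your real GraphQL URL, and note that some servers only accept POST with a JSON body.

require 'net/http'
require 'json'
require 'cgi'

# The same blog query as above
query = <<~GRAPHQL
  query theNameOfMyQuery {
    posts {
      title
      author
    }
  }
GRAPHQL

# URL-encode the query and append it as the ?query= parameter
uri = URI("https://www.someserver.com/?query=#{CGI.escape(query)}")
response = Net::HTTP.get_response(uri)
puts JSON.parse(response.body)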
Some Resources:
Evolution of API Design - this video explains some of the concepts of GraphQL and why it exists.
howtographql.com - This is an amazing set of tutorials for every implementation you could imagine
I'm trying to create a small app that displays some simple visualizations from data indexed on Elasticsearch (on an AWS managed Elasticsearch service).
Since, to the best of my knowledge, the access control that AWS offers over its ES service is based on allowing specific HTTP verbs (GET, POST, etc.), I'm granting this app "read only" permissions, i.e. only GET and HEAD, to simplify my life and the ES admin's.
However, I see that for its search API, ES exposes a GET endpoint that works with query string parameters, and a POST endpoint that works with a JSON based "Query DSL". This DSL seems to be the preferred method in all examples I have seen online and in the books.
Given the predominance of the Query DSL throughout the documentation, I was wondering:
Does the Query DSL expose functionality that standard query string parameters don't, or are they functionally equivalent?
Does the POST search endpoint result in any data actually being written, or is it only a workaround to allow sending JSON as a query, even if that breaks a little with REST conventions?
As per the docs:
You can use query parameters to define your search criteria directly in the request URI, rather than in the request body. Request URI searches do not support the full Elasticsearch Query DSL, but are handy for testing.
The GET behavior is slightly confusing, but even Kibana sends a POST in the background when you perform a GET with a body. If you have to use GET, some query results might be unexpected. What's your exact use case? Which queries are we talking about?
FYI more useful info is here and here.
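To make the difference concrete, here is a rough Ruby sketch of both styles. The endpoint, index and field names are made up, and AWS request signing is ignored.

require 'net/http'
require 'json'

host = "https://my-es-domain.example.com"  # hypothetical ES endpoint

# 1. URI search: the whole query lives in the query string, so a plain GET works
get_uri = URI("#{host}/my-index/_search?q=title:elasticsearch&size=10")
puts Net::HTTP.get(get_uri)

# 2. Query DSL: the query is a JSON document sent in the request body (typically POST)
post_uri = URI("#{host}/my-index/_search")
body = { query: { match: { title: "elasticsearch" } }, size: 10 }.to_json
puts Net::HTTP.post(post_uri, body, "Content-Type" => "application/json").body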
I'm looking for the correct API for the events that show up in a regular Google Search, the ones that are structured (with name, datetime, location).
Any help or guidance is appreciated.
I have tried the Custom Search API with no luck, and also the Calendar API (which seems to require a calendar ID, and is geared more toward personal calendars or targeted public ones).
We've actually just made an API to scrape the Google event results. You can query it directly like this:
https://serpapi.com/search.json?engine=google_events&q=Events+in+Austin
Or if you are using Ruby, you can do something like this:
require 'google_search_results'
params = {
  engine: "google_events",
  q: "Events in Austin",
}
client = GoogleSearchResults.new(params)
events_results = client.get_hash[:events_results]
Some documentation: https://serpapi.com/google-events-api
I had a quick look - while I didn't find a fully programmatic API yet, here are two things that can get you started:
How to search the events page directly: use the following URL schema: https://www.google.com/search?q=cool+conferences&oq=cool+conferences&ibp=htl;events&rciv=evn - replacing "cool+conferences" with any string you like - this lets you create dynamic URLs for event searches (see the small sketch after these two points).
How to access event metadata for a given page - Google is pushing a standard to structure data on webpages to support "smart" searches such as for events. They are using a data structure called JSON-LD. More details. If you want to read such metadata from a webpage, here is one scraper I have found that does that - extruct (though I didn't get a chance to test it yet).
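For the first point, a tiny Ruby sketch of building those URLs dynamically (the helper name is just illustrative):

require 'cgi'

# Build an event-search URL for an arbitrary query string
def google_events_url(query)
  q = CGI.escape(query)  # encodes spaces as '+', which Google accepts
  "https://www.google.com/search?q=#{q}&oq=#{q}&ibp=htl;events&rciv=evn"
end

puts google_events_url("cool conferences")
puts google_events_url("concerts in Berlin")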
Hope this helps :)
We are creating a REST API using OpenRasta and, apart from the regular GET, POST, PUT and DELETE on all resources, we are also providing GET on resources with plural names. So a consumer of the API can GET, POST, PUT and DELETE on User and also perform GET on Users, which will return List<User>. Now we want clients to be able to filter and sort it by its properties, and to support paging for showing data in paged tabular formats.
I looked at the WCF Data Services Toolkit home page and it looks like it could be useful, but after reading the blog posts and the Getting Started page, I couldn't understand how I can use it to solve my problem in OpenRasta.
Or is there anything else simpler that I can do?
OpenRasta doesn't support something like OData for that functionality, mainly because it leads to very un-RESTful systems.
If /users is "the list of users", then it is a different resource than /users/1 (the first page of users) or /users/byName/1 (the first page of users ordered by name).
You can of course implement all this easily by registering a URI that has query parameters, as those are optional:
.AtUri("/users?page={page}&filter={filter}")
And your handler can look like:
public List<User> Get(int page = 0, string filter = null) { ... }
I am trying to make a RESTful API and have some functions which need credentials. For example, say I'm writing a function which finds all nearby places within a certain radius, but only authorised users can use it.
One way to do it is to send it all using GET like so:
http://myapi.heroku.com/getNearbyPlaces?lon=12.343523&lat=56.123533&radius=30&username=john&password=blabla123
but obviously that's the worst possible way to do it.
Is it possible to instead move the username and password fields and send them as POST variables over SSL, so the URL will only look like this:
https://myapi.heroku.com/getNearbyPlaces?lon=12.343523&lat=56.123533&radius=30
and the credentials will be sent encrypted.
How would I then, in Sinatra and Ruby, properly get at the GET and POST variables? Is this The Right Way To Do It? If not, why not?
If you are really trying to create a RESTful API instead of some URL endpoints which happen to speak some HTTP dialect, you should stick to GET. It's even in your path ("getNearbyPlaces"), so you seem to be pretty sure it's a GET.
Instead of trying to hide the username and password in GET or POST parameters, you should instead use Basic authentication, which was invented especially for that purpose and is universally available in clients (and is available using convenience methods in Sinatra).
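For example, a minimal Sinatra sketch using Rack's built-in Basic auth middleware (the credential check is just a placeholder; look users up in your database instead):

require 'sinatra'

use Rack::Auth::Basic, "Restricted Area" do |username, password|
  # placeholder check - replace with a real lookup
  username == 'john' && password == 'blabla123'
end

get '/places/nearby' do
  # only reached when the Authorization header carries valid credentials
  'only authorised users get this far'
end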
Also, if you are trying to use REST, you should embrace the concept of resources and resource collections (which is implied by the R and E of REST). So you have a single URL like http://myapi.heroku.com/NearbyPlaces. If you GET there, you gather information about that resource; if you POST, you create a new resource; if you PUT, you update an existing resource; and if you DELETE, well, you delete it. What you should do first is structure your object space into these resources and design your API around them.
Possibly, you could have a resource collection at http://myapi.heroku.com/places. Each place, as a resource, has a unique URL like http://myapi.heroku.com/places/123. New places can be created by POSTing to http://myapi.heroku.com/places. And nearby places could be gathered by GETting http://myapi.heroku.com/places/nearby?lon=12.343523&lat=56.123533&radius=30. That call could return an array of URLs to nearby places, e.g.
[
"http://myapi.heroku.com/places/123",
"http://myapi.heroku.com/places/17",
"http://myapi.heroku.com/places/42"
]
If you want to be truly discoverable, you might also embrace HATEOAS, which constrains REST semantics in a way that allows API clients to "browse" through the API as a user with a browser would. To allow this, you use hyperlinks inside your API which point to other resources, kind of like in the example above.
The params that are part of the URL (namely lon, lat and radius) are known as query parameters, while the user and password information that you want to send in your form are known as form parameters. In Sinatra, both of these types of parameters are made available in the params hash of a handler.
So in Sinatra you would be able to access your lon parameter as params[:lon] and the user parameter as params[:user].
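A tiny sketch of both (the routes are just illustrative):

require 'sinatra'

get '/places' do
  # query parameters, e.g. GET /places?lon=12.34&lat=56.12&radius=30
  "lon=#{params[:lon]} lat=#{params[:lat]} radius=#{params[:radius]}"
end

post '/sessions' do
  # form parameters sent in the POST body, e.g. user=john&password=...
  "hello #{params[:user]}"
end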
I suggest using basic or digest authentication and a plain GET request. In other words, your request should be "GET /places?lat=x&lon=x&radius=x" and you should let HTTP handle the authentication. If I understand your situation correctly, this is the ideal approach and will certainly be the most RESTful solution.
As an aside, your URI could be improved. Having verbs ("get") and query-like adjectives ("nearby") in your resource names is not really appropriate. In general, resources should be nouns (i.e. "places", "person", "books"). See the example request I wrote above; "get" is redundant because you are using a GET request and "nearby" is redundant because you are already querying by location.
I'm making a pretty standard AJAXy (well, no XML actually) web page. The browser makes a bunch of API queries that return JSON to run the site. The problem is, I need to add to the API interface each time the page needs to do something new. The new API interface is usually little more than a database query followed by mapping the returned objects to JSON.
What I'd like to do is get rid of all that server-side duplication and just have the page make database requests itself (using the model interface), but in a way that is safe (i.e. read-only queries). I think this would amount to an interface for constructing Q objects using JSON or something like that, then sending that up to the server, running the query, and returning the results. Before I go making my own half-broken architecture for this, I'm wondering if this has already been done well. Also, is this even the best way to go about eliminating this duplication?
Thanks
Search multiple fields of django model without 3rd party app
Django SQL OR via filter() & Q(): Dynamic?
Generate a django queryset based on dict keys
Just replace operator.or_ with operator.and_ where appropriate.