How to implement complex Web API queries in ASP.NET Core - asp.net-web-api

I'm new to web API design, so I've tried to learn best practices from these articles:
1. Microsoft REST API Guidelines
2. Web API Design: Crafting Interfaces that Developers Love, from Apigee
Apigee recommends that web API developers follow these guidelines to build better APIs. I need C# code implementing these recommendations in my Web API (in ASP.NET Core), which is the back end for native mobile apps and an AngularJS web site. I quote two of the recommendations here:
Sweep complexity behind the ‘?’
Most APIs have intricacies beyond the base level of a resource. Complexities can include many states that can be updated, changed, queried, as well as the attributes associated with a resource.
Make it simple for developers to use the base URL by putting optional states and attributes behind the HTTP question mark. To get all red dogs running in the park:
GET /dogs?color=red&state=running&location=park
Partial response allows you to give developers just the information they need.
Take for example a request for a tweet on the Twitter API. You'll get much more than a typical Twitter app often needs - including the name of the person, the text of the tweet, a timestamp, how often the message was re-tweeted, and a lot of metadata.
Let's look at how several leading APIs handle giving developers just what they need in responses, including Google, who pioneered the idea of partial response.
LinkedIn
/people:(id,first-name,last-name,industry)
This request on a person returns the ID, first name, last name, and the industry.
LinkedIn does partial selection using this terse :(...) syntax which isn't self-evident.
Plus it's difficult for a developer to reverse engineer the meaning using a search engine.
Facebook
/joe.smith/friends?fields=id,name,picture
Google
?fields=title,media:group(media:thumbnail)
Google and Facebook have a similar approach, which works well.
They each have an optional parameter called fields after which you put the names of the fields you want to be returned.
As you see in this example, you can also put sub-objects in responses to pull in other information from additional resources.
Add optional fields in a comma-delimited list
The Google approach works extremely well.
Here's how to get just the information we need from our dogs API using this approach:
/dogs?fields=name,color,location
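
To make both recommendations concrete, here is a minimal ASP.NET Core sketch of such a dogs endpoint, with optional filters behind the '?' and a ?fields= partial response. The Dog record, the in-memory list, and the reflection-based projection are illustrative assumptions, not a definitive implementation:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;
using Microsoft.AspNetCore.Mvc;

public record Dog(string Name, string Color, string State, string Location);

[ApiController]
[Route("dogs")]
public class DogsController : ControllerBase
{
    private static readonly List<Dog> Dogs = new(); // stand-in for a real data store

    // GET /dogs?color=red&state=running&location=park&fields=name,color,location
    [HttpGet]
    public IActionResult Get(string? color, string? state, string? location, string? fields)
    {
        IEnumerable<Dog> query = Dogs;

        // Optional states/attributes behind the '?': apply each filter only when present.
        if (!string.IsNullOrEmpty(color))    query = query.Where(d => d.Color == color);
        if (!string.IsNullOrEmpty(state))    query = query.Where(d => d.State == state);
        if (!string.IsNullOrEmpty(location)) query = query.Where(d => d.Location == location);

        if (string.IsNullOrEmpty(fields))
            return Ok(query); // no partial response requested: return full objects

        // Partial response: project each dog onto only the requested fields.
        var props = fields
            .Split(',', StringSplitOptions.RemoveEmptyEntries | StringSplitOptions.TrimEntries)
            .Select(f => typeof(Dog).GetProperty(f,
                BindingFlags.IgnoreCase | BindingFlags.Public | BindingFlags.Instance))
            .ToList();
        if (props.Any(p => p is null))
            return BadRequest("Unknown field in ?fields=");
        return Ok(query.Select(d => props.ToDictionary(p => p!.Name, p => p!.GetValue(d))));
    }
}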
Now I need C# code that handles these kinds of queries, or even more complex ones like this:
api/books?publisher=Jat&writer=tom&location=LA&fields=title,ISBN&$orderBy=location desc,writer&limit=25&offset=50
That way, web API users will be able to send any kind of request they want, with different filters, field selections, ordering, and paging, based on their needs.
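
Building on the dogs sketch above, ordering and paging can be layered on top. Here is one hedged way to do it for the books query, again with the Book record, the in-memory list, and the reflection-based sorting as assumptions rather than the only way to do it:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;
using Microsoft.AspNetCore.Mvc;

public record Book(string Title, string ISBN, string Publisher, string Writer, string Location);

[ApiController]
[Route("api/books")]
public class BooksController : ControllerBase
{
    private static readonly List<Book> Books = new(); // stand-in for a real data store

    // GET api/books?publisher=Jat&writer=tom&location=LA&fields=title,ISBN
    //              &$orderBy=location desc,writer&limit=25&offset=50
    [HttpGet]
    public IActionResult Get(string? publisher, string? writer, string? location,
        string? fields, [FromQuery(Name = "$orderBy")] string? orderBy,
        int limit = 25, int offset = 0)
    {
        IEnumerable<Book> query = Books;

        // Filtering, as in the dogs sketch.
        if (!string.IsNullOrEmpty(publisher)) query = query.Where(b => b.Publisher == publisher);
        if (!string.IsNullOrEmpty(writer))    query = query.Where(b => b.Writer == writer);
        if (!string.IsNullOrEmpty(location))  query = query.Where(b => b.Location == location);

        // Ordering: "$orderBy=location desc,writer" becomes OrderBy/ThenBy per clause.
        if (!string.IsNullOrEmpty(orderBy))
        {
            IOrderedEnumerable<Book>? ordered = null;
            foreach (var clause in orderBy.Split(','))
            {
                var parts = clause.Trim().Split(' ', StringSplitOptions.RemoveEmptyEntries);
                if (parts.Length == 0) continue;
                var prop = typeof(Book).GetProperty(parts[0],
                    BindingFlags.IgnoreCase | BindingFlags.Public | BindingFlags.Instance);
                if (prop is null) return BadRequest($"Unknown sort field '{parts[0]}'.");
                bool desc = parts.Length > 1 &&
                            parts[1].Equals("desc", StringComparison.OrdinalIgnoreCase);
                Func<Book, object?> key = b => prop.GetValue(b);
                ordered = ordered is null
                    ? (desc ? query.OrderByDescending(key) : query.OrderBy(key))
                    : (desc ? ordered.ThenByDescending(key) : ordered.ThenBy(key));
            }
            query = ordered!;
        }

        // Paging.
        query = query.Skip(offset).Take(limit);

        // Partial response via ?fields=, same reflection projection as the dogs sketch.
        if (string.IsNullOrEmpty(fields)) return Ok(query);
        var props = fields
            .Split(',', StringSplitOptions.RemoveEmptyEntries | StringSplitOptions.TrimEntries)
            .Select(f => typeof(Book).GetProperty(f,
                BindingFlags.IgnoreCase | BindingFlags.Public | BindingFlags.Instance))
            .ToList();
        if (props.Any(p => p is null)) return BadRequest("Unknown field in ?fields=");
        return Ok(query.Select(b => props.ToDictionary(p => p!.Name, p => p!.GetValue(b))));
    }
}

For real data sets you would translate these parameters into IQueryable expressions against the database rather than filtering in memory, or lean on a library such as Microsoft.AspNetCore.OData, whose $filter, $select, $orderby, $top, and $skip options cover the same ground.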

Related

FHIR: Get all encounters, patients, appointments for the practice

I'm trying to use FHIR to pull all patients, encounters and appointments into an intermediate database for further analysis. Most of the FHIR APIs appear to be designed to handle one patient ID at a time, or one encounter at a time, etc. What is the most efficient way to pull the full set of encounters and then keep it current, as well as appointments, etc.?
Please take a look at the FHIR specification website, specifically the page about the RESTful API, and look at search. The API has methods to support the interactions described on the website.
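For example, FHIR search can query a whole resource type at once and page through the resulting Bundle, and the _lastUpdated search parameter lets you fetch only what changed since your last sync. A minimal C# sketch, where the server base URL and the sync timestamp are assumptions, and Encounter can be swapped for Patient or Appointment:

using System;
using System.Net.Http;
using System.Text.Json;

// Page through all Encounter resources updated since a given instant,
// following each Bundle's "next" link per the FHIR RESTful API spec.
var http = new HttpClient();
string? url = "https://fhir.example.org/baseR4/Encounter?_lastUpdated=gt2024-01-01&_count=100";

while (url is not null)
{
    using var doc = JsonDocument.Parse(await http.GetStringAsync(url));

    if (doc.RootElement.TryGetProperty("entry", out var entries))
        foreach (var entry in entries.EnumerateArray())
        {
            var resource = entry.GetProperty("resource");
            Console.WriteLine(resource.GetProperty("id").GetString());
            // ...persist the resource into the intermediate database here
        }

    // Follow the Bundle paging link with relation == "next", if any.
    url = null;
    if (doc.RootElement.TryGetProperty("link", out var links))
        foreach (var link in links.EnumerateArray())
            if (link.GetProperty("relation").GetString() == "next")
                url = link.GetProperty("url").GetString();
}

Re-running the loop with _lastUpdated set to the time of the previous run is one way to keep the intermediate database current.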

What are good practices and useful design patterns for providing an API that supports internationalization?

Given an existing API that works well with the usual mono-locale approach, what steps should one follow to turn it into an internationalized version that enables users to send and receive keys/values in localized form?
End-user interface internationalization is a well-covered topic. But how can it be pushed further into the request and response? If someone hits an API and wants the response in German, how can I do that?
It is tricky to answer without any context, a sample API, or even a high-level description of what your API does.
But in general it is best to separate your APIs into several layers.
The core functionality and data at the application level should be locale-independent, something like this: http://mihai-nita.net/2005/10/25/data-internationalization/
Then you would have the presentation layers, with things like date/time/number formatters, collation, localization, etc.
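As one concrete illustration of that layering in ASP.NET Core, the request localization middleware can negotiate the formatting culture from the Accept-Language header, so a caller asking for de-DE gets German formatting at the presentation edge while the core value stays locale-independent. A minimal sketch; the supported cultures and the endpoint are assumptions:

using System;
using System.Globalization;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Localization;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Negotiate the request culture (from the Accept-Language header, by default).
var cultures = new[] { new CultureInfo("en-US"), new CultureInfo("de-DE") };
app.UseRequestLocalization(new RequestLocalizationOptions
{
    DefaultRequestCulture = new RequestCulture("en-US"),
    SupportedCultures = cultures,
    SupportedUICultures = cultures
});

// The core value is locale-independent; formatting happens at the edge,
// using the culture the middleware negotiated for this request.
app.MapGet("/today", () =>
{
    var now = DateTime.UtcNow;
    return new
    {
        iso = now.ToString("o"),                                   // machine-readable, locale-free
        localized = now.ToString("D", CultureInfo.CurrentCulture)  // e.g. German long date for de-DE
    };
});

app.Run();

Calling GET /today with Accept-Language: de-DE then returns the long date formatted in German, while the stored value never changes.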

Yahoo, Google, Yelp API

Is there any PHP API for gathering information about a business (address, reviews) by its phone number from Yelp, Google, Insiderpages, Yahoo, etc.?
I have done research on these but didn't find the right info. Yelp does provide lookup by phone number, but it requires a ywsid as a mandatory parameter (http://api.yelp.com/phone_search?phone=1234567890&ywsid=XXXXXXXXXXXXXXXX), and I want to search by phone number only.
Please note that all APIs have terms; most of them won't allow you to store their data in your own database, and most of them have display requirements. Before proceeding with any development, read their display requirements and terms carefully.
Depending on your project, you might not be allowed to use the data the way you need or want to.
The other alternative would be scraping sites, but most sites have rules against scraping too...
So again: read a lot before putting effort into something you are prohibited from doing in the first place.
Yelp: ywsid = API key. You need to get your own key if you are using the Yelp API. Adding the data to your own database, or storing the information anywhere, is against their display requirements & API terms. If you are using any API, you must read its terms before even thinking of doing anything.
Google Places: Google Places API
Insiderpages: I don't think they have one, but you could use the CityGrid API, which searches a lot of sites at once.
Yahoo: Yahoo API
CityGrid: as mentioned before, the CityGrid API
Foursquare: Foursquare API
Merchant Circle: Merchant Circle API
White Pages: White Pages API
Yellow Pages: Yellow Pages API
Bottom line: all these companies have put a lot of time, effort, and money into building their databases, and they want you to redirect people back to their pages so they can make their money back and profit.

How to fill out AJAX form programmatically and scrape results?

Basically, I want to use the Facebook Ads Manager Tool to estimate the number of users targeted by a particular set of targeting parameters. I know there is a published API available, but it is only usable if you are on their advertising application "whitelist." I am sure what I am asking is possible. Plus, it would be interesting to learn more about scraping.
Facebook's Ads Manager Tool is basically an AJAX UI for their ads API. In the process of creating a campaign, you can specify targeting parameters, and the page will dynamically report the number of users targeted as you modify the parameters. From what I've read on the web and here on Stack Overflow, it is possible to use Firebug or a similar tool to pick apart which requests are being made by the page and to where, and then mimic these calls to get the information you want.
I'm having trouble interpreting the panels of Firebug. I think the URI I'm trying to send a request to is www.facebook.com/ajax/inventory_estimator.php, though I'm not sure how to form a call.
So, if I want to write a script or program that takes a list of words to use as keywords and returns the estimated number of users for each keyword, how could I do it?
Link to Facebook's Ads Manager Tool, Campaign Creation Page:
http://www.facebook.com/ads/create
Yes, using an extension like Firebug to examine the HTTP requests is a good way to do this.
The Net tab is the one you want (the last one).
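Once the Net tab shows you the exact URL and form fields the page sends, you can replay the same request from code. A generic C# sketch with HttpClient; the endpoint is the one named in the question, but the form field names below are placeholders rather than Facebook's real, undocumented parameters, so copy whatever Firebug actually shows:

using System;
using System.Collections.Generic;
using System.Net.Http;

// Replay a captured AJAX request with the same form fields the page sent.
var http = new HttpClient();
var form = new FormUrlEncodedContent(new Dictionary<string, string>
{
    ["keywords"] = "coffee",   // placeholder: the real parameter names are unknown
    ["country"]  = "US"        // placeholder
});
var response = await http.PostAsync("http://www.facebook.com/ajax/inventory_estimator.php", form);
Console.WriteLine(await response.Content.ReadAsStringAsync());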
Have you tried the irobotsoft web scraper? It has good AJAX support.
Check their forum here: http://irobotsoft.org/bb/YaBB.pl

Scraping tweets - better to use the site or the API?

I'm using the twitter gem to build a Twitter bot in Ruby. I am trying to make it self-sustaining, as it were, so I want it to generate its own content to tweet by scraping tweets from users outside its social circle (and then perhaps garbling them with a Markov chain generator).
Which one is a better strategy?
Search for tweets via the API
Load Twitter pages and scrape tweets with Hpricot or Nokogiri
Also, how can I try to ensure the base tweets come from outside my bot's followers' friends so it's harder to tell it's a bot?
At the moment I use a .yml file with tweets I generated by hand, which is far from ideal.
There are two questions here.
It's always better to use an API where one is available. This will future-proof you against the bot randomly breaking when a simple HTML element is changed, and it will also allow the website (i.e., Twitter) to rate-limit your searches in case you put too high a load on the service. Although this is unlikely for Twitter, it's good practice.
Sometimes, the information you want is unobtainable via the API. In this case, you should consider if you really need to scrape it, and if so, how to limit yourself to be polite.
Basically, if the API allows you to do what you want, use it for maintainability.
As for your second question, I do not have any experience with the Twitter API. Is there a method to get the Twitter IDs of all your followers, and of whom they follow? If not, you'll be forced to scrape as mentioned earlier, if you really do need this information.
Once you have a list of those who your followers follow, you can check if the ID of the poster of what you want to repost falls inside this set.
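The membership test itself is just a set lookup once the IDs are collected, for example via the API's followers/ids and friends/ids calls. A small sketch, in C# for consistency with the rest of this page, with the actual fetching left as an assumption:

using System.Collections.Generic;

// Given follower IDs and each follower's friend IDs (fetched elsewhere,
// e.g. via the Twitter API's followers/ids and friends/ids endpoints),
// build the bot's "social circle" set and test whether a poster is outside it.
static bool IsOutsideCircle(long posterId,
                            IEnumerable<long> followerIds,
                            IReadOnlyDictionary<long, long[]> friendsOfFollowers)
{
    var circle = new HashSet<long>(followerIds);
    foreach (var friends in friendsOfFollowers.Values)
        circle.UnionWith(friends);          // add friends of followers
    return !circle.Contains(posterId);      // true: safe to borrow from this user
}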
Would you consider retweeting for this aspect of the bot?
One other thing to note is performance. If you were to scrape the website, you would have to download the entire page and then scrape it (which is processor-intensive as it is), as opposed to hitting the API, which returns only JSON/XML data.
So, strictly from a performance standpoint, I would go with the API.
