First off, let me state that this is HOMEWORK for school. I am looking for general ideas and direction, not "this is exactly what to write". We did not really cover APIs, so I am trying to learn as I go.
I have been asked to design an API that can be used with Facebook, for things like posting on my feed or adding new friends. I have been doing tutorials online, and most seem to have me add some "ruby gem" that provides whatever methods the website offers. For example, I did a Twilio.com demo that needed require 'twilio-ruby' and a Twitter one that needed require 'twitter'. To my understanding, these are "gems" and not "APIs", correct? That being said, is Koala an API, or simply a gem that contains the methods I need for writing an API (specifically for Facebook)? If I were to use Koala and it were an API, I feel that would sort of defeat the purpose of writing an API (I'd just be reusing their methods and such).
Any other Ruby/Facebook API help would be greatly appreciated!
Have a look at https://developers.facebook.com/docs/other-sdks. Koala is listed as an "other" SDK, meaning that it provides a wrapper around the low-level Facebook Graph API requests. So, no, it's not a (web) API of its own, IMHO.
In a narrower interpretation, an API is just an "application programming interface". It would not necessarily have to be accessible via, say, a REST interface. I think it depends on the definition of API, or rather on what your professor expects. If this is unclear, I'd check back with him/her.
Check
http://en.wikipedia.org/wiki/Api
http://en.wikipedia.org/wiki/Web_API
http://en.wikipedia.org/wiki/Representational_state_transfer
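To make the "wrapper" point concrete, here is a rough sketch of the same Graph API call made through Koala and made by hand. The access token is a placeholder, and get_object is one of Koala's convenience methods as I understand them, so double-check the gem's docs:

    require 'koala'
    require 'net/http'
    require 'json'

    ACCESS_TOKEN = 'replace-with-a-real-user-token' # placeholder

    # With the gem: Koala builds, signs, and sends the Graph API request for you.
    graph = Koala::Facebook::API.new(ACCESS_TOKEN)
    me = graph.get_object('me')   # roughly GET https://graph.facebook.com/me
    puts me['name']

    # Without the gem: Koala is "just" a convenience layer over requests like this.
    uri = URI("https://graph.facebook.com/me?access_token=#{ACCESS_TOKEN}")
    puts JSON.parse(Net::HTTP.get(uri))['name']

Either way, the actual API is Facebook's Graph API; the gem only saves you from writing the HTTP plumbing yourself.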
Related
I'm currently studying computing, and the course covers APIs. I keep reading the phrase "APIs provide an interface for communicating software", but I'm not really sure what the interface part of an API actually is. May I ask for your help explaining it?
Not sure if I get your question right, but let me have a try:
So basically, a simple architecture for building an app is to split it into a frontend and a backend. For example, in a to-do-list app there is server-side software which manages all the data, and a mobile app which shows that data to the user. The backend is an "abstract" program; by that I mean there are no buttons or anything to click. So when you want to create a task in your frontend app, you have to tell the backend (written in Java or Python, for example) that you want to do this. For that you use an API.
This is basically a call to a web address. The backend recognizes it, loads data out of a database, processes it, and returns it, for example in JSON format. That response is sent back to the caller.
Look here: https://jsonplaceholder.typicode.com/todos/1
The app is now able to fetch this data automatically and work with it.
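Since the rest of this thread is Ruby, here is a minimal Ruby sketch of a frontend consuming that JSONPlaceholder endpoint (the field names come from the sample resource linked above):

    require 'net/http'
    require 'json'

    # Ask the backend (here, the public JSONPlaceholder API) for one to-do item.
    uri  = URI('https://jsonplaceholder.typicode.com/todos/1')
    body = Net::HTTP.get(uri)   # plain HTTP GET; the response body is a JSON string
    todo = JSON.parse(body)     # turn the JSON into a Ruby hash

    puts todo['title']          # the backend decided which fields to expose...
    puts todo['completed']      # ...and the frontend just consumes them

That hash is the "interface": the frontend never touches the database, it only sees the fields the API chooses to expose.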
Obviously, real APIs are much more complex. For example, you have to authenticate, you can pass parameters, and so on. There are GET, POST, and DELETE methods.
But this is simply the basic concept of an API.
Look here to see how to create an API (for example in Java): https://spring.io/guides/gs/rest-service/
Look here to see how, for example, a mobile app consumes an API (in JavaScript): https://www.taniarascia.com/how-to-connect-to-an-api-with-javascript/
I hope I could help you :)
Best regards
Sebastian
I'm using the twitter gem to build a Twitter bot in Ruby. I am trying to make it self-sustaining, as it were, so I want it to generate its own content to tweet by scraping tweets from users outside its social circle (and then perhaps garbling them with a Markov chain generator).
Which is the better strategy?
Search for tweets via the API
Load Twitter pages and scrape the tweets with Hpricot or Nokogiri
Also, how can I try to ensure the base tweets come from outside my bot's followers' friends so it's harder to tell it's a bot?
At the moment I use a .yml file with tweets I generated by hand, which is far from ideal.
There are two questions here.
It's always better to use an API where one is available. This will future-proof you against the bot randomly breaking when a simple HTML element is changed, and it also allows the website (i.e., Twitter) to rate-limit your searches in case you put too high a load on the service. Although that is unlikely for Twitter, it's good practice.
Sometimes, the information you want is unobtainable via the API. In this case, you should consider if you really need to scrape it, and if so, how to limit yourself to be polite.
Basically, if the API allows you to do what you want, use it for maintainability.
As for your second question, I do not have any experience with the Twitter API. Is there a method to get the Twitter IDs of all your followers, and of the people they follow? If not, you'll be forced to scrape as mentioned earlier, if you really do need this information.
Once you have the list of accounts your followers follow, you can check whether the ID of the author of the tweet you want to repost falls inside that set.
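In rough terms with the twitter gem, the check might look like the sketch below. I haven't verified these calls myself, so treat the method names as assumptions to check against the gem's docs; the env-var names are placeholders, and expanding every follower's friend list will burn through rate limits fast on anything but a small account:

    require 'twitter'
    require 'set'

    client = Twitter::REST::Client.new do |config|
      config.consumer_key        = ENV['TWITTER_CONSUMER_KEY']
      config.consumer_secret     = ENV['TWITTER_CONSUMER_SECRET']
      config.access_token        = ENV['TWITTER_ACCESS_TOKEN']
      config.access_token_secret = ENV['TWITTER_ACCESS_TOKEN_SECRET']
    end

    # The "too close" set: my followers plus everyone they follow.
    follower_ids = client.follower_ids.to_a
    known_ids    = follower_ids.to_set
    follower_ids.each { |id| known_ids.merge(client.friend_ids(id).to_a) }

    # Pull candidate source tweets and keep only ones from outside that circle.
    candidates  = client.search('ruby', result_type: 'recent').take(50)
    base_tweets = candidates.reject { |tweet| known_ids.include?(tweet.user.id) }

    base_tweets.each { |tweet| puts tweet.text }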
Would you consider retweeting for this aspect of the bot?
One thing to also note is performance. If you were to scrape the website, you would have to download the entire page and then scrape it (which is processor-intensive as it is). Hitting the API, by contrast, returns only JSON/XML data.
So from strictly a performance standpoint, I would go with the API.
So far I've come across the following Facebook API libraries for Ruby/Ruby-on-Rails:
Facebooker
Koala
Mogli
Facebooker2
fb_graph
facebook_oauth
I was wondering if anyone knows why there are so many, and if anyone has a rough idea of which to use when?
I did the same search recently and ultimately chose Koala. Facebooker was the clear choice a couple of years ago, but it's out of date now with so many recent Facebook API changes. Koala and fb_graph seem to be the most popular now. Koala is easy to use for accessing the Graph API. I haven't used it for the older REST API, though Koala does support it. The only difficult part I've found is the Facebook authentication with OAuth, though that's probably Facebook itself rather than Koala.
Relevant discussion here as well: Is fb_graph or Koala ruby gem better than facebooker2, using the facebook graph?
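For what it's worth, the OAuth handshake plus a Graph call with Koala boils down to something like this sketch. The app ID, secret, and callback URL are placeholders, and the method names follow Koala's docs as I remember them, but Facebook changes things often, so verify:

    require 'koala'

    # Placeholders: use your app's real credentials and callback URL.
    oauth = Koala::Facebook::OAuth.new('APP_ID', 'APP_SECRET',
                                       'http://localhost:4567/callback')

    # Step 1: send the user to this URL so they can authorize your app.
    login_url = oauth.url_for_oauth_code(permissions: ['publish_stream'])

    # Step 2: back in your callback action, trade the returned code for a token.
    access_token = oauth.get_access_token(params[:code])

    # Step 3: talk to the Graph API with that token.
    graph = Koala::Facebook::API.new(access_token)
    graph.get_object('me')                      # current user's profile
    graph.put_wall_post('Posted via Koala!')    # post to the user's feed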
I have had the same problem ("which one to use?") and I tried Facebooker, Mogli, and fb_graph.
I can say that fb_graph is the best: the most mature, up to date, and well documented (you can read the Facebook docs and apply them to fb_graph; it works like magic).
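To give a taste of the style, here is a tiny sketch (the token is a placeholder; the calls mirror the gem's README, so check that for current details):

    require 'fb_graph'

    ACCESS_TOKEN = 'replace-with-a-real-user-token' # placeholder

    user = FbGraph::User.me(ACCESS_TOKEN).fetch   # roughly GET /me on the Graph API
    puts user.name

    # Mirrors the Graph API's POST /me/feed.
    user.feed!(message: 'Hello from fb_graph')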
I'm trying to make a web app in Sinatra, and I was wondering if there is a good solution for user sign-up with email verification, as well as authentication, perhaps as Rack middleware? OpenID support would be nice to have too.
I suppose I can roll my own, but I didn't want to reinvent the wheel. If I have to do so, can anyone point me to the libraries I might want to use, maybe even example code? I'm also worried I might end up forgetting to implement something important with signup/authentication, since I've never done this before.
In case I need a homemade solution, I've found bcrypt-ruby for password hashing and Sinatra::Mailer or Pony for email. For signing in with OpenID support, there's hancock and hancock-client, though I'm not entirely clear on their usage and I don't actually need single sign-on support. Maybe I should just use a Ruby OpenID library? Do I need anything else?
This is a pretty muddled question, but I hope someone more experienced can point me in the right direction.
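For what it's worth, the bcrypt-ruby part of a homemade solution looks tiny; this is a minimal sketch of hashing at signup and checking at login (the password literal is obviously just for illustration):

    require 'bcrypt'

    # At signup: hash the password and store only the digest string.
    digest = BCrypt::Password.create('s3cret').to_s   # "$2a$..." salted, slow-by-design hash

    # At login: BCrypt::Password overrides == to hash the attempt and compare.
    stored = BCrypt::Password.new(digest)
    stored == 's3cret'   # => true
    stored == 'wrong'    # => false

The part I'd still worry about is everything around it (sessions, reset tokens, email verification), which is why I'd rather find an existing library.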
You might be interested in Authlogic. You'll need to implement the e-mail verification yourself, but it will provide you with a good foundation for supporting this.
Authlogic can be used in any Ruby framework you want: Rails, Merb, Sinatra, Mack, your own framework, whatever. It's not tied down to Rails. It does this by abstracting itself from these frameworks' controllers by using a controller adapter. Thanks to Rack, there is a defined standard for controller structure, and that's what Authlogic's abstract adapter follows. So if your controller follows the Rack standards, you don't need to do anything.
When you go to edit your favorite music or movies on Facebook, you will notice an autocomplete suggestion list that is basically a list of "everything" (brand names, music artists, movies, etc.). How can someone consume that list in their own code? Is it part of the Facebook API?
They wrap some of the functionality in their FBML fields, but their developer wiki shows how they do what they do. If you want to consume their data, though, you're going to have to play with an HTTP proxy and figure out what parameters to send to their server. There are also a couple of parameters that seem to be session-based, so I don't know how well you're going to be able to integrate this into your own application.
This was working for a while, but now they require the session cookie, so we'll have to hope they add support for this to the Graph API, unless you want to fight with the proxy.