How to get the direct link from a twitter short link - yahoo-pipes

Twitter RSS feed links come shortened (like http://t.co/rwkYrSPD).
I want to get the direct link using Yahoo Pipes.
How can I do it?

Well, obviously you can simply go to the page. For example, http://t.co/rwkYrSPD becomes http://trailers.apple.com/trailers/focus_features/paranorman/

Does Yahoo Pipes have the ability to execute a HEAD request against a URL? If you run a HEAD against that URL (curl -I 'http://t.co/rwkYrSPD') you'll get back an HTTP/1.1 301 Moved Permanently response with the actual URL in the Location header.
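For what it's worth, the mechanism is easy to sketch outside Pipes. A minimal Node.js sketch, using the example short link from the question; it just issues the HEAD request and prints the Location header:

var http = require('http');

// HEAD request against the t.co short link; a shortener typically answers
// with 301 Moved Permanently and the target URL in the Location header.
var req = http.request({ method: 'HEAD', host: 't.co', path: '/rwkYrSPD' }, function (res) {
    console.log(res.statusCode, res.headers.location);
});
req.end();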

I realise it's quite a bit later now, but check out the source of this Pipe, which uses the LongURL API (e.g., http://api.longurl.org/v2/expand?url=http://t.co/rwkYrSPD) together with YQL.
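And a hedged browser-side sketch of that LongURL call; the format and callback parameters are assumptions based on LongURL's old docs (the service may well be gone by the time you read this), so treat it as illustration only:

// Ask LongURL to expand the short link, using JSONP so the cross-domain
// call works in the browser (callback=? tells jQuery to use JSONP).
$.getJSON(
    'http://api.longurl.org/v2/expand?format=json&callback=?&url=' +
        encodeURIComponent('http://t.co/rwkYrSPD'),
    function (data) {
        console.log(data['long-url']); // the expanded URL (per the old API docs)
    }
);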

Related

Use GET params to provide the Web interface with a specific text to annotate

I would like to link to my instance of the CoreNLP server with a specified text (and possibly a specified set of annotators), i.e. without having to paste the text and then click Submit.
Is there a way to do this?
(I know and use the API version, but I'm looking for the Web visualisation)
No, the current visualization doesn't let you specify text in the URL (though pull requests are always welcome; the source code lives here).
The server does respond to regular POST requests, e.g., if you want to call CoreNLP from your own webpage through JavaScript. For example, this curl command (from the documentation page):
curl --data 'The quick brown fox jumped over the lazy dog.' 'http://localhost:9000/?properties={%22annotators%22%3A%22tokenize%2Cssplit%2Cpos%22%2C%22outputFormat%22%3A%22json%22}' -o -
The visualization is done with more or less vanilla brat.
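If it helps, here is a rough fetch() equivalent of that curl command, assuming a CoreNLP server listening on localhost:9000:

// Same request as the curl example: the text goes in the POST body and the
// annotator configuration goes in the URL-encoded properties parameter.
var props = encodeURIComponent(JSON.stringify({
    annotators: 'tokenize,ssplit,pos',
    outputFormat: 'json'
}));

fetch('http://localhost:9000/?properties=' + props, {
    method: 'POST',
    body: 'The quick brown fox jumped over the lazy dog.'
}).then(function (res) {
    return res.json();
}).then(function (json) {
    console.log(json.sentences); // one entry per sentence, with tokens and POS tags
});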
So, answering my own question: this is now possible (I believe from CoreNLP 3.8), after a successfully merged pull request from yours truly.
This is the relevant pull request: https://github.com/stanfordnlp/CoreNLP/pull/423

Google Drive API v3: is there any way to get a download URL for a Google document?

The Google Drive API v2 to v3 migration guide says:
The exportLinks field has been removed from files. To export Google Documents, use the files.export method instead.
I don't want to export (download) the file right away; files.export will actually download the file. I want a link to download the file later. This was possible in v2 by means of exportLinks.
How can I accomplish the same in v3? If it is not possible, why was this useful feature removed?
Besides (a similar problem to the above), downloadUrl was also removed, and the suggested alternative (files.get with ?alt=media) downloads the file instead of providing a download link. Does this mean there is no way in v3 to get a public short-lived URL for a file?
EDIT:
there is no way in v3 to get a public short-lived URL for a file?
For regular files, there apparently is.
This seems to work fine (a public short-lived link to the file with its right name and contents):
https://www.googleapis.com/drive/v3/files/ID?alt=media&access_token=TOKEN
For Google Apps files, there isn't (not even a private one, as v2 exportLinks used to provide).
https://www.googleapis.com/drive/v3/files/ID/export?mimeType=TYPE&access_token=TOKEN
Similar to regular files, this URL is a short-lived link to the file contents, but it lacks the right file name.
BTW, I see the API is not behaving consistently: /drive/v3/files/FILEID delivers the right file name, but /drive/v3/files/FILEID/export does not.
I think the API itself should be setting the right Content-Disposition, as it is apparently doing when issuing a /drive/v3/files/FILEID call.
This file-naming problem invalidates the workaround for the lack of exportLinks in v3.
The v2 exportLinks allowed me to link to a file (which is not the same as getting its content right away). Anyone logged in and with the proper permissions was able to access it; the link didn't need any access_token, and it wasn't short-lived. It was good and useful.
Building a link with a raw API call like /drive/v3/files/FILEID/export (with a mandatory access_token) would be a close enough workaround (it is temporary and public, not the same as it was, anyway). However, the naming problem invalidates it.
In v2, regular files have a webContentLink and Google Apps files have exportLinks. In v3 exportLinks are gone, and I don't see any suitable alternative to them.
Once you query for your file by ID you can use the function getWebContentLink() to get the download link of the file (e.g. $file->getWebContentLink()).
I think you're placing too much emphasis on the word "method".
There is still a link to export a file, it's https://www.googleapis.com/drive/v3/files/fileIdxxxxx/export?mimeType=xxxxx/xxxxx. Make sure you URL encode the mime type. For example:
https://www.googleapis.com/drive/v3/files/1fGBQ81haNU_nEiC5GITZD3bxT0ppL2LHg-C0ubD4Q_s/export?mimeType=text/csv&access_token=ya29.Gmo0BMvO-pVEPKsiD9j4D-NZVGE91MChRvwOcBSg3cTHt5uAClf-jFxcovQScbO2QQhwHS95eSGW1eQQcK5G1UQ6oI4BFEJJkntEBkgriZ14GbHuvpDL7LT2pKA--WiPuNoDDIuZMm5lWtlr
These links form part of the API, so the expectation is that you've written a client that sends authenticated requests and deals with the response data. This explains why, if you simply paste the link into a browser without an access_token, it will fail. It also explains why the filename is export, i.e. it isn't intended that your client would ever use a filename; rather, it should receive the data as a stream. This SO answer discusses the situation in more detail: How to set name of file downloaded from browser?
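To illustrate that intended flow, here is a hedged browser sketch that calls files.export with the token in an Authorization header and then names the file client-side; FILE_ID, ACCESS_TOKEN and report.csv are placeholders, not anything the API hands back:

// Export the document as CSV, receive it as a stream of bytes, and choose
// the downloaded file's name ourselves, since the response won't.
fetch('https://www.googleapis.com/drive/v3/files/FILE_ID/export?mimeType=' +
        encodeURIComponent('text/csv'), {
    headers: { Authorization: 'Bearer ' + ACCESS_TOKEN }
}).then(function (res) {
    return res.blob();
}).then(function (blob) {
    var a = document.createElement('a');
    a.href = URL.createObjectURL(blob);
    a.download = 'report.csv'; // the client picks the name
    a.click();
});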

Any gem available that will pull out the GET and POST parameters of source code that I've got using net/http?

I am pulling the source code of a webpage using net/http and I was wondering if there was any way to parse out the GET and POST parameters so that they can be listed?
Thanks,
Tom
GET and POST parameters travel in the HTTP request (the query string and the request body), not in the HTML source. So the answer is no, you can't get them from the source, unless you know that the information has somehow been encoded in the HTML, or you can do that yourself.
However, any GET or POST parameters would have been sent by your Net::HTTP code, so you can print those out yourself.
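The question asks about Ruby, but the one case where this is recoverable is easy to sketch in browser JavaScript: when the parameters are encoded as form fields in the HTML you fetched. htmlSource below is a hypothetical variable holding the page source you pulled down:

// Parse the fetched HTML and list each form's method, action, and fields;
// these are the names a browser would submit as GET or POST parameters.
var doc = new DOMParser().parseFromString(htmlSource, 'text/html');
doc.querySelectorAll('form').forEach(function (form) {
    console.log(form.method.toUpperCase(), form.action);
    form.querySelectorAll('[name]').forEach(function (field) {
        console.log('  ' + field.name + ' = ' + field.value);
    });
});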

Cross domain content usage from client script (security issues)

I'm trying to load some external content into a div on my page using jQuery's load function. The load method works fine with local content, but if you want something outside your domain, it won't work.
$("#result").load("http://extrnal.com/page.htm #data");
(It actually works in IE with a security warning, but refuses to work in Chrome at all.) The jQuery documentation says this is correct, because cross-domain content is restricted for security reasons. I get the same warning if I use the .getJSON method.
OK, after googling a bit I found a very interesting approach of using YQL for loading content. I've tried some examples, like this:
var request = "http://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20html%20where%20url%3D%22http%3A%2F%2Ffinance.yahoo.com%2Fq%3Fs%3Dyhoo%22&format=json&diagnostics=true&callback=?";
$.getJSON(request, function (json) {
    alert(json);
});
And it really works!
What I don't understand now is that http://query.yahooapis.com is also a cross-domain resource, but the browser (both IE and Chrome) works OK with it.
What's the difference? What am I missing?
Thank you
The results you are getting back from YQL are in JSON format, which is permitted for cross-site AJAX calls like this. It's the same mechanism that allows you to communicate with web services on external sites via JSON (e.g. the Twitter API).
Details here - http://www.wait-till-i.com/2010/01/10/loading-external-content-with-ajax-using-jquery-and-yql/
You can have the external site wrap the JSON like this:
callback({key: value, etc: 1})
and define
function callback(json) {
    // ...processing here...
}
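Putting the two halves together, a minimal sketch of the whole JSONP handshake; the endpoint URL is hypothetical:

// The page defines the callback, then pulls the remote response in as a
// <script> tag, which the same-origin policy does not restrict.
function callback(json) {
    console.log(json); // process {key: value, etc: 1} here
}
var script = document.createElement('script');
script.src = 'http://example.com/data?callback=callback';
document.body.appendChild(script);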
Thanks for your answers, but unfortunately neither of them answers my original question.
I've checked out related questions on Stack Overflow (I know, I should have done that first) and found the reason for this behaviour.
The first code snippet uses a plain AJAX/JSON request to retrieve the data, and that is blocked by the Same Origin Policy. But the request to YQL uses JSONP instead, which is OK.
JSONP was something I didn't know about, which is why I didn't understand the behaviour.
Introductory info on JSONP can be found here:
http://ajaxian.com/archives/jsonp-json-with-padding

Is there a way to see the final URL retrieved by an XMLHttpRequest?

I'm doing an AJAX download that is being redirected. I'd like to know the final target URL the request was redirected to. I'm using jQuery, but also have access to the underlying XMLHttpRequest. Does anyone know a way to get the final URL?
It seems like I'll need to have the final target insert its URL into a known location in the headers or response body, then have the script look for it there. I was hoping to have something that would work regardless of the target though.
Additional note: I'm asking how my code can get the full url from production code, which will run from the user's system. I'm not asking how I can get the full url when I'm debugging.
The easiest way to do this is to use Fiddler or Wireshark to examine the HTTP traffic. Use Fiddler at the client if your interface uses a browser, otherwise use Wireshark to capture the traffic on the wire.
One word: Firebug. It is a Firefox plugin; never do any kind of AJAX development without it.
Activate Firebug and select Net, then perform your AJAX request. This will show the URL that is called, the entire request (header and body) and the entire response (once again, header and body). It also allows you to step through your JavaScript and debug it - breakpoints, watches, etc.
I'll second the Firebug suggestion. You'll see the URL as the Location header in the HTTP response.
It sounds like you also want to get this URL in JS? If so, you can get it off the XHR response object in the callback (which you can also inspect using Firebug!). :)
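For future readers: browsers later added exactly this to the platform as xhr.responseURL (and fetch's response.url), so it postdates this question but answers it directly in production code:

var xhr = new XMLHttpRequest();
xhr.open('GET', '/some/redirecting/endpoint'); // hypothetical endpoint
xhr.onload = function () {
    console.log(xhr.responseURL); // the final URL after any redirects
};
xhr.send();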
