Can the default media receiver for Chromecast handle a contentId/contentUrl that results in a redirect? - chromecast

I'm using a service for live-streaming videos that provides a link (say https://example.com/video.m3u8) to the relevant file in order to cast it. Following the link manually, I go through a 302 redirect (to e.g. https://anotherdomain.com/video.m3u8) before reaching the video file. When I try to cast the link (with both the Chrome and Android sender SDKs, the latter through a Cordova plugin), I get an error if I use the example.com URL, but succeed with the anotherdomain.com URL.
I can check for redirects manually before casting, but ideally the default receiver would do that for me.
Is failing on a redirect the expected behaviour? Is there any way to configure the receiver so that it follows redirects?
I'm not sure whether the redirect response having the wrong Content-Type (text/html rather than application/x-mpegURL) could be part of the cause.
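For the manual check, this is roughly what I have in mind (browser JavaScript with the Chrome sender; a sketch that assumes the streaming host allows a cross-origin HEAD request, otherwise the URL would have to be resolved server-side, and that session, onSuccess and onError already exist in the sender code):
// fetch() follows redirects transparently; response.url holds the final location.
function resolveFinalUrl(url) {
  return fetch(url, { method: 'HEAD' }).then(function (response) {
    return response.url; // e.g. https://anotherdomain.com/video.m3u8
  });
}

resolveFinalUrl('https://example.com/video.m3u8').then(function (finalUrl) {
  // Cast the already-resolved URL so the default receiver never sees the 302.
  var mediaInfo = new chrome.cast.media.MediaInfo(finalUrl, 'application/x-mpegURL');
  session.loadMedia(new chrome.cast.media.LoadRequest(mediaInfo), onSuccess, onError);
});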

Related

Google Drive API Console: Error saving Drive UI integration page

I have a web app in production that interacts with Google Drive through the Google Drive API.
I need to change some settings of the Drive integration, but I can't save them.
When I save the Drive UI integration page, I receive this error:
There's a problem at our end.
Please try again. If the problem persists, please let us know using
the "Send feedback" link below. Thanks!
(Inspecting the Network console, there is an Internal Server Error on a POST call.)
I have tried sending feedback for months: nobody answers and the bug is still there.
I have also tried creating another project: I can save the first time, but then the bug returns.
What can I do? Does anyone else have the same problem?
Is there a way to receive a reply from Google? Is there some workaround?
Thank you.
I think the problem must be the Client ID.
Before adding the Client ID, go to Credentials -> OAuth 2.0 Client IDs,
then edit your Client ID and add your production site URL to both Authorized JavaScript origins and Authorized redirect URIs.
Then enter your Client ID on the Drive UI integration page.
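For example (the URL below is a placeholder, not a real origin):
Authorized JavaScript origins: https://www.yourapp.example
Authorized redirect URIs: https://www.yourapp.example/oauth2callback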
While getting the Drive UI configured myself, I noticed a couple of errors (none of which come with any specific error messages); a sample configuration is sketched after this list.
When adding an Open URL, it has to be a valid domain. For instance, I tried to test it with localhost, to no avail: something like https://devbox.app.com worked, but something like https://localhost:8888 did not. Even though https://localhost is a valid JavaScript origin in the client_id configuration (at least for the app I am working on; I'm not sure about other apps), localhost doesn't work as an Open URL.
When adding MIME types, they need to be in the format */* and can include custom MIME types like application/custom+xml and application/custom-name+json. I'm not sure about custom types that are not based on a particular format like XML or JSON, nor about wildcards.
When adding file extensions, do not include the '.'; just give the name of the extension.
The app icon only failed to upload when the image wasn't the exact required dimensions; I actually ended up editing some icons in Photoshop to change the pixel dimensions as a quick workaround during development.
That worked for me to get it to save, and I tested it with a file that had a custom MIME type (application/custom-name+xml, specifically) and a custom file extension!
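Putting those points together, the configuration that saved for me looked roughly like this (the values are illustrative, taken from the notes above, and the /open path is just an example):
Open URL: https://devbox.app.com/open
Default MIME types: application/custom-name+xml
Default file extensions: xml
Application icon: an image with exactly the required pixel dimensions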

How do I Create a Custom Connector in Microsoft Flow with the correct request URL?

I am attempting to create a custom connector for the Clio API (https://app.clio.com/api/v4/documentation). I was able to successfully authenticate and access the API in Postman, testing out quite a few different types of requests with good results.
Then I exported the collection to a Postman file and imported it into a new custom connector in my MS Flow account as instructed at https://learn.microsoft.com/en-us/connectors/custom-connectors/define-postman-collection. As part of that process, I entered the following settings:
Scheme: HTTPS
Host: app.clio.com
Base URL: /
Within the custom connector, all the request definitions looked acceptable, except that the URLs were not fully qualified: they did not include https://app.clio.com.
For example, one request should use the following address:
https://app.clio.com/api/v4/contacts.json
The field in MS Flow where the URL should be entered is grayed out and shows only /api/v4/contacts.json.
The grayed-out field cannot be typed in. Instead, I have clicked "Import from sample," which opens a window where I can type in the fully qualified URL. After I do that and click the "Import" button, the window still lists only the partial URL shown above.
At first I thought that was intentional, since I had entered the host elsewhere for the connector, and I thought that Flow would put them together to send the request to the right URL. But it did not: when I tested the operation, I got a 404 error:
{
"error": "{\r\n \"code\": 404,\r\n \"message\": \"Unable to match incoming request to an operation.\",\r\n \"source\": \"msmanaged-na.azure-apim.net\",\r\n \"path\": \"\",\r\n \"clientRequestId\": \"500779d5-356d-4c79-bf96-caf2-f5bc2919\"\r\n}"
}
When I looked at the request, this is the URL:
https://msmanaged-na.azure-apim.net/apim/clio2.5fb03ce8462066f352.5fdeb6bc35b813689d/92053762-68ce-4c1d-9085-0785-0fd98c3b/api/v4/contacts.json?type=Person
So obviously Flow is not using the correct request URL, and I cannot figure out how to enter the fully qualified request URL. Can anybody tell me what I am doing wrong?
I found another comment where someone else is having the same problem: https://stackoverflow.com/a/48813209/7191369 so I'm not the only one. Thanks in advance for your help.
Edit:
After some additional searching, I learned that the address in the request (with https://msmanaged-na.azure-apim.net) is the required redirect URL for the proxy, per this post: https://powerapps.microsoft.com/en-us/blog/custom-api-with-authentication/, and is used when processing OAuth. But the crappy part is that I can't see the request URL, so I can't troubleshoot. Is there any way to see what request the proxy server is sending out to the Clio API?
It's been a while since this question was posted, but let me suggest including the /api/v4 part of the URL in the Base URL property of the connector. That way all your endpoints use the specified version, and you don't have to repeat it in every request's path.
Unless, of course, you intentionally want to use different versions across requests :) Anyway, I'm glad you've been able to resolve the issue.
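Concretely, that suggestion turns the settings from the question into something like this (assuming every operation you need lives under /api/v4):
Scheme: HTTPS
Host: app.clio.com
Base URL: /api/v4
Each operation then specifies only the short path /contacts.json, which the connector resolves to https://app.clio.com/api/v4/contacts.json.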

How can I scrape an image that doesn't have an extension?

Sometimes I come across an image that I can't scrape and save. An example of this is:
https://s3.amazonaws.com/plumdistrict.com-production/perks/12321/image/original.?1325898487
When I hit the URL from Internet Explorer I see the image, but when I try to get it with the code below, GetResponse throws "System.Net.WebException: The remote server returned an error: (403) Forbidden":
using System.Net; // WebRequest, WebResponse and WebException live here

string url = "https://s3.amazonaws.com/plumdistrict.com-production/perks/12321/image/original.?1325898487";
WebRequest request = WebRequest.Create(url);
WebResponse response = request.GetResponse(); // the WebException (403 Forbidden) is thrown here
Any ideas on how to get this image?
Edit:
I am able to save images that do have extensions. For example, I can scrape the following image just fine:
https://s3.amazonaws.com/plumdistrict.com-production/perks/12659/image/original.jpg?1326828951
Although HTTP was originally supposed to be stateless, a lot of implementations rely on it not being stateless. For example, I could configure my web server to only accept requests for "http://mydomain.com/sexy_avatar.jpg" if you provide a cookie proving you are logged in; if not, I send you a 303 redirect to "http://mydomain.com/avatar_for_public_use.jpg".
Amazon could be doing the same. Try loading the page in Chrome and look at the Network view in the developer tools (CTRL+SHIFT+J) to see all the headers supplied to the website. You may even need to do a full navigation in the same session before you are allowed to see the image; this is certainly the case in many web applications I have developed :-)
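For illustration, these are the kinds of request headers a browser typically sends that the bare WebRequest above does not; which of them (if any) the server actually checks is exactly what the Network view should tell you, and the values below are made up:
GET /plumdistrict.com-production/perks/12321/image/original.?1325898487 HTTP/1.1
Host: s3.amazonaws.com
User-Agent: Mozilla/5.0 (Windows NT 6.1) AppleWebKit/535.7
Accept: image/png,image/*;q=0.8,*/*;q=0.5
Referer: https://plumdistrict.com/
Cookie: _session_id=<whatever the site set during normal navigation>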
Well, it looks like the image is being generated by a script (possibly retrieved from a database). The server should be sending a Content-Type header along with it... but it doesn't seem to be, which I believe is a violation of the standard.
My Linux box knows full well that that's a JPEG image once it's on my hard drive, because it examines file headers rather than relying on extensions. Perhaps there is a tool to do the same in Windows?
Edit: Actually, on further contemplation, it seems odd that you'd get a 403 for that. Perhaps the server is actually blocking you from retrieving the file in that manner.

How can I validate http response headers?

It's the first time I am doing something with headers. I am mainly concerned with Cache-Control but there may be others I will need to check as well. For example, I try to send the following header to the browser (based on tutorials I just read):
Cache-Control:private, max-age=2011-12-30 11:40:56
Google Chrome displays it this way under Network -> Headers -> Response headers, but how do I know if it's correct, that there aren't any typos, syntax errors and such? Will it really work? Will the browser behave the way I want it to, or will it treat the header as gibberish (something like "unknown header/value")? I've tried sending nonsensical headers on purpose, but they got displayed along with the rest. Is there any Chrome tool or addon for that, or any other way? Thank you in advance!
I'm afraid you won't be able to check if the resource has been cached by proxies en route, but you can check if your browser has cached it.
While in the Network panel of Chrome DevTools, hit F5 to reload your page. You should see something like "304 Not Modified" in the status field for the resource in question, which means the resource has not been modified and its contents were not received from the server but loaded from the browser's cache.
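One thing you can check directly against the spec: max-age takes a number of seconds, not a date, so a well-formed version of the header from the question would look something like this (3600 is an arbitrary example meaning "cache privately for up to one hour"):
Cache-Control: private, max-age=3600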

Allowing Cross domain ajax calls from firefox

I want to change the settings of Firefox to allow it to make cross-domain Ajax calls; its security policy normally only allows Ajax calls within the same domain. I have code, given below, that works fine in Safari, but Firefox doesn't display the results: the code runs on my local machine and calls the csce server, so Firefox blocks the request and returns an error. I know it would start working if I uploaded the code to the csce server, but I want to run it from my machine. Can anyone help me resolve this? I have spent the past couple of days just searching for a solution.
Kindly suggest how to achieve this, or should I go with some older version of Firefox?
I googled and set the browser parameters in the config file as specified on this site, but it still doesn't work:
http://code.google.com/p/httpfox/issues/detail?id=20
Maybe you could use Privoxy and tell it to inject something like "Access-Control-Allow-Origin: *" into the server response.
To do this, you would have to go into the file user.filter (create it if it doesn't exist) in Privoxy's configuration directory and insert something like this:
SERVER-HEADER-FILTER: allow-crossdomain
s|Server: .*|Access-Control-Allow-Origin: *|
Instead of Server, you can use any other header that is always present and that you don't need.
And add this to user.action:
{+server-header-filter{allow-crossdomain}}
csce.unl.edu
Note: I didn't test it.
https://developer.mozilla.org/En/HTTP_access_control
http://config.privoxy.org/user-manual/
This appears to enable cross-domain requests from file:// pages in Firefox 4, although it prompts the user, so it might not be suitable for more than simple test pages:
netscape.security.PrivilegeManager.enablePrivilege("UniversalXPConnect");
