So I have implemented Fine Uploader uploading to Azure Blob Storage and fully followed all the instructions in the guides, including setting up CORS, error handling, signature creation, file completion notification, etc. But for some reason I'm getting inconsistent results from different people around the world.
I have tested successfully on Chrome 53 uploading large files. But I have seen other users on Safari 9.1 on OS X 10.11 fail with errors that never get logged to my server, even though I make an AJAX call from the onError callback.
Up until yesterday I had used Fine Uploader with AWS for the past 4 months, with around 80% of video files uploading successfully. But I decided it was time to try Azure to see if I could get a higher success rate. Unfortunately, so far I can't.
Would REALLY love any advice anyone has before I have to start looking at an alternative way for people to upload videos to our website.
Sometimes users' networks are unreliable and uploads fail. Fine Uploader can retry failed uploads automatically via the retry option. Check your code to make sure you have enabled it:
retry: {
    enableAuto: true,
    maxAutoAttempts: 3 // optional; the default is 3
}
Besides, since the onError callback isn't helping you collect the error logs and you don't want to ask users to report error details themselves, you could try setting up the same environment (Safari 9.1 on OS X 10.11) to reproduce the issue on your side. Fine Uploader also provides a debug option that makes the library write logging messages to the browser's developer tools console. Enable debug mode to diagnose application errors, and you will see them in the console tab:
debug: true,
For more information about the retry option and the debug option, you can check the documentation.
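Since the error details aren't reaching your server, one option is to build a structured report inside the onError callback and send it with navigator.sendBeacon, which survives page unloads better than a plain AJAX call. This is only a sketch: the /upload-errors endpoint and the report fields are assumptions, not part of Fine Uploader itself.

```javascript
// Build a structured report from Fine Uploader's onError arguments
// (id, name, errorReason, xhrOrXdr). The report shape is an assumption.
function buildErrorReport(id, name, errorReason, xhr) {
  return {
    fileId: id,
    fileName: name,
    reason: errorReason,
    status: xhr ? xhr.status : null, // HTTP status if the failure was a response
    userAgent: typeof navigator !== 'undefined' ? navigator.userAgent : 'unknown',
    timestamp: new Date().toISOString()
  };
}

// Wiring it into Fine Uploader's callbacks would look roughly like this:
// callbacks: {
//   onError: function (id, name, errorReason, xhrOrXdr) {
//     var report = buildErrorReport(id, name, errorReason, xhrOrXdr);
//     // sendBeacon queues the POST even if the user navigates away
//     navigator.sendBeacon('/upload-errors', JSON.stringify(report));
//   }
// }
```

Logging the user agent along with the error should at least tell you whether the Safari failures are all of one kind.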
I have a problem with live video streams in the system I am developing that happens only in Firefox and only in normal mode.
The player correctly loads the stream, but after a few seconds it stops loading and just keeps retrying forever.
This doesn't happen in Chrome, nor if I load the page in Private Mode, nor with normal videos. Just with live streams, just in Firefox, just in normal mode.
This happens both in local development (home, remote connection) and in the corporate cloud.
It's an Angular 8/Node.js system and the player I use is Clappr. I switched to Video.js and the problem continued.
The stream comes from a load balancer with 6 child servers, each running an Apache server that proxies to an Icecast server, which originates the stream.
[load balancer] → [6 child servers with Apache proxy] → [icecast server]
I work for a very large company that has an IPS system installed. It was the first thing I thought of. But the IPS team could not find any blocked traffic. Also, if it were that, why would the traffic not be blocked in private mode?
So I tried to pinpoint exactly which private-mode setting makes the difference, and I found that disabling all history (not only navigation and downloads, or forms) makes it work too.
Does anyone know what exactly happens when the navigation history is disabled? Besides not saving history, does it affect something else — some kind of cache, networking, or something like that? Does anyone have an idea how to make the stream work without disabling history? I can't ask my users to disable history just to use my system.
EDIT
One thing that may be relevant: in Firefox the player doesn't show the LIVE label when the transmission starts; it shows a negative number instead. Maybe this interacts badly with the history.
I couldn't find information on what exactly happens when history is disabled in Firefox, but I did solve the problem of playing the stream in Firefox, so I won't accept this answer — I'll leave it here for future reference in case someone has a similar problem.
I solved it by adding ?nocache=<random integer of length 10> to the video URL. Note that if your URL already has query parameters, you can't have two ? characters in it; append the extra parameter with & instead.
I have recently published a web app to Heroku built using the Plotly Dash library. The web app depends on uploading files using Dash Core Components' dcc.Upload and then saving the files using dcc.Store. This works perfectly on localhost; however, in the live version hosted by Heroku, the app.callback that depends on the uploaded files won't fire.
Are there any known issues with using dcc.Upload or dcc.Store on Heroku? I haven't found any related issues on forums. The files are not large (< 1 MB), and as I said, it all works on localhost.
Since it's not on my localhost, I'm having trouble troubleshooting. Is there an easy way to troubleshoot a live web app on Heroku?
It is possible that the user your app runs as does not have write permission on the directory where you are saving the file after upload.
Try checking the permissions on that directory. If it still doesn't work, please share the errors you are getting; that would make it easier to suggest solutions.
After some digging I found that the abrupt failure of the callbacks was due to a timeout error with code H12 from Heroku. H12 means that any request without a response within 30 seconds is terminated. The solution was therefore to break the callbacks into smaller components so that no single callback exceeded the 30-second limit.
No matter what we try, all YouTube API requests we make are failing.
As we first thought this was a propagation issue, we waited 10 minutes, then 30 minutes, then 2 hours, and now over 24 hours, to no avail.
We have found this thread, which covers a similar issue with an iOS app, but does not correspond to our use case.
Here is a run-down of what the problem is:
Activating the "Youtube Data API v3" API for our account shows as successful, and the API shows as enabled.
A POST to https://www.googleapis.com/upload/youtube/v3/videos (videos insert) consistently fails with the following error, despite the fact that we have waited hours for the API enabling to propagate:
Access Not Configured. YouTube Data API has not been used in project XXXXXXXXXXXX before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/youtube.googleapis.com/overview?project=928939889952 then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry.
Although the error does not directly point to this, our "Queries per day" quota for the YouTube Data API is showing as "0". We are not able to understand why this is showing as zero, and unfortunately, all attempts to edit it to something higher have failed, and disabling and then re-enabling the API has not solved the problem. In a completely separate project/account, this shows as "10,000" when enabling the YouTube Data API, and indeed video insert API calls work under that project.
This is a significant roadblock for us, as it prevents us from deploying our application: any help would be appreciated.
No access configured
This actually means that you don't have permission to access the API. You have enabled the API, but you don't have any quota to use it. It's different from the error message you get when you have run out of quota.
After a strange change sometime last year, the default quota for the YouTube API is now 0. You need to request a quota extension, which can take anywhere from a week to several months to be granted.
It took me three months. No, I don't have any idea how they expect anyone to develop a new application without any quota, or to know ahead of time that they need to apply for quota before they can start developing. It's quite frustrating.
I have a serious issue in my app. The app uses WebRTC to create a video connection between two people.
Currently the app is in the test phase.
Everything works fine on Chrome, but on Firefox there is a strange issue.
When the second Peer connects I receive this error:
Error adding ice candidate for pcInvalidStateError: setRemoteDescription needs to called before addIceCandidate
I know the error message seems clear, but how is it possible that this error does not occur on Chrome?
I mean, maybe there is a bigger issue that doesn't depend entirely on this error message.
Do you have any ideas or solutions to this?
A part of the WebRTC docs (see the Deprecated exceptions section):
Deprecated exceptions
When using the deprecated callback-based version of setRemoteDescription(), the following exceptions may occur:
InvalidStateError: The connection's signalingState is "closed", indicating that the connection is not currently open, so negotiation cannot take place.
You should check that you are not using the deprecated callback version of this function. You should also keep an eye on the signalingState of the peer connection.
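A common cause of this exact error is a race: remote ICE candidates arrive over your signaling channel before setRemoteDescription() has completed, and Firefox rejects them while Chrome has historically been more forgiving. The usual fix is to buffer candidates until the remote description is set. A minimal sketch of that pattern (the function and its names are illustrative, not part of any library):

```javascript
// Buffer remote ICE candidates until setRemoteDescription has run.
// `pc` is an RTCPeerConnection (or any object with the same two members).
function makeCandidateQueue(pc) {
  var pending = [];
  return {
    // Call this whenever a candidate arrives from the signaling channel.
    add: function (candidate) {
      if (pc.remoteDescription && pc.remoteDescription.type) {
        return pc.addIceCandidate(candidate); // safe to add immediately
      }
      pending.push(candidate); // too early: hold it for later
      return Promise.resolve();
    },
    // Call this right after pc.setRemoteDescription(...) resolves.
    flush: function () {
      var queued = pending;
      pending = [];
      return Promise.all(queued.map(function (c) {
        return pc.addIceCandidate(c);
      }));
    }
  };
}
```

With this in place, the order in which signaling messages arrive no longer matters, which is usually why the bug only shows on one browser or one network.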
Hope it helps!
As the title says, when I run the PhysiJS examples (from the GitHub repo) locally, they show only the background and the FPS counter, but no PhysiJS functionality at all (pure three.js works fine). When I open http://chandlerprall.github.io/Physijs/examples/vehicle.html, everything runs fine. I have no idea where to start looking or where the problem is. Any ideas what the cause could be?
PhysiJS uses a web worker to run its update loop, and web workers cannot be loaded from local file:// URLs because loading additional scripts that way is blocked by the cross-origin policies of some browsers. It is browser dependent: on my Mac, Safari allows it, but Chrome throws an error:
Uncaught SecurityError: Failed to construct 'Worker': Script at 'file://physijs_worker.js' cannot be accessed from origin 'null'.
The worker is required to run PhysiJS, so you should serve the files over HTTP with a local server such as MAMP to test it on your machine.