Does OpenTok provide HIPAA-compliant video recording?

I am building a patient-therapist video conferencing app that requires meeting recordings to be stored only on my server, not on any third-party servers such as TokBox cloud storage.
The author of this article https://support.tokbox.com/hc/en-us/articles/204951424-How-can-I-use-archiving-as-part-of-a-HIPAA-compliant-application- says we can turn off the cloud storage fallback.
Does OpenTok upload the meeting recording to my Amazon S3 storage after the recording is stopped, or does it initiate the upload while the meeting is in progress?
In the above article, the author says that immediately after attempting the upload, the file is deleted from the server where it was recorded.
Does this mean OpenTok saves the meeting recording on its own servers even when the cloud storage fallback is turned off?
Can I claim that my website supports HIPAA-compliant video conferencing and video recording if I integrate OpenTok into my application?

OpenTok QA Engineer here.
Your recording is uploaded to your storage right after the archive is stopped. The live session can continue, but the remainder of it will not be recorded. After the file has been successfully uploaded, it is completely removed from our servers. Should any error happen, we retry the upload for up to 24 hours, after which we stop retrying. 72 hours after the archive was started, the files are wiped from our servers, whether or not they have been successfully uploaded.
If the OpenTok fallback storage is turned off, the archive will still be recorded, but it stays on our servers only for the minimum time required to generate the MP4 file and upload it.
Yes, you can claim it, as long as you disable the OpenTok fallback.
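For concreteness, here is a minimal sketch of disabling the fallback by setting the project's archive upload target through the archiving REST API. This is an illustrative Python snippet, not official sample code: the endpoint, JWT claims, and field names follow my reading of the archiving docs, and the key, secret, and bucket values are placeholders, so verify the details against the current API reference before relying on it.

```python
# Illustrative sketch (not official sample code): point the project's archive
# upload target at your own S3 bucket and disable the OpenTok cloud fallback.
# API key/secret and bucket details are placeholders; verify field names
# against the current OpenTok archiving REST API reference.
import time
import uuid

import jwt       # pip install pyjwt
import requests  # pip install requests

API_KEY = "YOUR_API_KEY"
API_SECRET = "YOUR_API_SECRET"

# OpenTok REST calls are authenticated with a short-lived project JWT.
now = int(time.time())
token = jwt.encode(
    {"iss": API_KEY, "ist": "project", "iat": now, "exp": now + 180,
     "jti": str(uuid.uuid4())},
    API_SECRET,
    algorithm="HS256",
)

resp = requests.put(
    f"https://api.opentok.com/v2/project/{API_KEY}/archive/storage",
    headers={"X-OPENTOK-AUTH": token},
    json={
        "type": "S3",
        "config": {
            "accessKey": "AWS_ACCESS_KEY",
            "secretKey": "AWS_SECRET_KEY",
            "bucket": "my-archive-bucket",
        },
        # "none" disables the OpenTok cloud fallback, so a failed upload
        # is not retained in TokBox storage.
        "fallback": "none",
    },
)
resp.raise_for_status()
```

With the fallback set to "none", an archive that fails to upload to your bucket is not kept in OpenTok cloud storage, which is the behavior the HIPAA article describes.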

Related

Download OpenTok calls locally at the end of call

I'm looking to use OpenTok to record my video calls using archives. Most of the examples on their site seem to save the recordings online.
Is it possible to use it to record my call, then once I signal to stop archiving (still in the call), simply save and download the file locally? I would like to avoid saving my videos directly on OpenTok for privacy reasons.
Thank you so much for your question about the OpenTok archiving feature.
I completely understand your need to save locally due to privacy concerns.
Currently, with OpenTok the way to save an archive is to:
Use an Amazon S3 bucket or a Windows Azure container
Use OpenTok Cloud
After that, you can save the file locally; you'll have to save it online first.
Please let us know if you have any other questions or concerns.
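To get the "download locally" part, you can wait for the archive to reach "uploaded" status (via polling or the archive status callback) and then pull the MP4 down from your bucket. Below is a rough sketch assuming boto3 and the default OpenTok S3 key layout; the bucket name, archive ID, and destination path are placeholders.

```python
# Rough sketch: after the archive reaches "uploaded" status, fetch the MP4
# from your S3 bucket and delete the remote copy. The bucket name and the
# default OpenTok key layout (<apiKey>/<archiveId>/archive.mp4) are
# assumptions to verify for your project.
import boto3  # pip install boto3

BUCKET = "my-archive-bucket"
API_KEY = "YOUR_API_KEY"

def download_and_delete(archive_id: str, dest_path: str) -> None:
    s3 = boto3.client("s3")
    key = f"{API_KEY}/{archive_id}/archive.mp4"
    s3.download_file(BUCKET, key, dest_path)   # save the recording locally
    s3.delete_object(Bucket=BUCKET, Key=key)   # remove the cloud copy

download_and_delete("YOUR_ARCHIVE_ID", "/srv/recordings/session.mp4")
```

This way the file only lives in your own bucket for the short window between upload and download.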

Transferring a Google app to another account while an app update is still pending

Perhaps someone has more experience with this. I need to transfer a Google app to another developer's account, so I recently made some updates to a release, and those updates are now processing/pending. The question is: if I initiate the app transfer while the new app release is still in the process of being published, will the target developer's account receive the transfer with my latest updates, or with the same processing status? Thanks :)

YouTube API Quota Extension for research

I need an extension to my YouTube API quota allotment to conduct research for my dissertation. I have been trying to get an estimate of the available resources and costs of extensions for an NSF grant, but I have not been able to get in contact with a human for several weeks despite filing the quota extension form.
Currently, I am trapped in a loop with the YouTube API compliance team, where they continuously ask me for the following information:
In order to proceed further, please provide us the following information:
Provide API Client link and demo credentials
Screenshots and/or video recording(s) that clearly demonstrates how your API Client and its users access and use the YouTube API Services.
Documents relating to your implementation, access and use of YouTube API Services.
I have attached the required responses multiple times and still receive the same message. For the first item, I attached my Python code for accessing the API (the only usage of the service); for the second, screenshots of the terminal window and the data output; and for the third, the project summary, description, and data collection plan, plus the first paper I published using the limited quota.
I've repeatedly asked to be connected with a human to go through their requirements but have had no response. The project has received a great deal of interest in the economics community, and I am under a great deal of pressure to continue the work; it is very stressful for a graduate student to bear, especially when blocked by an automated response. Please help D:
The service tag is 1-0726000027117

How to upload huge files to a web server

I have a virtual machine on Google Cloud, and I created a web server on this machine (Ubuntu 12.04). I will serve my website from this machine.
My website displays huge images in JPEG 2000 format. The website also lets users upload their own images and share them with other people.
The problem is that the images are about 1-3 GB each, and I cannot use standard file upload methods (PHP file upload), because when the connection drops, the upload starts over from the beginning. So I need a better way.
I am thinking about the Google Drive API. If I create a common Google Drive account, and users upload to this account from my website using the Google Drive API, would that be a good approach?
Since you're uploading files to Drive, you can use the Upload API with uploadType=resumable.
Resumable upload: uploadType=resumable. For reliable transfer, especially important with larger files. With this method, you use a session initiating request, which optionally can include metadata. This is a good strategy to use for most applications, since it also works for smaller files at the cost of one additional HTTP request per upload.
However, do note that there's a storage limit for the account. If you want more capacity, you'll have to purchase it.
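As an illustration, here is roughly what a resumable upload looks like with the google-api-python-client library; the credentials file, file name, and chunk size in this sketch are placeholders.

```python
# Illustrative sketch: resumable upload of a large JPEG 2000 file to Drive
# with google-api-python-client. The credentials file, file name, and chunk
# size are placeholders.
from google.oauth2 import service_account
from googleapiclient.discovery import build
from googleapiclient.http import MediaFileUpload

creds = service_account.Credentials.from_service_account_file(
    "service-account.json",
    scopes=["https://www.googleapis.com/auth/drive.file"],
)
service = build("drive", "v3", credentials=creds)

media = MediaFileUpload(
    "huge-image.jp2",
    mimetype="image/jp2",
    resumable=True,              # upload in retryable chunks
    chunksize=8 * 1024 * 1024,   # 8 MB per request
)
request = service.files().create(
    body={"name": "huge-image.jp2"},
    media_body=media,
)

response = None
while response is None:
    # next_chunk() resumes from the last acknowledged byte, so a dropped
    # connection does not restart the whole upload.
    status, response = request.next_chunk()
    if status:
        print(f"Uploaded {int(status.progress() * 100)}%")
print("File ID:", response.get("id"))
```

The chunked loop is what gives you resumability: if the connection drops, you re-run the loop and the upload continues from the last chunk the server acknowledged rather than from byte zero.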

How to pipe upload stream of large files to Azure Blob storage via Node.js app on Azure Websites?

I am building a video service using Azure Media Services and Node.js.
Everything went semi-fine until now; however, when I tried to deploy the app to Azure Web Apps for hosting, any large file fails with a 404.13 error.
Yes, I know about maxAllowedContentLength, and no, that is NOT a solution. It only goes up to 4 GB, which is pathetic; even HDR environment maps can easily exceed that amount these days. I need to enable users to upload files up to 150 GB in size. However, when Azure Web Apps receives a multipart request, it appears to buffer it into memory until a certain threshold of either bytes or seconds (upon hitting which it returns a 404.13, or a 502 if my connection is slow) BEFORE running any of my server logic.
I tried a Transfer-Encoding: chunked header in the server code, but even if that would help, Web Apps doesn't let my code run, so it doesn't actually matter.
For the record: I am using Sails.js on the backend, and Skipper handles piping the stream to the Azure Blob service. Localhost obviously works just fine regardless of file size. I made a duplicate of this question on the MSDN forums, but those are as slow as always; you can look there to see what I have found so far.
Client-side, I am using Ajax FormData to serialize the fields (one text field and one file) and send them, using the progress event to track upload progress.
Is there ANY way to make this work? I just want it to let my server-side logic handle the data stream, without buffering the bloody thing.
Rather than running all this data through your web application, you would be better off having your clients upload directly to a container in your Azure blob storage account.
You will need to enable CORS on your Azure Storage account to support this. Then, in your web application, when a user needs to upload data, you would instead generate a SAS token for the storage account container you want the client to upload to, and return that to the client. The client would then use the SAS token to upload the file into your storage account.
On the back-end, you could fire off a web job to do whatever processing you need to do on the file after it's been uploaded.
Further details and sample Ajax code for this are available in this blog post from the Azure Storage team.
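For the server-side half, here is a hedged sketch of generating a short-lived SAS URL with the azure-storage-blob Python package; the account name, key, container, and blob name are placeholders. The client then uploads directly to this URL instead of through the web app, so the 4 GB request limit never comes into play.

```python
# Sketch: issue a short-lived SAS URL so the browser can upload straight to
# Blob storage, bypassing the web app's request size limits.
# Account name/key, container, and blob name are placeholders.
from datetime import datetime, timedelta

from azure.storage.blob import BlobSasPermissions, generate_blob_sas

ACCOUNT_NAME = "mystorageaccount"
ACCOUNT_KEY = "ACCOUNT_KEY"
CONTAINER = "uploads"

def make_upload_url(blob_name: str) -> str:
    # Grant create/write only, for one hour, on a single blob.
    sas = generate_blob_sas(
        account_name=ACCOUNT_NAME,
        container_name=CONTAINER,
        blob_name=blob_name,
        account_key=ACCOUNT_KEY,
        permission=BlobSasPermissions(create=True, write=True),
        expiry=datetime.utcnow() + timedelta(hours=1),
    )
    return (f"https://{ACCOUNT_NAME}.blob.core.windows.net/"
            f"{CONTAINER}/{blob_name}?{sas}")

# The client PUTs the file (as blocks, for 150 GB files) to this URL.
print(make_upload_url("video-12345.mp4"))
```

Because the token is scoped to one blob with write-only permissions and a short expiry, handing it to the browser doesn't expose the rest of the storage account.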
