I am using FlowPlayer to stream over RTMP from Wowza running on an Amazon EC2 instance, with the files stored in an Amazon S3 bucket. I want the streaming to be secure, and FlowPlayer can accomplish that with its Secure Streaming feature.
If we include the secret token in JavaScript as shown, it can easily be read by anyone, so it is not very secure. Another way mentioned is to compile the FlowPlayer source with the token baked in, but I couldn't find any tutorial on this. Can anyone tell me which file to edit and what to change?
Compiling FlowPlayer isn't simple at all. They have detailed instructions here:
http://flowplayer.org/documentation/developer/development-environment.html
They conveniently do not tell you what to hardcode the token in, or where. I'm still trying to figure it out; it took me 4 hours just to get the environment set up.
You basically need the Java JDK (get the 32-bit version even if you're on x64), Ant, and the Flex framework.
I'll let you know if I find where to compile in the token.
OK, so once you get the compiling working, all you need to do is change the existing token string in the securestreaming plugin source, which is located in the following file:
src\actionscript\org\flowplayer\securestreaming\Config.as
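Whatever you hardcode on the player side, the server still has to issue and validate tokens. As a generic illustration of how a short-lived token scheme works (this is an HMAC sketch with made-up names, not FlowPlayer's or Wowza's actual algorithm):

```python
import hashlib
import hmac
import time

# Hypothetical shared secret; it would have to match whatever you compile into Config.as.
SHARED_SECRET = b"my-compiled-in-secret"

def issue_token(stream_name: str, ttl_seconds: int = 60) -> str:
    """Issue a short-lived token tied to a stream name and an expiry timestamp."""
    expires = int(time.time()) + ttl_seconds
    digest = hmac.new(SHARED_SECRET, f"{stream_name}:{expires}".encode(),
                      hashlib.sha256).hexdigest()
    return f"{expires}:{digest}"

def check_token(stream_name: str, token: str) -> bool:
    """Validate a token: signature must match and the expiry must be in the future."""
    expires_str, digest = token.split(":", 1)
    expected = hmac.new(SHARED_SECRET, f"{stream_name}:{expires_str}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(digest, expected) and int(expires_str) >= int(time.time())
```

Because the token expires quickly and is bound to the stream name, reading it out of the page source buys an attacker very little, which is the point of moving the secret out of JavaScript.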
Related
My question might sound weird if this is not possible, but bear with me.
I would like to know if it is possible to download things to cloud storage (generally) the way you can to your local storage.
I want to build a small bot that can download media (PDF, video, audio, ...) and send it to me. As of now, it downloads the file to my local storage before sending it. However, when I host it (I plan to do so on a free service, since it's small), I suspect that might not be possible, and even if it were, the app's own storage would be too small to hold more than a few files.
As such, I want to use a cloud service of some sort as an intermediary where I can download the file before sending it. But I don't know if that is possible or even makes sense.
After looking around, I have learnt of some cloud-to-cloud services that can extract data directly from a link and store it in my cloud.
However, this is not applicable in my case, since some modifications have to be made to the files before sending. For example, the code below downloads the audio from a YouTube video:
from pytube import YouTube
import os

def download(url: str) -> str:
    yt = YouTube(url)
    # Grab the first audio-only stream of the video.
    video = yt.streams.filter(only_audio=True).first()
    out_file = video.download(output_path="./music")
    # Give the downloaded file an .mp3 extension.
    base, ext = os.path.splitext(out_file)
    new_file = base + '.mp3'
    os.rename(out_file, new_file)
    return new_file
As in this case, I only want the audio of the video. So my question is: is it possible to download to some cloud storage the same way I would download to my local storage, as in out_file = video.download(output_path="path/to/some/cloud/storage")?
Thanks for helping out! :)
If you were dealing with a straightforward file download, then maybe. You'd need the cloud storage service to have an API that lets you specify a URL to pull a file from. The specifics would depend on which cloud storage service you were using.
You aren't dealing with a straightforward file download, though. You're pulling a stream from YouTube and stitching the pieces together before saving it as a single file.
There's no way to do that sort of manipulation without pulling the data down to the machine the program is running on.
It is possible to treat some cloud storage systems as if they were local directories, either by using software which synchronises a local directory with the cloud (which is pretty standard for the official clients) or software which moves files up and down on demand (such as s3fs). The latter seems to fit most of your needs but might have performance issues. That said, both of these are scuppered for you, since you intend to use free hosting, which wouldn't allow you to install such software.
You are going to have to store the files locally.
To avoid running into capacity limits, you can upload them to whatever cloud storage service you are using (via whatever API they provide for that) as soon as they are downloaded … and then delete the local copy.
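That pattern is easy to factor out. In the sketch below, upload_fn stands in for whichever SDK call you end up using (for example boto3's upload_file for S3); the function and parameter names are placeholders, not any particular cloud API:

```python
import os
from typing import Callable

def process_and_offload(local_path: str,
                        upload_fn: Callable[[str, str], None],
                        remote_key: str) -> None:
    """Upload a locally downloaded file to cloud storage, then delete the local copy.

    upload_fn(local_path, remote_key) is whatever your cloud SDK provides,
    e.g. a wrapper around boto3's s3.upload_file.
    """
    upload_fn(local_path, remote_key)  # push the file to the cloud first
    os.remove(local_path)              # only then free the local disk space
```

Deleting only after the upload call returns means a failed upload leaves the local copy intact for a retry, which matters on a host with very limited storage.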
I'm trying to learn about gRPC and protocol buffers while creating a test app using Square's Wire library. I've got to the point of generating both client and server files, but I'm stuck on creating the server-side app. The documentation is too simple, and the examples in the library have too much going on.
Can anyone point out the next steps after generating the files?
Thanks.
I'm not sure what language you are using, but each gRPC language has a similar set of examples. For example, for a simple server-side app, you can find the Python example at https://github.com/grpc/grpc/blob/master/examples/python/helloworld/greeter_server.py
I have a Java/J2EE application deployed in a Tomcat container on a Windows server. The application is a training portal where training files such as PDF/PPT/Flash/MP4 files are read from a share path. When the user clicks a training link, the associated file is downloaded from the share path to the client machine and starts playing.
If the user clicks MP4/Flash/PDF files, they take too much time to open.
Is there anything we need to configure at the application level? Is it a load-related configuration on the server? Or is it something that needs attention in the WAN settings?
I am unable to find a solution for this issue.
Please post your thoughts.
I'm not 100% sure, because there aren't many details, but I'm 90% sure that the application code is not the main problem.
For example:
If the user clicks mp4/flash/pdf files, it is taking too much time to get opened.
A PDF is basically just a stream of bytes. Flash is a client-side technology. And I'm pretty sure you just send a stream to a video player in order to play an MP4. So the server is supposed to just take the file and send it; it is probably not the source of your problem, if we assume it can handle the number of requests.
So it's about your network: it is too slow for some reason. It's difficult to be more specific without more details.
Regards.
I'm currently using Amazon S3 to host all static content on my site. The site has a lot of static files, so I need an automated way of syncing the files on my localhost with the remote files. I currently do this with s3cmd's sync feature, which works wonderfully. Whenever I run my deploy script, only the files that have changed are uploaded, and any files that have been deleted are also deleted in S3.
I'd like to try Rackspace Cloud Files; however, I can't seem to find anything that offers the same functionality. Is there any way to accomplish this on Rackspace Cloud Files, short of writing my own syncing utility? It needs to have a command-line interface and work on OS X.
The pyrax SDK for the Rackspace Cloud has the sync_folder_to_container() method for Cloud Files, which sounds like what you're looking for. It will only upload new/changed files and will optionally remove files from the cloud that are deleted locally.
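Conceptually, a sync like that boils down to diffing the local folder against the remote listing and acting on the differences. A library-agnostic sketch of that logic (the function names here are illustrative, not pyrax's API):

```python
import hashlib
import os
from typing import Dict, List, Tuple

def local_listing(folder: str) -> Dict[str, str]:
    """Map each file's path (relative to folder) to an MD5 checksum,
    mirroring the ETags an object store reports for its objects."""
    listing = {}
    for root, _dirs, files in os.walk(folder):
        for name in files:
            path = os.path.join(root, name)
            rel = os.path.relpath(path, folder)
            with open(path, "rb") as fh:
                listing[rel] = hashlib.md5(fh.read()).hexdigest()
    return listing

def plan_sync(local: Dict[str, str],
              remote: Dict[str, str]) -> Tuple[List[str], List[str]]:
    """Return (paths to upload because new/changed, paths to delete remotely)."""
    to_upload = [p for p, csum in local.items() if remote.get(p) != csum]
    to_delete = [p for p in remote if p not in local]
    return sorted(to_upload), sorted(to_delete)
```

This is why tools like s3cmd sync and sync_folder_to_container() are fast on repeat runs: unchanged files match by checksum and are skipped entirely.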
As far as the initial upload is concerned, I usually use eventlet to upload files in as asynchronous a manner as possible. The total time will still be limited by your upload speeds (I don't know of any SDK that can get around that), but the non-blocking code will certainly help overall performance.
If you have any other questions, feel free to ask here or on the GitHub page.
-- Ed Leafe
The Rackspace Python SDK can do that for you. There's a script called cf_pyrax.py that does, more or less, what I think you're trying to do. There's a write-up on it in this blog post.
While setting up a Token Vending Machine is well documented, I am having a hard time finding sample code for requesting temporary credentials using Ruby (on Rails).
How would one go about interacting with the TVM using Ruby (on Rails)? Is there any sample code that lays out the process of making a request to a TVM and obtaining temporary credentials to access the various AWS services?
Bart: after a bit of digging, I found the following link for AWS with Ruby: Getting Started. This link is a bit of a walk-through for getting set up and contains some example code demonstrating how to authenticate and such.
You can also refer to the AWS SDK for Ruby, which has code examples, browse the GitHub Source Code Repository for the SDK, and check out the Ruby AWS Developer Center, Ruby AWS Developer Forums, and Ruby AWS FAQs.
I would also familiarize myself with the AWS SDK API Reference which is an invaluable reference guide for doing exactly what you're looking for.
EDIT: A few more resources:
You might take a look at the following file: TokenVendingMachinePolicy.json, which contains a configuration to be used in conjunction with the AWS TVM process. While this file is used by a Java project, the JSON should be reusable for your purposes.
You may also get some headway by taking a look at the rest of the code (while written in Java) which utilizes that file.
The major pieces appear to be:
DeviceAuthentication.java
UserAuthentication.java
GetTokenServlet.java
If you start with these 3 files, I think you should be able to make some headway in translation to Ruby.
It looks like Amazon also has a Ruby SDK with a section on STS. Hope this helps others in the future.