Video Playlist from different vendor sources (Anvato/Brightcove)

Is it possible to create a video playlist using videos either from Anvato or Brightcove at the same time?
So far I created playlists from one or the other using their respective docs, but I would like one that takes videos from either.
Context:
You have settings to create a video from Brightcove or from Anvato.
Brightcove requires:
Video Id
Account Id
Player Id
Anvato requires:
Account Id
Video Id
Using their respective SDKs you can load a video using those settings. But both BC and Anvato provide mechanisms to load a playlist from a list of Video Ids.
So to create a BC Playlist you use their docs and use the markup along with your list of Video Ids. Same for Anvato.
But in theory it should be possible to have one video player that plays all of the video sources, if we could get the direct URL to each source file.

You could certainly do this with the Brightcove Player. By default, it assumes you're pulling all of the videos from the Video Cloud catalog, as that's what most people do. However, there is a lower-level API that allows you to pass an array of video objects into the playlist. It's called player.catalog.load().
So if you wanted to merge playlists from Anvato and Brightcove:
1) Call Anvato's API to get the list of video metadata/URLs.
2) Call catalog.getPlaylist() on the Video Cloud APIs.
3) Append the data from Anvato to the array you got in #2
4) Pass the merged list into player.catalog.load()
Here's some more info.
http://docs.brightcove.com/en/video-cloud/brightcove-player/guides/playlist-api.html
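For illustration, a rough sketch of that merge in the player's JavaScript (untested; fetchAnvatoVideos and its response shape are hypothetical stand-ins for whatever Anvato call gives you titles and direct source URLs, and the exact shape of the object returned by catalog.getPlaylist() may differ from a plain array, so double-check against the catalog docs above):

```js
videojs('myPlayerId').ready(function () {
  var player = this;

  // 1) Hypothetical helper that calls Anvato's API and resolves to
  //    [{ name: '...', url: 'https://.../video.mp4' }, ...]
  fetchAnvatoVideos(['anvatoVideoId1', 'anvatoVideoId2']).then(function (anvatoVideos) {

    // 2) Fetch the Video Cloud playlist via the catalog API.
    player.catalog.getPlaylist('brightcovePlaylistId', function (error, playlist) {
      if (error) { return console.error(error); }

      // 3) Map the Anvato metadata into the same video-object shape the
      //    player expects and append it to the Video Cloud playlist.
      var merged = playlist.concat(anvatoVideos.map(function (video) {
        return {
          name: video.name,
          sources: [{ src: video.url, type: 'video/mp4' }]
        };
      }));

      // 4) Load the merged list into the player.
      player.catalog.load(merged);
    });
  });
});
```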

Related

YouTube search API not filtering embeddable/syndicated videos

Somebody already asked this question a year ago here, but it didn't get any well-elaborated answers.
I'm trying to embed YouTube videos into my application. I use the YouTube Data API v3 Search: list endpoint to search for videos to embed. To check whether a video is embeddable and syndicated, I set the videoEmbeddable and videoSyndicated parameters to true (as well as the type parameter to video). But this doesn't work most of the time.
For example, when I search for some random commercial song (e.g. Weeknd - Blinding Lights), the first result I get is some unofficial lyrics video, which is fine except that it can't be embedded into my application.
I know this happens due to copyright issues, but is there a way to be sure that a video can be played outside of YouTube?
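For reference, the kind of request described above looks roughly like this in Node.js (a sketch using node-fetch; YOUR_API_KEY and the query are placeholders):

```js
const fetch = require('node-fetch');

const params = new URLSearchParams({
  part: 'snippet',
  q: 'Weeknd Blinding Lights',
  type: 'video',
  videoEmbeddable: 'true',  // only videos that can be embedded
  videoSyndicated: 'true',  // only videos playable outside youtube.com
  key: 'YOUR_API_KEY'
});

fetch('https://www.googleapis.com/youtube/v3/search?' + params)
  .then((res) => res.json())
  .then((data) => console.log(data.items.map((item) => item.id.videoId)));
```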

Retrieve audio data in JSON format and play it through Amazon Alexa

I am working on a custom lambda function in JavaScript for the Amazon Alexa. Amazon's docs have clear details on building custom skills, and I have successfully built several "stock" skills from their templates.
I am writing a unique skill now which must retrieve the JSON data located at this link:
https://api.ense.nyc/latest
and then 'play' that data (since the data is snippets of audio) through the Alexa. I am not sure what to write to bring about this functionality.
This is a bit more complicated than your average stock skill; from the URL it looks like a podcast-style skill.
You need to
Parse the JSON and get the audio URL from the list.
Set the skill state to PLAY_MODE.
Keep track of audio progress with audio event handlers.
Probably use a database like DynamoDB to persist state, in case your session ends while a long audio file is still playing, so playback can continue.
Here is a sample skill that parses an RSS feed for a podcast and then plays the audio files in a row:
https://github.com/bespoken/streamer
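As a rough illustration of steps 1 and 2 with the ASK SDK v2 for Node.js (the shape of the ense.nyc JSON and the intent name are assumptions here; inspect the real response and adjust the property names):

```js
const Alexa = require('ask-sdk-core');
const https = require('https');

// Minimal helper that fetches and parses JSON from an HTTPS endpoint.
function getJson(url) {
  return new Promise((resolve, reject) => {
    https.get(url, (res) => {
      let body = '';
      res.on('data', (chunk) => { body += chunk; });
      res.on('end', () => resolve(JSON.parse(body)));
    }).on('error', reject);
  });
}

const PlayLatestIntentHandler = {
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
      && Alexa.getIntentName(handlerInput.requestEnvelope) === 'PlayLatestIntent';
  },
  async handle(handlerInput) {
    const items = await getJson('https://api.ense.nyc/latest');

    // Assumed shape: an array of items, each carrying an audio URL.
    const audioUrl = items[0].audioUrl;

    // Hand the stream to the AudioPlayer interface (requires AudioPlayer
    // to be enabled for the skill).
    return handlerInput.responseBuilder
      .addAudioPlayerPlayDirective('REPLACE_ALL', audioUrl, 'latest-0', 0)
      .getResponse();
  }
};
```

Steps 3 and 4 (the AudioPlayer event handlers and persistence) would sit on top of this.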
It seems that the audio files are short. In that case, connect to the endpoint using an HTTP fetch library (e.g. the http module, node-fetch or axios in Node.js). Once you get the JSON, navigate to the properties that hold the audio, take the URLs, surround them with audio tags (<audio src="url"/>) and send them in a standard speech response from your skill. The audio tag has time and quality limitations, so if you run into issues the audio is probably longer or of a different quality than expected.
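A minimal sketch of that approach (node-fetch; the property names on the ense.nyc JSON are assumptions, so adjust them to the real feed):

```js
const fetch = require('node-fetch');

// Fetch the feed and wrap each clip in an SSML audio tag.
// Assumes an array of items that each expose an audio URL.
async function buildSpeech() {
  const response = await fetch('https://api.ense.nyc/latest');
  const items = await response.json();

  return items
    .map((item) => `<audio src="${item.audioUrl}"/>`)
    .join(' ');
}

// Inside an intent handler:
//   const speech = await buildSpeech();
//   return handlerInput.responseBuilder.speak(speech).getResponse();
```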
1) The audio should be publicly available as an .mp3 file.
2) The audio should be in an Alexa-friendly format.
Converting audio files to an Alexa-friendly format using Audacity:
1) Open the file to convert.
2) Set the Project Rate in the lower-left corner to 16000.
3) Click File > Export Audio and change the Save as type to MP3 Files.
4) Click Options, set the Quality to 48 kbps and the Bit Rate Mode to Constant.

Record from waveOutGetDevCaps? DirectShow?

I enumerated the devices from waveInGetDevCaps and it shows me the microphone. However, I need to record the speaker audio. Is it possible to record the devices listed by waveOutGetDevCaps? All the examples I find are for waveIn.
I am trying to record the system audio, not the microphone audio.
I have two goals: one is to record the sound and then run music recognition on it, and the second is to record the screen and the system audio together. Do the DirectShow APIs record audio as well?
Edit: So I started on the DirectShow approach and am able to list CLSID_AudioInputDeviceCategory, but I can't find an example out there of how to record system audio. Does anyone know of one, or can provide one?

Dynamic video creation using multiple images

I want to create a user video which takes a photo album as input and plays exactly like the Facebook Look Back video.
I have looked at a couple of options, including ImageMagick and FFmpeg. Are there any good alternatives available for doing this?
If you want to create a video dynamically through the browser, you cannot do this on the client side (not in a convenient way, anyway). There is no functionality in browsers today that allows you to create video files (only streams), and the alternative is to write JavaScript code that does all the low-level encoding yourself, which will take ages (to write, but also in processing) and be prone to errors.
Your best option is to send the individual frames to the server as, for example, JPEG (or PNG if you need high quality) and process them there using jobs, where the processing can be done with e.g. FFmpeg (which is great for these things).
Track the job ID against some sort of user ID and keep a database or file updated with the current status so the user can come back and check.
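For the server-side job, a hedged sketch of driving FFmpeg from Node.js to stitch the uploaded frames into a video (the paths, frame-naming pattern and frame rate are placeholders):

```js
const { spawn } = require('child_process');

// Turn a numbered sequence of uploaded JPEG frames (frame001.jpg, frame002.jpg, ...)
// into an H.264 MP4 that plays in browsers.
function buildVideo(inputDir, outputPath, done) {
  const args = [
    '-y',                               // overwrite the output if it exists
    '-framerate', '1',                  // one image per second of video
    '-i', `${inputDir}/frame%03d.jpg`,  // numbered input frames
    '-c:v', 'libx264',
    '-pix_fmt', 'yuv420p',              // broad player compatibility
    outputPath
  ];

  spawn('ffmpeg', args)
    .on('error', done)
    .on('close', (code) => done(code === 0 ? null : new Error('ffmpeg exited with ' + code)));
}

// Example: buildVideo('/tmp/job-123/frames', '/tmp/job-123/out.mp4', (err) => { /* update job status */ });
```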

Can I get raw video frames from DirectShow without playback

I'm working on a media player using Media Foundation. I want to support VOB file playback. However, Media Foundation currently does not support the VOB container, so I wish to use DirectShow for that part.
My idea here is not to take an alternate path using a DirectShow graph, but just to grab video frames and pass them to the same pipeline in Media Foundation. In Media Foundation, I have an IMFSourceReader which simply reads frames from the video file. Is there a DirectShow equivalent that just gives me the frames, without needing to create a graph, start a playback cycle, and then try to extract frames from the renderer pin? (To be clearer: does DirectShow support an architecture wherein it could give me raw frames without actually having to play the video?)
I've read about ISampleGrabber but it's deprecated and I think it won't fit my architecture. I've not worked with DirectShow before.
Thanks,
Mots
You have to build a graph and accept frames from the respective parser/demultiplexer filter, which will read the container and deliver individual frames on its output.
The playback does not have to be real-time, nor do you need to fake painting those video frames somewhere. Once you get the data you need in a Sample Grabber filter, or a custom filter, you can terminate the pipeline with a Null Renderer. That is, you can arrange to get the frames you need in a more or less convenient way.
You can use the Monogram frame grabber filter to connect to the VOB DS filter's output - it works great. See the comments there for how to connect the output to an external application.

Resources