I want to distinguish between limited and members-only videos using the YouTube Data API's 'Video' resource.
But both kinds of video have the same privacy status: "unlisted".
Is there another key that distinguishes them?
I am very new to the video world, but I have noticed that social media services, in particular Snapchat and Instagram, do a great job of getting videos to load fast even on poor connections. I know some of this comes down to how the videos are transcoded.
I have gathered some presets I think I should be using when transcoding with ffmpeg, but I am not sure about the formats or the other parts. I would love to hear what people think!
const ffmpeg = require('fluent-ffmpeg');

ffmpeg()
  .input(remoteReadStream)
  .outputOptions('-preset fast')
  .outputOptions('-movflags +faststart')
  .save('output.mp4');
Other than that, I am not entirely sure what else to include.
If you want the video to start quickly, you must ensure that the first frame is a keyframe. Use ffmpeg's -force_key_frames '00:00:00.000' option for that.
But in practice the main technique for fast video playback on poor connections is adaptive bitrate streaming (https://en.m.wikipedia.org/wiki/Adaptive_bitrate_streaming). It selects a video source with a bitrate appropriate for the user's bandwidth. So you need to encode your video at several sizes, qualities, and bitrates, and assemble them into a special playlist for adaptive streaming.
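As a sketch of the rendition-ladder idea, here is one way to generate an ffmpeg argument list per rendition. The heights and bitrates below are illustrative placeholders, not recommendations; tune them for your content.

```javascript
// Build one ffmpeg argument list per rendition of an adaptive-bitrate ladder.
// The rendition values are illustrative, not recommendations.
const renditions = [
  { height: 360, videoBitrate: '800k', audioBitrate: '96k' },
  { height: 720, videoBitrate: '2500k', audioBitrate: '128k' },
  { height: 1080, videoBitrate: '5000k', audioBitrate: '192k' },
];

function buildRenditionArgs(input, r) {
  return [
    '-i', input,
    '-vf', `scale=-2:${r.height}`, // keep aspect ratio, force even width
    '-c:v', 'libx264', '-preset', 'fast',
    '-b:v', r.videoBitrate,
    '-c:a', 'aac', '-b:a', r.audioBitrate,
    '-movflags', '+faststart',
    `out_${r.height}p.mp4`,
  ];
}

const commands = renditions.map((r) => buildRenditionArgs('input.mp4', r));
console.log(commands.map((c) => 'ffmpeg ' + c.join(' ')).join('\n'));
```

Each resulting file then becomes one variant in your HLS/DASH playlist; the packaging step itself is separate.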
I am working on a custom Lambda function in JavaScript for Amazon Alexa. Amazon's docs have clear details on building custom skills, and I have successfully built several "stock" skills from their templates.
I am now writing a unique skill which must retrieve the JSON data located at this link:
https://api.ense.nyc/latest
and then 'play' that data (since the data is snippets of audio) through Alexa. I am not sure what code to write to implement this.
This is a bit more complicated than your average stock skill; from the URL it looks like a podcast skill.
You need to:
Parse the JSON and get the audio URL from the list.
Set the skill state to PLAY_MODE.
Keep track of audio progress with audio event handlers.
Probably use a DynamoDB-like database to persist state, in case your session ends while a long audio file keeps playing.
Here is a sample skill that parses an RSS feed for a podcast and then plays the audio files in a row:
https://github.com/bespoken/streamer
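A minimal sketch of the first step, parsing the feed and queuing the first clip. Note the field names ("contents", "audioUrl") are assumptions; inspect the real api.ense.nyc/latest response to find the actual property names. The directive shape follows the standard Alexa AudioPlayer.Play format.

```javascript
// Extract playable audio URLs from the feed JSON.
// NOTE: "contents" and "audioUrl" are assumed field names; check the
// actual api.ense.nyc/latest response for the real property names.
function extractAudioUrls(feed) {
  return feed.contents
    .map((item) => item.audioUrl)
    .filter((url) => typeof url === 'string' && url.startsWith('https://'));
}

// Build a standard Alexa AudioPlayer.Play directive for one clip.
function playDirective(url, token) {
  return {
    type: 'AudioPlayer.Play',
    playBehavior: 'REPLACE_ALL',
    audioItem: { stream: { url, token, offsetInMilliseconds: 0 } },
  };
}

// Example with mock data standing in for the fetched JSON:
const mockFeed = { contents: [{ audioUrl: 'https://example.com/a.mp3' }] };
const urls = extractAudioUrls(mockFeed);
const directive = playDirective(urls[0], 'clip-0');
```

On AudioPlayer.PlaybackNearlyFinished you would enqueue the next URL from the same list, which is where the persisted state comes in.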
It seems that the audio files are short. In that case, connect to the endpoint using an HTTP fetch library (e.g. the http module, node-fetch, or axios in Node.js). Once you get the JSON, navigate to the properties that hold the audio, get the URLs, surround them with audio tags (<audio src="url"/>) and send them in a standard speech response from your skill. The audio tag has duration and quality limitations, so if you run into issues the audio is probably longer or of a different quality than expected.
1) The audio should be publicly available (.mp3).
2) The audio should be in an Alexa-friendly format.
Converting audio files to an Alexa-friendly format using Audacity:
1) Open the file to convert.
2) Set the Project Rate in the lower-left corner to 16000.
3) Click File > Export Audio and change the Save as type to MP3 Files.
4) Click Options, set the Quality to 48 kbps and the Bit Rate Mode to Constant.
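The speech-response side of this approach can be sketched as a small SSML builder. This assumes the URLs are HTTPS and the clips already meet the format constraints above; the function name is mine, not from any SDK.

```javascript
// Wrap short audio URLs in SSML <audio> tags for a standard speech
// response. Assumes HTTPS URLs pointing at Alexa-compatible MP3s
// (48 kbps, 16000 Hz, per the Audacity steps above).
function buildSsml(urls) {
  const tags = urls.map((u) => `<audio src="${u}"/>`).join(' ');
  return `<speak>${tags}</speak>`;
}

const ssml = buildSsml(['https://example.com/a.mp3']);
// → <speak><audio src="https://example.com/a.mp3"/></speak>
```

That SSML string is what you would return as the outputSpeech of type "SSML" in the skill's response.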
Is it possible to create a video playlist using videos from both Anvato and Brightcove at the same time?
So far I have created playlists from one or the other using their respective docs, but I would like one that takes videos from either.
Context:
You have settings to create a video from Brightcove or from Anvato.
Brightcove requires:
Video Id
Account Id
Player Id
Anvato requires:
Account Id
Video Id
Using their respective SDKs you can load a video using those settings. Both BC and Anvato also provide mechanisms to load a playlist from a list of video IDs.
So to create a BC playlist you follow their docs and use the markup along with your list of video IDs. Same for Anvato.
But in theory it should be possible to have one video player that plays all of the video sources, if we could get the direct URL to each source file.
You could certainly do this with the Brightcove Player. By default, it assumes you're pulling all of the videos from the Video Cloud catalog, as that's what most people do. However, there is a lower-level API that allows you to pass an array of video objects into the playlist. It's called player.catalog.load().
So if you wanted to merge playlists from Anvato and Brightcove:
1) Call Anvato's API to get the list of video metadata/URLs.
2) Call catalog.getPlaylist() against the Video Cloud APIs.
3) Append the data from Anvato to the array you got in step 2.
4) Pass the merged list into player.catalog.load().
Here's some more info.
http://docs.brightcove.com/en/video-cloud/brightcove-player/guides/playlist-api.html
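A rough sketch of steps 1–4. The Anvato item shape and the helper names here are assumptions; only the catalog calls are taken from the answer and the linked Playlist API docs.

```javascript
// Map an Anvato metadata item to the minimal video-object shape the
// Brightcove player accepts: a name plus one or more sources.
// NOTE: "title" and "streamUrl" are assumed Anvato field names.
function anvatoToBrightcoveVideo(item) {
  return {
    name: item.title,
    sources: [{ src: item.streamUrl, type: 'application/x-mpegURL' }],
  };
}

// Step 3: append the converted Anvato items to the Brightcove playlist.
function mergePlaylists(brightcoveVideos, anvatoItems) {
  return brightcoveVideos.concat(anvatoItems.map(anvatoToBrightcoveVideo));
}

// Example with mock data in place of the two API responses:
const merged = mergePlaylists(
  [{ name: 'BC video', sources: [] }],
  [{ title: 'Anvato clip', streamUrl: 'https://example.com/v.m3u8' }]
);

// In the player, roughly (per the answer above):
// player.catalog.getPlaylist(playlistId, (error, playlist) => {
//   player.catalog.load(mergePlaylists(playlist, anvatoItems));
// });
```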
I listed the devices with waveInGetDevCaps and it shows me a microphone. However, I need to record the speaker audio. Is it possible to record from the devices listed by waveOutGetDevCaps? All the examples I can find use waveIn.
I am trying to record the system's audio, not the microphone's.
I have two goals: one is to record the sound and then run music recognition on it; the second is to record the screen and system audio together. Do the DirectShow APIs record audio as well?
Edit: So I started down the DirectShow path and am able to enumerate CLSID_AudioInputDeviceCategory, but I can't find an example anywhere of how to record system audio. Does anyone know of one or can provide one?
Background: I know the process ID, and as a result I can find the audio session created by it. But one session can contain several streams, and each stream can play its own sound and be paused/resumed.
How can I enumerate all media streams linked to an audio session?
Thanks.
The information available about audio sessions externally is exposed through the IAudioSessionControl interface, obtained for example via session enumeration.
You don't have the granularity to enumerate further in depth; you only have state, notifications, grouping, and cosmetic/UI information.