I have a live stream service using Wowza. I want to add text chat to it, so viewers can comment on what they watch! The problem is that if I use a socket to send comments, they won't be synchronized with the correct frame of video. I need help matching comments to the correct time in the video, so that when viewers replay the VOD, they see the comments at the right time too. I found some solutions, like the NTP-based approach Periscope uses, but I don't know how I should use it.
When you track your comments in the database, just track their timestamps there. (Database server time, with NOW() or CURRENT_TIMESTAMP as appropriate for your database server.) Also track the time at which the video started. Then build your player to display each comment at the right offset: the comment's timestamp minus the video's start time.
You could even dynamically serve WebVTT if you wanted.
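A minimal sketch of both ideas in TypeScript (the row shape and the five-second display window are my assumptions, not anything standard):

// Turn stored comment rows into WebVTT cues so the player can replay
// chat in sync with the VOD. Offset = comment timestamp - video start.
type Comment = { postedAt: Date; user: string; text: string };

function toVttTimestamp(seconds: number): string {
  const h = Math.floor(seconds / 3600);
  const m = Math.floor((seconds % 3600) / 60);
  const s = (seconds % 60).toFixed(3).padStart(6, "0");
  return `${String(h).padStart(2, "0")}:${String(m).padStart(2, "0")}:${s}`;
}

function commentsToWebVtt(videoStartedAt: Date, comments: Comment[]): string {
  const cues = comments.map((c) => {
    const offset = (c.postedAt.getTime() - videoStartedAt.getTime()) / 1000;
    // Show each comment for five seconds; tune to taste.
    return `${toVttTimestamp(offset)} --> ${toVttTimestamp(offset + 5)}\n${c.user}: ${c.text}`;
  });
  return "WEBVTT\n\n" + cues.join("\n\n");
}

Serving that output as a text track next to the VOD lets any WebVTT-capable player render the chat at the right moments.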
I'm trying to obtain playback video streams from some Axis and Hikvision cameras, using ONVIF.
I'm doing this in a C# application, and the resulting stream is played in VLC.
Using the FindRecordings/GetRecordingSearchResults calls and then GetReplayUri, I can obtain the playback stream (RTSP/H.264), but here I have this problem: it behaves like a live stream. I can only use play and pause; I cannot use the time cursor to seek, and I cannot play in reverse.
So I find this unusable for a playback application: you would have to watch the entire recording (days or hours of footage!) in order to see a specific event in time. And once you have played past it, you cannot go back one minute to see it again.
This seems quite stupid to me, so I believe that I'm doing something wrong in my code. Maybe I'm missing some configuration in order to obtain a 'true' playback stream.
My question is: is this playback stream behavior the 'standard' one, and I cannot expect more? Or do some of you have this working (seek, reverse, frame-by-frame stepping), so I know it can be done?
Thank you.
Reverse playback is possible, but it is not easy. Reverse replay is initiated using the Scale header field with a negative value. For example:
PLAY rtsp://192.168.0.1/path/to/recording RTSP/1.0
CSeq: 123
Session: 12345678
Require: onvif-replay
Range: clock=20090615T114900.440Z-
Rate-Control: no
Scale: -1.0
After the stream is initialized, you will get GOPs in reverse order, not just reversed frames. I don't know if VLC supports this way of operating.
Be aware that only devices with the ReversePlayback capability support reverse playback.
Please refer to the streaming specification for further details.
This is not a real solution to the problem above, but maybe it would help others to deal with this situation.
Some cameras I worked with were continuously recording into the same video file (so the time range was not known), and they reported (via RTSP) the available time interval like this:
Range: npt=0-
Because of this, VLC did not display any time interval in the time slider, so it did not allow seeking. In my case, using VLC was a requirement, so I had to find a workaround.
The workaround was a module acting as a proxy, sitting between VLC and the RTSP source (the camera). All RTSP traffic between VLC and the camera went through this module, which I controlled, so I could easily rewrite the camera's responses into a form VLC was happy with, and that gave me seeking in VLC.
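A minimal sketch of that idea in TypeScript/Node (the addresses and the 24-hour fake duration are placeholders; it also assumes RTP goes over UDP, so only RTSP text crosses these sockets, and that headers are not split across TCP chunks):

import * as net from "net";

const CAMERA_HOST = "192.168.0.100"; // placeholder
const CAMERA_PORT = 554;
const LISTEN_PORT = 8554; // point VLC at rtsp://localhost:8554/...

net.createServer((vlc) => {
  const camera = net.connect(CAMERA_PORT, CAMERA_HOST);
  vlc.pipe(camera); // requests from VLC pass through untouched
  camera.on("data", (chunk) => {
    // Give the open-ended range a fake end time so VLC shows a
    // seekable slider. 86400 s (24 h) is an arbitrary bound.
    const patched = chunk
      .toString("latin1")
      .replace(/npt=0-(?![\d.])/g, "npt=0-86400");
    vlc.write(Buffer.from(patched, "latin1"));
  });
  vlc.on("close", () => camera.destroy());
  camera.on("close", () => vlc.destroy());
}).listen(LISTEN_PORT);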
I am working on a custom lambda function in JavaScript for the Amazon Alexa. Amazon's docs have clear details on building custom skills, and I have successfully built several "stock" skills from their templates.
I am writing a unique skill now which must retrieve the JSON data located at this link:
https://api.ense.nyc/latest
and then 'play' that data (since the data is snippets of audio) through the Alexa. I am not sure what to write to bring about this functionality.
This is a bit more complicated than your average stock skill; from the URL, it looks like a podcast-style skill.
You need to:
Parse the JSON and get the audio URL from the list (a sketch of this follows below).
Set the skill state to PLAY_MODE.
Keep track of audio progress with the audio event handlers.
Probably use a database such as DynamoDB to persist state, in case your session ends while a long audio file keeps playing.
Here is a sample skill that parses an RSS feed for a podcast and then plays the audio files in a row:
https://github.com/bespoken/streamer
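And here is a rough sketch of the first two steps in Node/TypeScript. The JSON shape (body.contents[0].audioUrl) is a guess; inspect the real feed first. The AudioPlayer.Play directive format itself comes from the Alexa Skills Kit documentation:

import fetch from "node-fetch";

// Fetch the feed, take the first clip, and answer with an
// AudioPlayer.Play directive so Alexa streams it.
export async function handlePlayLatest(): Promise<object> {
  const res = await fetch("https://api.ense.nyc/latest");
  const body: any = await res.json();
  const audioUrl: string = body.contents[0].audioUrl; // assumed property names

  return {
    version: "1.0",
    response: {
      directives: [
        {
          type: "AudioPlayer.Play",
          playBehavior: "REPLACE_ALL",
          audioItem: {
            stream: {
              url: audioUrl, // must be HTTPS
              token: "ense-latest-0", // your own ID for progress tracking
              offsetInMilliseconds: 0,
            },
          },
        },
      ],
      shouldEndSession: true,
    },
  };
}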
It seems that the audio files are short. In that case, connect to the endpoint using an HTTP fetch library (e.g. the http module, node-fetch, or axios in Node.js). Once you get the JSON, navigate to the properties that contain the audio, take each URL, wrap it in an audio tag (<audio src="url"/>), and send it in a standard speech response from your skill. The audio tag has duration and quality limitations, so if you run into issues, the audio is probably longer or encoded differently than expected.
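A minimal sketch of that approach (again guessing the JSON property names; at the time of writing, SSML allows at most five audio tags per response):

import fetch from "node-fetch";

// Wrap each clip URL in an <audio> tag and speak them back to back.
async function buildSpeechResponse(): Promise<object> {
  const res = await fetch("https://api.ense.nyc/latest");
  const body: any = await res.json();
  const tags = body.contents
    .slice(0, 5) // SSML caps the number of audio tags per response
    .map((c: any) => `<audio src="${c.audioUrl}"/>`)
    .join("");
  return {
    version: "1.0",
    response: {
      outputSpeech: { type: "SSML", ssml: `<speak>${tags}</speak>` },
      shouldEndSession: true,
    },
  };
}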
1) The audio should be publicly available as an MP3 (.mp3).
2) The audio should be in an Alexa-friendly format.
Converting audio files to an Alexa-friendly format using Audacity:
1) Open the file to convert.
2) Set the Project Rate in the lower-left corner to 16000.
3) Click File > Export Audio and change the Save as type to MP3 Files.
4) Click Options, set the Quality to 48 kbps and the Bit Rate Mode to Constant.
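If you prefer the command line, ffmpeg can do the same conversion in one step (assuming your build includes the libmp3lame encoder; the file names are placeholders):

ffmpeg -i input.wav -ar 16000 -codec:a libmp3lame -b:a 48k output.mp3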
I want to create a user video which takes a photo album as input and plays exactly like the Facebook Look Back video.
I have looked at a couple of options, including ImageMagick and FFmpeg. Are there any good alternatives for doing this?
If you want to create a video dynamically through the browser, you cannot do this on the client side (not in a convenient way, anyway). There is no functionality in browsers today that allows you to create video files (only streams), and the alternative is to write JavaScript code to do all the low-level encoding yourself, which would take ages (to write, but also in processing) and be prone to errors.
Your best option is to send the individual frames to the server as, for example, JPEG (or PNG if you need high quality) and process them there using background jobs; the processing can be done with e.g. FFmpeg, which is great for these things (see the sketch below).
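For the frames-to-video step, a minimal ffmpeg invocation could look like this (file names, the numbering pattern, and the one-second-per-photo rate are placeholders):

ffmpeg -framerate 1 -i photo%03d.jpg -c:v libx264 -pix_fmt yuv420p album.mp4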
Track the job ID using some sort of user ID, and have a database or file updated with the current status so the user can come back and check.
I have a link to a video stream (a web cam that is always recording some place). I would like to be able to take a screenshot of whatever is on that video stream at the moment a user goes to my app.
Can it be done and how?
I have looked but all I could find was for taking screenshots out of a movie/video, not out of a streaming video.
I suspect ffmpeg, connected to the streaming service as an input, could extract thumbnails for you. You could either leave it running and pick up the latest thumbnails, or fire it up with a system command and make it connect and emit a single screenshot (see the example below). The latter is more efficient and easier to code if you have a low number of hits, but has a high latency on each request.
I did a quick search for you, but the most common uses of ffmpeg with streaming input are to re-format and re-stream, or to use it in a personal video recorder setup. ffmpeg is quite complex, so I could not complete the search in the time I have had so far.
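For the on-demand variant, a single ffmpeg command can grab one frame from the stream (the URL is a placeholder; -frames:v 1 stops after the first decoded frame):

ffmpeg -y -i "http://example.com/live/stream.m3u8" -frames:v 1 snapshot.jpg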
I'm searching for a way to analyse the content of internet radio streams. I want to write a Ruby client that can get the current track, next track, band, BPM and other meta information from a stream (e.g. a radio station on SHOUTcast).
Does anybody know how to do this? And how do I record that stream into an MP3 or AAC file?
Maybe there is a library that can already do this; I haven't found one so far.
regards
I'll answer both of your questions.
Metadata
What you are seeking isn't entirely possible. Information on the next track is not available (keep in mind not all stations are just playing songs from a playlist... many offer live content). Advanced metadata such as BPM is not available. All you get is something like this:
Some Band - Some Song
The format of {artist} - {song title} isn't always followed either.
With those caveats, you can get that metadata from a stream by connecting to the stream URL and requesting the metadata with the following request header:
Icy-MetaData: 1
That tells the server to send the metadata, which is interleaved into the stream. Every 8 KB or so (the exact interval is specified by the server in a response header), you'll find a chunk of metadata to parse. I have written up a detailed answer on how to parse it here: Pulling Track Info From an Audio Stream Using PHP. The prior question was language-specific, but you will find that my answer can easily be implemented in any language.
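Here is a minimal sketch in TypeScript/Node (the stream URL is a placeholder; note that some old SHOUTcast v1 servers answer with a bare ICY status line that strict HTTP parsers may reject, in which case you would fall back to a raw TCP socket):

import * as http from "http";

http.get("http://example.com:8000/stream", { headers: { "Icy-MetaData": "1" } }, (res) => {
  // The server tells us how many audio bytes sit between metadata blocks.
  const metaInt = parseInt(String(res.headers["icy-metaint"]), 10);
  let buf = Buffer.alloc(0);
  res.on("data", (chunk: Buffer) => {
    buf = Buffer.concat([buf, chunk]);
    // One length byte follows the audio; metadata is that value * 16 bytes.
    const metaLen = buf.length > metaInt ? buf[metaInt] * 16 : -1;
    if (metaLen >= 0 && buf.length >= metaInt + 1 + metaLen) {
      const meta = buf.subarray(metaInt + 1, metaInt + 1 + metaLen).toString("utf8");
      console.log(meta); // e.g. StreamTitle='Some Band - Some Song';
      res.destroy(); // stop after the first metadata block for this demo
    }
  });
});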
Saving Streams to Disk
Audio playing software is generally very resilient to errors. SHOUTcast servers are built on this principle, and are not knowledgeable about the data going through them. They just receive data from an encoder, and when a client requests the stream, they start sending that data at an arbitrary point.
You can use this to your advantage when saving stream data. It is possible to simply write the stream data as it comes in to a file. Most audio players will play them without problem. I have tested this with MP3 and AAC.
If you want a more conformant file, you will have to use a library or parse the stream yourself to split on the appropriate frames, and then handle bit reservoir issues in your code. This is a lot of work, and generally isn't worth doing unless you find your files have real compatibility problems.
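A minimal sketch of dumping a stream to disk (the URL and file name are placeholders; deliberately do not send the Icy-MetaData header here, or the interleaved metadata blocks will corrupt the audio):

import * as fs from "fs";
import * as http from "http";

http.get("http://example.com:8000/stream", (res) => {
  // Raw stream bytes land in the file; most players cope with the
  // arbitrary starting point, as described above.
  res.pipe(fs.createWriteStream("capture.mp3"));
});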