I need a way to stream live audio over HTTP with VLC from the command line. I've been able to do this with the GUI, but it's inefficient, and I was hoping I could do the same thing with a shell script. I've looked at VLC's wiki and guides, but found nothing about streaming audio live from a device. Any help would be much appreciated! Thanks!
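If it helps, the stream-output chain the GUI generates looks roughly like this when wrapped into a command; the ALSA capture device hw:0, the bitrate, and the port here are just examples, not a tested recipe:
cvlc alsa://hw:0 --sout '#transcode{acodec=mp3,ab=128,channels=2}:standard{access=http,mux=mp3,dst=:8080/stream.mp3}'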
I'm trying to build a music player/playlist maker with Ruby and Tk. I still haven't figured out a way to stream a YouTube video (actually only the sound). I don't want to download and then play the song, because that would take too long, and I couldn't find any information about streaming directly without some kind of embedded player.
Does anyone know how I could best tackle this?
This seems to be an interesting project, so I searched a little. From this Reddit post, Tip: Use mpv + youtube-dl as streaming audio player, there's this command using the mpv program to stream audio:
mpv "https://www.youtube.com/watch?v=sVK5Z6wnMxg" --no-video
That URL is a livestream of the Bonnaroo music festival that's currently happening. I tried it, and it does start the audio. Under the hood this uses youtube-dl, which has this note in its man page:
How do I stream directly to media player?
You will first need to tell youtube-dl to stream media to stdout with -o -, and also tell your media player to read from stdin (it must be capable of this for streaming) and then pipe former to latter. For example, streaming to vlc (http://www.videolan.org/) can be achieved with:
youtube-dl -o - "http://www.youtube.com/watch?v=BaW_jenozKcj" | vlc -
So if you wanted to pass the stream to a different media player, that would be a good place to start.
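For instance, piping the same livestream to MPlayer instead would look like this (an untested sketch; the -cache value is just a reasonable guess for a network stream):
youtube-dl -o - "https://www.youtube.com/watch?v=sVK5Z6wnMxg" | mplayer -cache 8192 -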
As for Ruby: this isn't really a Ruby solution per se; you'd simply call the shell program from Ruby using backticks, system, Process.spawn, fork, etc.
I coded an application that receives RTP packets via TCP (no packets are lost) from a hardware camera and dumps the H.264 payload to a file, so I can play the video using MPlayer or VLC. This already works, and I pretty much followed the steps described here. The commands to play the video are mplayer -fps 24 -demuxer h264es foobar.h264 and vlc foobar.h264.
The issue shows up when I play the video. The camera changes the FPS frequently, and because I drop the RTP information when writing the H.264 file, the timestamp of each frame is lost. My question is: what do I have to do to fix the frame rate? Should I create empty/blank P-frames (if that is even possible)? If so, how would I do it?
Any solution using Linux compatible tools or libraries (like ffmpeg, libx264, libavcodec) using shell, C/C++ or Python is very much welcome.
PS: I have almost no experience with video encoding and RTP.
There is no timing information in a raw H.264 stream. The stream needs to be put into a container such as MP4 or FLV, where each frame can be tagged with a PTS/DTS.
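For example, assuming the nominal rate really is 24 fps, ffmpeg can remux the raw stream into MP4 without re-encoding, generating fresh timestamps at that rate (foobar.h264 is the file from the question; note this imposes a constant rate rather than recovering the camera's variable timing):
ffmpeg -r 24 -i foobar.h264 -c:v copy foobar.mp4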
I am trying to use ffmpeg as a live transcoder to transcode TV channels from UDP input to RTMP output for a Wowza server.
I have two kinds of input channels: in the first kind the input audio is MP2, and in the second kind it is aac_latm.
My problem is that the MP2 channels transcode fine, but on the AAC channels the audio goes mute after a few hours, while the video stays fine.
The output codecs are libx264 for video and faac or fdk-aac for audio.
I tried both AAC encoders, but it made no difference.
I think the problem is in ffmpeg's AAC decoder, but I cannot fix it.
I need a way to detect the problem while the stream is live and restart ffmpeg, or to switch the decoder ffmpeg uses.
Please help. Thanks.
Yeah, ffmpeg is not guaranteed to be stable. ZoneMinder used to detect crashes and restart processes when that happened; you may look at their code, although IIRC they were only watching the video.
I think it would be simpler to enable some level of verbosity or debugging (-v loglevel), see which messages indicate the failure, and then use grep to detect them and a small script to restart ffmpeg. That would be the most efficient approach.
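A rough sketch of that watchdog idea as a shell script; the input/output URLs, the codec names, and the grepped message are all placeholders (check your own log for the exact text ffmpeg prints when the decoder breaks):
#!/bin/sh
LOG=/tmp/ffmpeg-channel1.log
while true; do
    : > "$LOG"
    # Placeholders: replace input/output URLs and codecs with your own.
    ffmpeg -v warning -i udp://@239.0.0.1:1234 \
        -c:v libx264 -c:a libfdk_aac \
        -f flv rtmp://wowza.example.com/live/channel1 >"$LOG" 2>&1 &
    FFPID=$!
    # Poll the log; kill ffmpeg when the bad message appears, and fall
    # out of the inner loop if the process dies on its own.
    while kill -0 "$FFPID" 2>/dev/null; do
        grep -q -i "error while decoding" "$LOG" && kill "$FFPID"
        sleep 10
    done
    wait "$FFPID" 2>/dev/null
    sleep 2
done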
Another thing that comes to mind is to use ffmpeg/avconv to extract the resulting audio track and monitor the file for some pattern, or to play the resulting file through an ALSA device that pipes into a script. But it is questionable whether you could reliably tell broken audio from legitimate silence, and it is much less efficient as well. Let me know if you can't figure out the ALSA device setup if you go that route; I don't have it handy right now.
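If you do try the audio-monitoring route, ffmpeg's silencedetect filter may be simpler than an ALSA loop. Something like this (the URL and the thresholds are made up) samples a minute of the output stream and prints a line for any silence longer than 30 seconds:
ffmpeg -i rtmp://wowza.example.com/live/channel1 -t 60 -vn -af silencedetect=noise=-50dB:d=30 -f null - 2>&1 | grep silence_start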
I use Qt & OpenCV to record video and QAudioInput to record audio in WAV format. I want to combine them into one video file. How can I accomplish this? I have researched a lot, but I can't seem to find a command that does it.
I use both Windows and Mac.
FYI, this operation seems to be accomplished through the command line in this thread. That approach may turn into an easy hack, since you can invoke the command using system().
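In case the link dies, the command looks something like this, with hypothetical file names (video.avi from OpenCV, audio.wav from QAudioInput); -c:v copy keeps the video untouched and -shortest stops muxing when the shorter stream ends, though depending on the codec OpenCV wrote you may need to re-encode the video instead of copying it:
ffmpeg -i video.avi -i audio.wav -c:v copy -c:a aac -shortest combined.mp4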
But if you still want to do it programmatically, I suggest you take a look at Dranger's FFmpeg tutorials: 8 interesting tutorials that show how to do simple things, from taking snapshots of a video, up to more complex tasks like writing a simple video player with audio/video sync.
These tutorials teach how to work independently with audio and video streams, which is what you need to do: read the audio stream from the WAV file and then insert it as the audio stream of a video file.
Maybe not directly related to what you're aiming for, but this answer demonstrates how to use FFmpeg to retrieve the audio stream of one file and play it with SDL, while simultaneously using OpenCV to retrieve video frames and display them in an SDL window.
How can I play back an audio stream from an Icecast server on WP7?
I have tried SMF, the Smooth Streaming Client, and the MediaElement.
None of these have worked. The formats are either ASX or WMA.
Edit:
Recently I found a new stream. This stream works when I'm in the designer, but it does not work on the device: there, the stream is opened and then closed immediately.
The stream comes from an Icecast server in MP3 format, with a ?.mp3 extension or without.
When you are streaming live radio, the stream may be encoded by an Icecast or SHOUTcast server. To read these streams, you will need to decode the stream in memory and pass it to the MediaElement once it has been decoded.
Have a look at Mp3MediaStreamSource and at Audio output from Silverlight.
I lost tons of time on this, and it is the best solution I have found so far.
Having had a quick look at the Icecast web site (I'm not familiar with their service), it seems that most of what they offer for streamed audio is in MP3 format, but provided as playlists in either M3U or XSPF format. You can't hand those to any of the built-in controls or classes in the WP7 framework, but you can parse the contents of the file and pass the individual entries to a MediaElement to play.
The M3U file is a simple list of the constituent URLs, so it is the simplest to deal with; the XSPF format (an XML format) provides more information, such as the title. You can easily use the XDocument class to parse an XSPF file and then use LINQ to query its contents.
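To give an idea of how little parsing M3U needs, a hypothetical playlist might contain just this (station name and URL made up); the bare URL on the last line is what you would pass to the MediaElement:
#EXTM3U
#EXTINF:-1,Example Radio
http://radio.example.com:8000/stream.mp3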
You're not adding the ?.mp3 to the PLS file itself, right, but to the embedded URL? If you are using the URL you get from the PLS/M3U file, you might need to append a file extension to it. You can often do this by adding ?ext=.mp3 or ?file.mp3 to the URL, and it should then play with MediaElement; I read on the MS dev boards that people had gotten that to work with SHOUTcast streams.
Does your stream work on the device when you unplug it from the computer? Media playback doesn't work while the phone is plugged into the Zune sync software.
Chris