WP7 BackgroundAudioAgent: get metadata from Icecast - windows-phone-7

I have a Windows Phone project that streams radio stations from an Icecast server.
I am using Background Audio Agent to play the streams.
The Icecast stream provides "Track Title - Artist Name" as the metadata.
Is there any way I can fetch the metadata from the audio player?
Right now I am fetching the metadata from a PHP script every 10 seconds; getting it directly from Icecast would be much better.
In my iPhone application, which uses the video player, I am able to see the metadata.
Can this be done? If not, can I read the stream byte by byte and feed it to the audio agent myself?
Thanks.
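For reference, the metadata is interleaved into the stream by the ICY protocol: if the client sends an Icy-MetaData: 1 request header, the server's icy-metaint response header tells you how many audio bytes arrive between metadata blocks, and each block starts with a single length byte (multiply it by 16 to get the metadata size) followed by text such as StreamTitle='Artist - Title';. Below is a minimal Node.js sketch of that parsing, with a placeholder host, assuming the server answers with plain HTTP (Icecast 2.x does; old Shoutcast "ICY" status lines would need a raw socket); the same byte-level logic applies if you read the stream yourself on the phone.

const http = require('http');

// Ask the server to interleave metadata into the audio stream.
http.get({ host: 'example.com', port: 8000, path: '/stream',
           headers: { 'Icy-MetaData': '1' } }, (res) => {
  // Audio bytes between metadata blocks; absent if the server ignored the header.
  const metaInt = parseInt(res.headers['icy-metaint'], 10);
  let buf = Buffer.alloc(0);
  res.on('data', (chunk) => {
    buf = Buffer.concat([buf, chunk]);
    // Each cycle: metaInt audio bytes, 1 length byte, then length*16 metadata bytes.
    while (buf.length > metaInt) {
      const metaLen = buf[metaInt] * 16;
      if (buf.length < metaInt + 1 + metaLen) break; // wait for the full block
      const meta = buf.toString('utf8', metaInt + 1, metaInt + 1 + metaLen);
      const match = /StreamTitle='([^']*)'/.exec(meta);
      if (match) console.log('Now playing:', match[1]);
      // A real player would forward the first metaInt bytes (audio) to the decoder.
      buf = buf.subarray(metaInt + 1 + metaLen);
    }
  });
});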

Related

How to change the result of IAudioClient->GetMixFormat() method?

Normally, calling IAudioClient->GetMixFormat() on a machine whose only playback device is stereo returns the default shared-mode audio format, which is a 2-channel format.
For my purposes, however, I need every application on the device to get a 6- or 8-channel format when it calls IAudioClient->GetMixFormat().
Here is a section of the description of IAudioClient->GetMixFormat() from MSDN:
The mix format is the format that the audio engine uses internally for digital processing of shared-mode streams. This format is not necessarily a format that the audio endpoint device supports.
It is the mix format for shared mode: applications play audio in compatible formats, and the system mixes everything together into the resulting playback stream. The format can be altered via the Control Panel, in the playback device's speaker configuration.
Not every device will offer 5.1 and 7.1 options though.

Tokbox Share System sound during screen sharing

I am trying to create a screen-sharing app with TokBox, but I am not able to share the system audio; it uses the microphone instead. Is it possible to share system sound (music, etc.) rather than the microphone audio?
It's not possible to access the system audio from JavaScript on Linux and Mac. As Adam mentions in the comment, you can get the system audio along with the screen on Windows.
For non-Windows platforms you would need to use a native application to pipe the system audio output back into a virtual microphone, then have TokBox access that microphone as normal.
See Access/process system audio with Javascript/Web Audio API
If you want to combine a screenshare with a microphone, you will need to use the new custom audio source option (available from v2.13.0) combined with the screenshare video source:
Promise.all([
  OT.getUserMedia({ videoSource: null }),    // audio-only stream from the microphone
  OT.getUserMedia({ videoSource: 'screen' }) // video-only screenshare stream
]).then(([audioStream, videoStream]) => {
  const audioSource = audioStream.getAudioTracks()[0];
  const videoSource = videoStream.getVideoTracks()[0];
  const publisher = OT.initPublisher({ audioSource, videoSource });
  session.publish(publisher);
});
If you want to stream two audio tracks, you will need to use multiple publishers. One publisher would have the screenshare video combined with the audio from the normal microphone. The other publisher would have only the virtual microphone (which has the system audio piped into it) and no video. You should use OT.getDevices() to determine which microphone belongs to which publisher, as sketched below.
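A sketch of telling the microphones apart (the "CABLE Output" label is an assumption; it depends on which virtual audio device you installed):

OT.getDevices((err, devices) => {
  if (err) throw err;
  const mics = devices.filter((d) => d.kind === 'audioInput');
  // The virtual microphone carrying the system audio.
  const virtualMic = mics.find((d) => /CABLE Output/i.test(d.label));
  // Audio-only publisher fed by the virtual microphone; the screenshare
  // publisher would use the other microphone's deviceId as its audioSource.
  const systemAudioPublisher = OT.initPublisher({
    audioSource: virtualMic.deviceId,
    videoSource: null
  });
  session.publish(systemAudioPublisher);
});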
You may be able to use the Web Audio API to combine the audio from the real and virtual microphones into a single track, so that you only need a single publisher, but that requires more effort.
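A sketch of that mixing, assuming realMicStream and virtualMicStream are MediaStreams you already obtained (for example via OT.getUserMedia with the two deviceIds found above) and screenTrack is the screenshare video track:

const ctx = new AudioContext();
const mixed = ctx.createMediaStreamDestination();
// Route both microphones into a single destination node.
ctx.createMediaStreamSource(realMicStream).connect(mixed);
ctx.createMediaStreamSource(virtualMicStream).connect(mixed);
// Publish the one mixed track alongside the screenshare video.
const publisher = OT.initPublisher({
  audioSource: mixed.stream.getAudioTracks()[0],
  videoSource: screenTrack
});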
Make sure enableStereo is turned on and audioBitrate is set to 64000 (or more) for the publisher that's streaming music.
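For the music publisher that could look like this (a sketch reusing the virtualMic device found above):

const musicPublisher = OT.initPublisher({
  audioSource: virtualMic.deviceId, // the virtual microphone with the system audio
  videoSource: null,
  enableStereo: true,
  audioBitrate: 64000
});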
If you need to support all operating systems, you could detect the OS from the user agent and apply the appropriate strategy.
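For instance (user-agent sniffing is approximate by nature):

// Windows: capture system audio together with the screen.
// Elsewhere: fall back to the virtual-microphone approach above.
const useScreenAudio = /Windows/.test(navigator.userAgent);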

Does the "live=1" on ffmpeg rtmp urls mean that the stream is live and you cannot rewind or pause it?

Or does it have some other meaning? I have searched all over the internet, and the documentation on it is very thin. If someone could point me to something that explains exactly what it is, I would appreciate it.
I am talking about this:
ffmpeg "rtmp://...... live=1" .....
Thanks in advance.
Short answer: yes.
RTMP supports both live streaming and VOD. 'live=1' means the RTMP source is a live stream: the media server is receiving the video feed from its source in real time, so rewinding to a previous time is not a supported action. Without 'live=1', RTMP runs in VOD mode, which means the entire video already exists on the media server, and the server is then capable of rewinding, or seeking to an arbitrary position in the video.
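For example (example.com is a placeholder; the first form requires an ffmpeg built with librtmp, which takes options appended to the URL, while the second uses ffmpeg's native RTMP handler and its -rtmp_live option):

ffmpeg -i "rtmp://example.com/app/stream live=1" -c copy recording.mp4
ffmpeg -rtmp_live live -i "rtmp://example.com/app/stream" -c copy recording.mp4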
Technically, though, on the client side (preferably in a native application, not a web page), if you maintain the buffer yourself, you can rewind or pause one way or another. Since you are saving the data as you receive it from the media server and everything is under your control, you are able to rewind and pause live streams. But you will have to implement the buffering and decoding mechanism yourself; the ffmpeg command-line tool will not help with this.

stream webcam using ffmpeg and live555

I am new to live555.
I want to stream my webcam from a Windows 7 (64-bit) machine behind a home LAN, using ffmpeg as the encoder, to a live555 server running on a 64-bit Debian Linux machine in a data center, over the WAN. I want to send an H.264 stream over RTP/UDP from ffmpeg, and the "testOnDemandRTSPServer" should send out RTSP streams to clients that connect to it.
I am using the following ffmpeg command which sends UDP data to port 1234, IP address AA.BB.CC.DD
.\ffmpeg.exe -f dshow -i video="Webcam C170":audio="Microphone (3- Webcam C170)" -an -vcodec libx264 -f mpegts udp://AA.BB.CC.DD:1234
On the Linux server I am running the testOnDemandRTSPServer on port 5555, which expects raw UDP data on AA.BB.CC.DD:1234. I try to open the RTSP stream in VLC using rtsp://AA.BB.CC.DD:5555/mpeg2TransportStreamFromUDPSourceTest
But I get nothing in VLC. What am I doing wrong? How can I fix it?
From what I remember, it is non-trivial to write a DeviceSource class, and the problem you're describing is discussed quite frequently on the live555 mailing list; you should get yourself approved on the list as soon as possible if you want to do anything related to RTSP development.
The problem you seem to be having is related to the fact that some video formats are written with streaming in mind, and the RTSP server can easily stream those formats because they contain "sync bytes" and other markers it can use to determine where frame boundaries end. The simplest solution would be to get your hands on the SDK for the camera and use that to request data from the camera directly. There are many different libraries and toolkits that give you access to camera data, one of which is the DirectX SDK. Once you have the camera data, you need to encode it into a streamable format: you might grab the raw camera frames using DirectX, then convert them to MP4/H.264 frame data with ffmpeg (libavcodec, libavformat).
Once you have your encoded frame data, you feed it into your DeviceSource class, and it will take care of streaming the data for you. I wish I had code on hand, but I was bound by an NDA not to take code off the premises; the general algorithm, however, is documented on the live555 website, so I am able to explain it here.
I hope you have a bit more luck with this. If you get stuck, remember to add code to your question. Right now the only thing stopping your original plan (streaming a file to VLC) from working is the file format you chose to stream.
One thing you can try is to increase VLC's logging verbosity level to 2: VLC expects in-band parameter sets, and in that case it will print a debug message in the messages window saying that it is waiting for parameter sets. Just having the parameter sets in the SDP of the RTSP DESCRIBE is not sufficient. IIRC you can configure x264 to output the parameter sets periodically, or at least with every IDR frame.
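For example, adding x264's repeat-headers option to the sender's command should make the encoder emit SPS/PPS with every keyframe (repeat-headers is a standard x264 parameter; whether your ffmpeg build exposes it via -x264opts may vary, so check your version):

.\ffmpeg.exe -f dshow -i video="Webcam C170" -vcodec libx264 -x264opts repeat-headers=1 -f mpegts udp://AA.BB.CC.DD:1234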
Other things you can try:
You can test the stream with openRTSP before using VLC. If you run openRTSP -d 5 -Q rtsp://xxx.xxx.xxx.xxx:5555/mpeg2TransportStreamFromUDPSourceTest, openRTSP will print quality statistics after streaming for 5 seconds. That way you can verify that the testOnDemandRTSPServer is indeed relaying the stream and that there is no problem between the ffmpeg application and the testOnDemandRTSPServer.
Have you tried a different stream? I once had a similar problem due to issues with my firewall; you might want to make sure you can actually stream data through those ports.
If you are missing a sync byte, it's probably a stream issue: try a different data source and see if that helps, for example an .avi or an .mp4 file; .mp4 files are usually easy to stream. If streaming works with the .mp4 file but not with your MPEG-TS stream, then it's a problem with your file: ffmpeg is trying to figure out where each "frame" or "frame set" of data ends so that it can stream discrete chunks.
It's been over 2 years since I last worked with this stuff, so let me know if you get anywhere.

WP7 stream audio with dynamic URI

I used BackgroundAudioPlayer to play an mp3 from the internet, and I know that the background audio agent automatically streams the mp3.
Does the audio agent start playing only after the mp3 has been completely streamed?
If the mp3 URI is dynamic, how do I play it?
If it is a continuous audio stream, the background audio agent will continue playing it. If it is a single file, playback will stop on completion.
Also, there is no way to specify a dynamic URL. You specify a fixed source location for the stream, and whenever it changes it has to be updated from within the app.
